Docker Installation
Tip
Recommended Installation Method
Docker is the recommended way to install LocalAI as it works across all platforms (Linux, macOS, Windows) and provides the easiest setup experience.
LocalAI provides Docker images that work with Docker, Podman, and other container engines. These images are available on Docker Hub and Quay.io.
Prerequisites
Before you begin, ensure you have Docker or Podman installed:
- Install Docker Desktop (Mac, Windows, Linux)
- Install Podman (Linux alternative)
- Install Docker Engine (Linux servers)
Quick Start
The fastest way to get started is with the CPU image:
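For example, a minimal run command looks like the following (the image tag is illustrative; check Docker Hub or Quay.io for the tags currently published):

```bash
# Run the standard CPU-only image and expose the API on port 8080.
# The tag is an assumption; verify on Docker Hub (localai/localai) or Quay.io.
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```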
This will:
- Start LocalAI (you’ll need to install models separately)
- Make the API available at http://localhost:8080
Tip
Docker Run vs Docker Start
`docker run` creates and starts a new container. If a container with the same name already exists, this command will fail. `docker start` starts an existing container that was previously created with `docker run`.
If you’ve already run LocalAI before and want to start it again, use: `docker start -i local-ai`
Image Types
LocalAI provides several image types to suit different needs:
Standard Images
Standard images don’t include pre-configured models. Use these if you want to configure models manually.
CPU Image
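The CPU image follows the same pattern as the Quick Start above; it is published on both Docker Hub and Quay.io (tags and the Quay repository path are assumptions, so verify them on the registries):

```bash
# Docker Hub
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

# Quay.io mirror (repository path is an assumption; verify on quay.io)
docker run -ti --name local-ai -p 8080:8080 quay.io/go-skynet/local-ai:latest
```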
GPU Images
GPU-accelerated variants are published for the following targets (example commands follow this list):
- NVIDIA CUDA 12
- NVIDIA CUDA 11
- AMD GPU (ROCm)
- Intel GPU
- Vulkan
- NVIDIA Jetson (L4T ARM64)
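A sketch of how the GPU variants are typically started. The image tags and device flags below are assumptions; verify the exact tag names on Docker Hub or Quay.io before use:

```bash
# NVIDIA CUDA 12 (the --gpus flag requires the NVIDIA Container Toolkit)
docker run -ti --name local-ai -p 8080:8080 --gpus all \
  localai/localai:latest-gpu-nvidia-cuda-12

# AMD ROCm (pass through the kernel driver and render devices)
docker run -ti --name local-ai -p 8080:8080 \
  --device /dev/kfd --device /dev/dri \
  localai/localai:latest-gpu-hipblas

# Intel GPU (expose the render devices)
docker run -ti --name local-ai -p 8080:8080 \
  --device /dev/dri \
  localai/localai:latest-gpu-intel

# CUDA 11, Vulkan, and Jetson (L4T ARM64) images follow the same pattern
# with their own tag suffixes; check the registry for the exact names.
```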
All-in-One (AIO) Images
Recommended for beginners - These images come pre-configured with models and backends, ready to use immediately.
CPU Image
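For example (the tag is an assumption; check the registries for the current AIO tags):

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
```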
GPU Images
AIO images are also published for GPU targets (an example command follows this list):
- NVIDIA CUDA 12
- NVIDIA CUDA 11
- AMD GPU (ROCm)
- Intel GPU
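A sketch for the GPU-enabled AIO images; the tags are assumptions and appear to follow the same naming pattern as the standard GPU images, so verify them on the registry:

```bash
# NVIDIA CUDA 12 AIO image
docker run -ti --name local-ai -p 8080:8080 --gpus all \
  localai/localai:latest-aio-gpu-nvidia-cuda-12

# CUDA 11, ROCm, and Intel AIO variants use the corresponding tag suffixes
# and the same device flags shown for the standard GPU images above.
```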
Using Docker Compose
For a more manageable setup, especially with persistent volumes, use Docker Compose:
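A minimal compose file might look like this; the image tag, volume path, and in-container models directory are assumptions, so adjust them to the image you chose above:

```yaml
services:
  local-ai:
    image: localai/localai:latest-aio-cpu   # illustrative tag
    container_name: local-ai
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models   # in-container path is an assumption; check the image docs
    restart: unless-stopped
```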
Save this as docker-compose.yml and run:
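With the Compose plugin, the standard invocation is (older standalone installs use `docker-compose` instead of `docker compose`):

```bash
docker compose up -d
```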
Persistent Storage
To persist models and configurations, mount a volume:
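For example, a bind mount from the host (the in-container path is an assumption; check the image documentation for the models directory it expects):

```bash
docker run -ti --name local-ai -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:latest
```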
Or use a named volume:
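For example (the volume name and container path are illustrative):

```bash
docker volume create local-ai-models
docker run -ti --name local-ai -p 8080:8080 \
  -v local-ai-models:/models \
  localai/localai:latest
```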
What’s Included in AIO Images
All-in-One images come pre-configured with:
- Text Generation: LLM models for chat and completion
- Image Generation: Stable Diffusion models
- Text to Speech: TTS models
- Speech to Text: Whisper models
- Embeddings: Vector embedding models
- Function Calling: Support for OpenAI-compatible function calling
The AIO images use OpenAI-compatible model names (like gpt-4, gpt-4-vision-preview) but are backed by open-source models. See the container images documentation for the complete mapping.
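For example, once an AIO container is up, any OpenAI-compatible client can talk to it; a raw curl request might look like this (the model name follows the mapping described above):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello! Are you running locally?"}]
      }'
```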
Next Steps
After installation:
- Access the WebUI at http://localhost:8080
- Check available models: `curl http://localhost:8080/v1/models`
- Install additional models
- Try out examples
Advanced Configuration
For detailed information about:
- All available image tags and versions
- Advanced Docker configuration options
- Custom image builds
- Backend management
See the Container Images documentation.
Troubleshooting
Container won’t start
- Check Docker is running: `docker ps`
- Check port 8080 is available: `netstat -an | grep 8080` (Linux/Mac)
- View logs: `docker logs local-ai`
GPU not detected
- Ensure Docker has GPU access: `docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi`
- For NVIDIA: Install the NVIDIA Container Toolkit
- For AMD: Ensure devices are accessible: `ls -la /dev/kfd /dev/dri`
Models not downloading
- Check internet connection
- Verify disk space: `df -h`
- Check Docker logs for errors: `docker logs local-ai`
See Also
- Container Images Reference - Complete image reference
- Install Models - Install and configure models
- GPU Acceleration - GPU setup and optimization
- Kubernetes Installation - Deploy on Kubernetes