Run with container images

LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.

All-in-One images come with a pre-configured set of models and backends; standard images do not have any models pre-configured or installed.

For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don't have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.

Tip

Available image types:

  • Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the llama.cpp, stablediffusion-ncnn, or rwkv backends; if you are not sure which one to use, do not use these images.
  • Images containing the aio tag are all-in-one images with all the features enabled, and come with an opinionated set of default configurations.

Prerequisites

Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker and Podman; refer to their respective installation guides for setup instructions.

Tip

Hardware requirements: The hardware requirements for LocalAI vary based on the model size and the quantization method used. For performance benchmarks across different backends, such as llama.cpp, see the benchmarks published in the llama.cpp repository. The rwkv backend is noted for its lower resource consumption.

Standard container images

Standard container images do not have pre-installed models. Use these if you want to configure models manually.
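
For example, you can start a standard CPU image with a local directory mounted at /models. This is a minimal sketch: it assumes any model files and YAML configurations you want to serve are placed in ./models on the host, where LocalAI will pick them up at startup:

docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/models localai/localai:latest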

CPU:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master | localai/localai:master |
| Latest tag | quay.io/go-skynet/local-ai:latest | localai/localai:latest |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0 | localai/localai:v3.7.0 |

Nvidia GPU (CUDA 11):

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-11 | localai/localai:master-gpu-nvidia-cuda-11 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-11 | localai/localai:latest-gpu-nvidia-cuda-11 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-nvidia-cuda-11 | localai/localai:v3.7.0-gpu-nvidia-cuda-11 |

Nvidia GPU (CUDA 12):

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-12 | localai/localai:master-gpu-nvidia-cuda-12 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-12 | localai/localai:latest-gpu-nvidia-cuda-12 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-nvidia-cuda-12 | localai/localai:v3.7.0-gpu-nvidia-cuda-12 |

Intel GPU:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-intel | localai/localai:master-gpu-intel |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-intel | localai/localai:latest-gpu-intel |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-intel | localai/localai:v3.7.0-gpu-intel |

AMD GPU (hipblas):

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-hipblas | localai/localai:master-gpu-hipblas |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-hipblas | localai/localai:latest-gpu-hipblas |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-hipblas | localai/localai:v3.7.0-gpu-hipblas |

Vulkan:

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-vulkan | localai/localai:master-vulkan |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-vulkan | localai/localai:latest-gpu-vulkan |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-vulkan | localai/localai:v3.7.0-vulkan |
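
To run one of the CUDA images above with GPU access, pass Docker's --gpus flag. A sketch, assuming the Nvidia Container Toolkit is installed on the host:

docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-gpu-nvidia-cuda-12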

These images are compatible with Nvidia ARM64 devices, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Xavier. For more information, see the Nvidia L4T guide.

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64 | localai/localai:master-nvidia-l4t-arm64 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64 | localai/localai:latest-nvidia-l4t-arm64 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-nvidia-l4t-arm64 | localai/localai:v3.7.0-nvidia-l4t-arm64 |

All-in-one images

All-In-One images come pre-configured with a set of models and backends to leverage almost the entire LocalAI feature set. These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and require no configuration. The model configurations they ship with, separated by size, can be found in the LocalAI repository.

In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open source models. The mapping is shown in the table below:

| Category | Model name | Real model (CPU) | Real model (GPU) |
|---|---|---|---|
| Text Generation | gpt-4 | phi-2 | hermes-2-pro-mistral |
| Multimodal Vision | gpt-4-vision-preview | bakllava | llava-1.6-mistral |
| Image Generation | stablediffusion | stablediffusion | dreamshaper-8 |
| Speech to Text | whisper-1 | whisper with the whisper-base model | <= same |
| Text to Speech | tts-1 | en-us-amy-low.onnx from rhasspy/piper | <= same |
| Embeddings | text-embedding-ada-002 | all-MiniLM-L6-v2 in Q4 | all-MiniLM-L6-v2 |

Usage

Select the image (CPU or GPU) and start the container with Docker:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

LocalAI will automatically download all the required models, and the API will be available at localhost:8080.
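
Once the container is up, you can verify the API with a request to the OpenAI-compatible chat completions endpoint. A minimal example; in the AIO images, the gpt-4 name is served by the open source model listed in the mapping table above:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "How are you?"}]}'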

Or with a docker-compose file:

version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For a specific version:
    # image: localai/localai:v3.7.0-aio-cpu
    # For Nvidia GPUs, uncomment one of the following (cuda11 or cuda12):
    # image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-11
    # image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
      # ...
    volumes:
      - ./models:/models:cached
    # uncomment the following section if running with Nvidia GPUs
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
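
Then start the stack in the background with Docker Compose; the healthcheck above polls the /readyz endpoint until the service is ready:

docker compose up -d
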
Tip

Models caching: The AIO image will download the required models on first run if they are not already present, and store them in /models inside the container. The AIO models are automatically updated when new versions of the AIO image are released.

You can change the models directory inside the container by setting the MODELS_PATH environment variable (or the --models-path flag).
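
For example, to store models under /data/models inside the container (a sketch; the host directory and container path are arbitrary choices):

docker run -p 8080:8080 --name local-ai -ti -e MODELS_PATH=/data/models -v $PWD/models:/data/models localai/localai:latest-aio-cpu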

If you want to persist the models, you can mount a local directory as a volume at /models:

docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu

or create and attach a named volume:

docker volume create localai-models
docker run -p 8080:8080 --name local-ai -ti -v localai-models:/models localai/localai:latest-aio-cpu

Available AIO images

| Description | Quay | Docker Hub |
|---|---|---|
| Latest images for CPU | quay.io/go-skynet/local-ai:latest-aio-cpu | localai/localai:latest-aio-cpu |
| Versioned image (e.g. for CPU) | quay.io/go-skynet/local-ai:v3.7.0-aio-cpu | localai/localai:v3.7.0-aio-cpu |
| Latest images for Nvidia GPU (CUDA 11) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11 | localai/localai:latest-aio-gpu-nvidia-cuda-11 |
| Latest images for Nvidia GPU (CUDA 12) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12 | localai/localai:latest-aio-gpu-nvidia-cuda-12 |
| Latest images for AMD GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas | localai/localai:latest-aio-gpu-hipblas |
| Latest images for Intel GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-intel | localai/localai:latest-aio-gpu-intel |

Available environment variables

The AIO images inherit the same environment variables as the base images, along with the general LocalAI environment variables (which you can inspect by calling --help). In addition, they support the following environment variables available only in the container image:

| Variable | Default | Description |
|---|---|---|
| PROFILE | Auto-detected | The size of the model to use. Available: cpu, gpu-8g |
| MODELS | Auto-detected | A list of model YAML configuration file URIs/URLs (see also running models) |
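
For example, to force the cpu profile and load an extra model configuration at startup (a sketch; the MODELS URL shown is a placeholder for your own model YAML):

# the MODELS URL below is a placeholder for a real model configuration file
docker run -p 8080:8080 --name local-ai -ti \
  -e PROFILE=cpu \
  -e MODELS=https://example.com/my-model.yaml \
  localai/localai:latest-aio-cpu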
