Run with container images
LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.
All-in-One images come with a pre-configured set of models and backends; standard images do not have any models pre-configured or installed.
For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don't have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.
Available image types:

- Images ending with `-core` are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the `llama.cpp`, `stablediffusion-ncn`, `tinydream` or `rwkv` backends; if you are not sure which one to use, do not use these images.
- Images containing the `aio` tag are all-in-one images with all the features enabled, and come with an opinionated set of configurations.
- FFmpeg is not included in the default images due to its licensing. If you need FFmpeg, use the images ending with `-ffmpeg`. Note that `ffmpeg` is required to use LocalAI's `audio-to-text` features.
- If you are using old or outdated CPUs and no GPU, you might need to set the `REBUILD` environment variable to `true`, along with options to disable the CPU flags your processor does not support; note, however, that inference will be slow. See also flagset compatibility and the example after this list.
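For example, here is a sketch of starting the CPU image with a rebuild on an older processor. The exact CMake flags to disable depend on your CPU and on the backend version, so treat the `CMAKE_ARGS` values below as placeholders rather than a definitive list:

# Illustrative only: rebuild at start-up with some CPU features disabled
docker run -p 8080:8080 --name local-ai -ti \
  -e REBUILD=true \
  -e CMAKE_ARGS="-DGGML_AVX2=OFF -DGGML_FMA=OFF -DGGML_F16C=OFF" \
  localai/localai:latest-aio-cpu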
Prerequisites
Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman; for installation instructions, refer to their respective documentation.
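To confirm that your container engine works and, for Nvidia GPUs, that the container toolkit can access the GPU, you can run checks along these lines (the CUDA image tag below is only an example):

# Check the container engine
docker --version   # or: podman --version
# For Nvidia GPUs: verify the NVIDIA Container Toolkit can see the GPU
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi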
Hardware requirements: the hardware requirements for LocalAI vary based on the model size and quantization method used. For performance benchmarks with different backends, such as `llama.cpp`, visit this link. The `rwkv` backend is noted for its lower resource consumption.
All-in-one images
All-In-One images come pre-configured with a set of models and backends to fully leverage almost all of the LocalAI feature set. These images are available for both CPU and GPU environments, are designed to be easy to use, and require no configuration. The model configurations can be found here, separated by size.
In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models, as shown in the table below:
| Category | Model name | Real model (CPU) | Real model (GPU) |
|---|---|---|---|
| Text Generation | gpt-4 | phi-2 | hermes-2-pro-mistral |
| Multimodal Vision | gpt-4-vision-preview | bakllava | llava-1.6-mistral |
| Image Generation | stablediffusion | stablediffusion | dreamshaper-8 |
| Speech to Text | whisper-1 | whisper with the whisper-base model | same as CPU |
| Text to Speech | tts-1 | en-us-amy-low.onnx from rhasspy/piper | same as CPU |
| Embeddings | text-embedding-ada-002 | all-MiniLM-L6-v2 in Q4 | all-MiniLM-L6-v2 |
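For example, once an AIO container is running (see Usage below), an OpenAI-style request addressed to the `gpt-4` name is served by the corresponding open-source model from the table:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "How are you?"}]}'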
Usage
Select the image (CPU or GPU) and start the container with Docker:
# CPU example
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
# For Nvidia GPUs:
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12
LocalAI will automatically download all the required models, and the API will be available at localhost:8080.
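You can verify that the API is ready using the same endpoint the compose health check below relies on, or list the configured models:

curl http://localhost:8080/readyz
curl http://localhost:8080/v1/models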
Or with a docker-compose file:
version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For a specific version:
    # image: localai/localai:v2.21.1-aio-cpu
    # For Nvidia GPUs, uncomment one of the following (cuda11 or cuda12):
    # image: localai/localai:v2.21.1-aio-gpu-nvidia-cuda-11
    # image: localai/localai:v2.21.1-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
      # ...
    volumes:
      - ./models:/build/models:cached
    # uncomment the following section if running with Nvidia GPUs
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
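Save the file as docker-compose.yaml and start the stack with:

docker compose up -d
# follow the logs while the models download
docker compose logs -f api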
Models caching: the AIO image will download the required models on the first run if they are not already present, and store them in `/build/models` inside the container. The AIO models are automatically updated with new versions of the AIO images. You can change the directory inside the container by setting the `MODELS_PATH` environment variable (or passing `--models-path`).
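For example, to store models under `/models` instead (the path here is just an illustration):

docker run -p 8080:8080 --name local-ai -ti \
  -e MODELS_PATH=/models \
  -v $PWD/models:/models \
  localai/localai:latest-aio-cpu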
If you want to use a named model or a local directory, you can mount it as a volume at `/build/models`:
docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/build/models localai/localai:latest-aio-cpu
or use a named Docker volume:
docker volume create localai-models
docker run -p 8080:8080 --name local-ai -ti -v localai-models:/build/models localai/localai:latest-aio-cpu
Available AIO images
| Description | Quay | Docker Hub |
|---|---|---|
| Latest images for CPU | quay.io/go-skynet/local-ai:latest-aio-cpu | localai/localai:latest-aio-cpu |
| Versioned image (e.g. for CPU) | quay.io/go-skynet/local-ai:v2.21.1-aio-cpu | localai/localai:v2.21.1-aio-cpu |
| Latest images for Nvidia GPU (CUDA11) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11 | localai/localai:latest-aio-gpu-nvidia-cuda-11 |
| Latest images for Nvidia GPU (CUDA12) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12 | localai/localai:latest-aio-gpu-nvidia-cuda-12 |
| Latest images for AMD GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas | localai/localai:latest-aio-gpu-hipblas |
| Latest images for Intel GPU (sycl f16) | quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f16 | localai/localai:latest-aio-gpu-intel-f16 |
| Latest images for Intel GPU (sycl f32) | quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f32 | localai/localai:latest-aio-gpu-intel-f32 |
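Any tag can be pulled from either registry; for example:

docker pull quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12
# the equivalent image on Docker Hub:
docker pull localai/localai:latest-aio-gpu-nvidia-cuda-12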
Available environment variables
The AIO images inherit the same environment variables as the base images and LocalAI's own environment (which you can inspect by calling `--help`). In addition, they support the following environment variables that are available only in the container images:
| Variable | Default | Description |
|---|---|---|
| PROFILE | Auto-detected | The size of the model to use. Available: `cpu`, `gpu-8g` |
| MODELS | Auto-detected | A list of model YAML configuration file URIs/URLs (see also running models) |
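For example, to force the CPU profile and point `MODELS` at your own configuration file (the URL below is a placeholder, not a real configuration):

docker run -p 8080:8080 --name local-ai -ti \
  -e PROFILE=cpu \
  -e MODELS="https://example.com/my-model.yaml" \
  localai/localai:latest-aio-cpu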
Standard container images
Standard container images do not have pre-installed models.
Images are available with and without Python dependencies; note that images with Python dependencies are larger (on the order of 17 GB).
Images with `core` in the tag are smaller and do not contain any Python dependencies.
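A minimal sketch of running a standard image with a locally mounted models directory; unlike the AIO images, no models are downloaded automatically:

docker run -p 8080:8080 --name local-ai -ti \
  -v $PWD/models:/build/models \
  localai/localai:latest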