Chapter 3

Getting started

Welcome to LocalAI! This section covers everything you need to know after installation to start using LocalAI effectively.

Tip

Haven’t installed LocalAI yet?

See the Installation guide to install LocalAI first. Docker is the recommended installation method for most users.


Quickstart

LocalAI is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run LLMs, generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.

Tip

Security considerations

If you are exposing LocalAI remotely, make sure you protect the API endpoints adequately, for example behind a reverse proxy or firewall that filters incoming traffic, or alternatively run LocalAI with API_KEY set to gate access with an API key. Note that an API key grants full access to all features (there is no role separation), so treat it like an admin credential.
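
For example, a minimal sketch of gating access when running the Docker image, assuming the container reads the API_KEY environment variable as described above:

# Start LocalAI with an API key (placeholder value)
docker run -p 8080:8080 -e API_KEY=<your-secret-key> --name local-ai -ti localai/localai:latest

# Clients must then send the key, e.g. as a bearer token
curl http://localhost:8080/v1/models -H "Authorization: Bearer <your-secret-key>"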

Quickstart

This guide assumes you have already installed LocalAI. If you haven’t installed it yet, see the Installation guide first.

Starting LocalAI

Once installed, start LocalAI. For Docker installations:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest

The API will be available at http://localhost:8080.
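
To verify that the server is up, you can query the readiness endpoint (the same endpoint used later in this guide for the Docker healthcheck) and list the installed models:

curl http://localhost:8080/readyz
curl http://localhost:8080/v1/models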

Downloading models on start

When starting LocalAI (either via Docker or via the CLI) you can pass a list of models as arguments; they will be installed automatically before the API starts. For example:

# From the LocalAI model gallery
local-ai run llama-3.2-1b-instruct:q4_k_m

# Directly from Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf

# From an Ollama registry
local-ai run ollama://gemma:2b

# From a model configuration file at a URL
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml

# From an OCI registry
local-ai run oci://localai/phi-2:latest

Tip

Automatic Backend Detection: When you install models from the gallery or YAML files, LocalAI automatically detects your system’s GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see GPU Acceleration.

For a full list of options, you can run LocalAI with --help or refer to the Linux Installation guide for installer configuration options.

Using LocalAI and the full stack with LocalAGI

LocalAI is part of the Local family stack, along with LocalAGI and LocalRecall.

LocalAGI is a powerful, self-hostable AI Agent platform designed for maximum privacy and flexibility, which encompasses and uses the whole software stack. It provides a complete drop-in replacement for OpenAI’s Responses APIs with advanced agentic capabilities, working entirely locally on consumer-grade hardware (CPU and GPU).

Quick Start

# Clone the repository
git clone https://github.com/mudler/LocalAGI
cd LocalAGI

# CPU setup
docker compose up

# NVIDIA GPU setup
docker compose -f docker-compose.nvidia.yaml up

# Intel GPU setup
docker compose -f docker-compose.intel.yaml up

# Start with a specific model
MODEL_NAME=gemma-3-12b-it docker compose up

# NVIDIA GPU setup with custom multimodal and image models
MODEL_NAME=gemma-3-12b-it \
MULTIMODAL_MODEL=minicpm-v-4_5 \
IMAGE_MODEL=flux.1-dev-ggml \
docker compose -f docker-compose.nvidia.yaml up

Key Features

  • Privacy-Focused: All processing happens locally, ensuring your data never leaves your machine
  • Flexible Deployment: Supports CPU, NVIDIA GPU, and Intel GPU configurations
  • Multiple Model Support: Compatible with various models from Hugging Face and other sources
  • Web Interface: User-friendly chat interface for interacting with AI agents
  • Advanced Capabilities: Supports multimodal models, image generation, and more
  • Docker Integration: Easy deployment using Docker Compose

Environment Variables

You can customize your LocalAGI setup using the following environment variables:

  • MODEL_NAME: Specify the model to use (e.g., gemma-3-12b-it)
  • MULTIMODAL_MODEL: Set a custom multimodal model
  • IMAGE_MODEL: Configure an image generation model

For more advanced configuration and API documentation, visit the LocalAGI GitHub repository.

What’s Next?

There is much more to explore with LocalAI! You can run any model from Hugging Face, perform video generation, and also voice cloning. For a comprehensive overview, check out the features section.

Explore additional resources and community contributions in the sections below.

Install and Run Models

To install models with LocalAI, you can:

  • Browse the Model Gallery from the Web Interface and install models with a couple of clicks. For more details, refer to the Gallery Documentation.
  • Specify a model from the LocalAI gallery during startup, e.g., local-ai run <model_gallery_name>.
  • Use a URI to specify a model file (e.g., huggingface://..., oci://, or ollama://) when starting LocalAI, e.g., local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf.
  • Specify a URL to a model configuration file when starting LocalAI, e.g., local-ai run https://gist.githubusercontent.com/.../phi-2.yaml.
  • Manually install the models by copying the files into the models directory (--models-path).

To run models available in the LocalAI gallery, you can use the WebUI or specify the model name when starting LocalAI. Models can be browsed via the WebUI, the online model gallery, or the CLI with: local-ai models list.

To install a model from the gallery, use the model name as the URI. For example, to run LocalAI with the Hermes model, execute:

local-ai run hermes-2-theta-llama-3-8b

To install only the model, use:

local-ai models install hermes-2-theta-llama-3-8b

Note: The galleries available in LocalAI can be customized to point to a different URL or a local directory. For more information on how to setup your own gallery, see the Gallery Documentation.
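
For instance, a custom gallery can be pointed to via the GALLERIES environment variable. The example below is a sketch only; the gallery name and URL are placeholders, and the exact format is documented in the Gallery Documentation:

# Assumption: GALLERIES accepts a JSON list of {name, url} entries
GALLERIES='[{"name":"my-gallery","url":"https://example.com/index.yaml"}]' local-ai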

Run Models via URI

To run models via URI, specify a URI to a model file or a configuration file when starting LocalAI. Valid syntax includes:

  • file://path/to/model
  • huggingface://repository_id/model_file (e.g., huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf)
  • From OCIs: oci://container_image:tag, ollama://model_id:tag
  • From configuration files: https://gist.githubusercontent.com/.../phi-2.yaml

Configuration files can be used to customize the model defaults and settings. For advanced configurations, refer to the Customize Models section.

Examples

# From Hugging Face
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf

# From an Ollama registry
local-ai run ollama://gemma:2b

# From a configuration file at a URL
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml

# From an OCI registry
local-ai run oci://localai/phi-2:latest

Run Models Manually

Follow these steps to manually run models using LocalAI:

  1. Prepare Your Model and Configuration Files: Ensure you have a model file and, if necessary, a configuration YAML file. Customize model defaults and settings with a configuration file. For advanced configurations, refer to the Advanced Documentation.

  2. GPU Acceleration: For instructions on GPU acceleration, visit the GPU Acceleration page.

  3. Run LocalAI: Choose one of the following methods to run LocalAI:

# Create a models directory
mkdir models

# Copy your model into it
cp your-model.gguf models/

# Start LocalAI with the models directory mounted
docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4


curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.gguf",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'
Tip

Other Docker Images:

For other Docker images, please refer to the table in the container images section.

Example:

# Create a models directory
mkdir models

# Download the model into the models directory
wget https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/resolve/main/luna-ai-llama2-uncensored.Q4_0.gguf -O models/luna-ai-llama2

# Use one of the example prompt templates shipped with the repository
cp -rf prompt-templates/getting_started.tmpl models/luna-ai-llama2.tmpl

# Start LocalAI with the models directory mounted
docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4

# Verify the model is listed
curl http://localhost:8080/v1/models

# Try the chat completions endpoint
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "luna-ai-llama2",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'

Note
  • If running on Apple Silicon (ARM), running in Docker is not recommended due to emulation. Follow the build instructions to use Metal acceleration for full GPU support.
  • If you are running on Apple x86_64, you can use Docker; building from source brings no additional benefit.

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI

# Copy your model into the models directory
cp your-model.gguf models/

# Start with Docker Compose
docker compose up -d --pull always

# Verify the model is listed
curl http://localhost:8080/v1/models

# Try the completions endpoint
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "your-model.gguf",
     "prompt": "A long time ago in a galaxy far, far away",
     "temperature": 0.7
   }'

Tip

Other Docker Images:

For other Docker images, please refer to the table in the container images section.

Note: If you are on Windows, ensure the project is on the Linux filesystem to avoid slow model loading. For more information, see the Microsoft Docs.
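
For example, cloning the project inside the WSL filesystem (e.g. your home directory) instead of under /mnt/c avoids the slow cross-filesystem access:

# Inside your WSL shell
cd ~
git clone https://github.com/go-skynet/LocalAI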

For Kubernetes deployment, see the Kubernetes installation guide.

LocalAI binary releases are available on GitHub.
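
For example, on Linux you can download and run the binary directly. This is a sketch; the exact asset name depends on your OS and architecture, so check the Releases page:

# Assumption: the release asset for Linux x86_64 is named local-ai-Linux-x86_64
curl -L -o local-ai https://github.com/mudler/LocalAI/releases/latest/download/local-ai-Linux-x86_64
chmod +x local-ai
./local-ai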

Tip

If installing on macOS, you might encounter a message saying:

“local-ai-git-Darwin-arm64” (or the name you gave the binary) can’t be opened because Apple cannot check it for malicious software.

Hit OK, then go to Settings > Privacy & Security > Security and look for the message:

“local-ai-git-Darwin-arm64” was blocked from use because it is not from an identified developer.

Press “Allow Anyway.”

For instructions on building LocalAI from source, see the Build from Source guide.

For more model configurations, visit the Examples Section.

Try it out

Once LocalAI is installed, you can start it (either with Docker, the CLI, or the systemd service).

By default the LocalAI WebUI is accessible at http://localhost:8080. You can also use 3rd party projects to interact with LocalAI as you would use OpenAI (see also Integrations).

After installation, install new models by navigating the model gallery, or by using the local-ai CLI.

Tip

To install models with the WebUI, see the Models section. With the CLI you can list the models with local-ai models list and install them with local-ai models install <model-name>.

You can also run models manually by copying files into the models directory.

You can test the API endpoints using curl; a few examples are listed below. The models referred to here (gpt-4, gpt-4-vision-preview, tts-1, whisper-1) are the default models that come with the AIO images, but you can also use any other model you have installed.

Text Generation

Creates a model response for the given chat conversation. OpenAI documentation.

curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{ "model": "gpt-4", "messages": [{"role": "user", "content": "How are you doing?"}], "temperature": 0.1 }'

GPT Vision

Understand images.

curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gpt-4-vision-preview",
        "messages": [
          {
            "role": "user",
            "content": [
              {"type": "text", "text": "What is in the image?"},
              {
                "type": "image_url",
                "image_url": {
                  "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                }
              }
            ]
          }
        ],
        "temperature": 0.9
      }'

Function calling

Call functions

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "What is the weather like in Boston?"
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
              }
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'

Image Generation

Creates an image given a prompt. OpenAI documentation.

curl http://localhost:8080/v1/images/generations \
      -H "Content-Type: application/json" -d '{
          "prompt": "A cute baby sea otter",
          "size": "256x256"
        }'

Text to speech

Generates audio from the input text. OpenAI documentation.

curl http://localhost:8080/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3

Audio Transcription

Transcribes audio into the input language. OpenAI Documentation.

Download first a sample to transcribe:

wget --quiet --show-progress -O gb1.ogg https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg 

Send the example audio file to the transcriptions endpoint:

curl http://localhost:8080/v1/audio/transcriptions \
    -H "Content-Type: multipart/form-data" \
    -F file="@$PWD/gb1.ogg" -F model="whisper-1"

Embeddings Generation

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. OpenAI Embeddings.

curl http://localhost:8080/embeddings \
    -X POST -H "Content-Type: application/json" \
    -d '{ 
        "input": "Your text string goes here", 
        "model": "text-embedding-ada-002"
      }'
Tip

Don’t use the model file name as the model in the request unless you want to handle the prompt template yourself.

Use model names as you would with OpenAI, as in the examples above, for instance gpt-4-vision-preview or gpt-4.

Customizing the Model

To customize the prompt template or the default settings of the model, a configuration file is used. This file must adhere to the LocalAI YAML configuration standards. For comprehensive syntax details, refer to the advanced documentation. The configuration file can be located either on the local filesystem or at a remote URL (such as a GitHub Gist).

LocalAI can be initiated using either its container image or binary, with a command that includes URLs of model config files or utilizes a shorthand format (like huggingface:// or github://), which is then expanded into complete URLs.

The configuration can also be set via an environment variable. For instance:

# Pass a model configuration URI directly as an argument
local-ai github://owner/repo/file.yaml@branch

# Or set it via the MODELS environment variable (comma-separated list)
MODELS="github://owner/repo/file.yaml@branch,github://owner/repo/file.yaml@branch" local-ai

Here’s an example to initiate the phi-2 model:

docker run -p 8080:8080 localai/localai:v3.7.0 https://gist.githubusercontent.com/mudler/ad601a0488b497b69ec549150d9edd18/raw/a8a8869ef1bb7e3830bf5c0bae29a0cce991ff8d/phi-2.yaml

You can also check all the embedded model configurations in the LocalAI repository (see the tip below).

Tip

The model configurations used in the quickstart are accessible here: https://github.com/mudler/LocalAI/tree/master/embedded/models. Contributions are welcome; please feel free to submit a Pull Request.

The phi-2 model configuration from the quickstart is expanded from https://github.com/mudler/LocalAI/blob/master/examples/configurations/phi-2.yaml.

Example: Customizing the Prompt Template

To modify the prompt template, create a Github gist or a Pastebin file, and copy the content from https://github.com/mudler/LocalAI/blob/master/examples/configurations/phi-2.yaml. Alter the fields as needed:

name: phi-2
context_size: 2048
f16: true
threads: 11
gpu_layers: 90
mmap: true
parameters:
  # Reference any HF model or a local file here
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
template:
  
  chat: &template |
    Instruct: {{.Input}}
    Output:
  # Modify the prompt template here ^^^ as per your requirements
  completion: *template

Then, launch LocalAI using your gist’s URL:

## Important! Substitute with your gist's URL!
docker run -p 8080:8080 localai/localai:v3.7.0 https://gist.githubusercontent.com/xxxx/phi-2.yaml

Next Steps

Build LocalAI from source

Building LocalAI from source is an installation method that allows you to compile LocalAI yourself, which is useful for custom configurations, development, or when you need specific build options.

For complete build instructions, see the Build from Source documentation in the Installation section.

Run with container images

LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.

All-in-One images come with a pre-configured set of models and backends; standard images, instead, do not come with any model pre-configured or installed.

For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don’t have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.

Tip

Available Images Types:

  • Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images only if you plan to use the llama.cpp, stablediffusion-ncn, or rwkv backends; if you are not sure which one to use, do not use these images.
  • Images containing the aio tag are all-in-one images with all the features enabled, and come with an opinionated default configuration.

Prerequisites

Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman; for installation instructions, refer to their official documentation.

Tip

Hardware Requirements: The hardware requirements for LocalAI vary based on the model size and quantization method used. For performance benchmarks with different backends, such as llama.cpp, visit this link. The rwkv backend is noted for its lower resource consumption.

Standard container images

Standard container images do not have pre-installed models. Use these if you want to configure models manually.

Vanilla / CPU images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master | localai/localai:master |
| Latest tag | quay.io/go-skynet/local-ai:latest | localai/localai:latest |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0 | localai/localai:v3.7.0 |

Nvidia GPU (CUDA 11) images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-11 | localai/localai:master-gpu-nvidia-cuda-11 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-11 | localai/localai:latest-gpu-nvidia-cuda-11 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-nvidia-cuda-11 | localai/localai:v3.7.0-gpu-nvidia-cuda-11 |

Nvidia GPU (CUDA 12) images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-nvidia-cuda-12 | localai/localai:master-gpu-nvidia-cuda-12 |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-nvidia-cuda-12 | localai/localai:latest-gpu-nvidia-cuda-12 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-nvidia-cuda-12 | localai/localai:v3.7.0-gpu-nvidia-cuda-12 |

Intel GPU images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-intel | localai/localai:master-gpu-intel |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-intel | localai/localai:latest-gpu-intel |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-intel | localai/localai:v3.7.0-gpu-intel |

AMD GPU (hipblas) images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-gpu-hipblas | localai/localai:master-gpu-hipblas |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-hipblas | localai/localai:latest-gpu-hipblas |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-gpu-hipblas | localai/localai:v3.7.0-gpu-hipblas |

Vulkan images:

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-vulkan | localai/localai:master-vulkan |
| Latest tag | quay.io/go-skynet/local-ai:latest-gpu-vulkan | localai/localai:latest-gpu-vulkan |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-vulkan | localai/localai:v3.7.0-vulkan |

These images are compatible with Nvidia ARM64 devices, such as the Jetson Nano, Jetson Xavier NX, and Jetson AGX Xavier. For more information, see the Nvidia L4T guide.

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images from the branch (development) | quay.io/go-skynet/local-ai:master-nvidia-l4t-arm64 | localai/localai:master-nvidia-l4t-arm64 |
| Latest tag | quay.io/go-skynet/local-ai:latest-nvidia-l4t-arm64 | localai/localai:latest-nvidia-l4t-arm64 |
| Versioned image | quay.io/go-skynet/local-ai:v3.7.0-nvidia-l4t-arm64 | localai/localai:v3.7.0-nvidia-l4t-arm64 |

All-in-one images

All-In-One (AIO) images come pre-configured with a set of models and backends to fully leverage almost the entire LocalAI feature set. These images are available for both CPU and GPU environments, are designed to be easy to use, and require no configuration. The model configurations they use can be found in the LocalAI repository, separated by size.

In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models, as shown in the table below.

| Category | Model name | Real model (CPU) | Real model (GPU) |
| --- | --- | --- | --- |
| Text Generation | gpt-4 | phi-2 | hermes-2-pro-mistral |
| Multimodal Vision | gpt-4-vision-preview | bakllava | llava-1.6-mistral |
| Image Generation | stablediffusion | stablediffusion | dreamshaper-8 |
| Speech to Text | whisper-1 | whisper with whisper-base model | <= same |
| Text to Speech | tts-1 | en-us-amy-low.onnx from rhasspy/piper | <= same |
| Embeddings | text-embedding-ada-002 | all-MiniLM-L6-v2 in Q4 | all-MiniLM-L6-v2 |

Usage

Select the image (CPU or GPU) and start the container with Docker:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

LocalAI will automatically download all the required models, and the API will be available at localhost:8080.

Or with a docker-compose file:

version: "3.9"
services:
  api:
    image: localai/localai:latest-aio-cpu
    # For a specific version:
    # image: localai/localai:v3.7.0-aio-cpu
    # For Nvidia GPUs, uncomment one of the following (cuda11 or cuda12):
    # image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-11
    # image: localai/localai:v3.7.0-aio-gpu-nvidia-cuda-12
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-11
    # image: localai/localai:latest-aio-gpu-nvidia-cuda-12
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
      # ...
    volumes:
      - ./models:/models:cached
    # uncomment the following piece if running with Nvidia GPUs
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]
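
To start the stack defined above, save the file as docker-compose.yaml and run:

docker compose up -d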
Tip

Models caching: The AIO image will download the needed models on the first run if not already present and store those in /models inside the container. The AIO models will be automatically updated with new versions of AIO images.

You can change the directory inside the container by specifying a MODELS_PATH environment variable (or --models-path).
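
For example, a sketch assuming MODELS_PATH is honored by the image as described above:

docker run -p 8080:8080 --name local-ai -ti -e MODELS_PATH=/custom-models -v $PWD/models:/custom-models localai/localai:latest-aio-cpu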

If you want to use a named model or a local directory, you can mount it as a volume to /models:

docker run -p 8080:8080 --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu

or associate a volume:

docker volume create localai-models
docker run -p 8080:8080 --name local-ai -ti -v localai-models:/models localai/localai:latest-aio-cpu

Available AIO images

| Description | Quay | Docker Hub |
| --- | --- | --- |
| Latest images for CPU | quay.io/go-skynet/local-ai:latest-aio-cpu | localai/localai:latest-aio-cpu |
| Versioned image (e.g. for CPU) | quay.io/go-skynet/local-ai:v3.7.0-aio-cpu | localai/localai:v3.7.0-aio-cpu |
| Latest images for Nvidia GPU (CUDA 11) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11 | localai/localai:latest-aio-gpu-nvidia-cuda-11 |
| Latest images for Nvidia GPU (CUDA 12) | quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12 | localai/localai:latest-aio-gpu-nvidia-cuda-12 |
| Latest images for AMD GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas | localai/localai:latest-aio-gpu-hipblas |
| Latest images for Intel GPU | quay.io/go-skynet/local-ai:latest-aio-gpu-intel | localai/localai:latest-aio-gpu-intel |

Available environment variables

The AIO images inherit the same environment variables as the base images and the LocalAI environment (which you can inspect by running LocalAI with --help). In addition, they support the following environment variables, available only in the container images:

| Variable | Default | Description |
| --- | --- | --- |
| PROFILE | Auto-detected | The size of the model to use. Available: cpu, gpu-8g |
| MODELS | Auto-detected | A list of model YAML configuration file URIs/URLs (see also running models) |
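
For example, to force a specific profile instead of relying on auto-detection (a sketch based on the variables above):

docker run -p 8080:8080 -e PROFILE=gpu-8g --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12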


Run with Kubernetes

For installing LocalAI in Kubernetes, the deployment file from the examples can be used and customized as preferred:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment.yaml

For Nvidia GPUs:

kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment-nvidia.yaml

Alternatively, the Helm chart can be used:

# Add the LocalAI Helm repository
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update

# Inspect and customize the default values
helm show values go-skynet/local-ai > values.yaml

# Install the chart with your customized values
helm install local-ai go-skynet/local-ai -f values.yaml
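
In both cases, you can verify that the deployment is running with standard kubectl commands, for example:

kubectl get pods
kubectl get svc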