Quickstart
LocalAI is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run LLMs, generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.
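Because the API is OpenAI-compatible, existing OpenAI clients can typically be pointed at a LocalAI instance unchanged. A minimal sketch of a chat request (assuming LocalAI is running on the default port 8080 and a model named `gpt-4` has been configured; both the port and the model name are assumptions):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```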
Security considerations
If you expose LocalAI remotely, make sure the API endpoints are adequately protected: either place LocalAI behind a mechanism that filters incoming traffic, or run LocalAI with API_KEY set to gate access with an API key. Note that an API key grants full access to all features (there is no role separation), so it should be treated as an admin credential.
To access the WebUI with an API_KEY, browser extensions such as Requestly can be used (see also https://github.com/mudler/LocalAI/issues/2227#issuecomment-2093333752). See also API flags for the flags and options available when starting LocalAI.
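As a sketch, starting the server with an API key and then calling it might look like the following (the key value is a placeholder you choose yourself, and the Bearer header follows the standard OpenAI convention; port 8080 is assumed):

```shell
# Start LocalAI gated behind an API key (placeholder value; use your own secret)
API_KEY=sk-example local-ai

# Requests must then carry the key in the Authorization header
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer sk-example"
```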
Using the Bash Installer
Install LocalAI easily using the bash installer with the following command:
curl https://localai.io/install.sh | sh
For a full list of options, refer to the Installer Options documentation.
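For illustration, installer options are typically passed as environment variables in front of the `sh` invocation. The variable name and value below are assumptions for the sketch; consult the Installer Options documentation for the options actually supported:

```shell
# Example: pin the installed version via an environment variable
# (VERSION is an assumed option name; see Installer Options)
curl https://localai.io/install.sh | VERSION=v2.20.1 sh
```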
Binaries can also be manually downloaded.
Using Container Images or Kubernetes
LocalAI is available as a container image compatible with various container engines such as Docker, Podman, and Kubernetes. Container images are published on quay.io and Docker Hub.
For detailed instructions, see Using container images. For Kubernetes deployment, see Run with Kubernetes.
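A minimal sketch of running the standard CPU image with Docker (the image tag is an assumption; the published tags are listed on quay.io and Docker Hub):

```shell
# Run the CPU-only image and expose the API on port 8080
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu
```

The same image can be used with Podman by substituting `podman run` for `docker run`.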
Running LocalAI with All-in-One (AIO) Images
Already have a model file? Skip to Run models manually.
LocalAI’s All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the features of LocalAI. If pre-configured models are not required, you can use the standard images.
These images are available for both CPU and GPU environments. AIO images are designed for ease of use and require no additional configuration.
It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the manual method.
The AIO images come pre-configured with the following features:
- Text to Speech (TTS)
- Speech to Text
- Function calling
- Large Language Models (LLM) for text generation
- Image generation
- Embedding server
For instructions on using AIO images, see Using container images.
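As a sketch, the CPU AIO image can be started like this (the tag name is an assumption; verify it against the published images on quay.io or Docker Hub):

```shell
# Start the All-in-One CPU image; pre-configured models are
# downloaded on first start, so the initial boot can take a while
docker run -ti --name local-ai-aio -p 8080:8080 localai/localai:latest-aio-cpu
```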
What’s Next?
There is much more to explore with LocalAI! You can run any model from Hugging Face, perform video generation, and also voice cloning. For a comprehensive overview, check out the features section.
Explore additional resources and community contributions.
Last updated 24 Aug 2024, 17:01 -0400.