Integrations
Community integrations
A list of projects that use LocalAI directly behind the scenes can be found here.
Below is a list of software that integrates with LocalAI.
- AnythingLLM
- Logseq GPT3 OpenAI plugin allows setting a base URL, and works with LocalAI.
- CodeGPT (JetBrains plugin, https://plugins.jetbrains.com/plugin/21056-codegpt) allows custom OpenAI-compatible endpoints since version 2.4.0
- Wave Terminal has native support for LocalAI!
- https://github.com/longy2k/obsidian-bmo-chatbot
- https://github.com/FlowiseAI/Flowise
- https://github.com/k8sgpt-ai/k8sgpt
- https://github.com/kairos-io/kairos
- https://github.com/langchain4j/langchain4j
- https://github.com/henomis/lingoose
- https://github.com/trypromptly/LLMStack
- https://github.com/mattermost/openops
- https://github.com/charmbracelet/mods
- https://github.com/cedriking/spark
- Big AGI is a powerful web interface running entirely in the browser, supporting LocalAI
- Midori AI Subsystem Manager is a powerful Docker subsystem for running all types of AI programs
- LLPhant is a PHP library for interacting with LLMs and Vector Databases
- GPTLocalhost (Word Add-in) - run LocalAI in Microsoft Word locally
- use LocalAI from Nextcloud with the integration plugin and AI assistant
- Langchain integration package, available on PyPI
Feel free to open a Pull Request (by clicking the “Edit page” link below) to get a page made for your project, or if you see an error on one of the pages!
Configuration Guides
This section provides step-by-step instructions for configuring specific software to work with LocalAI.
OpenCode
OpenCode is an AI-powered code editor that can be configured to use LocalAI as its backend provider.
Prerequisites
- LocalAI must be running and accessible (either locally or on a network)
- You need to know your LocalAI server’s IP address/hostname and port (default is 8080)
Configuration Steps
Edit the OpenCode configuration file
Open the OpenCode configuration file located at `~/.config/opencode/opencode.json` in your editor.

Add LocalAI provider configuration

Add the following configuration to your `opencode.json` file, replacing the values with your own:
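OpenCode’s configuration schema can change between versions, so treat the snippet below as a minimal sketch: it assumes custom providers are declared via the `@ai-sdk/openai-compatible` package with a `baseURL` option, and the `localai` key and model entry are illustrative placeholders.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "localai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LocalAI (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "qwen_qwen3-4b-instruct-2507": {
          "name": "Qwen3 4B Instruct",
          "limit": {
            "context": 32768,
            "output": 8192
          }
        }
      }
    }
  }
}
```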
Customize the configuration

- `baseURL`: Replace `http://127.0.0.1:8080/v1` with your LocalAI server’s address and port.
- `name`: Change “LocalAI (local)” to a descriptive name for your setup.
- `models`: Replace the model names with the actual model names available in your LocalAI instance. You can find available models by checking your LocalAI models directory or using the LocalAI API.
- `limit`: Adjust the `context` and `output` token limits based on your model’s capabilities and available resources.
Verify your models
Ensure that the model names in the configuration exactly match the model names configured in your LocalAI instance. You can verify available models by checking your LocalAI configuration or using the `/v1/models` endpoint.
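For example, you can query the endpoint directly (adjust the host and port to match your setup):

```bash
# Lists the models the LocalAI instance currently serves
curl http://127.0.0.1:8080/v1/models
```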
Restart OpenCode

After saving the configuration file, restart OpenCode for the changes to take effect.
GitHub Actions
You can use LocalAI in GitHub Actions workflows to perform AI-powered tasks like code review, diff summarization, or automated analysis. The LocalAI GitHub Action makes it easy to spin up a LocalAI instance in your CI/CD pipeline.
Prerequisites
- A GitHub repository with Actions enabled
- A model name from models.localai.io or a Hugging Face model reference
Example Workflow
This example workflow demonstrates how to use LocalAI to summarize pull request diffs and send notifications:
Create a workflow file
Create a new file in your repository at `.github/workflows/localai.yml`:
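Below is a sketch of such a workflow. The action reference (`mudler/localai-github-action`) and its `model` input are assumptions, as is the `summarize` label; check the LocalAI GitHub Action README for the exact name and options.

```yaml
name: PR diff summary

on:
  pull_request:
    types: [closed]

jobs:
  summarize:
    # Run only when the PR was merged and carries a specific label
    if: github.event.pull_request.merged == true && contains(github.event.pull_request.labels.*.name, 'summarize')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Start a LocalAI instance inside the runner.
      # The action reference and its inputs are assumptions; check the
      # LocalAI GitHub Action README for the exact name and options.
      - uses: mudler/localai-github-action@v1
        with:
          model: "qwen_qwen3-4b-instruct-2507"

      - name: Summarize the merged diff
        run: |
          # Diff the merge commit against its first parent,
          # capped so the prompt stays within the model's context
          DIFF=$(git diff HEAD~1 | head -c 12000)
          jq -n --arg diff "$DIFF" '{
            model: "qwen_qwen3-4b-instruct-2507",
            messages: [
              {role: "system", content: "Summarize this code diff for a human reviewer."},
              {role: "user", content: $diff}
            ]
          }' | curl -s http://localhost:8080/v1/chat/completions \
                 -H "Content-Type: application/json" -d @- \
            | jq -r '.choices[0].message.content'
```

From here you could post the summary as a PR comment or forward it to a notification service.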
Configuration Options
- Model selection: Replace `qwen_qwen3-4b-instruct-2507` with any model from models.localai.io. You can also use Hugging Face models by specifying the full Hugging Face model URL.
- Trigger conditions: Customize the `if` condition to control when the workflow runs. The example only runs when a PR is merged and has a specific label.
- API endpoint: The LocalAI container runs on `http://localhost:8080` by default. The action exposes the service on the standard port.
- Custom prompts: Modify the system message in the JSON payload to change what LocalAI is asked to do with the diff.
Use Cases
- Code review automation: Automatically review code changes and provide feedback
- Diff summarization: Generate human-readable summaries of code changes
- Documentation generation: Create documentation from code changes
- Security scanning: Analyze code for potential security issues
- Test generation: Generate test cases based on code changes
Realtime Voice Assistant
LocalAI supports realtime voice interactions, enabling voice assistant applications with real-time speech-to-speech communication. A complete example implementation is available in the LocalAI-examples repository.
Overview
The realtime voice assistant example demonstrates how to build a voice assistant that:
- Captures audio input from the user in real-time
- Transcribes speech to text using LocalAI’s transcription capabilities
- Processes the text with a language model
- Generates audio responses using text-to-speech
- Streams audio back to the user in real-time
Prerequisites
- A transcription model (e.g., Whisper) configured in LocalAI
- A text-to-speech model configured in LocalAI
- A language model for generating responses
Getting Started
Clone the example repository
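A sketch, assuming the examples live at `github.com/mudler/LocalAI-examples` and the realtime example sits in a directory of that name (check the repository layout):

```bash
git clone https://github.com/mudler/LocalAI-examples.git
cd LocalAI-examples
# Change into the realtime voice assistant example
# (the directory name is an assumption; check the repository)
cd realtime
```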
Start LocalAI with Docker Compose
The first time you start Docker Compose, it will take a while to download the available models. You can follow the model downloads in real-time:
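A minimal sketch, assuming the example ships a Docker Compose file:

```bash
# Start LocalAI and begin downloading the configured models
docker compose up -d

# Follow the logs to watch the model downloads
docker compose logs -f
```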
Install host dependencies
Install the required host dependencies (sudo is required):
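The exact packages depend on your distribution; on Debian/Ubuntu the audio capture stack typically needs PortAudio, so a sketch looks like this (package names are assumptions, see the example’s README):

```bash
sudo apt-get update
sudo apt-get install -y portaudio19-dev python3-pip
```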
Run the voice assistant
Start the voice assistant application:
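A sketch, assuming the client ships a `requirements.txt` and a `main.py` entry point (file names are assumptions; see the example’s README):

```bash
# Install the Python client dependencies and launch the assistant
pip install -r requirements.txt
python main.py
```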
Configuration Notes
- CPU vs GPU: The example is optimized for CPU usage. However, you can run LocalAI with a GPU for better performance and to use bigger/better models.
- Python client: The Python part downloads PyTorch for CPU, but this is fine as computation is offloaded to LocalAI. The Python client only runs Silero VAD (Voice Activity Detection), which is fast, and handles audio recording.
- Thin client architecture: The Python client is designed to run on thin clients such as Raspberry Pis, while LocalAI handles the heavier computational workload on a more powerful machine.
Key Features
- Real-time processing: Low-latency audio streaming for natural conversations
- Voice Activity Detection (VAD): Automatic detection of when the user is speaking
- Turn-taking: Handles conversation flow with proper turn detection
- OpenAI-compatible API: Uses LocalAI’s OpenAI-compatible realtime API endpoints
Use Cases
- Voice assistants: Build custom voice assistants for home automation or productivity
- Accessibility tools: Create voice interfaces for accessibility applications
- Interactive applications: Add voice interaction to games, educational software, or entertainment apps
- Customer service: Implement voice-based customer support systems