๐Ÿงจ Diffusers

Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. LocalAI has a diffusers backend which allows image generation using the diffusers library.

(Example images generated with AnimagineXL)

Note: currently only image generation is supported. The backend is experimental, so you might encounter issues with models that haven’t been tested yet.


This is an extra backend: it is already available in the container images, so no additional setup is required.

Model setup

The models are downloaded automatically from Hugging Face the first time you use the backend.

Create a model configuration file in the models directory, for instance to use Linaqruf/animagine-xl on CPU:

name: animagine-xl
model: Linaqruf/animagine-xl
backend: diffusers

# Force CPU usage - set to true for GPU
f16: false
pipeline_type: StableDiffusionXLPipeline
cuda: false # Enable for GPU usage (CUDA)
scheduler_type: euler_a

Local models

You can also use local models, or tune parameters such as clip_skip and scheduler_type, for instance:

name: stablediffusion
model: toonyou_beta6.safetensors
backend: diffusers
step: 30
f16: true
pipeline_type: StableDiffusionPipeline
cuda: true
enable_parameters: "negative_prompt,num_inference_steps,clip_skip"
scheduler_type: "k_dpmpp_sde"
cfg_scale: 8
clip_skip: 11

Configuration parameters

The following parameters are available in the configuration file:

| Parameter | Description | Default |
| --- | --- | --- |
| `f16` | Force the usage of `float16` instead of `float32` | `false` |
| `step` | Number of steps to run the model for | `30` |
| `cuda` | Enable CUDA acceleration | `false` |
| `enable_parameters` | Request parameters to forward to the model | `negative_prompt,num_inference_steps,clip_skip` |
| `scheduler_type` | Scheduler type | `k_dpmpp_sde` |
| `cfg_scale` | Classifier-free guidance scale | `8` |
| `clip_skip` | Number of CLIP layers to skip | `None` |
| `pipeline_type` | Pipeline type | `StableDiffusionPipeline` |
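The `enable_parameters` option acts as an allow-list: only the request fields it names are forwarded to the pipeline. A minimal sketch of that filtering logic (illustrative only, not LocalAI's actual implementation; the function name is hypothetical):

```python
# Illustrative sketch of an enable_parameters allow-list.
# NOT LocalAI's actual code; the function name is hypothetical.

def filter_request_params(request: dict, enable_parameters: str) -> dict:
    """Keep only the request fields named in the comma-separated allow-list."""
    allowed = {p.strip() for p in enable_parameters.split(",")}
    return {k: v for k, v in request.items() if k in allowed}

request = {
    "negative_prompt": "lowres, blurry",
    "num_inference_steps": 30,
    "clip_skip": 2,
    "width": 4096,  # not in the allow-list below, so it is dropped
}
params = filter_request_params(
    request, "negative_prompt,num_inference_steps,clip_skip"
)
```

With this allow-list, `params` keeps the three enabled fields and drops `width`.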

Several scheduler types are available:

| Scheduler | Description |
| --- | --- |
| `ddim` | DDIM |
| `pndm` | PNDM |
| `heun` | Heun |
| `unipc` | UniPC |
| `euler` | Euler |
| `euler_a` | Euler a |
| `lms` | LMS |
| `k_lms` | LMS Karras |
| `dpm_2` | DPM2 |
| `k_dpm_2` | DPM2 Karras |
| `dpm_2_a` | DPM2 a |
| `k_dpm_2_a` | DPM2 a Karras |
| `dpmpp_2m` | DPM++ 2M |
| `k_dpmpp_2m` | DPM++ 2M Karras |
| `dpmpp_sde` | DPM++ SDE |
| `k_dpmpp_sde` | DPM++ SDE Karras |
| `dpmpp_2m_sde` | DPM++ 2M SDE |
| `k_dpmpp_2m_sde` | DPM++ 2M SDE Karras |
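These names correspond to scheduler classes in the diffusers library, with the `k_` variants enabling Karras sigmas. The class names below are real diffusers classes, but the mapping itself is a rough illustration of the idea, not LocalAI's actual code:

```python
# Rough sketch: scheduler_type name -> (diffusers scheduler class name,
# use_karras_sigmas). Illustrative only, not LocalAI's implementation.

SCHEDULERS = {
    "ddim": ("DDIMScheduler", False),
    "pndm": ("PNDMScheduler", False),
    "heun": ("HeunDiscreteScheduler", False),
    "unipc": ("UniPCMultistepScheduler", False),
    "euler": ("EulerDiscreteScheduler", False),
    "euler_a": ("EulerAncestralDiscreteScheduler", False),
    "lms": ("LMSDiscreteScheduler", False),
    "k_lms": ("LMSDiscreteScheduler", True),  # Karras sigmas
    "dpm_2": ("KDPM2DiscreteScheduler", False),
    "k_dpm_2": ("KDPM2DiscreteScheduler", True),
    "dpm_2_a": ("KDPM2AncestralDiscreteScheduler", False),
    "k_dpm_2_a": ("KDPM2AncestralDiscreteScheduler", True),
    "dpmpp_2m": ("DPMSolverMultistepScheduler", False),
    "k_dpmpp_2m": ("DPMSolverMultistepScheduler", True),
    "dpmpp_sde": ("DPMSolverSDEScheduler", False),
    "k_dpmpp_sde": ("DPMSolverSDEScheduler", True),
    # The 2M SDE variants additionally need algorithm_type="sde-dpmsolver++"
    "dpmpp_2m_sde": ("DPMSolverMultistepScheduler", False),
    "k_dpmpp_2m_sde": ("DPMSolverMultistepScheduler", True),
}

def resolve_scheduler(name: str):
    """Return (diffusers class name, use_karras_sigmas) for a scheduler_type."""
    if name not in SCHEDULERS:
        raise ValueError(f"unknown scheduler_type: {name}")
    return SCHEDULERS[name]
```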

The available pipeline types are:

| Pipeline type | Description |
| --- | --- |
| `StableDiffusionPipeline` | Stable Diffusion pipeline |
| `StableDiffusionImg2ImgPipeline` | Stable Diffusion image-to-image pipeline |
| `StableDiffusionDepth2ImgPipeline` | Stable Diffusion depth-to-image pipeline |
| `DiffusionPipeline` | Generic diffusion pipeline |
| `StableDiffusionXLPipeline` | Stable Diffusion XL pipeline |


Text to Image

Use the image generation endpoint with the model name from the configuration file:

curl http://localhost:8080/v1/images/generations \
    -H "Content-Type: application/json" \
    -d '{
      "prompt": "<positive prompt>|<negative prompt>",
      "model": "animagine-xl",
      "step": 51,
      "size": "1024x1024"
    }'
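The prompt field packs the positive and negative prompts into a single string separated by |. A small sketch of that convention, assuming the first | is the separator (illustrative only, not LocalAI's actual parsing code):

```python
def split_prompt(prompt: str):
    """Split "positive|negative" into its two parts.

    Assumes the first '|' separates the positive and negative prompts;
    a prompt with no '|' has no negative part. Illustrative sketch only.
    """
    positive, sep, negative = prompt.partition("|")
    return positive.strip(), (negative.strip() if sep else None)

pos, neg = split_prompt("1girl, anime, masterpiece|lowres, bad anatomy")
```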

Image to Image


An example model (GPU):

name: stablediffusion-edit
model: nitrosocke/Ghibli-Diffusion
backend: diffusers
step: 25

f16: true
pipeline_type: StableDiffusionImg2ImgPipeline
cuda: true
enable_parameters: "negative_prompt,num_inference_steps,image"
Send the input image as base64 along with the prompt:

(echo -n '{"image": "'; base64 $IMAGE_PATH; echo '", "prompt": "a sky background","size": "512x512","model":"stablediffusion-edit"}') |
curl -H "Content-Type: application/json" -d @- http://localhost:8080/v1/images/generations
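The same request body can be built in Python using only the standard library; a minimal sketch mirroring the shell pipeline above (the endpoint and fields follow the curl example; error handling omitted):

```python
import base64
import json

def build_img2img_payload(image_path: str, prompt: str, model: str,
                          size: str = "512x512") -> bytes:
    """Build the JSON body equivalent of the shell pipeline above."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "image": image_b64,
        "prompt": prompt,
        "size": size,
        "model": model,
    }).encode("utf-8")

# POST it to the endpoint (assumes a LocalAI server on localhost:8080):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/images/generations",
#     data=build_img2img_payload("input.png", "a sky background",
#                                "stablediffusion-edit"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```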

Depth to Image


An example model (GPU):

name: stablediffusion-depth
model: stabilityai/stable-diffusion-2-depth
backend: diffusers
step: 50
f16: true
pipeline_type: StableDiffusionDepth2ImgPipeline
cuda: true
enable_parameters: "negative_prompt,num_inference_steps,image"
cfg_scale: 6
Then call the endpoint with a base64-encoded input image:

(echo -n '{"image": "'; base64 ~/path/to/image.jpeg; echo '", "prompt": "a sky background","size": "512x512","model":"stablediffusion-depth"}') |
curl -H "Content-Type: application/json" -d @- http://localhost:8080/v1/images/generations