K8sGPT is a tool for scanning your Kubernetes clusters, diagnosing and triaging issues in simple English.

It has SRE experience codified into its analyzers and helps pull out the most relevant information, enriching it with AI.

GitHub link: https://github.com/k8sgpt-ai/k8sgpt

CLI Installation

Linux/Mac via brew

brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt
RPM-based installation (RedHat/CentOS/Fedora)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_386.rpm
sudo rpm -ivh k8sgpt_386.rpm

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_amd64.rpm
sudo rpm -ivh k8sgpt_amd64.rpm
DEB-based installation (Ubuntu/Debian)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_386.deb
sudo dpkg -i k8sgpt_386.deb

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb
APK-based installation (Alpine)

32 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_386.apk
apk add k8sgpt_386.apk

64 bit:

curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.18/k8sgpt_amd64.apk
apk add k8sgpt_amd64.apk
Failing installation on WSL or Linux (missing gcc)

When installing Homebrew on WSL or Linux, you may encounter the following error:

==> Installing k8sgpt from k8sgpt-ai/k8sgpt
Error: The following formula cannot be installed from a bottle and must be built from the source.
k8sgpt
Install Clang or run brew install gcc.

Even if you install gcc as suggested, the problem will persist. Instead, you need to install the build-essential package.

   sudo apt-get update
   sudo apt-get install build-essential


Windows

  • Download the latest Windows binaries of k8sgpt from the Release tab based on your system architecture.
  • Extract the downloaded package to your desired location.
  • Configure the system path variable with the binary location.

Operator Installation

To install within a Kubernetes cluster, please use our k8sgpt-operator; installation instructions are available in its repository.

This mode of operation is ideal for continuous monitoring of your cluster and can integrate with your existing monitoring stack, such as Prometheus and Alertmanager.
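As a rough sketch, the operator is typically installed with Helm; the repository URL, chart name, and namespace below are assumptions based on the k8sgpt-operator README, so verify them against the current instructions before use:

```shell
# Install the k8sgpt-operator via Helm (requires a running cluster and helm installed).
# Chart repo and release names are assumptions; check the operator's own docs.
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
```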

Quick Start

  • Currently, the default AI provider is OpenAI, so you will need to generate an API key from OpenAI.
    • You can do this by running k8sgpt generate to open a browser link to generate it
  • Run k8sgpt auth add to set it in k8sgpt.
    • You can provide the password directly using the --password flag.
  • Run k8sgpt filters to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
  • Run k8sgpt analyze to run a scan.
  • And use k8sgpt analyze --explain to get a more detailed explanation of the issues.
  • You can also run k8sgpt analyze --with-doc (with or without the explain flag) to get the official documentation from Kubernetes.


K8sGPT uses analyzers to triage and diagnose issues in your cluster. It ships with a set of built-in analyzers, and you can also write your own.

Built-in analyzers

Enabled by default

  • podAnalyzer
  • pvcAnalyzer
  • rsAnalyzer
  • serviceAnalyzer
  • eventAnalyzer
  • ingressAnalyzer
  • statefulSetAnalyzer
  • deploymentAnalyzer
  • cronJobAnalyzer
  • nodeAnalyzer
  • mutatingWebhookAnalyzer
  • validatingWebhookAnalyzer


Optional

  • hpaAnalyzer
  • pdbAnalyzer
  • networkPolicyAnalyzer


Run a scan with the default analyzers

k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc

Filter on resource

k8sgpt analyze --explain --filter=Service

Filter by namespace

k8sgpt analyze --explain --filter=Pod --namespace=default

Output to JSON

k8sgpt analyze --explain --filter=Service --output=json

Anonymize during explain

k8sgpt analyze --explain --filter=Service --output=json --anonymize
Using filters

List filters

k8sgpt filters list

Add default filters

k8sgpt filters add [filter(s)]

Examples:

  • Simple filter: k8sgpt filters add Service
  • Multiple filters: k8sgpt filters add Ingress,Pod

Remove default filters

k8sgpt filters remove [filter(s)]

Examples:

  • Simple filter: k8sgpt filters remove Service
  • Multiple filters: k8sgpt filters remove Ingress,Pod
Additional commands

List configured backends

k8sgpt auth list

Update configured backends

k8sgpt auth update $MY_BACKEND1,$MY_BACKEND2..

Remove configured backends

k8sgpt auth remove $MY_BACKEND1,$MY_BACKEND2..

List integrations

k8sgpt integrations list

Activate integrations

k8sgpt integrations activate [integration(s)]

Use integration

k8sgpt analyze --filter=[integration(s)]

Deactivate integrations

k8sgpt integrations deactivate [integration(s)]

Serve mode

k8sgpt serve

Analysis with serve mode

curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
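The serve endpoint returns JSON, so it can be consumed programmatically. Below is a minimal client sketch; the response shape used here is an assumption based on the CLI's JSON output and a canned sample, not a guaranteed schema:

```python
import json
from urllib.request import urlopen


def fetch_analysis(base_url: str = "http://localhost:8080") -> dict:
    """Query a running `k8sgpt serve` instance (same endpoint as the curl example above)."""
    with urlopen(f"{base_url}/analyze?namespace=default&explain=false") as resp:
        return json.load(resp)


def summarize(report: dict) -> list:
    # Assumed response shape: {"status": ..., "results": [{"kind": ..., "name": ..., "error": [...]}]}
    return [f'{r.get("kind", "?")}/{r.get("name", "?")}' for r in report.get("results") or []]


# Canned response for illustration (hypothetical, not captured from a real server):
sample = {
    "status": "ProblemDetected",
    "results": [
        {"kind": "Service", "name": "default/my-svc",
         "error": [{"Text": "Service has no endpoints"}]},
    ],
}
print(summarize(sample))
```

With a real server running, `fetch_analysis()` would replace the canned `sample`.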

Key Features

LocalAI provider

To run local models, you can use any OpenAI-compatible API, for instance LocalAI, which uses llama.cpp to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4All, GPT4All-J, Llama 2, and Koala.

To run local inference, you need to download the models first; for instance, you can find GGUF-compatible models on huggingface.co (for example, Vicuna, Alpaca, and Koala).

Start the API server

To start the API server, follow the instructions in the LocalAI documentation.

Run k8sgpt

To use k8sgpt with LocalAI, register the localai backend with k8sgpt auth add:

k8sgpt auth add --backend localai --model <model_name> --baseurl http://localhost:8080/v1 --temperature 0.7

Now you can analyze with the localai backend:

k8sgpt analyze --explain --backend localai
Setting a new default AI provider

There may be scenarios where you have K8sGPT configured with several AI providers. In such cases, you may wish to use one of them as the default instead of OpenAI, which is the project default.

To view available providers

k8sgpt auth list
> openai
> azureopenai
> localai
> noopai

To set a new default provider

k8sgpt auth default -p azureopenai
Default provider set to azureopenai

Anonymization

With this option, the data is anonymized before being sent to the AI backend. During the analysis execution, k8sgpt retrieves sensitive data (Kubernetes object names, labels, etc.). This data is masked when sent to the AI backend and replaced by a key that can be used to de-anonymize the data when the solution is returned to the user.

  1. Error reported during analysis:
Error: HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.
  2. Payload sent to the AI backend:
Error: HorizontalPodAutoscaler uses StatefulSet/tGLcCRcHa1Ce5Rs as ScaleTargetRef which does not exist.
  3. Payload returned by the AI:
The Kubernetes system is trying to scale a StatefulSet named tGLcCRcHa1Ce5Rs using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.
  4. Payload returned to the user:
The Kubernetes system is trying to scale a StatefulSet named fake-deployment using the HorizontalPodAutoscaler, but it cannot find the StatefulSet. The solution is to verify that the StatefulSet name is spelled correctly and exists in the same namespace as the HorizontalPodAutoscaler.

Note: Anonymization does not currently apply to events.
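The mask/de-anonymize round trip described above can be sketched as a simple substitution table. This is an illustrative sketch only, not k8sgpt's actual implementation:

```python
import secrets
import string


class Anonymizer:
    """Replace sensitive identifiers with random keys and restore them later.

    Illustrative sketch of the masking flow described above; k8sgpt's real
    implementation differs.
    """

    def __init__(self):
        self._mapping: dict[str, str] = {}  # masked key -> original value

    def mask(self, text: str, sensitive: list) -> str:
        # Substitute each sensitive value with a random alphanumeric key.
        alphabet = string.ascii_letters + string.digits
        for value in sensitive:
            key = "".join(secrets.choice(alphabet) for _ in range(15))
            self._mapping[key] = value
            text = text.replace(value, key)
        return text

    def unmask(self, text: str) -> str:
        # Restore the original values in the AI's answer.
        for key, value in self._mapping.items():
            text = text.replace(key, value)
        return text


anon = Anonymizer()
masked = anon.mask(
    "HorizontalPodAutoscaler uses StatefulSet/fake-deployment as ScaleTargetRef which does not exist.",
    ["fake-deployment"],
)
answer_from_ai = masked.replace("does not exist", "cannot be found")  # stand-in for the AI reply
print(anon.unmask(answer_from_ai))
```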

Further Details


In a few analyzers, such as Pod, we feed the event messages to the AI backend; these are not known beforehand, so we are not masking them for the time being.

  • The following is the list of analyzers in which data is masked:

    • StatefulSet
    • Service
    • PodDisruptionBudget
    • Node
    • NetworkPolicy
    • Ingress
    • HPA
    • Deployment
    • CronJob
  • The following is the list of analyzers in which data is not masked:

    • ReplicaSet
    • PersistentVolumeClaim
    • Pod
    • *Events


  • k8sgpt does not mask data for the above analyzers because, with the exception of the Events analyzer, they do not send any identifying information.

  • Masking for the Events analyzer is scheduled for the near future, as tracked in this issue. Further research is needed to understand the patterns and mask the sensitive parts of an event, such as the pod name and namespace.

  • The following is the list of fields which are not masked:

    • Describe
    • ObjectStatus
    • Replicas
    • ContainerStatus
    • *Event Message
    • ReplicaStatus
    • Count (Pod)


  • It is quite possible that the payload of the event message contains something like “super-secret-project-pod-X crashed”, which we do not currently redact (masking is scheduled for the near future, as tracked in this issue).

Proceed with care

  • The K8sGPT team recommends using an entirely different backend (a local model) in critical production environments. By using a local model, you can rest assured that everything stays within your DMZ and nothing is leaked.
  • If there is any uncertainty about sending data to a public LLM (OpenAI, Azure OpenAI) and it poses a risk to business-critical operations, then the use of a public LLM should be avoided, based on your own assessment and the risks involved in your jurisdiction.
Configuration management

k8sgpt stores config data in the $XDG_CONFIG_HOME/k8sgpt/k8sgpt.yaml file. The data is stored in plain text, including your OpenAI key.

Config file locations:

OS        Path
macOS     ~/Library/Application Support/k8sgpt/k8sgpt.yaml
Linux     ~/.config/k8sgpt/k8sgpt.yaml
Windows   %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml
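For illustration, a k8sgpt.yaml might look roughly like the following. The exact field names vary by version, so treat this as an assumption and inspect your own file rather than copying it:

```yaml
# Hypothetical k8sgpt.yaml layout (field names are assumptions, not a spec).
# Note that the API key ("password") is stored in plain text.
ai:
  providers:
    - name: openai
      model: gpt-3.5-turbo
      password: sk-REDACTED
active_filters:
  - Pod
  - Service
```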
Remote caching

There may be scenarios where caching remotely is preferred. In these scenarios, K8sGPT supports AWS S3 integration.

As a prerequisite, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required as environment variables.

Adding a remote cache

Note: this will create the bucket if it does not exist

k8sgpt cache add --region <aws region> --bucket <name>

Listing cache items

k8sgpt cache list

Removing the remote cache

Note: this will not delete the bucket

k8sgpt cache remove --bucket <name>


You can find our official documentation here.