🤖 The free, Open Source alternative to OpenAI, Claude, and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: text, audio, video, and image generation, voice cloning, and distributed P2P inference. https://localai.io




💡 Get help - FAQ 💭 Discussions 💬 Discord 📖 Documentation website

💻 Quickstart 🖼️ Models 🚀 Roadmap 🥽 Demo 🌍 Explorer 🛫 Examples Try on Telegram

LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API compatible with the OpenAI (and Elevenlabs, Anthropic, ...) API specifications for local AI inferencing. It allows you to run LLMs, generate images and audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.

📚🆕 Local Stack Family

🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:

LocalAGI

A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.

LocalRecall

A RESTful API and knowledge-base management system that provides persistent memory and storage capabilities for AI agents.

Screenshots

Talk interface · Generate audio
Models overview · Generate images
Chat interface · Home
Login · Swarm (P2P dashboard)

💻 Quickstart

Run the installer script:

# Basic installation
curl https://localai.io/install.sh | sh

For more installation options, see Installer Options.
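
The installer can also be configured through environment variables. A minimal sketch of a Docker-based install using the all-in-one image (the variable names below are assumptions; consult Installer Options for the authoritative list):

# Hedged example — DOCKER_INSTALL and USE_AIO are assumed installer variables
curl https://localai.io/install.sh | DOCKER_INSTALL=true USE_AIO=true sh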

Or run with docker:

CPU only image:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

NVIDIA GPU Images:

# CUDA 12.0
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# CUDA 11.7
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11

# NVIDIA Jetson (L4T) ARM64
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64

AMD GPU Images (ROCm):

docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas

Intel GPU Images (oneAPI):

# Intel GPU with FP16 support
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f16

# Intel GPU with FP32 support
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f32

Vulkan GPU Images:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan

AIO Images (pre-downloaded models):

# CPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

# NVIDIA CUDA 12 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12

# NVIDIA CUDA 11 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11

# Intel GPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel-f16

# AMD GPU version
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas

For more information about the AIO images and pre-downloaded models, see Container Documentation.
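
Note that the examples above do not persist downloaded models across container restarts. A hedged variant with a host volume mount (the container-side path is an assumption drawn from the project's Docker examples):

# Persist downloaded models in ./models on the host (mount path is an assumption)
docker run -ti --name local-ai -p 8080:8080 -v $PWD/models:/build/models localai/localai:latest-aio-cpu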

To load models:

# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
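
Once a model is running, you can query it through the OpenAI-compatible REST API. A minimal sketch, assuming the server is listening on the default port 8080 and the model loaded in the first example above:

# Chat completion request against the local OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct:q4_k_m",
    "messages": [{"role": "user", "content": "How are you doing?"}]
  }'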

For more information, see 💻 Getting started

📰 Latest project news

Roadmap items: List of issues

🚀 Features

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries

Other:

🔗 Resources

📖 🎥 Media, Blogs, Social

Citation

If you utilize this repository or its data in a downstream project, please consider citing it with:

@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors, who support this project by covering CI expenses, and to everyone on our Sponsor list:


🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto mudler@localai.io

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗