LocalAI/aio/cpu
AIO image defaults:

cpu:
 - text-to-text: llama3.1
 - embeddings: granite-embeddings
 - vision: moondream2

gpu/intel:
 - text-to-text: localai-functioncall-qwen2.5-7b-v0.5
 - embeddings: granite-embeddings
 - vision: minicpm
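
The capabilities above are wired up through the yaml files listed below. As a rough illustration only, a text-to-text definition could take a shape along these lines (the field names, alias, and backend are assumptions based on common LocalAI model definitions; the actual text-to-text.yaml in this directory is the authoritative source):

    # Illustrative sketch only: the values below are placeholders, not the shipped defaults.
    name: gpt-4            # alias exposed through the OpenAI-compatible API (assumed)
    backend: llama-cpp     # a C++ backend, keeping the CPU image small (assumed)
    context_size: 4096
    parameters:
      model: llama3.1      # the CPU text-to-text default listed above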

Files in this directory:

 - embeddings.yaml
 - image-gen.yaml
 - README.md
 - rerank.yaml
 - speech-to-text.yaml
 - text-to-speech.yaml
 - text-to-text.yaml
 - vad.yaml
 - vision.yaml

AIO CPU size

Use this image if you are running on CPU only.
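
As a quick start, something along these lines should work (the image tag and the preconfigured gpt-4 alias are assumptions; check the LocalAI documentation for the current values):

    # Run the all-in-one CPU image and expose the API on port 8080
    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

    # Once the default models have finished downloading, query the
    # OpenAI-compatible endpoint (the model alias comes from the definitions above)
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'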

Please stick to C++ backends only, so that the base image stays as small as possible (no CUDA, cuDNN, Python, etc.).