## AIO CPU size
Use this image for CPU-only deployments.

Please keep using only C++-based backends here, so the base image stays as small as possible (no CUDA, cuDNN, Python, etc.).
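As a usage sketch, the AIO CPU image can be started with Docker; the `latest-aio-cpu` tag below is the one commonly published for this image, but check the project's release page for the tag matching your version:

```shell
# Run the all-in-one CPU image, exposing the API on port 8080.
# No GPU flags are needed since this image uses CPU-only backends.
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

# Once started, the OpenAI-compatible API is reachable at:
#   http://localhost:8080/v1
```

The bundled YAML files in this directory (embeddings.yaml, text-to-text.yaml, vision.yaml, etc.) define which model each capability maps to when the AIO image boots.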