Mirror of https://github.com/mudler/LocalAI.git (synced 2025-05-20 10:35:01 +00:00)
* fix(clip): do not imply GPUs by default

  Until a better solution is found upstream, be conservative and default to CPU.

  https://github.com/ggml-org/llama.cpp/pull/12322
  https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* allow to override GPU via backend options

  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
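As a rough illustration of the override mentioned above, a model's backend options could request GPU use explicitly. This is a minimal sketch only: the model name, file names, and the exact `gpu` option value are assumptions for illustration, not taken from this commit.

```yaml
# Hypothetical LocalAI model config (model name and files are placeholders).
# With CLIP no longer implying GPU by default, GPU use would be opted into
# via backend options such as the assumed "gpu" flag below.
name: my-vision-model
backend: llama-cpp
parameters:
  model: model.gguf
mmproj: mmproj.gguf
options:
  - gpu
```

Without the option, the conservative default described in the commit would keep CLIP on the CPU.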
Files changed:

- cpp
- go
- python
- backend.proto