LocalAI/backend
Ettore Di Giacinto 423514a5a5
fix(clip): do not imply GPU offload by default (#5010)
* fix(clip): do not imply GPUs by default

Until a better solution is found upstream, be conservative and do not
offload the clip model to the GPU by default.

https://github.com/ggml-org/llama.cpp/pull/12322
https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* allow overriding GPU offload via backend options

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
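The override could work roughly along these lines; a minimal C++ sketch, assuming a backend option literally named "gpu" and a hypothetical helper clip_wants_gpu (neither is taken from the actual LocalAI sources):

```cpp
// Hedged sketch, not the actual LocalAI code: conservative CPU default for
// the clip model, with an explicit GPU opt-in via backend options.
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: returns true only when the caller explicitly asked
// for GPU offload. The option name "gpu" is an assumption for illustration.
static bool clip_wants_gpu(const std::vector<std::string>& backend_options) {
    bool use_gpu = false; // conservative default: keep the clip model on the CPU
    for (const auto& opt : backend_options) {
        if (opt == "gpu") {
            use_gpu = true; // explicit opt-in overrides the CPU default
        }
    }
    return use_gpu;
}

int main() {
    std::vector<std::string> opts = {"gpu"};
    std::cout << "offload clip to GPU: " << std::boolalpha
              << clip_wants_gpu(opts) << "\n";
    return 0;
}
```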

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-13 15:14:11 +01:00
cpp fix(clip): do not imply GPU offload by default (#5010) 2025-03-13 15:14:11 +01:00
go chore(stable-diffusion-ggml): update, adapt upstream changes (#4889) 2025-02-23 08:36:41 +01:00
python chore(deps): Bump grpcio to 1.71.0 (#4993) 2025-03-11 09:44:21 +01:00
backend.proto chore(deps): update llama.cpp and sync with upstream changes (#4950) 2025-03-06 00:40:58 +01:00