LocalAI/backend/cpp
Ettore Di Giacinto 423514a5a5
fix(clip): do not imply GPU offload by default (#5010)
* fix(clip): do not imply GPUs by default

Until a better solution is found upstream, be conservative and do not
offload CLIP to the GPU by default.

https://github.com/ggml-org/llama.cpp/pull/12322
https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* allow overriding GPU offload via backend options (see the sketch below)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-13 15:14:11 +01:00
grpc fix: speedup git submodule update with --single-branch (#2847) 2024-07-13 22:32:25 +02:00
llama fix(clip): do not imply GPU offload by default (#5010) 2025-03-13 15:14:11 +01:00
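
A minimal sketch of the override mentioned in the commit message ("allow overriding GPU offload via backend options"): the option name `gpu` and the plain `key` / `key:value` string format are assumptions for illustration, not the exact LocalAI backend-option API.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hedged sketch only: how a "gpu" backend option could toggle CLIP GPU offload.
// Assumes backend options arrive as plain "key" or "key:value" strings.
static bool clip_use_gpu(const std::vector<std::string>& backend_options) {
    bool use_gpu = false;  // conservative default: keep CLIP on the CPU
    for (const auto& opt : backend_options) {
        if (opt == "gpu" || opt == "gpu:true") {
            use_gpu = true;  // explicit opt-in re-enables GPU offload
        }
    }
    return use_gpu;
}

int main() {
    // e.g. options forwarded from a model configuration to the backend
    std::vector<std::string> opts = {"gpu"};
    std::cout << "offload CLIP to GPU: " << std::boolalpha
              << clip_use_gpu(opts) << "\n";
    return 0;
}
```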