LocalAI/backend
Latest commit ab5adf40af by Ettore Di Giacinto, 2025-01-13 17:33:06 +01:00:
chore(deps): bump llama.cpp to '924518e2e5726e81f3aeb2518fb85963a500e93a' (#4592)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

Name | Last commit message | Last commit date
cpp | chore(deps): bump llama.cpp to '924518e2e5726e81f3aeb2518fb85963a500e93a' (#4592) | 2025-01-13 17:33:06 +01:00
go | fix(stablediffusion-ggml): enable oneapi before build (#4593) | 2025-01-13 10:11:48 +01:00
python | chore(deps): bump grpcio to 1.69.0 (#4543) | 2025-01-05 15:01:49 +01:00
backend.proto | feat(llama.cpp): expose cache_type_k and cache_type_v for quant of kv cache (#4329) | 2024-12-06 10:23:59 +01:00
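
The backend.proto entry above tracks PR #4329, which exposed KV-cache quantization types for the llama.cpp backend. As a minimal, hypothetical sketch of what such options look like in a protobuf definition (the field names and numbers here are illustrative placeholders, not copied from LocalAI's actual backend.proto):

```proto
syntax = "proto3";

package backend;

// Sketch only: shows the kind of KV-cache quantization options that
// PR #4329 exposes for the llama.cpp backend. Field names and numbers
// are placeholders, not the real backend.proto definition.
message ModelOptions {
  // Quantization type for the K and V halves of the llama.cpp KV cache,
  // e.g. "f16", "q8_0" or "q4_0".
  string CacheTypeK = 1;
  string CacheTypeV = 2;
}
```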