LocalAI/backend
cpp            fix(llama.cpp): correctly handle embeddings in batches (#4957)                    2025-03-07 19:29:52 +01:00
go             chore(stable-diffusion-ggml): update, adapt upstream changes (#4889)              2025-02-23 08:36:41 +01:00
python         Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6…"  2025-03-11 08:28:54 +01:00
backend.proto  chore(deps): update llama.cpp and sync with upstream changes (#4950)              2025-03-06 00:40:58 +01:00