LocalAI/core

Latest commit: 090f5065fc by Ettore Di Giacinto
chore(deps): bump llama.cpp to 'fef693dc6b959a8e8ba11558fbeaad0b264dd457' (#5467)

Also try to use a smaller model for integration tests.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-26 17:19:46 +02:00
| Directory | Last commit | Date |
| --- | --- | --- |
| application | feat(video-gen): add endpoint for video generation (#5247) | 2025-04-26 18:05:01 +02:00 |
| backend | feat: Realtime API support reboot (#5392) | 2025-05-25 22:25:05 +02:00 |
| cli | fix: use rice when embedding large binaries (#5309) | 2025-05-04 16:42:42 +02:00 |
| clients | feat(store): add Golang client (#1977) | 2024-04-16 15:54:14 +02:00 |
| config | feat: Realtime API support reboot (#5392) | 2025-05-25 22:25:05 +02:00 |
| dependencies_manager | fix: be consistent in downloading files, check for scanner errors (#3108) | 2024-08-02 20:06:25 +02:00 |
| explorer | feat(explorer): make possible to run sync in a separate process (#3224) | 2024-08-12 19:25:44 +02:00 |
| gallery | feat(ui): improve chat interface (#4910) | 2025-02-26 18:27:18 +01:00 |
| http | chore(deps): bump llama.cpp to 'fef693dc6b959a8e8ba11558fbeaad0b264dd457' (#5467) | 2025-05-26 17:19:46 +02:00 |
| p2p | feat(ui): complete design overhaul (#4942) | 2025-03-05 08:27:03 +01:00 |
| schema | feat(video-gen): add endpoint for video generation (#5247) | 2025-04-26 18:05:01 +02:00 |
| services | feat: Centralized Request Processing middleware (#3847) | 2025-02-10 12:06:16 +01:00 |