LocalAI/core/http
Latest commit: 090f5065fc by Ettore Di Giacinto, 2025-05-26 17:19:46 +02:00
chore(deps): bump llama.cpp to 'fef693dc6b959a8e8ba11558fbeaad0b264dd457' (#5467)
Also try to use a smaller model for integration tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Name | Last commit message | Last commit date
elements | fix(webui): improve model display, do not block view (#5133) | 2025-04-07 18:03:25 +02:00
endpoints | feat: Realtime API support reboot (#5392) | 2025-05-25 22:25:05 +02:00
middleware | feat(llama.cpp): add support for audio input (#5466) | 2025-05-26 16:06:03 +02:00
routes | feat: Realtime API support reboot (#5392) | 2025-05-25 22:25:05 +02:00
static | fix(talk): Talk interface sends content-type headers to chatgpt (#5200) | 2025-04-17 15:02:11 +02:00
utils | feat(ui): path prefix support via HTTP header (#4497) | 2025-01-07 17:18:21 +01:00
views | feat(ui): add error page to display errors (#5418) | 2025-05-20 12:17:27 +02:00
app.go | feat: Realtime API support reboot (#5392) | 2025-05-25 22:25:05 +02:00
app_test.go | chore(deps): bump llama.cpp to 'fef693dc6b959a8e8ba11558fbeaad0b264dd457' (#5467) | 2025-05-26 17:19:46 +02:00
explorer.go | feat: rebrand - LocalAGI and LocalRecall joins the LocalAI stack family (#5159) | 2025-04-15 17:51:24 +02:00
http_suite_test.go | fix: rename fiber entrypoint from http/api to http/app (#2096) | 2024-04-21 22:39:28 +02:00
render.go | feat(ui): path prefix support via HTTP header (#4497) | 2025-01-07 17:18:21 +01:00