LocalAI/pkg
Latest commit 695935c184 by Ettore Di Giacinto: chore(llama-ggml): drop deprecated backend
The GGML format is now dead: the next version of LocalAI already brings many
breaking compatibility changes, so we take the occasion to also drop ggml
(pre-GGUF) support.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-06 17:25:46 +01:00
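
For context on what "pre-GGUF" means at the file level: GGUF model files start with the ASCII magic GGUF, while the dropped legacy formats used the older ggml/ggmf/ggjt magics. Below is a minimal sketch of such a format check in Go, assuming the magic constants used by llama.cpp; isGGUF is a hypothetical helper for illustration, not LocalAI's actual loader code.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// Magic numbers as read from the first four file bytes (little-endian),
// matching the constants used by llama.cpp / ggml.
const (
	magicGGUF = 0x46554747 // "GGUF": the only format still supported
	magicGGML = 0x67676d6c // legacy unversioned ggml
	magicGGMF = 0x67676d66 // legacy ggmf
	magicGGJT = 0x67676a74 // legacy ggjt
)

// isGGUF is a hypothetical helper: it reports whether a model file is GGUF,
// and returns a distinct error for the dropped pre-GGUF formats.
func isGGUF(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	var magic uint32
	if err := binary.Read(f, binary.LittleEndian, &magic); err != nil {
		return false, err
	}

	switch magic {
	case magicGGUF:
		return true, nil
	case magicGGML, magicGGMF, magicGGJT:
		return false, fmt.Errorf("%s: legacy pre-GGUF model, no longer supported", path)
	default:
		return false, fmt.Errorf("%s: not a ggml/gguf model file", path)
	}
}

func main() {
	// Usage: pass a model file path, e.g. ./check model.gguf
	ok, err := isGGUF(os.Args[1])
	fmt.Println(ok, err)
}
```
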
Directory | Last commit | Date
assets | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00
concurrency | chore: update jobresult_test.go (#4124) | 2024-11-12 08:52:18 +01:00
downloader | chore(downloader): support hf.co and hf:// URIs (#4677) | 2025-01-24 08:27:22 +01:00 (see the sketch after this listing)
functions | feat(llama.cpp): Add support to grammar triggers (#4733) | 2025-02-02 13:25:03 +01:00
grpc | feat: stream tokens usage (#4415) | 2024-12-18 09:48:50 +01:00
langchain | feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232) | 2024-05-04 17:56:12 +02:00
library | rf: centralize base64 image handling (#2595) | 2024-06-24 08:34:36 +02:00
model | chore(llama-ggml): drop deprecated backend | 2025-02-06 17:25:46 +01:00
oci | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00
startup | chore: drop embedded models (#4715) | 2025-01-30 00:03:01 +01:00
store | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00
templates | feat(template): read jinja templates from gguf files (#4332) | 2024-12-08 13:50:33 +01:00
utils | feat(tts): Implement naive response_format for tts endpoint (#4035) | 2024-11-02 19:13:35 +00:00
xsync | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00
xsysinfo | feat(default): use number of physical cores as default (#2483) | 2024-06-04 15:23:29 +02:00
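
On the downloader entry above: URIs of the form hf://<owner>/<repo>/<file> conventionally expand to a huggingface.co "resolve" download URL. Below is a minimal sketch of that expansion in Go, under that assumption; resolveHF is a hypothetical helper, not the actual pkg/downloader implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveHF is a hypothetical helper: it expands an hf:// URI of the form
// hf://<owner>/<repo>/<file> into the corresponding huggingface.co
// "resolve" download URL on the main branch.
func resolveHF(uri string) (string, error) {
	rest, ok := strings.CutPrefix(uri, "hf://")
	if !ok {
		return "", fmt.Errorf("not an hf:// URI: %s", uri)
	}
	parts := strings.SplitN(rest, "/", 3)
	if len(parts) < 3 {
		return "", fmt.Errorf("expected hf://owner/repo/file, got %s", uri)
	}
	owner, repo, file := parts[0], parts[1], parts[2]
	return fmt.Sprintf("https://huggingface.co/%s/%s/resolve/main/%s", owner, repo, file), nil
}

func main() {
	// Example expansion for an arbitrary GGUF file hosted on Hugging Face.
	url, err := resolveHF("hf://TheBloke/Mistral-7B-v0.1-GGUF/mistral-7b-v0.1.Q4_K_M.gguf")
	fmt.Println(url, err)
}
```
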