LocalAI/core
Ettore Di Giacinto 695935c184 chore(llama-ggml): drop deprecated backend
The GGML (pre-GGUF) format is now dead. The next version of LocalAI
already brings many breaking compatibility changes, so we take the
occasion to also drop GGML support.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-06 17:25:46 +01:00
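For context, a minimal Go sketch (hypothetical, not LocalAI code) of how GGUF models can be told apart from legacy pre-GGUF GGML-family files by their leading magic bytes. The four ASCII bytes "GGUF" come from the GGUF spec; the file name, function names, and CLI wrapper below are illustrative assumptions.

// check_model_format.go: sketch that reports whether a model file is GGUF
// or a legacy GGML-family file that the dropped backend used to load.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// Every GGUF file starts with the four ASCII bytes "GGUF"; legacy
// ggml/ggmf/ggjt files begin with different signatures.
var ggufMagic = []byte("GGUF")

// isGGUF reports whether the file at path carries the GGUF signature.
func isGGUF(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	magic := make([]byte, 4)
	if _, err := io.ReadFull(f, magic); err != nil {
		return false, err
	}
	return bytes.Equal(magic, ggufMagic), nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: check_model_format <model-file>")
		os.Exit(2)
	}
	ok, err := isGGUF(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("GGUF model: still loadable after this change")
	} else {
		fmt.Println("legacy GGML-family model: convert to GGUF before upgrading")
	}
}

A four-byte read is enough here because the signature sits at offset zero in both the old and new formats, so no model metadata needs to be parsed.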
application           chore: drop embedded models (#4715)                                        2025-01-30 00:03:01 +01:00
backend               feat: tokenization with llama.cpp (#4724)                                  2025-02-02 17:39:43 +00:00
cli                   chore: drop embedded models (#4715)                                        2025-01-30 00:03:01 +01:00
clients               feat(store): add Golang client (#1977)                                     2024-04-16 15:54:14 +02:00
config                fix(tests): pin to branch for config used in tests (#4721)                 2025-01-31 09:57:58 +01:00
dependencies_manager  fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
explorer              feat(explorer): make possible to run sync in a separate process (#3224)    2024-08-12 19:25:44 +02:00
gallery               fix(gallery): do not return overrides and additional config (#4768)        2025-02-05 18:37:09 +01:00
http                  chore(llama-ggml): drop deprecated backend                                 2025-02-06 17:25:46 +01:00
p2p                   fix(p2p): parse maddr correctly (#4219)                                    2024-11-21 14:06:49 +01:00
schema                chore(stablediffusion-ncn): drop in favor of ggml implementation (#4652)   2025-01-22 19:34:16 +01:00
services              chore: drop embedded models (#4715)                                        2025-01-30 00:03:01 +01:00