LocalAI/backend
Ettore Di Giacinto a9b0e264f2 chore(exllama): drop exllama backend
For polishing and cleaning up, it now makes sense to drop exllama, which
is completely unmaintained and only supported the llamav1
architecture (nowadays superseded by llamav2).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-09-13 19:09:43 +02:00
cpp chore(deps): update llama.cpp (#3497) 2024-09-12 20:55:27 +02:00
go fix: untangle pkg/grpc and core/schema for Transcription (#3419) 2024-09-02 15:48:53 +02:00
python chore(exllama): drop exllama backend 2024-09-13 19:09:43 +02:00
backend.proto feat: elevenlabs sound-generation api (#3355) 2024-08-24 00:20:28 +00:00