LocalAI/backend
Ettore Di Giacinto 8814b31805
chore: drop gpt4all.cpp (#3106)

gpt4all is already supported in llama.cpp - the backend was kept only to
maintain compatibility with old gpt4all models (prior to the gguf format).

Now is a good time to clean up and remove it to slim down the compilation
process.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 23:35:55 +02:00
Name           Last commit                                                                                Date
cpp            chore: ⬆️ Update ggerganov/llama.cpp to 1e6f6554aa11fa10160a5fda689e736c3c34169f (#3189)   2024-08-07 01:10:21 +02:00
go             chore: drop gpt4all.cpp (#3106)                                                            2024-08-07 23:35:55 +02:00
python         fix(python): move vllm to after deps, drop diffusers main deps                             2024-08-07 23:34:37 +02:00
backend.proto  feat(whisper): add translate option (#2649)                                                2024-06-24 19:21:22 +02:00