LocalAI/backend/cpp/llama

Latest commit 217c24160f by Ettore Di Giacinto: chore: bump grpc limits to 50MB
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-18 22:57:27 +02:00
patches            2024-12-23 19:11:31 +01:00  chore(llava): update clip.patch (#4453)
CMakeLists.txt     2025-04-06 14:01:51 +02:00  chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6 (#5127)
grpc-server.cpp    2025-04-18 22:57:27 +02:00  chore: bump grpc limits to 50MB
json.hpp           2023-11-11 13:14:59 +01:00  🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)
Makefile           2025-04-10 15:20:53 +02:00  feat(stablediffusion): Enable SYCL (#5144)
prepare.sh         2025-04-06 14:01:51 +02:00  chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6 (#5127)
utils.hpp          2024-09-12 20:55:27 +02:00  chore(deps): update llama.cpp (#3497)