LocalAI/backend/cpp/llama
Latest commit adb24214c6 by Ettore Di Giacinto
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 (#5323)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-06 11:21:25 +02:00
File             Last commit                                                                                        Date
patches          chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 (#5323)                    2025-05-06 11:21:25 +02:00
CMakeLists.txt   chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6 (#5127)            2025-04-06 14:01:51 +02:00
grpc-server.cpp  chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194 (#5307)                   2025-05-03 18:44:40 +02:00
json.hpp         🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)  2023-11-11 13:14:59 +01:00
Makefile         chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194 (#5307)                   2025-05-03 18:44:40 +02:00
prepare.sh       chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 (#5323)                    2025-05-06 11:21:25 +02:00
utils.hpp        chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 (#5323)                    2025-05-06 11:21:25 +02:00