dearwolf / LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2025-05-20 10:35:01 +00:00
LocalAI / backend / cpp / llama (at commit fa9e330fc6)
Latest commit: fa9e330fc6 by Ettore Di Giacinto, 2024-03-18 18:59:24 +01:00
fix(llama.cpp): fix eos without cache (#1852)
File             Last commit                                                                                          Date
CMakeLists.txt   deps(llama.cpp): update, support Gemma models (#1734)                                                2024-02-21 17:23:38 +01:00
grpc-server.cpp  fix(llama.cpp): fix eos without cache (#1852)                                                        2024-03-18 18:59:24 +01:00
json.hpp         🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)   2023-11-11 13:14:59 +01:00
Makefile         fix(make): allow to parallelize jobs (#1845)                                                         2024-03-17 15:39:20 +01:00
utils.hpp        feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)                                     2024-02-01 19:21:52 +01:00