dearwolf / LocalAI (mirror of https://github.com/mudler/LocalAI.git)
LocalAI / backend / cpp (at commit 94261b1717)
Latest commit: eaf85a30f9 by Sebastian, 2024-01-21 09:56:14 +01:00
fix(llama.cpp): Enable parallel requests (#1616)
integrate changes from llama.cpp
Signed-off-by: Sebastian <tauven@gmail.com>
Directory contents:

grpc    move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)    2024-01-13 10:08:26 +01:00
llama   fix(llama.cpp): Enable parallel requests (#1616)    2024-01-21 09:56:14 +01:00