Mirror of https://github.com/mudler/LocalAI.git (synced 2025-05-20 10:35:01 +00:00)
Latest commit:

* chore(sycl): Update oneapi to 2025:1
* fix(sycl): Pass the -fsycl flag as a workaround; -fsycl should be set by llama.cpp's CMake file, but something goes wrong and it does not appear to get added.
* fix(build): Speed up the llama build by using all CPUs.

Signed-off-by: Richard Palethorpe <io@richiejp.com>
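For context, a minimal sketch of what the workaround and the parallel build amount to when configuring llama.cpp by hand. The option names used here (GGML_SYCL, the icx/icpx compilers) and passing -fsycl via CMAKE_CXX_FLAGS are assumptions for illustration, not the backend's actual Makefile or prepare.sh wiring:

```sh
# Hypothetical sketch, not the repository's actual build wiring:
# configure llama.cpp for SYCL, pass -fsycl explicitly (the workaround the
# commit describes), and build using all available CPUs.
cmake -S llama.cpp -B build \
  -DGGML_SYCL=ON \
  -DCMAKE_C_COMPILER=icx \
  -DCMAKE_CXX_COMPILER=icpx \
  -DCMAKE_CXX_FLAGS="-fsycl"
cmake --build build --config Release -j"$(nproc)"
```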
Files in this directory:

* patches
* CMakeLists.txt
* grpc-server.cpp
* json.hpp
* Makefile
* prepare.sh
* utils.hpp