LocalAI/backend/cpp/llama
Commit d2cf8ef070 by Richard Palethorpe (2025-04-03 16:22:59 +02:00)
fix(sycl): kernel not found error by forcing -fsycl (#5115)
* chore(sycl): Update oneAPI to 2025.1

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(sycl): Pass -fsycl flag as workaround

The -fsycl flag should be set by llama.cpp's CMake configuration, but for
some reason it does not appear to get added to the compile line, so it is
passed explicitly as a workaround (see the sketch after this log entry).

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(build): Speed up llama build by using all CPUs (see the parallel-build sketch below)

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
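As a workaround, -fsycl can be forced onto the compile line when configuring
llama.cpp. A minimal sketch, assuming a cmake-driven SYCL build with the
oneAPI compilers; the exact variables and targets in LocalAI's Makefile may
differ:

    # Load the oneAPI environment, then configure llama.cpp's SYCL backend,
    # injecting -fsycl through CMAKE_CXX_FLAGS since the project's own CMake
    # logic does not appear to add it.
    source /opt/intel/oneapi/setvars.sh
    cmake -B build \
        -DGGML_SYCL=ON \
        -DCMAKE_C_COMPILER=icx \
        -DCMAKE_CXX_COMPILER=icpx \
        -DCMAKE_CXX_FLAGS="-fsycl"

Without -fsycl the SYCL device kernels are not compiled into the binary,
which can surface at runtime as the kernel-not-found error this commit fixes.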
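The build-speed fix amounts to running one compile job per available CPU core
instead of the default single job. A sketch, assuming a cmake-driven build
and GNU coreutils' nproc:

    # Build with as many parallel jobs as there are CPU cores.
    cmake --build build --config Release -j"$(nproc)"

A make-based step gets the same effect with make -j"$(nproc)".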
File             Date                        Last commit
patches          2024-12-23 19:11:31 +01:00  chore(llava): update clip.patch (#4453)
CMakeLists.txt   2024-02-21 17:23:38 +01:00  deps(llama.cpp): update, support Gemma models (#1734)
grpc-server.cpp  2025-04-03 10:23:14 +02:00  chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c' (#5110)
json.hpp         2023-11-11 13:14:59 +01:00  🔥 add LLaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)
Makefile         2025-04-03 16:22:59 +02:00  fix(sycl): kernel not found error by forcing -fsycl (#5115)
prepare.sh       2024-09-12 20:55:27 +02:00  chore(deps): update llama.cpp (#3497)
utils.hpp        2024-09-12 20:55:27 +02:00  chore(deps): update llama.cpp (#3497)