LocalAI/backend
Richard Palethorpe d2cf8ef070
fix(sycl): kernel not found error by forcing -fsycl (#5115)
* chore(sycl): Update oneAPI to 2025.1

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(sycl): Pass -fsycl flag as workaround

-fsycl should be set by llama.cpp's CMake build, but something goes wrong
and the flag does not appear to get added, so we pass it explicitly
(sketched below)

Signed-off-by: Richard Palethorpe <io@richiejp.com>
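
A minimal sketch of the workaround, assuming the backend configures
llama.cpp with a plain cmake invocation (GGML_SYCL, icx, and icpx are the
documented llama.cpp SYCL options, but the build directory name and the
exact flag set here are assumptions, not the verbatim change):

    # Hypothetical configure step: force -fsycl ourselves, since the
    # llama.cpp CMake build does not reliably add it for the SYCL backend.
    source /opt/intel/oneapi/setvars.sh
    cmake -S llama.cpp -B build \
          -DGGML_SYCL=ON \
          -DCMAKE_C_COMPILER=icx \
          -DCMAKE_CXX_COMPILER=icpx \
          -DCMAKE_CXX_FLAGS=-fsycl

Presumably, without -fsycl the SYCL device kernels are never compiled in,
which is what surfaces at runtime as the "kernel not found" error.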

* fix(build): Speed up the llama.cpp build by using all CPUs (sketched below)

Signed-off-by: Richard Palethorpe <io@richiejp.com>
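
One way to use every CPU, assuming GNU make and CMake 3.12 or newer
(nproc and the -j job-count flags are standard; the build directory name
is again illustrative):

    # Hypothetical build step: one job per available CPU instead of the
    # single-job default.
    cmake --build build -j $(nproc)
    # or, for a plain Makefile build:
    make -C llama.cpp -j $(nproc)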

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-03 16:22:59 +02:00
cpp fix(sycl): kernel not found error by forcing -fsycl (#5115) 2025-04-03 16:22:59 +02:00
go chore(stable-diffusion-ggml): update, adapt upstream changes (#4889) 2025-02-23 08:36:41 +01:00
python chore(deps): Bump grpcio to 1.71.0 (#4993) 2025-03-11 09:44:21 +01:00
backend.proto chore(deps): update llama.cpp and sync with upstream changes (#4950) 2025-03-06 00:40:58 +01:00