LocalAI/docs/content/docs/getting-started

Latest commit: 1c57f8d077 by Ettore Di Giacinto — feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)
* feat(sycl): Add sycl support (#1647)

* onekit: install without prompts

* set cmake args only in grpc-server

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* cleanup

* fixup sycl source env

* Cleanup docs

* ci: runs on self-hosted

* fix typo

* bump llama.cpp

* llama.cpp: update server

* adapt to upstream changes

* adapt to upstream changes

* docs: add sycl

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-01 19:21:52 +01:00
| File | Last commit | Date |
|---|---|---|
| _index.en.md | docs/examples: enhancements (#1572) | 2024-01-18 19:41:08 +01:00 |
| build.md | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660) | 2024-02-01 19:21:52 +01:00 |
| customize-model.md | transformers: correctly load automodels (#1643) | 2024-01-26 00:13:21 +01:00 |
| manual.md | Expanded and interlinked Docker documentation (#1614) | 2024-01-20 10:05:14 +01:00 |
| quickstart.md | Update quickstart.md | 2024-01-26 17:55:20 +01:00 |