Ettore Di Giacinto | adb24214c6 | 2025-05-06 11:21:25 +02:00
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 (#5323)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

Ettore Di Giacinto | 1fc6d469ac | 2025-05-03 18:44:40 +02:00
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194' (#5307)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

Ettore Di Giacinto | d51444d606 | 2024-09-12 20:55:27 +02:00
chore(deps): update llama.cpp (#3497)
* Apply llava patch
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

Ettore Di Giacinto | 1c57f8d077 | 2024-02-01 19:21:52 +01:00
feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)
* feat(sycl): Add sycl support (#1647)
* onekit: install without prompts
* set cmake args only in grpc-server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* fixup sycl source env
* Cleanup docs
* ci: runs on self-hosted
* fix typo
* bump llama.cpp
* llama.cpp: update server
* adapt to upstream changes
* docs: add sycl
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>