Ettore Di Giacinto
6c751d98f3
Sync from server.cpp
2025-05-16 21:47:08 +02:00
Ettore Di Giacinto
3d397d8aab
embedding: do not use oai type
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 21:35:57 +02:00
Ettore Di Giacinto
1f536c5ed7
Keep header
2025-05-16 20:08:26 +02:00
Ettore Di Giacinto
b35483742c
Remove some debug logging
2025-05-16 18:28:18 +02:00
Ettore Di Giacinto
b81896a297
correctly return timings
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:27:46 +02:00
Ettore Di Giacinto
ef96c4f859
disable streaming
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:27:28 +02:00
Ettore Di Giacinto
6b38c32a65
use completion type
2025-05-16 18:26:55 +02:00
Ettore Di Giacinto
8d16602e6d
Simplify image loading
2025-05-16 18:26:40 +02:00
Ettore Di Giacinto
632b0b175b
Placeholder
2025-05-16 18:25:55 +02:00
Ettore Di Giacinto
31b280f894
This seems to be broken - 360a9c98e1 (diff-a18a8e64e12a01167d8e98fc[…]cccf0d4eed09d76d879L2998-L3207)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:25:45 +02:00
Ettore Di Giacinto
141ceaf581
Re-enable grammars
2025-05-16 18:25:26 +02:00
Ettore Di Giacinto
73cb2f8fa5
Reset auto detected template
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-15 23:17:08 +02:00
Ettore Di Giacinto
b087a44fa0
Add logs
2025-05-15 23:17:00 +02:00
Ettore Di Giacinto
1dc76be5f8
this shouldn't be private for now
2025-05-15 23:16:47 +02:00
Ettore Di Giacinto
b1e0d0ad3b
Update json.hpp
2025-05-15 22:41:55 +02:00
Ettore Di Giacinto
6381f9bda2
Make it compile
2025-05-15 22:41:42 +02:00
Ettore Di Giacinto
453eb7d1c8
wip
2025-05-15 20:04:07 +02:00
Ettore Di Giacinto
cd4c0b8aa6
wip
2025-05-14 22:57:56 +02:00
Ettore Di Giacinto
7437d0c9ca
WIP
2025-05-14 20:11:06 +02:00
Ettore Di Giacinto
adb24214c6
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 ( #5323 )
...
chore(deps): bump llama.cpp to 'b34c859146630dff136943abc9852ca173a7c9d6'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-06 11:21:25 +02:00
Ettore Di Giacinto
1fc6d469ac
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194' ( #5307 )
...
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-03 18:44:40 +02:00
Ettore Di Giacinto
8abecb4a18
chore: bump grpc limits to 50MB ( #5212 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-19 08:53:24 +02:00
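For context, raising grpc-go's message size caps to 50MB looks roughly like the sketch below; this is an illustration, not the commit's actual code, and the port and option placement are assumptions.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// 50MB, well above grpc-go's 4MB default receive cap.
	const limit = 50 * 1024 * 1024

	lis, err := net.Listen("tcp", ":50051") // port is an assumption
	if err != nil {
		log.Fatal(err)
	}
	// Raise both directions so large payloads (e.g. image bytes in
	// multimodal requests) survive the round trip.
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(limit),
		grpc.MaxSendMsgSize(limit),
	)
	log.Fatal(srv.Serve(lis))
}
```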
Richard Palethorpe
1b899e1a68
feat(stablediffusion): Enable SYCL ( #5144 )
...
* feat(sycl): Enable SYCL for stable diffusion
This is a pain because we compile with CGO, but SD is compiled with
CMake. I don't think we can easily use CMake to set the linker flags
necessary. Also I could not find pkg-config calls that would fully set
the flags, so some of them are set manually.
See https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
for reference. I also resorted to searching the shared object files in
MKLROOT/lib for the symbols.
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(ci): Don't set nproc on cmake
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-10 15:20:53 +02:00
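The SYCL entry above describes hand-picking linker flags because neither CMake nor pkg-config could supply them through CGO. A minimal sketch of what that wiring can look like, assuming the library list from Intel's oneMKL Link Line Advisor (the names and the env-var route below are assumptions, not the commit's verbatim flags):

```go
// Package sd sketches the CGO side of a SYCL-enabled build. cgo only
// expands ${SRCDIR} in #cgo directives, so the MKL library directory is
// typically injected from the environment instead, e.g.:
//
//	CGO_LDFLAGS="-L$MKLROOT/lib -lmkl_sycl -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core" go build
package sd

/*
#cgo CXXFLAGS: -fsycl
#cgo LDFLAGS: -fsycl -lsycl -lstdc++ -lm -ldl
*/
import "C"
```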
Ettore Di Giacinto
25e6f21322
chore(deps): bump llama.cpp to 4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0 ( #5143 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-08 11:26:06 +02:00
Ettore Di Giacinto
ece239966f
chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6 ( #5127 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-06 14:01:51 +02:00
Richard Palethorpe
d2cf8ef070
fix(sycl): kernel not found error by forcing -fsycl ( #5115 )
...
* chore(sycl): Update oneapi to 2025:1
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(sycl): Pass -fsycl flag as workaround
-fsycl should be set by llama.cpp's cmake file, but something goes wrong
and it doesn't appear to get added
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(build): Speed up llama build by using all CPUs
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-03 16:22:59 +02:00
Ettore Di Giacinto
18b320d577
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c' ( #5110 )
...
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-03 10:23:14 +02:00
Ettore Di Giacinto
c2a39e3639
fix(llama.cpp): properly handle sigterm ( #5099 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-30 18:08:29 +02:00
Ettore Di Giacinto
423514a5a5
fix(clip): do not imply GPU offload by default ( #5010 )
...
* fix(clip): do not imply GPUs by default
Until a better solution is found upstream, be conservative and default
to CPU.
https://github.com/ggml-org/llama.cpp/pull/12322
https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow to override gpu via backend options
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-13 15:14:11 +01:00
Ettore Di Giacinto
e4fa894153
fix(llama.cpp): correctly handle embeddings in batches ( #4957 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-07 19:29:52 +01:00
Ettore Di Giacinto
67f7bffd18
chore(deps): update llama.cpp and sync with upstream changes ( #4950 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-06 00:40:58 +01:00
Ettore Di Giacinto
9e32fda304
fix(llama.cpp): improve context shift handling ( #4820 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-14 14:55:03 +01:00
Shraddha
03974a4dd4
feat: tokenization with llama.cpp ( #4724 )
...
feat: tokenization
Signed-off-by: shraddhazpy <shraddha@shraddhafive.in>
2025-02-02 17:39:43 +00:00
Ettore Di Giacinto
1d6afbd65d
feat(llama.cpp): Add support to grammar triggers ( #4733 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-02 13:25:03 +01:00
Ettore Di Giacinto
958f6eb722
chore(llama.cpp): update dependency ( #4628 )
...
Update to '3edfa7d3753c29e44b964c0ff424d2ea8d5fdee6' and adapt to upstream changes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-18 11:55:13 +01:00
mintyleaf
96f8ec0402
feat: add machine tag and inference timings ( #4577 )
...
* Add machine tag option and extraUsage option; grpc-server -> proto -> endpoint extraUsage data is broken for now
Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
* remove redundant timing fields, fix non-working timings output
Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
* use middleware for Machine-Tag only if tag is specified
Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
---------
Signed-off-by: mintyleaf <mintyleafdev@gmail.com>
2025-01-17 17:05:58 +01:00
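The middleware note in the entry above (apply Machine-Tag only when a tag is configured) reduces to a small conditional wrapper. A sketch in plain net/http terms; LocalAI's real server stack and the exact header name are assumptions here:

```go
package main

import (
	"fmt"
	"net/http"
)

// withMachineTag installs the header-setting middleware only when a tag
// was configured, so untagged deployments skip it entirely.
func withMachineTag(tag string, next http.Handler) http.Handler {
	if tag == "" {
		return next
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Machine-Tag", tag) // header name is an assumption
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", withMachineTag("node-1", api))
}
```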
Ettore Di Giacinto
ab5adf40af
chore(deps): bump llama.cpp to '924518e2e5726e81f3aeb2518fb85963a500e… ( #4592 )
...
chore(deps): bump llama.cpp to '924518e2e5726e81f3aeb2518fb85963a500e93a'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-13 17:33:06 +01:00
Ettore Di Giacinto
c553d73748
chore(deps): bump llama.cpp to 4b0c638b9 ( #4532 )
...
deps(llama.cpp): bump to 4b0c638b9
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-04 09:40:08 +01:00
Ettore Di Giacinto
0eb2911aad
chore(llava): update clip.patch ( #4453 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-23 19:11:31 +01:00
Ettore Di Giacinto
708cba0c1b
chore(llama.cpp): bump, drop penalize_nl ( #4418 )
...
deps(llama.cpp): bump, drop penalize_nl
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-17 00:47:52 +01:00
Ettore Di Giacinto
fc4a714992
feat(llama.cpp): bump and adapt to upstream changes ( #4378 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-14 00:30:52 +01:00
Ettore Di Giacinto
d4c1746c7d
feat(llama.cpp): expose cache_type_k and cache_type_v for quant of kv cache ( #4329 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-12-06 10:23:59 +01:00
Ettore Di Giacinto
cbedf2f428
fix(llama.cpp): embed metal file into result binary for darwin ( #4279 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-28 04:17:00 +00:00
Ettore Di Giacinto
2b62260b6d
feat(models): use rwkv from llama.cpp ( #4264 )
...
feat(rwkv): use rwkv from llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-26 14:22:55 +01:00
Ettore Di Giacinto
404ca3cc23
chore(deps): bump llama.cpp to 47f931c8f9a26c072d71224bc8013cc66ea9e445 ( #4263 )
...
chore(deps): bump llama.cpp to '47f931c8f9a26c072d71224bc8013cc66ea9e445'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-26 11:12:57 +01:00
Ettore Di Giacinto
939fbe59cc
chore(deps): bump llama-cpp to ae8de6d50a09d49545e0afab2e50cc4acfb280e2 ( #4157 )
...
* chore(deps): bump llama-cpp to ae8de6d50a09d49545e0afab2e50cc4acfb280e2
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix(metal): metal file has moved
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-11-15 12:51:43 +01:00
Ettore Di Giacinto
3d4bb757d2
chore(deps): bump llama-cpp to 8f275a7c4593aa34147595a90282cf950a853690 ( #4016 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-30 08:31:13 +01:00
Ettore Di Giacinto
32db787991
chore(deps): bump llama-cpp to cda0e4b648dde8fac162b3430b14a99597d3d74f ( #3884 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-20 00:26:49 +02:00
Ettore Di Giacinto
6257e2f510
chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 ( #3793 )
...
This also adapts to upstream changes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-12 01:25:03 +02:00
siddimore
f84b55d1ef
feat: Add Get Token Metrics to GRPC server ( #3687 )
...
* Add Get Token Metrics to GRPC server
Signed-off-by: Siddharth More <siddimore@gmail.com>
* Expose LocalAI endpoint
Signed-off-by: Siddharth More <siddimore@gmail.com>
---------
Signed-off-by: Siddharth More <siddimore@gmail.com>
2024-10-01 14:41:20 +02:00