Ettore Di Giacinto
dd381f84ba
feat: sync with server.cpp by including it
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-17 09:49:21 +02:00
Ettore Di Giacinto
66341857c9
Embeddings: set OAICOMPAT_TYPE_EMBEDDING
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-17 08:33:08 +02:00
Ettore Di Giacinto
f30a790052
fix: add httplib
2025-05-16 22:10:10 +02:00
Ettore Di Giacinto
67786c9c41
fix: copy json.hpp from the correct location
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 22:01:38 +02:00
Ettore Di Giacinto
b9cf7c31b9
Sync with upstream
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 21:51:59 +02:00
Ettore Di Giacinto
d2a5905500
Use utils and json directly from llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 21:47:59 +02:00
Ettore Di Giacinto
6c751d98f3
Sync from server.cpp
2025-05-16 21:47:08 +02:00
Ettore Di Giacinto
3d397d8aab
embedding: do not use oai type
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 21:35:57 +02:00
Ettore Di Giacinto
1f536c5ed7
Keep header
2025-05-16 20:08:26 +02:00
Ettore Di Giacinto
b35483742c
Remove some debug logging
2025-05-16 18:28:18 +02:00
Ettore Di Giacinto
b81896a297
correctly return timings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:27:46 +02:00
Ettore Di Giacinto
ef96c4f859
disable streaming
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:27:28 +02:00
Ettore Di Giacinto
6b38c32a65
use completion type
2025-05-16 18:26:55 +02:00
Ettore Di Giacinto
8d16602e6d
Simplify image loading
2025-05-16 18:26:40 +02:00
Ettore Di Giacinto
632b0b175b
Placeholder
2025-05-16 18:25:55 +02:00
Ettore Di Giacinto
31b280f894
This seems to be broken - 360a9c98e1 (diff-a18a8e64e12a01167d8e98fc[…]cccf0d4eed09d76d879L2998-L3207)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:25:45 +02:00
Ettore Di Giacinto
141ceaf581
Re-enable grammars
2025-05-16 18:25:26 +02:00
Ettore Di Giacinto
73cb2f8fa5
Reset auto detected template
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-15 23:17:08 +02:00
Ettore Di Giacinto
b087a44fa0
Add logs
2025-05-15 23:17:00 +02:00
Ettore Di Giacinto
1dc76be5f8
this shouldn't be private for now
2025-05-15 23:16:47 +02:00
Ettore Di Giacinto
b1e0d0ad3b
Update json.hpp
2025-05-15 22:41:55 +02:00
Ettore Di Giacinto
6381f9bda2
Make it compile
2025-05-15 22:41:42 +02:00
Ettore Di Giacinto
453eb7d1c8
wip
2025-05-15 20:04:07 +02:00
Ettore Di Giacinto
cd4c0b8aa6
wip
2025-05-14 22:57:56 +02:00
Ettore Di Giacinto
7437d0c9ca
WIP
2025-05-14 20:11:06 +02:00
Ettore Di Giacinto
dc21604741
chore(deps): bump whisper.cpp ( #5338 )
* chore(deps): bump whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add libggml-metal
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups macOS arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* adjust cublas for whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-09 08:17:45 +02:00
Ettore Di Giacinto
adb24214c6
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6
( #5323 )
chore(deps): bump llama.cpp to 'b34c859146630dff136943abc9852ca173a7c9d6'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-06 11:21:25 +02:00
Ettore Di Giacinto
1fc6d469ac
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194' ( #5307 )
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-03 18:44:40 +02:00
Wyatt Neal
4076ea0494
fix: vllm missing logprobs ( #5279 )
* working to address missing items
referencing #3436 and #2930 - if I could test it, this might show that the
output from the vllm backend is processed and returned to the user
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* adding in vllm tests to test-extras
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* adding in tests to pipeline for execution
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* removing todo block, test via pipeline
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
---------
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
2025-04-30 12:55:07 +00:00
Ettore Di Giacinto
6e8f4f584b
fix(diffusers): consider options only in form of key/value ( #5277 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-29 17:08:55 +02:00
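The fix above means the diffusers backend now accepts options only when they parse as key/value pairs. A minimal Go sketch of that parsing rule (the real backend is Python, and parseOptions is a hypothetical name for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOptions keeps only options in "key:value" form, mirroring the
// rule described in the commit above; anything without a separator
// is skipped rather than guessed at.
func parseOptions(opts []string) map[string]string {
	parsed := make(map[string]string, len(opts))
	for _, o := range opts {
		key, value, ok := strings.Cut(o, ":")
		if !ok {
			continue // not key/value: ignore
		}
		parsed[strings.TrimSpace(key)] = strings.TrimSpace(value)
	}
	return parsed
}

func main() {
	fmt.Println(parseOptions([]string{"guidance_scale:7.5", "bare-flag"}))
	// map[guidance_scale:7.5]
}
```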
Ettore Di Giacinto
2c9279a542
feat(video-gen): add endpoint for video generation ( #5247 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-26 18:05:01 +02:00
Ettore Di Giacinto
cae9bf1308
chore(deps): bump grpcio to 1.72.0 ( #5244 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-25 21:32:37 +02:00
Richard Palethorpe
7f61d397d5
fix(stablediffusion-ggml): Build with DSD CUDA, HIP and Metal flags ( #5236 )
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-24 10:27:17 +02:00
Ettore Di Giacinto
61cc76c455
chore(autogptq): drop archived backend ( #5214 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-19 15:52:29 +02:00
Ettore Di Giacinto
8abecb4a18
chore: bump grpc limits to 50MB ( #5212 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-19 08:53:24 +02:00
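For context on the 50MB bump: grpc-go caps received messages at 4MB by default, so both server options and client call options need raising. A hedged sketch using standard grpc-go APIs (the wiring and target here are illustrative, not LocalAI's actual code):

```go
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// 50MB, matching the commit subject; grpc-go's default receive cap is 4MB.
const maxMsgSize = 50 * 1024 * 1024

func newServer() *grpc.Server {
	return grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize), // inbound requests
		grpc.MaxSendMsgSize(maxMsgSize), // outbound responses
	)
}

func newClient(target string) (*grpc.ClientConn, error) {
	// The client enforces its own limit too; without this, large
	// responses still fail with ResourceExhausted.
	return grpc.NewClient(target,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
}

func main() {
	_ = newServer()
	if _, err := newClient("localhost:50051"); err != nil {
		panic(err)
	}
}
```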
Richard Palethorpe
0f0fafacd9
fix(stablediffusion): Avoid overwriting SYCL specific flags from outer make call ( #5181 )
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-15 19:31:25 +02:00
Richard Palethorpe
1b899e1a68
feat(stablediffusion): Enable SYCL ( #5144 )
* feat(sycl): Enable SYCL for stable diffusion
This is a pain because we compile with CGO, but SD is compiled with
CMake. I don't think we can easily use CMake to set the necessary
linker flags. Also I could not find pkg-config calls that would fully set
the flags, so some of them are set manually.
See https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
for reference. I also resorted to searching the shared object files in
MKLROOT/lib for the symbols.
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(ci): Don't set nproc on cmake
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-10 15:20:53 +02:00
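A hedged sketch of the cgo side of the linking the commit above describes: cgo directives can carry per-build-tag linker flags while Go drives the final link. The MKL library names follow the oneMKL link-line advisor and are assumptions, not the repo's exact flag set:

```go
package stablediffusion

/*
// Base link against the locally built stable-diffusion library.
#cgo LDFLAGS: -L${SRCDIR}/build -lstable-diffusion -lstdc++
// SYCL-only flags, enabled with `go build -tags sycl`. The MKL library
// names here are assumptions; cgo expands only ${SRCDIR}, so in a real
// build the MKL search path would arrive via the environment, e.g.
// CGO_LDFLAGS="-L$MKLROOT/lib".
#cgo sycl LDFLAGS: -fsycl -lmkl_sycl_blas -lmkl_intel_ilp64 -lmkl_tbb_thread -lmkl_core
#cgo sycl CXXFLAGS: -fsycl
#include <stdlib.h>
*/
import "C"
```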
Ettore Di Giacinto
d484028532
feat(diffusers): add support for Lumina2Text2ImgPipeline ( #4806 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-10 09:55:51 +02:00
Ettore Di Giacinto
25e6f21322
chore(deps): bump llama.cpp to 4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0
( #5143 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-08 11:26:06 +02:00
Ettore Di Giacinto
ece239966f
chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6
( #5127 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-06 14:01:51 +02:00
Richard Palethorpe
d2cf8ef070
fix(sycl): kernel not found error by forcing -fsycl ( #5115 )
* chore(sycl): Update oneapi to 2025:1
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(sycl): Pass -fsycl flag as workaround
-fsycl should be set by llama.cpp's cmake file, but something goes wrong
and it doesn't appear to get added
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(build): Speed up llama build by using all CPUs
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-03 16:22:59 +02:00
Ettore Di Giacinto
18b320d577
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c' ( #5110 )
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-03 10:23:14 +02:00
Ettore Di Giacinto
c2a39e3639
fix(llama.cpp): properly handle sigterm ( #5099 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-30 18:08:29 +02:00
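The SIGTERM fix itself lives in the llama.cpp gRPC backend (C++), but the general shape of graceful termination is worth sketching; below is a minimal, illustrative Go version, not the backend's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Catch SIGTERM/SIGINT instead of letting the process die
	// mid-inference, so resources can be released cleanly.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)

	done := make(chan struct{})
	go func() {
		sig := <-sigs
		fmt.Println("received", sig, "- shutting down")
		// release model/server resources here
		close(done)
	}()

	<-done
}
```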
Ettore Di Giacinto
423514a5a5
fix(clip): do not imply GPU offload by default ( #5010 )
* fix(clip): do not imply GPUs by default
Until a better solution is found upstream, be conservative and default to CPU.
https://github.com/ggml-org/llama.cpp/pull/12322
https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow to override gpu via backend options
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-13 15:14:11 +01:00
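Putting the two parts of that commit together: CLIP offload defaults to the conservative CPU path, and only an explicit backend option turns GPU offload back on. A small illustrative Go sketch (the option name "gpu" is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// clipUseGPU defaults to false (the conservative CPU path) and is
// flipped only when an explicit backend option asks for GPU offload.
func clipUseGPU(backendOptions []string) bool {
	for _, opt := range backendOptions {
		if strings.EqualFold(opt, "gpu") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(clipUseGPU(nil))             // false: CPU by default
	fmt.Println(clipUseGPU([]string{"gpu"})) // true: explicit override
}
```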
Ettore Di Giacinto
1db2b9943c
chore(deps): Bump grpcio to 1.71.0 ( #4993 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-11 09:44:21 +01:00
Ettore Di Giacinto
879dc73eba
Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers" ( #4992 )
Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6…"
This reverts commit 1dfc52de16.
2025-03-11 08:29:05 +01:00
dependabot[bot]
1dfc52de16
chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers ( #4973 )
chore(deps): Bump intel-extension-for-pytorch
Bumps intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu.
---
updated-dependencies:
- dependency-name: intel-extension-for-pytorch
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-10 21:14:43 +00:00
Ettore Di Giacinto
e4fa894153
fix(llama.cpp): correctly handle embeddings in batches ( #4957 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-07 19:29:52 +01:00
Ettore Di Giacinto
67f7bffd18
chore(deps): update llama.cpp and sync with upstream changes ( #4950 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-06 00:40:58 +01:00
Ettore Di Giacinto
30bf6c962f
chore(stable-diffusion-ggml): update, adapt upstream changes ( #4889 )
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-23 08:36:41 +01:00