Ettore Di Giacinto
632b0b175b
Placeholder
2025-05-16 18:25:55 +02:00
Ettore Di Giacinto
31b280f894
This seems to be broken - 360a9c98e1 (diff-a18a8e64e12a01167d8e98fc[…]cccf0d4eed09d76d879L2998-L3207)
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-16 18:25:45 +02:00
Ettore Di Giacinto
141ceaf581
Re-enable grammars
2025-05-16 18:25:26 +02:00
Ettore Di Giacinto
73cb2f8fa5
Reset auto detected template
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-15 23:17:08 +02:00
Ettore Di Giacinto
b087a44fa0
Add logs
2025-05-15 23:17:00 +02:00
Ettore Di Giacinto
1dc76be5f8
this shouldn't be private for now
2025-05-15 23:16:47 +02:00
Ettore Di Giacinto
b1e0d0ad3b
Update json.hpp
2025-05-15 22:41:55 +02:00
Ettore Di Giacinto
6381f9bda2
Make it compile
2025-05-15 22:41:42 +02:00
Ettore Di Giacinto
453eb7d1c8
wip
2025-05-15 20:04:07 +02:00
Ettore Di Giacinto
cd4c0b8aa6
wip
2025-05-14 22:57:56 +02:00
Ettore Di Giacinto
7437d0c9ca
WIP
2025-05-14 20:11:06 +02:00
Ettore Di Giacinto
dc21604741
chore(deps): bump whisper.cpp ( #5338 )
...
* chore(deps): bump whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add libggml-metal
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Fixups macOS arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* adjust cublas for whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-09 08:17:45 +02:00
Ettore Di Giacinto
adb24214c6
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6
( #5323 )
...
chore(deps): bump llama.cpp to 'b34c859146630dff136943abc9852ca173a7c9d6'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-06 11:21:25 +02:00
Ettore Di Giacinto
1fc6d469ac
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194' ( #5307 )
...
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-03 18:44:40 +02:00
Wyatt Neal
4076ea0494
fix: vllm missing logprobs ( #5279 )
...
* working to address missing items
referencing #3436, #2930 - if I could test it, this might show that the
output from the vllm backend is processed and returned to the user
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* adding in vllm tests to test-extras
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* adding in tests to pipeline for execution
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
* removing todo block, test via pipeline
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
---------
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
2025-04-30 12:55:07 +00:00
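A hedged sketch of the idea behind this fix, assuming vLLM's usual RequestOutput layout (per-token dicts of Logprob objects); this is illustrative, not the PR's actual code:
```python
# Illustrative only: flatten vLLM per-token logprobs into an OpenAI-style list.
# Assumes request_output.outputs[0] exposes token_ids and logprobs as in vLLM.
def extract_logprobs(request_output):
    choice = request_output.outputs[0]
    result = []
    for token_id, candidates in zip(choice.token_ids, choice.logprobs or []):
        chosen = candidates[token_id]  # vllm.sequence.Logprob of the sampled token
        result.append({"token": chosen.decoded_token, "logprob": chosen.logprob})
    return result
```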
Ettore Di Giacinto
6e8f4f584b
fix(diffusers): consider options only in form of key/value ( #5277 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-29 17:08:55 +02:00
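A minimal sketch of the key/value filtering this fix describes: only key:value entries become pipeline kwargs and anything else is skipped (the function name and the skip-on-malformed behavior are assumptions):
```python
# Sketch: keep only options of the form "key:value"; skip malformed entries.
def parse_options(options: list[str]) -> dict[str, str]:
    kwargs = {}
    for opt in options:
        if ":" not in opt:
            continue  # not key/value: ignore instead of failing
        key, value = opt.split(":", 1)
        kwargs[key] = value
    return kwargs

# parse_options(["foo:bar", "broken"]) -> {"foo": "bar"}
```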
Ettore Di Giacinto
2c9279a542
feat(video-gen): add endpoint for video generation ( #5247 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-26 18:05:01 +02:00
Ettore Di Giacinto
cae9bf1308
chore(deps): bump grpcio to 1.72.0 ( #5244 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-25 21:32:37 +02:00
Richard Palethorpe
7f61d397d5
fix(stablediffusion-ggml): Build with DSD CUDA, HIP and Metal flags ( #5236 )
...
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-24 10:27:17 +02:00
Ettore Di Giacinto
61cc76c455
chore(autogptq): drop archived backend ( #5214 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-19 15:52:29 +02:00
Ettore Di Giacinto
8abecb4a18
chore: bump grpc limits to 50MB ( #5212 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-19 08:53:24 +02:00
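For reference, this is how a Python gRPC server raises the default 4MB message cap to 50MB; a sketch of the mechanism, not necessarily how LocalAI wires it:
```python
import grpc
from concurrent import futures

MAX_MESSAGE_SIZE = 50 * 1024 * 1024  # 50MB, matching the bumped limit

# Both send and receive limits must be raised, on servers and clients alike.
server = grpc.server(
    futures.ThreadPoolExecutor(max_workers=4),
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_SIZE),
        ("grpc.max_receive_message_length", MAX_MESSAGE_SIZE),
    ],
)
```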
Richard Palethorpe
0f0fafacd9
fix(stablediffusion): Avoid overwriting SYCL specific flags from outer make call ( #5181 )
...
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-15 19:31:25 +02:00
Richard Palethorpe
1b899e1a68
feat(stablediffusion): Enable SYCL ( #5144 )
...
* feat(sycl): Enable SYCL for stable diffusion
This is a pain because we compile with CGO, but SD is compiled with
CMake. I don't think we can easily use CMake to set the necessary
linker flags. Also, I could not find pkg-config calls that would fully
set the flags, so some of them are set manually.
See https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-link-line-advisor.html
for reference. I also resorted to searching the shared object files in
MKLROOT/lib for the symbols (see the sketch after this entry).
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(ci): Don't set nproc on cmake
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-10 15:20:53 +02:00
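The manual symbol search mentioned in the first bullet can be reproduced with something like the following sketch (the symbol fragment is a placeholder):
```python
import os
import subprocess

# Scan the shared objects under $MKLROOT/lib and report which ones export
# a given symbol, mirroring the manual search described above.
mklroot = os.environ["MKLROOT"]
needle = "some_mkl_symbol"  # placeholder: the symbol the linker complains about
libdir = os.path.join(mklroot, "lib")
for name in sorted(os.listdir(libdir)):
    if not name.endswith(".so"):
        continue
    dump = subprocess.run(["nm", "-D", os.path.join(libdir, name)],
                          capture_output=True, text=True)
    if needle in dump.stdout:
        print(os.path.join(libdir, name))
```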
Ettore Di Giacinto
d484028532
feat(diffusers): add support for Lumina2Text2ImgPipeline ( #4806 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-10 09:55:51 +02:00
Ettore Di Giacinto
25e6f21322
chore(deps): bump llama.cpp to 4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0
( #5143 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-08 11:26:06 +02:00
Ettore Di Giacinto
ece239966f
chore: ⬆️ Update ggml-org/llama.cpp to 6bf28f0111ff9f21b3c1b1eace20c590281e7ba6
( #5127 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-06 14:01:51 +02:00
Richard Palethorpe
d2cf8ef070
fix(sycl): kernel not found error by forcing -fsycl ( #5115 )
...
* chore(sycl): Update oneapi to 2025:1
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(sycl): Pass -fsycl flag as workaround
-fsycl should be set by llama.cpp's cmake file, but something goes wrong
and it doesn't appear to get added
Signed-off-by: Richard Palethorpe <io@richiejp.com>
* fix(build): Speed up llama build by using all CPUs
Signed-off-by: Richard Palethorpe <io@richiejp.com>
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-04-03 16:22:59 +02:00
Ettore Di Giacinto
18b320d577
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c' ( #5110 )
...
chore(deps): bump llama.cpp to 'f01bd02376f919b05ee635f438311be8dfc91d7c'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-03 10:23:14 +02:00
Ettore Di Giacinto
c2a39e3639
fix(llama.cpp): properly handle sigterm ( #5099 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-30 18:08:29 +02:00
Ettore Di Giacinto
423514a5a5
fix(clip): do not imply GPU offload by default ( #5010 )
...
* fix(clip): do not imply GPUs by default
Until a better solution is found upstream, be conservative and default
to CPU.
https://github.com/ggml-org/llama.cpp/pull/12322
https://github.com/ggml-org/llama.cpp/pull/12322#issuecomment-2720970695
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* allow to override gpu via backend options (see the sketch after this entry)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-13 15:14:11 +01:00
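A hypothetical model config showing the override; the gpu option key and the backend name are assumptions based on the commit message, not confirmed names:
```yaml
# Hypothetical: re-enable GPU offload for CLIP via a backend option.
backend: llama-cpp   # illustrative backend name
options:
- gpu                # assumed option key for forcing GPU offload
```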
Ettore Di Giacinto
1db2b9943c
chore(deps): Bump grpcio to 1.71.0 ( #4993 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-11 09:44:21 +01:00
Ettore Di Giacinto
879dc73eba
Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers" ( #4992 )
...
Revert "chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6…"
This reverts commit 1dfc52de16
.
2025-03-11 08:29:05 +01:00
dependabot[bot]
1dfc52de16
chore(deps): Bump intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu in /backend/python/diffusers ( #4973 )
...
chore(deps): Bump intel-extension-for-pytorch
Bumps intel-extension-for-pytorch from 2.3.110+xpu to 2.6.10+xpu.
---
updated-dependencies:
- dependency-name: intel-extension-for-pytorch
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-10 21:14:43 +00:00
Ettore Di Giacinto
e4fa894153
fix(llama.cpp): correctly handle embeddings in batches ( #4957 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-07 19:29:52 +01:00
Ettore Di Giacinto
67f7bffd18
chore(deps): update llama.cpp and sync with upstream changes ( #4950 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-03-06 00:40:58 +01:00
Ettore Di Giacinto
30bf6c962f
chore(stable-diffusion-ggml): update, adapt upstream changes ( #4889 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-23 08:36:41 +01:00
Ettore Di Giacinto
af3bb64e42
fix(coqui): pin transformers ( #4875 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-20 16:16:54 +01:00
Brandon Beiler
6a6e1a0ea9
feat(vllm): Additional vLLM config options (Disable logging, dtype, and Per-Prompt media limits) ( #4855 )
...
* Adding the following vLLM config options: disable_log_status, dtype, limit_mm_per_prompt
Signed-off-by: TheDropZone <brandonbeiler@gmail.com>
* using " marks in the config.yaml file
Signed-off-by: TheDropZone <brandonbeiler@gmail.com>
* adding in missing colon
Signed-off-by: TheDropZone <brandonbeiler@gmail.com>
---------
Signed-off-by: TheDropZone <brandonbeiler@gmail.com>
2025-02-18 19:27:58 +01:00
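A hedged example of a model config using the new options; the option names come from the PR text, while field placement and values are illustrative:
```yaml
# Illustrative model config for the vllm backend; values are examples only.
backend: vllm
dtype: "float16"          # model weight precision, quoted as the PR suggests
disable_log_status: true  # turn off vLLM's periodic status logging
limit_mm_per_prompt:      # per-prompt media limits
  image: 1
```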
Ettore Di Giacinto
9e32fda304
fix(llama.cpp): improve context shift handling ( #4820 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-14 14:55:03 +01:00
Ettore Di Giacinto
f5638a6354
feat(diffusers): allow to override image gen options ( #4807 )
...
Use the options field in the model config to override kwargs if needed.
This allows specifying, from the model yaml config:
```yaml
options:
- foo:bar
```
And each option will be used directly when calling the diffusers
pipeline, e.g.:
```python
pipe(
foo="bar",
)
```
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-11 10:16:32 +01:00
Ettore Di Giacinto
7f90ff7aec
chore(llama-ggml): drop deprecated backend ( #4775 )
...
The GGML (pre-gguf) format is now dead. Since the next version of LocalAI
already brings many breaking compatibility changes, we take the occasion
to also drop ggml support.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-06 18:36:23 +01:00
dependabot[bot]
5a19094d3a
chore(deps): Bump sentence-transformers from 3.4.0 to 3.4.1 in /backend/python/transformers ( #4748 )
...
chore(deps): Bump sentence-transformers in /backend/python/transformers
Bumps [sentence-transformers](https://github.com/UKPLab/sentence-transformers) from 3.4.0 to 3.4.1.
- [Release notes](https://github.com/UKPLab/sentence-transformers/releases)
- [Commits](https://github.com/UKPLab/sentence-transformers/compare/v3.4.0...v3.4.1)
---
updated-dependencies:
- dependency-name: sentence-transformers
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 08:56:51 +01:00
Shraddha
03974a4dd4
feat: tokenization with llama.cpp ( #4724 )
...
feat: tokenization
Signed-off-by: shraddhazpy <shraddha@shraddhafive.in>
2025-02-02 17:39:43 +00:00
Ettore Di Giacinto
1d6afbd65d
feat(llama.cpp): Add support to grammar triggers ( #4733 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-02-02 13:25:03 +01:00
dependabot[bot]
fff35d5528
chore(deps): Bump sentence-transformers from 3.3.1 to 3.4.0 in /backend/python/transformers ( #4702 )
...
chore(deps): Bump sentence-transformers in /backend/python/transformers
Bumps [sentence-transformers](https://github.com/UKPLab/sentence-transformers) from 3.3.1 to 3.4.0.
- [Release notes](https://github.com/UKPLab/sentence-transformers/releases)
- [Commits](https://github.com/UKPLab/sentence-transformers/compare/v3.3.1...v3.4.0)
---
updated-dependencies:
- dependency-name: sentence-transformers
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-27 21:09:50 +00:00
Ettore Di Giacinto
4d44ebc2f2
chore(deps): bump grpcio to 1.70.0 ( #4682 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-24 10:18:22 +01:00
Ettore Di Giacinto
073eaec729
chore(openvoice): drop backend ( #4673 )
...
The project (MeloTTS) has been quiet for a long time; newer backends are
more performant and offer better quality overall.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-23 10:00:36 +01:00
Ettore Di Giacinto
318225f631
chore(parler-tts): drop backend ( #4672 )
...
At this point we support more extensive backends that are SOTA, also
support voice cloning, and offer many other features. This backend is
superseded and poses a significant maintenance burden: the open issue
https://github.com/mudler/LocalAI/issues/3941 remains unresolved because
its dependencies pin old versions of grpc.
Closes https://github.com/mudler/LocalAI/issues/3941
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-23 09:46:16 +01:00
Ettore Di Giacinto
89429a439b
feat(transformers): add support to Mamba ( #4669 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-23 09:30:47 +01:00
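Mamba models load through the Hugging Face transformers API that this backend wraps; a minimal sketch, with an example checkpoint name:
```python
# Minimal example of running a Mamba model via Hugging Face transformers.
from transformers import AutoTokenizer, MambaForCausalLM

model_id = "state-spaces/mamba-130m-hf"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MambaForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The meaning of life is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```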
Ettore Di Giacinto
e426ab7c23
feat(faster-whisper): add backend ( #4666 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-01-23 08:06:18 +01:00
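The faster-whisper library the new backend builds on is used roughly like this; a minimal sketch with example model size and file name:
```python
# Minimal faster-whisper usage: transcribe a file and print timed segments.
from faster_whisper import WhisperModel

model = WhisperModel("base", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav")
print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```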