Mirror of https://github.com/mudler/LocalAI.git
Synced 2025-06-24 03:35:00 +00:00
3 commits
9a34fc8c66
feat(backend gallery): add meta packages
This lets us have meta packages such as "vllm" that automatically install the corresponding concrete package depending on the GPU currently detected in the system.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
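A minimal sketch of how such a meta package could resolve to a concrete backend at install time, assuming hypothetical helpers and package names (detectGPU, resolveMetaPackage, and identifiers like cuda12-vllm are illustrative, not LocalAI's actual gallery entries or API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// detectGPU is a hypothetical probe: it checks for vendor-specific tools
// on the PATH to guess the GPU vendor. LocalAI's real detection may differ.
func detectGPU() string {
	if _, err := exec.LookPath("nvidia-smi"); err == nil {
		return "nvidia"
	}
	if _, err := exec.LookPath("rocminfo"); err == nil {
		return "amd"
	}
	return "cpu"
}

// resolveMetaPackage maps a meta package name plus the detected GPU to a
// concrete backend package. The names here are illustrative only.
func resolveMetaPackage(meta, gpu string) string {
	concrete := map[string]map[string]string{
		"vllm": {
			"nvidia": "cuda12-vllm",
			"amd":    "rocm-hipblas-vllm",
			"cpu":    "cpu-vllm",
		},
	}
	if pkgs, ok := concrete[meta]; ok {
		if pkg, ok := pkgs[gpu]; ok {
			return pkg
		}
	}
	return meta // fall back to the literal package name
}

func main() {
	gpu := detectGPU()
	fmt.Printf("installing %s for detected GPU %q\n", resolveMetaPackage("vllm", gpu), gpu)
}
```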
efde0eaf83
feat(backend gallery): display download progress (#5687)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
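A minimal sketch of one common way to surface download progress in Go, wrapping the destination in a byte-counting io.Writer; the progressWriter type and the URL are illustrative assumptions, not LocalAI's actual implementation:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// progressWriter counts bytes as they pass through to the destination
// writer and prints a percentage when the expected size is known.
type progressWriter struct {
	dst     io.Writer
	total   int64 // expected size from Content-Length, 0 if unknown
	written int64
}

func (p *progressWriter) Write(b []byte) (int, error) {
	n, err := p.dst.Write(b)
	p.written += int64(n)
	if p.total > 0 {
		fmt.Printf("\rdownloading: %d%%", p.written*100/p.total)
	}
	return n, err
}

func main() {
	// URL and file name are placeholders for illustration.
	resp, err := http.Get("https://example.com/backend.tar.gz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("backend.tar.gz")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	pw := &progressWriter{dst: out, total: resp.ContentLength}
	if _, err := io.Copy(pw, resp.Body); err != nil {
		panic(err)
	}
	fmt.Println("\ndone")
}
```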
2d64269763
feat: Add backend gallery (#5607)
* feat: Add backend gallery

  This PR adds support for managing backends similarly to models. A backend gallery is now available and can be used to install and remove extra backends. The backend gallery can be configured in the same way as a model gallery, and API calls allow installing and removing backends at runtime as well as during LocalAI's startup phase.

* Add backends docs
* wip: Backend Dockerfile for python backends
* feat: drop extras images, build python backends separately
* fixup on all backends
* test CI
* Tweaks
* Drop old backends leftovers
* Fixup CI
* Move dockerfile upper
* Fix proto
* Feature dropped for consistency - we prefer model galleries
* Add missing packages in the build image
* exllama is only available on cublas
* pin torch on chatterbox
* Fixups to index
* CI
* Debug CI
* Install accelerator deps
* Add target arch
* Add cuda minor version
* Use self-hosted runners
* ci: use quay for test images
* fixups for vllm and chatterbox
* Small fixups on CI
* chatterbox is only available for nvidia
* Simplify CI builds
* Adapt test, use qwen3
* chore(model gallery): add jina-reranker-v1-tiny-en-gguf
* fix(gguf-parser): recover from potential panics that can happen while reading ggufs with gguf-parser
* Use reranker from llama.cpp in AIO images
* Limit concurrent jobs

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
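The gguf-parser bullet above refers to recovering from panics while reading GGUF files; below is a minimal sketch of Go's defer/recover pattern for that, where parseGGUF is a hypothetical stand-in rather than the real gguf-parser API:

```go
package main

import (
	"errors"
	"fmt"
)

// safeParseGGUF converts a panic raised while reading a malformed GGUF
// file into an ordinary error instead of crashing the whole process.
// parseGGUF below is a placeholder, not the actual gguf-parser API.
func safeParseGGUF(path string) (meta map[string]any, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("gguf parsing panicked for %s: %v", path, r)
		}
	}()
	return parseGGUF(path)
}

// parseGGUF stands in for a parser that may panic on corrupt input.
func parseGGUF(path string) (map[string]any, error) {
	if path == "" {
		return nil, errors.New("empty path")
	}
	panic("unexpected EOF while reading tensor metadata")
}

func main() {
	if _, err := safeParseGGUF("model.gguf"); err != nil {
		fmt.Println("recovered:", err)
	}
}
```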