LocalAI/gallery
Latest commit: Ettore Di Giacinto (f8fbfd4fa3)
chore(model gallery): add a-m-team_am-thinking-v1 (#5395)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-05-19 17:31:38 +02:00
Name Last commit message Last commit date
alpaca.yaml models(gallery): add leetwizard (#3093) 2024-07-31 10:43:45 +02:00
arch-function.yaml models(gallery): add versatillama-llama-3.2-3b-instruct-abliterated (#3771) 2024-10-09 16:58:34 +02:00
cerbero.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
chatml-hercules.yaml models(gallery): add hercules and helpingAI (#2376) 2024-05-22 22:42:41 +02:00
chatml.yaml fix(chatml): add endoftext stopword 2025-03-01 21:16:10 +01:00
codellama.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
command-r.yaml models(gallery): add mistral-0.3 and command-r, update functions (#2388) 2024-05-23 19:16:08 +02:00
deephermes.yaml fix(deephermes): correct typo 2025-03-01 17:07:12 +01:00
deepseek-r1.yaml chore(model gallery): update deepseek-r1 prompt template (#4686) 2025-01-25 09:04:38 +01:00
deepseek.yaml feat: models(gallery): add deepseek-v2-lite (#2658) 2024-07-13 17:09:59 -04:00
dreamshaper.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
falcon3.yaml chore(model gallery): add falcon3-1b-instruct (#4423) 2024-12-18 10:12:06 +01:00
flux-ggml.yaml fix(flux): Set CFG=1 so that prompts are followed (#5378) 2025-05-16 17:53:54 +02:00
flux.yaml fix(flux): Set CFG=1 so that prompts are followed (#5378) 2025-05-16 17:53:54 +02:00
gemma.yaml fix(gemma): improve prompt for tool calls (#5142) 2025-04-08 10:12:42 +02:00
granite.yaml models(gallery): add granite-3.0-1b-a400m-instruct (#3994) 2024-10-28 19:33:52 +01:00
granite3-2.yaml chore(model gallery): add ibm-granite_granite-3.2-8b-instruct (#4927) 2025-03-02 10:19:27 +01:00
hermes-2-pro-mistral.yaml models(gallery): add hermes-3 (#3252) 2024-08-16 00:02:21 +02:00
hermes-vllm.yaml chore(model-gallery): add more quants for popular models (#3365) 2024-08-24 00:29:24 +02:00
index.yaml chore(model gallery): add a-m-team_am-thinking-v1 (#5395) 2025-05-19 17:31:38 +02:00
llama3-instruct.yaml Update llama3-instruct.yaml 2024-07-27 15:30:13 +02:00
llama3.1-instruct-grammar.yaml Update llama3.1-instruct-grammar.yaml 2024-07-27 15:30:01 +02:00
llama3.1-instruct.yaml Update llama3.1-instruct.yaml 2024-07-27 15:29:50 +02:00
llama3.1-reflective.yaml models(gallery): add llama3.1-reflective config 2024-09-20 17:35:06 +02:00
llama3.2-fcall.yaml chore(model gallery): small fixups to llama3.2-fcall template 2025-02-03 17:58:57 +01:00
llama3.2-quantized.yaml chore(model gallery): add specific message templates for llama3.2 based models (#4707) 2025-01-29 10:19:48 +01:00
llava.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
mathstral.yaml models(gallery): add mathstral-7b-v0.1-imat (#2901) 2024-07-17 18:19:54 +02:00
mistral-0.3.yaml models(gallery): add mistral-0.3 and command-r, update functions (#2388) 2024-05-23 19:16:08 +02:00
moondream.yaml chore(gallery): do not specify backend with moondream 2024-10-10 19:54:07 +02:00
mudler.yaml models(gallery): add LocalAI-Llama3-8b-Function-Call-v0.2-GGUF (#2355) 2024-05-20 00:59:17 +02:00
noromaid.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
openvino.yaml gallery: Added some OpenVINO models (#2249) 2024-05-06 10:52:05 +02:00
parler-tts.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
phi-2-chat.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
phi-2-orange.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
phi-3-chat.yaml models(gallery): add cream-phi-13b (#2417) 2024-05-26 20:11:57 +02:00
phi-3-vision.yaml fix(phi3-vision): add multimodal template (#3944) 2024-10-23 15:34:45 +02:00
phi-4-chat-fcall.yaml chore(model gallery): add LocalAI-functioncall-phi-4-v0.3 (#4599) 2025-01-14 09:27:18 +01:00
phi-4-chat.yaml chore(model gallery): add phi-4 (#4562) 2025-01-08 23:26:25 +01:00
piper.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
qwen-fcall.yaml chore(model gallery): add localai-functioncall-qwen2.5-7b-v0.5 (#4796) 2025-02-10 12:07:35 +01:00
qwen3.yaml chore(model gallery): add qwen3-30b-a3b (#5269) 2025-04-29 09:44:44 +02:00
rerankers.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
rwkv.yaml fix(rwkv model): add stoptoken (#4283) 2024-11-28 09:34:35 +01:00
sd-ggml.yaml chore(model gallery): add sd-3.5-large-ggml (#4647) 2025-01-20 19:04:23 +01:00
sentencetransformers.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
stablediffusion3.yaml feat(sd-3): add stablediffusion 3 support (#2591) 2024-06-18 15:09:39 +02:00
tuluv2.yaml models(gallery): add archangel_sft_pythia2-8b (#2933) 2024-07-20 16:17:34 +02:00
vicuna-chat.yaml models(gallery): add apollo2-9b (#3860) 2024-10-17 10:16:52 +02:00
virtual.yaml fix: yamlint warnings and errors (#2131) 2024-04-25 17:25:56 +00:00
vllm.yaml feat(vllm): Additional vLLM config options (Disable logging, dtype, and Per-Prompt media limits) (#4855) 2025-02-18 19:27:58 +01:00
whisper-base.yaml models(gallery): add all whisper variants (#2462) 2024-06-01 20:04:03 +02:00
wizardlm2.yaml models(gallery): add wizardlm2 (#2209) 2024-05-02 18:31:02 +02:00