chore(model-gallery): ⬆️ update checksum (#5346)

⬆️ Checksum updates in gallery/index.yaml

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
LocalAI [bot] 2025-05-10 22:24:04 +02:00, committed by GitHub
parent 6978eec69f
commit 2dcb6d7247

gallery/index.yaml

@@ -7078,13 +7078,7 @@
   urls:
     - https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker
     - https://huggingface.co/bartowski/ServiceNow-AI_Apriel-Nemotron-15b-Thinker-GGUF
-  description: |
-    Apriel-Nemotron-15b-Thinker is a 15-billion-parameter reasoning model in ServiceNow's Apriel SLM series which achieves competitive performance against similarly sized state-of-the-art models like o1-mini, QWQ-32b, and EXAONE-Deep-32b, all while maintaining only half the memory footprint of those alternatives. It builds upon the Apriel-15b-base checkpoint through a three-stage training pipeline (CPT, SFT and GRPO).
-    Highlights
-    Half the size of SOTA models like QWQ-32b and EXAONE-32b and hence memory efficient.
-    It consumes 40% less tokens compared to QWQ-32b, making it super efficient in production. 🚀🚀🚀
-    On par or outperforms on tasks like - MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval and Multi-Challenge making it great for Agentic / Enterprise tasks.
-    Competitive performance on academic benchmarks like AIME-24 AIME-25, AMC-23, MATH-500 and GPQA considering model size.
+  description: "Apriel-Nemotron-15b-Thinker is a 15-billion-parameter reasoning model in ServiceNow's Apriel SLM series which achieves competitive performance against similarly sized state-of-the-art models like o1-mini, QWQ-32b, and EXAONE-Deep-32b, all while maintaining only half the memory footprint of those alternatives. It builds upon the Apriel-15b-base checkpoint through a three-stage training pipeline (CPT, SFT and GRPO).\nHighlights\n Half the size of SOTA models like QWQ-32b and EXAONE-32b and hence memory efficient.\n It consumes 40% less tokens compared to QWQ-32b, making it super efficient in production. \U0001F680\U0001F680\U0001F680\n On par or outperforms on tasks like - MBPP, BFCL, Enterprise RAG, MT Bench, MixEval, IFEval and Multi-Challenge making it great for Agentic / Enterprise tasks.\n Competitive performance on academic benchmarks like AIME-24 AIME-25, AMC-23, MATH-500 and GPQA considering model size.\n"
   overrides:
     parameters:
       model: ServiceNow-AI_Apriel-Nemotron-15b-Thinker-Q4_K_M.gguf
@@ -9013,8 +9007,8 @@
       model: deepseek-r1-distill-llama-8b-Q4_K_M.gguf
   files:
     - filename: deepseek-r1-distill-llama-8b-Q4_K_M.gguf
-      sha256: f8eba201522ab44b79bc54166126bfaf836111ff4cbf2d13c59c3b57da10573b
       uri: huggingface://unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf
+      sha256: 0addb1339a82385bcd973186cd80d18dcc71885d45eabd899781a118d03827d9
 - !!merge <<: *llama31
   name: "selene-1-mini-llama-3.1-8b"
   icon: https://atla-ai.notion.site/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2Ff08e6e70-73af-4363-9621-90e906b92ebc%2F1bfb4316-1ce6-40a0-800c-253739cfcdeb%2Fatla_white3x.svg?table=block&id=17c309d1-7745-80f9-8f60-e755409acd8d&spaceId=f08e6e70-73af-4363-9621-90e906b92ebc&userId=&cache=v2
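
For context, the sha256 values pinned in gallery/index.yaml can be re-checked against the upstream file. The following is a minimal Python sketch, not part of this change; it assumes the gallery's huggingface:// URI is fetched as the usual https://huggingface.co/<repo>/resolve/main/<file> download URL, and the helper name is illustrative.

    import hashlib
    import urllib.request

    # Illustrative helper (not LocalAI code): stream a remote file and return its sha256.
    def sha256_of_url(url: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with urllib.request.urlopen(url) as resp:
            # Read in 1 MiB chunks so a multi-GB GGUF is never held in memory at once.
            while chunk := resp.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    # Expected value taken from the updated DeepSeek entry above; the resolve/main URL
    # is an assumption about how the huggingface:// URI maps to a download link.
    expected = "0addb1339a82385bcd973186cd80d18dcc71885d45eabd899781a118d03827d9"
    url = ("https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF/"
           "resolve/main/DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf")
    actual = sha256_of_url(url)
    print("checksum OK" if actual == expected else f"checksum mismatch: {actual}")

When the upstream GGUF is re-uploaded the pinned hash goes stale, which is the kind of drift automated commits like this one refresh.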