Mirror of https://github.com/mudler/LocalAI.git, synced 2025-06-29 22:20:43 +00:00
⬆️ Checksum updates in gallery/index.yaml
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
parent 07655c0c2e
commit 5d430be640
1 changed file with 5 additions and 12 deletions
gallery/index.yaml
@@ -21,8 +21,8 @@
       model: phi-4-Q4_K_M.gguf
   files:
     - filename: phi-4-Q4_K_M.gguf
-      sha256: e38bd5fa5f1c03d51ebc34a8d7b284e0da089c8af05e7f409a0079a9c831a10b
       uri: huggingface://bartowski/phi-4-GGUF/phi-4-Q4_K_M.gguf
+      sha256: 009aba717c09d4a35890c7d35eb59d54e1dba884c7c526e7197d9c13ab5911d9
 - &falcon3
   name: "falcon3-1b-instruct"
   url: "github:mudler/LocalAI/gallery/falcon3.yaml@master"
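The refreshed pin only matters if downloaders actually check it. For context (not part of the commit), a minimal Python sketch of that verification for the entry above: hash a locally downloaded phi-4-Q4_K_M.gguf in chunks and compare it to the sha256 now recorded in the index. The expected digest is taken from the added line; the local file path is an assumption.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so multi-GB GGUF files fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest copied from the '+' line above; the local path is a placeholder assumption.
expected = "009aba717c09d4a35890c7d35eb59d54e1dba884c7c526e7197d9c13ab5911d9"
actual = sha256_of("phi-4-Q4_K_M.gguf")
print("checksum OK" if actual == expected else f"MISMATCH: got {actual}")
```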
@@ -2726,14 +2726,7 @@
   urls:
     - https://huggingface.co/Krystalan/DRT-o1-7B
     - https://huggingface.co/QuantFactory/DRT-o1-7B-GGUF
-  description: |
-    In this work, we introduce DRT-o1, an attempt to bring the success of long thought reasoning to neural machine translation (MT). To this end,
-
-    🌟 We mine English sentences with similes or metaphors from existing literature books, which are suitable for translation via long thought.
-    🌟 We propose a designed multi-agent framework with three agents (i.e., a translator, an advisor and an evaluator) to synthesize the MT samples with long thought. There are 22,264 synthesized samples in total.
-    🌟 We train DRT-o1-8B, DRT-o1-7B and DRT-o1-14B using Llama-3.1-8B-Instruct, Qwen2.5-7B-Instruct and Qwen2.5-14B-Instruct as backbones.
-
-    Our goal is not to achieve competitive performance with OpenAI’s O1 in neural machine translation (MT). Instead, we explore technical routes to bring the success of long thought to MT. To this end, we introduce DRT-o1, a byproduct of our exploration, and we hope it could facilitate the corresponding research in this direction.
+  description: "In this work, we introduce DRT-o1, an attempt to bring the success of long thought reasoning to neural machine translation (MT). To this end,\n\n\U0001F31F We mine English sentences with similes or metaphors from existing literature books, which are suitable for translation via long thought.\n\U0001F31F We propose a designed multi-agent framework with three agents (i.e., a translator, an advisor and an evaluator) to synthesize the MT samples with long thought. There are 22,264 synthesized samples in total.\n\U0001F31F We train DRT-o1-8B, DRT-o1-7B and DRT-o1-14B using Llama-3.1-8B-Instruct, Qwen2.5-7B-Instruct and Qwen2.5-14B-Instruct as backbones.\n\nOur goal is not to achieve competitive performance with OpenAI’s O1 in neural machine translation (MT). Instead, we explore technical routes to bring the success of long thought to MT. To this end, we introduce DRT-o1, a byproduct of our exploration, and we hope it could facilitate the corresponding research in this direction.\n"
   overrides:
     parameters:
       model: DRT-o1-7B.Q4_K_M.gguf
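On the consuming side, a hedged sketch of walking a local copy of gallery/index.yaml with PyYAML and listing each entry's pinned files. The field names (name, files, filename, sha256, uri) mirror the entries visible in this diff; the local path and the PyYAML dependency are assumptions, not something this commit adds.

```python
import yaml  # third-party: PyYAML

# Walk a local copy of gallery/index.yaml and print each pinned artifact.
# safe_load resolves anchors such as &falcon3, so aliased entries come
# back as plain dicts; the file path here is a placeholder assumption.
with open("index.yaml", encoding="utf-8") as f:
    gallery = yaml.safe_load(f)

for entry in gallery:
    for item in entry.get("files", []) or []:
        print(f"{entry.get('name', '?')}: {item.get('filename', '?')}")
        print(f"  uri:    {item.get('uri', '<missing>')}")
        print(f"  sha256: {item.get('sha256', '<missing>')}")
```

Entries without a files list (or without a sha256, as on the left side of the first hunk before this update) print `<missing>` rather than failing, which makes the sketch usable for spotting unpinned artifacts.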