chore(model gallery): add medgemma-27b-text-it (#5461)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
parent f784986e19
commit e5978dc714
1 changed file with 22 additions and 0 deletions
@@ -1554,6 +1554,28 @@
     - filename: mmproj-medgemma-4b-it-F16.gguf
       sha256: 13913a7e70893b09c40154cbd43456611ea58f12bfe1e5d4ad5b7e4875644dc3
       uri: https://huggingface.co/unsloth/medgemma-4b-it-GGUF/resolve/main/mmproj-F16.gguf
+- !!merge <<: *gemma3
+  name: "medgemma-27b-text-it"
+  urls:
+    - https://huggingface.co/google/medgemma-27b-text-it
+    - https://huggingface.co/unsloth/medgemma-27b-text-it-GGUF
+  description: |
+    MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in two variants: a 4B multimodal version and a 27B text-only version.
+
+    MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Its LLM component is trained on a diverse set of medical data, including radiology images, histopathology patches, ophthalmology images, and dermatology images.
+
+    MedGemma 4B is available in both pre-trained (suffix: -pt) and instruction-tuned (suffix -it) versions. The instruction-tuned version is a better starting point for most applications. The pre-trained version is available for those who want to experiment more deeply with the models.
+
+    MedGemma 27B has been trained exclusively on medical text and optimized for inference-time computation. MedGemma 27B is only available as an instruction-tuned model.
+
+    MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance. These include both open benchmark datasets and curated datasets. Developers can fine-tune MedGemma variants for improved performance. Consult the Intended Use section below for more details.
+  overrides:
+    parameters:
+      model: medgemma-27b-text-it-Q4_K_M.gguf
+  files:
+    - filename: medgemma-27b-text-it-Q4_K_M.gguf
+      sha256: 383b1c414d3f2f1a9c577a61e623d29a4ed4f7834f60b9e5412f5ff4e8aaf080
+      uri: huggingface://unsloth/medgemma-27b-text-it-GGUF/medgemma-27b-text-it-Q4_K_M.gguf
 - &llama4
   url: "github:mudler/LocalAI/gallery/llama3.1-instruct.yaml@master"
   icon: https://avatars.githubusercontent.com/u/153379578
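For context, once a LocalAI instance has picked up this gallery entry and the model has been installed, the new model name can be queried through LocalAI's OpenAI-compatible API. The sketch below is illustrative only, not part of this change: it assumes the official openai Python client, a LocalAI server listening on its default address http://localhost:8080, and the "medgemma-27b-text-it" name defined by this entry.

    # Minimal sketch: query the newly added gallery model through LocalAI's
    # OpenAI-compatible chat endpoint. Assumes LocalAI runs on its default
    # port (8080) and that medgemma-27b-text-it is already installed.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # assumed LocalAI default address
        api_key="sk-no-key-needed",           # LocalAI does not require a real key by default
    )

    response = client.chat.completions.create(
        model="medgemma-27b-text-it",  # model name introduced by this gallery entry
        messages=[
            {"role": "user", "content": "Summarize common contraindications of ibuprofen."}
        ],
    )
    print(response.choices[0].message.content)

The model name passed to the API matches the gallery entry's name field, while the overrides.parameters.model value points at the actual Q4_K_M GGUF file that LocalAI loads.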