LocalAI/core
Ettore Di Giacinto cc1f6f913f
fix(llama.cpp): disable mirostat as default (#2911)
Even though it improves output quality, Mirostat has shown performance drawbacks
noticeable enough to confuse users about the speed of LocalAI (see also
https://github.com/mudler/LocalAI/issues/2780).

This changeset disables Mirostat by default (it can still be
enabled manually).

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Dave <dave@gray101.com>
2025-02-06 19:39:59 +01:00
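For users who want the sampler back, Mirostat can still be re-enabled per model. Below is a minimal sketch of a model YAML definition, assuming the top-level mirostat, mirostat_tau, and mirostat_eta keys from LocalAI's model config; the model name and weights file are hypothetical, and exact keys should be confirmed against your LocalAI version:

    # my-model.yaml - hypothetical model definition
    name: my-model
    parameters:
      model: my-model.gguf   # hypothetical GGUF weights file
    # Mirostat is disabled by default as of #2911; opt back in explicitly:
    mirostat: 2        # 2 = Mirostat 2.0, 1 = Mirostat v1, 0 = disabled (default)
    mirostat_tau: 5.0  # target entropy
    mirostat_eta: 0.1  # learning rate

With this in place, requests routed to my-model use Mirostat sampling again, while all other models keep the faster default behavior.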
application chore: drop embedded models (#4715) 2025-01-30 00:03:01 +01:00
backend feat: tokenization with llama.cpp (#4724) 2025-02-02 17:39:43 +00:00
cli chore: drop embedded models (#4715) 2025-01-30 00:03:01 +01:00
clients feat(store): add Golang client (#1977) 2024-04-16 15:54:14 +02:00
config fix(llama.cpp): disable mirostat as default (#2911) 2025-02-06 19:39:59 +01:00
dependencies_manager fix: be consistent in downloading files, check for scanner errors (#3108) 2024-08-02 20:06:25 +02:00
explorer feat(explorer): make possible to run sync in a separate process (#3224) 2024-08-12 19:25:44 +02:00
gallery fix(gallery): do not return overrides and additional config (#4768) 2025-02-05 18:37:09 +01:00
http chore(llama-ggml): drop deprecated backend (#4775) 2025-02-06 18:36:23 +01:00
p2p fix(p2p): parse maddr correctly (#4219) 2024-11-21 14:06:49 +01:00
schema chore(stablediffusion-ncn): drop in favor of ggml implementation (#4652) 2025-01-22 19:34:16 +01:00
services chore: drop embedded models (#4715) 2025-01-30 00:03:01 +01:00