LocalAI/pkg
Latest commit 2c9279a542 by Ettore Di Giacinto <mudler@localai.io> (2025-04-26 18:05:01 +02:00):
feat(video-gen): add endpoint for video generation (#5247)
| Directory | Last commit message | Last commit date |
|---|---|---|
| assets | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| concurrency | chore: update jobresult_test.go (#4124) | 2024-11-12 08:52:18 +01:00 |
| downloader | chore(downloader): support hf.co and hf:// URIs (#4677) | 2025-01-24 08:27:22 +01:00 |
| functions | chore(deps): update llama.cpp and sync with upstream changes (#4950) | 2025-03-06 00:40:58 +01:00 |
| grpc | feat(video-gen): add endpoint for video generation (#5247) | 2025-04-26 18:05:01 +02:00 |
| langchain | feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232) | 2024-05-04 17:56:12 +02:00 |
| library | rf: centralize base64 image handling (#2595) | 2024-06-24 08:34:36 +02:00 |
| model | feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243) | 2025-04-26 00:04:47 +02:00 |
| oci | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| startup | chore: drop embedded models (#4715) | 2025-01-30 00:03:01 +01:00 |
| store | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| templates | feat(ui): complete design overhaul (#4942) | 2025-03-05 08:27:03 +01:00 |
| utils | feat(tts): Implement naive response_format for tts endpoint (#4035) | 2024-11-02 19:13:35 +00:00 |
| xsync | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| xsysinfo | feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243) | 2025-04-26 00:04:47 +02:00 |