LocalAI/backend
Latest commit: 1569bc4959 working to address missing items
Author: Wyatt Neal
Date: 2025-04-29 18:48:13 -04:00

Referencing #3436 and #2930: if I could test it, this might show that the
output from the vLLM backend is processed and returned to the user.

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
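
The commit body describes the behavior under test: text generated by vLLM should make it back to the caller. A minimal sketch of that round trip, using vLLM's public Python API directly rather than LocalAI's actual gRPC plumbing (the model id and prompt are placeholders, not what the backend ships with):

    # Minimal sketch: generate text with vLLM and hand it back to the caller.
    # LocalAI's real backend wraps equivalent logic behind the gRPC service
    # defined in backend.proto; this only shows the vLLM side of the flow.
    from vllm import LLM, SamplingParams

    def generate_reply(prompt: str) -> str:
        # Model id is a placeholder; any HF model supported by vLLM works.
        llm = LLM(model="facebook/opt-125m")
        params = SamplingParams(temperature=0.7, max_tokens=64)
        # llm.generate returns one RequestOutput per prompt; each holds a
        # list of CompletionOutput candidates whose .text is the generation.
        outputs = llm.generate([prompt], params)
        return outputs[0].outputs[0].text

    if __name__ == "__main__":
        print(generate_reply("What is LocalAI?"))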
Name            Last commit message                                                           Last commit date
cpp             chore: bump grpc limits to 50MB (#5212) (see sketch below)                    2025-04-19 08:53:24 +02:00
go              fix(stablediffusion-ggml): Build with DSD CUDA, HIP and Metal flags (#5236)   2025-04-24 10:27:17 +02:00
python          working to address missing items                                              2025-04-29 18:48:13 -04:00
backend.proto   feat(video-gen): add endpoint for video generation (#5247)                    2025-04-26 18:05:01 +02:00
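
On the cpp row's gRPC limit bump: gRPC caps messages at 4 MB by default, which becomes too small once a backend returns large payloads such as images or video frames. A minimal sketch of how a 50 MB limit is applied with grpc's Python API (the option keys are standard grpc-core settings; the address is a placeholder, not LocalAI's actual endpoint):

    # Minimal sketch: raising gRPC's default 4 MB message cap to 50 MB.
    from concurrent import futures
    import grpc

    MAX_MSG_BYTES = 50 * 1024 * 1024  # 50 MB, matching the commit's new limit

    options = [
        ("grpc.max_send_message_length", MAX_MSG_BYTES),
        ("grpc.max_receive_message_length", MAX_MSG_BYTES),
    ]

    # Server side: the limits apply to every message the server handles.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4),
                         options=options)
    server.add_insecure_port("localhost:50051")

    # Client side: the channel must raise the same limits, or large replies
    # are rejected with RESOURCE_EXHAUSTED.
    channel = grpc.insecure_channel("localhost:50051", options=options)

Both ends need the raised limits, since each side enforces its own receive cap independently.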