LocalAI/backend
Wyatt Neal 4076ea0494
fix: vllm missing logprobs (#5279)
* working to address missing items

Referencing #3436 and #2930: if I could test it, this might show that the logprobs in the
output from the vllm backend are now processed and returned to the user (see the sketch below).

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>

* adding in vllm tests to test-extras

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>

* adding in tests to pipeline for execution

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>

* removing todo block, test via pipeline

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>

---------

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
2025-04-30 12:55:07 +00:00
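
For context on what "missing logprobs" refers to: vLLM only emits per-token log-probabilities when they are requested via `SamplingParams`, and they then appear on each `CompletionOutput`. The sketch below is illustrative only, assuming vLLM's public Python API (`SamplingParams(logprobs=...)`, `CompletionOutput.logprobs`); it is not the diff from #5279, and the model name is an arbitrary example. It shows where a backend would read the values it forwards to the caller.

```python
# Minimal sketch, not the actual LocalAI backend code: request logprobs from
# vLLM and read them back out of the completion instead of dropping them.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any small model is enough for a smoke test

# Ask vLLM to keep the log-probability of each sampled token (plus 3 alternatives).
params = SamplingParams(max_tokens=16, logprobs=3)

for request_output in llm.generate(["The capital of France is"], params):
    completion = request_output.outputs[0]
    print("text:", completion.text)
    # completion.logprobs is a list with one dict per generated token,
    # mapping token id -> Logprob(logprob=..., rank=..., decoded_token=...).
    for token_id, candidates in zip(completion.token_ids, completion.logprobs):
        chosen = candidates[token_id]
        print(f"{chosen.decoded_token!r}: {chosen.logprob:.4f}")
```

In an OpenAI-style completion response, these per-token values would typically populate the `logprobs` field of each choice rather than being printed.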
Name           Last commit                                                                    Last commit date
cpp            chore: bump grpc limits to 50MB (#5212)                                        2025-04-19 08:53:24 +02:00
go             fix(stablediffusion-ggml): Build with DSD CUDA, HIP and Metal flags (#5236)    2025-04-24 10:27:17 +02:00
python         fix: vllm missing logprobs (#5279)                                             2025-04-30 12:55:07 +00:00
backend.proto  feat(video-gen): add endpoint for video generation (#5247)                     2025-04-26 18:05:01 +02:00