LocalAI/backend/python/vllm
Latest commit: 1569bc4959 by Wyatt Neal — "working to address missing items"

Referencing #3436 and #2930: if I could test it, this might show that the output from the vllm backend is processed and returned to the user.

Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com>
2025-04-29 18:48:13 -04:00
| File | Last commit | Date |
|---|---|---|
| backend.py | working to address missing items | 2025-04-29 18:48:13 -04:00 |
| install.sh | chore(deps): bump grpcio to 1.68.1 (#4301) | 2024-12-02 19:13:26 +01:00 |
| Makefile | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| README.md | refactor: move backends into the backends directory (#1279) | 2023-11-13 22:40:16 +01:00 |
| requirements-after.txt | fix(python): move vllm to after deps, drop diffusers main deps | 2024-08-07 23:34:37 +02:00 |
| requirements-cpu.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-cublas11-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas11.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-cublas12-after.txt | feat(venv): shared env (#3195) | 2024-08-07 19:45:14 +02:00 |
| requirements-cublas12.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-hipblas.txt | fix(dependencies): pin pytorch version (#3872) | 2024-10-18 09:11:59 +02:00 |
| requirements-install.txt | feat: migrate python backends from conda to uv (#2215) | 2024-05-10 15:08:08 +02:00 |
| requirements-intel.txt | fix(intel): pin torch and intel-extensions (#4435) | 2024-12-19 15:39:32 +01:00 |
| requirements.txt | chore(deps): bump grpcio to 1.72.0 (#5244) | 2025-04-25 21:32:37 +02:00 |
| run.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |
| test.py | working to address missing items | 2025-04-29 18:48:13 -04:00 |
| test.sh | feat: create bash library to handle install/run/test of python backends (#2286) | 2024-05-11 18:32:46 +02:00 |

Creating a separate environment for the vllm project:

```
make vllm
```
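The listing above shows the helper scripts that the shared bash library wires together (install.sh, run.sh, test.sh). A minimal sketch of the typical workflow follows; the script and target names are taken from the listing, but exactly what each Makefile target does under the hood is an assumption, so treat this as a guide rather than a definitive recipe:

```shell
# From this backend's directory (backend/python/vllm):

# Create the dedicated virtual environment and install the
# pinned requirements (presumably driven by install.sh and
# the requirements-*.txt variants for your platform).
make vllm

# Run the backend's test suite (test.sh wraps test.py),
# assuming a `test` target exists as in the other python backends.
make test
```

The per-platform requirements files (cpu, cublas11/12, hipblas, intel) suggest the install step selects a dependency set based on the detected accelerator, so the environment you get depends on your hardware.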