Directory contents:

- backend.py
- install.sh
- Makefile
- README.md
- requirements-after.txt
- requirements-cpu.txt
- requirements-cublas11-after.txt
- requirements-cublas11.txt
- requirements-cublas12-after.txt
- requirements-cublas12.txt
- requirements-hipblas.txt
- requirements-install.txt
- requirements-intel.txt
- requirements.txt
- run.sh
- test.py
- test.sh
# Creating a separate environment for the vllm project

```
make vllm
```
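
Once the environment is in place, backend.py serves vLLM behind LocalAI's gRPC backend interface. As a rough sketch of the underlying vLLM library calls it builds on (the model name below is only a placeholder, and the exact sampling parameters the backend forwards may differ):

```python
# Minimal sketch of vLLM's offline inference API, which backend.py wraps.
# "facebook/opt-125m" is just a small placeholder model for a smoke test.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=32)

# generate() returns one RequestOutput per prompt; each holds the sampled completions
outputs = llm.generate(["Hello, my name is"], params)
for out in outputs:
    print(out.outputs[0].text)
```

For testing the backend itself rather than vLLM directly, test.py and test.sh in this directory are the test entry points.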