LocalAI/extra/grpc/vllm
File                   Last commit                                                  Date
backend_pb2.py         feat(llama.cpp): support lora with scale and yarn (#1277)   2023-11-11
backend_pb2_grpc.py    feat(vllm): Initial vllm backend implementation              2023-09-09
backend_vllm.py        feat(conda): conda environments (#1144)                      2023-11-04
Makefile               feat(conda): conda environments (#1144)                      2023-11-04
README.md              feat(conda): conda environments (#1144)                      2023-11-04
run.sh                 feat(conda): conda environments (#1144)                      2023-11-04
test_backend_vllm.py   feat(conda): conda environments (#1144)                      2023-11-04
vllm.yml               feat(conda): conda environments (#1144)                      2023-11-04
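
The file names indicate the backend is exposed over gRPC: backend_pb2.py and backend_pb2_grpc.py are the generated protobuf bindings, backend_vllm.py implements the server, and run.sh wraps its startup inside the conda environment. A minimal sketch of launching it by hand, assuming the environment is named vllm (created by the make vllm target below) and that the server accepts an --addr flag; neither detail is confirmed by this listing:

    # Hypothetical manual launch (what run.sh plausibly wraps): activate
    # the conda environment, then start the gRPC server on a local port.
    # Both the environment name and the --addr flag are assumptions.
    conda activate vllm
    python backend_vllm.py --addr localhost:50051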

To create a separate conda environment for the vllm backend, run:

    make vllm
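
The target plausibly expands to a conda environment build from the spec file in this directory; a sketch under that assumption (the environment name and the role of vllm.yml are inferred from the file listing, not confirmed):

    # Hypothetical expansion of `make vllm`: create a dedicated conda
    # environment from the vllm.yml spec shipped alongside the Makefile.
    conda env create --name vllm --file vllm.yml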