LocalAI/cmd/grpc

Latest commit fab26ac6fe by Ettore Di Giacinto, 2023-07-15 23:32:42 +02:00:

    feat: add llama-master backend

    So we can keep one stable backend and one master backend that points to
    the latest upstream changes.

    Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Directory             | Last commit                            | Date
bert-embeddings       | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
bloomz                | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
dolly                 | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
falcon                | feat: add falcon ggllm via grpc client | 2023-07-15 01:19:43 +02:00
falcon-ggml           | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
gpt2                  | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
gpt4all               | feat: move gpt4all to a grpc service   | 2023-07-15 01:19:43 +02:00
gptj                  | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
gptneox               | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
langchain-huggingface | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
llama                 | feat: move llama to a grpc             | 2023-07-15 01:19:43 +02:00
llama-master          | feat: add llama-master backend         | 2023-07-15 23:32:42 +02:00
mpt                   | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
piper                 | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
replit                | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
rwkv                  | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
stablediffusion       | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
starcoder             | feat: use gRPC for transformers        | 2023-07-15 01:19:43 +02:00
whisper               | feat: move other backends to grpc      | 2023-07-15 01:19:43 +02:00
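As the commit subjects suggest, each directory above builds a small standalone binary that exposes one backend over gRPC, which LocalAI starts per model. The following is a minimal sketch of what such a per-backend entrypoint might look like; the import paths (github.com/go-skynet/LocalAI/pkg/grpc and pkg/grpc/llm/llama), the StartServer helper, and the LLM type are assumptions based on the repository layout at this point in its history, not a verified API.

```go
package main

// Sketch of a per-backend gRPC entrypoint under cmd/grpc/<backend>.
// LocalAI is assumed to launch one such process per loaded model and
// reach it at the address passed via --addr.
//
// NOTE: the import paths, grpc.StartServer, and llama.LLM below are
// assumptions drawn from the repo layout at this commit, not verified API.

import (
	"flag"

	grpc "github.com/go-skynet/LocalAI/pkg/grpc"
	llama "github.com/go-skynet/LocalAI/pkg/grpc/llm/llama"
)

var addr = flag.String("addr", "localhost:50051", "address the backend gRPC server listens on")

func main() {
	flag.Parse()

	// StartServer is assumed to register the backend implementation
	// (here the llama binding) and serve it over gRPC on *addr.
	if err := grpc.StartServer(*addr, &llama.LLM{}); err != nil {
		panic(err)
	}
}
```

Under this layout, keeping both a llama and a llama-master directory lets the two binaries link against different upstream revisions while sharing the same gRPC surface toward LocalAI.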