From 5a6a6de3d7687b7caddf42f0d73816fd72347ec6 Mon Sep 17 00:00:00 2001
From: B4ckslash
Date: Fri, 24 Nov 2023 18:21:04 +0100
Subject: [PATCH] docs: Update Features->Embeddings page to reflect backend
 restructuring (#1325)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Update path to sentencetransformers backend for local execution

Signed-off-by: Marcus Köhler

* Rename huggingface-embeddings -> sentencetransformers in embeddings.md for consistency with the backend structure

The Dockerfile still knows the "huggingface-embeddings" backend (I assume for compatibility reasons) but uses the sentencetransformers backend under the hood anyway. I figured it would be good to update the docs to use the new naming to make it less confusing moving forward.

As the docker container knows both the "huggingface-embeddings" and the "sentencetransformers" backend, this should not break anything.

Signed-off-by: Marcus Köhler

---------

Signed-off-by: Marcus Köhler
---
 docs/content/features/embeddings.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/content/features/embeddings.md b/docs/content/features/embeddings.md
index 58992c30..09151af8 100644
--- a/docs/content/features/embeddings.md
+++ b/docs/content/features/embeddings.md
@@ -61,23 +61,23 @@ curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json
 
 ## Huggingface embeddings
 
-To use `sentence-formers` and models in `huggingface` you can use the `huggingface` embedding backend.
+To use `sentence-transformers` and models from `huggingface`, you can use the `sentencetransformers` embedding backend.
 
 ```yaml
 name: text-embedding-ada-002
-backend: huggingface-embeddings
+backend: sentencetransformers
 embeddings: true
 parameters:
   model: all-MiniLM-L6-v2
 ```
 
-The `huggingface` backend uses Python [sentence-transformers](https://github.com/UKPLab/sentence-transformers). For a list of all pre-trained models available see here: https://github.com/UKPLab/sentence-transformers#pre-trained-models
+The `sentencetransformers` backend uses the Python [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library. For a list of all available pre-trained models, see https://github.com/UKPLab/sentence-transformers#pre-trained-models
 
 {{% notice note %}}
 
-- The `huggingface` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers you are good to go and should be already configured for use. If you are running `LocalAI` manually you must install the python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable ( `EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/path/to/LocalAI/extra/grpc/huggingface/huggingface.py"` ) .
-- The `huggingface` backend does support only embeddings of text, and not of tokens. If you need to embed tokens you can use the `bert` backend or `llama.cpp`.
-- No models are required to be downloaded before using the `huggingface` backend. The models will be downloaded automatically the first time the API is used.
+- The `sentencetransformers` backend is an optional backend of LocalAI and uses Python. If you are running `LocalAI` from the containers, you are good to go and everything should already be configured for use. If you are running `LocalAI` manually, you must install the Python dependencies (`pip install -r /path/to/LocalAI/extra/requirements`) and specify the extra backend in the `EXTERNAL_GRPC_BACKENDS` environment variable (`EXTERNAL_GRPC_BACKENDS="sentencetransformers:/path/to/LocalAI/backend/python/sentencetransformers/sentencetransformers.py"`).
+- The `sentencetransformers` backend supports only embeddings of text, not of tokens. If you need to embed tokens, you can use the `bert` backend or `llama.cpp`.
+- No models need to be downloaded before using the `sentencetransformers` backend. The models will be downloaded automatically the first time the API is used.
 
 {{% /notice %}}
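
For readers following the updated note in this patch, a minimal end-to-end sketch of the manual (non-container) setup and a test request might look like the following. The `pip install` command, the `EXTERNAL_GRPC_BACKENDS` value, and the `/embeddings` endpoint are taken from the documentation being patched; the `local-ai` binary name, the `--models-path` flag, and the placeholder checkout path are assumptions for illustration and may need adjusting to your installation.

```bash
# Sketch of the manual sentencetransformers setup (assumptions noted above).

# 1. Install the Python dependencies for the extra backends.
pip install -r /path/to/LocalAI/extra/requirements

# 2. Register the external gRPC backend before starting LocalAI.
export EXTERNAL_GRPC_BACKENDS="sentencetransformers:/path/to/LocalAI/backend/python/sentencetransformers/sentencetransformers.py"

# 3. Start LocalAI (binary name and flag are assumed; run in a separate
#    terminal or background it).
./local-ai --models-path ./models

# 4. Query the model configured in the YAML block above via the
#    OpenAI-compatible embeddings endpoint.
curl http://localhost:8080/embeddings -X POST -H "Content-Type: application/json" -d '{
  "input": "Your text string goes here",
  "model": "text-embedding-ada-002"
}'
```

If everything is wired up, the response should contain an embedding vector produced by `all-MiniLM-L6-v2`, downloaded automatically on first use as described in the note.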