Mirror of https://github.com/mudler/LocalAI.git, synced 2025-05-22 11:35:00 +00:00
1038 - Streamlit bot with LocalAI (#1072)
**Description** This PR fixes #1038: adds a Streamlit bot example and updates the examples README. **[Signed commits](../CONTRIBUTING.md#signing-off-on-commits-developer-certificate-of-origin)** - [X] Yes, I signed my commits.
parent 31ed13094b
commit f37a4ec9c8
10 changed files with 353 additions and 2 deletions
examples/streamlit-bot/README.md (new file, 54 lines)
## Streamlit bot



This is an example that deploys a Streamlit bot backed by LocalAI instead of OpenAI. The instructions below are for Windows.

```bash
# Install and run Git Bash

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI.git
cd LocalAI

# (optional) Check out a specific LocalAI tag
# git checkout -b build <TAG>

# Copy a prompt template from the examples
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

# (optional) Edit the .env file to set options such as context size and threads
# vim .env

# Download the model
curl --progress-bar -C - -o models/ggml-gpt4all-j.bin https://gpt4all.io/models/ggml-gpt4all-j.bin

# Install and run Docker Desktop for Windows:
# https://www.docker.com/products/docker-desktop/

# Start the API with docker-compose
docker-compose up -d --pull always
# or build the images yourself:
# docker-compose up -d --build

# The API is now accessible at localhost:8080
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'
# {"model":"ggml-gpt4all-j","choices":[{"message":{"role":"assistant","content":"I'm doing well, thanks. How about you?"}}]}

# Install the bot's requirements
cd examples/streamlit-bot
install_requirements.bat

# Run the bot
start_windows.bat

# The UI (http://localhost:8501/) is launched automatically in the browser.
```
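The core of such a bot, talking to LocalAI's OpenAI-compatible chat endpoint, can be sketched in a few lines of Python. This is a minimal illustration, not the code shipped in `examples/streamlit-bot`; the `BASE_URL`, model name, and helper names (`build_chat_request`, `chat`) are assumptions that mirror the curl calls above, using only the standard library.

```python
import json
import urllib.request

# LocalAI endpoint, matching the curl examples above (assumed default)
BASE_URL = "http://localhost:8080"


def build_chat_request(user_message, model="ggml-gpt4all-j", temperature=0.9):
    """Build the JSON body for LocalAI's /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }


def chat(user_message):
    """Send a chat completion request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(user_message)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Same response shape as the curl example above
    return data["choices"][0]["message"]["content"]
```

A Streamlit front end would then simply collect the user message (e.g. with `st.chat_input`) and display the string returned by `chat()`; the actual example in this PR wires that up for you via `start_windows.bat`.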