Merge branch 'master' into fix-selinux-nvidia-smi

Commit 40883c410d

30 changed files with 560 additions and 188 deletions
@@ -481,8 +481,7 @@ In the help text below, BASEPATH is the location that local-ai is being executed
| Parameter | Default | Description | Environment Variable |
|-----------|---------|-------------|----------------------|
| --models-path | BASEPATH/models | Path containing models used for inferencing | $LOCALAI_MODELS_PATH |
| --backend-assets-path | /tmp/localai/backend_data | Path used to extract libraries that are required by some of the backends at runtime | $LOCALAI_BACKEND_ASSETS_PATH |
| --image-path | /tmp/generated/images | Location for images generated by backends (e.g. stablediffusion) | $LOCALAI_IMAGE_PATH |
| --audio-path | /tmp/generated/audio | Location for audio generated by backends (e.g. piper) | $LOCALAI_AUDIO_PATH |
| --generated-content-path | /tmp/generated/content | Location for assets generated by backends (e.g. stablediffusion) | $LOCALAI_GENERATED_CONTENT_PATH |
| --upload-path | /tmp/localai/upload | Path to store uploads from the files API | $LOCALAI_UPLOAD_PATH |
| --config-path | /tmp/localai/config | | $LOCALAI_CONFIG_PATH |
| --localai-config-dir | BASEPATH/configuration | Directory for dynamic loading of certain configuration files (currently api_keys.json and external_backends.json) | $LOCALAI_CONFIG_DIR |
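Each of these paths can be set either on the command line or with the environment variable from the last column. As a minimal sketch, assuming a Docker-based setup (the host directory and image tag below are illustrative):

```bash
# Override the models and upload paths via environment variables (illustrative values)
docker run -p 8080:8080 \
  -e LOCALAI_MODELS_PATH=/build/models \
  -e LOCALAI_UPLOAD_PATH=/tmp/localai/upload \
  -v $PWD/models:/build/models \
  localai/localai:latest
```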
@@ -278,3 +278,36 @@ docker run --rm -ti --device /dev/dri -p 8080:8080 -e DEBUG=true -e MODELS_PATH=
```
Note also that sycl has a known issue where it hangs when `mmap: true` is set. If mmap is explicitly enabled, you have to disable it in the model configuration.
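For illustration, a minimal model configuration sketch that disables mmap (the file name, model name and model file below are hypothetical):

```yaml
# models/my-model.yaml (hypothetical example): explicitly disable mmap when running on sycl
name: my-model
parameters:
  model: my-model.gguf
mmap: false
```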
## Vulkan acceleration
### Requirements
If using Nvidia GPUs, follow the steps in the [CUDA](#cudanvidia-acceleration) section to configure your Docker runtime to allow access to the GPU.
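As a quick sanity check, assuming the NVIDIA Container Toolkit is already installed and configured, you can verify that the GPU is visible from inside a container before starting LocalAI:

```bash
# Should print the host GPUs if the Docker runtime is configured correctly
docker run --rm --gpus all ubuntu nvidia-smi
```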
### Container images
To use Vulkan, use the images with the `vulkan` tag, for example `{{< version >}}-vulkan-ffmpeg-core`.
#### Example
To run LocalAI with Docker and Vulkan, you can use the following command as an example:
```bash
docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/build/models localai/localai:latest-vulkan-ffmpeg-core
```
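Once the container is up, one way to confirm the API is responding (host and port match the command above):

```bash
# List the models currently known to LocalAI
curl http://localhost:8080/v1/models
```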
### Notes
In addition to the commands used to run LocalAI normally, you need to specify additional flags to pass the GPU hardware to the container.
These flags are the same as in the sections above, depending on your hardware: [Nvidia](#cudanvidia-acceleration), [AMD](#rocmamd-acceleration) or [Intel](#intel-acceleration-sycl).
If you have mixed hardware, you can pass flags for multiple GPUs, for example:
```bash
# --gpus=all passes through Nvidia GPUs; the --device flags pass through AMD/Intel GPUs
docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/build/models \
  --gpus=all \
  --device /dev/dri --device /dev/kfd \
  localai/localai:latest-vulkan-ffmpeg-core
```