mirror of
https://github.com/mudler/LocalAI.git
synced 2025-05-20 10:35:01 +00:00
fix(CUDA): Add note for how to run CUDA with SELinux (#5259)
* Add note to help run nvidia containers with SELinux
* Use correct CUDA container references as noted in the dockerhub overview
* Clean trailing whitespaces
This commit is contained in:
parent
23f347e687
commit
88857696d4
1 changed file with 8 additions and 6 deletions
@@ -57,12 +57,14 @@ diffusers:
Requirement: nvidia-container-toolkit (installation instructions [1](https://www.server-world.info/en/note?os=Ubuntu_22.04&p=nvidia&f=2) [2](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html))
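A minimal install sketch for apt-based distributions, following NVIDIA's install guide linked above (repository URLs and package names are as documented there; adapt for your distribution and verify against the current guide):

```shell
# Sketch: install nvidia-container-toolkit on Ubuntu/Debian.
# Assumes apt, curl, and sudo privileges; see NVIDIA's guide for other distros.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```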
If using a system with SELinux, ensure you have the policies installed, such as those [provided by nvidia](https://github.com/NVIDIA/dgx-selinux/).
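On SELinux-enforcing hosts, a common workaround (an assumption here, not something the linked policy repository prescribes) is to allow containers to use host devices via an SELinux boolean; for the full NVIDIA policy, follow the README of the repository linked above:

```shell
# Sketch for SELinux-enforcing hosts; assumes the container-selinux policy
# package and the setsebool/getsebool tools are installed.
sudo setsebool -P container_use_devices 1   # allow containers to use host devices
getsebool container_use_devices             # verify the boolean is now on
```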
To check which CUDA version you need, you can run either `nvidia-smi` or `nvcc --version`.
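For illustration, the release number can be extracted from the `nvcc --version` output with standard shell tools (the sample line below is hardcoded so the snippet runs anywhere; on a real system you would pipe `nvcc --version` in instead):

```shell
# Sketch: parse the CUDA release out of `nvcc --version` output.
# The sample line is hardcoded so this runs without a CUDA toolchain.
sample='Cuda compilation tools, release 12.8, V12.8.61'
version=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$version"   # → 12.8
```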
Alternatively, you can also check `nvidia-smi` with Docker:
```
docker run --runtime=nvidia --rm nvidia/cuda:12.8.0-base-ubuntu24.04 nvidia-smi
```
To use CUDA, use the images with the `cublas` tag, for example.
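A usage sketch with a CUDA-enabled image (the image tag below is illustrative, not confirmed by this document; check the project's image registry for the exact cublas/CUDA tags available):

```shell
# Sketch: run LocalAI with a CUDA-enabled image and expose all GPUs.
# The tag is an assumption; verify against the published images.
docker run -p 8080:8080 --gpus all --rm -ti \
  localai/localai:latest-gpu-nvidia-cuda-12
```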