Graphics Card Passthrough
*This is a rough draft at this time; please kindly report anything I might have missed. Thank you.
THIS IS THE MOST IMPORTANT LINK FOR PASSING PERMISSIONS:
GitHub - H3rz3n/proxmox-lxc-unprivileged-gpu-passthrough: A small guide to help user correctly passthrough their GPUs to an unprivileged LXC container
GPU passthrough to an LXC in Proxmox is not as straightforward as doing it for a VM because LXCs share the host kernel. However, it is possible by binding the GPU device nodes into the container using lxc.cgroup2 settings and configuring the necessary drivers.
Here’s a step-by-step guide to passing through your GPU to your LXC container in Proxmox:
1. Enable IOMMU on the Proxmox Host
First, make sure IOMMU is enabled on your Proxmox server.
Check if IOMMU is enabled:
dmesg | grep -e DMAR -e IOMMU
If you don’t see output indicating IOMMU is enabled, proceed to enable it.
Edit GRUB Configuration:
nano /etc/default/grub
Find the line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
Modify it based on your CPU vendor:
- For Intel:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
- For AMD:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
Save the file and update GRUB:
update-grub
reboot
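Once the host is back up, re-run the dmesg check above; you should now see output confirming IOMMU (for example "DMAR: IOMMU enabled" on Intel or "AMD-Vi" messages on AMD). Note that if your host boots via systemd-boot rather than GRUB (common with ZFS-on-root installs), the same options go into /etc/kernel/cmdline, applied with proxmox-boot-tool refresh. As an optional sanity check, the sketch below lists the IOMMU groups so you can see where your GPU landed (standard sysfs paths; output varies by system):
# List every device per IOMMU group (sanity check only; LXC passthrough
# does not strictly require isolated groups the way VM passthrough does)
for d in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$d")")")
    echo "IOMMU group $group: $(lspci -nns "${d##*/}")"
done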
2. Check GPU and Bind It to the Host
After rebooting, check if your GPU is recognized:
lspci -nnk | grep -i -A3 vga
If you see something like:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [xxxx] (rev a1)
Make note of the PCI address (01:00.0 in this case).
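Unlike VM passthrough, the host driver must stay bound to the card. A quick way to confirm that, assuming the 01:00.0 address from the example above (substitute your own):
# Show which kernel driver the host currently has bound to the GPU.
# For LXC passthrough you want nvidia / amdgpu / i915 here, NOT vfio-pci.
lspci -nnk -s 01:00.0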
3. Configure the GPU for LXC
Since LXCs share the kernel, we cannot use PCI passthrough like in a VM, but we can pass through the GPU devices.
3.1. Identify GPU Devices
Run:
ls -l /dev/dri
You should see something like:
/dev/dri/card0
/dev/dri/renderD128
These are the devices your container needs access to.
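Each device's major number matters here: 226 is the standard major for DRI devices, and it is exactly what the lxc.cgroup2 rule in the next step allows. To double-check on your own host:
# The number before the comma in the long listing (226) is the device major
# that the cgroup allow rule in the next step must match
ls -l /dev/dri/*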
3.2. Modify the LXC Configuration
Edit your LXC container’s config file:
nano /etc/pve/lxc/100.conf
(Add the following lines)
lxc.cgroup2.devices.allow = c 226:* rwm
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
(Replace 100 with your actual LXC ID)
If you’re using an NVIDIA GPU, also add:
lxc.cgroup2.devices.allow = c 195:* rwm
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
Save and exit.
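Before starting the container, it's worth confirming on the Proxmox host that the NVIDIA device nodes you just referenced actually exist; /dev/nvidia-uvm* in particular may not appear until the nvidia_uvm kernel module has been loaded and used on the host. Also note that on many systems nvidia-uvm registers under its own, dynamically assigned major number rather than 195, in which case it needs its own allow rule:
# On the host: check that the device nodes exist before bind-mounting them
ls -l /dev/nvidia*
# Check the real major numbers; if nvidia-uvm is not 195, add a matching
# lxc.cgroup2.devices.allow = c <major>:* rwm line to the container config
grep nvidia /proc/devices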
4. Install GPU Drivers in the LXC
Start the container:
pct start 100
Inside the container, install the required drivers.
- For NVIDIA GPUs:
apt install -y nvidia-driver nvidia-cuda-toolkit
- For AMD GPUs (ROCm support):
apt install -y mesa-utils clinfo rocm-opencl-runtime
Verify the GPU is accessible in the container:
- For NVIDIA:
nvidia-smi
- For AMD:
clinfo
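Two common snags at this stage: for NVIDIA, the userspace driver inside the container generally has to match the driver version running on the host (the container cannot load its own kernel module), and on unprivileged containers the mounted nodes may show up owned by nobody:nogroup, in which case follow the permissions guide linked at the top rather than loosening permissions on the host. A quick look from inside the container:
# Inside the container: the bind-mounted device nodes should be visible here.
# nobody:nogroup ownership on an unprivileged container means the uid/gid
# mapping still needs to be sorted out (see the link at the top of this page).
ls -l /dev/dri /dev/nvidia* 2>/dev/null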
5. Configure Open WebUI & Deepseek-R1
If you’re running Open WebUI or DeepSeek-R1, ensure it detects the GPU.
Inside the LXC, set the environment variable:
export CUDA_VISIBLE_DEVICES=0
Then, start DeepSeek-R1 with GPU acceleration:
deepseek-r1 --device cuda
For Open WebUI: Modify its config file to ensure it uses the GPU.
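If you end up serving DeepSeek-R1 through Ollama instead (the driver links in the notes below are from the Ollama docs), a quick way to confirm the GPU is actually being used is to run a model and watch the GPU; the model tag below is only an example:
# Inside the container: run a model via Ollama, then check GPU utilization.
# Assumes Ollama is already installed (see the install links in the notes below).
ollama run deepseek-r1:7b "hello"
nvidia-smi    # an ollama runner process with memory allocated should appear here
# (for AMD, check with rocm-smi instead)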
Final Notes
- If the container doesn't detect the GPU, check that the /dev/dri devices are accessible inside it.
- Make sure your host Proxmox kernel has the necessary GPU modules loaded (modprobe nvidia or modprobe amdgpu).
- If you're using an NVIDIA GPU, ensure the NVIDIA Container Toolkit is installed for compatibility.
Some useful driver links I found:
"From what I’ve gathered:
For Nvidia you need drivers: ollama/docs/linux.md at main · ollama/ollama · GitHub
For AMD you need additional package and drivers: ollama/docs/linux.md at main · ollama/ollama · GitHub ollama/docs/linux.md at main · ollama/ollama · GitHub
For Intel you need to build Ollama source code from scratch with intel-basekit and have Intel drivers (this script covers it: Proxmox/install/ollama-install.sh at main · tteck/Proxmox · GitHub) "