VAAPI hardware acceleration in Debian 12 LXC on Proxmox


MischaBoender

My setup is based on a Debian 12 Bookworm LXC container running on Proxmox 8.1.3. I spent quite some time on this, so maybe it helps someone else get it working. If not, I'm sorry... Because I'm using an NFS mount within the container, it has to be a privileged container. Under Options -> Features I've enabled "Nesting" and "NFS".
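
If you prefer the CLI, the same two features can be set from the Proxmox host shell (101 here is just a placeholder CT ID):

pct set 101 --features nesting=1,mount=nfs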

Don't touch the LXC's .conf file just yet! Make sure there are no mount entries in it!

On the Proxmox host:
Find out the GID of the "render" group:

getent group render

I'm using 104 in this example.
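
The output will look something like this (the GID on your host will probably differ):

render:x:104: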

In the LXC container:
Find an unused GID:

cat /etc/group

I'm using 112 in this example.
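
A quick way to double-check that a GID is really free (this should return nothing):

getent group 112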

Find the group name for the GID of the host's render group:

getent group 104

I'm using sgx in this example.

Change the GID of the sgx group to the unused GID and update the filesystem: 

groupmod -g 112 sgx
find / -group 104 -exec chgrp -h sgx {} \;

112 = new GID, 104 = old GID, sgx = group name.
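
You can verify the change took effect; with this example's numbers, getent should now report GID 112 for sgx:

getent group sgx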

Find the GID of the container's render group:

getent group render

I'm using 106 in this example.

Change the GID of the "render" group to the GID of the host's render group and update the file system: 

groupmod -g 104 render
find / -group 106 -exec chgrp -h render {} \;

104 = new GID, 106 = old GID, render = group name.
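
At this point getent should report GID 104 for the render group in both the container and on the host:

getent group render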

Update the sources list:

nano /etc/apt/sources.list

Add the following lines:

# non-free firmware
deb http://deb.debian.org/debian bookworm non-free-firmware

# non-free drivers and components
deb http://deb.debian.org/debian bookworm non-free

Install the drivers:

apt update && apt install intel-media-va-driver-non-free intel-gpu-tools vainfo

Shutdown the container.

On the Proxmox host:
Update the LXC's .conf file:

nano /etc/pve/nodes/<NODE NAME>/lxc/<CT ID>.conf

Add the following lines:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
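
The 226:0 and 226:128 values are the major/minor device numbers of card0 and renderD128. If your host exposes different nodes (for example card1 or renderD129 on systems with more than one GPU), check them on the host first and adjust the lines above to match:

ls -l /dev/dri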

Start the container.

In the LXC container:
Check if permissions are correct:

ls -Alh /dev/dri

Output should be similar to:

total 0
crw-rw---- 1 root video  226,   0 Dec 10 21:21 card0
crw-rw---- 1 root render 226, 128 Dec 10 21:21 renderD128

What's important is that it shows video and render as the groups!

Check if VAAPI is available:

vainfo

Output should be similar to:

error: XDG_RUNTIME_DIR is invalid or not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.17.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.1.1 ()
vainfo: Supported profile and entrypoints
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileNone                   : VAEntrypointStats
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointFEI
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointFEI
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointFEI
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointFEI
      VAProfileHEVCMain               : VAEntrypointEncSliceLP
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointEncSliceLP
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointEncSliceLP
      VAProfileVP9Profile1            : VAEntrypointVLD
      VAProfileVP9Profile1            : VAEntrypointEncSliceLP
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointEncSliceLP
      VAProfileVP9Profile3            : VAEntrypointVLD
      VAProfileVP9Profile3            : VAEntrypointEncSliceLP
      VAProfileHEVCMain12             : VAEntrypointVLD
      VAProfileHEVCMain12             : VAEntrypointEncSlice
      VAProfileHEVCMain422_10         : VAEntrypointVLD
      VAProfileHEVCMain422_10         : VAEntrypointEncSlice
      VAProfileHEVCMain422_12         : VAEntrypointVLD
      VAProfileHEVCMain422_12         : VAEntrypointEncSlice
      VAProfileHEVCMain444            : VAEntrypointVLD
      VAProfileHEVCMain444            : VAEntrypointEncSliceLP
      VAProfileHEVCMain444_10         : VAEntrypointVLD
      VAProfileHEVCMain444_10         : VAEntrypointEncSliceLP
      VAProfileHEVCMain444_12         : VAEntrypointVLD
      VAProfileHEVCSccMain            : VAEntrypointVLD
      VAProfileHEVCSccMain            : VAEntrypointEncSliceLP
      VAProfileHEVCSccMain10          : VAEntrypointVLD
      VAProfileHEVCSccMain10          : VAEntrypointEncSliceLP
      VAProfileHEVCSccMain444         : VAEntrypointVLD
      VAProfileHEVCSccMain444         : VAEntrypointEncSliceLP
      VAProfileAV1Profile0            : VAEntrypointVLD
      VAProfileHEVCSccMain444_10      : VAEntrypointVLD
      VAProfileHEVCSccMain444_10      : VAEntrypointEncSliceLP

Install Emby Server. I used the 4.8 Beta version as it seems to have some fixes for Intel Alder Lake-N CPUs.

Add the Emby user to the video and render groups:

sudo usermod -aG video emby
sudo usermod -aG render emby
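
You can check the group memberships before rebooting (this assumes the server runs as the emby user, as above):

id emby

Both video and render should show up in the list of groups.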

Reboot your container and (hopefully) enjoy hardware acceleration!

Try starting a movie and change the quality to something that triggers transcoding. Check the CPU usage in the Proxmox Summary screen and watch the GPU usage in the container with:

intel_gpu_top
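
If you want to test VAAPI outside of Emby first, a quick ffmpeg transcode should light up the Video and Render engines in intel_gpu_top. This is just a sketch: it assumes ffmpeg is installed in the container and input.mkv is any test file you have lying around:

ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.mkv -c:v h264_vaapi -f null -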

 

MischaBoender
On 12/13/2023 at 4:12 AM, moonman said:

https://gist.github.com/packerdl/a4887c30c38a0225204f451103d82ac5
 

Excellent guide for privileged LXC container without changing GID

This is actually for an unprivileged container. In an unprivileged container you can use "lxc.idmap" to map IDs, but that doesn't work in a privileged container. And I needed that privileged container for the NFS mount. 
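
For anyone who does go the unprivileged route: the idea is to map the host's render GID (104 in my example) straight through to the container, roughly like this in the container's .conf, plus a matching root:104:1 line in /etc/subgid on the host. Just a rough sketch of what such a mapping looks like, not something I've used in this setup:

lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 104
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431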


moonman

On 12/24/2023 at 10:04 AM, MischaBoender said:

This is actually for an unprivileged container. In an unprivileged container you can use "lxc.idmap" to map IDs, but that doesn't work in a privileged container. And I needed that privileged container for the NFS mount. 

Yeah, it is for unprivileged. You could mount NFS on the host and pass it through to the LXC. I just don't trust privileged containers and wanted to do it the right way.
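
For example (paths are placeholders), with the NFS share mounted somewhere on the host you can bind it into the container with a mount point entry in the container's .conf:

mp0: /mnt/nfs/media,mp=/mnt/media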


MischaBoender

Proxmox doesn't provide (or maybe I just don't know about) an option to mount NFS read-only just for a container to use; it's always for Proxmox content (ISOs, VZDump, etc.), and Proxmox wants to create folders for that purpose.

You can go the /etc/fstab way and have the OS do the mounting, but I don't think that is "the right way" when running a multi-node Proxmox cluster. 
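
For completeness, the fstab entry itself would look something like this (server address and paths are placeholders):

192.168.1.10:/export/media /mnt/nfs/media nfs ro 0 0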
