Showing results for tags 'renderD128'.
My setup is based on a Debian 12 Bookworm LXC container running on Proxmox 8.1.3. I spent quite some time on this, and maybe it helps someone else get it working. If not, I'm sorry...

As I'm using an NFS mount within the container, the container is privileged. Under Options -> Features I've enabled "Nesting" and "NFS". Don't touch the LXC's .conf file just yet! Make sure there are no mount entries in it!

On the Proxmox host:

Find out the GID of the "render" group:

    getent group render

I'm using 104 in this example.

In the LXC container:

Find an unused GID:

    cat /etc/group

I'm using 112 in this example.

Find out which container group currently has the GID of the host's render group:

    getent group 104

I'm using sgx in this example.

Change the GID of the sgx group to the unused GID and update the filesystem:

    groupmod -g 112 sgx
    find / -group 104 -exec chgrp -h sgx {} \;

112 = new GID, 104 = old GID, sgx = group name.

Find the GID of the container's render group:

    getent group render

I'm using 106 in this example.

Change the GID of the "render" group to the GID of the host's render group and update the filesystem (a parameterized sketch of this whole GID swap follows at the end of this walkthrough):

    groupmod -g 104 render
    find / -group 106 -exec chgrp -h render {} \;

104 = new GID, 106 = old GID, render = group name.

Update the sources list:

    nano /etc/apt/sources.list

Add the following lines:

    # non-free firmware
    deb http://deb.debian.org/debian bookworm non-free-firmware
    # non-free drivers and components
    deb http://deb.debian.org/debian bookworm non-free

Install the drivers:

    apt update && apt install intel-media-va-driver-non-free intel-gpu-tools vainfo

Shut down the container.

On the Proxmox host:

Update the LXC's .conf file:

    nano /etc/pve/nodes/<NODE NAME>/lxc/<CT ID>.conf

Add the following lines:

    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

Start the container.

In the LXC container:

Check that the permissions are correct:

    ls -Alh /dev/dri

Output should be similar to:

    total 0
    crw-rw---- 1 root video  226,   0 Dec 10 21:21 card0
    crw-rw---- 1 root render 226, 128 Dec 10 21:21 renderD128

The important part is that it shows video and render as the groups!

Check that VAAPI is available:

    vainfo

The output should list the VAAPI driver in use and the profiles/entrypoints your CPU can decode and encode.

Install Emby Server. I used the 4.8 beta version, as it seems to have some fixes for Intel Alder Lake-N CPUs.

Add the Emby user to the video and render groups:

    sudo usermod -aG video emby
    sudo usermod -aG render emby

Reboot your container and (hopefully) enjoy hardware acceleration! Try starting a movie and change the quality to something that would trigger transcoding. Check the CPU usage in the Proxmox Summary screen and watch the GPU usage inside the container with:

    intel_gpu_top
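Since the GID juggling above is easy to get backwards, here is the same sequence as a small script. This is only a sketch using the example numbers from this post (104/112/106/sgx) and variable names I made up; substitute the values from your own getent group / cat /etc/group checks before running it.

    #!/bin/sh
    # Sketch of the GID swap above -- run inside the container as root.
    # All four values below are the examples from this post; replace
    # them with the results of your own checks.
    HOST_RENDER_GID=104   # `getent group render` on the Proxmox host
    FREE_GID=112          # an unused GID inside the container
    OLD_RENDER_GID=106    # `getent group render` inside the container
    CONFLICT_GROUP=sgx    # `getent group 104` inside the container

    # Move the conflicting group out of the way and re-own its files.
    groupmod -g "$FREE_GID" "$CONFLICT_GROUP"
    find / -group "$HOST_RENDER_GID" -exec chgrp -h "$CONFLICT_GROUP" {} \;

    # Give "render" the host's GID and re-own its files.
    groupmod -g "$HOST_RENDER_GID" render
    find / -group "$OLD_RENDER_GID" -exec chgrp -h render {} \;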
Hardware acceleration - Emby inside an LXC (Debian/Ubuntu) container using Intel iGPU
appoli posted a topic in Linux
Hi All, I have mostly made posts moaning about how stuff doesn't work right/the way I want it to, but I'm going to give back to the community today! For the TL;DR skip down a few paragraphs - before that it's just me venting my woes before I get into how I got hardware acceleration to work on an Ubuntu LXC container when it wasn't working, even though everything said it should have been.

I originally built a FreeNAS machine, on which I planned to store loads of stuff including media, and saw that it had media player plugins. I didn't think too much of it at the time (except that I didn't want to use Plex, because my experience with it showed that it wasted resources/was the 'dummy' version), so I specced out my server and built it. I chose a Sky Lake/Kaby Lake CPU for a number of reasons, but one of them was the iGPU. I tried using an Emby plugin but that didn't work (I now know it's a whole .NET issue with FreeBSD) and then tried a Docker container of Emby - it worked fine, but with no hardware acceleration. I wasn't sure why, but I knew I needed more power in the server for the transcoding and other stuff I was using it for, so I got a Kaby Lake Xeon CPU, making sure to get one with an iGPU, and kept on plugging away.

To cut to the chase:
- found out I needed to pass /dev/dri to Docker
- found out I needed VAAPI to make use of the GPU, but FreeNAS did not support it, so it would need to run in a VM, and FreeNAS was going through lots of changes
- looked around for a few other operating systems that could serve my purposes; tried OMV - it didn't like ZFS
- landed on Proxmox, perfect for my needs - I can spin up Debian/Ubuntu LXC containers easy peasy while passing through whatever I want from the root OS, and can make VMs for other OSes/things I want more secure
- found out that my motherboard had the C232 chipset and I needed the C236 chipset to use the iGPU
- finally bought the right motherboard

Honestly, you would think I had done absolutely no research! But a lot of this was new to me, and I didn't realize what I would be using the machine for (I didn't know how much use I could get out of Emby per se - I already had an HDHomeRun and Apple TVs...). So I swapped in the right motherboard (plus I got some more SATA ports - gonna be cloning my zpool later to a much larger one w/ more redundancy, since I'm using the machine for work too now) and went about making sure that /dev/dri and fb0 were passed through to the Emby LXC container.

*****Skip to Here*****

At this point I double-checked that everything was being passed through to the container (e.g. with lspci) and went through the Emby documentation (they state that Emby should have all the drivers it needs built in, e.g. their own FFmpeg build). However, while a file that could direct play played fine, when I tried playing a 4K HEVC or 9/10-bit/VP9/VP10 file (whatever they actually decided to call that), the video would just load and never start. I went into the console, and VAAPI was indeed installed and showed that it was able to decode/encode the appropriate formats for my CPU. I checked the log - it looked like FFmpeg was doing its thing, transcoding the file, writing stuff to the temp folder, and reporting a transcoding rate (e.g. at one point it said it was transcoding at 66.6x). I was about to post on the forum, but I really really really have been wanting to get this working.
So I looked around and I found the following site, or rather series of files from the VAAPI sites: https://github.com/intel/media-driver

The background to that link is basically that VAAPI needs some extra libraries/the Intel media SDK to operate, depending on the OS/CPU. That link is for an additional driver, and it points to two other libraries that are needed first (libva & GmmLib) along with their dependencies/requirements to build them. Follow the links, git clone those libraries over to a build directory, make them, and install them.

For less experienced people, the GmmLib instructions are less clear: after you git clone GmmLib, make a build directory for cmake, and change into it, you issue the cmake command with '-DCMAKE_BUILD_TYPE=Release' (the site just shows you the possible options). I left out the -DARCH=64 bit because from what I saw on the internet others didn't use it, but you DO need to point cmake at the source tree (where the CMakeLists.txt lives). So either add '..' at the end or '/wherever you git cloned to/gmmlib' to the end of the cmake command and it will run. Then you run make -j8 followed by make install. Once those two guys are added, I git cloned the media-driver bit in, followed the instructions, restarted the container, and honestly didn't expect anything to have changed. But hardware transcoding started working like a charm! (A rough sketch of the whole build sequence is below.)

I do NOT know if it was a combination of those libraries, or a dependency of the libraries (or maybe just me having to reset the BMC a bunch of times b/c my fan control script was acting up, but I highly doubt that's what it was), but after the above everything worked.

As a final note, I don't think this is a shortcoming of the Emby team. As far as I can tell, people are using hardware transcoding via the VAAPI files Emby installs just fine. Maybe it had to do with the specifics of my case - Emby running in an LXC container, the LXC container running inside of Proxmox, the fact that because I have IPMI the BMC has its own video device that is seen in the OS. I'm just happy I got it to work (maybe it can help the Emby team do some investigating) and hope this can help others save some time. Cheers!
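For anyone who wants the whole dance in one place, here is roughly what the build sequence looks like. This is a sketch from memory, not gospel: the build directory names (~/build, build_media) are just my choices, the prerequisite list is an assumption, and you should check each repo's README for the current steps and dependencies.

    # Assumes build-essential, git, cmake, autoconf, automake, libtool
    # and pkg-config are already installed.
    mkdir ~/build && cd ~/build

    # 1) libva
    git clone https://github.com/intel/libva.git
    cd libva && ./autogen.sh && make && sudo make install && cd ..

    # 2) GmmLib -- note the trailing '..' pointing cmake at the source root
    git clone https://github.com/intel/gmmlib.git
    mkdir gmmlib/build && cd gmmlib/build
    cmake -DCMAKE_BUILD_TYPE=Release ..
    make -j8 && sudo make install && cd ../..

    # 3) media-driver
    git clone https://github.com/intel/media-driver.git
    mkdir build_media && cd build_media
    cmake ../media-driver
    make -j8 && sudo make install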
Tagged with: hardware acceleration, linux (and 5 more)