Phreeak
Posted February 11, 2024

Hi folks,

I don't know if there is already an article on this; at least I haven't found one. I have a Proxmox server with a built-in Intel 630 iGPU, and I will show you here how you can activate hardware acceleration for it.

!! This is how it works for me. There are probably better ways, but this is the way that works for me.

I installed Emby with the LXC script from tteck (https://github.com/tteck), thanks for your work! https://tteck.github.io/Proxmox/

If you install Emby via the script and can reach the Emby interface, you still have to make a few settings on Proxmox and on the LXC container itself.

1. Switch to your Proxmox instance and execute the following command:

```
nano /etc/modprobe.d/i915.conf
```

Enter the following there and save the file:

```
options i915 enable_guc=3
```

2. Reboot Proxmox.

3. Still on the Proxmox instance, execute this command (you must adapt 105 to your LXC container ID!):

```
nano /etc/pve/lxc/105.conf
```

Copy the following lines to the end of the file and save it:

```
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

4. Still in the Proxmox shell, execute this command:

```
chmod -R 777 /dev/dri/*
```

5. Now switch to the shell of the Emby LXC container and add the Emby user to these groups:

```
usermod -aG video emby
usermod -aG input emby
usermod -aG render emby
```

6. Restart your Emby server.

And here you go!
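To confirm the passthrough worked, a minimal check from inside the container (a sketch: the vainfo and driver package names are assumptions for a Debian-based container and are not part of the guide above):

```
# Inside the Emby LXC container:
ls -l /dev/dri                                  # renderD128 should appear here
# Optionally confirm VA-API can use the iGPU:
apt install vainfo intel-media-va-driver-non-free
vainfo                                          # should list VAProfile entries
```

If the enable_guc option does not seem to apply after the reboot, running update-initramfs -u on the host before rebooting may be needed, since i915 is usually loaded from the initramfs.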
cThumbs
Posted February 13, 2024

This is a great guide and matches a Jellyfin one I've actually been using, which is reassuring. My issue is that my Emby is nested inside the LXC using Docker, and so far I can't work out how to pass the GPU through (I think step 5 is where I get stuck). Does anyone have any idea what I need to do to get this working for my setup?
Luke
Posted February 18, 2024

On 2/13/2024 at 6:28 AM, cThumbs said: "...my Emby is nested inside the LXC using Docker, and so far I can't work out how to pass the GPU through..."

Hi, can you explain more about where you think you're stuck?
T_Tronix 15 Posted March 31, 2024 Posted March 31, 2024 Wonder if this is the same for someone with AMD Cpu and Nvidia GPU on a pcie
richt 86 Posted April 2, 2024 Posted April 2, 2024 Assuming the "i915" used in the above config is referring to an intel driver, then no.
T_Tronix
Posted April 2, 2024

I ended up following this post: https://passbe.com/2020/gpu-nvidia-passthrough-on-proxmox-lxc-container/
AriaGloris
Posted April 18, 2024

On 2/12/2024 at 3:07 AM, Phreeak said: "chmod -R 777 /dev/dri/*"

Does anyone know how to make this command stick across Proxmox reboots? After a server reboot, I have to run it again to bring back hardware transcoding.
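A common way to make device permissions persistent is a udev rule on the Proxmox host instead of re-running chmod; this is a sketch of the usual approach, not something from this thread (the rule file name is arbitrary):

```
# /etc/udev/rules.d/99-dri-permissions.rules (on the Proxmox host)
SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="render", MODE="0666"
SUBSYSTEM=="drm", KERNEL=="card*", GROUP="video", MODE="0666"
```

Reload with udevadm control --reload-rules && udevadm trigger; the modes should then survive reboots.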
Gecko
Posted June 21, 2024 (edited)

My setup is the following, which allows hardware rendering in an LXC container on Proxmox.

1) Get the group IDs

On the host, get the render and video group IDs with getent group render (104) and getent group video (44).
On the Emby LXC container, get the render and video group IDs with getent group render (108) and getent group video (44).

2) Append the following to your Emby LXC config file, taking care to adapt the group IDs to your setup:

```
# 0 -> 43 (lxc), map groups starting at 100000 (host)
lxc.idmap: g 0 100000 44
# 44 (lxc) maps to 44 (host)
lxc.idmap: g 44 44 1
# 45 -> 107, map groups starting at 100045 (host)
lxc.idmap: g 45 100045 63
# 108 (lxc) maps to 104 (host)
lxc.idmap: g 108 104 1
# 109 -> 65535, map groups starting at 100109
lxc.idmap: g 109 100109 65427
# don't forget to map users as well
lxc.idmap: u 0 100000 65536
```

3) Allow Proxmox to map the groups

On the host, nano /etc/subgid and add the following (the video and render HOST IDs):

```
root:104:1
root:44:1
```

Not sure if it's needed, but this line in /etc/subuid may also be mandatory; my config has it anyway:

```
root:100000:65536
```

4) In the Emby LXC config again, pass /dev/dri through to the container:

```
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

5) In the container, add the emby user to the video and render groups:

```
usermod -a -G render emby
usermod -a -G video emby
```

Everything should now work as expected without messing around with chown, chmod, or a privileged container!

Edited June 21, 2024 by Gecko
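For anyone adapting this mapping: each lxc.idmap line reads <u|g> <first container ID> <first host ID> <count>, and the group ranges together must cover container IDs 0 through 65535 with no gaps or overlaps. A quick way to verify from inside the container (standard tooling, not part of the post above):

```
# Inside the container, list the render node with numeric IDs:
ls -ln /dev/dri
# renderD128 should be owned by gid 108 (the container's render group),
# which the idmap above translates to gid 104 on the host.
# If it shows gid 65534 (nogroup), the mapping did not apply.
```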
AriaGloris
Posted July 13, 2024 (marked as solution)

On 7/6/2024 at 2:28 PM, Luke said: "@AriaGloris has this helped?"

I just did a hardware upgrade and re-installed Proxmox and the Emby LXC container. Hardware transcoding is working, the iGPU shows up in the configuration, and no config changes were needed.
guunter
Posted July 15, 2024

On 2/13/2024 at 3:28 AM, cThumbs said: "...my Emby is nested inside the LXC using Docker, and so far I can't work out how to pass the GPU through..."

You edit the .conf file for the LXC: /etc/pve/lxc/xxx.conf
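For the Docker-inside-LXC case specifically: once /dev/dri is visible in the LXC via the .conf entries above, the GPU still has to be handed to the Docker container with a devices mapping. A minimal docker-compose sketch, assuming the official emby/embyserver image and hypothetical host paths:

```
# docker-compose.yml inside the LXC (a sketch; adjust paths to your setup)
services:
  emby:
    image: emby/embyserver
    devices:
      - /dev/dri:/dev/dri        # expose the render node to the container
    volumes:
      - /opt/emby/config:/config # hypothetical config path
    ports:
      - "8096:8096"
    restart: unless-stopped
```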
TariqK
Posted November 27, 2024

On 13/07/2024 at 10:08, AriaGloris said: "...hardware transcoding is working, the iGPU shows up in the configuration, and no config changes were needed."

So it worked for you without any of the extra configuration-file changes (like others in this thread needed) at all?
wpjonesnh
Posted May 19 (edited)

I know this is an old post and I am honestly not trying to hijack it; I just wanted to add a comment for anyone who is trying to do this with an Nvidia GPU. It took me a while to find the correct links/process, but in short I followed this guide first to get my GPU working in an Ubuntu 24.04 LXC container on a Proxmox 8.4.1 VE host (note: the post actually references Plex, but I was focused on just getting my GPU to work): Github Gist NVIDIA Proxmox + LXC (I did not need to follow the "Python/cuDNN" section).

My LXC conf file ended up looking like this:

```
arch: amd64
cores: 40
features: mount=nfs,nesting=1
hostname: emby-pvr
lock: backup
memory: 65535
mp0: disk0:110/vm-110-disk-0.raw,mp=/opt/emby-server/data,backup=1,size=64G
net0: name=eth0,bridge=vmbr1,firewall=1,hwaddr=BC:24:11:86:02:9C,ip=dhcp,type=veth
net1: name=eth1,bridge=vmbr2,hwaddr=BC:24:11:C2:8D:27,ip=192.168.131.225/24,type=veth
ostype: ubuntu
rootfs: local-admin:110/vm-110-disk-0.raw,size=32G
swap: 2048
unprivileged: 0
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 63
lxc.idmap: g 108 104 1
lxc.idmap: g 109 100109 65427
lxc.idmap: u 0 100000 65536
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 509:* rwm
lxc.cgroup.devices.allow: c 238:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file
```

(The mp0 mountpoint is only there because I wanted to store my Emby data in a different location.)

After I completed that process, I installed Emby and was good to go, so I thought I would share in case anyone else trying to use an Nvidia card comes across this post.

Edited May 19 by wpjonesnh
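A quick sanity check for the Nvidia route (standard Nvidia tooling, not specific to this post): if nvidia-smi works inside the container, Emby's hardware transcoding options should be able to see the card.

```
# Inside the LXC container:
nvidia-smi            # should print the GPU, driver version and utilization
ls -l /dev/nvidia*    # the device nodes bind-mounted from the host
```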