
Hardware Acceleration with LXC on Proxmox Server


Go to solution · Solved by AriaGloris

Recommended Posts

Posted

Hi folks,

I don't know if there is already an article on this, at least I haven't found one.

I have a Proxmox server with a built-in Intel 630 GPU. I will show you here how you can activate hardware acceleration for it.

!! This is how it works for me. There are probably better ways, but this one works for me ;)

I installed Emby with the LXC script from tteck (https://github.com/tteck); thanks for your work!

https://tteck.github.io/Proxmox/

Once you have installed Emby via the script and can reach the Emby interface, you still have to change a few settings on Proxmox and in the LXC container itself.

 

1. Switch to your Proxmox instance and execute the following command:


nano /etc/modprobe.d/i915.conf

enter the following there:

options i915 enable_guc=3

and save the file.
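On many Proxmox installs the i915 module is loaded from the initramfs, so the new option may only take effect once the initramfs is rebuilt. A quick sketch (assuming a standard Debian-based Proxmox host):

```shell
# Rebuild the initramfs so the i915 option is picked up at boot
# (assumption: i915 is loaded early from the initramfs)
update-initramfs -u -k all

# After the reboot in step 2, check whether the GuC/HuC firmware loaded
dmesg | grep -i -E 'guc|huc'
```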

 

2. Reboot Proxmox.

 

3. Still in the Proxmox instance, execute this command

nano /etc/pve/lxc/105.conf (You must adapt 105 to your LXC container's ID!!)


Copy the following lines to the end of the file:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

save the file.
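For reference, the two `226:…` entries are the character-device major:minor numbers of the DRM nodes the kernel creates for the iGPU; you can confirm them on your own host before allowing them:

```shell
# On the Proxmox host: list the DRM device nodes with their
# major:minor numbers (226:0 = card0, 226:128 = renderD128)
ls -l /dev/dri
```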

4. Still in the Proxmox shell, execute this command:

chmod -R 777 /dev/dri/*
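Note that this chmod is lost on reboot. One way to make it persistent (a sketch; the rule file name is my own choice) is a udev rule on the Proxmox host:

```shell
# Re-apply relaxed permissions to the DRI nodes at every boot
# (the file name 99-dri-permissions.rules is arbitrary)
cat > /etc/udev/rules.d/99-dri-permissions.rules <<'EOF'
SUBSYSTEM=="drm", KERNEL=="card*", MODE="0666"
SUBSYSTEM=="drm", KERNEL=="renderD*", MODE="0666"
EOF
udevadm control --reload-rules
```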

5. Now switch to the shell of the Emby LXC container


and add the Emby user to the following groups:

usermod -aG video emby
usermod -aG input emby
usermod -aG render emby
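Group changes only apply to new sessions (hence the restart in the next step). To confirm the memberships and device visibility from inside the container:

```shell
# Inside the container: emby should now list video, input and render
id emby

# The render node passed through in step 3 should be visible
ls -l /dev/dri/renderD128
```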

6. Restart your Emby server.

And here you go! 🤩🥳😍


Posted

This is a great guide, and it matches a Jellyfin one I've been using, which is reassuring.

My issue is that my Emby is nested inside the LXC using Docker, and so far I can't work out how to pass the GPU through (I think Step 5 is where I get stuck). Does anyone have any idea what I need to do to get this working for my setup?

Posted
On 2/13/2024 at 6:28 AM, cThumbs said:

This is a great guide and matches a Jellyfin one I've actually been using and that's reassuring.

My issue is that my emby is nested inside the LXC using docker, so far I can't work out how to pass the gpu through (i think it's Step 5 where I basically become stuck). Does anyone have any idea what I need to do to get this working for my set up?

Hi, can you explain more about where you think you're stuck?

  • 1 month later...
T_Tronix
Posted

Wondering if this is the same for someone with an AMD CPU and an Nvidia GPU on PCIe.

Posted

Assuming the "i915" used in the above config refers to the Intel graphics driver, then no.

Posted

Thanks for following up.

  • 2 weeks later...
AriaGloris
Posted
On 2/12/2024 at 3:07 AM, Phreeak said:

chmod -R 777 /dev/dri/*

Does anyone know how to make this command stick when Proxmox restarts? After a server reboot, I have to run this command again to bring back hardware transcoding.

  • 2 months later...
Posted (edited)

My setup is the following, which allows hardware rendering in an unprivileged LXC container on Proxmox.

1) Get the group IDs

On the host, get the render and video group IDs with getent group render (104) and getent group video (44)
On the Emby LXC container, get the render and video group IDs with getent group render (108) and getent group video (44)

2) Append the following to your Emby LXC config file, taking care to adapt the group IDs to your setup:

 

# 0 -> 43 (lxc): map groups starting at 100000 (host)
lxc.idmap: g 0 100000 44
# 44 (lxc) maps to 44 (host)
lxc.idmap: g 44 44 1
# 45 -> 107 (lxc): map groups starting at 100045 (host)
lxc.idmap: g 45 100045 63
# 108 (lxc) maps to 104 (host)
lxc.idmap: g 108 104 1
# 109 -> 65535 (lxc): map groups starting at 100109 (host)
lxc.idmap: g 109 100109 65427
# don't forget to map users as well
lxc.idmap: u 0 100000 65536
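As a sanity check, the five group ranges above must tile the container's gid space 0–65535 exactly once, so the range lengths have to add up to 65536:

```shell
# Range lengths from the idmap above: 44 + 1 + 63 + 1 + 65427
total=$((44 + 1 + 63 + 1 + 65427))
echo "$total"   # prints 65536
```

If you splice in different host group IDs, re-check this sum after editing the ranges.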

3) Allow Proxmox to map the groups

On the host, nano /etc/subgid and add the following (the video and render HOST IDs):

root:104:1
root:44:1

I'm not sure whether it's needed, but this line in /etc/subuid may also be required; my config has it anyway:

root:100000:65536

 

4) In the Emby LXC config again, pass /dev/dri through to the container:

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
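After restarting the container, you can verify that the id mapping worked by checking the numeric owner of the render node from inside the container; with the example IDs above it should show the mapped render group rather than nobody/65534:

```shell
# Inside the container: the numeric gid should be the mapped
# render group (108 in this example), with no chmod needed
ls -ln /dev/dri/renderD128
```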

 

5) On the container, add the user emby to the video and render groups:

 

usermod -a -G render emby
usermod -a -G video emby

 

Everything should now work as expected without messing around with chown, chmod, or a privileged container!

Edited by Gecko
  • 2 weeks later...
  • Solution
AriaGloris
Posted
On 7/6/2024 at 2:28 PM, Luke said:

@AriaGloris has this helped?

I just did a hardware upgrade and re-installed Proxmox and the Emby LXC container. Hardware transcoding is working, the iGPU shows up in the configuration, and no config alterations were needed.

Posted
On 2/13/2024 at 3:28 AM, cThumbs said:

This is a great guide and matches a Jellyfin one I've actually been using and that's reassuring.

My issue is that my emby is nested inside the LXC using docker, so far I can't work out how to pass the gpu through (i think it's Step 5 where I basically become stuck). Does anyone have any idea what I need to do to get this working for my set up?

You edit the .conf file for the LXC: /etc/pve/lxc/xxx.conf

 

 

  • 4 months later...
Posted
On 13/07/2024 at 10:08, AriaGloris said:

I just did a recent hardware upgrade, re-installed Proxmox and the Emby LXC container. The hardware transcoding is working with the iGPU showing up in the configurations and no code alterations needed.

So it worked for you without any of the extra configuration-file changes (that others in this thread needed) at all?

  • 5 months later...
wpjonesnh
Posted (edited)

I know this is an old post and I am honestly not trying to hijack it; I just wanted to add a comment for anyone who is trying to do this with an Nvidia GPU. It took me a while to find the correct links/process, but in short: I first followed the guide below to get my GPU working in an Ubuntu 24.04 LXC container on a Proxmox 8.4.1 VE host. Note: the post actually references Plex, but I was focusing on just getting my GPU to work.
Github Gist NVIDIA Proxmox + LXC

(I did not need to follow the "Python/cuDNN" section.)

My LXC conf file ended up looking like this:
 

arch: amd64
cores: 40
features: mount=nfs,nesting=1
hostname: emby-pvr
lock: backup
memory: 65535
mp0: disk0:110/vm-110-disk-0.raw,mp=/opt/emby-server/data,backup=1,size=64G
net0: name=eth0,bridge=vmbr1,firewall=1,hwaddr=BC:24:11:86:02:9C,ip=dhcp,type=veth
net1: name=eth1,bridge=vmbr2,hwaddr=BC:24:11:C2:8D:27,ip=192.168.131.225/24,type=veth
ostype: ubuntu
rootfs: local-admin:110/vm-110-disk-0.raw,size=32G
swap: 2048
unprivileged: 0
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 63
lxc.idmap: g 108 104 1
lxc.idmap: g 109 100109 65427
lxc.idmap: u 0 100000 65536
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 509:* rwm
lxc.cgroup.devices.allow: c 238:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap1 dev/nvidia-caps/nvidia-cap1 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps/nvidia-cap2 dev/nvidia-caps/nvidia-cap2 none bind,optional,create=file

(The mp0 mount point is only there because I wanted to store my Emby data in a different location.)
After I got that process completed, I installed Emby and was good to go, so I thought I would share in case anyone else trying to use an Nvidia card comes across this post.
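For anyone following the same route, a quick way to confirm the passthrough before installing Emby (a sketch; assumes the container has the same NVIDIA userland driver version as the host kernel module):

```shell
# Inside the LXC container: the bind-mounted device nodes should exist...
ls -l /dev/nvidia*

# ...and the driver should enumerate the GPU
nvidia-smi
```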

Edited by wpjonesnh
