Kipperdawn 5 Posted October 17, 2020

Hi Support,

Long story short, I recently blew up my Emby Server VM by installing a bad driver (my bad) and I'm paying the price now. No big deal, I thought; I can easily recreate it. So I created a new VM (with GPU passthrough), same as the previous one. For some reason, though, Emby just doesn't seem to be able to find the NVENC encoders or NVDEC decoders, at least not when I access the VM via VNC. Strangely enough, if I access the VM via RDP, NVENC and NVDEC appear in Emby; however, if I select them, Emby doesn't use them. And the moment I reboot, NVENC/NVDEC disappear again.

I checked the host and VM logs to make sure the GPU is working correctly, and no errors are reported. I also checked the NVIDIA Control Panel and all seems OK. To confirm the card and NVIDIA driver are installed and working, I tried another program: I grabbed the latest HandBrake and encoded a video using NVENC (x265), and it worked fine. I even watched the GPU status in Task Manager while encoding; it showed load on the GPU encoder and no errors occurred.

I have tried different combinations of drivers (in case the old VM had an older driver), but no luck. At this point I'm not sure how to force Emby to look for the NVENC/NVDEC codecs.

I have attached logs from booting the VM and accessing it via VNC, and from booting and accessing it via RDP (without the suggested tweak, so it doesn't interfere). I just find it strange that Emby sees NVENC/NVDEC when I'm in RDP, but at no other time. Also frustrating, since it was working until I borked my VM.

Any help / guidance / suggestions on how to force Emby to look for the NVENC/NVDEC codecs would be appreciated.

thx in advance

embyserver-63738467875.viaVNC.txt
embyserver-63738468671.viaRDP.txt
hardware_detection-63738468316.viaRDP.txt
hardware_detection-63738467651.viaVNC.txt
Luke 42089 Posted October 17, 2020

Hi there, have you followed our hardware acceleration setup guide? https://support.emby.media/support/solutions/articles/44001160148-hardware-acceleration-overview
Kipperdawn 5 (Author) Posted October 17, 2020

Hi Luke,

Yes, I installed the drivers from NVIDIA, both the latest and one from about a year ago, since that is probably when I set up the previous VM. I also tried the RDP fix pointed out in the guide, and I see the same behavior as described above.

thx
softworkz 5076 Posted October 21, 2020

@Kipperdawn - What's your host OS, guest OS, and VM software?
Kipperdawn 5 (Author) Posted October 23, 2020

Host OS: Unraid 6.8.3. Guest OS: Windows 10, build 19041.572. VM software: KVM.

If I couldn't access the card at all, that would make sense. But since Windows thinks the card is operating correctly, and HandBrake works just fine using the encoders, I find it strange that Emby can't see it. Having said that, I did some more playing around, and after passing the card through with the PCIe ACS override set to "Downstream", Emby is now able to see the encoders, so this is probably not a rabbit hole worth following. I just find it strange that other apps could use the encoders/decoders when Emby could not.

thx
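(Editor's note: Emby performs its hardware detection through ffmpeg, so a quick sanity check from inside the guest is to ask an ffmpeg build whether the driver currently exposes any NVENC encoders. A minimal sketch, assuming some ffmpeg binary is on the PATH; this is a generic probe, not Emby's own detection routine:)

```shell
#!/bin/sh
# Probe for NVENC encoders. If the GPU driver is visible to ffmpeg,
# this prints the matching encoder lines (e.g. h264_nvenc, hevc_nvenc).
if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -hide_banner -encoders 2>/dev/null | grep -i nvenc \
        || echo "ffmpeg found, but no NVENC encoders reported"
else
    echo "ffmpeg not found on PATH"
fi
```

If this reports NVENC encoders in one session type (RDP) but not another (VNC/console), that points at the session/passthrough setup rather than at Emby itself.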
softworkz 5076 Posted October 23, 2020

Emby does its hardware detection at server startup, so what counts is the situation at that time. I'm not familiar with "kvm" (never heard of it, actually, except as a term for hardware switches that let you share one keyboard, video, and mouse between machines). The only other hint I can give is that when you access Win10 via VNC, you are accessing the Win10 "console session", while when you log in via RDP (unless you use mstsc /admin), Win10 creates a separate user session. Maybe your VM software behaves in a way that it doesn't forward the GPU in the console session, only in user sessions...
Kipperdawn 5 (Author) Posted October 23, 2020

Hi softworkz,

KVM is just short for Kernel-based Virtual Machine, the virtualization layer that comes standard with many Linux distributions. And with the change I made to how the card is passed through, Emby is happy now (it sees the card and functions correctly). We can close this issue. Thx for the responses.
softworkz 5076 Posted October 23, 2020

You're welcome. Would you mind sharing which changes you made? It might be helpful for other users... Thanks
Kipperdawn 5 (Author, Solution) Posted October 23, 2020

Yup. In Unraid (6.8.x), under VM Manager > Advanced Settings, there is an option called "PCIe ACS override:". I had to enable it and set it to "Downstream". The GPU was largely isolated in its own IOMMU group, but the PCIe controller was included in that group as well. Enabling this option broke the PCIe controller out and left just the GPU, which made Emby happy.

thx
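(Editor's note: to verify the override took effect, you can list the IOMMU groups on the Unraid host; after setting the ACS override to "Downstream", the GPU and its HDMI audio function should sit in a group of their own, without the PCIe root port. A minimal sketch using only standard sysfs paths; pipe each address through `lspci -nns <addr>` for human-readable device names:)

```shell
#!/bin/sh
# List every IOMMU group and the PCI addresses it contains.
# With the GPU properly isolated, its address (e.g. 0000:01:00.0)
# should not share a group with a PCIe root port or other devices.
for group in /sys/kernel/iommu_groups/*/; do
    [ -d "$group" ] || continue       # no groups: IOMMU disabled, or not on the host
    printf 'IOMMU group %s:\n' "$(basename "$group")"
    for dev in "$group"devices/*; do
        printf '  %s\n' "$(basename "$dev")"
    done
done
```

Devices in the same IOMMU group must be passed through together, which is why the extra controller in the GPU's group caused trouble until the override split them apart.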
Carlo 4561 Posted October 23, 2020

Side note: avoid saying just "KVM", since that can mean a hardware device (a keyboard/video/mouse switch) as well as the virtualization software. Being specific about your VM setup makes things clearer on first reading.