Docker HWA NVIDIA Error


troy_
Solved by troy_

The instructions in this post match the current system configuration; other containers, including Plex, are able to access the hardware for encoding/decoding.

The device appears, but generates an error in the hardware log.

```yaml

      - DeviceIndex: 0
        DeviceInfo:
          VendorName: NVIDIA Corporation
          DeviceName: TU106 [GeForce RTX 2060 SUPER]
          SubsytemVendorName: Gigabyte Technology Co., Ltd
          VendorId: 4318
          DeviceId: 7942
          SubsytemVendorId: 5208
          SubsytemDeviceId: 16369
          DevPath: "/sys/bus/pci/devices/0000:0b:00.0"
          DrmCard: "/dev/dri/card0"
          DrmRender: "/dev/dri/renderD128"
          IsEnabled: 1
          IsBootVga: 1
          Error:
            Number: -1
            Message: Failed to initialize VA /dev/dri/renderD128. Error -1


```

The environment has what I think are the correct environment variables for the groups:

```yaml

environment:
- TZ=Australia/Sydney
- UID=1000
- GID=1000
- GIDLIST=44,109,1000
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_REQUIRE_CUDA=cuda>=11.4 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=450,driver<451
```
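As an aside, the `GIDLIST` values need to match whichever groups actually own the DRI nodes on the host. A quick way to check on the host (assuming the usual `/dev/dri/renderD128` path from the log above):

```shell
# Which group (name and numeric ID) owns the render node
stat -c '%G %g' /dev/dri/renderD128

# Numeric IDs of the video and render groups, for use in GIDLIST
getent group video render
```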

hardware.yaml


I've upgraded to CUDA 12.0.0 with driver 525.60.13.

This has not resolved the issue.

All other containers are able to access the GPU; it's only the official Emby Docker container that is not working.

The error is still:

```
Failed to open the drm device /dev/dri/renderD128
```

 


It seems a version check is causing the issue.

The error message is: `The minimum required Nvidia driver for nvenc is 390.25 or newer`

The installed version is well beyond that.

  NvidiaCodecProvider:
    CodecProviderName: NvidiaCodecProvider
    StandardError: |+
      ffdetect version 5.1-emby_2022_11_29 Copyright (c) 2018-2022 softworkz for Emby LLC
        built with gcc 10.3.0 (crosstool-NG 1.25.0)
        configuration: --cc=x86_64-emby-linux-gnu-gcc --prefix=/home/embybuilder/Buildbot/x64/ffmpeg-x64/staging --disable-alsa --disable-debug --disable-doc --disable-ffplay --disable-libpulse --disable-libxcb --disable-vdpau --disable-xlib --enable-chromaprint --enable-fontconfig --enable-gnutls --enable-gpl --enable-iconv --enable-libaribb24 --enable-libass --enable-libdav1d --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libzvbi --enable-pic --enable-version3 --enable-libtesseract --enable-cuda-llvm --enable-cuvid --enable-libdrm --enable-libmfx --enable-nvdec --enable-nvenc --enable-vaapi --enable-opencl --enable-cross-compile --cross-prefix=x86_64-emby-linux-gnu- --arch=x86_64 --target-os=linux --enable-shared --disable-static --pkg-config=pkg-config --pkg-config-flags=--static --extra-libs='-lm -lstdc++ -pthread'
        libavutil      57. 28.100 / 57. 28.100
      Loaded lib: libcuda.so.1
      Loaded sym: cuInit
      Loaded sym: cuDeviceGetCount
      Loaded sym: cuDeviceGet
      Loaded sym: cuDeviceGetAttribute
      Loaded sym: cuDeviceGetName
      Loaded sym: cuDeviceComputeCapability
      Loaded sym: cuCtxCreate_v2
      Loaded sym: cuCtxSetLimit
      Loaded sym: cuCtxPushCurrent_v2
      Loaded sym: cuCtxPopCurrent_v2
      Loaded sym: cuCtxDestroy_v2
      Loaded sym: cuMemAlloc_v2
      Loaded sym: cuMemAllocPitch_v2
      Loaded sym: cuMemsetD8Async
      Loaded sym: cuMemFree_v2
      Loaded sym: cuMemcpy
      Loaded sym: cuMemcpyAsync
      Loaded sym: cuMemcpy2D_v2
      Loaded sym: cuMemcpy2DAsync_v2
      Loaded sym: cuGetErrorName
      Loaded sym: cuGetErrorString
      Loaded sym: cuCtxGetDevice
      Loaded sym: cuDevicePrimaryCtxRetain
      Loaded sym: cuDevicePrimaryCtxRelease
      Loaded sym: cuDevicePrimaryCtxSetFlags
      Loaded sym: cuDevicePrimaryCtxGetState
      Loaded sym: cuDevicePrimaryCtxReset
      Loaded sym: cuStreamCreate
      Loaded sym: cuStreamQuery
      Loaded sym: cuStreamSynchronize
      Loaded sym: cuStreamDestroy_v2
      Loaded sym: cuStreamAddCallback
      Loaded sym: cuEventCreate
      Loaded sym: cuEventDestroy_v2
      Loaded sym: cuEventSynchronize
      Loaded sym: cuEventQuery
      Loaded sym: cuEventRecord
      Loaded sym: cuLaunchKernel
      Loaded sym: cuLinkCreate
      Loaded sym: cuLinkAddData
      Loaded sym: cuLinkComplete
      Loaded sym: cuLinkDestroy
      Loaded sym: cuModuleLoadData
      Loaded sym: cuModuleUnload
      Loaded sym: cuModuleGetFunction
      Loaded sym: cuModuleGetGlobal
      Loaded sym: cuTexObjectCreate
      Loaded sym: cuTexObjectDestroy
      Loaded sym: cuGLGetDevices_v2
      Loaded sym: cuGraphicsGLRegisterImage
      Loaded sym: cuGraphicsUnregisterResource
      Loaded sym: cuGraphicsMapResources
      Loaded sym: cuGraphicsUnmapResources
      Loaded sym: cuGraphicsSubResourceGetMappedArray
      Loaded sym: cuDeviceGetUuid
      Loaded sym: cuImportExternalMemory
      Loaded sym: cuDestroyExternalMemory
      Loaded sym: cuExternalMemoryGetMappedBuffer
      Loaded sym: cuExternalMemoryGetMappedMipmappedArray
      Loaded sym: cuMipmappedArrayGetLevel
      Loaded sym: cuMipmappedArrayDestroy
      Loaded sym: cuImportExternalSemaphore
      Loaded sym: cuDestroyExternalSemaphore
      Loaded sym: cuSignalExternalSemaphoresAsync
      Loaded sym: cuWaitExternalSemaphoresAsync
      Cannot load libnvcuvid.so.1
      Failed loading nvcuvid functions.
      Cannot load libnvidia-encode.so.1
      Failed loading nvenc functions.
      The minimum required Nvidia driver for nvenc is 390.25 or newer

    Result:
      ProgramVersion:
        Version: 5.1-emby_2022_11_29
        Copyright: Copyright (c) 2018-2022 softworkz for Emby Llc
        Compiler: gcc 10.3.0 (crosstool-NG 1.25.0)
        Configuration: "--cc=x86_64-emby-linux-gnu-gcc --prefix=/home/embybuilder/Buildbot/x64/ffmpeg-x64/staging
          --disable-alsa --disable-debug --disable-doc --disable-ffplay --disable-libpulse
          --disable-libxcb --disable-vdpau --disable-xlib --enable-chromaprint --enable-fontconfig
          --enable-gnutls --enable-gpl --enable-iconv --enable-libaribb24 --enable-libass
          --enable-libdav1d --enable-libfreetype --enable-libfribidi --enable-libmp3lame
          --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libwebp
          --enable-libx264 --enable-libx265 --enable-libzvbi --enable-pic --enable-version3
          --enable-libtesseract --enable-cuda-llvm --enable-cuvid --enable-libdrm
          --enable-libmfx --enable-nvdec --enable-nvenc --enable-vaapi --enable-opencl
          --enable-cross-compile --cross-prefix=x86_64-emby-linux-gnu- --arch=x86_64
          --target-os=linux --enable-shared --disable-static --pkg-config=pkg-config
          --pkg-config-flags=--static --extra-libs='-lm -lstdc++ -pthread'"
      Error:
        Number: -1
        Message: Operation not permitted
      Log:
      - Level: 16
        Category: 0
        Message: Cannot load libnvcuvid.so.1
      - Level: 16
        Category: 0
        Message: Failed loading nvcuvid functions.
      - Level: 16
        Category: 0
        Message: Cannot load libnvidia-encode.so.1
      - Level: 16
        Category: 0
        Message: Failed loading nvenc functions.
      - Level: 16
        Category: 0
        Message: The minimum required Nvidia driver for nvenc is 390.25 or newer
    ExitCode: 1

 


The official Emby container seems to be missing symlinks.

These are clearly version-specific, so I'm not sure what a simple solution would be.

This script (gist here and below) can fix it, but I don't know how to get it to run on every start.

I could wget it and run it, but if I supply a CMD in the docker-compose file, it looks like a one-shot run-and-exit from the entrypoint.

```sh
#!/bin/ash

# Get the NVIDIA driver version from the /usr/lib64/libcuda.so.X.Y.Z filename
NVIDIA_VERSION=$(ls /usr/lib64/libcuda.so.* | sed -E 's/.*libcuda\.so(\.1|\.)?//' | tr -d \\n)

# Create versioned links
cd /usr/lib64 && \
ln -s libcuda.so.${NVIDIA_VERSION} libcuda.so.1 && \
ln -s libnvidia-opticalflow.so.${NVIDIA_VERSION} libnvidia-opticalflow.so.1 && \
ln -s libnvidia-encode.so.${NVIDIA_VERSION} libnvidia-encode.so.1 && \
ln -s libnvcuvid.so.${NVIDIA_VERSION} libnvcuvid.so.1 && \
ln -s libnvidia-cfg.so.${NVIDIA_VERSION} libnvidia-cfg.so.1 && \
ln -s libvdpau_nvidia.so.${NVIDIA_VERSION} libvdpau_nvidia.so.1

# Create unversioned base links
ln -s libnvidia-encode.so.1 libnvidia-encode.so && \
ln -s libnvcuvid.so.1 libnvcuvid.so && \
ln -s libnvidia-cfg.so.1 libnvidia-cfg.so
ln -s libvdpau_nvidia.so.1 libvdpau_nvidia.so
```

What I am trying to do:

```sh
wget -O /nvidia-fix https://gist.githubusercontent.com/troykelly/4e759dd29a12d4cf2e25f3a1c73e4ed8/raw/nvidia-fix-emby.sh && chmod +x /nvidia-fix && /nvidia-fix
```

 


  • Solution

My... horrible... fix for this is to update the entrypoint for the official Emby Docker image as follows.

The entrypoint decodes a base64-encoded copy of the fix script and then hands back to /init.

Important: you shouldn't trust the code below; somebody could edit this post, or I could be evil. You don't know. You should base64-encode your own script until the container can be fixed.

```yaml
services:
  emby:
    image: emby/embyserver:beta
    # ...
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    # ...
    entrypoint:
      [
        "/bin/sh",
        "-c",
        "echo IyEvYmluL2FzaAoKTlZJRElBX1ZFUlNJT049JChscyAvdXNyL2xpYjY0L2xpYmN1ZGEuc28uKiB8IHNlZCAtRSAncy8uKmxpYmN1ZGFcLnNvKFwuMXxcLik/Ly8nIHwgdHIgLWQgXFxuKQoKY2QgL3Vzci9saWI2NCAmJiBcCmxuIC1zIGxpYmN1ZGEuc28uJHtOVklESUFfVkVSU0lPTn0gbGliY3VkYS5zby4xICYmIFwKbG4gLXMgbGlibnZpZGlhLW9wdGljYWxmbG93LnNvLiR7TlZJRElBX1ZFUlNJT059IGxpYm52aWRpYS1vcHRpY2FsZmxvdy5zby4xICYmIFwKbG4gLXMgbGlibnZpZGlhLWVuY29kZS5zby4ke05WSURJQV9WRVJTSU9OfSBsaWJudmlkaWEtZW5jb2RlLnNvLjEgJiYgXApsbiAtcyBsaWJudmN1dmlkLnNvLiR7TlZJRElBX1ZFUlNJT059IGxpYm52Y3V2aWQuc28uMSAmJiBcCmxuIC1zIGxpYm52aWRpYS1jZmcuc28uJHtOVklESUFfVkVSU0lPTn0gbGlibnZpZGlhLWNmZy5zby4xICYmIFwKbG4gLXMgbGlidmRwYXVfbnZpZGlhLnNvLiR7TlZJRElBX1ZFUlNJT059IGxpYnZkcGF1X252aWRpYS5zby4xCgpsbiAtcyBsaWJudmlkaWEtZW5jb2RlLnNvLjEgbGlibnZpZGlhLWVuY29kZS5zbyAmJiBcCmxuIC1zIGxpYm52Y3V2aWQuc28uMSBsaWJudmN1dmlkLnNvICYmIFwKbG4gLXMgbGlibnZpZGlhLWNmZy5zby4xIGxpYm52aWRpYS1jZmcuc28KbG4gLXMgbGlidmRwYXVfbnZpZGlhLnNvLjEgbGlidmRwYXVfbnZpZGlhLnNvCgovaW5pdCAkQA== | base64 -d > /nvidia-fix-init && chmod +x /nvidia-fix-init && /nvidia-fix-init $$@",
      ]
```
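To follow the warning above and produce your own encoded copy, something like this works (filenames are just examples):

```shell
# Encode the fix script as a single base64 line (-w0 disables wrapping)
base64 -w0 nvidia-fix-emby.sh > nvidia-fix-emby.b64

# Sanity check: decoding must reproduce the original script byte for byte
base64 -d nvidia-fix-emby.b64 | cmp - nvidia-fix-emby.sh && echo "round-trip OK"
```

Paste the contents of the `.b64` file in place of the long string after `echo` in the entrypoint.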

 


alucryd

I'm not very well-versed in Docker/NVIDIA, but it seems our official instructions might be a bit outdated. I can't test this right now, though. Are you using `nvidia-docker` or the newer `nvidia-container-runtime`? The latter has apparently superseded the former: https://github.com/NVIDIA/nvidia-container-runtime

Also, our image is based on BusyBox, so it may not be what NVIDIA expects in terms of filesystem layout. I'm unsure how these symlinks can work, as none of these libs are in our image, and we don't even have a `/usr/lib64` directory. I'm guessing it must be mounted by the NVIDIA runtime, so any missing symlink should be the responsibility of that runtime, not ours. Still, I can't explain why it works with other images; I'll investigate as soon as I can test all this.


Thank you @alucryd

I see the same issue with both `nvidia-docker2` and `nvidia-container-toolkit`; I'm currently using `nvidia-container-toolkit`.

I can't quite work out what is going wrong with the Emby container. I maintain an ffmpeg container for CUDA here, and I had to use the NVIDIA CUDA container as a base to get it to work reliably; building from scratch always ended up somewhere painful.

I'm guessing this isn't an option, as the Emby container needs to be all things to all people (no GPU, NVIDIA GPU, Intel, etc.).

Perhaps (as others have done) the CUDA container might just have to be standalone?

The commands I use to install the Toolkit and drivers are below, if it helps as a reference for how the server is built. The VM is running under VMware ESXi.

https://gist.github.com/troykelly/7445ca387aa069a852a4a96c9a57d6a6
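For comparison, the stock toolkit setup I'd expect on Ubuntu (my own summary of NVIDIA's documented install steps, not taken from the gist above; the CUDA image tag is just an example) is roughly:

```shell
# Install the toolkit (assumes NVIDIA's apt repository is already configured)
sudo apt-get install -y nvidia-container-toolkit

# Register the nvidia runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: the driver should be visible from inside a container
sudo docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```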


alucryd

Thanks for the follow-up. Yeah, an NVIDIA-specific image based on CUDA is something I'll certainly consider.


In all fairness, @alucryd, the hackiness of what I have done could be cleaned up without having to rebuild everything you have already done.

Once the files are linked, I can't see any other issues; the official container is working well for me now (with the hackery in place).


alucryd

@troy_ I was finally able to test. Running in a fresh Ubuntu 22.04 VM, I installed docker-ce and nvidia-docker2, then ran:

```sh
sudo docker run -it --rm --gpus all --publish 8096:8096 emby/embyserver
```

It works out of the box here; ffdetect does not crash and detects NVENC and NVDEC fine. I'm really not sure why you need those hacks; maybe you're missing `--gpus all`? I've modified our instructions on Docker Hub to include this flag.



I'm on `22.04.1 LTS (Jammy Jellyfish)`.

I wonder, @3n8, are you using the open drivers, or Portainer to manage containers?

@alucryd It's strange; I can reproduce the GPU issue easily as well. Are you building the VM host the same way I am? Could it be an NVIDIA driver version issue? I'm on `525.60.13`.

Without the hack, the libraries are not available. And if I weren't starting the container with GPU access, even the hack wouldn't get it working.


I'm wrong about the distro then 🤦‍♀️

No Portainer; docker-compose:

```
docker 1:20.10.22-1
docker-compose 2.14.2-1
```


If you want my full compose, it is here:

```
NVIDIA-SMI 525.78.01    Driver Version: 525.78.01    CUDA Version: 12.0
libnvidia-container 1.5.1-1
libnvidia-container-tools 1.5.1-1
nvidia-container-runtime 3.5.0-2
nvidia-container-toolkit 1.5.1-1
```

GPU: Quadro P620
 


alucryd

@troy_ I have fewer steps and don't tinker with anything (no patch, no blacklist, nothing): I just installed vanilla Ubuntu, let it detect and install the right NVIDIA drivers for my card, enabled the Docker and NVIDIA repositories, and installed docker-ce and nvidia-docker2.

I can see the symlinks you're manually creating without any extra step:

```
/ # ls -lah /usr/lib64
total 143M
drwxr-xr-x    2 root     root        4.0K Jan 22 14:28 .
drwxr-xr-x    1 root     root        4.0K Jan 22 14:28 ..
lrwxrwxrwx    1 root     root          12 Jan 22 14:28 libcuda.so -> libcuda.so.1
lrwxrwxrwx    1 root     root          20 Jan 22 14:28 libcuda.so.1 -> libcuda.so.525.60.11
-rw-r--r--    1 root     root       28.3M Nov 23 23:21 libcuda.so.525.60.11
lrwxrwxrwx    1 root     root          28 Jan 22 14:28 libcudadebugger.so.1 -> libcudadebugger.so.525.60.11
-rw-r--r--    1 root     root       10.0M Nov 23 22:46 libcudadebugger.so.525.60.11
lrwxrwxrwx    1 root     root          23 Jan 22 14:28 libnvcuvid.so.1 -> libnvcuvid.so.525.60.11
-rw-r--r--    1 root     root        7.4M Nov 23 22:49 libnvcuvid.so.525.60.11
lrwxrwxrwx    1 root     root          32 Jan 22 14:28 libnvidia-allocator.so.1 -> libnvidia-allocator.so.525.60.11
-rw-r--r--    1 root     root      148.6K Nov 23 22:47 libnvidia-allocator.so.525.60.11
lrwxrwxrwx    1 root     root          26 Jan 22 14:28 libnvidia-cfg.so.1 -> libnvidia-cfg.so.525.60.11
-rw-r--r--    1 root     root      252.4K Nov 23 22:47 libnvidia-cfg.so.525.60.11
-rw-r--r--    1 root     root       53.7M Nov 23 23:31 libnvidia-compiler.so.525.60.11
lrwxrwxrwx    1 root     root          29 Jan 22 14:28 libnvidia-encode.so.1 -> libnvidia-encode.so.525.60.11
-rw-r--r--    1 root     root      194.1K Nov 23 22:47 libnvidia-encode.so.525.60.11
lrwxrwxrwx    1 root     root          25 Jan 22 14:28 libnvidia-ml.so.1 -> libnvidia-ml.so.525.60.11
-rw-r--r--    1 root     root        1.7M Nov 23 22:49 libnvidia-ml.so.525.60.11
lrwxrwxrwx    1 root     root          29 Jan 22 14:28 libnvidia-opencl.so.1 -> libnvidia-opencl.so.525.60.11
-rw-r--r--    1 root     root       21.8M Nov 23 23:22 libnvidia-opencl.so.525.60.11
lrwxrwxrwx    1 root     root          26 Jan 22 14:28 libnvidia-opticalflow.so -> libnvidia-opticalflow.so.1
lrwxrwxrwx    1 root     root          34 Jan 22 14:28 libnvidia-opticalflow.so.1 -> libnvidia-opticalflow.so.525.60.11
-rw-r--r--    1 root     root       66.0K Nov 23 22:47 libnvidia-opticalflow.so.525.60.11
lrwxrwxrwx    1 root     root          37 Jan 22 14:28 libnvidia-ptxjitcompiler.so.1 -> libnvidia-ptxjitcompiler.so.525.60.11
-rw-r--r--    1 root     root       19.7M Nov 23 22:57 libnvidia-ptxjitcompiler.so.525.60.11
```

As you can see, I was on 525.60.11, but I just upgraded to 525.78.01; the symlinks are still there and ffdetect still works fine.

@3n8 I tried a slimmed down version of your compose file, it also works fine here, exact compose below.

```yaml
version: "2.3"
services:
  emby:
    image: emby/embyserver:latest
    container_name: emby
    restart: always
    mem_limit: 4g
    runtime: nvidia
    devices:
      - /dev/dri:/dev/dri
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    environment:
      - UID=999
      - GID=999
      - GIDLIST=44,110
      - NVIDIA_VISIBLE_DEVICES=all
```
Not sure where to go from here. I could create an Arch Linux VM to try, but Docker is supposed to make it so the underlying host doesn't matter (too much).

My best guess is that either the nvidia patch or the manual CUDA install is messing with the setup.


alucryd

Gave the nvidia-patch repo a try; the symlinks are still there, and ffdetect is still working fine :/

```
Detected nvidia driver version: 525.78.01
Attention! Backup not found. Copying current libnvidia-encode.so to backup.
5d1533ea4cfe301d678ff20b8c08633ba27e4dae  /opt/nvidia/libnvidia-encode-backup/libnvidia-encode.so.525.78.01
5202003f8ea39c85aaa8652cb99c0147732b6e42  /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.525.78.01
Patched!
```

@troy_ Why do you need to manually install CUDA? It looks like installing from the repository is enough.


alucryd

@3n8 Same on Arch Linux: I installed a fresh EndeavourOS VM to speed up the process, installed nvidia, docker, and nvidia-container-toolkit, and started it with the same command as on Ubuntu. I can see all transcoding options.


19 hours ago, alucryd said:

> do you need to manually install

Weeks of pain led to that install, unfortunately. We were seeing a lot of issues, and ended up finding the open drivers were far more reliable under ESXi than the distro-installed drivers.

There are also NVIDIA encode/decode concurrency limitations that can be, um, worked around by applying a patch.

I work a lot with virtualisation and GPUs for $DAYJOB, so I very easily could have over-complicated things.

I've just tried clean Ubuntu with distro drivers and still can't get the GPU working (inside ESXi), so in my case I will need to stick with the hackery.


  • 1 month later...
karlshea

I was sick of reinstalling the manual NVIDIA driver after each kernel update, so I tried switching to the Ubuntu-packaged drivers (nvidia-headless-525). They just plain don't work with the nvidia-container-toolkit. Removing all of the driver packages and reinstalling the driver manually made everything work again. The packaged drivers must just not be doing the same thing the actual driver installer does.

There seems to be an issue with no resolution here: https://gitlab.com/nvidia/container-toolkit/container-toolkit/-/issues/9


12 hours ago, karlshea said:

> I tried switching to the Ubuntu packaged drivers (nvidia-headless-525). They just plain don't work with the nvidia-container-toolkit.

That's interesting, thanks for the update.


It's strange, @karlshea; I haven't had issues with other containers accessing the GPU, it's only been Emby.

The `pre` script I created is working really well for me with the official Emby container; I've had no issues surviving updates, and even a version update of the driver.


karlshea

Yeah, running a Docker container that just runs `nvidia-smi` works fine in either case. Emby throws errors about not being able to open /dev/dri/renderDnnn when using the packaged drivers, but works with the manual ones (regardless of UID/GID).

I just don't know enough about how the nvidia-container-toolkit is adding resources, but whatever is going on, Emby doesn't like it.
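One way to narrow down what the toolkit is (or isn't) injecting, bypassing Emby entirely (hypothetical diagnostic commands, using the image name and paths from earlier in the thread):

```shell
# List what actually landed in the container's /usr/lib64 and /dev/dri
docker run --rm --gpus all --entrypoint /bin/sh emby/embyserver \
    -c 'ls -l /usr/lib64 /dev/dri'

# On the host, ask the NVIDIA container library what it would inject
nvidia-container-cli list | grep -E 'encode|cuvid|cuda'
```

If the packaged drivers leave entries out of the second list that the manual installer provides, that would explain the difference.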


We've got quite a few different containers that use the GPU, both for video and ML, and we've only noticed these issues with Emby.

I wasn't sure if it was an NVIDIA open-drivers issue with the container or something else.

Almost everything we use the GPU for is built specifically for NVIDIA, rather than being everything for everyone, which I think is causing the issue here.


  • 6 months later...

If it helps: the Quadro GPU I was using for hardware encoding broke. I bought a used 1660 Ti Super to replace it, and I no longer require the entrypoint fix.

