
Tips to set up emby on a QNAP NAS using Docker with HW Transcoding


Ikario


I haven't had success with that, mostly because of the way the drivers are handled on QTS/QuTS: they are not in standard directories. If you check QNAP's own guide on Container Station, it ends up running the docker command via the CLI anyway, so I don't think there's a good way to do it using the GUI. Even the way QNAP suggests running with GPU support via the CLI wouldn't work with Emby (not an Emby problem: mounting a volume the way it requires would override /usr/local or /usr altogether, which is obviously not good).


On 10/12/2021 at 5:21 PM, Ikario said:

Hey, 

As I posted before, I was for a while unable to install the Emby package on my TVS-872XT running QuTS Hero 4.5.4. I have now managed to install the server natively through a different method, but before that, I ended up setting up Emby via Docker. This was not very straightforward, but I managed to make it work with HW transcoding, both using the integrated GPU AND the discrete NVIDIA GPU I installed (Quadro P1000). I figured I'd write a post showing what I did for future reference. This should work with all of the x86/64 QNAP models, though.

This assumes you know how to log in via SSH and how Docker Compose works; it's by no means an in-depth guide.

If you manage to install the app natively, this is WAY easier: in my case it detected both Quicksync/VAAPI and NVENC/NVDEC without any extra fiddling.

To install natively, you have to use the CLI. Put your Emby qpkg somewhere accessible, for example /share/Public. Then do:

cd /share/Public
chmod +x embypackage.qpkg
sh embypackage.qpkg

Replace embypackage with the exact filename. This installs Emby natively without issues, again with FULL HW transcoding support. One con is that you won't be able to uninstall it via the App Center GUI: it will show up, but with no uninstall button. To uninstall, connect via SSH and use the console manager QNAP presents as soon as you log in to find the Emby package and uninstall it. The menu is pretty straightforward and you should have no issues with it.
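
If you want to confirm where the package landed, the getcfg trick used later in this thread should also work here; a minimal sketch, assuming the qpkg registers itself under the section name EmbyServer (check /etc/config/qpkg.conf for the actual name on your box):

/sbin/getcfg EmbyServer Install_Path -f /etc/config/qpkg.conf -d None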

 

Having said that, using the Docker container makes it easier to migrate to a new server, back up, or experiment without (supposedly, at least) a huge performance hit. If you want to go that route, it's a bit messier, but this is what worked for me:

First of all, the official embyserver image did not work for me. Maybe it was my fault, but even following the instructions I could not get it to boot at all. I ended up using the linuxserver/emby image.

If you just want to use the integrated GPU (Quicksync/VAAPI), you need to make sure /dev/dri has the right permissions, as it normally comes with rw permissions for the owner only (most likely admin):

chmod -R 770 /dev/dri

This way, you can add whatever group owns /dev/dri (check with ls -l /dev/) to the GIDLIST in the docker-compose file, and the container should automatically see the integrated GPU. You could also do 777 and forget about groups and users, but that's not something I'd be comfortable doing.
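
A minimal sketch of that lookup; renderD128 is the usual render node name and 999 is only an example GID, use whatever number your system actually reports:

ls -ln /dev/dri              # numeric listing; the group column is the GID, e.g. 999
# then, in the compose file's environment section:
#   GIDLIST: "999"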

You need to make sure this runs before either Container Station or your custom docker-compose starts. You could use Entware's init.d and write a startup script, or install RunLast and use either its init.d or scripts folder. If you are using Container Station, I'd go with Entware, as it runs before Container Station is even loaded. You could also use QNAP's official autorun.sh method, though I don't think that'll stick; you could try it if you don't feel like installing anything else.
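
For reference, a rough sketch of what such an Entware init.d script could look like; the S99dri name and the /opt/etc/init.d location follow Entware's usual convention, but treat this as an assumption to adapt rather than a drop-in file:

#!/bin/sh
# /opt/etc/init.d/S99dri -- example Entware startup script (hypothetical name)
# opens up /dev/dri so the GIDLIST group inside the container can use it
case "$1" in
  start)
    chmod -R 770 /dev/dri
    ;;
  stop)
    ;;
esac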

Getting the NVIDIA GPU to work though was harder to figure out but easier to implement.  It took me a bit of googling and combining a few methods here and there, but this is what my emby docker-compose file looks like:

version: '3.6'
volumes:
  emby_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Change the /share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby
      # path to whatever directory you'd like to use to store the temp volume
      # overlay files. Note: that path appears here TWICE so change both of them!
      o: lowerdir=/share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/upper,workdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/work

services:

    emby-prep:
        image: linuxserver/emby
        container_name: emby-prep
        environment:
          PUID:      "1000"       # Change these values as necessary for your own containers
          PGID:      "0"
          UMASK_SET: "022"
          TZ:        "Etc/GMT-3"
        volumes:
          - emby_usr:/emby_usr
          - /share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
        entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
        restart: "no"       # only needs to run once
    emby:
        container_name: emby
        depends_on:
            - emby-prep
        networks:
            - medianet
        environment:
            PUID: 1000 
            PGID: 1000 
            GIDLIST: 0
            TZ: Etc/GMT-3
        volumes:
            - emby_usr:/usr
            - /share/[REPLACETHIS2]/docker/config/emby:/config
            - /share/[REPLACETHIS3]/media:/media
            - type: tmpfs
              target: /scratch
              tmpfs:
                size: 10000000000
            #- /share/ZFS19_DATA/media/scratch:/scratch
            # uncomment that line if you don't have enough RAM to use tmpfs as scratch
        ports:
            - '8096:8096' # HTTP port
            - '8920:8920' # HTTPS port
            - '7359:7359'
            - '1900:1900'
        devices:
            - /dev/dri            # integrated Intel transcoder; remove if you don't have /dev/dri
            - /dev/nvidia0
            - /dev/nvidiactl
            - /dev/nvidia-uvm
        restart: unless-stopped
        image: 'linuxserver/emby:latest'

networks:
    medianet:
        driver: bridge

Again, I won't go into heavy detail, but this is basically what's going on:

1) Create an overlay volume called emby_usr (you can read more about it here: https://docs.docker.com/storage/storagedriver/overlayfs-driver/#how-the-overlay-driver-works) with the GPU driver's usr folder as the lower directory; we will use this to merge that directory with the /usr directory of the Emby container.
2) Run emby-prep, which runs once and exits. Its entrypoint copies the /usr directory from the container into the overlay volume, so we end up with an emby_usr volume that has both the container's /usr contents AND the NVIDIA files.
3) When running emby, we mount the emby_usr volume as /usr, now with the NVIDIA drivers included.
4) I have 32 GB of RAM, so I'm using a tmpfs in RAM for scratch; if you don't, you can delete all of that and uncomment the line below it.
5) Last but not least, you need to add the devices: /dev/dri lets you use the integrated GPU, the rest are for the NVIDIA GPU.
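
With the placeholder paths filled in, bringing it all up is the usual command; emby-prep runs first thanks to the depends_on entry:

docker-compose up -d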

 

Lastly, a few extra things:

I have my Docker container files in a share directory called docker. The path is either /share/docker/config or /share/ZFS20_DATA/docker/config; replace that with whatever works for your setup. Inside it, I made a folder structure like this:

config/
    ->emby/
        ->volume_overlays/
            ->emby/
                ->upper/
                ->work/

(This matches the upperdir/workdir paths in the compose file above.)

Same thing with my media folder: I have my media at /share/media/. To find where your NVIDIA_GPU_DRV directory is, you can use something like find /share/ -name "NVIDIA_GPU_DRV" and adjust the paths accordingly.

And that's pretty much it. You could also use this setup to remove the transcoding-stream limitation on lower-end graphics cards by adding a script to the entrypoint, but I have not done that yet. It shouldn't be too hard, but I don't have the time to mess with it now.
Hopefully this is useful for someone out there.

I can't edit this post again for some reason, but from time to time I come back to it because I was never particularly happy with this solution. This time I've found an alternate solution that should've been OBVIOUS from the start, but for some reason it took me two years to figure it out.

The basic thing we want to do is the same: make the NVIDIA drivers accessible to Emby. QNAP keeps them in a weird spot, so the only solution right now is to copy those files into the container. I was using a temporary Emby container and a volume overlay to create a copy of the Emby container's /usr/ directory with the NVIDIA drivers added, and then using that volume to overwrite the /usr/ directory in the real Emby container. Yes, it's a mess, but it works. The docker-compose, though, looks ugly.

 

So, this is another way to do basically the same thing without using a temporary Emby container and a volume overlay. As in the main post, we obviously need to know where the NVIDIA drivers are located. I've since found out you can use the following command to find that directory:

/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None

We'll call that directory NVIDIA_GPU_DRV.

1) First, create a Dockerfile with the following content:

FROM linuxserver/emby:latest
ADD . /usr/

This Dockerfile builds a new image based on linuxserver/emby:latest and copies the contents of the build CONTEXT directory (more on that later) into the /usr directory inside the image.
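
Before building, you can sanity-check that the context directory actually contains the driver files, reusing the getcfg command from above (assuming the package is registered as NVIDIA_GPU_DRV, as in this guide):

ls "$(/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None)/usr"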

2) Now make your docker-compose.yml look something like this:

version: '3.6'

services:
    emby:
        build: 
            dockerfile: [DOCKERFILELOCATION]/Dockerfile
            context: [NVIDIA_GPU_DRV]/usr
        container_name: emby
        environment:
            PUID: 1000 
            PGID: 1000 
            GIDLIST: 0
            TZ: Etc/GMT-3
        volumes:
            - /share/[REPLACETHIS2]/docker/config/emby:/config
            - /share/[REPLACETHIS3]/media:/media
            - type: tmpfs
              target: /scratch
              tmpfs:
                size: 10000000000
            #- /share/[REPLACETHIS3]/media/scratch:/scratch
            # uncomment that line if you don't have enough RAM to use tmpfs as scratch
        ports:
            - '8096:8096' # HTTP port
            - '8920:8920' # HTTPS port
            - '7359:7359'
            - '1900:1900'
        devices:
            - /dev/dri            # integrated Intel transcoder; remove if you don't have /dev/dri
            - /dev/nvidia0
            - /dev/nvidiactl
            - /dev/nvidia-uvm
        restart: unless-stopped
        image: embyhwtrc

You'll notice there's no longer an overlay volume or a temp container: in the build section we point at the location of the Dockerfile, and in context we set the usr directory inside NVIDIA_GPU_DRV as the build context, so we can ADD/COPY its contents into the Emby image's /usr/ directory.

3) Last step: instead of running docker-compose up -d, you now need to add --build to it:

docker-compose up --build -d

There shouldn't be any performance difference between my old and current methods. If anything, this last method actually copies the contents of NVIDIA_GPU_DRV/usr into the image, so running docker-compose up is now a bit slower, but personally I'd rather have that and get a cleaner, more understandable docker-compose without the volume overlay and temporary container, which to me was ugly even though it worked perfectly fine.
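
As an optional sanity check after the container is up (emby is the container_name from the compose file; whether nvidia-smi was included in the copied driver usr folder is an assumption about QNAP's package contents):

docker exec emby ls /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm
docker exec emby nvidia-smi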

 

 


jang430
Let me try this :D Indeed, since I'm not very knowledgeable about the ins and outs of Docker, I want to be able to run it with the bare minimum of lines that I can understand. This is shorter. I'll digest this and try.


jang430

@Ikario First of all, let me say this is soooo easy and clean :D I got it installed on the 2nd try; the 1st try failed through my own fault. I can also see the encoder and decoder options in advanced mode!

Now, there's a problem :D

My media is mounted as /share/media; I validated this, as my other working Emby server (without GPU support) uses the same path. With the same path in this container, whenever I go to add a library and click the + sign for Folders, nothing pops up; it just stays that way. I checked Portainer and saw my initially mounted folder was /share/ZFS19_DATA/media. I changed it to /share/media, same as my working Emby server. Still nothing. What could possibly be the problem? I did try making some changes in Portainer, such as the restart policy and port numbers, and I can see Portainer applies settings fine.


 

Edit:

I added the network qnet-static-eth0-79e66cc, as I also want this to be a fixed-IP host on 192.168.1.x, and it worked! Is it because my main NAS share is accessible on 192.168.1.x? Though with 192.168.1.x, I don't know why I have to use 8096 instead of 8098; under the original network (embyservergpu_default), it's reachable at 8098.



That has nothing to do with GPU support, so it's a bit off topic. Please share your docker-compose files and what your directories are, both inside and outside your container. The default directory for media on the container side is /media and not /share/media, for what it's worth. Also, according to that screenshot, no directories are added to that library.


My guess is that (as happened before, when you were setting up the previous version) you copy-pasted the template I posted and forgot to fill in or change some of the directories (or did it wrong), while keeping the same config files, so the Emby install is looking for a directory that's no longer there in the container. Also, I remember you copied the network setting from my previous template; I excluded it from this one because it had nothing to do with the topic and I realized it might make things more confusing. So there's a chance that by replacing the previous docker-compose with this one, some things broke if you were not careful. I'd double-check your docker-compose, as that's the only thing that changed.


jang430

version: "3.6"
services:
  emby:
    build:
      dockerfile: /share/Container/docker/apps/embyservergpu/dockerfile
      context: /share/ZFS530_DATA/.qpkg/NVIDIA_GPU_DRV/usr
    container_name: emby
    environment:
      PUID: 1000
      PGID: 1000
      GIDLIST: 0
      TZ: Etc/GMT+8
    volumes:
      - /share/Container/docker/configs/embyservergpu:/config
      - /share/ZFS19_DATA/media:/media
      - type: tmpfs
        target: /scratch
        tmpfs:
          size: 10000000000
    ports:
      - 8098:8096
      - 8920:8920
      - 7359:7359
      - 1900:1900
    devices:
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    restart: unless-stopped
    image: embyhwtrc

I changed my network settings inside Portainer to qnet-static-eth0-79e66cc and assigned a permanent IP. Now it works fine, but at 192.168.1.31:8096 (which is weird). Though I don't mind using port 8096; it's even preferable.

But the major issue is solved, and I'm very happy with this latest revision of yours.


shdwkeeper
Shouldn't your ports be:

      - 8098:8098
      - 8096:8096
      - 8920:8920
      - 7359:7359
      - 1900:1900

Or just get rid of 8098?


How do we update this container? I'm experiencing a problem: my newly added shows don't show up in Emby Server. I'm thinking maybe updating the container will trigger a library scan that adds the new shows. Do I issue the same docker-compose command?



What do you mean by update?


E.g., if I see in Emby Server that a new version is available, how can we update this Emby server to the newer version? Do we execute the docker-compose command once again, and it will get the latest?


Yes, to update the container, docker-compose up -d will check whether there's a newer image. I don't really know what that has to do with Emby not detecting your shows, though.
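
For what it's worth, with a prebuilt image an explicit pull is the reliable way to fetch a newer image before recreating the container (with the build-based compose file from this thread, you'd rerun with --build instead):

docker-compose pull
docker-compose up -d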

With all due respect, I feel like you are derailing the topic with unrelated issues that have pretty much nothing to do with hardware transcoding, Docker, or the QNAP platform specifically. Maybe make a post in the appropriate place or do a Google search? Some of these questions are extremely basic stuff that anyone could answer with less than 5 minutes of googling.


  • 1 month later...

Ok, so a quick update regarding all this.  

QNAP has done as QNAP does, and there's an update to QuTS Hero. The current version is 5.1.1, and this update brings a whole lot of problems, because they change parts of their non-standard setup and that usually ends up breaking something. In this case, it breaks pretty much everything. First things first, it will ask you to update Container Station. This updates the current Docker install, so "docker-compose" is no longer a thing; you now have to use docker compose. Not only that, but build is broken, and while I think it wouldn't be that hard to fix, I just can't be bothered, especially knowing that it'll probably break again on the next update. What this means is that it breaks my current way of adding the NVIDIA drivers to the container. The good news is that the previous way (the one from the first post) still works and would be very hard to break, so hey, you can go back to that.

The other thing that breaks is the NVIDIA drivers, in several different ways. You'll probably see an error message saying the drivers couldn't be updated and are disabled. This can be fixed, but it took me a while because it's as finicky and obscure as it gets. Uninstall the old driver. Reboot, install NVKernel, and WITHOUT REBOOTING install the NVIDIA driver. This may or may not work, but it's the most reliable way I found to reinstall those drivers and get them working. You may need to perform this dance a few times, rebooting and trying again, but eventually it'll work.

Lastly, because that's not all, they changed the way drivers are loaded at boot. If you try running the container, you'll notice it screams at you that there's no /dev/nvidia-uvm.

This is because that device gets created whenever an application in the OS requests access to the GPU. That doesn't happen when we pass it through to the Emby container, so it's non-existent up until then; we need to force it to load before running docker compose.
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-verifications 
Using that as a reference, add the following to either your startup script or the script you use to run docker compose:

/sbin/modprobe nvidia &&
/sbin/modprobe nvidia-uvm && D=`grep nvidia-uvm /proc/devices | awk '{print $1}'` && mknod -m 666 /dev/nvidia-uvm c $D 0

This will recreate /dev/nvidia-uvm; now you can run docker compose and hopefully things should work.
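
For example, a minimal wrapper script along these lines (the compose directory here is a placeholder, point it at your own config path):

#!/bin/sh
# recreate /dev/nvidia-uvm, then bring the stack up
/sbin/modprobe nvidia &&
/sbin/modprobe nvidia-uvm && D=`grep nvidia-uvm /proc/devices | awk '{print $1}'` && mknod -m 666 /dev/nvidia-uvm c $D 0
cd /share/docker/config/emby && docker compose up -d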

 

I know this is a lot, but hey, blame QNAP! I'm just letting you know how to fix it (until the next major update, anyway).


Ok, I'm really sorry to spam, but one quick update on the whole build situation. I figured out a way to solve it. Apparently it's not a QNAP thing but rather a BuildKit (the engine that actually builds images in Docker) issue with ZFS. BuildKit is the standard engine in the new version and the previous engine is deprecated, but you can still use the old one if you add DOCKER_BUILDKIT=0 to your command, so something like this:

DOCKER_BUILDKIT=0 docker compose build --no-cache

I would still use the first method, though, because that won't break any time soon, and using this is basically a race between this workaround breaking and them fixing the current engine. But if someone doesn't want to change their docker compose setup, well, now you know what to do.
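
Putting it together, a full rebuild and restart with the legacy builder would look something like this:

DOCKER_BUILDKIT=0 docker compose build --no-cache
docker compose up -d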


  • 2 weeks later...
On 8/28/2023 at 10:46 PM, Ikario said:


Do you have an example of how to run this? I'm using Portainer for my container management, and it sounds like Portainer doesn't work anymore?? Portainer would have to be installed the old way, via the Docker CLI.


How to run WHAT, specifically? Portainer (run via a container using docker compose) seems to be working fine for me. I'd love to help you; I'm just not following.


I meant: how do I add this to the startup script or docker-compose?
/sbin/modprobe nvidia && /sbin/modprobe nvidia-uvm && D=`grep nvidia-uvm /proc/devices | awk '{print $1}'` && mknod -m 666 /dev/nvidia-uvm c $D 0
I have the same problem that nvidia-uvm is missing after a reboot or firmware update.


Oh, then 

Step 1) Use the explanation I gave in the first post (where we create a temporary container just to copy the drivers into the usr directory of the Emby container).
Step 2) A few comments later, I explained how to create an autorun script that runs on boot, and I even posted an example.

Step 3) Add those lines somewhere before running docker compose or Portainer or whatever you use to run your Emby Docker image. And that's it!
 

One thing to take into account: the autorun.sh I proposed there is just a rough example that used to work for me; you should be able to create your own or modify that one to make it work for you. DO NOT COPY THAT FILE AS IS, because it won't work: for starters, it has a backup routine that points to my directories, and docker-compose no longer works on the newer versions (it is now docker compose). The method used to run that autorun.sh and its general structure are correct, though. Most importantly, if you really don't know what you are doing, it is maybe not the best idea to run stuff on startup, because you might break things you don't know how to fix, so be very careful.

 

Hope this helps!

 

QUICK EDIT: Have you made sure your drivers are properly installed? This applies specifically to the latest versions of QuTS Hero; it might be that you updated to another version and your driver is just broken and needs to be reinstalled, as I mentioned in a previous comment. Have you tried running those two lines through the CLI and restarting the Emby container to check whether transcoding works, before adding them to a startup script?

