
Tips to set up emby on a QNAP NAS using Docker with HW Transcoding


Ikario


Hey, 

As I posted before, I was for a while unable to install the emby package on my TVS-872XT running QuTS Hero 4.5.4.  I have since managed to install the server natively through a different method, but before that, I ended up setting up emby via Docker.  This was not very straightforward, but I got it working with HW transcoding, both with the integrated GPU AND the discrete NVIDIA GPU I installed (Quadro P1000).  I figured I'd write a post showing what I did for future reference.  This should work on all x86/64 QNAP models.

This assumes you know how to log in via SSH and how docker-compose works; it's by no means an in-depth guide.

If you manage to install the app natively, that is WAY easier: in my case it detected both Quicksync/VAAPI and NVENC/DEC without any extra fiddling.

To install natively, you have to use the CLI.  Put your emby qpkg somewhere accessible, for example /share/Public, then do:

cd /share/Public
chmod +x embypackage.qpkg
sh embypackage.qpkg

Replace embypackage with the exact filename.  This installs emby natively without issues, again with FULL HW transcoding support.  One con is that you won't be able to uninstall it via the App Center GUI: it will show up, but with no uninstall button.  To uninstall, connect via SSH and use the console manager QNAP shows right after you log in to find the emby package and uninstall it.  The menu is pretty straightforward and you should have no issues with it.
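
If you want to double-check that the package registered with the system, QNAP keeps qpkg entries in /etc/config/qpkg.conf.  Something like this should show it (the section name EmbyServer below is just a guess on my part, so grep for the real one first):

grep -i emby /etc/config/qpkg.conf
/sbin/getcfg EmbyServer Install_Path -f /etc/config/qpkg.conf -d None

The second command prints the install path, or None if the section name doesn't match what grep showed you.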

 

Having said that, using a docker container makes it easier to migrate to a new server, back up, or experiment, without (supposedly, at least) a huge performance hit.  If you want to go that route, it's a bit messier, but this is what worked for me:

First of all, the official embyserver image did not work for me.  Maybe it was me but even following the instructions, I could not get it to boot at all. I ended up using the linuxserver/emby image.

If you just want to use the integrated GPU (Quicksync/VAAPI), then you need to make sure /dev/dri has the right permissions, as it normally comes with rw permissions for the owner only (most likely admin).

chmod -R 770 /dev/dri

This way, you can add whatever group owns /dev/dri (check with ls -l /dev/) to the GIDLIST in the docker-compose file, and the container should automatically see the integrated GPU. You could also do 777 and forget about groups and users, but that's not something I'd be comfortable doing.
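
To make that concrete, here's roughly what the check looks like (the output below is only an example; your IDs will differ):

ls -ln /dev/dri
# crw-rw---- 1 500 100 226, 128 Oct 12 10:00 renderD128
# the number after the owner ID is the numeric group ID (100 in this example),
# and that's the value to put in GIDLIST in the compose file, e.g. GIDLIST: 100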

You need to make sure that chmod runs before either Container Station or your custom docker-compose.  You could either use Entware's init.d and make a startup script, or install RunLast and use either its init.d or scripts folder.  If you are using Container Station, I'd use Entware, as it runs before Container Station is even loaded.  You could also use QNAP's official autorun.sh method, though I don't think that'll stick; you could try it if you don't feel like installing anything else.

Getting the NVIDIA GPU to work though was harder to figure out but easier to implement.  It took me a bit of googling and combining a few methods here and there, but this is what my emby docker-compose file looks like:

version: '3.6'
volumes:
  emby_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Replace the [REPLACETHIS] placeholders below: REPLACETHIS1 is the volume where the
      # NVIDIA driver qpkg lives, REPLACETHIS2 is where you want the temp volume overlay
      # files stored. Note: REPLACETHIS2 appears TWICE, so change both!
      o: lowerdir=/share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/upper,workdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/work

services:

    emby-prep:
        image: linuxserver/emby
        container_name: emby-prep
        environment:
          PUID:      "1000"       # Change these values as necessary for your own containers
          PGID:      "0"
          UMASK_SET: "022"
          TZ:        "Etc/GMT-3"
        volumes:
          - emby_usr:/emby_usr
          - /share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
        entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
        restart: "no"       # only needs to run once
    emby:
        container_name: emby
        depends_on:
            - emby-prep
        networks:
            - medianet
        environment:
            PUID: 1000 
            PGID: 1000 
            GIDLIST: 0
            TZ: Etc/GMT-3
        volumes:
            - emby_usr:/usr
            - /share/[REPLACETHIS2]/docker/config/emby:/config
            - /share/[REPLACETHIS3]/media:/media
            - type: tmpfs
              target: /scratch
              tmpfs:
                size: 10000000000
            #- /share/ZFS19_DATA/media/scratch:/scratch
            # uncomment the line above (and remove the tmpfs block) if you don't have enough RAM to use tmpfs as scratch
        ports:
            - '8096:8096' # HTTP port
            - '8920:8920' # HTTPS port
            - '7359:7359'
            - '1900:1900'
        devices:
            - /dev/dri            # intel/Quicksync transcoder; remove if you don't need it
            - /dev/nvidia0
            - /dev/nvidiactl
            - /dev/nvidia-uvm
        restart: unless-stopped
        image: 'linuxserver/emby:latest'

networks:
    medianet:
        driver: bridge

Again, I won't go into heavy detail, but this is basically what's going on:

1) We create an overlay volume called emby_usr (you can read more about it here: https://docs.docker.com/storage/storagedriver/overlayfs-driver/#how-the-overlay-driver-works) with the GPU driver's usr folder as the lower directory; we will use this to merge that directory with the usr directory of the emby container.
2) emby-prep runs first, once, and exits.  Its entrypoint copies the usr directory from the container into the overlay volume, so we end up with an emby_usr volume that has both the usr directory from the container AND the NVIDIA stuff.
3) When running emby, we mount the emby_usr volume as /usr, now with the NVIDIA drivers included.
4) I have 32 GB of RAM, so I'm using a tmpfs in RAM for scratch; if you don't, you can just delete that block and uncomment the bind-mount line below it.
5) Last but not least, you need to add the devices: /dev/dri allows you to use the integrated GPU, the rest are for the NVIDIA GPU (a quick sanity check for all of this is sketched right below).
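
A rough way to verify it worked, assuming the container is called emby and that the QNAP driver package ships nvidia-smi under its usr/bin folder (which I believe it does, but check yours):

docker exec emby ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm
docker exec emby nvidia-smi

If the devices show up and nvidia-smi prints your card, the overlay trick worked and the NVIDIA encoder/decoder should show up in emby's transcoding settings.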

 

Lastly, a few extra things:

I have my docker container files in a shared folder called docker; the path is either /share/docker/config or /share/ZFS20_DATA/docker/config.  Replace that with whatever works for your setup.  Inside it I made a folder structure like this:

config/
    ->emby/
        ->volume_overlays/
            ->emby/
                ->upper/
                ->work/

Same thing with my media folder; I have my media at /share/media/.  To find where your NVIDIA_GPU_DRV directory is, you could use something like find /share/ -name "NVIDIA_GPU_DRV" and replace the path accordingly.
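
For reference, creating the overlay directories from the shell would look roughly like this (adjust /share/docker to wherever your docker share actually lives; the layout just has to match the upperdir/workdir paths in the compose file):

mkdir -p /share/docker/config/emby/volume_overlays/emby/upper
mkdir -p /share/docker/config/emby/volume_overlays/emby/work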

And that's pretty much it.  You could also use this to remove the transcoding stream limit on lower-end graphics cards by adding a script to the entrypoint, but I have not done that yet.  It shouldn't be too hard, but I don't have the time to mess with it now.
Hopefully this is useful for someone out there.

Edited by Ikario
made a whoopsie.

  • 2 months later...
shdwkeeper

If I just want to use the integrated GPU (Quicksync/VAAPI), what would I comment out in the compose file above?  I have this running natively via the app, but I wanted to try it in a container as well.  I like the write-up above; I'm just trying to figure out more of a step-by-step for this and how to use docker-compose on the QNAP.  I'm a little confused about how to do that, as I have been using the GUI and minimal command line.  You have medianet under networks; what's that representing?  Can I just use DHCP?

 

Thank you.


Hey!

Let me see if I can help you.

The "medianet" network is just that, a network that I created and named like that and is bridged to the internet and is home to all my containers. It's not needed at all for this, just make things easier when you have tons of things and you need all of them to connect to each other.  It allows me for example to connect to other containers using the container name as a hostname.
If I wanted to connect to the emby container with IP address 172.x.x.x, I don't need to know that, I can replace that with "emby" as a hostname and not worry about IP addresses.  I left it there just for completeness, but it's not needed or related.
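
Just to illustrate (radarr here is only a stand-in for any other container attached to medianet, and this assumes that container has ping available):

docker exec -it radarr ping -c 1 emby

That resolves the name emby to whatever IP docker gave the emby container on the medianet bridge.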

 

Regarding Quicksync: if that's all you need, you don't really need most of this, and it's way easier.  Here's what you need:

version: '3.6'
# volumes:
  # emby_usr:
    # driver: local
    # driver_opts:
      # type: overlay
      # device: overlay
      # # Change the '/share/DockerData/volume_overlays/plex' to whatever
      # # directory you'd like to use to store the temp volume overlay files
      # # Note: That path appears here TWICE so change both of them!
      # o: lowerdir=/share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/upper,workdir=/share/[REPLACETHIS2]/docker/config/emby/volume_overlays/emby/work

services:
    # emby-prep:
        # image: linuxserver/emby
        # container_name: emby-prep
        # environment:
          # PUID:      "1000"       # Change these values as necessary for your own containers
          # PGID:      "0"
          # UMASK_SET: "022"
          # TZ:        "Etc/GMT-3"
        # volumes:
          # - emby_usr:/emby_usr
          # - /share/[REPLACETHIS1]/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
        # entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
        # restart: "no"       # only needs to run once
    emby:
        container_name: emby
        # depends_on:
            # - emby-prep
        networks:
            - medianet
        environment:
            PUID: 1000 
            PGID: 1000 
            GIDLIST: 0
            TZ: Etc/GMT-3
        volumes:
            # - emby_usr:/usr
            - /share/[REPLACETHIS2]/docker/config/emby:/config
            - /share/[REPLACETHIS3]/media:/media
            - type: tmpfs
              target: /scratch
              tmpfs:
                size: 10000000000
            #- /share/ZFS19_DATA/media/scratch:/scratch
            # uncomment the line above (and remove the tmpfs block) if you don't have enough RAM to use tmpfs as scratch
        ports:
            - '8096:8096' # HTTP port
            - '8920:8920' # HTTPS port
            - '7359:7359'
            - '1900:1900'
        devices:
            - /dev/dri            # needed for the intel/Quicksync transcoder
            # - /dev/nvidia0
            # - /dev/nvidiactl
            # - /dev/nvidia-uvm
        restart: unless-stopped
        image: 'linuxserver/emby:latest'

networks:
    medianet:
        driver: bridge

You'll notice almost everything is commented out now; if you remove all the commented lines, you basically get something like this:

version: '3.6'

services:
    emby:
        container_name: emby
        networks:
            - medianet
        environment:
            PUID: 1000 
            PGID: 1000 
            GIDLIST: 0
            TZ: Etc/GMT-3
        volumes:
            - /share/[REPLACETHIS2]/docker/config/emby:/config
            - /share/[REPLACETHIS3]/media:/media
            - type: tmpfs
              target: /scratch
              tmpfs:
                size: 10000000000
            #- /share/ZFS19_DATA/media/scratch:/scratch
            # uncomment the line above (and remove the tmpfs block) if you don't have enough RAM to use tmpfs as scratch
        ports:
            - '8096:8096' # HTTP port
            - '8920:8920' # HTTPS port
            - '7359:7359'
            - '1900:1900'
        devices:
            - /dev/dri            # needed for the intel/Quicksync transcoder
        restart: unless-stopped
        image: 'linuxserver/emby:latest'

networks:
    medianet:
        driver: bridge

There is one other issue you need to fix if you are using a QNAP NAS and want to use Quicksync, and that is permissions. For some reason /dev/dri has the wrong permissions, and they reset every time the NAS boots, so you need to correct them at boot.  To do that, run this command at startup:
 

chmod -R 770 /dev/dri

Once you've done that, check with ls -l which group owns /dev/dri and add it to the GIDLIST in the yml, or just do 777 instead of 770 and forget about it (though, again, giving anything 777 permissions is not something I'd recommend if you are unsure of what you are doing).

One way to run this at startup is with either Entware or RunLast; if you need any help with that, let me know.  You could use the QTS/QuTS Hero autorun, but most changes done there don't stick, so I wouldn't recommend it.
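
For the Entware route, a minimal sketch would be something like this, assuming Entware is already installed and uses /opt/etc/init.d as its startup directory (check your install; the S99 prefix is just a convention for "run late"):

cat > /opt/etc/init.d/S99fixdri << 'EOF'
#!/bin/sh
# fix /dev/dri permissions so the group in the emby container's GIDLIST can use Quicksync
chmod -R 770 /dev/dri
EOF
chmod +x /opt/etc/init.d/S99fixdri

The RunLast way is covered in more detail in the next post.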

If you have any more questions, feel free to ask or shoot me a DM!

 


Ikario

He sent me a DM asking for a bit more info on how to run the chmod every time the system boots, plus some other docker questions.  So as to have all the info in one place, here's a summary of how to use RunLast to run a script at boot.  One thing to note: modifying the default init.d QNAP offers does nothing, because right after running it the system sets everything back to default, so it's pretty much useless.  That's why we use RunLast or Entware.

So, this is how I do it:

You can get RunLast from here: https://www.qnapclub.eu/en/qpkg/690 (you can also install the QNAPClub repo and download it from the App Center in QTS/QuTS Hero).  Once you have it installed, you WILL need some command line magic.  First you need to know where it was installed; for that, run the following command:

cd $(getcfg RunLast Install_path -f /etc/config/qpkg.conf)/

This will basically change your current directory to the RunLast install dir.  In there, you'll find two directories, init.d and scripts.  You'll put your autorun script there, but there's a specific syntax to it.  This is the way I do it: create a file named Sautorun.sh and put it inside init.d with this content:

#!/usr/bin/env bash
/sbin/log_tool -t 0 -a "Ran init.d loader" #This will add an entry to your QNAP log, just for debugging purposes.  You can remove it.
case "$1" in
    start)
        /share/ZFS20_DATA/docker/autorun.sh #REPLACE THIS LINE WITH WHEREVER YOUR AUTORUN SCRIPT IS
        ;;
    stop)
        # perform your 'stop' actions here
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo -e "\n Usage: $0 {start|stop|restart}"
        ;;
esac

The most important thing to notice here is the filename: it HAS to start with a capital S, otherwise it won't run.

The log_tool line is just a way to know, using QuLog, whether the script has run; it will appear in the Event Log.  The only thing you need to change here is the path to your REAL autorun script.  Most people put it in the scripts folder; I put mine somewhere easily accessible in case I need to find or modify it quickly.
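
In practice, putting it in place looks something like this (treat the execute bit as an assumption on my part, but init.d-style loaders generally skip scripts that aren't executable):

cd $(getcfg RunLast Install_path -f /etc/config/qpkg.conf)/init.d
# copy or create Sautorun.sh here, then:
chmod +x Sautorun.sh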

 

Finally, you need to create that autorun.sh file I referenced in Sautorun.sh. This is my autorun.sh:

#!/bin/bash
#exporting path because "which docker" does not seem to be working and the script does not have the path for some reason
export PATH=$PATH:/share/ZFS530_DATA/.qpkg/container-station/bin

#BACKUP ROUTINE to save my settings in snapshots every time I run this script
echo "Doing config backup with rsync"
rm -rf /share/ZFS20_DATA/docker/backups/snapshot3 &&
mv /share/ZFS20_DATA/docker/backups/snapshot2 /share/ZFS20_DATA/docker/backups/snapshot3 &&
mv /share/ZFS20_DATA/docker/backups/snapshot1 /share/ZFS20_DATA/docker/backups/snapshot2 &&
mv /share/ZFS20_DATA/docker/backups/snapshot0 /share/ZFS20_DATA/docker/backups/snapshot1 &&
rsync -aA --delete --link-dest=/share/ZFS20_DATA/docker/backups/snapshot1 /share/ZFS20_DATA/docker/config/ /share/ZFS20_DATA/docker/backups/snapshot0/ &&

#CHMOD so that quicksync works
chmod -R 770 /dev/dri

#RUNNING AND UPDATING DOCKER CONTAINERS
echo "Updating Bazarr, Jackett, NZBGet, Transmission, Radarr, Sonarr,jDownloader2 and Emby"
#/sbin/log_tool -t 0 -a "Updating Bazarr, Jackett, NZBGet, Transmission, Radarr, Sonarr, jDownloader2, Emby, portainer, NPM, Ombi and Organizr"

cd /share/ZFS20_DATA/docker
docker-compose pull
docker-compose up -d
docker image prune -f
#MAKE SURE ANY NEW FILE I MANUALLY ADD HAS THE RIGHT PERMISSIONS IN THAT FOLDER
chown -R dockeruser:dockermedia /share/ZFS19_DATA/media/video/

/sbin/log_tool -t 0 -a "[Autorun] Backup, Update and ran Docker containers successfully using RunLast init.d"
echo "All done!"

This is my personal autorun.sh file and it might not work for you (the paths will be all wrong, for one), but I'll briefly explain it because it might give people some ideas.  The first thing I do is a workaround: at that point in the boot process "which docker" comes back empty, so if I don't add the docker path to the PATH variable, I won't be able to run docker from this script.

Next is my snapshot backup routine, which is really ugly, but it works so I don't mess with it.  It keeps a few rotating snapshots of my docker config files in case I break something while messing around, so I can go back to a previous version.
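
Restoring one of those snapshots would look roughly like this (a sketch; stop the containers first and pick whichever snapshotN you need, snapshot0 being the most recent):

cd /share/ZFS20_DATA/docker
docker-compose down
rsync -aA --delete /share/ZFS20_DATA/docker/backups/snapshot1/ /share/ZFS20_DATA/docker/config/
docker-compose up -d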

After the backup routine comes the chmod command that you NEED for Quicksync to work.

Finally, I use docker-compose to pull updated images, recreate the containers, and prune the old dangling images.

There's one last line that makes sure any file manually added to my video folder gets the right ownership.  Sometimes my girlfriend adds a movie with her own user, and while that doesn't break anything, I really like keeping my permissions tidy.

 

In summary:

1) Install RunLast.

2) Add Sautorun.sh to the RunLast init.d dir.  This script calls your main autorun script.

3) Create an autorun.sh script that does what you want it to do (in my case: back up my docker config, fix /dev/dri permissions, fix media file ownership, and update/run my containers).
 

That's pretty much it.  


  • 2 weeks later...
On 10/12/2021 at 3:21 PM, Ikario said:

If you are using container station, I'd use Entware as it runs before container station is even loaded.  You could also use QNAP's official autorun.sh method though I don't think that'll stick, you could try though if you don't feel like installing anything else

I was able to get Emby working with Intel QuickSync hardware decoding via Docker through Container Station on an Intel QNAP. I already had Plex hardware decoding working. My script to change /dev/dri permissions is in autorun.sh, which I can confirm does work across reboots. I'm considering revisiting that approach now though, since the QNAP Security Counselor flags having autorun enabled as a high-risk item.

The only other thing I will mention is that hardware decoding didn't kick in until I went into the Emby transcoding settings and unchecked the VAAPI devices, leaving only the QuickSync option selected for every codec.


  • 7 months later...
Lexizilla

Hi,

I've been working on this same topic for hours and hours and I'm not able to get my Nvidia Quadro working for Emby in Docker :(

I used your Docker Compose file and only changed the things that needed changing.
 


version: "3.4"

volumes:
  emby_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Change the '/share/DockerData/volume_overlays/emby' to whatever
      # directory you'd like to use to store the temp volume overlay files
      # Note: That path appears here TWICE so change both of them!
      o: lowerdir=/share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/Docker-Files/emby/overlay_upper,workdir=/share/Docker-Files/emby/overlay_work

services:

  emby-prep:
    image: linuxserver/emby
    container_name: emby-prep
    environment:
      PUID:      "1000"       # Change these values as necessary for your own containers
      PGID:      "0"
      UMASK_SET: "022"
      TZ:        "Europe/Berlin"
    volumes:
      - emby_usr:/emby_usr
      - /share/CE_CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
    entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
    restart: "no"       # only needs to run once
  
  emby:
    image: linuxserver/emby
    container_name: emby
    hostname: emby
    networks:
      qnet-static:
        ipv4_address: 192.168.1.230
    depends_on:
      - emby-prep
    environment:
      PUID:      "1000"
      PGID:      "0"
      UMASK_SET: "022"
      TZ:        "Europe/Berlin"
    devices:
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    volumes:
      - emby_usr:/usr         # don't modify this

      # Change the following mounts to match your locations for config, tv, movies, etc.
      - /share/Docker-Files/emby/config:/config
      - /share/Serien:/Serien

networks:
  qnet-static:
    driver: qnet
    driver_opts:
      iface: "eth0"
    ipam:
      driver: qnet
      options:
        iface: "eth0"
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.100

I already checked via SSH that the folder /share/CE_CACHEDEV1_DATA/.qpkg exists.
It does, but it seems to be empty, or maybe the "normal" user has no permission to view those files?

Any idea where I can check what is going wrong? There are files in the work and upper overlay folders.

Kind Regards

 

hardware_detection-63796896055.txt

Edited by Lexizilla
add hardware detection log

Ikario

Do you have the NVIDIA drivers installed, by any chance?  You install those using the App Center in QTS/QuTS Hero, or by SSHing in and installing the qpkg you download from QNAP's website through the console.

If so, you can ssh and use 

find /share/ -name "NVIDIA_GPU_DRV"

to find where the NVIDIA_GPU_DRV directory is located.
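
Alternatively, if the driver package is properly registered, this should print its install path directly (it prints None if the entry isn't found):

/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None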

DM me if you need more help!


Lexizilla

It is installed via the QNAP App Center (NVidia Kernel and GPU Driver).

Emby installed locally on the NAS using the QNAP package works fine with the Nvidia card, but I'd like to use Docker in the future :)
I will check the command later today.

Thanks

Alex


Lexizilla

Okay,

it looks like I need to try another folder:

I will try that location tomorrow or late this evening. If it's not working, I'll contact you directly :)

(screenshot attached)


Ikario

Congrats! 

If you need or want to unlock your drivers so that you can transcode unlimited streams (it's locked to 3 streams by default), I made another post explaining how to do that too! Glad my guide helped :)

Edited by Ikario

  • 1 month later...
paddy75

Hello all,

the config from the first post was working for me too, without any problems, until now. After updating the nvidia driver on the QNAP to the latest version, the container doesn't want to start anymore. The error I get is:
 

Error response from daemon: failed to mount local volume: mount overlay:/share/CACHEDEV1_DATA/Container/container-station-data/lib/docker/volumes/emby_usr/_data, data:
lowerdir=/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/Docker/linuxserver-emby/overlay/upper,workdir=/share/Docker/linuxserver-emby/overlay/work: stale NFS file handle

I used this kind of configuration for other containers as well; they all have the same problem now 😪
I tried uninstalling Container Station, rebooting the QNAP, reinstalling it and deploying the container again, without luck. I also renamed the path where emby is installed and modified the yml file, again no luck, same error. I can't find out which files the mounts are written to so I could clean them up. There must be a command or another workaround to solve this problem?? I think this will happen every time new drivers are installed in the future.

Hope someone can help me here.


Ikario

Can you paste your docker-compose.yml? Specifically, the part where you create your overlay volume.  Also, did you update just the nvidia drivers or did something else change? If so, are you sure the install directory didn't change?
If you run the following command: 

/sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None

What does it spit out?

I have updated my nvidia drivers with no issues whatsoever for what it's worth.

 

EDIT:
OK, so I found this post you made in June: 
https://forum.qnap.com/viewtopic.php?t=166650&p=819823

and according to that, you are not telling the whole story here (you don't even mention updating gpu drivers in that post?):
 

Quote

Wanted to reduce the space of a thin volume on pool1. During the process I had a power failure.
After this a docker stack (configured with Portainer) wouldn't run anymore, and I tried to rebuild the container again without success.
 

 

First thing I would ask is: have you tried doing 

docker-compose down -v

before running

docker-compose up -d

The first command will delete all existing volumes so you can properly recreate them when you run the second one.
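
If compose can't find the project anymore (you mention Portainer), you can also remove the stale volume by hand.  The exact name may or may not carry a stack prefix, so list it first and remove whatever the first command prints (emby_usr below is only an example):

docker volume ls | grep emby_usr
docker volume rm emby_usr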

If that doesn't work, then I think you have issues in your filesystem (maybe it's just a matter of setting the right permissions?). I'm not sure I can help you there; you are not really giving us all the info, going by your other post.

Edited by Ikario
Clarified and added some info or possible solutions.

paddy75

Hi,
thanks for your reply. The problem after the power failure was solved once I deleted my SSD RAID1 pool and installed everything again.
I used tdarr, handbrake and emby without any problems until the driver update. No other changes were made. I'm still using Portainer for the deployment.

Here is the path:
 

[~] # /sbin/getcfg NVIDIA_GPU_DRV Install_Path -f /etc/config/qpkg.conf -d None
/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV

and here is what I'm trying to use, which was working before:
 

version: "3.4"

volumes:
  emby_usr:
    driver: local
    driver_opts:
      type: overlay
      device: overlay
      # Change the '/share/DockerData/volume_overlays/plex' to whatever
      # directory you'd like to use to store the temp volume overlay files
      # Note: That path appears here TWICE so change both of them!
      o: lowerdir=/share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr,upperdir=/share/docker/linuxserver-emby/overlay/upper,workdir=/share/docker/linuxserver-emby/overlay/work
services:
  emby-prep:
    image: linuxserver/emby
    container_name: emby-prep
    environment:
       PUID: "1002"       # Change these values as necessary for your own containers
       PGID: "100"
       UMASK: "022"
       TZ: "Europe/Berlin"
    labels:
      - wud.watch=false
    volumes:
      - emby_usr:/emby_usr
      - /share/CACHEDEV1_DATA/.qpkg/NVIDIA_GPU_DRV/usr/:/nvidia:ro
    entrypoint: /bin/bash -x -c "cp -Rv /usr/* /emby_usr/"
    restart: "no"       # only needs to run once
    
  emby:
    image: linuxserver/emby
    container_name: linuxserver-emby
    depends_on:
      - emby-prep
    environment:
      PUID: "1002"
      PGID: "100"
      UMASK: "022"
      TZ: "Europe/Berlin"
      NVIDIA_DRIVER_CAPABILITIES: "all"
    devices:
      # - /dev/dri            # uncomment this to use intel transcoder if available
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    volumes:
      - emby_usr:/usr         # don't modify this
      # Change the following mounts to match your locations for config, tv, movies, etc.
      - /share/docker/linuxserver-emby:/config
      - /share/EmbyMedia:/EmbyMedia
    ports:
      - "8096:8096"
      - "8920:8920"
    restart: always

The command:
docker-compose down -v
gives me back the following:

[~] # docker-compose down -v
ERROR:
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml

Sorry if some information is missing; I'm trying my best to provide everything I can.
It took a while to find out how hardware transcoding can work in docker, and once it ran there was no reason for me to change anything. I'm still a beginner with docker.
Wrong permissions in which directory???
QTS doesn't show any issues with the filesystem.

If you can't help either, maybe I'll try deleting my SSD RAID1 pool and recreating everything again. That helped me after the power failure. I hope it won't happen again with the next driver update.


Ikario

The down command needs to be run from the same directory where your yml file is located for it to work (same thing with the up command); it looks like you are not in the right directory.
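
In other words, something like this (the path is just an example; use wherever your compose file actually lives):

cd /share/docker/linuxserver-emby
docker-compose down -v
docker-compose up -d

or, if you'd rather not change directories, point compose at the file explicitly with -f:

docker-compose -f /share/docker/linuxserver-emby/docker-compose.yml down -v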


  • 6 months later...
jang430

Is it required to fix init.d if my NAS doesn't have Quicksync?  I'm using TS-873A, Ryzen, no Quicksync.

Should the Quadro hardware be in Container mode? Using P400

Edited by jang430

Ikario

If you don't need Quicksync then no, you don't need to do that.

Regarding which mode to put the GPU in: in my experience it makes absolutely no difference, but your mileage may vary.  I have it in QuTS mode right now.


  • 1 month later...
jang430

Hello.  I followed @Ikario's method to get my Nvidia P400 running in docker.  I have problems though.  I downloaded a movie that is 28 GB in size and tried to play it on an iPad that no longer supports the emby app (that issue has its own thread).  The show plays: I can see in the emby dashboard that it transcodes h.265 and encodes h.264, with the green chip right beside it, but it stops after a minute (screenshots attached).

I have other files around 12 GB in size that don't even start to play; as you can see in the screenshot, those don't have the green chip beside them.

 

My iPad has full signal bars.  What could be the problem?


Ikario

So, way more info is needed to debug this.  Have you tried it somewhere else like a browser?


jang430

I am actually using a browser on the iPad, since it's old and the emby app no longer works.  What info can I share to help?


  • 2 weeks later...
  • 4 weeks later...
jang430

@Ikario QNAP's compatibility list now shows support for the Quadro P620 and P1000.  I am using a P400 though.  In the supported features, it shows:

HD Station / Linux Station / Container Station / Virtualization Station

whereas GTX models show additional support for Hardware Transcoding aside from the features above.  I'm wondering if we can now start using the Quadro on containers via passthrough?

