Docker HWA Nvidia Instructions


cnstarz


I was able to install Emby on Docker with NVIDIA hardware acceleration without too much fuss, though I had to piece it together from a few different guides.  Below are the steps I took to get it working on Ubuntu 19.04 (assuming you already have Docker installed), for any other noobs trying to put this all together:

 

Step 1: Pull the Emby image from Docker Hub (assuming Docker is already installed)

docker pull emby/embyserver:latest

Step 2: Install nvidia-container-toolkit and nvidia-container-runtime

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu18.04/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt install nvidia-container-toolkit nvidia-container-runtime

Step 3: Register the nvidia runtime with docker

sudo tee /etc/docker/daemon.json <<EOF
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
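A malformed daemon.json will prevent dockerd from starting, so it's worth checking that the JSON actually parses before reloading the daemon. A quick sketch (it writes to /tmp purely for illustration; in practice you'd validate /etc/docker/daemon.json itself):

```shell
# Write the runtime config to a temp file and confirm it parses as
# valid JSON before installing it as /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
python3 -m json.tool < /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```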

Step 4: Reload the Docker daemon configuration

sudo pkill -SIGHUP dockerd
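At this point you can sanity-check that Docker picked up the new runtime. A sketch (these commands only do anything useful on the Docker host itself; the cuda image tag below is just an example — pick one that matches your installed driver):

```shell
# Confirm the nvidia runtime is now registered with the daemon.
docker info 2>/dev/null | grep -i nvidia

# Run nvidia-smi inside a throwaway CUDA container; if the GPU is
# visible here, the runtime is wired up correctly.
docker run --rm --runtime=nvidia nvidia/cuda:10.1-base nvidia-smi
```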

Step 5: Create the host directory that will back the Emby container's config volume

sudo mkdir -p /opt/docker/volumes/emby/config

Step 6: Create Emby container

docker container create \
    --name="emby-server" \
    --network="host" \
    --volume /opt/docker/volumes/emby/config:/config \
    --volume /mnt/media:/mnt/media \
    --device /dev/dri:/dev/dri \
    --runtime=nvidia \
    --publish 8096:8096 \
    --publish 8920:8920 \
    --env UID=997 \
    --env GID=997 \
    --env GIDLIST=997 \
    --env NVIDIA_VISIBLE_DEVICES=all \
    --env NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
    emby/embyserver:latest

Step 7: Start the Emby container

docker start emby-server

Step 8: Create /etc/systemd/system/docker-emby-server.service file with the following text:

[Unit]
Description=Docker Emby Container
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/docker start -a emby-server
ExecStop=/usr/bin/docker stop -t 2 emby-server
Restart=always
RestartSec=2

[Install]
WantedBy=default.target

Step 9: Give the docker-emby-server.service file execute permission (optional; systemd only requires the unit file to be readable):

sudo chmod +x /etc/systemd/system/docker-emby-server.service

Step 10: Enable docker-emby-server.service

sudo systemctl enable docker-emby-server

 

 

Notes:

  • Step 5: You can create this directory wherever you want; it doesn't have to match the path I used
  • Step 6:
    • --name="emby-server"
      • This names the container on the host for easy identification.  You can call it whatever you want.
    • --volume /opt/docker/volumes/emby/config:/config
      • This maps the directory (on the physical system) on the left side of the ":" to the directory (on the docker container) on the right side.  This is the same directory you created in Step 5, and is absolutely required.
    • --volume /mnt/media:/mnt/media
      • Same concept as above.  In my case, this is where all my media is stored.
    • --env UID=997
      • This is the UID of the 'emby' user on my system
    • --env GID=997
      • This is the GID of the 'emby' group on my system
    • --env GIDLIST=997
      • Same as above
  • Steps 8-10: These are optional and only ensure my emby-server container starts on boot.  I decided to go the route of a systemd service because, for all intents and purposes, Emby provides a service and is therefore a service.  If you don't want to go this route, you can instead add --restart unless-stopped \ to the command in Step 6.
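If you're unsure which numbers to plug into UID/GID/GIDLIST, they can be looked up on the host. A sketch using root (which exists on every system) as a stand-in; substitute the user and group that should own the Emby files, e.g. emby and video:

```shell
# Numeric UID of a user ('root' is a placeholder for e.g. 'emby').
id -u root

# Numeric GID of a group (field 3 of the getent output;
# substitute e.g. 'video' for 'root').
getent group root | cut -d: -f3
```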

Wait, that means our regular docker image works fine; we don't even need to extend any of the nvidia cuda images! I had created a tag specifically for nvidia using one of their images as a base. I'll update the instructions on dockerhub asap, and add both nvidia env variables to our dockerfile so they're enabled by default.


Was looking at adding my Nvidia GPU to an Emby docker. From everything I can tell, the Nvidia card is being loaded into the container, however I don't seem to be able to use it. Was wondering if someone could help point me in the right direction.

 

I have attached the log file for hardware transcoding. I also verified that --env NVIDIA_VISIBLE_DEVICES=all; --env NVIDIA_DRIVER_CAPABILITIES=compute,utility,video; --runtime=nvidia and --device /dev/dri:/dev/dri were all added

 

EmbyError.txt


Hi there, did you do everything mentioned in the first post?


Yeah, I ran through it twice. It must be something with how I have my docker environment set up, because I installed Emby on my actual machine (not in a container) and it worked just fine.

D34DC3N73R

FYI, docker 19.03 has an integrated gpu option.

 

https://github.com/NVIDIA/nvidia-docker

Note that with the release of Docker 19.03, usage of nvidia-docker2 packages are deprecated since NVIDIA GPUs are now natively supported as devices in the Docker runtime.

 

 

 

 

switch out 

    --env NVIDIA_VISIBLE_DEVICES=all \
    --env NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \

with

  --gpus all \

See usage and install instructions in link above.
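Putting that together, the create command from the first post would look roughly like this under Docker 19.03+ (a sketch reusing the paths and IDs from the original example; adjust them to your own setup):

```shell
# Same container as in Step 6, but using the native --gpus flag
# instead of --runtime=nvidia and the NVIDIA_* env variables.
docker container create \
    --name="emby-server" \
    --network="host" \
    --volume /opt/docker/volumes/emby/config:/config \
    --volume /mnt/media:/mnt/media \
    --gpus all \
    --env UID=997 \
    --env GID=997 \
    --env GIDLIST=997 \
    emby/embyserver:latest
```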

D34DC3N73R

I should also note that the integrated gpu option isn't available in docker-compose yet. Docker-compose users should continue to use the nvidia-specific environment variables along with editing /etc/docker/daemon.json. Also note that the runtime option is not supported in 3.x+ compose files. Docker-compose users should edit daemon.json so that nvidia becomes the default runtime (until docker-compose can make use of the new gpu option).

 

/etc/docker/daemon.json

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

example of docker-compose

    emby:
        image: emby/embyserver
        container_name: emby
        network_mode: host
        restart: unless-stopped
        environment:
            - TZ=$TZ
            - UID=$PUID
            - GID=$PGID
            - GIDLIST=44
            - NVIDIA_VISIBLE_DEVICES=all
            - NVIDIA_DRIVER_CAPABILITIES=all
        volumes:
            - /.config/emby:/config
            - /media/Video/TV:/media/Video/TV
            - /media/Music:/media/Music
            - /media/Video/Movies:/media/Video/Movies

Hmmm... as far as I can tell, this won't work on Arch Linux.

 

  emby:
    image: emby/embyserver:latest
    container_name: emby
    hostname: emby
    volumes:
      - /mnt/embyraid/tvshows:/tv
      - /mnt/embyraid/movies:/movies
      - ${USERDIR}/docker/shared:/shared
      - /mnt/embyraid/4kmovies:/4kmovies
      - "/etc/localtime:/etc/localtime:ro"
      - ${USERDIR}/docker/emby:/config
    ports:
      - "8096:8096"
      - "8920:8920"
    restart: always
#    runtime: nvidia
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - GIDLIST=964
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    networks:
      - traefik_proxy
    labels:
      - "traefik.enable=true"
      - "traefik.backend=emby"
      - "traefik.frontend.rule=Host:emby.${DOMAINNAME}"
      - "traefik.port=8096"
      - "traefik.docker.network=traefik_proxy"
      - "traefik.frontend.headers.browserXSSFilter=true"
      - "traefik.frontend.headers.contentTypeNosniff=true"
      - "traefik.frontend.headers.SSLProxyHeaders=X-Forwarded-Proto:https"
      - "traefik.frontend.headers.forceSTSHeader=true"
      - "traefik.frontend.headers.SSLRedirect=true"
      - "traefik.frontend.headers.STSSeconds=315360000"
      - "traefik.frontend.headers.SSLHost=*.domain-us.net"
      - "traefik.frontend.headers.STSIncludeSubdomains=true"
      - "traefik.frontend.headers.STSPreload=true"
D34DC3N73R

 


 

 

Won't work how? Doesn't start, or doesn't pass through the GPU? For what it's worth, the runtime variable is not supported in docker-compose 3.x+. I do see it's commented out. The nvidia runtime must be set as default in /etc/docker/daemon.json. Are you positive the nvidia drivers have been installed on the host system as well as the nvidia container toolkit?

  • 2 months later...

Yeah, it's not passing through the GPU.

 

Docker 19.03

Compose 1.24.1

 

And yes, the nvidia drivers are installed and working on the host, as well as the nvidia container runtime and toolkit.

 

Do I need --device /dev/dri:/dev/dri in my compose.yml?


ls -la /dev/dri looks like this

 

drwxr-xr-x   3 root root        100 Nov 23 20:48 .
drwxr-xr-x  21 root root       3840 Nov 23 20:48 ..
drwxr-xr-x   2 root root         80 Nov 23 20:48 by-path
crw-rw----+  1 root video  226,   0 Nov 23 20:48 card0
crw-rw-rw-   1 root render 226, 128 Nov 23 20:48 renderD128
 

ID of the video group above is 986, so should that be my GIDLIST=986 ?

D34DC3N73R


`grep video /etc/group` should let you know the group number for video. If that's 986, then yes GIDLIST=986 is correct. FWIW, you shouldn't need the GIDLIST variable or have to pass /dev/dri when using the nvidia container toolkit. Have you edited /etc/docker/daemon.json to make nvidia the default runtime? (requires restart of the docker service after edit). Does your card work with any other containers?

