Docker


Luke


775 doesn't give you write access if the directory doesn't belong to uid 1000 or gid 100. How are you setting the uid/gid? This has changed between the mono and netcore versions.

 

 

The dir does belong to uid 1000 and gid 100. Since I'm using unRAID I just set the uid/gid in the docker template; I haven't touched this since the first time I installed the container. The key variables in the template are APP_UID and APP_GID, and I set the values to 1000 and 100.
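For what it's worth, a quick way to verify the mode and ownership from the host is something like this (the path is just an example, point it at your actual share):

stat -c '%a %u:%g' /mnt/user/media
# expected for this setup: 775 1000:100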

Edited by strike

alucryd

As described on Docker Hub, it's a list of additional GIDs to run Emby with, such as the video group for access to VAAPI render nodes.
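For example, a minimal sketch, assuming a render node at /dev/dri/renderD128 (the GID of 18 is just an example; check which group owns the node on your host):

stat -c '%g' /dev/dri/renderD128    # prints the owning GID, e.g. 18 for video
docker run -e GIDLIST=18 --device /dev/dri/renderD128 ... emby/embyserver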


I see. Will have to read up on the hw acceleration stuff to see if I can make use of it. Anyway, thanks for the help! Will report back if I still have problems. 


cyrenbyren

 

I wasn't asking about the entrypoint; I'd like to know the whole docker run command. FYI, the log shows the service started fine, so what makes you say it doesn't work? If the service didn't run as expected, you'd see something along these lines:

[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

 

it does seem like everything is running properly, which is what makes it hard to diagnose for me, but nothing is responding on port 8096 or 8920

 

as i'm running it from rancher, i can't see the exact command it runs to start it up. do you know if there is some way to see it after the container has started?

 

docker inspect gives me the following: 

[
    {
        "Id": "c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527",
        "Created": "2018-01-21T16:59:02.746058856Z",
        "Path": "/.r/r",
        "Args": [
            "/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 10723,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-01-21T16:59:03.159020392Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:99e3ffc4d579eb9945dfa9b5481e071f3297c885083f28c9997885938361b3e0",
        "ResolvConfPath": "/var/lib/docker/containers/c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527/hostname",
        "HostsPath": "/var/lib/docker/containers/c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527/hosts",
        "LogPath": "/var/lib/docker/containers/c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527/c3e93d757bf5928bb7d14ce15a738769a9544fa6b3a08fbd3524403231ada527-json.log",
        "Name": "/r-emby-emby-server-1-006a4dd4",
        "RestartCount": 0,
        "Driver": "overlay",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "emby:/config:rw",
                "/media:/media:rw",
                "rancher-cni:/.r:ro"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "none",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [],
            "CapDrop": [],
            "Dns": [
                "169.254.169.250"
            ],
            "DnsOptions": null,
            "DnsSearch": [
                "emby.rancher.internal",
                "emby-server.emby.rancher.internal",
                "rancher.internal"
            ],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "Init": false
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay/bab1464d2acc4cebe404d4046db002ba93bb392d57022f0808a1359dce0ad719/root",
                "MergedDir": "/var/lib/docker/overlay/c7df937337023788ad1075cf0591d45a517a335a2d035c8f49d77a44372b28e3/merged",
                "UpperDir": "/var/lib/docker/overlay/c7df937337023788ad1075cf0591d45a517a335a2d035c8f49d77a44372b28e3/upper",
                "WorkDir": "/var/lib/docker/overlay/c7df937337023788ad1075cf0591d45a517a335a2d035c8f49d77a44372b28e3/work"
            },
            "Name": "overlay"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "emby",
                "Source": "/var/lib/rancher/volumes/rancher-nfs/emby",
                "Destination": "/config",
                "Driver": "rancher-nfs",
                "Mode": "rw",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "bind",
                "Source": "/media",
                "Destination": "/media",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "rancher-cni",
                "Source": "/var/lib/docker/volumes/rancher-cni/_data",
                "Destination": "/.r",
                "Driver": "local",
                "Mode": "ro",
                "RW": false,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "c3e93d757bf5",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "1900/udp": {},
                "7359/udp": {},
                "8096/tcp": {},
                "8920/tcp": {}
            },
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": false,
            "Env": [
                "TZ=Europe/Stockholm",
                "APP_USER=root",
                "APP_UID=0",
                "APP_GID=0",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LANG=en_US.UTF-8",
                "FONTCONFIG_PATH=/etc/fonts",
                "ICU_DATA=/share/icu/59.1",
                "LD_LIBRARY_PATH=/lib:/lib/samba:/system",
                "LIBVA_DRIVERS_PATH=/lib/dri",
                "UID=2",
                "GID=2",
                "GIDLIST=2"
            ],
            "Cmd": null,
            "Healthcheck": {},
            "Image": "emby/embyserver:amd64_3.2.70.2",
            "Volumes": {
                "/config": {},
                "/media": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/.r/r",
                "/init"
            ],
            "OnBuild": null,
            "Labels": {
                "io.rancher.cni.network": "ipsec",
                "io.rancher.cni.wait": "true",
                "io.rancher.container.ip": "10.42.193.208/16",
                "io.rancher.container.mac_address": "02:e4:3b:6f:9a:c7",
                "io.rancher.container.name": "emby-emby-server-1",
                "io.rancher.container.pull_image": "always",
                "io.rancher.container.uuid": "006a4dd4-1728-4f82-ae1a-653b51d1d0ad",
                "io.rancher.environment.uuid": "adminProject",
                "io.rancher.project.name": "emby",
                "io.rancher.project_service.name": "emby/emby-server",
                "io.rancher.scheduler.affinity:host_label": "type=media",
                "io.rancher.service.deployment.unit": "c69098fd-b974-4b43-b5aa-bf7653922e33",
                "io.rancher.service.launch.config": "io.rancher.service.primary.launch.config",
                "io.rancher.stack.name": "emby",
                "io.rancher.stack_service.name": "emby/emby-server",
                "maintainer": "Emby LLC <apps@[member="Emby.media"]>"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "3e557ffa8557733eb51a7d814814f6ce81cea0d4534a5df6ade5f0ee0129537c",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/3e557ffa8557",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "499ca17824c3c1612a9e99c5a5d4d4a0f7636fd31e289eeb10324b6e65344f09",
                    "EndpointID": "ee52ee1284ab50a8e66a08a71339b3521550542a19f4458884f359f399e843ff",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]

alucryd

@cyrenbyren Looks like you're not publishing any ports; my docker inspect shows the following:

"Ports": {
    "1900/udp": null,
    "7359/udp": null,
    "8096/tcp": [
        {
            "HostIp": "0.0.0.0",
            "HostPort": "8097"
        }
    ],
    "8920/tcp": null
},

Port 8097 on the host points to 8096 in the container. You should add --publish 8096:8096 to the command line, or the equivalent option in rancher.
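For reference, a minimal sketch of the equivalent docker run flags (the volume paths are placeholders):

docker run -d \
    --publish 8096:8096 \
    --publish 8920:8920 \
    -v /path/to/config:/config \
    -v /path/to/media:/media \
    emby/embyserver:amd64_3.2.70.2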


cyrenbyren

Sounds like something is blocking the ports. Would rancher do anything to cause that?

 

i am using a separate container linked to the emby one as a reverse proxy, and all traffic goes through there, so the emby container only exposes the udp ports (7359 and 1900). i can try to remove the reverse proxy and see if that changes anything, but i am inclined to believe it shouldn't matter. 

 

that said, when i boot up the older version i see the memory usage bump up, while on the newer version it stays at 30MB with no cpu activity. so something appears to be preventing it from finishing startup.

 

EDIT: it actually rises above 30MB, by about 4MB/min. not sure what it's actually doing though. nothing shows up in the logs, and there's still no response on the tcp ports.

Edited by cyrenbyren

cyrenbyren

Ok, please see alucryd's response in post #1008 and let us know if that helps. Thanks!

 

i disabled the reverse proxy and tried running the container with the ports exposed directly (8096 and 8920), but there is no response on either port. just connection refused.

 

do you know if anything has changed between .60 and .70 that could affect settings, old configs, permissions, etc.? it feels like it should be something fundamental if even the ports fail to map, right?


I really can't think of anything; it's probably related to running rancher, perhaps just something we're not accounting for.

 

We'll have to see what @alucryd thinks. Thanks!


alucryd

@cyrenbyren Were you using custom APP_UID and APP_GID variables? The netcore version uses a standard daemon launcher from s6, which wants the UID, GID and GIDLIST variables instead. Please see "Installation (.NET Core)" on Docker Hub for more information; you can probably ignore GIDLIST if you don't need hardware acceleration.

 

Also, could you enter the container with sh as the entrypoint and try launching "/init" manually to see if you can get some additional logs?
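For example, a minimal sketch using the image tag from your inspect output:

docker run -it --rm --entrypoint sh emby/embyserver:amd64_3.2.70.2
/ # /init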

Edited by alucryd

@alucryd

I've noticed the changes to the EmbyServer.xml template for unRAID.

Thanks for that.
 
Now that the UID and GID default to 2 (daemon), is this recommended over the old setting where UID=99 (nobody) and GID=100 (users)?

The new netcore settings I'm using are UID=99 (nobody), GID=100 (users) and GIDLIST=100,18 (video).
Everything is running beautifully.
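In plain docker run terms, those template values map to something like this (a sketch of just the env flags):

docker run -e UID=99 -e GID=100 -e GIDLIST=100,18 ... emby/embyserver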
 
Thanks Ben

alucryd

@ben-nl With modern distros embracing systemd more and more, you can't expect default users and groups to have fixed UIDs and GIDs anymore. systemd-sysusers can be used to handle system users and assign random UIDs and GIDs for additional security. Arch Linux is one example: the nobody user doesn't exist anymore, and the users group is not GID 100.

 

The daemon user/group seemed the logical default: it's not root and is still guaranteed to be 2:2 just about everywhere. You can of course keep using the previous defaults if they work for you; switching to daemon will not provide any advantage.
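You can check what daemon resolves to inside the container like this (the exact passwd line depends on the base image, so treat the output as an example):

docker exec <container> grep '^daemon:' /etc/passwd
# e.g. daemon:x:2:2:daemon:/sbin:/sbin/nologin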


alucryd

Thanks for the explanation.

 

I'll change the UID and GID.

 

No problem. Please make sure your media files are readable by the daemon user if you do. No need to worry about the directory containing emby's configuration (/config in the container); it will be fine no matter what, as I'm doing a chown when you start the container.


No problem. Please make sure your media files are readable by the daemon user if you do. No need to worry about the directory containing emby's configuration (/config in the container); it will be fine no matter what, as I'm doing a chown when you start the container.

Thanks, it's completely clear now.


mastrmind11

Alright, I have successfully moved all my stand-alone stuff to docker and have been holding out on Emby for obvious reasons.  However, I think the net core version has matured to the point that I'm ready to pull the ripcord this weekend and try the core docker install.  My concerns:

 

Disclaimer:  I'm experienced w/ docker and linux, etc.

 

1)  Is it possible to migrate my existing user data, library, etc. to my standard docker user config directory?  If not, does the standard backup plugin work for users?  How about trakt integration?  (I only started using trakt a few months ago, but have been using Emby for a couple of years... is watched status native to emby backup or???)

 

2)  For VAAPI, should I add the admin and/or video user to the GID list when I set up the container, or should I tweak the video group to include the emby user instead?

 

Side note, this looks well done, and I look forward to breaking shit this weekend :).  Appreciate any feedback.

Edited by mastrmind11

cyrenbyren

@cyrenbyren Were you using custom APP_UID and APP_GID variables? The netcore version uses a standard daemon launcher from s6, which wants the UID, GID and GIDLIST variables instead. Please see "Installation (.NET Core)" on Docker Hub for more information; you can probably ignore GIDLIST if you don't need hardware acceleration.

 

Also, could you enter the container with sh as the entrypoint and try launching "/init" manually to see if you can get some additional logs?

 

i was using APP_UID and APP_GID, yes. i've changed these to UID and GID respectively.

 

i tried running /init from within the container, and it gets the same result. 

1/20/2018 11:58:25 AM[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
1/20/2018 11:58:25 AM[s6-init] ensuring user provided files have correct perms...exited 0.
1/20/2018 11:58:25 AM[fix-attrs.d] applying ownership & permissions fixes...
1/20/2018 11:58:25 AM[fix-attrs.d] done.
1/20/2018 11:58:25 AM[cont-init.d] executing container initialization scripts...
1/20/2018 11:58:25 AM[cont-init.d] done.
1/20/2018 11:58:25 AM[services.d] starting services
1/20/2018 11:58:25 AM[services.d] done.

i can however get it running if i manually run:

s6-applyuidgid -u 0 -g 0 /system/EmbyServer -programdata /config -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartexitcode 3

as specified in /etc/services.d/emby-server/run
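for reference, you can dump that script from a running container like this (the container name is a placeholder):

docker exec <container-name> cat /etc/services.d/emby-server/run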

 

when run that way, everything seems to be running as it should, so it doesn't appear that the init script actually reaches this point on its own.

 

i am not familiar with s6, but do you have any idea where things might freeze up? i'll try to figure it out on my end as well.

 

EDIT: i think i mentioned this before, but just to be overly cautious: rancher prepends the entrypoint with its own wrapper/startup script (/.r/r /init). might there be a reference somewhere to the entrypoint path that only expects the 1st argument to exist?

Edited by cyrenbyren

cyrenbyren

it seems like it is the chown part in /etc/services.d/emby-server/run that doesn't finish. it never completes when run with -R; i can however chown individual items with no problem. i'll see if i can find where it actually stops.

 

EDIT: alternatively, it might just take a _really_ long time.

EDIT2: it does take a _really_ long time (30 minutes). i have over 100k items that need chowning (well, not actually need, the permissions are already set correctly), maybe make this optional somehow? i'll tinker a bit.

 

it seems like chowning on a mounted volume is super slow in some cases. not sure why yet.

 

EDIT3: it seems to be due to how i mount the volume via nfs. file metadata operations are super slow compared to direct filesystem access.

i think the only solution (for me, at least, if i want to keep using nfs) is for the chown operation to be either configurable or optional somehow. would you support using an env variable to control whether the chown operation should run?
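something like this hypothetical guard in the run script is what i have in mind (SKIP_CHOWN is made up, not an existing variable in the image):

# hypothetical sketch, not part of the current image
if [ "${SKIP_CHOWN:-0}" != "1" ]; then
    chown -R "${UID}:${GID}" /config
fi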

Edited by cyrenbyren

mastrmind11

it seems like it is the chown part in /etc/services.d/emby-server/run that doesn't finish. it never completes when run with -R; i can however chown individual items with no problem. i'll see if i can find where it actually stops.

 

EDIT: alternatively, it might just take a _really_ long time.

EDIT2: it does take a _really_ long time (30 minutes). i have over 100k items that need chowning (well, not actually need, the permissions are already set correctly), maybe make this optional somehow? i'll tinker a bit.

 

it seems like chowning on a mounted volume is super slow in some cases. not sure why yet.

 

EDIT3: it seems to be due to how i mount the volume via nfs. file metadata operations are super slow compared to direct filesystem access.

i think the only solution (for me, at least, if i want to keep using nfs) is for the chown operation to be either configurable or optional somehow. would you support using an env variable to control whether the chown operation should run?

Yes, recursive metadata operations are brutal via NFS; I've noticed this too.  I also agree there should be a flag in the container command to disable chowning... all of my media is already the proper user:group, no reason to force it again.


dcrdev

Yes, recursive metadata operations are brutal via NFS; I've noticed this too.  I also agree there should be a flag in the container command to disable chowning... all of my media is already the proper user:group, no reason to force it again.

 

I've not noticed this -

 

I remember you saying you use zfs, right? Have you tried storing the xattrs in the disk inodes as opposed to the directory-based xattrs that zfs creates by default? You can do so by setting xattr=sa on the pool.
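For example (the dataset name is just a placeholder):

zfs set xattr=sa tank/media
zfs get xattr tank/media    # verify the new value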


cyrenbyren

Yes, recursive metadata operations are brutal via NFS; I've noticed this too.  I also agree there should be a flag in the container command to disable chowning... all of my media is already the proper user:group, no reason to force it again.

 

for the sake of accuracy, the chown is forced on the /config folder only. but i agree with you nonetheless, it should be optional.

