Docker


Luke

Recommended Posts

What's everyone using to automate updating containers?  and how do you recommend doing it manually?  I'm fairly certain doing an update from the "update" button from inside a container is pretty frowned upon.  Thanks

 

 

i'm just stopping the container and starting it up with the latest tag and the same startup arguments. :)

(with rancher, this is done simply with an "upgrade" feature)

 

 

I personally have been using the manual emby server upgrade as described on the Docker Hub page:

docker exec emby-server update

Which leads me to wondering which way is recommended:

1. Pull the latest container image

or

2. Update the emby server within the same container (as above).

 

Is there any difference at all in the end?


BarryAmerika

What's everyone using to automate updating containers?  and how do you recommend doing it manually?  I'm fairly certain doing an update from the "update" button from inside a container is pretty frowned upon.  Thanks

 

For auto updates I use watchtower ( https://hub.docker.com/r/v2tec/watchtower/ ), though sometimes auto updates are not such a good thing. You can set it to only update certain containers and specify the checking interval.
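For reference, a typical watchtower invocation looks something like this — the interval value and the container name are just examples, substitute your own:

```shell
# Run watchtower as a container; it needs the docker socket in order
# to manage other containers. Here it checks only the "emby" container
# once an hour. (container name and interval are placeholders)
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower \
  --interval 3600 \
  emby
```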

 

For a manual update I would use

docker stop emby
docker rm emby
docker pull emby/embyserver

and then start it up again. (Note: after "docker rm" the container is gone, so "docker start emby" won't work; you need a fresh "docker run" with your original arguments.)
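Spelled out in full, the cycle looks something like this — the volume paths and port are placeholders, substitute the arguments from your original run command:

```shell
# Stop and remove the old container, pull the newest image,
# then recreate the container with the same arguments as before.
docker stop emby
docker rm emby
docker pull emby/embyserver
docker run -d \
  --name emby \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  -p 8096:8096 \
  emby/embyserver
```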

 

For manual updates of all my docker containers at once...

docker rm -f $(docker ps -a -q)
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull

I choose to just update the containers rather than diddle the contents. I have also just updated from the emby manage server front end a few times just to see if it worked and have never struck trouble.
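The one-liner above force-removes every container before pulling; a gentler sketch that only re-pulls images (assuming plain repo:tag names) would be:

```shell
# List local images as repo:tag, skip dangling <none> entries,
# and pull each one in turn. Running containers are left alone;
# recreate them afterwards to pick up the new images.
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -v '<none>' \
  | while read -r img; do
      docker pull "$img"
    done
```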


mastrmind11

tried watchtower in the past, works as expected, but I generally stay 1 version behind, once the brave give it a thorough run-through and confirm libraries aren't wiped, etc.

 

@@cyrenbyren I tried Rancher.  It's way too much for my small home lab, unless I'm missing something.  i.e., the concept of a stack seems pretty good, but why, if it can sense that I have several containers already running (under the container menu), can't I add them to a stack?  I don't mind rebuilding the container, but it doesn't even suck in existing container params.... I don't feel the need to redo every single start param considering they're already running flawlessly now.  How do I add an external repo that isn't part of the rancher ecosystem?  It wasn't apparent to me (granted, I've not spent a ton of time or effort, but it's not obvious).  Lastly, wtf is going on w docker ps -a?  There are like 10 containers running alongside my own....  while the convenience might be nice assuming I can get it working properly, manually, wtf?  If I stop rancher the others sit there and spin.  Guessing this thing is way too enterprise grade for my needs?  side note, wtf do you do that you need that kind of functionality? :)


cyrenbyren

haha :D

 

it is a bit enterprise.

 

so what rancher does is hook into the docker sock of the host so it can manage everything itself. but as you say, any currently running container will be detected but not _owned_ by rancher. so you do actually need to take your solo containers down and add them to a stack in rancher with all their parameters. after that is done however, all those parameters are saved, so an upgrade is as easy as clicking upgrade, setting the new image tag (or not, if you're using a tag such as "latest" or similar), and clicking ok. it will stop (but not remove) the container and spin up the new one on the new tag. when it's up and running and you are satisfied everything works, you click "finish upgrade". if something goes wrong, you can just rollback and get back the old container. once finish upgrade is done though, the old container is removed.

 

as to adding an external repo, that kinda depends on what you mean! :) rancher has something it calls "catalogs", which is a catalog of images that can be added directly from within rancher, with settings listed for you etc. i have never tried setting a new one up myself, just using rancher's own and its "community" one. aside from catalogs, it uses the docker hub by default if you don't specify where you want to pull the image from, but you can add in the whole path if you have somewhere else you want it to pull from. and if you are hardcore, and have your own docker repo, you can set that as the default one instead of docker hub. this is in advanced settings.

 

as to additional containers it spins up, it's mostly related to the existence of rancher stacks. each stack has its own network, essentially, which means that any container added to a specific stack is accessible by any other container in that stack. this is similar to what docker compose uses (and you can in fact use a docker compose-file as base for a stack) but is, in my opinion, more flexible. but there is more! it also has a container that handles health monitoring, and you can specify how you determine a specific container being healthy (e.g. answers 200 OK on port 80, has a specific tcp port open, etc). it's helpful. so in essence, the additional containers do: networking, health monitoring and container management (via rancher-agent and the scheduler).
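for anyone not on rancher, plain docker can do a similar per-container health check — roughly like this (the curl command, port, and intervals here are illustrative, not rancher's defaults):

```shell
# Mark the container unhealthy unless the web UI answers on port 8096.
docker run -d \
  --name emby \
  --health-cmd='curl -fsS http://localhost:8096/ || exit 1' \
  --health-interval=30s \
  --health-retries=3 \
  emby/embyserver
# "docker ps" will then show (healthy)/(unhealthy) in the STATUS column.
```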

 

i started using rancher for evaluation really, for work stuff. then i kinda fell in love with how easy it was to manage. it also lets me forget about all parameters to start up a container, because it saves them all independently from the container itself. if you were to accidentally remove a container, you'd lose all setup for it. with rancher, it's not really that important. the folders that the volumes point to are still there, so just map them up once and then check them if you ever need to refresh your memory, but they are persisted in a way docker by itself doesn't do. the most important feature for me though, as i have several hosts that run docker stuff, is storage management. i have one server with zfs that i have a volume shared over nfs with, and i use this for all volumes used in my containers. the rancher-nfs storage driver will mount this up on all my hosts, making them accessible on the fly for any new container i spin up. it will just add a new folder in the nfs-share named after the container, and it just works. :D

 

EDIT: so i lied a little. any container run on a host owned by rancher is manageable by rancher, but it differentiates between a container and a service. a service is a setup for a container existing in a stack. a container is just a standalone. however, a service IS just a setup for a container, so i'm looking now to see if there is an easy way to convert a preexisting container and just clone the exact settings into a service. you can clone the container easily enough, just click the options for the container and hit "clone" and you'll get a pre-filled out template of that exact container, but it doesn't seem to want to change it into a service. i'll have a looksee!

 

EDIT2: also, as a sidenote. while rancher is a bit enterprise, i hear good things about portainer. it does similar things to rancher, but is not as enterprise, as it were. check it out?


adrianwi

I tried, and failed, to get my head around Rancher, but have found Portainer much easier to understand and set up.  Whilst I can see the obvious benefits of Docker containers, I'm not sure how I'm going to use it within my FreeNAS set-up, as I can do pretty much the same with Jails and don't have the resource constraints of running docker in a restricted VM.

 

I'm still playing with emby in a Ubuntu VM running Docker, and it does seem a little quicker, although probably not enough to ditch my emby Jail.


mastrmind11

Awesome response. Thanks man

 




 

I tried, and failed, to get my head around Rancher, but have found Portainer much easier to understand and set-up.

 

Agreed with Portainer... I was interested in a way to manage my containers and looked quickly at Rancher and Portainer, and got the same reactions. While Rancher seemed to be very powerful and full of features, Portainer was very easy to set up and use. For someone like me who does not need (and probably cannot use fully) something as "refined" as Rancher, Portainer is a very good option!


FlyGuy94

I use unRAID; updating containers is just a push of a button. But most of my containers auto-update at 2 in the morning using an unRAID plugin.


mastrmind11

Agreed with Portainer... I was interested in a way to manage my containers and looked quickly at Rancher and Portainer, and got the same reactions. While Rancher seemed to be very powerful and full of features, Portainer was very easy to setup and use. For someone like me who does not need (and probably cannot use fully) something as "refined" as Rancher, Portainer is a very good option!

Does Portainer tell you when there's a new container update available?  Based on git and the issue tracker, it seems that it's not yet implemented?


Does Portainer tell you when there's a new container update available?  Based on git and the issue tracker, it seems that it's not yet implemented?

 

Not that I'm aware of. I think that is where something like "Watchtower" would come into play. I personally am always a little hesitant to update right away, so I usually don't like auto-updates and prefer waiting a little bit and manually triggering the update when ready.

 

 

Speaking of, does anybody have any insight on my following questions?

   

I personally have been using the manual emby server upgrade as described in the hub.docker page:

docker exec emby-server update

Which leads me to wondering which way is recommended:

1. Pull the latest container image

or

2. Update the emby server within the same container (as above).

 

Is there any difference at all in the end?

 


Heinrich

Hi Guys. If I convert to Emby for Docker I can turn off my Crashplan / Emby server and retire it. What I need advice on before I do that is:

1) Check the library function about saving the metadata with the source file (however that's worded.)

2) Install Emby Docker

3) Restore backup configuration via Backup plugin (I'm Emby Premiere.)

Would this get me back to where I was when I turn off my current server? Is all this data compatible? (images / thumbnails / configuration settings). Is the functionality 1:1 with what I am accustomed to with Emby Windows?

thanks



mastrmind11

 

If your metadata is saved w/ your media, then yes.  I just switched from emby mono to the new docker, restored the stuff from the backup plugin, added my libraries and ran a scan.  Worked like a charm, all my watched statuses and users came over, and was back in business in under an hour total.  So yeah, you should be good to go.  Just don't do anything crazy like deleting your old setup prior to getting docker set up.  I shut my old server down for about a week while I let the docker version run as my main server just to be sure nothing was broken.  Haven't had the need to turn it back on and am about a week away from removing it entirely from the box.  Good luck.

 

edit:  If your current setup is not already saving metadata w/ media, when you flip the switch it isn't retroactive so you're going to have to kick off a manual library scan for everything.


Heinrich

Been struggling for EIGHT HOURS on this. Wasting away my time on my days off here :( I'm really exasperated.

 

What has happened is that my Emby server died last week. It was a Windows 10 server that would not boot. It was also my Crashplan backup machine. Since I was running out of UnRaid disk space, I decided to shut the thing down, re-install windows 10, and pull the backup drives out to use at least the 8TB over on UnRaid server for storage. So I did two things (that turned into three.) Rebuilt WHS2011 / Emby and expanded UnRaid storage.

 

Back on WHS 2011/Emby from yesterday, my TV shows were working but movies would not play. A few would play on Kodi Windows, but not ISOs. Nothing would play on either Android TV device I have. The screen just flashed.

 

So I just made a third change this morning. I shut down WHS2011 Emby and stood up Emby Docker on the unraid server. YAY. It's really only made things worse. Absolutely nothing plays. The images are pulled in, the stats are correct, but when I try to play anything the screen flashes.

 

I'm struggling to get to the Kodi log ... the on-screen path says storage/emulated, but the file browser with "show hidden" on does show storage but not emulated.

 

The emby log is so big it's crashing pastebin and my browser when I try to throw it in there. ARGH, I could throw all this stuff out the window!

 

Edited for pastebin https://pastebin.com/q5tcML24


mastrmind11

@@Heinrich, this log looks OK to me. Are you trying to play a video file or an .ISO? 

He's got another post in the last couple days elsewhere outlining his issue.  


Heinrich

.ISO

 

I have made progress with Windows. Essentially I changed all server names, deleted the media folders from Emby, and then re-added them and let them rebuild. 

 

My Android devices still don't play movies, though. And I can't find the kodi log on them but I guess that's a kodi issue.

 

That's part of the problem - I just don't know where the issue lies. Something in Kodi, Emby, or Unraid.  

 

At least by changing the server name, I don't get any more crashes from the add-ons. They all crash in a very-difficult-to-get-out-of manner after a couple re-installs of Emby using the same setup info (server name / credentials.)

 

I realized that part of what baffles me is that I don't understand the relationship between Emby Connect / Emby local server credentials / Windows credentials / Unraid credentials. I don't understand how they all four inter-operate and maybe I have something wrong in the chain. 


Heinrich

Morning - the Zidoo X9S has its own player baked into Kodi. HiMedia has the same, though they call it a "wrapper." Both these products have worked fine for a year. When I rebuilt the Emby server and upgraded 1 unraid hard drive, I started having exactly the same problems across all platforms - Windows & Android - 2 different machines of both (one Intel and one AMD Windows), but the behavior was exactly the same.

 

Now I've got Windows working (and I hope it still is as of today, I just got up LOL) but not Android. Perhaps with some scheduled tasks last night I'll get lucky and Android will work. I'm going to do a full database reset / rebuild if not. If that doesn't work I'll struggle with finding the kodi log on that thing. 

 

this is my post from a few days ago with a kodi log

 

https://emby.media/community/index.php?/topic/55716-movies-just-blink-dont-play/ 


mastrmind11

I cannot get my login to stick since switching to docker.  I have my kids setup for no password when local, and it isn't honored.  I have the client set to never timeout and remember last logged in.  I'm guessing it has to do with how emby detects local networks since the container IP is not in my subnet.  I'm surprised this hasn't been brought up.  I wonder whether this is happening on my other clients where a password is required but set to remember last login/never timeout.  I'll report back in a couple mins.

 

edit:  ok, neither client is honoring the "prompt for password" option being unticked.  Strangely, when I open Emby it says "unable to connect to server 'Emby'" which was the name of my old (pre-docker) server.  I can see the docker emby server in the server pick list and can connect fine.  I also tried manually entering the server info using the non-docker (local) IP thinking maybe it was getting tangled up w/ the change in IP from local to docker, and was able to connect but that did not solve the issue.


cyrenbyren

you need to make sure that the docker container is running on "host" network. otherwise (as you say) it will use its own network. :)

 

add this to your startup line:

--network="host"
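i.e. a full run command would look roughly like this (the volume paths are placeholders — use your own):

```shell
# Host networking: the container shares the host's IP stack and ports,
# so emby sees clients on your real LAN subnet instead of the bridge.
docker run -d \
  --name emby \
  --network=host \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  emby/embyserver
```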

mastrmind11

 

I figured as much, but I'd like to know from @@Luke or @@ebr whether not running in host mode causes the emby internal/external logic to flip?  I know there is a FR that is supposedly rolling out for being able to specify IP ranges for internal/external, so I'm curious if this is the actual culprit, and if so, it should *definitely* be included in the docker docs on dockerhub.  I'm ok adjusting for the former, but this is going to be a real pain for noobs just blindly following the docker instructions.

 

edit:  I just spun up another container w/ the host flag set, and everything is working properly everywhere.  This definitely needs to be documented somewhere, or the server needs to account for this by default.  This has the potential to be a support nightmare, and I am stunned I'm the first to have encountered this.

 

edit2:  You should also mention restart policy on the page, and probably default the command to --restart unless-stopped, for obvious reasons.
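for example, something along these lines (paths are placeholders):

```shell
# unless-stopped: restart the container after a crash or a docker
# daemon/host restart, but stay down if you stopped it yourself.
docker run -d \
  --name emby \
  --restart unless-stopped \
  --network=host \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  emby/embyserver
```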


I don't recall from the last time I tested, but even if it does, I think you have enough settings now to control what you need.


wedgekc

I'm not a docker expert, but I don't think it is true that you need to use --network="host".  I have not used it and have no connection problems using the default bridge network and exposing the ports.  I did notice under the advanced hosting settings that the host IP is bound to Emby.  I also don't use UPnP or DLNA, so maybe it has something to do with that.
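For what it's worth, the bridge-network setup described above looks something like this — 8096/8920 are emby's usual http/https ports, and the paths are placeholders:

```shell
# Default bridge network: publish only the ports you need to the host.
# Emby then sees clients coming from the bridge, not the LAN subnet.
docker run -d \
  --name emby \
  -p 8096:8096 \
  -p 8920:8920 \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  emby/embyserver
```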

