FredFrin 5 Posted June 30, 2018

Hi, I'm on stable (as opposed to beta), and the server UI tells me 3.4.1.0 is available. As before, I run emby-server update, but this does not seem to update. A little digging inside the docker container showed that /bin/update calls source /etc/cont-init.d/03-upgrade-onetime, which tries to retrieve the VERSION to fetch using:

VERSION=$(curl -sL https://github.com/mediaBrowser/Emby/releases.atom | grep -A 1 -e 'link.*alternate' | grep -e ' <' | sed 'N;s/\n/ /' | grep -v 'beta' | head -1 | sed 's%.*/tag/\([^"]*\).*%\1%')

I reduced this and ran the following command:

curl -sL https://github.com/mediaBrowser/Emby/releases.atom | grep '3.4.1'

The output does not include 3.4.1.0; it only lists beta versions 3.4.1.8 to 3.4.1.18. Opening https://github.com/mediaBrowser/Emby/releases.atom in the browser and searching for 3.4.1.0 confirms it is not there.

I then tried setting VERSION=3.4.1.0 in the shell and running the remaining commands in /etc/cont-init.d/03-upgrade-onetime and update. This does fetch 3.4.1.0/Emby.Mono.zip, and it did in fact install. I saw the updated version confirmed in the UI, but after a docker container restart it was back to the old version - so I'm missing a step somewhere to make the change permanent in the container.

Any chance of including the non-beta 3.4.1.0 in https://github.com/mediaBrowser/Emby/releases.atom please?

cheers, FF
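To sanity-check that filtering offline, here is a rough sketch of the same idea run against a made-up local copy of the feed. The tag names and the "-beta" suffix below are my own assumptions (the real script filters on the entry title, not the tag), so this is illustrative only:

```shell
#!/bin/sh
# Hypothetical, hand-written sample of a releases.atom feed - NOT the real
# feed; real Emby beta releases may not carry a "-beta" suffix in the tag.
feed='<entry><link rel="alternate" href="https://github.com/MediaBrowser/Emby/releases/tag/3.4.2.0-beta"/></entry>
<entry><link rel="alternate" href="https://github.com/MediaBrowser/Emby/releases/tag/3.4.1.0"/></entry>'

# Same idea as /etc/cont-init.d/03-upgrade-onetime: extract the tag name
# from each alternate link, drop beta entries, keep the first (newest) hit.
VERSION=$(printf '%s\n' "$feed" \
  | grep -o '/tag/[^"]*' \
  | sed 's%.*/tag/%%' \
  | grep -v 'beta' \
  | head -1)
echo "$VERSION"
```

On this sample it prints the newest non-beta tag, 3.4.1.0 - which is exactly the value the container's updater never sees when the stable release is missing from the feed.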
mastrmind11 722 Posted June 30, 2018

Why not just pull the latest docker image and rerun it? That's generally how docker containers are updated.
FredFrin 5 Posted June 30, 2018 (Author, edited)

I agree this is how it should be done, but I believe emby stores all data & config in the server container. Running a newly pulled docker image creates a new container, which wipes all plugins, configuration and data sources, requires a re-scan from scratch, etc. And that would need repeating for every upgrade. If all the config were stored in a docker data volume this would not be required, but emby does not appear to make use of them.

Edited June 30, 2018 by FredFrin
mastrmind11 722 Posted June 30, 2018 (edited)

Quote (FredFrin): I agree this is how it should be done, but I believe emby stores all data & config in the server container. Running a newly pulled docker image creates a new container which wipes all plugins, configuration, data-sources, requires re-scan from scratch etc ... And that will need repeating for every upgrade. If all the config were to be stored in a docker data volume, this would not be required, but emby does not appear to make use of them.

Emby stores configs etc. where you specify in the run command string. For example, mine are stored in /home/docker/emby/configs. Every docker container I've ever spun up uses this technique for this exact reason. In fact, it should be mandatory in order to run the container. Check it out and report back.

edit: just checked docker hub:

docker run -d \
    --volume /path/to/programdata:/config \ # This is mandatory
    --volume /path/to/share1:/mnt/share1 \ # To mount a first share
    --volume /path/to/share2:/mnt/share2 \ # To mount a second share
    --device /dev/dri/renderD128 \ # To mount a render node for VAAPI
    --publish 8096:8096 \ # To expose the HTTP port
    --publish 8920:8920 \ # To expose the HTTPS port
    --env UID=1000 \ # The UID to run emby as (default: 2)
    --env GID=100 \ # The GID to run emby as (default: 2)
    --env GIDLIST=100 \ # A comma-separated list of additional GIDs to run emby as (default: 2)
    emby/embyserver:latest

So assuming you're using the official docker, check your command; that's where your persistent data is stored, and thus a pull and run will update.

Edited June 30, 2018 by mastrmind11
Luke 42078 Posted June 30, 2018

What docker container are you running? Are you sure it's our official one?
FredFrin 5 Posted July 1, 2018 (Author)

From my notes I have the official image installed like this:

docker pull emby/embyserver:latest

docker run -it --rm -v /usr/local/bin:/target \
    emby/embyserver instl

# Systemd startup - create a unit file /etc/systemd/system/emby-server@.service
docker run -it --rm -v /etc/systemd/system:/target \
    emby/embyserver instl service

There is a bash script /usr/local/bin/emby-server (presumably installed by one of the above commands??) which is called via a systemd unit file, and can also be called manually to start the server.

I noticed there was a difference in the tag - I was running a container based on an image with the x86_64 tag (set in the emby-server script). Changing the tag to latest indeed pulls the correct image & runs the latest version. No idea how this x86_64 tag came about now. And of course mastrmind11 is right - it's using the cfg in /home/<usr>/.embyconfig

docker images
REPOSITORY        TAG      IMAGE ID       CREATED        SIZE
emby/embyserver   <none>   99aaf865649d   3 weeks ago    211MB
emby/embyserver   <none>   b4beabaa310f   5 months ago   195MB
emby/embyserver   x86_64   1aefd8f81be1   6 months ago   411MB

docker inspect emby-server
[
    {
        "Id": "2852dcda2ea8b34358db4fad131cea075fd74fb5906efc306a7a163db0e45339",
        "Created": "2018-06-30T09:10:03.173477963Z",

However, the emby-server script does support an update cmd as I described in my first post. Is this an official script?
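For what it's worth, I don't know what the instl service command actually writes out, but a unit of that shape might look roughly like this - the description, paths and the "service" argument below are guesses based on the wrapper script discussed in this thread, not the real shipped file:

```ini
# Hypothetical /etc/systemd/system/emby-server@.service - a sketch of the
# general shape only, NOT the actual unit installed by "instl service".
[Unit]
Description=Emby Server (Docker)
Requires=docker.service
After=docker.service

[Service]
# "service" mode runs the container in the foreground (RUN_MODE="")
ExecStart=/usr/local/bin/emby-server service
ExecStop=/usr/bin/docker stop emby-server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```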
mastrmind11 722 Posted July 1, 2018

I've never seen an official script for doing this.
FredFrin 5 Posted July 1, 2018 (Author, edited)

This is it - oops sorry, looks like I'm not permitted to attach scripts ... so pasting ...

#!/bin/bash
PATH=/usr/sbin:/usr/bin:/sbin:/bin

# TAG_NAME="x86_64"
TAG_NAME="latest"
IMG_NAME="embyserver"
APP_NAME="emby-server"
APP_USER=${APP_USER:-$USER}
APP_REPO=${APP_REPO:-emby}
APP_CONFIG=${APP_CONFIG:-"/home/${APP_USER}/.${APP_NAME}"}
APP_GCONFIG=${APP_GCONFIG:-"/config"}
EDGE=${EDGE:-0}
UMASK=${UMASK:-002}
RUN_MODE="-d"

( id -Gn | grep -q docker ) || [[ $EUID == 0 ]] || SUDO=sudo

if [[ "${APP_USER}" != "appuser" ]]; then
    APP_UID=$(getent passwd $APP_USER | awk -F":" '{print $3}')
    APP_GID=$(getent passwd $APP_USER | awk -F":" '{print $4}')
else
    APP_UID=$(id -u)
    APP_GID=$(id -g)
fi

if [[ ${APP_USER} == "nobody" ]]; then
    APP_USER="appuser"
fi

list_options() {
    echo ""
    echo "Launch ${APP_NAME} using:"
    echo "  ${APP_NAME}      - Launch ${APP_NAME}"
    echo "  ${APP_NAME} bash - Launch bash shell in ${APP_NAME} container"
    echo ""
    exit 1
}

cleanup_stopped_instances() {
    for c in $(${SUDO} docker ps -a -q); do
        image=$(${SUDO} docker inspect --format="{{.Config.Image}}" ${c})
        if [[ $(echo "${image}" | grep -c -e "${APP_REPO}/embyserver.*${TAG_NAME}") -gt 0 ]]; then
            running=$(${SUDO} docker inspect --format="{{.State.Running}}" ${c})
            if [[ ${running} != true ]]; then
                echo "Cleaning up stopped ${APP_NAME} instances..."
                ${SUDO} docker rm -v "${c}" >/dev/null
            fi
        fi
    done
    running=$(${SUDO} docker inspect --format="{{.State.Running}}" ${APP_NAME} 2> /dev/null)
}

prepare_docker_env_parameters() {
    ENV_VARS+=" --env=APP_UID=${APP_UID}"
    ENV_VARS+=" --env=APP_GID=${APP_GID}"
    ENV_VARS+=" --env=APP_USER=${APP_USER}"
    ENV_VARS+=" --env=EDGE=${EDGE}"
    ENV_VARS+=" --env=UMASK=${UMASK}"
    if [[ -f /etc/timezone ]]; then
        ENV_VARS+=" --env=TZ=$(cat /etc/timezone)"
    else
        [[ ! -z "${TIMEZONE}" ]] && ENV_VARS+=" --env=TZ=${TIMEZONE}"
    fi
}

prepare_docker_volume_parameters() {
    if [[ ! -d "${APP_CONFIG}" ]]; then
        mkdir -p ${APP_CONFIG}
    fi
    CURR_SID=$(stat -c"%u:%g" ${APP_CONFIG})
    if [[ "$CURR_SID" != "$APP_UID:$APP_GID" ]]; then
        chown -R $APP_UID:$APP_GID ${APP_CONFIG}
    fi

    # prepare local filesystems
    VOLUMES+=" --volume=${APP_CONFIG}:${APP_GCONFIG}"

    # hardware volumes
    if [[ -e "/dev/dri/" ]]; then
        while IFS= read -r dev; do
            VOLUMES+=" --device=$dev:$dev"
        done < <(find /dev/dri -maxdepth 1 ! -type d)
    fi
}

prepare_user_volume_parameters() {
    # Ensure the app config directory exists.
    if [[ ! -d "${APP_CONFIG}" ]]; then
        echo "Error, ${APP_NAME} data directory: ${APP_CONFIG} does not exist."
        exit 1
    fi
    if [[ -f "${APP_CONFIG}/.embydockervolumes" ]]; then
        mv "${APP_CONFIG}/.embydockervolumes" "${APP_CONFIG}/.${APP_NAME}.volumes"
    fi
    if [[ ! -e "${APP_CONFIG}/.${APP_NAME}.volumes" ]]; then
        declare -a user_volumes
        echo "No existing user volumes for: ${APP_NAME}."
        echo "Please enter full paths you want accessible from within the container."
        echo "Enter one entry per line."
        echo "Enter \"done\" or \"Ctrl+D\" when finished."
        while read hostpath; do
            if [[ "$hostpath" == "done" ]]; then
                break
            fi
            if [[ ! -d "$hostpath" ]]; then
                echo "Sorry, $hostpath is not a valid path."
            else
                user_volumes+=("$hostpath")
            fi
        done < /dev/stdin
        for user_volume in "${user_volumes[@]}"; do
            echo "${user_volume}" >> "${APP_CONFIG}/.${APP_NAME}.volumes"
        done
    fi

    # setup user volumes
    while read user_volume; do
        VOLUMES+=" --volume=${user_volume}:${user_volume}"
    done < "${APP_CONFIG}/.${APP_NAME}.volumes"
}

cleanup_stopped_instances
prepare_docker_env_parameters
prepare_docker_volume_parameters
prepare_user_volume_parameters

if [[ "${running}" != "true" ]]; then
    case "$1" in
        "") ;;
        "bash") RUN_MODE="-it" && prog="bash" && shift ;;
        "service") RUN_MODE="" && shift ;;
        "help") list_options && exit 1 ;;
        *)
            echo "${APP_NAME} not started yet."
            echo "Ignoring all arguments!"
            set --
            ;;
    esac
    if [[ -z "${prog}" ]]; then
        msg="Starting $APP_NAME"
    else
        msg="Starting ${prog} in ${APP_NAME} docker"
    fi
    echo "${msg}..."
    ${SUDO} docker run \
        --log-opt max-size=2m \
        --log-opt max-file=2 \
        --net=host \
        ${RUN_MODE} \
        ${ENV_VARS} \
        ${VOLUMES} \
        --name=${APP_NAME} \
        ${APP_REPO}/${IMG_NAME}:${TAG_NAME} ${prog} "$@"
else
    case "$1" in
        "update") docker exec -it ${APP_NAME} update ;;
        "stop") docker stop ${APP_NAME} ;;
        "status")
            echo "${APP_NAME} is running."
            echo "UpTime: $(ps -p $(pgrep -u ${APP_UID} -f mono-sgen) -o etime=)"
            ;;
        "logs") docker logs ${APP_NAME} ;;
        "console") docker exec -it ${APP_NAME} bash ;;
        *) echo "Already running." ;;
    esac
    exit 0
fi

Edited July 1, 2018 by FredFrin
mastrmind11 722 Posted July 1, 2018

yeah I dunno man, that seems overly complex. I'd just switch to the normal method and use watchtower to autoupdate if that's what you're concerned with. This seems like more pain than it's worth imo.
FredFrin 5 Posted July 1, 2018 (Author)

Yep - I'm going to convert the docker run cmd on docker hub to a docker-compose file & tell systemd to run that.
mastrmind11 722 Posted July 1, 2018

Quote (FredFrin): Yep - I'm going to convert the docker run cmd on docker hub to a docker-compose file & tell systemd to run that.

Why do you need systemd involved at all?
FredFrin 5 Posted July 1, 2018 (Author)

Because it's now the default means of managing process startup in most Linux distros, & I want emby to be started on reboot.
mastrmind11 722 Posted July 1, 2018 (edited)

Quote (FredFrin): Because it's now the default means of managing process startup in most Linux distros & I want emby to be started on reboot.

Not in docker-land; this stuff is handled in the run command. You can even add it after the fact using the update command. The whole point is to have a centralized, containerized environment. I recommend a few minutes of reading https://docs.docker.com/config/containers/start-containers-automatically/

Edited July 2, 2018 by mastrmind11
FredFrin 5 Posted July 2, 2018 (Author, edited)

I'm aware of docker restart policies; these tend to work well with single-container (micro-)services. However, I'm used to working with multi-container apps with inter-dependencies, including start-up sequences, dedicated docker networks, etc. These are best managed using docker-compose, and then the app as a whole can be managed as a service via a process manager such as systemd. In those environments the restart of single containers can be problematic. So you are right that restart=always is a valid option for emby-server; however, I prefer a single approach for all docker apps, so I'll stick with compose + systemd.

Edited July 5, 2018 by FredFrin
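For the record, converting the docker hub run command into compose might look roughly like this - host paths, UID/GID and the restart policy below are placeholders to adapt, not a tested config:

```yaml
# Hypothetical docker-compose.yml derived from the docker run command
# quoted earlier in this thread; all host paths are placeholders.
version: "3"
services:
  emby:
    image: emby/embyserver:latest
    restart: unless-stopped   # or omit this and let systemd handle restarts
    environment:
      UID: "1000"
      GID: "100"
    ports:
      - "8096:8096"   # HTTP
      - "8920:8920"   # HTTPS
    volumes:
      - /path/to/programdata:/config
      - /path/to/share1:/mnt/share1
```

A simple systemd unit can then wrap "docker-compose up -d" / "docker-compose down" in that directory, keeping one management approach across all compose-based apps.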