
Emby Server - High Availability


killride

Recommended Posts

killride

Hello,

 

I think it would be great to have an HA option so that two Emby servers could run in full redundancy.

 

What do you think about that?

 

 

  • Like 16
  • Agree 2

Bobba86

I really like this idea, but I would want to use it in a slightly different way.

My scenario would work as a multi-master type setup, with costs/preferences assigned to Emby servers so that one server is preferred over the others.

 

In my case I have:

1. A NAS storage device used to store movies, TV shows, music and pictures.

 

2. An always-on, low-powered microserver hosting Emby, referencing the NAS storage device for the media files. This is fine unless transcoding is needed.

 

3. A powerful multi-core desktop PC, only powered on when needed, hosting Emby and referencing the NAS storage device for the media files. When available, this is what I want as my preferred Emby server; it's perfect for transcoding.

 

Hope I have articulated this well enough to understand. In my mind this is a multi-master setup similar to Active Directory or DFS. Both of those systems implement a single namespace; that isn't possible here, but an always-on Emby server could redirect to the preferred Emby server when available.


  • 2 weeks later...
Teknologist

You could round robin the DNS...

Not ideal but it will let you use both servers in the same namespace.

 

I run my server in a highly available Hyper-V cluster: a single VM that can run on any of multiple physical hosts. Works very well.


kingy444

My config would be similar to bobba86's, with one alteration.

 

I want to run my primary server as the core of the network; it does all the metadata fetching, manages the library, and so on.

 

I could then add any number of 'dumb' servers to the cluster, link them to the main server, and have them update themselves daily or so, keeping the libraries in sync.

 

In case that's confusing

 

The main server is basically the standard server that we run at the moment.

 

During the initial configuration of any additional server, I could choose to connect it to an existing cluster.

By doing this the 'dumb' servers can automatically pull their config from the sister servers and remain updated.

 

Then if the main server goes offline, the secondaries can be used to read the metadata, etc.

There could be a lot more potential here, but also a lot of work.

 

One thought: if the main server dies halfway through a stream, the apps could present the end user with a 'buffering' indicator while automatically reconnecting to the failover server in the background.


Saw this and thought I would chime in. My use case for multi-server would be one box dedicated to live TV recordings. It would eliminate any resource conflict with serving streams to people watching content, and would let a small, energy-efficient box (think low-end NUC) be used for this. Low power use means longer survival on battery backup too.

 

On my current older system I have noticed some glitches when I'm recording with WMC while watching stuff with Emby, so I might like to dedicate a box to capturing live TV.

 

Just a thought.

  • Like 2

  • 3 weeks later...
Lighthammer

This sounds like we're getting into small clusters at this point. 

To me, it feels like this has drifted into "New Software" territory, with a very different use case than Emby.

It almost sounds like some of these redundancies should be created as a sort of Emby Pro Server or something.

Part of my thinking is, to do this right, you really need some team members dedicated to the creation of this tech and some members remaining on the core application; at least for the duration of the project.

This is somewhat of a costly shift in resources. It almost seems fair to ask for additional resources to help develop it.

  • Like 1

  • 6 months later...
smidley

Has any of this been looked at by the developers at all? I have multiple spare servers that I would like to put to use :) either to spread the transcoding load, or as part of some built-in HA component. Something like failover clustering could be used to make the Emby service part of a cluster; when the primary node goes offline, the service fails over to the secondary cluster node. That way Emby stays online for users while maintenance is performed on the offline server.


Untoten

Wouldn't something like this be better achieved at the OS level? It seems quite proprietary, especially with multiple services involved.


  • 1 year later...
BaneOfSerenity

Recently someone has done this with Plex -> https://forums.plex.tv/t/unicorntranscoder-create-a-plex-transcoding-cluster/281679

It requires a bit of setup, isn't fully feature-complete, and isn't integrated into Plex itself out of the box, but it's a good start nonetheless.

Maybe this will help as a starting point for what can be (and has been) done with a streaming server in regard to clustering/high availability.


  • 10 months later...
harrv

This sort of thing is *almost* available already if you run Emby in docker and create a kubernetes (or docker swarm) cluster. I haven't tried it yet, so I don't know what all of the gotchas are, but I can think of three potential issues I'd expect to run into.

 

Assumptions:

  1. You'd want to set up each node in your k8s cluster to use shared storage (shared by all the nodes) to access the same configuration (programdata folder), and the same media files.
  2. Ingress is configured (external to Emby itself) for the Emby service to provide round-robin access but also so that any given user would be routed to the same node for the duration of his session (or for the life of the pod on that node, whichever is shortest). That way session cookies should continue to work as they do now.

Potential Issues:

  1. Shared configuration. The way Emby currently deals with config would mean shared sqlite3 database files accessed by more than one node. Care would need to be taken to make sure DB operations are atomic, roll back in case of an error, and to allow access from more than one node without database file locks. I don't know if that can be done with sqlite files, or if Emby would need to allow something like a mysql database to hold config as an alternative to sqlite.
  2. Scheduled tasks. Emby server instances would need to have a way to do some minimal coordination with other instances so that the same thing wouldn't be done twice by concurrently running Emby server instances. For example, one instance would say, "I'm going to do scheduled task A now!" and get a "lock" on that task. That should be totally doable with a shared database (see potential issue #1). If another Emby server instance tries to get a lock on task A before doing it, and that task is already locked, it would assume it's in process by another instance and skip it. Or, better yet, it would monitor the task being performed by the other instance to make sure it's successful, and if it errors or never indicates success, then it would get a "lock" on the task and execute it itself. (A rough sketch of this lock-claiming idea follows after the list.)
  3. Hardware accelerated video encoding. If I were to set up kubernetes at home to run Emby, I would set up three worker nodes, but only one is capable of hardware accelerated video encoding. Ideally, it would be possible for an Emby Server instance capable of hardware acceleration to act as an "agent" for other Emby instances that don't have that capability, so that a session handled by one of the non-hardware-accelerated nodes would be capable of doing everything else, but if available, it could ask the hardware accelerated node to do the encoding tasks. If the transcoding-temp directory is on shared storage, maybe that'd be possible without too much additional work by the Emby devs? Also, instead of a whole Emby server instance making itself as an agent to other server instances, maybe it would be easier for the video encoding to be handled by a separate module/program that only does encoding tasks.

Those three potential issues are listed in order of the priority I'd give them.
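
To make the scheduled-task point concrete, here is a minimal sketch of what claiming a task through a shared database could look like. This is just my illustration, not anything Emby does today: the file path, table and task names are invented, and it relies on SQLite's UPSERT syntax (3.24+) so the claim happens in a single atomic statement.

import socket
import sqlite3
import time

DB_PATH = "/config/cluster/tasks.db"   # hypothetical file on the shared programdata volume
INSTANCE = socket.gethostname()

conn = sqlite3.connect(DB_PATH, timeout=30)
conn.execute("""CREATE TABLE IF NOT EXISTS task_locks (
                    task TEXT PRIMARY KEY,
                    owner TEXT,
                    claimed_at REAL)""")

def try_claim(task, lease_seconds=3600):
    """Atomically claim a task; only one instance ends up as owner per lease period."""
    now = time.time()
    with conn:  # single transaction: either we take the row or we don't
        conn.execute(
            """INSERT INTO task_locks (task, owner, claimed_at) VALUES (?, ?, ?)
               ON CONFLICT(task) DO UPDATE
               SET owner = excluded.owner, claimed_at = excluded.claimed_at
               WHERE task_locks.claimed_at < ?""",   # steal only if the old lease expired
            (task, INSTANCE, now, now - lease_seconds),
        )
        row = conn.execute("SELECT owner FROM task_locks WHERE task = ?", (task,)).fetchone()
    return row[0] == INSTANCE

if try_claim("refresh-guide-data"):
    print(INSTANCE, "runs the task")       # the winning instance runs the scheduled task
else:
    print(INSTANCE, "skips it; another instance holds the lock")

The "monitor the other instance and take over if it never finishes" behaviour described above is what the lease expiry is standing in for here.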

 

I think that, looking at a project like this from a bird's-eye view, it could be overwhelming and hard to begin. But if you didn't worry so much about handling multiple servers or clusters yourself, and let Kubernetes take that role, and instead focused on making Emby Server play nicely in an environment where there may be more than one instance running and sharing the same config and media, then it would be easier to begin, and you could let the users figure out the details of how to manage the multiple nodes.

 

Just my thoughts as I'm in the process of moving just about everything else I run at home into kubernetes (Radarr, Sonarr, Ombi, qbittorrent, sabnzb, and Jackett).

  • Like 2

gotsourcheeks

Wanted to chime in here since I'm currently running Emby on Kubernetes as well, and to add on to what @harrv is suggesting.

 

In an ideal Kubernetes world we would want to separate Emby into its component pieces (database, UI, ffmpeg, etc.), but I'm not sure there are enough of us running on Kubernetes to make it worth the effort.

 

Instead, I believe we can achieve something similar if Emby had the option to deploy additional instances as slaves of a master instance. Slaves would have a separate config from the master, but parts could be synchronized as needed. This could be done without Kubernetes by simply installing Emby on multiple servers. Alternatively, with Kubernetes we could deploy Emby as a StatefulSet instead of a ReplicaSet, which gives us unique persistent instances of Emby between which we can then define a master-slave relationship. High availability could be achieved by failing over the master role.

 

The benefit of this is the ability to use the resources of multiple nodes or servers, but with the appearance of a single Emby instance to the end user.

 

My personal use case is this: I have several old "gaming" computers that I've set up as a Kubernetes cluster, with Emby deployed on it. Emby is working great so far; however, I'm limited to using the resources of a single node at a time, which means my setup is only capable of a single 4K stream. If I could deploy additional instances of Emby that coordinate with each other, I could put an instance on each node and my setup would be able to handle several times as many simultaneous transcode jobs.

  • Like 2

sansoo22

(replying to @gotsourcheeks' master-slave suggestion above)

I think the crux of the problem is the rabbit hole this could go down. Simple master-slave isn't terrible to code. However, with more than two instances you get into the realm of voting members. MongoDB deployed to a cluster does this: you, the admin, designate a primary node, and if that primary happens to die, the remaining nodes in the cluster vote to elect a new primary until the original is brought back online.

 

In theory you should be able to set up a basic cluster with Docker and an nginx web server using an upstream block, for example:

http {
    # Requests are spread round-robin across these backends by default.
    # Add "ip_hash;" inside the upstream block if a client should stick to one
    # backend, or mark a server as "backup" so it only gets traffic on failure.
    upstream emby {
        server emby.example.com;
        server emby1.example.com;
        server emby2.example.com;
    }

    server {
        listen 80;

        location / {
            # Proxy every request to whichever backend nginx picks.
            proxy_pass http://emby;
        }
    }
}

That's a very basic example and won't let you manage resources in any way, but for basic failover/redundancy it should do the trick. If I get a chance I will spin up a second Emby instance on unRAID and give this a test.


sansoo22

 

One thing I haven't seen mentioned yet is using Docker Swarm, or even basic Docker, with persistent volumes. Is there a reason you couldn't mount the same /config volume into each container so they all share the same DB? Reads/writes may overwhelm whichever machine is hosting the /config volume, but theoretically it should work. One physical SAN with SSD drives and ten small blades running Docker containers, all mapped to the same shares on the SAN, should technically do the trick. Of course, as mentioned, you need a proper load balancer in nginx with ip_hash and user limits. Or, if your media is cloned across a few arrays, mount one array per container.

 

If Emby supports hardware acceleration, then a few 1U GPU servers ought to chew through transcode work pretty quickly. I'm afraid of what they cost, though. My years as a sysadmin and software architect have taught me that if it says "contact sales for pricing", your pockets had better be deep.

 

And if we are throwing out ideas for a centralized DB, then I vote MongoDB. It has a free version, and it's pretty awesome to store your data in the same format your views consume it in. I assume the views in Emby use JSON, but I could be wrong. Not to mention that if you have deep pockets, a MongoDB replica set is pretty sweet. We use a couple of them at work and they are stupid fast, but then again we put enough RAM in that cluster that the whole DB lives in memory.
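
To illustrate the "same shape as the view" point, here is a tiny pymongo sketch. The connection string, database name and field names are invented for the example, not Emby's actual schema:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # invented connection string
items = client["emby"]["items"]                      # invented database/collection names

# The document is stored in roughly the shape a client view would consume.
items.replace_one(
    {"_id": "movie-123"},
    {"_id": "movie-123",
     "Name": "Example Movie",
     "RunTimeTicks": 72000000000,
     "UserData": {"Played": False, "PlaybackPositionTicks": 0}},
    upsert=True,
)

print(items.find_one({"_id": "movie-123"})["Name"])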


otispresley

In Docker Swarm or Kubernetes, this is called a service. You need a minimum of three hosts for a high-availability cluster, and these hosts would share the storage the persistent data lives on. A service is known by a single IP address, and a load balancer built into Kubernetes/Swarm determines which container on which host receives each connection. One potential problem with this setup is that one container in the service could lock the database file, so the other containers couldn't access it until the lock is released.
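
That locking behaviour is easy to see with plain SQLite. A toy illustration (the database path and table are invented, nothing Emby-specific): a second process that tries to write while another holds the write lock gets "database is locked" unless it is told to wait.

import sqlite3

DB_PATH = "/config/data/library.db"   # invented path on the shared volume

# timeout=0 means "fail immediately if another process holds the write lock";
# a real deployment would set a generous timeout (or PRAGMA busy_timeout) instead.
conn = sqlite3.connect(DB_PATH, timeout=0)

try:
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS demo (k TEXT PRIMARY KEY, v TEXT)")
        conn.execute("INSERT OR REPLACE INTO demo (k, v) VALUES (?, ?)", ("key", "value"))
except sqlite3.OperationalError as err:
    # This is the error the other containers would see while the lock is held.
    print("write failed:", err)

On top of that, SQLite's own documentation warns that its file locking is unreliable on many network filesystems, so a /config volume shared between hosts adds another failure mode beyond simple contention.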

 


Link to comment
Share on other sites

gotsourcheeks

(replying to @sansoo22's Docker / shared /config volume suggestion above)

I already do this with Kubernetes: an Emby ReplicaSet service with nginx ingress and a load balancer. The problem is that you cannot scale the pods up, presumably because only a single instance can lock the DB at once, although I don't know enough about Emby to diagnose the root cause. I could go to a StatefulSet with separate unique persistent storage for each container, but they would each have separate configs, which would be a nightmare to manage. That's where the idea of master-slave comes in: I can make changes to only the master instance and the changes would somehow propagate to the slaves. At the same time, user viewing data should be passed from slave to master, so that if a user begins a session on a different slave instance, their viewing information is preserved.

 

My current thought is to run two separate Emby services on my cluster. One master service with only a single pod acts as the master; this pod has read/write access to my media and handles metadata. Then I'll have a secondary StatefulSet service whose pods have read-only access to the media. This is the service I'll make available to users via the ingress, and theoretically it will be able to scale while accessing the same media. The only issue is that I would have to set up a manual job to sync the config from the master to the secondaries, and to sync the user viewing data from the secondaries back to the master.
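
A first cut of that sync job could be very dumb: push the master's config to each stopped secondary, and pull watched-state back separately. A rough sketch of the push half, with invented paths and host names, and no claim that Emby officially supports being cloned this way:

import subprocess

MASTER_CONFIG = "/mnt/emby-master/config/"               # invented path to the master's programdata
SECONDARIES = ["emby-sec-0.local", "emby-sec-1.local"]   # invented host names

def push_config(host):
    # Copy the master's config to a secondary, skipping logs, cache and
    # transcoding temp files. The secondary's Emby service should be stopped
    # first so its sqlite files are not swapped out underneath a running server.
    subprocess.run(
        [
            "rsync", "-a", "--delete",
            "--exclude", "logs/",
            "--exclude", "cache/",
            "--exclude", "transcoding-temp/",
            MASTER_CONFIG,
            host + ":/var/lib/emby/",                    # invented target path
        ],
        check=True,
    )

for host in SECONDARIES:
    push_config(host)

The reverse direction (user viewing data back to the master) is the harder half, since it means merging play-state rather than simply overwriting files.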

 

Currently looking at the Emby "restore from backup" procedure for guidance, but open to any and all ideas.


sansoo22

(quoting @gotsourcheeks' reply above)

That makes a lot more sense. Thanks for pointing out the write lock issue. We just started with Docker swarms at work, and I often forget about that; with how we configured MongoDB, and being in charge of our own code base, we just avoid it. But as you pointed out, the discussion here is about coming up with a solution without having access to the code.


  • 1 month later...
bennycooly

(quoting @harrv's Kubernetes post above)

I've been running Emby in Kubernetes for a few months now with MetalLB, Istio, and Rook (Ceph) and I think Emby can definitely support high availability without too many changes. I fully agree with all of your points and have some thoughts about possible solutions:

 

1. There would be a need for a database and/or object store to store metadata and server state. The object store could be used to store media images, subtitles, transcoding temps, and anything else that is stored as plain files currently. We could possibly use a relational database for what is currently being stored in sqlite.

 

2. To solve the aspect of scheduled tasks, and synchronizing events in general, we'd need to modify the way Emby schedules these tasks so that the scheduler is multi-instance aware. Since we would already have needed a way for Emby to be aware of all running instances to support HA in general, we could add some logic inside the scheduler component to pick one of them to run each job.
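
If the shared relational database from point 1 were something like Postgres, the "pick one instance to run the job" part comes almost for free with advisory locks. A hedged sketch of that idea (invented connection string and task key; psycopg2 assumed as the client library, and the task function is just a placeholder):

import psycopg2

conn = psycopg2.connect("dbname=emby host=db.cluster.local user=emby")  # invented DSN
TASK_KEY = 42  # any integer agreed on cluster-wide for this scheduled task

def run_library_scan():
    print("this instance runs the scheduled library scan")  # placeholder for the real task

with conn.cursor() as cur:
    cur.execute("SELECT pg_try_advisory_lock(%s)", (TASK_KEY,))
    won = cur.fetchone()[0]

if won:
    try:
        run_library_scan()
    finally:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_advisory_unlock(%s)", (TASK_KEY,))
else:
    pass  # another instance already holds the lock, so this one skips the job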

 

3. Kubernetes provides device plugins to abstract GPU resources, so scheduling the pod on the right node is something Kubernetes can largely handle on its own. If not all nodes have a GPU for video acceleration, we can add some logic when Emby first starts up to mark that instance as not having a GPU resource available. For the transcoding temp files, we could store those in an object store or on a shared filesystem.
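
The "mark this instance as having no GPU" check above can be a simple probe at startup. A small illustration (this is just the usual Linux DRM render-node path, not Emby's actual detection logic):

import glob
import os

def has_render_node():
    # VAAPI/QSV-capable GPUs show up as /dev/dri/renderD128, renderD129, ...
    return any(os.access(node, os.R_OK | os.W_OK)
               for node in glob.glob("/dev/dri/renderD*"))

# At startup the instance could publish this flag in the shared cluster state,
# so transcode-heavy sessions get routed (or handed off) to GPU-capable nodes.
print("hardware transcode capable:", has_render_node())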

 

Overall I think there are just a few main features that would need to be implemented to support HA:

- Add cluster state metadata to make Emby aware of all instances in the cluster and a shared IP that would be set to the k8s service endpoint. We can store this state in a shared storage solution (see next point).

- Refactoring of storing metadata and temporary server files to use an external database, shared filesystem, and/or object store. There are many solutions we could go with here including Postgres, Mongo, Minio, Ceph, NFS/Samba, etc. Any stateless instance-specific data such as logs can still be saved to the local filesystem.

- Refactoring components such as the task scheduler to allocate jobs to a single instance in the cluster.

 

I'd definitely be happy to help contribute to this effort if people think this feature would be useful, and this would truly set Emby apart as being a scalable and highly available media server.

  • Like 1

  • 11 months later...
  • 3 months later...
ObsidianGr

Since there hasn't been any further discussion on this, I thought I would see if there are any updates / news / etc.

Basically, what we are looking for is a way to build a media server cluster, because the number of users accessing content could scale up to 500-800 concurrent streams. Load balancing via nginx and ip_hash, while it could work, presents inherent problems of its own. Not to mention that the problem of DB sync is still there, and the use of SQLite further complicates matters.

I fully understand that there is a device limit of 15 concurrent users with the Premiere version, and that in order to scale to that number of concurrent users we would either need to use non-Premiere features or work out licensing for that many users. I'm also aware of the bottlenecks we can hit with storage. Both of these points aside, I'm just seeing if any headway has been made. I've also been looking at the program written for Plex, UnicornTranscoder; while it is very promising, there may be other issues there.


pir8radio
7 hours ago, Floflobel said:

I discovered the possibility of using transcoding servers via this tool:

https://github.com/joshuaboniface/rffmpeg

 

The problem is that the tutorial is only available for Jellyfin, and I have not tested it under Emby. If anyone has more information, I'd be glad to hear it.

Maybe @Luke and the team could create an Emby-specific version of this that works well with Emby without any hackery.
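
For anyone curious, the core trick rffmpeg uses is to stand in for the ffmpeg binary and run the real one on a remote host over SSH, which relies on the media and transcoding-temp paths being mounted identically on both machines. A toy version of the idea (host name and ffmpeg path invented, and without rffmpeg's host selection or local fallback):

#!/usr/bin/env python3
# Toy ffmpeg stand-in: forwards the exact arguments to a remote host over SSH.
import shlex
import subprocess
import sys

REMOTE_HOST = "transcode-1.local"    # invented host name
REMOTE_FFMPEG = "/usr/bin/ffmpeg"    # path to the real ffmpeg on that host

remote_cmd = " ".join([REMOTE_FFMPEG] + [shlex.quote(arg) for arg in sys.argv[1:]])
sys.exit(subprocess.call(["ssh", "-q", REMOTE_HOST, remote_cmd]))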

