
Server creating multiple ffmpeg processes for one stream


Recommended Posts

Posted

Hi everyone,

 

I have the latest beta installed on Win10.

 

This is an error that occurs randomly (or at least I can't see any pattern). When playing a movie from the Chrome browser, Safari, or the iOS client, sometimes it won't start.

 

Looking at Task Manager I can see two or more ffmpeg processes: one taking 90-100% of my network and 1-5% of CPU, and another taking almost 100% of CPU and nothing from the network.

If I manually kill the process chewing up bandwidth, the stream starts automatically with no problems.

 

My server runs on a dedicated PC and the media is stored on a Synology NAS. In the library configuration the paths point to something like \\SYNOLOGY_NAS\movies, with no mapped network drives.

 

Any ideas on this issue?

 

 

 

 

Posted

This can happen when any of the following occurs:

  • You close the player without stopping playback first
  • The player opens a stream but then playback fails to start
  • You get impatient and click the play button several times

This is being looked at and improved for upcoming releases.

  • 6 months later...
Posted

Hi Luke, sorry to dig up an older post, but I didn't think it was worth raising a new one for this.

 

There seem to be many threads regarding this already, so no need to create another.

 

I've mostly been using Emby locally (head end and playing over SMB), but after a recent switch to the Linux flavour for the host I've been watching the server more closely while running some tests.

 

I can see this is still a big issue. Skip within a movie and I get another ffmpeg process; on Android, pressing back sometimes also leaves one behind. So if I try to flick through a few things quickly, I can eat all the CPU cycles with ffmpeg processes and the server eats itself.

 

Surely the logic to fix this is pretty simple.

 

When a user first starts a stream, you record their device ID, username, and the PID of the ffmpeg process. Whenever a new stream is started, the server checks that temporary lookup table, and if the user and device ID match, it kills the existing PIDs before launching the new ones (or shortly after, if there needs to be a hand-over).

 

Or to make it more robust:

 

First play: check the lookup table and find it empty, drop the user ID and device ID into the table, start ffmpeg, and record its PID(s).

 

The user skips, or takes any action that results in a new stream from the same device: all existing PIDs are killed, a new ffmpeg is started, and its PIDs are recorded.
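In rough Python terms, something like this (just a sketch of the idea; the function names, ffmpeg arguments, and process handling are placeholders, not Emby's actual internals):

```python
import os
import signal
import subprocess

# (user_id, device_id) -> list of ffmpeg PIDs owned by that session
active_transcodes = {}

def kill_existing(user_id, device_id):
    """Terminate any ffmpeg processes previously recorded for this user/device."""
    for pid in active_transcodes.pop((user_id, device_id), []):
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass  # already gone

def start_transcode(user_id, device_id, source, output):
    """Kill the old session's ffmpeg (if any), then launch and record the new one."""
    kill_existing(user_id, device_id)
    proc = subprocess.Popen(
        ["ffmpeg", "-i", source, "-f", "hls", output],  # placeholder arguments
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    active_transcodes[(user_id, device_id)] = [proc.pid]
    return proc.pid
```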

 

It also appears that if a user just vanishes, this results in an orphaned ffmpeg: I can start something, kill Emby on Android, turn the device completely off, and the ffmpeg process soldiers on regardless.

 

Surely there must be some mechanism by which it's possible to tell this user is gone, as nobody is talking to the server to collect any further video when this happens.

 

Again using the same temporary lookup table idea as before, the server would also record the user's IP at the start of an ffmpeg process. Then every 30-60 seconds a job would loop over the active ffmpeg processes in the lookup table and double-check that the user's IP was still actively connected to the HTTP/HTTPS server. For safety, to ensure no false positives, it would wait 10-20 seconds and check again; if the client still isn't there, the ffmpeg processes get killed and the entries in the lookup table cleared.
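Again as a rough Python sketch (has_active_connection() here is an assumed helper standing in for whatever the server knows about open HTTP/HTTPS sessions; it is not a real Emby call):

```python
import os
import signal
import time

# active_transcodes: (user_id, device_id) -> {"ip": str, "pids": [int, ...]}

def reap_orphans(active_transcodes, has_active_connection, grace_seconds=15):
    """Kill ffmpeg for sessions whose client IP has vanished, re-checking once to avoid false positives."""
    suspects = [key for key, sess in active_transcodes.items()
                if not has_active_connection(sess["ip"])]
    if not suspects:
        return
    time.sleep(grace_seconds)  # grace period, then double-check before killing anything
    for key in suspects:
        sess = active_transcodes.get(key)
        if sess and not has_active_connection(sess["ip"]):
            for pid in sess["pids"]:
                try:
                    os.kill(pid, signal.SIGTERM)
                except ProcessLookupError:
                    pass  # already gone
            del active_transcodes[key]

# Scheduled roughly every 30-60 seconds:
# while True:
#     reap_orphans(active_transcodes, has_active_connection)
#     time.sleep(45)
```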

 

Doing both of the above things would essentially cure all the streaming issues.

 

From what I can make out, currently the ffmpeg process only dies quickly ahead of completion (end of file) if the user presses stop.

 

My phone has been off for 10-15 minutes now (no communication with the server whatsoever), but multiple ffmpeg processes are still munching away at the CPU.

 

Finally, somewhere around the 30-minute mark, I can see processes starting to be released; this should be much, much quicker.

 

All of this should be possible, as Emby knows the user ID, the device ID, and the user's IP address, and it must know the PID of ffmpeg to be able to stop it (for that user) when the user hits stop.

 

If not, and instead of tracking PIDs you kill all ffmpeg instances running against a given file, simply use the file instead of the PID in the lookup table logic I explained above; both should work equally well.

 

When a stream is stopped or completes, the user's entry in the active ffmpeg lookup table would be cleared.

 

Possibly part of the issue is that once an ffmpeg process has been launched and then orphaned, it's impossible to know who launched it. To address this, would it not be possible to abuse ffmpeg's -metadata comment="" option, writing the user ID, device ID, and IP into the comment? That way, when launching a new process, Emby could first kill off all existing processes for that user on that device. Even if MPEG-DASH doesn't support the comment tag (not sure if it does, but a quick Google looked promising), it would hopefully just ignore it and still process the video. All you would be interested in is getting the user ID, device ID, and IP into the process list, as that's where Emby would be looking for the info. This would negate the need for a lookup table, as the process list itself would be the lookup table.
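As a rough sketch of that idea (the tag format and the psutil-based process scan are just to illustrate it, not anything Emby actually does today):

```python
import json
import subprocess

import psutil  # third-party, used here only to walk the process list

def launch_tagged(user_id, device_id, client_ip, source, output):
    """Start ffmpeg with the session's owner encoded in its command line."""
    tag = json.dumps({"user": user_id, "device": device_id, "ip": client_ip})
    return subprocess.Popen([
        "ffmpeg", "-i", source,
        "-metadata", f"comment={tag}",  # owner info rides along in the command line
        "-f", "hls", output,            # placeholder output arguments
    ])

def find_tagged_ffmpeg():
    """Return (pid, owner) for every running ffmpeg that carries our tag."""
    found = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        name = (proc.info["name"] or "").lower()
        if not name.startswith("ffmpeg"):
            continue
        for arg in proc.info["cmdline"] or []:
            if arg.startswith("comment={"):
                found.append((proc.info["pid"], json.loads(arg[len("comment="):])))
    return found
```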

 

This would mean that even if the server got restarted and left orphaned ffmpegs running, on restart it could kill off existing ffmpeg processes whose users were also gone, and would then kill off old processes when a user returned to watch something else.

Posted

Hi, can you please outline a specific example? Thanks.

yodaboy01
Posted

Hi

 

I came to this forum today looking for this exact issue. I don't want to hijack the thread, but I'd like to add my specific example.

 

I have the latest version of the server and the iOS app. I use the app to stream to my Chromecast. I have my Emby server transcoding set to a max of 2 instances with throttling enabled.

 

In the last week, when streaming to Chromecast, I have noticed that each time I skip within a movie it creates another two instances of ffmpeg. Stopping the movie in the app does not close the instances, nor does starting another movie. As a result my CPU quickly gets overloaded and everything stutters. The only thing that kills the processes is restarting the Emby server.

 

Thanks

Posted


Hi, please provide a specific example; see how to report a media playback issue. Thanks!

MSattler
Posted

I don't understand why, months and months later, Emby still isn't tracking and managing ffmpeg processes in a better fashion.

davidtjudd
Posted

Luke, I provided a repro with logs in another thread earlier this week. All you have to do to reproduce this is start and stop playback a few times from a client that is transcoding.

Posted


Yes, thank you, and your issue has been resolved for the next release.
