
Show Intro Skip Option



Micael456
16 hours ago, chef said:

@Micael456 I'd love to know if you have a better experience with this upcoming release. There are still ways to limit the resources we use during the task, but it would make the task a lot longer. 

@chef

So I thought I'd be brave and try this on my main setup :D.

My setup is a headless Linux server on an Odroid, connecting to media stored on a Windows 'NAS' (an old HP MicroServer) via a gigabit switch.

[screenshot]

It's been just over an hour and it's done 4.9%, so just under a day total I'd expect. I have debug logs on and everything seems good so far. Not sure what the "standard" speed everyone else is getting with the latest version, but if anyone has Sliders, Hart of Dixie, or Buffy they can compare :D.

[screenshot]

 

I'm not sure whether 4 concurrent streams are best for my particular setup, as it runs fairly hot and doesn't seem to map 1 ffmpeg process to 1 core. Interestingly, as you can see below, I seem to have 5 ffmpeg instances running while this happens.

[screenshots]

  • Like 1

chef
6 hours ago, Micael456 said:

@chef

So I thought I'd be brave and try this on my main setup :D.

My setup is a headless Linux server on an Odroid, connecting to media stored on a Windows 'NAS' (an old HP MicroServer) via a gigabit switch.

[screenshot]

It's been just over an hour and it's done 4.9%, so just under a day total I'd expect. I have debug logs on and everything seems good so far. Not sure what the "standard" speed everyone else is getting with the latest version, but if anyone has Sliders, Hart of Dixie, or Buffy they can compare :D.

[screenshot]

I'm not sure whether 4 concurrent streams are best for my particular setup, as it runs fairly hot and doesn't seem to map 1 ffmpeg process to 1 core. Interestingly, as you can see below, I seem to have 5 ffmpeg instances running while this happens.

[screenshots]

Very interesting outcomes here. Thank you for taking the time. 

58s is a little long, but it could be the processor trying hard to keep up with encoding 5 at a time. 

The encodings are successful, so this is a good thing. 

 

It will be interesting to see how the sequence task executes.

 

I'm not sure how smart we want to try to make the plugin, but we could try to set a default process count based on the CPU. 

I'm just wondering: if the CPU takes 58s per encoding while doing 5 at a time, it might be faster to encode a smaller number at once, maybe 2 or 3.

Doing 2 or 3 at once at 9s per encoding would speed things up (5 every 58s is roughly 0.09 episodes per second, while 2 every 9s would be about 0.22).

 


rbjtech

This is exactly what I found @chef - lowering the number of parallel tasks speeds up the entire process because the CPU is not waiting for disk I/O. 

The 'balance' is going to be very tricky to get right - and it also depends on how much overhead you want this task to take.  

Again, a personal preference is to not take all the resources, as this is a 'shared' server (i.e. it has other VMs and services running) - but on a dedicated machine with no active clients, you may want it to go as fast as possible.

I think manual settings are all you can do here - with the default as a very cautious 2 or 3 max. If the user wants to increase it (and we show a suitable warning), then that is up to them.

Remember that at the end of the day this is a one-off task - so for new episodes, it should be reasonably transparent alongside all the other new-episode processing?

Edited by rbjtech
  • Like 1

Cheesegeezer

Quick Question @chef  @rbjtech

What is on the "TODO" list.

Not sure where to help now I have access to code.

Cheers

 

Edited by Cheesegeezer
  • Like 1

rbjtech
12 minutes ago, Cheesegeezer said:

Quick Question @chef  @rbjtech

What is on the "TODO" list.

Not sure where to help now I have access to code.

Cheers

 

Hi @Cheesegeezer - This is the status from last week (link below)

I *think* chef has written the code for #5 (manual skip - add to the chapter points) - but has disabled the code for the time being.

My personal view is that a conversation with Luke and ebr is the next step, to understand how they would want this integrated into the core code. They may have some basic requirements that we are missing? I think it may also be a good time to reserve a slot in the 'Beta' schedule, to maybe test on just a web browser in an official beta? I'm not sure how this scheduling works .. 🤪

👍

https://emby.media/community/index.php?/topic/48304-show-intro-skip-option/&do=findComment&comment=1063686

 

Edited by rbjtech
  • Like 2
  • Thanks 1

Cheesegeezer
32 minutes ago, rbjtech said:

Hi @Cheesegeezer - This is the status from last week (link below)

I *think* chef has written the code for #5 (manual skip - add to the chapter points) - but has disabled the code for the time being.

My personal view is that a conversation with Luke and ebr is the next step, to understand how they would want this integrated into the core code. They may have some basic requirements that we are missing? I think it may also be a good time to reserve a slot in the 'Beta' schedule, to maybe test on just a web browser in an official beta? I'm not sure how this scheduling works .. 🤪

👍

https://emby.media/community/index.php?/topic/48304-show-intro-skip-option/&do=findComment&comment=1063686

 

Thanks for the list. I'll maybe look at 5, 6 & 7. @chef is in his groove now for the first 4. We can provisionally code and locally test the VideoOSD manipulation, so hopefully @Admin can just fork, pull, or modify.

 

  • Like 2

chef
1 hour ago, Cheesegeezer said:

Quick Question @chef  @rbjtech

What is on the "TODO" list?

Not sure where to help now that I have access to the code.

Cheers

 

 

I had this happen to me overnight.

Season 3 of Rick and Morty was marked as processed, but the title sequences were set to false.

It was a quick fix using the 'Remove Season Data' button, and a rescan found the sequence okay.

 

I updated the GitHub repo.

On line 139 of TitleSequenceDetectionManager, if for some reason a set of episodes is skipped, they would still be marked as 'processed'.

 

The best I could do is log a 'Warn' so the user knows that that specific season's worth of episodes needs attention.
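For illustration, the warning idea looks roughly like this (a sketch only, with hypothetical helper and parameter names, not the actual code around line 139 of TitleSequenceDetectionManager; the logger delegate stands in for whatever logging the plugin uses):

using System;
using System.Collections.Generic;

internal static class DetectionWarnings
{
    // Sketch only: if any episodes in a season were skipped, surface a warning
    // instead of silently leaving the whole season marked as 'processed'.
    public static void WarnIfEpisodesSkipped(
        Action<string> logWarn, string seriesName, int seasonNumber,
        IReadOnlyCollection<string> skippedEpisodes)
    {
        if (skippedEpisodes.Count == 0)
        {
            return;
        }

        logWarn($"{seriesName} Season {seasonNumber}: {skippedEpisodes.Count} episode(s) " +
                "were skipped during title sequence detection. Use 'Rescan Season' to process them.");
    }
}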

 

 

 

 

 


chef
1 hour ago, rbjtech said:

This is exactly what I found @chef - lowering the number of parallel tasks speeds up the entire process because the CPU is not waiting for disk I/O. 

The 'balance' is going to be very tricky to get right - and it also depends on how much overhead you want this task to take.  

Again, a personal preference is to not take all the resources, as this is a 'shared' server (i.e. it has other VMs and services running) - but on a dedicated machine with no active clients, you may want it to go as fast as possible.

I think manual settings are all you can do here - with the default as a very cautious 2 or 3 max. If the user wants to increase it (and we show a suitable warning), then that is up to them.

Remember that at the end of the day this is a one-off task - so for new episodes, it should be reasonably transparent alongside all the other new-episode processing?

So we should default to 2 at a time, and allow the user to increase it based on their preferences and their understanding of their own computer.

That should be written in the UI. 

  • Agree 3
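As a sketch of that (assuming the usual Emby plugin configuration pattern; the property name and default shown here are illustrative rather than what IntroSkip actually ships):

using MediaBrowser.Model.Plugins;

// Illustrative plugin configuration: a single, conservatively defaulted knob for
// how many fingerprinting/encoding processes may run at once.
public class PluginConfiguration : BasePluginConfiguration
{
    // Cautious default of 2; users with dedicated hardware can raise it.
    // The config page should describe the CPU/disk trade-off next to this field.
    public int MaxDegreeOfParallelism { get; set; } = 2;
}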

Micael456

So right now I'm approximately halfway through; a couple more observations.

I'm definitely hitting some resource constraints somewhere; the prime suspect is the CPU rather than disk I/O.

At first I thought that some of the encode times might have been based on file size, since my TNG Blu-ray rips were taking a long time, but this does not seem to be the case, as evidenced below by "Ghosts". The fingerprint time in the case of Ghosts also isn't affected by where the intros are, which leaves resource contention as the only thing I can think of for the abnormal bumps.

[screenshot]

I also thought I'd see what happened if I tried to watch an episode while it was being fingerprinted. I can safely say that also doesn't seem to adversely affect speeds. In fact, in my admittedly small sample, it increased speeds. Possibly due to disk caching?

[screenshot] <<- "Normal" speed.

[screenshot] <<- "Abnormal" speed.

[screenshot] <<- "Normal" speed.

[screenshot] <<- "Normal" speed.

[screenshot] <<-- File streaming at the same time.

[screenshot] <<- "Normal" speed.

[screenshot] <<-- File streaming at the same time.

 

 

So far so good, and upon reflection it makes sense that larger file sizes don't affect it so much, as the audio stream is likely to be about the same size whether it's a 720p, 1080p, or even downmuxed 4K rip.

  • Thanks 1

chef

I think I can explain some of the instances where the task takes a longer time. I think it has to do with attempting parallel threading on an item set with one item.

If there is only one series to process, we still use the Task Parallel Library, when perhaps that is a mistake. Instead we should plow through the series on the main thread. 

It would involve a check of processed items prior to processing the items 🙃. But it might be necessary if we continue to see processing taking 96s. 
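A minimal sketch of that idea (illustrative names, not the plugin's actual code): skip the parallel machinery entirely when only one series is left, and cap the fan-out otherwise.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

internal static class SeriesProcessor
{
    public static void Process(IReadOnlyList<string> unprocessedSeries, int maxDegreeOfParallelism)
    {
        if (unprocessedSeries.Count <= 1)
        {
            // One (or zero) series left: the TPL only adds scheduling overhead here,
            // so just run inline on the calling thread.
            foreach (var series in unprocessedSeries)
            {
                Fingerprint(series);
            }
            return;
        }

        // Several series: fan out, but never start more workers than configured.
        Parallel.ForEach(
            unprocessedSeries,
            new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism },
            Fingerprint);
    }

    private static void Fingerprint(string seriesName)
    {
        // Stand-in for the real ffmpeg/chromaprint work.
        Console.WriteLine($"Fingerprinting {seriesName}...");
    }
}

Whether the single-item path actually explains the 96s runs would need measuring, but it at least takes the thread-pool overhead out of the equation.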


rbjtech
57 minutes ago, Micael456 said:

So far so good, and upon reflection it makes sense that larger file sizes don't affect it so much, as the audio stream is likely to be about the same size whether it's a 720p, 1080p, or even downmuxed 4K rip.

Hmm - it still has to read the file, though, to get to the audio, as it's interleaved with the video.

So taking an example - if I have a 1 GB file and need to read half of it, I still need to 'read' 500 MB of data. If I now have an 80 GB file (a 4K remux) and need to read half of it, then I need to read 40 GB's worth .. With local disk access, this is probably not an issue, but over a LAN, this is going to be a large contributing factor - I think. 


rbjtech
13 minutes ago, chef said:

I think I can explain some of the instances where the task takes a longer time. I think it has to do with attempting parallel threading on an item set with one item.

If there is only one series to process, we still use the Task Parallel Library, when perhaps that is a mistake. Instead we should plow through the series on the main thread. 

It would involve a check of processed items prior to processing the items 🙃. But it might be necessary if we continue to see processing taking 96s. 

Interesting.

Can you expand on this a bit more @chef - I'm not 100% sure I'm with you.

If a show only has a single season, then yes, it will finish 'first' vs a show that has, say, ten seasons (assuming they start at the same time), but then it moves on to a new show and that 'thread' is now freed up?


Micael456
57 minutes ago, rbjtech said:

With local disk access, this is probably not an issue, but over a LAN, this is going to be a large contributing factor - I think

As per my earlier post, mine is going over LAN. It might be the format I use though? I use MKVs almost exclusively (I rip in MKV, sometimes I get copies of MP4s from friends).

 

1 hour ago, chef said:

But it might be necessary if we continue to see processing taking 96s. 

Would now be a bad time to share those TNG fingerprint times then? 😂 They routinely hover above 80s, and spike a bit higher.

[screenshot]

 

Perhaps it is to do with the file size after all? I'll see if I can spot Stargate Atlantis when those rips come up, as they're also ~3 GB an episode. @rbjtech, do you have any similarly sized episodes to compare against? (Even if only relative to the rest of your library.)


chef
52 minutes ago, rbjtech said:

Interesting.

Can you expand on this a bit more @chef - I'm not 100% sure I'm with you.

If a show only has a single season, then yes, it will finish 'first' vs a show that has, say, ten seasons (assuming they start at the same time), but then it moves on to a new show and that 'thread' is now freed up?

https://stackoverflow.com/questions/49281750/parallel-foreach-performance

I think we need a partitioner object in our parallel loop. 

I need to learn this quickly to make sure I'm right. 
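Roughly what that could look like (a sketch only, with illustrative names): by default Parallel.ForEach splits an IList into fixed ranges up front, so one unlucky range full of slow items can become the bottleneck; a load-balancing partitioner hands items out dynamically instead.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

internal static class PartitionedScan
{
    public static void Run(IList<string> episodes, int maxDegreeOfParallelism)
    {
        // loadBalance: true gives a dynamic partitioner, so a thread that finishes
        // its episodes early keeps pulling new ones instead of sitting idle.
        var partitioner = Partitioner.Create(episodes, loadBalance: true);

        Parallel.ForEach(
            partitioner,
            new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism },
            Fingerprint);
    }

    private static void Fingerprint(string episode)
    {
        // Stand-in for the real per-episode fingerprinting work.
        Console.WriteLine($"Fingerprinting {episode}...");
    }
}

For heavy per-item work like spawning ffmpeg, the partitioning overhead matters less than in the Stack Overflow example, so this would need measuring before assuming it helps.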

 

 


chef
2 minutes ago, Micael456 said:

As per my earlier post, mine is going over LAN. It might be the format I use though? I use MKVs almost exclusively (I rip in MKV, sometimes I get copies of MP4s from friends).

Would now be a bad time to share those TNG fingerprint times then? 😂 They routinely hover above 80s, and spike a bit higher.

[screenshot]

Perhaps it is to do with the file size after all? I'll see if I can spot Stargate Atlantis when those rips come up, as they're also ~3 GB an episode. @rbjtech, do you have any similarly sized episodes to compare against? (Even if only relative to the rest of your library.)

Yeah, that is a long time. It has to be the processor. It gets done, but it's a long time.


Micael456
30 minutes ago, chef said:

Yeah, that is a long time. It has to be the processor. It gets done, but it's a long time.

I wonder how it's going to go on a Raspberry Pi then - the Odroid is a fair bit more powerful than one of those. As long as we chunk it OK, though, it should be fine even if it takes a week the first time around!


chef

Oh boy! Here is a bug! 😬

 

We have to remove the 'trash-can' icons from each row of the plugin's UI.

 

Because we free up space by removing fingerprints, if you remove one episode from a processed season and then rescan, the processed season no longer has any fingerprint data to compare the removed episode to, and the plugin will fail.

The only way around it is to remove the entire season and scan again.

 

How to fix:

 

1. Remove the trash-can icon from each row.

2. Rename the "Remove Season Data" button to "Rescan". This will make the user feel less like they are about to lose data.

Edited by chef

chef

v2.0.2.8

  • Removed the trash-can icon from table rows (you can't remove one episode when the complete season's fingerprints are no longer accessible)
  • Fixed table editing (edited timestamps can't be added back to the DB if the fingerprints are NULL)
  • Renamed the button at the bottom of the config page to "Rescan Season"

IntroSkip_v2.0.2.8.zip

Clear browser data.

 

Can we add these to the TODO list:

  • The config dialog needs only one setting for MaxDegreeOfParallelism.
  • MaxDegreeOfParallelism needs a description.
Edited by chef
  • Like 1

rbjtech
15 hours ago, Micael456 said:

Perhaps it is to do with the file size after all? I'll see if I can spot Stargate Atlantis when those rips come up, as they're also ~3 GB an episode. @rbjtech, do you have any similarly sized episodes to compare against? (Even if only relative to the rest of your library.)

I'll run the new version (2.0.2.8) tomorrow with debug and look at the timings, but from memory they were about 9 seconds - and that's with direct-attached storage and an ancient i5 750. Most episodes are in the 1-3 GB range (720p or 1080p).

Edit - sorry, I just realised you are referring to chromaprint timings, not detection timings - these are very different. I'll post an update in a new post based on the latest version.

Edited by rbjtech

rbjtech
6 hours ago, Micael456 said:

I wonder how it's going to go on a Raspberry Pi then - the Odroid is a fair bit more powerful than one of those. As long as we chunk it OK, though, it should be fine even if it takes a week the first time around!

And this is the main point - as long as progress is steady and it can be paused, stopped, and resumed, then it doesn't really matter how long it takes, within reason. This is some heavy processing, so we probably need to set expectations for the first run.

  • Agree 2

Micael456

So it finished the fingerprinting ~1 hour ago, which is reasonable I suppose. From the look of the screenshot below, I'd imagine it will try title detection in another hour; we'll see how that triggers.

[screenshot]

I did have a thought: when we chunk it (for slower systems), what do you think about also having an option to chunk Title Sequence Detection? i.e. once each series is fingerprinted, it runs the detection on it rather than waiting for all the fingerprinting to be complete?

Switching between tasks might slow down the overall processing time, but for those slower systems the "payoff" would come earlier, so it feels faster and like it's making progress.

  • Agree 1

rbjtech
1 hour ago, Micael456 said:

So it finished the fingerprinting ~1 hour ago, which is reasonable I suppose. From the look of the screenshot below, I'd imagine it will try title detection in another hour; we'll see how that triggers.

[screenshot]

I did have a thought: when we chunk it (for slower systems), what do you think about also having an option to chunk Title Sequence Detection? i.e. once each series is fingerprinted, it runs the detection on it rather than waiting for all the fingerprinting to be complete?

Switching between tasks might slow down the overall processing time, but for those slower systems the "payoff" would come earlier, so it feels faster and like it's making progress.

Yes, this was my thought as well earlier in the thread - and I believe chef did look at this as an option. I too suggested doing it after X number of shows/series, or even per show as you have suggested.

There is actually nothing to say you can't do this crudely anyway by using smart scheduling by interval - i.e. you run FP for 1 hr, then you run Detect for 1 hr, on a round robin. And ideally the 3rd stage is to write the results to the chapters DB - then start again with another round of FP, etc.

For me personally, I think that works better, as I agree that if people need to wait, say, 2+ days for FP to finish, they are going to think it doesn't work ('no intro points are being created in my media') or will just kill the process because 'it's not doing much' ..
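As a rough sketch of that round-robin shape (the three phase methods are hypothetical stand-ins for the plugin's real work, and the time budget would be whatever the schedule allows):

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

internal static class RoundRobinScan
{
    // Give each phase a time budget so intro points start appearing long before
    // the whole library has been fingerprinted.
    public static async Task RunAsync(TimeSpan phaseBudget, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var moreToFingerprint = await RunPhaseAsync(FingerprintNextSeasonAsync, phaseBudget, token);
            var moreToDetect = await RunPhaseAsync(DetectNextSeasonAsync, phaseBudget, token);
            await RunPhaseAsync(WritePendingChapterPointsAsync, phaseBudget, token);

            if (!moreToFingerprint && !moreToDetect)
            {
                break; // nothing left to do
            }
        }
    }

    // Run one phase repeatedly until its budget is spent or it reports no more work.
    // Returns true if the phase still has work remaining.
    private static async Task<bool> RunPhaseAsync(
        Func<CancellationToken, Task<bool>> phase, TimeSpan budget, CancellationToken token)
    {
        var clock = Stopwatch.StartNew();
        var workRemains = true;
        while (workRemains && clock.Elapsed < budget && !token.IsCancellationRequested)
        {
            workRemains = await phase(token);
        }
        return workRemains;
    }

    // Hypothetical stand-ins: each processes one season and returns whether more remain.
    private static Task<bool> FingerprintNextSeasonAsync(CancellationToken token) => Task.FromResult(false);
    private static Task<bool> DetectNextSeasonAsync(CancellationToken token) => Task.FromResult(false);
    private static Task<bool> WritePendingChapterPointsAsync(CancellationToken token) => Task.FromResult(false);
}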


Micael456

Hit another resource issue with my main setup.

Seems that the "refresh internet channels" scheduled task and the Episode Title Sequence detection tasks don't play well with each other. My instance completely froze up- no web interface/ not responding to clients. I had to switch the odroid off and on again. Unfortunately I didn't have debug logs enabled at that point, though I've re-enabled them now incase it happens again.

I'm honestly not sure which one is at fault, since they both seem to show up as "EmbyServer" processes. But clearly limited to the CPU on the box.

[screenshot]

@chef @Luke Is there any way the sequence detection can detect whether any other scheduled tasks are running and pause automatically? The audio fingerprinting didn't seem to be affected; I imagine that's because it spawns its own separate ffmpeg processes.

[screenshot]

Edited by Micael456

Micael456

Looks like I can't edit my previous post. It crashed again and I lost the Emby server process. When I restart it, which logs (if any) would be useful?

[screenshots]

Edited by Micael456

chef
3 hours ago, Micael456 said:

Hit another resource issue with my main setup.

It seems that the "Refresh Internet Channels" scheduled task and the Episode Title Sequence Detection task don't play well with each other. My instance completely froze up - no web interface, not responding to clients. I had to switch the Odroid off and on again. Unfortunately I didn't have debug logs enabled at that point, though I've re-enabled them now in case it happens again.

I'm honestly not sure which one is at fault, since they both seem to show up as "EmbyServer" processes, but it's clearly limited by the CPU on the box.

[screenshot]

@chef @Luke Is there any way the sequence detection can detect whether any other scheduled tasks are running and pause automatically? The audio fingerprinting didn't seem to be affected; I imagine that's because it spawns its own separate ffmpeg processes.

[screenshot]

Yes. I can try to make the task stop for other tasks. 
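One way that could look (a sketch against a hypothetical interface; the real Emby API for enumerating running scheduled tasks, e.g. ITaskManager, may expose this differently):

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstraction over "which scheduled tasks are running right now?".
internal interface IRunningTaskQuery
{
    // Names of scheduled tasks currently running, excluding this plugin's own task.
    string[] GetOtherRunningTaskNames();
}

internal static class CooperativeDetection
{
    // Called before (and between) heavy detection batches: if something like
    // "Refresh Internet Channels" is running, back off and poll until it finishes.
    public static async Task WaitForQuietServerAsync(IRunningTaskQuery tasks, CancellationToken token)
    {
        while (tasks.GetOtherRunningTaskNames().Any())
        {
            token.ThrowIfCancellationRequested();
            await Task.Delay(TimeSpan.FromSeconds(30), token);
        }
    }
}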


This topic is now closed to further replies.