

Darkseidd
Posted

I take it this request isn't official yet?
Voting it up.

  • Like 1
  • 7 months later...
Posted

I would very much like to see this included in a coming release as well! 🙏

  • Like 1
Posted

There are features where you know all about them - how they can work, how they need to be implemented - and where you are sure that they will work out successfully.

Other features require a certain amount of research to evaluate whether they are doable, what effort they would take and whether the results would be reasonably useful.

Another category of features exists where you know up front that there's nothing to win: you know you can't deliver it in a way that satisfies users' expectations, and you even know that the feature would eventually cause much more dissatisfaction than not providing it at all.

Unfortunately, this feature falls into the latter category. Even though it seems to be so simple, and even though some local players support it (somewhat), this feature would suck in too many cases, even with high effort put into it.

To be a bit more specific: There will be offsets between audio and video. And not only offsets but also drifts. These two things cannot be fixed automatically by the server, as it takes a human to assess and adjust them. This means in turn that we would need controls for adjusting these two parameters - in all clients.
But besides adjustment controls, many clients can't handle such things locally in terms of playback, so the server would need to do those adjustments - but the server, which is transcoding and/or muxing audio and video together, is often way ahead of time in processing, so adjustments made at the client won't happen instantly. Yet only with instant changes can adjustments be made properly.
Also, when changing audio/video offsets in a live-transcoding/live-streaming scenario, HLS clients may fail and stop playback, as players usually sync by audio, not by video. For players that do support offsets, the range is quite limited, because the adjustment happens at a point where video frames are already decompressed, and raw video frames take a huge amount of memory per frame.

Then there's transcoding itself at the server. We support a gigantic number of different transcoding setups, and we know from subtitles that the data-flow behavior in ffmpeg is fundamentally different when a stream is involved that comes from a separate external file. For subtitles it doesn't matter that much, because all subtitle frames can be loaded into memory at once, but for audio it can be different. As always, we don't have a single case to consider, but ALL cases, and there's no room for an additional factor and potential point of error.
It's also important to understand that you cannot compare a local player to what Emby Server is doing. Emby Server is an intermediate instance in the chain of media delivery to clients. It doesn't output AV to local hardware (display/sound) directly; instead it re-encodes/remuxes media, which is finally demuxed, decoded and presented by clients/hardware/software that is usually a lot less capable (than VLC, for example).

The deeper you look into this, the more difficult it gets, and what I've mentioned so far is just scratching the surface. While everything is always possible for the future, I think we shouldn't make any false promises. It's too unlikely that this could ever be part of our playback procedures, not just because these are quite complex already, but also because the feature wouldn't match our own standards in terms of quality and reliability and would cause an endless stream of user reports instead.

 

A more realistic feature might be some kind of "interactive remuxing", which allows you to set offset and drift values interactively through some UI in the server administration area (or maybe through a plugin) and mux the external audio track into the video file, so that eventual playback will use the prepared file.
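
Purely as an illustration of what such an interactive remuxing might run under the hood (assuming ffmpeg, with made-up file names and values): say a human has determined that the external audio starts 0.35 s too early and runs about 4 % too fast due to PAL speedup (25 vs. 23.976 fps).

```sh
# 1) Correct the drift on the audio alone - changing the speed requires re-encoding:
ffmpeg -i "Movie (2019).mka" \
       -filter:a "atempo=0.95904" -c:a ac3 \
       "Movie (2019).fixed.mka"

# 2) Mux the corrected track into the video, delaying it by 0.35 s;
#    the existing streams are stream-copied untouched:
ffmpeg -i "Movie (2019).mkv" -itsoffset 0.35 -i "Movie (2019).fixed.mka" \
       -map 0 -map 1:a -c copy \
       "Movie (2019) - remux.mkv"
```

The "interactive" part would be exactly those two numbers: nobody can compute them automatically, so a UI would let the user dial them in before the remux is run.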

  • Like 3
sydlexius
Posted
9 hours ago, softworkz said:

Thanks for your detailed explanation. I haven't encountered any issues with the Rifftrax media, but I understand what you're driving at: as a general solution, it's impractical.

  • Like 1
  • Thanks 1
Posted
16 minutes ago, sydlexius said:

Thanks for your detailed explanation. I haven't encountered any issues with the Rifftrax media, but I understand what you're driving at: as a general solution, it's impractical.

Yea, it's not something that can be "sold" as a feature. 

You are "the one guy" who says:

On 2/21/2018 at 12:14 AM, sydlexius said:

I wouldn't expect Emby to handle the offset calculations needed to sync the audio.  It's not too difficult for me to modify that now.

But all others will say: "Hey the feature isn't working, audio and video are out of sync!"

  • Like 2
sydlexius
Posted
2 minutes ago, softworkz said:

Yea, it's not something that can be "sold" as a feature. 

You are "the one guy" who says:

But all others will say: "Hey the feature isn't working, audio and video are out of sync!"

Yeah, my narrow use-case does allow for a myopic view like that :) I'm just going to work on developing a better workflow to integrate my audio tracks with my existing media.  I still think there should be a "lazy-friendly" solution to media like this!

  • Like 1
Posted
48 minutes ago, sydlexius said:

Yeah, my narrow use-case does allow for a myopic view like that :) I'm just going to work on developing a better workflow to integrate my audio tracks with my existing media.  I still think there should be a "lazy-friendly" solution to media like this!

One way would be some kind of super-charged client which requests both the main A/V item and the extra audio item simultaneously - both for direct play. That would be a situation comparable to those other players that were mentioned.

It's not a very flexible solution, but it's more doable, even though the same things I mentioned above still apply:
The client player would need to have controls for offset and drift adjustment.
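
For a rough idea of what such a client would have to do locally: mpv, for example, can already be driven like this by hand (file names and the delay value below are placeholders):

```sh
# Play the video together with an external audio file and apply a manual
# offset; drift still cannot be corrected this way.
mpv "Movie (2019).mkv" \
    --audio-file="Movie (2019).mka" \
    --audio-delay=0.35
# Use the '#' key (or --aid=N) to switch to the external track if it isn't
# selected by default; Ctrl++ / Ctrl+- nudge the audio delay in 0.1 s steps.
```

That is essentially the "controls for offset adjustment" mentioned above, just living entirely in the player.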

  • 4 months later...
Posted

External audio tracks are working on Kodi with Embycon.

I tried it myself with these files:

  • Weathering With You (2019).51ch.mka
  • Weathering With You (2019).mkv
  • Weathering With You (2019).ass

I can select the 5.1 sound track while watching on Kodi.

People have been talking about this for five years with no progress.

Choose Kodi and Embycon!

  • Like 2
sydlexius
Posted
6 hours ago, dfsdf said:

Thanks for the report!  Too bad Kodi has too low a WAF/SAF to adopt in my household!

HawkXP71
Posted

+1 for this functionality.

However, I see this as a plugin.

The plugin, when run, would look for paired files: the *.mkv (or whatever video format) and the audio file.
Then it would remux the video, copying the video stream and adding the audio file as a new stream, and save the result as a new version of the MKV (so "*-Audio Mux.mkv") - see the sketch below.

Then, using the "Auto Movie Version Grouping" plugin, both versions of the MKV would be picked up and merged into a version pull-down for the video.
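
If ffmpeg did that remux step, it would be cheap, since nothing gets re-encoded; a rough sketch with hypothetical file names, producing the video stream plus the external audio as described above:

```sh
# Copy the video stream and add the external audio file as a new stream;
# no re-encoding takes place.
ffmpeg -i "Movie.mkv" -i "Movie.mka" \
       -map 0:v -map 1:a -c copy \
       "Movie - Audio Mux.mkv"
```

The new file then sits next to the original, ready to be grouped as a second version.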

  • 7 months later...
Posted

Supporting external audio for direct play only seems like a good solution. I know it's possible to use another client in some way, but it's not an elegant process and it's not well supported across platforms. Is it possible to add this functionality to Emby Theatre? It would be very meaningful for many use cases. Personally, I have many external audio files similar in size to their paired video files, which makes merging them a poor option. As far as I know, Jellyfin has external audio file support now, though I'm not clear whether it's achieved by its server or its client.

On 12/13/2022 at 2:51 AM, softworkz said:

 

  • Like 2
  • Agree 2
  • 4 months later...
Posted

Bump...

Jellyfin supports external audio tracks just fine. Even as MKA. Would love to see this in Emby.

  • Agree 1
  • 2 months later...
Posted (edited)

Hi, it would be great to see Emby support external audio files as well. This feature was suggested a long time ago; hopefully we'll finally see it in Emby.

Any update on this matter?

Edited by kyma
  • Agree 1
  • 4 months later...
Posted

I recently purchased a lifetime Emby Premiere subscription, and I’m very happy with it. In Jellyfin, this feature is available and works seamlessly without any configuration. I was wondering, why hasn’t this feature been implemented in Emby yet? I’m not trying to say it’s easy, but has it been considered?

From a post by softworkz, I understand that implementing this feature while maintaining the high quality standards you’ve set is very challenging. However, the responsibility for audio quality and synchronization doesn’t rest solely on you, does it? I also believe that the fact this feature might not be perfect 100% of the time shouldn’t stop it from being implemented.

That said, I’m very happy with Emby and want to thank the team for being so responsive—it truly means a lot.

  • Agree 1
  • Thanks 1
  • 1 month later...
visproduction
Posted (edited)

Can this work?

  1. The audio track is in some codec that normally needs transcoding for a TV or other output.
  2. A new Emby option in each user's settings could turn off audio conversion, so Emby ignores the audio and no longer transcodes it - it just sends the video for playback. (Any user turning this on should know that audio playback has to go to third-party speakers. This kind of software switch is used with high-end Blu-ray players. Perhaps the user could get an optional pop-up reminder and a checkbox to turn it off for a single movie. There should also be an alert that transcoding may start with a delay and that audio may be reduced to 2 channels.)
  3. The user has an audio card with a digital out that can go to the digital in of whatever AV amplification they use together with TV video playback.
  4. The third-party AV amp accepts the audio only and processes it for playback to separate non-TV speakers (or sometimes the user sets the TV as a center surround speaker). These amps have delay settings that can be used, possibly together with a TV video playback delay, to lock in audio and video as needed. Once a delay setup is found, it should no longer be out of sync. The audio is always pulled directly from the original media file.

Even with such a new Emby settings feature, I would guess fewer than 5% of users have separate AV surround sound hardware and speakers. It would probably be best for such a user to log in to an account that has this audio transcoding turned off when they want to use a full surround sound AV amp and speakers, and to keep a different login for normal use. Or they could remember to change the setting, or handle it per playback with the optional pop-up switch.

Is this blocked from working because there is no "off switch" that makes transcoding ignore the audio? Would such a new feature switch make this all possible for users with the right AV amp and speakers? There should be no sync issues, because any pause or stutter in the media causes exactly the same change in the audio. I think this feature would not require anything extra for sync, since that would be handled by the digital audio processing of the third-party AV amp.

Hope this makes sense.

 

 

 

Edited by visproduction
  • 2 months later...
Posted

Vote for this; Emby just needs to handle it like external subtitles. (Emby does not need to handle offsets; issues such as track deviation are something users should take care of themselves, and Jellyfin supports this feature well.)

Posted

Can someone please provide a sample video and corresponding audio track? Thanks.

Posted
2 hours ago, Luke said:

Can someone please provide a sample video and corresponding audio track? Thanks.

samples: https://drive.google.com/drive/folders/1Jv68K8vCaXqjara8WzuNJvALukHTUzn3


The only thing Emby needs to do is present and play the external audio track, like Jellyfin does (docs: https://jellyfin.org/docs/general/server/media/external-files/).
 

Will any user complain that the audio track is out of sync with the video track? No! That's a false premise: when someone has an external audio track, they naturally already know about video containers and how to encapsulate the audio track into one.

The truth of the matter is that in many use cases encapsulating the audio track into a container is a very invasive process. The user just needs Emby to display and play it like an external subtitle. Does Emby care whether an external subtitle is in sync or not? No! That's something the user needs to handle!

Posted

OK, you can try this out on the beta channel in the next 4.9 build. Movies and episodes only, and they must be in movie or TV libraries (based on content type). We're not going to support this in a mixed content library right now, as that would get messy with its own supported audio file features.

Anyway, this is also extremely sub-optimal because you're going to lose direct play in order to accomplish this, but I guess you already knew that.

At some point we might be able to go into the native android, iOS and windows apps and add support for sending the external audio directly if the video player can accept that. On platforms that use mpv, I'm sure that will be possible, but for everything else I'm not so sure.

Posted
13 hours ago, syvcxm said:

Will any user complain that the audio track is out of sync with the video track? No! That's a false premise: when someone has an external audio track, they naturally already know about video containers and how to encapsulate the audio track into one.

Then how come people complain all the time that their external subs are out of sync?  :) 

Posted (edited)
1 hour ago, ebr said:

Then how come people complain all the time that their external subs are out of sync?  :) 

If a user can't even adjust the timing of an external subtitle and only complains about it online, is his complaint worth heeding?

If Emby comes across a user complaining about out-of-sync external subtitles, would Emby suggest only that he mux the subtitles into the container, and remove support for external subtitles, instead of letting him adjust the timing?

Going back to the external audio track issue: if a user really encounters an external audio track that is out of sync with the video, is it worth paying attention to him if he chooses to complain about it rather than encapsulating the audio into a container and adjusting the track delay himself?

If the Emby dev team is simply afraid of people complaining about audio and video timing being out of sync, would you remove support for external subtitles because of the same complaint? Is it reasonable to shelve a feature that 100 users have voted for over several years because of a complaint from one hands-off user? Not to mention that Jellyfin has already implemented it.

Why should external audio be supported? Let me give you an example: I have a commentary audio track, and I have a million reasons why I don't want to encapsulate it in a container, so why can't Emby provide playback support? If this audio track is really out of sync, then I would naturally encapsulate it into a container and adjust the delay myself instead of coming to the community and complaining about it like a baby!

Edited by syvcxm
Posted
23 minutes ago, syvcxm said:

By the way, even Plex has a workaround for external audio tracks.

Posted
Quote

If the Emby dev team is simply afraid of people complaining about audio and video timing being out of sync

Nobody is saying that.

Posted (edited)
5 hours ago, syvcxm said:

If a user can't even adjust the timing of an external subtitle and only complains about it online, is his complaint worth heeding?

If Emby comes across a user complaining about out-of-sync external subtitles, would Emby suggest only that he mux the subtitles into the container, and remove support for external subtitles, instead of letting him adjust the timing?

You can adjust the offset of external subs in some Emby clients.
Here's a screen capture from the Emby Windows client.

[Screenshot: subtitle offset control in the Emby Windows client]

IMHO, external audio will be far more problematic because the slightest bit of offset will show as lip sync issues.  I know this is something that drives me crazy. I can't watch movies or shows when the audio isn't 100% in sync with the visuals.

Subtitles don't need to have perfect timing but audio does.

Edited by Carlo
  • Agree 2
Posted
22 hours ago, Luke said:

Thanks Luke, I have tested the external audio track with beta server 4.9.0.49; it works as expected and I was able to play it correctly through a third-party client.
