Emby 4k hdr streaming



nickwrgggdd123

Does Emby support streaming HDR10? Someone on avsforum.com told me no. I'm getting a new 2018 P-Series Vizio and am curious what I'd have to do to stream my 4K HDR rips.

 



Hi, it's not so much about what Emby supports as what your devices support. Yes, Emby Server can handle HDR10.


Guest asrequested

Is the server (assuming it has enough power) able to transcode 4K 10-bit HDR to 4K/HD 8-bit SDR?

It can transcode it, but HDR is metadata, which ffmpeg doesn't support right now.


mastrmind11

deleted.  doofus beat me


Waldonnis

It can transcode it, but HDR is metadata, which ffmpeg doesn't support right now.

 

You actually can encode with HDR metadata using ffmpeg/libx265, but you have to pass some of the parameters to x265 via x265-params and add a few additional flags (I'm encoding about 15 HDR files right now). Getting the metadata from the source requires some ffprobe-fu, since it's not stored at the container level, but it's possible if you read a single frame's side data. You just have to pass that metadata appropriately to x265 and it works nicely. Tonemapping to 8-bit is a bit more involved, comparatively.
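For anyone wanting to try it, here's a rough sketch of such an encode. The mastering-display string and MaxCLL/MaxFALL numbers below are placeholders for a typical P3/D65, 1000-nit master, not values from any particular source — substitute whatever ffprobe reports for yours:

```shell
# Placeholder mastering-display metadata (typical P3 primaries, D65 white
# point, 1000-nit peak) -- replace with the values from your source.
MASTER_DISPLAY="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)"

X265_PARAMS="hdr-opt=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084"
X265_PARAMS="$X265_PARAMS:colormatrix=bt2020nc:master-display=$MASTER_DISPLAY:max-cll=1000,400"

# 10-bit HEVC encode that re-signals the HDR10 metadata in the new bitstream
ffmpeg -i input.mkv -map 0 -c:v libx265 -preset slow -crf 18 \
       -pix_fmt yuv420p10le -x265-params "$X265_PARAMS" \
       -c:a copy -c:s copy output.mkv
```

The repeat-headers flag is what keeps the SEI/VUI info signaled throughout the stream rather than only at the start.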

 

Using hardware encoding for HDR content is iffy or maybe not currently possible, though.  NVENC didn't support including the metadata in the SEI packets last time I looked, and I have no idea if/how Intel handles it.

 

Here's what I use to inspect HDR metadata/side data, in case anyone's curious:

ffprobe -v quiet -select_streams v -show_frames -read_intervals "%+#1" -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list,pix_fmt" -of "default=nw=1" -i file.mkv

...and I'm kinda back around again.  Between relatives visiting for two months, some minor health issues, and my computer being tied up for days at a time with HEVC encoding, I couldn't find much time to play or help for a bit.  I'm still more time-limited than before, but I'll try to pop by more often.


Guest asrequested


 

Well, I was referring to what the server can do :)

 

But that is very interesting. So it's only possible with x265? From what little I've read about muxing/ripping HDR content, the big issue seems to be layering: HDR content is primarily made with two layers, and remuxing to a single layer is proving tricky, which limits you to certain containers. Is that correct? I didn't dig into it very deeply.

 

And what about using a perceptual quantizer to apply the metadata directly to the render, and not pass it through to the remux? Possible?


Waldonnis


 

The dual-layer issues are more about Dolby Vision content, but since DV is essentially a black box anyway, transcoding DV files using the DV mapping isn't really possible without the proper Dolby tools (read: lots of cash).  I suppose HDR10+ is in the same boat albeit with an open spec, but I have yet to see any HDR10+ content in the wild so far (coming soon, though).  One could probably try to rip a DV title manually to preserve the DV data in a compatible container, but I haven't tried it and transcoding it would likely be limited to using the HDR10 mapping info.  I may give remuxing DV a shot some time just for grins since I probably have 1-2 dual-layer DV sources somewhere, but even detecting DV properly isn't easy without Dolby's stuff.

 

HDR mastering metadata is encoded in the bitstream as SEI messages and VUI info, so any remux of the elementary stream would include it. The rationale, as I understand it, is that the metadata is signaled frequently enough that it's essentially always available to the decoder and image processors (for tonemapping). I'm a bit confused by the PQ question, though, if you could elaborate. I suppose you could inject new metadata into the bitstream so that the renderer maps it differently than the original metadata would (or in the absence of metadata entirely), though it would probably require a re-encode unless you modify the bitstream itself (I've been tempted to do just that on content with elevated black levels).

 

The whole HDR/DV thing is a big honkin' mess for content creators and distributors, and it goes way beyond transfer functions.  From what I've been hearing, mastering for "SDR", HDR10, HDR10+, and DV alone is maddening due to having different tools and pipelines, so adding the other "moving parts" of the industry just makes it worse  :wacko:


Guest asrequested

For the PQ, mpv uses it to apply the colorspace when playing HDR content, so the display doesn't need to be HDR capable. I was wondering if the same process could be applied during transcode, negating the need to pass the HDR metadata. Here's the post where I was looking into what mpv does with HDR.


Waldonnis

Ah, okay, now I know what you're talking about.  First off, Windows itself does not handle HDR well at all, which is pretty much the reason for you asking and your issues before.  I spent probably about as much time as you did trying to jiggle that handle with much the same results (I have a 4k HDR monitor as my primary and now a 4k television as well).  madshi made a post about a year ago about how Microsoft thinks HDR should work, which is logical to nobody except Microsoft  :P

 

As for HDR signaling, during playback, metadata is signaled in the output stage.  Most playback devices simply "pass through" the info properly, but playback on Windows is a little different.  In the case of Windows' awful HDR arrangement, I don't think Windows on its own will pass through (or even consider) the signaling to the video driver unless you use their HDR mode.  From what I can tell, all HDR functionality in Windows is locked behind that stupid switch, so if it's off, the SMPTE ST 2084 info isn't even used or provided to the driver for output.  Thankfully, some video drivers do have a private API for such things, which basically allows for the metadata to be provided to the driver directly so it can handle the signaling on its own (think of it like metadata bypassing the bouncer by going in the side door).  Unfortunately, my knowledge of the hardware side of HDMI transmission is limited, so I can't really speak to how everything is "muxed" on the HDMI side of things...I just assume the driver's API is doing the right thing if the output is what I expect.  I have no doubt that the API is provided in the video driver primarily to allow games to leverage HDR in full screen mode directly, but it can thankfully be used for video playback as well since Microsoft can't seem to understand that "all or nothing" isn't the right approach for any use case....

 

When it comes to HDR content on non-HDR devices, it would need to be tonemapped since the potential colour range on the HDR source would be much larger than the non-HDR monitor can display (leads to everything looking washed out usually).  Things like madVR can handle this with internal tonemapping functions (very well too) since its configuration includes info on the attached monitors so it knows its target (which can and should be adjusted if you have your monitor calibrated to a specific colorspace or the EDID info just sucks).  Even HDR sets do tonemapping on HDR sources since the mastering display specs are often different/better than consumer televisions' capabilities when it comes to coverage, WP, and luminance...it's the whole point of the metadata, actually.  The difference when going HDR->SDR, though, is that you don't necessarily know the destination monitor's information and there's no built-in processing in the monitor to do the tonemapping.  You can probably assume a few things (1-200 nits, etc) and still get good results with some of the known techniques and filters, though.

 

I actually tonemap HDR->SDR when extracting frames from HDR sources for still shots or contact-sheet-like image output already using ffmpeg's tonemap filter and am planning on offering up the syntax for BIF/chapter thumbnail generation in Emby so I don't have to look at washed out thumbs from HDR sources  :D   I just need to tweak it a bit and settle on an algorithm (so far, leaning towards hable, but mobius has some things going for it as well).
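For the curious, that extraction looks roughly like this — a sketch, not gospel: the npl value and the hable choice are my own assumptions, and zscale requires an ffmpeg build with libzimg:

```shell
# HDR10 -> SDR (Rec.709) single-frame grab using ffmpeg's tonemap filter.
# zscale handles the transfer/primaries/matrix conversions; tonemap=hable
# is one reasonable algorithm (mobius being the other contender mentioned).
ffmpeg -ss 00:10:00 -i input.mkv -frames:v 1 -vf "\
zscale=transfer=linear:npl=100,format=gbrpf32le,\
zscale=primaries=bt709,\
tonemap=tonemap=hable:desat=0,\
zscale=transfer=bt709:matrix=bt709:range=tv,format=yuv420p" thumb.png
```

The linearize-then-map-then-requantize order matters: tonemap expects linear light, so the first zscale pass undoes the PQ curve before the operator runs.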


Guest asrequested

MPV seems to manage it very well. I've compared it to the output from the SmartCast app on my TV (which handles HDR directly), and I can't tell the difference. I was just thinking: wouldn't it be better to apply the tonemapping during transcode, like mpv does? That way a user without an HDR display would at least get a good picture, without the complication of figuring out why they can't, and the server wouldn't need to worry about passing metadata to a display that doesn't support it.


Waldonnis


 

The real question, then, isn't about injecting metadata, it's about when/if to tonemap.  mpv already knows the answer because whatever render API they're using will tell them what the display "object" can handle, so it can decide to tonemap on its own, then just silently do it.  In a Roku/Shield/SmartTV "device" playback transcoding scenario, though, you'd need to rely on the devices representing their capabilities well enough to make that determination.  If they do their own tonemapping when connected to 8bit displays, then there's no point in tonemapping during transcode (let the device do it internally; it's easier and computationally cheaper).  If they don't, then you'd have to make some assumptions, like "1080p max or no HEVC support = Rec.709 tonemap".  Bear in mind, something like mpv isn't "applying" HDR, it's remapping the colour range to fit in the target's colour space/coverage.  HDR televisions do the same thing, but it's more of a scaling operation than a conversion (since hardly any monitors fully support the standards' limits; good luck finding a 10k nit monitor lol).
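To make that concrete, the server-side decision might sketch out like this (purely hypothetical field names — this is not Emby's actual device-profile schema):

```python
def needs_tonemap(source_is_hdr: bool, client: dict) -> bool:
    """Hypothetical sketch: should the server tonemap to Rec.709 during a
    transcode? The capability fields are illustrative only."""
    if not source_is_hdr:
        return False                 # nothing to map
    if client.get("supports_hdr10"):
        return False                 # pass the metadata through instead
    # Rule of thumb from above: 1080p max or no HEVC support => Rec.709 target
    if not client.get("supports_hevc") or client.get("max_height", 0) <= 1080:
        return True
    # Unknown or ambiguous capabilities: tonemapping is the safer default
    return True

print(needs_tonemap(True, {"supports_hevc": True, "max_height": 1080}))  # → True
```

The fallthrough cases are the whole problem: everything hinges on clients reporting their capabilities honestly.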

 

In my experience, most people won't notice the difference between an HDR source that's been professionally tonemapped to 8bit and the unaltered HDR source itself, but only because most movies/videos rarely use anywhere near the entire wide gamut to begin with.  For most films/shows, the "HDR additions" are fairly minor/subtle overall and you can usually regrade to give a scene the same "feel" even with the reduced available colour range.  It's probably safe to assume tonemapping is needed if you couldn't tell otherwise because of this, but it'll be painfully obvious on sources that actually use a bigger chunk of BT.2020 (you'll notice banding or a general "dull"/flat feel).  Take an HDR video of a campfire or some of those 4k parrot videos that really use the extra colours...you'll see the change more easily.


Guest asrequested

Right, but in a transcode scenario, ffprobe should return enough information to determine what the output should be.


Jdiesel

I expect HDR metadata to be the new DD/DTS debate. Just as people complain when multichannel PCM decoded at the client doesn't light up the little DD/DTS logo on their AVR, even though the audio is identical to the bitstreamed version, I suspect people will complain about not getting the HDR notification on their display even if the final output is identical.


Guest asrequested

Ha! I totally see that happening :D Not knowing what to look for in the picture, they need to see those three little letters.


Waldonnis

Right, but in a transcode scenario, ffprobe should return enough information to determine what the output should be.

 

Except you can't know that from the input alone. If it's an HDR-to-HDR transcode, yes, ffprobe can provide enough info to preserve (or really, reproduce) the HDR metadata/signaling. For SDR targets, though, you need to know that the playback device is non-HDR. The only way to "know" the metadata currently is to read a single frame from the video and look at its side data, but I suspect we may see some improvement in that area in ffprobe eventually. mediainfo already incorporates a similar scheme, but its reporting is lacking and doesn't always output exact mastering display values (it seems to generalise ranges of values and just show strings like BT.709 instead, which is useless for transcoding purposes).

 

I expect HDR metadata to be the new DD/DTS debate. Just as people complain when multichannel PCM decoded at the client doesn't light up the little DD/DTS logo on their AVR, even though the audio is identical to the bitstreamed version, I suspect people will complain about not getting the HDR notification on their display even if the final output is identical.

 

Ha!  Yeah, if HDR doesn't pop up in the corner, they will bust out the torches and barking dogs.  We already have that with "real vs fake 4k", though, so you'd have to splinter the membership of that rather silly mob first.


Guest asrequested


 

Ah yeah, I keep forgetting about needing the correct colorspace. I was thinking that using PQ might negate all of that, but of course it wouldn't.


Waldonnis

Ah yeah, I keep forgetting about needing the correct colorspace. I was thinking that using PQ might negate all of that, but of course it wouldn't.

 

Yep, the old "what to do with these extra bits' worth of colour" problem  :D  To be fair, my more distant past work history includes producing some print media, so grading/conversions are things I consider out of sheer habit.  I could tell some real colour management-related goofs/horror stories, though, since most of the people supplying the source material had no clue about that kinda thing (mostly medical doctors and scientists).  Gotta love those "that's not supposed to be blue" moments an hour before the deliverables are due at the print shop...


  • 4 months later...
AmericanCrisis

Hey guys, help me understand this more completely. At the moment, is there no way to take a 4K/UHD HDR MKV file and have Emby transcode it to a streaming-friendly 3.2 Mbps SDR MP4?

 

I have a single title on the server and it works great locally via Nvidia Shield (HDR10 + Atmos, etc.); however, remotely, and even playing SDR over the LAN, the colors are incorrect, as expected. My initial thought was re-encoding the MKV via Handbrake and placing an SDR 1080p version on the server, but stumbling upon this thread led me to conclude that, because of tonemapping, re-encoding wouldn't really work?


Guest asrequested

Presently, ffmpeg isn't configured to tonemap HDR. But if you direct play on a device that's connected to an SDR display, some of them can tonemap just fine.


AmericanCrisis

Presently, ffmpeg isn't configured to tonemap HDR. But if you direct play on a device that's connected to an SDR display, some of them can tonemap just fine.

 

I think I figured it out. I'm using a Mi Box connected to an SDR 1080p screen. The Android TV Emby app would not correctly play HDR content; however, the Kodi app with the Emby addon direct played the HDR content accurately.


AmericanCrisis

No, it never was transcoding. The problem was that I'm not getting good HDR-to-SDR from the native Emby app for Android TV. The colors were extremely washed out and yellow. Using Kodi with the Emby plugin, I was getting good results. On my HDR display the Emby app works great via the Shield, so no problem there.

 

These were all direct play over LAN so no transcoding.

 

None of my out-of-network users have UHD/HDR displays, so I'm not sure how they'll do with a transcoded file. I created a separate "UHD" folder, so hopefully users will realize they need a UHD/HDR display to play those files. I wonder, though, whether a transcoded UHD/HDR file will stream well even when the display is UHD/HDR? Maybe I'll get a user to upgrade their display to test this. I have my streaming capped at 3.2 Mbps, so it'll be a heavily compressed file they get.


Guest asrequested

Transcoding 4k HDR is a beast. Avoid it. Put lower versions in the folder.

 

https://github.com/MediaBrowser/Wiki/wiki/Movie%20naming#multi-version-movies

 

Or you can do as I've done, and make a 4k library, and don't share it with those users that can't play them.

 

As for your Mi Box, when it updated to Oreo, an HDR-to-SDR setting was added. By default it's on Auto; you may want to try setting it to always on.


AmericanCrisis

That's a great idea! But now my obstacle would be how to re-encode a UHD HDR file to HD SDR, which isn't an easy task from what I've read. I only had access to the UHD disc; otherwise I'd have ripped the HD disc as well.

 

I do have a separate UHD library for the UHD file, and it's accessible to all users. Removing its access for the time being is a good idea.

 

The Mi Box had HDR to SDR set to "on" rather than "auto" when I tested that. Is it a technical problem that the Emby app doesn't properly display HDR content as SDR? After getting it to work in Kodi, I figured it was a limitation of the Emby app and not the Mi Box or the display.

