What is the reason behind no HDR tone mapping


rechigo

I have some questions in regards to HDR tone mapping and Emby. 

 

Why hasn't it been implemented yet? Is it a software limitation? From what I've looked at, it seems to be possible to do this with FFmpeg but I have yet to find a straight answer to this question. Are we still waiting for this to be implemented into FFmpeg? Is it just not worth the time because the majority of servers would not have the processing power to achieve tone mapping in real time? 

 

@softworkz I might be completely wrong about everything I theorized, but I'm guessing you have some more insight into this.



Doing accurate tone-mapping is a rather complicated matter.

There are different standards, individual transfer functions, and different ways to render the output - e.g. dithering (you might know dithering from converting pictures to the GIF format, which has only 256 or fewer colors available) - and other things to consider.
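To make the "transfer functions" point concrete: HDR10 uses the SMPTE ST 2084 "PQ" transfer function, whose decoding side (EOTF) can be written directly from the published constants. A minimal Python sketch for illustration - the function and variable names are mine, not from any Emby or ffmpeg code:

```python
# SMPTE ST 2084 (PQ) EOTF: maps a normalized PQ code value in [0, 1]
# to display luminance as a fraction of the 10,000 nit peak.
# The constants come straight from the ST 2084 specification.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """Decode a PQ-encoded signal in [0, 1] to linear light in [0, 1],
    where 1.0 corresponds to 10,000 nits."""
    e = signal ** (1 / M2)
    num = max(e - C1, 0.0)
    den = C2 - C3 * e
    return (num / den) ** (1 / M1)
```

Tone mapping has to undo this encoding, compress the resulting linear luminance into the SDR range, and re-encode with an SDR-style gamma - which is why a plain bit-depth conversion alone can't produce a correct picture.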

 

You are right - there is a tone-mapping filter in ffmpeg, but it implements just a few simple algorithms. If we didn't have to care about hardware acceleration, we could have added that already. But we need to provide a consistent experience - with and without hw acceleration - and to do that, we need to be able to do tone-mapping with hw acceleration as well, at least for the major acceleration methods. Hw-accelerated tone-mapping is only just starting to become available for some of them.

 

Our immediate goal is a bit smaller than the above: being able to do hard conversions (no mapping, just cutting the last 2 bits off) in a reliable, predictable and hardware-accelerated way. Only afterwards will we be able to revisit the tone-mapping feature.
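For illustration, the "hard conversion" described here is just a bit shift - a minimal Python sketch (names are mine):

```python
# Hard 10-bit -> 8-bit conversion: drop the two least-significant bits.
# No tone mapping happens - the code values are truncated, not remapped.

def truncate_10_to_8(sample: int) -> int:
    """Turn a 10-bit sample (0..1023) into an 8-bit sample (0..255)."""
    if not 0 <= sample <= 1023:
        raise ValueError("expected a 10-bit sample")
    return sample >> 2
```

Note that this keeps the HDR transfer-function encoding as-is, which is why untreated HDR content looks washed out on SDR displays.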

 

 

One more note:

  • When you come to the conclusion that you'd need tone-mapping on a regular basis, then you are most likely doing something wrong.

  • A lot of people are starting to change their video acquisition standard to 10bit HDR formats. Obviously, 10 is more than 8, so 10bit must be better than 8bit content?

  • That's a misconception, though - as long as you don't actually have a 10bit HDR display for viewing.

  • Even the best tone-mapping algorithm in the world won't create an 8bit video that is better than the original, hand-processed 8bit video as released by the studio.

 

 

So I can only recommend: think twice before choosing 10bit over 8bit.

 

- If your main display is 10bit HDR and other displays are just for occasional use ==> Absolutely, go for 10bit content

 

- If you have several primary displays that are mixed (HDR and SDR) ==> For the best possible experience, always get both the 10bit and 8bit versions

 

- In all other cases: Keep away from 10bit content! It's not better, not even equal - it's definitely worse for viewing on 8bit/SDR displays!


rechigo

So by cutting off the last two bits of an HDR video, would this bring back more color detail than Emby's current FFmpeg configuration can?



 

No - that's exactly what's happening already - it just hasn't worked well when hw acceleration is involved, and that is going to be fixed.


A mapping (or projection) can be described by a function. Sometimes, the graph of a function can be described by a spline, so I suppose that this is what you're referring to.

 

Do you actually know what you're talking about, or are you just dropping keywords?

 

Where would the parameters for that "spline" come from?


Guest asrequested

I'm assuming the issue with only using a single thread of a CPU has been resolved? Or was that a misnomer? Do you think you'll write your own algorithms, or use/augment the existing ones? Is Reinhard the most likely to be used? And I'm assuming bt.709 will be the colorspace used, as most transcoding will be expected to be for SDR displays? Sorry for rattling off successive questions :) The apps report to the server what they are connected to, correct? Or will it just default to the basic SDR parameters? So many questions :)


I'm assuming the issue with only using a single thread of a CPU has been resolved? 

 

During our internal reworking of ffmpeg command generation, I became aware that the 'threads' parameter can be applied individually to encoders and decoders. We will make use of it soon; I'm not yet sure how...

 

Do you think you'll write your own algorithms, or use/augment the existing ones? Is Reinhard the most likely to be used? 

 

Definitely one that is commonly used - maybe port one to CUDA kernel code, because for Nvidia there's no hw solution in sight yet.
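For reference, the classic global Reinhard operator being discussed maps linear luminance L to L / (1 + L); the "extended" variant adds a white point that maps exactly to 1.0. A minimal Python sketch (the parameter names are mine; real tone mappers work per pixel on linear light):

```python
# Global Reinhard tone mapping: compresses an unbounded linear
# luminance range smoothly into [0, 1).

def reinhard(luminance: float, white: float = float("inf")) -> float:
    """Plain Reinhard is L / (1 + L); passing a finite `white`
    makes inputs at `white` map exactly to 1.0 (extended Reinhard)."""
    if white == float("inf"):
        return luminance / (1.0 + luminance)
    return luminance * (1.0 + luminance / (white * white)) / (1.0 + luminance)
```

The same arithmetic ports almost directly to a GPU kernel, which is presumably why it is a common first choice.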

 

And I'm assuming bt.709 will be the colorspace used, as most transcoding will be expected to be for SDR displays? Sorry for rattling off successive questions :) The apps report to the server what they are connected to, correct? Or will it just default to the basic SDR parameters? 

 

Transcoding will always target default SDR.


Guest asrequested

Nice! Thanks for the explanation. I look forward to testing/seeing the results. I know it will be a lot of work.


scb99

I did a project a year or so back. It’s not so hard to map colour and luminance values from HDR10 to BT-709 using splines.
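To illustrate the kind of mapping described here - with invented knot values, not taken from any real project, and simple linear interpolation standing in for a proper spline:

```python
# A tone curve defined by control points mapping HDR luminance (nits)
# to SDR luminance (0..100 nits). The knots below are illustrative
# only; a production version would use monotone cubic splines and
# handle colour gamut conversion separately.

KNOTS = [(0.0, 0.0), (100.0, 60.0), (400.0, 90.0), (1000.0, 100.0)]

def tone_curve(nits: float) -> float:
    """Piecewise-linear interpolation between KNOTS; clamps outside."""
    if nits <= KNOTS[0][0]:
        return KNOTS[0][1]
    for (x0, y0), (x1, y1) in zip(KNOTS, KNOTS[1:]):
        if nits <= x1:
            t = (nits - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return KNOTS[-1][1]
```

The curve itself is indeed the easy part; where the knot values come from, and what happens to colour, is the harder question raised below.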


Guest asrequested

You realize he's gotta do this for all the Nix distros, all the Windows versions, Mac and Android? And then all the different hardware configurations and drivers? None of this is going to be simple.



 

Not exactly. This is code in ffmpeg for the CPU conversion. This will be compiled and run on all platforms in the same way.

It gets a bit more complicated for the hw accelerations because here it's about code that needs to run in the GPU rather than the CPU.


I did a project a year or so back. It’s not so hard to map colour and luminance values from HDR10 to BT-709 using splines.

 

I never said that the mapping alone would be hard. ffmpeg already includes a tone-mapping filter, like I mentioned above, and it offers several mapping functions.

The problem is that mapping alone is not sufficient for good results because - no matter which algorithm - the number of colors gets reduced, and this leads to 'banding' effects.

To fix these, some kind of dithering needs to be applied - like I also mentioned above. Then we would have real and useful tone-mapping.
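To illustrate why dithering helps: adding a small position-dependent offset before discarding the low bits spreads the quantization error across neighbouring pixels instead of producing flat bands. A minimal ordered-dithering sketch in Python (a 2x2 Bayer matrix; real implementations use larger matrices or error diffusion):

```python
# Ordered (Bayer) dithering during 10-bit -> 8-bit quantization.
# The offsets cover the 2 bits being discarded (values 0..3).

BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_pixel(sample: int, x: int, y: int) -> int:
    """Quantize a 10-bit sample to 8 bits with a 2x2 Bayer offset
    chosen by pixel position."""
    offset = BAYER_2X2[y % 2][x % 2]
    return min((sample + offset) >> 2, 255)
```

Over a flat 2x2 patch of the 10-bit value 514, the outputs are two 128s and two 129s - averaging to the exact intermediate level 128.5 that plain truncation would round away.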

 

But we'll go through this step by step and the first step will be truncation of the last bits.

 


  • 3 months later...

This is still on the roadmap, right? It was mentioned previously that tonemapping was slated for release 4.4 but didn't make it in. When I look at the 4.5 beta changelog, I don't see mention of it.

 

In the high-end AV community there's a ton of debate around tonemapping: how to balance overall brightness against highlight detail. Some methods prefer one or the other, and some try to get the best of both worlds (madVR). madVR requires massive GPU horsepower, though.

 

But for Emby playback, honestly my opinion is to go for the easiest-to-develop, computationally cheapest option. Because right now, HDR video is basically useless on an SDR display. Even a very imperfect HDR-to-SDR tonemap is better than a washed-out, color-muted, nearly grayscale image, and keeping both an SDR and an HDR copy is hugely expensive in storage. I would venture a guess that if you are storing HDR media and image quality is important to you, then you're primarily direct playing it to an HDR TV or using madVR for the conversion on an HTPC, and SDR playback from that HDR source would be reserved for secondary cases when image quality is not as important - so the tonemap doesn't have to be perfect.

Edited by Xorp

Hi, yes it is. 4.4 brought quite a few hardware transcoding improvements. HDR tone mapping is one of the next two or three items on our transcoding to-do list.


  • 8 months later...

HDR Tone mapping when transcoding will be in Emby Server 4.6. Stay tuned to the beta channel over the next week if you'd like to help test.

