> Excellent explanation. I think I get your idea. Will refine the code per your
> suggestion.
> But still some question, will people/tools tend to fill in the mastering
> information for HLG video?
> I currently see no document that recommend to fill the mastering display for
> HLG.
> I only have one HLG sample download from 4kmedia.org. seems it has no
> mastering metadata.
> Do you have any more HLG videos that show it will often be filled in?
> My concern here is will all video players correctly parse the mastering
> display metadata to decode HLG, or just skip it because most HLG video has no
> metadata?
I think there are probably going to be three ways to approach this situation. Part of the problem is surely the fact that HLG is sort of designed to be "implicitly" tone-mapped. That is, the way the HLG standard is written, you'd just always encode things so that 12.0 is the signal peak, and a user with a 500 cd/m² peak TV would simply apply the HLG OOTF tuned for 500 cd/m² to the original signal as received from the (e.g. Blu-ray) source. Sure, the mastering engineer may have used a 1500 cd/m² screen to master it, but since that OOTF-to-OOTF round trip (mastered through one OOTF, displayed through another) essentially constitutes a simple form of tone mapping, the overall result on-screen will look more or less reasonable. (Certainly more reasonable than e.g. PQ.)

So surely there's the camp of people who believe HLG doesn't need mastering metadata and will therefore not include it, because the end result without metadata looks more or less good enough. However, I disagree with this conclusion. First of all, it prevents color-accurate round trips. The HLG OOTF is inherently color-distorting, so in a color-managed workflow with calibrated devices, this methodology will not be sufficient to ensure perceptually accurate reproduction.

The second reason is that, as I said, the OOTF-to-OOTF interaction essentially constitutes a simple form of tone mapping; but we can do significantly better. I believe our tone-mapping algorithm produces a far better result (visually) than applying the HLG OOTF as-is, especially when going to an SDR display. (If you're using mpv, you can test this by playing an HLG source once with --vf=format:peak=10 and once with --vf=format:peak=1. In the latter case, the only tone mapping being done is the implicit HLG tone mapping.) Not only are the HLG sources I've found inconsistently encoded, but I also find that the inherent HLG tone mapping tends to over-saturate the signal (similar to the result we get if the desaturation strength is 0.0) while also boosting the gamma.

So if we subscribe to the idea that we need metadata to do color-accurate tone mapping and reproduction, then the question becomes: what do we do for un-tagged sources? The obvious approach is to assume a (display-referred) signal peak of 10.0, corresponding to a display peak of 1000 cd/m², i.e. the HLG reference device. But I think if I were to make an HLG release of my own, I would definitely try to include the most accurate tagging possible. For example, if we have a clip available in both PQ and HLG, I would use the PQ version's mastering metadata for HLG as well.

Finally, to put the nail in the coffin of the idea that HLG doesn't need metadata, we should realize that the mastering metadata isn't just there to help you tone-map the brightness levels; it also includes the mastering display's gamut capabilities - and for good reason. When desaturating in order to fit a BT.2020 signal into a (typically far narrower than BT.2020) display response, knowing the gamut limitations of the signal can help users do a far better job than having to assume the worst-case scenario - for much the same reason that knowing the signal's actual peak brightness can help users do a far better job tone-mapping than having to assume a worst-case peak of 10,000 cd/m². Indeed, in the best-case scenario (your own display's gamut and brightness capabilities match or exceed the mastering display's), both of these can just be no-ops.
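As an aside, to make the "implicit tone mapping" point above a bit more concrete, here is a rough, purely illustrative sketch of the BT.2100 HLG OOTF as I understand it. This is not code from libplacebo or from the patch under discussion, and the function/variable names are made up:

    #include <math.h>

    /* BT.2100 HLG OOTF (illustrative sketch). rgb[] is scene-referred
     * linear light in [0,1], lw is the nominal display peak in cd/m².
     * The output is display-referred, relative to that peak. */
    static void hlg_ootf(float rgb[3], float lw)
    {
        /* The system gamma depends on the display peak: 1.2 at the
         * 1000 cd/m² reference, higher for brighter displays. */
        float gamma = 1.2f + 0.42f * log10f(lw / 1000.0f);
        float ys = 0.2627f * rgb[0] + 0.6780f * rgb[1] + 0.0593f * rgb[2];
        float mult = ys > 0 ? powf(ys, gamma - 1.0f) : 0.0f;
        for (int i = 0; i < 3; i++)
            rgb[i] *= mult;
    }

The point being: the curve itself changes with the display peak, so a 500 cd/m² TV and a 1500 cd/m² mastering monitor already show two (moderately) different images from the same signal, and for an un-tagged source the best we can do is plug in the 1000 cd/m² reference, i.e. the signal peak of 10.0 mentioned above.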
So if mastering metadata is beneficial at all, then we should also agree that mastering metadata is beneficial for BT.2020 + HLG sources, simply for the gamut data alone. The fact that HLG is ill-defined without knowing the mastering display's brightness is just icing on the cake at this point.

> As what I do now is tone mapping from HDR to SDR, do you think it is
> meaningful to add the metadata for SDR video?

The mastering metadata is still useful for the gamut information, as explained above. Since you're (most likely) encoding a BT.2020 signal that doesn't use the full gamut range of BT.2020, it can be a good idea to preserve it even for SDR curves.

> And looks like using a peak of 100 in inverse_ootf() when tone-mapping to sdr
> is just ok?

Sure. That won't blow up (see the sketch at the end of this mail for roughly what the inverse OOTF does at a peak of 100 cd/m²), but using HLG to store an SDR signal is sort of wasteful/pointless. Might as well use an actual SDR curve and skip the inverse_ootf step altogether.

> Thanks again for your kinder advice and suggestion!
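For reference, here is the matching inverse OOTF sketch mentioned above, with the same caveats as before (illustrative only, made-up names, not the actual patch code):

    /* Inverse of the OOTF sketched earlier: rgb[] is display-referred
     * linear light relative to the peak lw; the result is scene-referred. */
    static void hlg_inverse_ootf(float rgb[3], float lw)
    {
        float gamma = 1.2f + 0.42f * log10f(lw / 1000.0f);
        float yd = 0.2627f * rgb[0] + 0.6780f * rgb[1] + 0.0593f * rgb[2];
        float mult = yd > 0 ? powf(yd, (1.0f - gamma) / gamma) : 0.0f;
        for (int i = 0; i < 3; i++)
            rgb[i] *= mult;
    }

At lw = 100 the gamma from the 0.42*log10 formula drops below 1.0, so the exponent on the luminance term becomes positive and nothing misbehaves near black - that's the "won't blow up" part. Do note, though, that as far as I remember BT.2100 only really specifies that gamma formula for displays around the 1000 cd/m² reference and brighter, and BT.2390 describes an extrapolation for dimmer ones, so take the exact numbers at 100 cd/m² with a grain of salt.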