# Welcome Back! (And Happy New Year!)

This is the second post of a two-part series discussing the technical aspects of adding HDR support to Infamous Second Son and Infamous First Light. If you haven’t read it already, Part 1 discusses our HDR tonemapping and color grading solutions, as well as the HDR-friendly render target format we used to help improve performance. In this post I’ll discuss how we matched the look of the SDR and HDR modes (in the darker parts of the image), a couple more performance optimizations, as well as some issues we ran into when combining HDR and 4K on the PS4 Pro.

# Matching SDR HDTV Output

When we started working with HDR, we realized fairly early on that taking a linear tonemapped image, encoding it with PQ, and displaying it on an HDR TV resulted in a “washed-out”, unsaturated look compared to SDR mode, especially in the darker parts of the image. Before I can explain why this occurs, we’ll need to take a detour into the land of SDR OETFs and EOTFs.

## SDR Transfer Functions

Regular SDR HDTVs use an EOTF (electro-optical transfer function) that is governed by the BT.1886 standard, which specifies a gamma of 2.4. Meanwhile, the Rec. 709 standard, which describes the OETF (opto-electronic transfer function) for HDTV signals, specifies a function with an average gamma of about 0.5. The product of these gamma exponents (about 1.2) results in a non-identity “scene-to-screen” transfer function, which has the effect of increasing the image contrast, especially in the dark areas. (In practice there is a lot of variance in both the OETFs used to produce content, and in the EOTFs implemented by displays. For more on this, see the sidebar below.)

It is clear from the graph above that the combination of the Rec. 709 OETF and the BT.1886 EOTF significantly darkens dark colors. Other OETF/EOTF combinations also have this result (though less pronounced) — this even occurs when sRGB colors are displayed with gamma 2.2, even though the average gamma of the sRGB OETF is 1.0 / 2.2. The important thing to note is that for SDR displays in general, the OETF and EOTF are not inverses of each other.
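To make this concrete, here’s a small Python sketch (not from our shipping code; the function names are mine) that composes the Rec. 709 OETF with the BT.1886 EOTF, with BT.1886 idealized as a pure 2.4 gamma (the full standard includes black-level terms):

```python
# Sketch of the SDR "scene-to-screen" transfer: the Rec. 709 OETF followed by
# the BT.1886 EOTF (idealized here as a pure 2.4 gamma). Values are relative
# luminance in [0, 1].

def rec709_oetf(l):
    # Rec. 709 camera encoding: linear toe below 0.018, power segment above.
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def bt1886_eotf(v):
    # Simplified BT.1886 display decoding.
    return v ** 2.4

def scene_to_screen(l):
    return bt1886_eotf(rec709_oetf(l))

# Dark and midtone scene values come out noticeably darker on screen,
# while white (1.0) maps back to white:
for l in (0.01, 0.1, 0.5, 1.0):
    print(f"scene {l:.2f} -> screen {scene_to_screen(l):.4f}")
```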

You may be asking yourself why it’s desirable to have a non-identity scene-to-screen transform. Shouldn’t we be aiming to have the TV faithfully reproduce the relative luminance values of the scene we’re rendering? The answer that you find in textbooks (e.g., Section 3.4 of Principles of Digital Image Synthesis by Andrew Glassner, and Chapter 6 of A Technical Introduction to Digital Video by Charles Poynton) is that the increased contrast helps to compensate for the dim surround conditions in which the display is typically viewed, which reduce apparent contrast. This argument seems less relevant for games, since they are typically authored in viewing conditions similar to those in which they are played — but it’s a reality of SDR displays that we need to be aware of.

Thankfully, things are somewhat simpler for HDR displays: the PQ EOTF and OETF are inverses of each other. However, this means that if you want the darker portions of your HDR image to resemble the SDR image you’ve spent so much time tweaking, then you need to apply the scene-to-screen transform yourself. Using the notation $f(x)$ for OETFs and $F(x)$ for EOTFs, we computed the final PQ value as follows:

$$x_\mathrm{pq} = f_\mathrm{pq}\left(F_\mathrm{1886}\left(f_\mathrm{709}\left(x_\mathrm{linear}\right)\right)\right)$$

We used this transform over the entire range of values, even those greater than 1.0. This increased the maximum luminance of our signal (from 1000 nits to about 1400 nits), but we were very pleased with the results and so did not attempt to treat values greater than 1.0 differently.
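As a rough illustration (in Python, with names of my own choosing, not our actual shader code), the full encode and the resulting luminance boost look like this; the ST 2084 constants are from the PQ specification, and BT.1886 is again simplified to a pure 2.4 gamma:

```python
# Sketch of the final encode: run the linear pixel value through the Rec. 709
# OETF and the (simplified) BT.1886 EOTF, then PQ-encode the result.
# Following the post's convention, linear 1.0 = 100 nits.

def rec709_oetf(l):
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def bt1886_eotf(v):
    return v ** 2.4

def pq_oetf(nits):
    # SMPTE ST 2084 OETF; constants from the spec.
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

def encode_hdr(x_linear):
    # Scene-to-screen transform applied over the whole range, including > 1.0.
    screen = bt1886_eotf(rec709_oetf(x_linear))
    return pq_oetf(screen * 100.0)  # 1.0 linear = 100 nits

# A 1000-nit input (linear 10.0) lands at roughly 1400 nits after the
# scene-to-screen transform, as noted above:
boosted_nits = bt1886_eotf(rec709_oetf(10.0)) * 100.0
print(boosted_nits, encode_hdr(10.0))
```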

To demonstrate the difference, I’ve simulated the effect of removing the scene-to-screen transform on a couple of SDR images below (click the images to toggle between the versions with and without the scene-to-screen transform).

# Optimized PQ and sRGB Conversions

The following optimizations, while minor, seem worth mentioning because they may have wide applicability and are simple to implement.

We fit a curve to the PQ OETF which saves a few cycles, and the results are visually indistinguishable from the exact function. The curve was optimized to be accurate between 0.01 nits (a reasonable value for a “just noticeable difference” from black) and 1400 nits (the maximum output luminance that we generate, after matching SDR TV output, as explained above).

```hlsl
// Fast PQ encoding. Input is assumed to be positive and scaled such that 1.0 corresponds
// to 100 nits. Accurate over the range 0.01-1400 nits (with reasonable behavior outside
// of that range).
float3 RgbPqFromLinearFast(float3 x)
{
    x = (x * (x * (x * (x * (x * 533095.76 + 47438306.2) + 29063622.1) + 575216.76) + 383.09104) + 0.000487781) /
        (x * (x * (x * (x * 66391357.4 + 81884528.2) + 4182885.1) + 10668.404) + 1.0);
    return x;
}
```
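For reference, here’s an unofficial Python port of the fit, sanity-checked against the exact ST 2084 OETF (the exact-curve implementation and all names here are mine):

```python
# Unofficial Python port of the rational fit above, compared against the
# exact ST 2084 (PQ) OETF. Input convention matches the shader: 1.0 = 100 nits.

def pq_from_linear_fast(x):
    num = x * (x * (x * (x * (x * 533095.76 + 47438306.2)
               + 29063622.1) + 575216.76) + 383.09104) + 0.000487781
    den = x * (x * (x * (x * 66391357.4 + 81884528.2)
               + 4182885.1) + 10668.404) + 1.0
    return num / den

def pq_from_linear_exact(x):
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    y = (x * 100.0 / 10000.0) ** m1
    return ((c1 + c2 * y) / (1.0 + c3 * y)) ** m2

# The approximation tracks the exact curve over the fitted range:
for v in (0.0001, 0.01, 1.0, 14.0):
    print(v, pq_from_linear_fast(v), pq_from_linear_exact(v))
```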


We also fit a curve to the sRGB-to-linear conversion, which was useful in cases where we needed to filter linear values (but could not use the hardware sRGB-to-linear conversion because of its limited precision). Again the results were visually indistinguishable (but beware of error accumulation from repeated use).

```hlsl
// Fast(er) approximate sRGB-to-linear conversion. Accurate over the range 0-2.7
// (0-10 linear).
float3 RgbLinearFromSrgbFast(float3 x)
{
    return (x * (x * (x * 5.873392 + 0.2533932) + 0.07841727)) /
           (x * (x * (x * 0.1470415 - 1.2869875) + 6.3594828) + 1.0);
}
```
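Similarly, here’s an unofficial Python port of this fit, checked against the exact (extended-range) sRGB decode; the names and the exact-curve implementation are mine:

```python
# Unofficial Python port of the sRGB-to-linear fit above, compared against the
# exact piecewise sRGB decode (whose power segment extends naturally past 1.0).

def srgb_to_linear_fast(x):
    num = x * (x * (x * 5.873392 + 0.2533932) + 0.07841727)
    den = x * (x * (x * 0.1470415 - 1.2869875) + 6.3594828) + 1.0
    return num / den

def srgb_to_linear_exact(x):
    # IEC 61966-2-1 sRGB decode: linear segment below 0.04045.
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

for v in (0.25, 0.5, 1.0, 2.0, 2.7):
    print(v, srgb_to_linear_fast(v), srgb_to_linear_exact(v))
```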


# PQ Upscaling

On the PS4 Pro, we render to a resolution of 3200x1800, which is then upscaled to 4K (3840x2160) for output to the TV. Since the PQ OETF is highly non-linear, upscaling a PQ-encoded buffer introduces filtering artifacts (e.g., it tends to significantly reduce the luminance of small bright features). Because of this, we chose to do the upscaling in software, using a compute shader and LDS memory to amortize the conversion of samples.
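To illustrate the artifact (rather than our shader, which isn’t reproduced here), this Python sketch interpolates halfway between a bright pixel and a near-black neighbor, once correctly in linear light and once directly on the PQ code values; the ST 2084 constants are from the PQ spec, and all names are mine:

```python
# Midpoint between a 1000-nit pixel and a near-black (0.01-nit) neighbor,
# computed in linear light (correct) versus on PQ code values (what naive
# filtering of a PQ-encoded buffer effectively does).

M1, M2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
C1, C2, C3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0

def pq_oetf(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_eotf(n):
    p = n ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

bright, dark = 1000.0, 0.01  # nits

linear_mid = (bright + dark) / 2.0                         # filter in linear light
pq_mid = pq_eotf((pq_oetf(bright) + pq_oetf(dark)) / 2.0)  # filter PQ code values

# The PQ-space midpoint is dramatically dimmer than the linear-light midpoint,
# which is why small bright features lose luminance when PQ data is filtered
# directly.
print(linear_mid, pq_mid)
```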

# In Closing…

I hope you’ve found these posts useful — please don’t hesitate to leave comments or questions below!