Most brands’ latest cameras are adding 10-bit video (and often stills) capture options. The Canon EOS R5 and R6, for example, both include them. |
The ability to shoot 10-bit video, 10-bit stills and ‘HEIF’ files is increasingly being added to cameras. But what are the benefits, and when should you use these modes? We’re going to look at how data is captured and how it’s stored, and hence what benefits you should (and shouldn’t) expect from 10-bit capture.
Linear encoding
Inefficiency of linear encoding: linear encoding dedicates half of its available values to the brightest stop, a quarter to the next stop, and so on. Now consider photon shot noise: the inherent randomness of the light you capture. Shot noise is essentially the square root of the signal, so the brightest parts of the image (to which you’re dedicating most of your raw values) contain the greatest absolute amount of noise, because the square root of a large number is larger than that of a small one. It doesn’t look noisy, because perceived noisiness relates to the signal-to-noise ratio, not the absolute noise level. But it means you’re capturing a lot of detail about something that is highly variable. Worse, the human visual system is less good at distinguishing detail and color in bright regions than in dark ones: you’re encoding extra noise that isn’t particularly visually meaningful. In short: linear encoding is highly inefficient. (Some raw compression schemes take advantage of this, compressing the over-encoded highlights in a way that has no significant effect on the image, either visually or in terms of editing flexibility.) |
In contrast to the human visual system, cameras record light linearly: capture twice as many photons at the sensor and the signal doubles, and it’s recorded as a digital number twice as large.
This means that half of your available raw values are always consumed by the brightest stop of your captured light. This is simple arithmetic: whatever the brightest value you’ve captured, one stop less light (i.e. half as much light) will be recorded with a value half as large.
The result is that a Raw file can encode roughly as many stops of dynamic range as its bit depth. Or, to get cause and effect the right way round: the analog-to-digital converter is chosen to have enough bit depth to encode the signal coming off the sensor. It’s primarily a question of making sure you can capture and retain all the information the sensor produces. Increasing the bit depth beyond what’s needed to fully encode that signal doesn’t give you ‘more subtle gradations’ or ‘x million more colors’: it just creates much larger files that record the noise in the shadows in more detail.
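The halving-per-stop behavior described above can be sketched in a few lines of Python (a toy illustration of the arithmetic, not any camera's actual implementation):

```python
# Sketch: how a linear 14-bit raw encoding distributes its values per stop.
# Each stop down contains half the values of the stop above it.

def values_per_stop(bit_depth: int, stops: int) -> list[int]:
    """Return the number of raw values dedicated to each stop,
    from the brightest stop downward, under linear encoding."""
    total = 2 ** bit_depth
    counts = []
    for s in range(stops):
        top = total // (2 ** s)           # top of this stop's value range
        bottom = total // (2 ** (s + 1))  # one stop darker = half the value
        counts.append(top - bottom)
    return counts

print(values_per_stop(14, 6))  # [8192, 4096, 2048, 1024, 512, 256]
# The brightest stop alone consumes half of all 16,384 values.
```

Six stops in, each stop is already getting only a tiny fraction of the values the brightest one received.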
So, why would we record a linear raw file at all? Because it’s the easiest thing to do from a processing standpoint, it retains everything* you originally captured, and it isn’t uncontrollably large, since you’re usually capturing only a single raw value per pixel, not separate red, green and blue values.
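The shot-noise relationship mentioned in the caption above, with noise scaling as the square root of the signal, can be illustrated numerically (an idealized model; real sensors add read noise and other noise sources):

```python
import math

# Sketch: photon shot noise grows with the signal, but SNR improves.
# noise ~ sqrt(signal), so bright regions carry the largest absolute noise
# yet still look cleanest, because their signal-to-noise ratio is highest.

for photons in (100, 1600, 25600):  # three brightnesses, 4 stops apart
    noise = math.sqrt(photons)
    print(f"{photons:6d} photons: noise ~{noise:5.0f}, SNR ~{photons / noise:4.0f}")
```

The absolute noise rises sixteen-fold across this range even as the image looks progressively cleaner.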
Gamma encoding
Gamma encoding is the process of applying a non-linear transformation to the linear data or, put more simply, of redistributing the raw data in a more space-efficient manner. An inverse gamma curve is applied when you open the image to view it, taking you back to something that looks like the scene you were trying to capture.
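As a sketch of this redistribution, here is how a simple power-law gamma shares 8-bit output codes between stops (using a 1/2.2 exponent as an approximation; actual sRGB and camera curves differ):

```python
# Sketch: how gamma encoding redistributes output codes across stops,
# compared with linear encoding's halving per stop.

def codes_per_stop(gamma: float, stops: int, codes: int = 256) -> list[int]:
    """Number of 8-bit output codes covering each stop, brightest first,
    after applying a 1/gamma power curve to normalized linear values."""
    result = []
    for s in range(stops):
        hi = 0.5 ** s          # linear value at the top of this stop
        lo = 0.5 ** (s + 1)    # one stop darker = half the linear value
        hi_code = round((hi ** (1 / gamma)) * (codes - 1))
        lo_code = round((lo ** (1 / gamma)) * (codes - 1))
        result.append(hi_code - lo_code)
    return result

print(codes_per_stop(2.2, 6))
```

Instead of the brightest stop swallowing half of all values, the counts now fall off gently from stop to stop.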
Gamma encoding + tone curve
Because this encoding is non-linear, you can compress most or all of the data from a linear raw file into a file of much lower bit depth. Almost all modern cameras output 8-bit JPEGs, which typically contain about nine stops of DR (more when DR expansion modes and adaptive tone curves are used). In principle you could fit more, but plain gamma encoding** is rarely used on its own: an ‘S’-shaped curve is usually applied on top, to give a nice punchy image.
After gamma encoding, an ‘S’-shaped tone curve is applied to most JPEG files to give the image a pleasing level of contrast. This dedicates around 70% of the 256 available values to the four stops around middle gray, which leaves little latitude for adjustment. |
With clever compression, a JPEG can easily be 1/6 the size of a raw file, yet it does a good job of showing what you saw when you took the picture. Or, at least, everything your 8-bit, standard dynamic range display can show. However, because so much data has been discarded, and because the ‘S’ curve has crushed the highlights and shadows, a JPEG offers much less flexibility if you want to make big changes to it.
An 8-bit file contains 256 data values per color channel, and once you’ve spread those values across nine stops, there isn’t much scope for adjustment before gaps start to appear and posterization replaces smooth tonal transitions. It’s an excellent end point, though, especially for SDR displays.
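The posterization effect can be demonstrated with a toy example: lifting a smooth 8-bit shadow ramp by two stops leaves gaps between the surviving levels.

```python
# Sketch: why heavy adjustments to 8-bit data cause posterization.
# Stretching a narrow range of 8-bit codes leaves gaps between levels.

def stretch(values, gain):
    """Apply a gain to 8-bit codes and re-quantize back to 8 bits."""
    return [min(255, round(v * gain)) for v in values]

shadows = list(range(32))        # a smooth 32-level shadow ramp
boosted = stretch(shadows, 4.0)  # a strong two-stop shadow lift

print(sorted(set(boosted)))
# The output spans codes 0..124 but still contains only 32 distinct
# values: the missing in-between levels show up as visible banding.
```

With 10-bit data the same lift would leave four times as many levels to fill that range, which is the point of the higher bit depths discussed below.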
The decision to shoot Raw or JPEG is, broadly speaking, a question of whether you plan to edit the results or present them more or less as shot. Raw data is usually 12 or 14-bit, and although its linear encoding is genuinely inefficient, the fact that it isn’t demosaiced keeps it manageable. For most uses, though, the final image can perfectly well be expressed as an 8-bit JPEG. So why would we need a 10-bit alternative?
Log encoding – a middle ground
Log encoding divides its available values more evenly: most stops are given a similar number of data values, rather than being dramatically weighted toward the highlights, as with linear encoding, or concentrated on the mid-tones, as with most standard tone curves.
A log curve (Sony’s S-Log3 in this example) distributes the available values fairly evenly across the captured data, maintaining a good level of editing flexibility without the inefficiency of linear encoding. Note that the curve’s behavior in the shadows isn’t logarithmic (a purely logarithmic approach would dedicate more data values there than the original linear capture contains). |
Essentially, it’s a clever way of maintaining a good degree of editability in a much more efficient file. You can see why this is a popular way of working with video, where you can maintain editing flexibility but still benefit from the highly efficient, well-optimized codecs and file types developed for video.
Moving from 8 to 10-bit means you have 1024 values to share out, so you can retain four times as much information about each stop you capture. In turn, that means much more flexibility: the risk of posterization is far lower if you try to make significant adjustments to color and contrast.
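The arithmetic is simple enough to sketch (assuming, purely for illustration, that the values are divided perfectly evenly across twelve stops):

```python
# Sketch: the per-stop value budget when stops are shared evenly,
# as log encoding roughly does, at 8-bit versus 10-bit depth.

def per_stop(bit_depth: int, stops: int) -> float:
    """Average number of values available per stop."""
    return (2 ** bit_depth) / stops

for bits in (8, 10):
    print(f"{bits}-bit over 12 stops: ~{per_stop(bits, 12):.0f} values per stop")
# 10-bit offers exactly 4x the values of 8-bit for any given stop count.
```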
Typically a manufacturer will look at its camera’s performance and then create a log curve that encodes most of that camera’s usable dynamic range. This is why most manufacturers end up with multiple log curves: you don’t want to spread your 1024 values across 14 stops if your camera’s output is unusably noisy beyond stop 11.
For most applications, though, 10-bit log encoding gives a big boost in editability without the file sizes becoming unmanageable.
Why would I need 10-bit?
Thus, 10-bit capture lets cameras offer more editable video without the added size and potential compatibility issues (and legal complications) of shooting Raw video. There are other uses, though, that promise benefits for both video and stills shooters.
A new generation of TVs, able to display a wider dynamic range than older SDR screens, is now widely available. An increasing number of movies and TV shows are being shot in HDR, and streaming services can deliver this HDR footage to people’s homes.
Hybrid Log Gamma (HLG) and Perceptual Quantizer (PQ) are the two most common ways of encoding HDR data. Both require 10-bit data because they’re trying to store a wider tonal range than typical 8-bit footage. Don’t be fooled by the word ‘log’ in the name HLG: only part of the curve is logarithmic. Both HLG and PQ are, like JPEG, designed to be end points rather than intermediates.
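For the curious, the HLG transfer function specified in ITU-R BT.2100 shows this split directly: below 1/12 of peak scene light the curve is a simple square root, and only above that does it become logarithmic. A sketch of the reference OETF (not any camera's implementation):

```python
import math

# HLG opto-electrical transfer function, per ITU-R BT.2100.
# Constants are chosen so the two segments join smoothly at E = 1/12.
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalized linear scene light (0..1) to a normalized signal."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # square-root segment
    return A * math.log(12 * e - B) + C  # logarithmic segment

# Quantizing the signal to the 10 bits the standard calls for:
code = round(hlg_oetf(0.5) * 1023)
```

The wide range this curve spans is exactly why 8 bits aren't enough: quantizing it to 256 levels would band visibly on an HDR display.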
An increasing number of cameras can shoot HLG or PQ video for playback on HDR TVs. And it’s increasingly common for them to also offer 10-bit stills based on these same standards, for playback on HDR displays.
So 10-bit stills for vibrant HDR?
Recent cameras’ 10-bit stills modes shoot the single-image ‘HEIC’ form of HEIF files, which is essentially a single frame of H.265 video. Despite being 10-bit, their more efficient compression means they can maintain the same image quality in much smaller files. Panasonic instead uses the less common ‘HSP’ format for its 10-bit stills, which is part of the HLG standard. It’s even less widely supported than HEIF, but you’ll need to plug your camera into a display to view the files correctly anyway. |
It’s fair to say there isn’t much consistency in what the various camera makers offer. Some only let you shoot 10-bit files when you’re capturing true HDR images, others only offer SDR profiles in HEIF mode, and some let you shoot whichever combination you want.
From our point of view, there’s not much point in shooting 10-bit stills with conventional SDR tone curves: the data isn’t stored in a way designed for editing, and Raw remains the more powerful option for that, so all you’re doing is capturing something that will end up looking a lot like a JPEG but isn’t as widely supported.
It makes much more sense to shoot true HDR images (HLG or PQ) in 10-bit. Viewed on an HDR display, they can look spectacular, with highlights that gleam in a way conventional photos struggle to convey. At the moment, though, you usually need to connect your camera to a TV with an HDMI cable to view the images, which isn’t very practical. But for us, this is where the photographic value of 10-bit stills lies.
There’s a lot of work to be done across the imaging industry to build support for true HDR stills. We need our editing tools to let us fine-tune Raws into HDR stills, just as we’re used to doing when creating our own JPEGs. But above all, we need greater support and cross-compatibility, so that we can share and view 10-bit files without connecting our cameras to a display. Until that’s resolved, the ability to shoot 10-bit stills is frustratingly limited.
* Or almost all of it, if your camera offers lossy Raw compression
** Strictly speaking, sRGB’s ‘Electro-Optical Transfer Function’ (EOTF) is not a simple 2.2 gamma curve, but it’s close enough that it’s often described as one.