|Panasonic’s DFD autofocus system tries to determine distance information without masking pixels as most on-sensor phase detection systems do.|
We’ve been impressed by what we’ve seen so far of the autofocus changes Panasonic introduced with its new S5. The latest version doesn’t iron out all the quirks, but continuous AF for stills, in particular, appears much improved. Beyond this, the details we were given about how these improvements were achieved are interesting: they help to highlight both the benefits and the continued challenges of the company’s Depth-from-Defocus system.
It’s a system with a poor reputation in some quarters but one that’s continued to improve significantly in recent years. The S5 shows both how far DFD has come as well as hinting at what’s still needed.
What is depth-from-defocus?
Fundamentally, focus is a question of distance: adjusting the lens optics until the light rays from a subject at a particular distance converge on the sensor plane.
The alternative: phase detection
Most manufacturers have settled on phase detection as the heart of their AF systems: this views the target from two different perspectives, then works out how much the focus needs to be moved in order to bring those two perspectives into phase with one another (the point at which that subject is in focus).
In mirrorless cameras, this is usually done by having partial pixels that only receive light from one or other half of the lens, to provide two differing perspectives. The downsides of these systems tend to be that these partial pixels either receive less light than a full pixel or that the complexity of the electronics (and the noise they produce) increases, in systems that combine pairs of half pixels. The performance can be excellent, but to a degree you’re trading away some light capture or noise performance to attain that AF performance.
There are two broad approaches used by cameras to conduct autofocus: ones that hunt until they find the point that’s in focus and those that try to interpret the depth in the scene, so that they can drive the focus without the same need to hunt.
DFD is Panasonic’s system for interpreting depth. It works by making a tiny focus adjustment and analyzing how the image has changed as a result. With an understanding of the out-of-focus characteristics of the lens being used, the camera can interpret these changes and build a depth map of the scene.
This challenge is made more difficult if elements in the scene are moving: the camera’s depth map needs to be constantly updated, because the distances are changing. This is where subject-recognition and algorithms designed to anticipate subject movement come into play, since they allow the camera to understand which bits of the scene are moving and what’s likely to happen next.
What’s new with the S5
Panasonic told us that the S5’s autofocus has been improved by a number of fundamental changes. Part of it comes from improved subject recognition. This is based on deep learning (an algorithm trained to recognize specific types of subject), which helps the camera know what to focus on and not to refocus away from it. For instance, teaching the algorithms to recognize human heads when they’re looking away means the camera understands it doesn’t need to find a new subject or refocus when the face it had recognized suddenly ‘disappears.’
Another part comes from rewriting the AF code to make better use of the available processing power. During the development of the S5, Panasonic’s engineers discovered they didn’t have to lean on the machine-learning-trained algorithms for both subject recognition and movement tracking: they could combine the machine-learned recognition with their existing, faster, distance and movement algorithms, which freed up processing power to run the process much more frequently.
This video shows the view through the viewfinders of the S5 (left) and older S1 (right). Note that even when the S1 is in focus, there’s still some very obvious pulsing and fluttering; this is much less noticeable in the S5.
Finally, other software improvements allowed the entire AF system to be run faster, providing more up-to-date information to the processor. The combined result of these changes, for stills shooters at least, is much improved autofocus with less reliance on the trial-and-error hunting of contrast detection AF. This, in turn, reduces the focus flutter in the viewfinder, making it easier for a photographer to follow the action they’re trying to capture, so you get an improved experience as well as improved focus accuracy.
Video is a greater challenge
But this approach is primarily a benefit for stills photography. Video is a more difficult challenge: partly because the focusing process is visible in the resulting footage, but also, on a technical level, because you have to read out the sensor in a manner that matches the video you’re trying to produce. In stills mode you can reduce the resolution of the sensor feed (in terms of spatial resolution or bit depth) to increase the readout rate, which increases how often the AF system receives new information about what’s happening. This low-res feed during focus doesn’t have any impact on the final image.
For video you need to run the sensor in a mode that’s tied to that of the footage you’re trying to capture
In high-res video modes you need to run the sensor at a bit depth, pixel resolution and frame rate tied much more closely to those of the footage you’re trying to capture. At best, you get to read the sensor at double the output frame rate. Video is typically shot using shutter speeds at least twice as fast as the frame rate: each frame of 30p video is usually made up from a 1/60 sec chunk of time or less, meaning you can read the sensor out at 60 fps, leaving time to conduct another readout for the AF system before you have to expose the next frame.
The problem is that full frame sensors are big and slow to read out. The sensor in the S5 is very similar to the ones used in the likes of the Sony a7 III, which typically take over 21ms to read out in 12-bit mode: not quite fast enough to run at 48 fps for double-speed capture of 24p footage. This has the unfortunate side-effect of meaning the camera’s worst AF performance comes in the mode most likely to be used by the most demanding video shooters.
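The arithmetic behind that limitation is simple enough to spell out. The 21 ms figure is the approximate 12-bit readout time quoted above; everything else follows from the frame rate:

```python
# Double-speed AF sampling for 24p needs a 48 fps readout, i.e. less
# than ~20.8 ms per frame, but the full-frame 12-bit readout takes
# roughly 21 ms, so the budget is just missed.
readout_ms = 21.0                    # approximate 12-bit sensor readout time
output_fps = 24
budget_ms = 1000 / (2 * output_fps)  # two readouts per output frame
print(f"budget {budget_ms:.1f} ms vs readout {readout_ms:.1f} ms: "
      f"{'fits' if readout_ms <= budget_ms else 'just misses'}")
```

A fraction of a millisecond is all that separates this sensor from double-speed sampling at 24p, which is why faster-readout hardware matters so much to the approach.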
Unfortunately for a brand so associated with video, the S5’s full-frame 4K/24p is the mode that delivers its weakest AF performance.
Despite this challenge, Panasonic has re-worked the AF response even in this weakest mode, to be less prone to unnecessary refocusing.
A bright new tomorrow
The updates in the S5 show us a couple of things. Firstly, that Panasonic is well aware of the criticisms being leveled at its cameras and is continuing to fine-tune its software to squeeze everything it can out of the current hardware.
DFD is not there yet but, in principle, staying committed to an AF method that gets better as hardware gets faster may prove a good choice
But, more significantly, the improvements we’re seeing when shooting stills and when using AF-C during bursts of stills in particular suggest that some of the downsides we’ve seen in the past aren’t necessarily inherent flaws of the DFD concept. Instead they’re aspects that can improve as sensor readout and processing power improve. You don’t need to be a semiconductor physicist to recognize that improvements in those areas are always coming.
In principle, in the long run, staying committed to an AF method that gets better as hardware gets faster may prove to be a better choice than an approach that trades off light capture for AF performance. But the S5’s performance, particularly in video, shows DFD is not there yet. The risk for Panasonic is that those fast-readout sensors and powerful processors may not arrive before the majority of full frame buyers have already committed themselves to other camera systems.