iPhone 11 Deep Fusion Camera Just Hit Beta: What You Need to Know

The iPhone 11 Pro has already impressed us with its new triple-camera system, particularly in low light. However, we've yet to truly grasp everything Apple's latest handsets can do from a photography perspective, because we've been waiting on the much-vaunted Deep Fusion mode the company teased during its Sept. 10 keynote unveiling the new phones.

It seems we won't be waiting much longer, as Deep Fusion is debuting for the first time in the latest iOS 13 developer betas making the rounds today. The feature will only be available on the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max, as it relies upon the boosted machine learning capabilities of Apple's A13 Bionic processor to improve photos on a pixel-by-pixel level in medium-light scenarios.

The beauty of Deep Fusion is that it works in the background and doesn't require the user to fiddle with settings or take their attention away from framing the shot. If you're shooting a scene that isn't dark enough to necessitate Night Mode, but not bright enough for the phone to get by with Smart HDR alone, Deep Fusion will automatically kick in, working its "computational photography mad science" below the surface, as Apple's Phil Schiller put it last month.
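
For readers who like to see that hand-off spelled out, here's a minimal Swift sketch of the idea. The lux thresholds and the function name are our own invented stand-ins, purely for illustration; Apple hasn't published the cutoffs it actually uses to pick a mode.

```swift
enum CaptureMode { case nightMode, deepFusion, smartHDR }

// Hypothetical lux thresholds just to illustrate the hand-off the article
// describes -- Apple hasn't disclosed the real cutoffs.
func captureMode(forSceneLux lux: Double) -> CaptureMode {
    switch lux {
    case ..<10.0:  return .nightMode   // very dark scenes
    case ..<600.0: return .deepFusion  // the medium-light middle ground
    default:       return .smartHDR    // bright scenes
    }
}

print(captureMode(forSceneLux: 120))  // deepFusion
```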

Apple's Phil Schiller touts Deep Fusion at the iPhone 11 launch event. (Image credit: Apple)

The Deep Fusion process goes a little something like this: before you press the shutter button, your iPhone will have already captured three short-exposure images used to ascertain optimal sharpness. When you hit the button, the camera adds another three normally exposed shots, plus one longer exposure that pulls in extra light and color.
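
If it helps to picture that buffer, here's a rough Swift sketch of the frames involved. The Frame and Exposure types, and the random sharpness scores, are stand-ins we made up for illustration, not anything from Apple's actual camera pipeline.

```swift
enum Exposure { case short, standard, long }

struct Frame {
    let exposure: Exposure
    let sharpness: Double  // stand-in detail score; higher = crisper
}

// Before the shutter press: three short exposures sit in a rolling buffer.
let preCaptureBuffer = (0..<3).map { _ in
    Frame(exposure: .short, sharpness: Double.random(in: 0...1))
}

// At the shutter press: three standard exposures plus one long exposure
// that pulls in extra light and color.
let shutterFrames = (0..<3).map { _ in
    Frame(exposure: .standard, sharpness: Double.random(in: 0...1))
} + [Frame(exposure: .long, sharpness: 0.5)]

// The sharpest short exposure becomes the reference for fine detail.
let referenceFrame = preCaptureBuffer.max { $0.sharpness < $1.sharpness }!
print(referenceFrame.sharpness, shutterFrames.count)
```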

The four longer exposures are merged into a single image, which is then compared against the best of the short exposures. Both are sent through waves of processing that treat each region of the frame differently. That's where machine learning comes in: the A13 Bionic prioritizes sharpening detail in a subject's sweater, for example, less so in their face, and even less so for objects in the background. And it does all of this on the basis of individual pixels, pulling clusters from the two pictures to generate one thoroughly refined final image.
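
Here's a loose Swift sketch of what that per-region weighting might look like. The region labels, the weights and the simple linear blend are our own illustrative guesses, not Apple's actual model.

```swift
enum Region { case fabric, skin, background }

// Hypothetical detail weights: texture-heavy areas lean on the sharp
// reference frame, smoother areas lean on the cleaner merged exposure.
func detailWeight(for region: Region) -> Double {
    switch region {
    case .fabric:     return 0.9
    case .skin:       return 0.5
    case .background: return 0.2
    }
}

// Blend one pixel value from the sharp reference frame and the merged
// longer exposures according to the region it belongs to.
func fusePixel(reference: Double, merged: Double, region: Region) -> Double {
    let w = detailWeight(for: region)
    return w * reference + (1 - w) * merged
}

// A pixel on a sweater keeps most of the reference frame's fine detail.
print(fusePixel(reference: 0.8, merged: 0.6, region: .fabric))  // ≈ 0.78
```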

Deep Fusion will add about a second to shooting time, though you won't be kept staring at a photo as it processes. Again, all of that happens in the background, so you can move right along to the next photo op. Only the primary 12-megapixel camera on the iPhone 11 and 11 Pro models and the telephoto lens exclusive to the Pros can make use of Deep Fusion; the ultrawide camera doesn't support the feature, just as it doesn't support Night Mode.

What appears in the iOS developer betas should soon make its way to the iOS 13 public beta before rolling out to the general public. We can't wait to get our hands on Deep Fusion, and we intend to post our impressions during the beta period before issuing a deeper verdict when the final version goes live.

Adam Ismail is a staff writer at Jalopnik and previously worked at Tom's Guide covering smartphones, car tech and gaming. His love for all things mobile began with the original Motorola Droid; since then he’s owned a variety of Android and iOS-powered handsets, refusing to stay loyal to one platform. His work has also appeared on Digital Trends and GTPlanet. When he’s not fiddling with the latest devices, he’s at an indie pop show, recording a podcast or playing Sega Dreamcast.