Computational Photography Part 3

On September 14, Apple rolled out the first of its fall 2021 product line refreshes. There were no new products yesterday, just new models. The event was titled “California Streaming,” which made me wonder whether it would include some AppleTV news or new streaming entertainment products. To be fair, the first thing Tim Cook discussed was AppleTV+, which has racked up an impressive track record of new entertainment and industry award nominations. Ted Lasso alone received 20 Emmy nominations, I believe. Like so many of Apple’s new ventures, AppleTV+ was initially pooh-poohed as thin, derivative, and uninspiring. No more. Anyone else looking forward to the Foundation series that drops 9/24? I’ve been waiting for this since I was eleven years old.

But the ATV+ segment did not last long. And it wasn’t too much longer before a good friend, who is a professional photog, texted me to say he thought the event was a bit of a yawner. At first blush, I was underwhelmed as well. Until I started to consider the cumulative computational capabilities Apple has packed into the new iPhone, the software improvements and new features, and the hardware improvements. Last year’s iPhone 12 Pro was billed as the biggest improvement in Apple photography ever. And I think it was. This year, Apple is saying the same thing, albeit with a phone and camera that look a lot like last year’s. But this year’s model is, in my view, significantly superior.

But it isn’t its superiority over the 12 that matters. What matters is the gap that Apple is increasingly closing between itself and traditional camera box companies like Canon, Nikon, and Sony. And even higher-end makers in the medium-format realm like Fuji and Hasselblad, and even Phase One. No, I’m not kidding, and that’s not click-bait [I don’t get enough clicks to matter]. I’ve written before about the road-grading capabilities of computational photography. Apple isn’t just leveling the playing field. They’re plowing it under and building an entirely new paradigm. Why and how is this? Just about all modern cameras have computers inside them. Among camera makers, Phase easily has the most sophisticated processing platform. Its IQ4 platform is capable of handling millions of calculations related to its 151mpx files, and Phase users enjoy the benefits when it comes to features like focus stacking, frame averaging, and HDR, particularly when seen “through the lens” of some of the extraordinarily sharp optics available to medium-format users. But these features are trivial compared to what Apple is doing.
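Frame averaging, one of the techniques mentioned above, is simple enough to sketch: averaging N independent exposures of the same scene cuts random noise by roughly √N. Here is a minimal NumPy illustration with synthetic data; the scene values, noise level, and burst size are invented for the demo and stand in for what a camera pipeline does with real frames:

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical "true" scene: flat patch of luminance 100.
scene = np.full((64, 64), 100.0)

# Simulate a burst of 16 exposures, each with random sensor noise (std = 10).
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

# Frame averaging: the per-pixel mean of N frames reduces
# uncorrelated noise by a factor of about sqrt(N).
stacked = np.mean(burst, axis=0)

single_noise = np.std(burst[0] - scene)   # expect ~10
stacked_noise = np.std(stacked - scene)   # expect ~10 / sqrt(16) = 2.5
print(f"single-frame noise: {single_noise:.2f}")
print(f"stacked noise:      {stacked_noise:.2f}")
```

The catch is alignment: this only works if the frames register perfectly, which is why the phone's motion sensors and processing pipeline matter as much as the math.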

The reality is that traditional cameras are exceedingly stupid compared to the computational capabilities of even an iPhone 10 or 11, let alone the new 13. And every year, Apple laps the field again. The A15 Bionic chip in the iPhone 13 is a marvel of engineering and technology. At 15 billion transistors, it offers significantly enhanced performance over last year’s chip and its 11.8 billion transistors. The combination of its CPU, GPU, and Neural Engine cores, plus LiDAR and similar advanced features, produces a smart computer that knows how to take pictures but doesn’t get in the way of what you want to do.

And consider the lenses, what Apple refers to as its three cameras, because each lens actually is its own camera.

That means each camera is optimized for the light delivery of its lens. Are the optics available to CaNiSony users, let alone medium format, more capable than what Apple can put into a phone that slips into your pocket? Yes. But Apple addresses that gap every year. With each product cycle, Apple improves substantially on the prior version. Camera body and lens makers can improve their gear incrementally, but they cannot fundamentally change the output of their products. They can keep ramping up the pixel count, but physics will determine to what extent any of that matters. Notice that Apple’s iPhone remains at 12mpx this year, but the pixels themselves are larger and gather significantly more light for less noise. That is the right choice to make. And it’s part of a multitude of new features and capabilities:

  • Significantly better battery life
  • Macro capability
  • Cinematic Mode
  • ProRes video [I wish I were a videographer]
  • 6x optical zoom range
  • Styles & Smart HDR 4
  • That 77mm telephoto for classic portraiture w/ 3x optical zoom
  • Night Mode across all three cameras
  • Advanced and adjustable depth-of-field mapping
  • And a pipeline that produces many of these features in a RAW format
Combine these features with Apple’s image processing pipeline, and you have a RAW image generation capability that is unmatched. [CaNiSony, what is wrong with you?] And with a good pixel multiplier in post, say, Topaz’s Gigapixel AI, you can produce image files to rival those of any traditional camera maker.
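Backing up for a moment, the decision to stay at 12mpx with larger pixels is straightforward arithmetic: the light a pixel gathers scales with its area, the square of its pitch. The pitches below are hypothetical round numbers chosen for illustration, not Apple specifications:

```python
# Hypothetical pixel pitches, in micrometers (illustrative, not Apple specs).
old_pitch_um = 1.4
new_pitch_um = 1.9

# Light gathered per pixel scales with pixel area, i.e. pitch squared.
area_gain = (new_pitch_um / old_pitch_um) ** 2
print(f"light per pixel: {area_gain:.2f}x")   # ~1.84x

# In shot-noise-limited conditions, SNR scales with the square root
# of the signal, so the noise benefit is the square root of that gain.
snr_gain = area_gain ** 0.5
print(f"SNR improvement: {snr_gain:.2f}x")    # ~1.36x
```

A modest bump in pitch nearly doubles the light per pixel, which is worth more in low light than extra megapixels that each see less.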

I have just one nit: it still has a Lightning connector instead of a USB-C jack. Even the new iPad mini has a USB-C port. I doubt this was a miss; it was an intentional choice. But it was the wrong one from this user’s perspective. Everything else not-camera-related, from the new OLED display to ProMotion, is extraordinary.

I yawned at first, too. Last year’s 12, which I still love, was the most impressive Apple camera ever. Until yesterday. And I’m ordering one on Friday.
