Small Mammals: The Power of Computational Photography

If you’re a photographer or an avid phone camera user, you probably know a fair bit about computational photography. Just about any camera made in this millennium has been computer-assisted. Chips inside cameras and lenses can do amazing things, from programmed exposure to custom interval shooting to lightning-fast autofocus with human and animal eye tracking. As impressive as that is, it’s really not much compared to the computational photography built into modern phones.

Consider the photo that accompanies this post. It was taken outside my hotel near Höfn, Iceland, more than an hour before sunrise, under extremely dark overcast skies. It wasn’t the middle of the night anymore, but it was still dark. The image is nondescript, but it illustrates the current rapid evolution of digital photography: this is a 3+ second exposure with the iPhone 11 Pro using Apple’s Night Mode. Apple wasn’t first to market with a dedicated low-light mode, but as is often the case, its product is simply better. While you wouldn’t want to rely on a file like this for large, crystal-clear prints, the ability of the phone to make a reasonably sharp picture in the near-dark, while the person hand-holding it is trying to stay still in the modest wind, is truly amazing. You can’t do this with any current non-phone camera. This capability is driven by the processor inside the phone (in this case, Apple’s own A13 chip) making millions of calculations and thousands of “exposures” per second from the scene it “sees” through the lens. And this is just the early stage.
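That burst of computation is, at its heart, multi-frame capture: grab many short exposures and merge them. Here is a minimal sketch in Python (using numpy) of the simplest version of that idea, plain frame averaging. It is nothing like Apple’s actual pipeline, which also aligns frames, rejects motion, and tone-maps; the scene and noise numbers below are made up for the demo.

```python
# Toy illustration (not Apple's pipeline): simulate a "night mode" by
# averaging many short, noisy exposures of the same dim scene.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" scene: a dim 64x64 grayscale gradient (values 0..0.1).
scene = np.tile(np.linspace(0.0, 0.1, 64), (64, 1))

def capture_frame(scene, read_noise=0.05):
    """One short handheld exposure: the dim scene plus sensor read noise."""
    return scene + rng.normal(0.0, read_noise, scene.shape)

# A single frame is dominated by noise...
single = capture_frame(scene)

# ...but stacking (averaging) many frames suppresses noise by roughly sqrt(N).
frames = np.stack([capture_frame(scene) for _ in range(30)])
stacked = frames.mean(axis=0)

print("single-frame error: ", np.abs(single - scene).mean())
print("30-frame stack error:", np.abs(stacked - scene).mean())
```

Run it and the stacked error comes out several times lower than the single-frame error, which is the whole trick: the more frames the processor can capture and merge in the time you can hold the phone steady, the darker the scene it can handle.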

Digital photography used to be all about the size of your sensor, the number of megapixels (yeah, I know, not really, but yes really), and the size and/or light-gathering power of those pixels. In this way, the first phase of digital photography has been very much like the age of film, when the size of the exposed film, its speed, its granularity, and similar physical characteristics of the light-recording medium determined the latitude with which the photographer worked. That’s changing. While we are still very much in the first or second phase of digital photography—the second might be thought of as the time during which digital has surpassed film in its capacity for realism—I think the divide between now and what is coming in the next five to ten years will be larger than the one between film and digital. Digital capabilities have an essentially unlimited ceiling—limitations like processor size, heat, and speed are merely temporary engineering obstacles—and for the phone camera, that means lens design will soon be the only real constraint on image quality, and the results will be something no conventional camera, no matter what its sensor specs are, can match. And the reason is the same as every other analog/digital divide in our lives: the digital processor.

So, why not put those processors in “traditional cameras”? Someone might. And if anyone had a brain, they would make it possible to build-to-order (BTO) your camera just like Apple does its computers: not too many choices to be paralyzing, but enough to be flexible, and deliverable in under two weeks. But even that won’t stop the evolutionary Armageddon. Traditional cameras, like crocodiles and roaches, will survive into the next digital age, but they will not stop the ascendancy of mobile digital photography, any more than the dinosaurs could stop the rise of small mammals. This is the meteor.
