Computational photography will turn the photo world upside-down
Submitted by brad on Mon, 2017-09-18 13:30
The camera industry is about to come crashing down thanks to the rise of computational photography.
Many have predicted this for some time, and even wondered why it hasn't happened yet. Most people already take the bulk of their photos with their cell phones, but if you want to do serious photography, in spite of what the giant Apple billboards say, you still carry a dedicated camera, and the more you want from that camera, the bigger the lens on the front of it.
That's because of some basic physics. No matter how big your sensor is, a bigger lens lets in more light for each pixel. That means less noise, more ability to gather enough light in dark situations, faster shutter speeds for moving subjects and more.
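To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python, using made-up aperture diameters: light gathered scales with the area of the lens opening, and photon shot noise means the signal-to-noise ratio only grows with the square root of the light collected.

```python
import math

def relative_light(aperture_diameter_mm: float, reference_mm: float = 25.0) -> float:
    """Light gathered scales with aperture area, i.e. diameter squared."""
    return (aperture_diameter_mm / reference_mm) ** 2

def shot_noise_snr(photons: float) -> float:
    """Photon arrival is Poisson, so SNR grows as sqrt(photon count)."""
    return math.sqrt(photons)

# Hypothetical numbers: a lens with 4x the diameter gathers ~16x the light,
# which roughly quadruples the shot-noise SNR for the same exposure time.
phone_photons = 1_000.0
big_lens_photons = phone_photons * relative_light(100.0, 25.0)
print(shot_noise_snr(big_lens_photons) / shot_noise_snr(phone_photons))  # -> 4.0
```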
For serious photographers, it also means making artistic use of what some might consider a defect of larger lenses -- only a narrow range of distances is in focus. "Shallow depth of field" lets photographers isolate and highlight their subjects, and give depth and dimensionality to photos that need it.
So why is it all about to change?
Traditional photography has always been about capturing a single frame. A frozen moment in time. The more light you gather, the better you can do that. But that's not the way the eye works. Our eyes are constantly scanning a dynamic scene in real time, assembling our image of the world in our brains. We combine information captured at different times to get more out of a scene than our eyes, treated as cameras, could extract in any single "frame" (if they had frames).
Computational photography applies smart digital algorithms not just to single frames, but to rapid bursts of them, or to frames from multiple lenses. It uses those to learn more about the scene than any one frame or lens could pull out.
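The simplest version of the trick is frame stacking: average a quick burst of already-aligned frames and the random sensor noise largely cancels out, mimicking the light-gathering of a bigger lens or a longer exposure. A minimal sketch in Python/NumPy, which deliberately skips the hard part of aligning the frames:

```python
import numpy as np

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of aligned frames of a static scene.

    Averaging N frames reduces independent per-frame noise by roughly
    a factor of sqrt(N), without a bigger lens or longer exposure.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)

# Simulate a burst: one "true" scene plus independent noise in each frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640))
burst = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(16)]

single_frame_noise = np.std(burst[0] - scene)          # ~20
stacked_noise = np.std(stack_frames(burst) - scene)    # ~5, i.e. 20 / sqrt(16)
print(single_frame_noise, stacked_noise)
```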