The first time I got to play with a Lytro light-field camera was in March 2012. If I had to sum up my reaction in seven words, it would be: “Concept: Mind-blowing. Execution: Not quite there yet.”
It was clear that the technology inside the camera—which makes it possible to refocus a picture after it’s been taken—would eventually upset all of our notions about photos and photography. But the device itself reminded me of the Magnavox Odyssey, the first commercial home video game console.
When it appeared in 1972, the Odyssey had analog circuitry, no sound, and grainy black-and-white graphics, and could only run a handful of games. Yet it’s remembered now because it heralded a true revolution in home entertainment. (Atari’s home version of Pong didn’t come until three years later.)
Likewise, the first-generation Lytro has a low-resolution sensor, a lamentably tiny display, and an awkward interface. But it’s still enough to get across the enormous potential of light field photography.
What’s amazing is how quickly the technology is evolving. There’s no second-generation Lytro yet (though it’s safe to assume the Mountain View, CA-based company is working on one). But because light field photography is mostly about computation, not optics or electronics, Lytro can make its existing camera more powerful simply by upgrading the software used to process light-field images.
And that’s exactly what it did in November, rolling out a new feature called Perspective Shift. As the name implies, the feature lets you nudge the perspective in a Lytro image slightly, as if you were present in the scene and moving your head a few inches in one direction or the other.
It’s easier to show Perspective Shift than to describe it—just click and drag on the Lytro image below to see how it works. (You can also click on any point in the image to refocus it; that feature was the Lytro’s original selling point.)
Pretty damn cool, huh? You can go to Lytro’s gallery to explore a bunch more of these images.
Perspective Shift is possible because the Lytro camera captures far more information about a scene than a traditional digital camera. In fact, there’s enough data in a single Lytro image to reconstruct a 3-D scene, or at least a sliver of one. “The light field itself is inherently multidimensional,” explains Eric Cheng, Lytro’s director of photography. “The 2-D refocusable picture that we launched with was just one way to represent that.”
The big picture here (so to speak) is that we are about to enter the second age of 3-D photography, and this time it will be consumers, rather than just professional photographers, behind the lens. I’ll explain what happened during the first age, and how Lytro is changing things, in a moment. But if you retain nothing else about this article, remember this: The Lytro images we’re seeing today are but a meager taste of what’s coming.
Whether or not future light field cameras bear the Lytro logo, they’re going to give us capabilities that even science-fiction movie directors haven’t imagined. With a single snapshot, you’ll be able to capture an entire 3-D environment, then explore it later using either a 2-D or a 3-D display. The implications for consumer-level home and travel photography are exciting enough. But when you imagine what architects, designers, engineers, and entertainers could do with the technology, the mind boggles.
But let’s back up about 160 years. Most people don’t realize it, but 3-D photography is almost as old as photography itself. By 1845, a British scientist named Charles Wheatstone had already figured out that if you take two photos of the same scene from slightly different angles, and then arrange the printed pictures so that the viewer’s left eye only sees the left image while the right eye only sees the right image, the brain will instantly fuse the images into a 3-D scene.
This is exactly the way we see our actual environment, so the effect isn’t too surprising. Many nineteenth-century families owned stereoscopes based on Wheatstone’s principle, and the devices became the first medium for mass-market photojournalism, exposing millions of middle-class people to 3-D pictures of exotic places they’d never get to see in person. I’ve made a hobby of collecting the old stereograph cards produced for the devices—see my December 2008 column “The 3-D Graphics Revolution of 1859.”
Stereoscopes went out of fashion by the 1930s, but the same principle popped up again in the 3-D films of the early 1950s. Just as before, the creators of 3-D monster features like House of Wax (1953) used special cameras to capture a stereo pair of images for each frame of the film. But whereas the old stereoscopes used a physical barrier, typically a slat of wood, to keep the left eye from seeing the right image, and vice versa, early 3-D films were based on “anaglyph” technology. Here, the two images were filtered through glasses with differently colored lenses, typically red and green. A similar system is still used today for films like James Cameron’s Avatar, except that modern 3-D glasses filter the left and right images using polarized lenses.
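The anaglyph trick is simple enough to sketch in a few lines. This is an illustrative toy, not anything from Lytro or a film pipeline: the left-eye image is carried in the red channel and the right-eye image in the green channel, so colored glasses route each one to the matching eye.

```python
import numpy as np

# Stand-in grayscale frames for the left and right eyes.
rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = rng.random((64, 64))

def anaglyph(left, right):
    """Pack a stereo pair into one RGB frame: red carries the
    left-eye image, green the right-eye image. Red/green glasses
    then filter each image through to the matching eye."""
    rgb = np.zeros((*left.shape, 3))
    rgb[..., 0] = left   # red channel -> left eye
    rgb[..., 1] = right  # green channel -> right eye
    return rgb

frame = anaglyph(left, right)
```

Polarized systems like the one used for Avatar replace the color filtering with polarization filtering, but the principle is the same: keep each eye from seeing the other eye’s image.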
The first age of 3-D photography, then, had two defining characteristics. First, it was only possible to create 3-D images using specialized, expensive equipment. (The main exception was the Stereo Realist, a consumer-oriented stereo camera produced by the David White Company of Milwaukee, WI, from 1947 to 1972. It created color slides that could be viewed using a View-Master-style device. President Dwight D. Eisenhower was one famous Stereo Realist devotee.) Second, it was only possible to experience stereoscopic 3-D images using a variety of kludgy, uncomfortable systems designed to prevent interference between the left and right images.
In the new second age of 3-D photography, both of those requirements have gone out the window. With Lytro’s technology, you don’t need to capture a pair of images in order to recreate a scene in 3-D, and you don’t need a special stereoscope or glasses to explore the subject in 3-D. You do, of course, need a computing device that can render an image interactively.
I won’t try to explain light field photography in detail (I couldn’t anyway—we’re talking advanced optics and image processing here), but the general idea is this: in a traditional digital camera, a lot of data goes to waste. Light is coming at the sensor from everywhere, but at each point on the imaging plane, the sensor only records the aggregate intensity of this light in the red, green, and blue parts of the spectrum. All information about the direction the light rays came from is lost.
In Lytro’s camera, there’s a microlens array that divides the sensor into thousands of mini-cameras, each of which captures a scene from a slightly different angle. In this way, information about the directionality of the rays is preserved.
The result isn’t a conventional image at all, but rather a few megabytes of data representing a “4-D light field.” This field can be sliced and diced in a variety of ways using software. One way is to change the plane of focus in the image—in effect, isolating one 2-D slice of the 4-D light field. Another way is to pivot between different virtual points of view. That’s Perspective Shift.
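To make the “slicing and dicing” concrete, here’s a toy sketch in Python. Lytro’s actual file format and processing pipeline are proprietary; this just models a 4-D light field as an array `L[u, v, y, x]`, where `(u, v)` indexes the sub-aperture view and `(y, x)` the pixel. Perspective shift is then simply picking a different view, and refocusing is the classic shift-and-add method from the light-field literature.

```python
import numpy as np

# Toy 4-D light field: L[u, v, y, x] is the image seen through
# sub-aperture (u, v) of the main lens. Random data stands in for
# a real capture.
U = V = 7    # angular resolution (number of sub-aperture views)
H = W = 64   # spatial resolution of each view
rng = np.random.default_rng(0)
L = rng.random((U, V, H, W))

def perspective_shift(L, u, v):
    """'Moving your head' = selecting a different sub-aperture view."""
    return L[u, v]

def refocus(L, alpha):
    """Synthetic-aperture refocusing: shift each view in proportion
    to its angular offset from the center, then average. The scale
    factor alpha selects the virtual focal plane (alpha = 0 averages
    the views unshifted)."""
    U, V, H, W = L.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(L[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

left_view = perspective_shift(L, 3, 0)  # viewpoint nudged left
refocused = refocus(L, alpha=1.0)       # new focal plane
</antml>```

In a real camera the views overlap and the software interpolates between them, but even this crude version shows why one capture supports both features: the focal plane and the viewpoint are both choices you defer until after the shutter clicks.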
Cheng admits that it’s a challenge to explain the physics of light field photography to lay people. But they don’t need to grok the details in order to grasp what’s going on with Perspective Shift. “When people see it, they immediately understand,” Cheng says. “It’s a little bit like magic—they don’t care how it’s done.”
Now, there’s still a big difference between exploring a subject in 3-D—that is, being able to rotate the perspective back and forth a bit—and actually seeing it in 3-D. Cheng says that as soon as Lytro released the Perspective Shift software upgrade, customers began making their own stereographs from Lytro images. They’d shift the perspective all the way to the left, take a screen shot, shift it all the way to the right, take a second screen shot, and then juxtapose that pair of images in a stereoscope.
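The customers’ screenshot workflow can be sketched directly from the light-field array. Assuming the same toy `L[u, v, y, x]` layout as above (a stand-in, not Lytro’s format), the two extreme horizontal views are the stereo pair, and a parallel-view stereograph is just the two placed side by side:

```python
import numpy as np

# Toy light field: L[u, v, y, x], with v the horizontal view index.
rng = np.random.default_rng(1)
light_field = rng.random((7, 7, 64, 64))

def stereo_pair(L):
    """Build a parallel-view stereograph: perspective shifted all the
    way left for the left eye, all the way right for the right eye,
    then the two views juxtaposed side by side."""
    mid = L.shape[0] // 2        # keep vertical parallax centered
    left = L[mid, 0]             # leftmost horizontal viewpoint
    right = L[mid, -1]           # rightmost horizontal viewpoint
    return np.hstack([left, right])

card = stereo_pair(light_field)  # ready for a stereoscope
```

The small baseline of the first-generation camera limits how strong the depth effect can be, which is exactly the constraint Cheng describes below.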
Cheng says Lytro engineers have done the same thing using software in their lab, and have visualized the results using 3-D displays. The next logical step, he says, would be to build a version of the Lytro viewer that works on 3-D televisions. “The 3-D TV manufacturers have a content scarcity, so those are all very interesting things we are exploring,” he says.
Beyond that, Lytro is thinking about cameras with larger sensors that could capture more information, which would increase both the resolution of the 4-D light field and the degree of perspective shift that’s possible. To get slightly technical, the “shiftability” of a Lytro image is a function of the physical size of the optics—ultimately, the diameter of the lens aperture across which the virtual viewpoints can move; this baseline is small in the first-generation Lytro device because the camera itself is small and has a fixed-aperture, f/2 lens. But that won’t be true forever. “When the sensor is bigger, we can start to do really compelling 3-D content,” Cheng says.
Photography used to be all about lenses and film. Then it was about lenses and electronic sensors. Lytro is changing the game again. As Cheng and his colleagues continue to improve their software and hardware, get ready to abandon all your old ideas about how a static image should behave.
“One of the things we like to say, internally, is that we are moving the power of photography from optics to computation,” Cheng says. “And the very interesting thing about the light field is that 3-D is a byproduct. We get it without having to put much effort into it. So when the public really demands 3-D content, we will be ready for it.”
Here’s a Lytro video about Perspective Shift, featuring Eric Cheng.