The Lytro Camera Is Revolutionary, But It’s No iPhone


light rays from the scene converge into a focused image on the sensor. You no longer care about focus at all, because if you compare all the mini-images, you can reason backward to figure out where each ray of light in a scene came from. You end up, in essence, with a three-dimensional record of the space in front of the camera (that’s what “light field” means), and from this record you can reconstruct imaginary pictures showing what any slice of that 3D space would have looked like if you had tried to focus on it.
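The "reason backward" step can be illustrated with the standard shift-and-sum rendering idea from plenoptic photography: treat the capture as a grid of sub-aperture views, shift each view in proportion to its position in the aperture, and average. This is only a simplified sketch of the general technique, not Lytro's actual pipeline; the `refocus` function, the `views` data layout, and the integer-pixel shifts are all assumptions made for illustration.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum synthetic refocusing (simplified sketch).

    views: dict mapping aperture coordinates (u, v) -> 2D image array,
           one image per viewing direction recovered from the microlenses.
    alpha: controls how far each view is shifted, which selects the
           virtual focal plane; alpha = 0 reproduces a simple average.
    """
    acc = None
    for (u, v), img in views.items():
        # Shift each sub-aperture view in proportion to its aperture position.
        shifted = np.roll(img,
                          (int(round(alpha * u)), int(round(alpha * v))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Tiny synthetic example: a 3x3 grid of 8x8 sub-aperture views.
rng = np.random.default_rng(0)
views = {(u, v): rng.random((8, 8))
         for u in (-1, 0, 1) for v in (-1, 0, 1)}
focused = refocus(views, alpha=1.0)
```

Sweeping `alpha` over a range of values produces the stack of differently focused images that a viewer can click through.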

That, plus some nifty visualization software, is what allows Lytro to create its unique “living pictures,” which you can refocus instantly simply by clicking or tapping on a specific point in the image. (Try clicking around on one of the images embedded here, which I took this week on a Lytro photo walk for journalists in San Francisco.) This in itself would be cool enough: the refocusability of Lytro’s images makes them into interactive objects, inviting a kind of exploration and emotional engagement that you just don’t get with static, monoplanar images. But there’s an added advantage to light field photography: If you don’t care about focusing the image before it’s taken, you don’t need all the autofocus sensors and motors that get the optics into place before you shoot. This means you can snap a picture the instant the camera comes on—which any parent with a hyperkinetic child will appreciate. The Lytro camera does have motors and a stack of lenses inside, but that’s only to provide zoom capability.

It’s really a mind-blowing concept, and it was all worked out by Ren Ng as part of his 2006 doctoral dissertation for the Stanford computer science department. The founding of Lytro, where Ng is now CEO, was a typical Silicon Valley story: Pat Hanrahan, Ng’s doctoral advisor, knew the partners at NEA because they’d backed his company Tableau Software. “Pat said ‘I have this really super bright student and you should take a look at what he is doing,’” Chung recounts. After falling for Ng’s initial presentation, Chung put him in front of a full partner meeting at NEA, where he “took a picture of us, immediately uploaded it to the Web, and showed us the refocusability. We were just astonished. Each and every one of us had the same reaction, which was that [optical photography] is a technology that has not seen fundamental innovation in two centuries, and we were staring it in the face.”

But it’s one thing to come up with a game-changing idea, and another thing to use it to actually change consumer behavior. If you’re looking to explain why the iPhone took off so quickly—selling 1 million units in the first 74 days—I think you have to zero in on two interrelated innovations: the beautiful multitouch screen, and the intuitive, gestural interaction paradigms that Apple’s software designers came up with to exploit that screen. The iPhone didn’t make just one thing, like dialing or managing a contact list, demonstrably easier and more fun than on previous phones—it made many things easier, from Web browsing to e-mail to calendaring to messaging to navigation to photo and music management. And all this was even before the iPhone had third-party apps or 3G connectivity.

For photographers, the Lytro makes exactly two things easier: 1) Focusing, which is now unnecessary. 2) Capturing a candid scene instantly, without any autofocus or shutter delay. Then there’s a third, bonus element: the explorative nature of the “living pictures,” which is a genuine novelty with many creative implications.

This is all very cool, but I’m just not sure it adds up to a $399 to $499 value for most consumers. To get really nit-picky: The no-focus feature is actually a little hard to get your head around, and I’m not sure it’s a huge advantage, because people are already …


Wade Roush is a contributing editor at Xconomy.
