Autodesk Labs Builds Tools for Capturing Reality—And Improving On It

11/28/11

If you had to boil down Autodesk's business to a few simple words, it might be "helping people create new realities"—whether that means constructing new objects or structures first envisioned on the company's computer-aided design (CAD) programs or generating new Avatar-like movie worlds using its modeling and animation software. But increasingly, the first step in the process of modeling a new product or environment is capturing an existing reality, then building on it. And a new cloud service hatched by Autodesk Labs, the company's San Rafael, CA-based experimental design group, helps professionals and amateurs alike do exactly that, by synthesizing eerily accurate 3D computer models of almost any object or space from a few dozen conventional photographs.

Released in early November as an official Autodesk (NASDAQ: ADSK) beta product, the service is called 123D Catch, reflecting its place in a growing family of amateur-accessible design tools under the 123D brand. It uses a technique called photogrammetry to identify common features in a series of photos snapped from multiple angles. From those reference points, Autodesk's servers can recreate the scene as a 3D mesh, like the model of my head shown below. The 3D models can then be modified using simple CAD programs like 123D, or even printed out and reassembled as real-world sculptures using yet another Autodesk program, 123D Make.
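The geometric core of this process is triangulation: once the same feature has been spotted in two photos, and the positions of the two cameras are known, the feature's 3D location can be solved for directly. Autodesk hasn't published 123D Catch's internals, so the sketch below is only an illustration of the general principle, using the standard direct linear transform (DLT) with toy camera matrices I've made up; a real photogrammetry pipeline also has to estimate the camera poses themselves from the matched features.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two views.

    P1, P2: 3x4 camera projection matrices (assumed known here).
    x1, x2: (u, v) image coordinates of the same feature in each view.
    Each view contributes two linear constraints on the homogeneous
    3D point X; we solve the stacked system by SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Best solution = right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through camera P to 2D image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted one unit sideways.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_rec, X_true))  # True
```

Repeating this for thousands of matched features across dozens of photos yields the point cloud that a service like 123D Catch then stitches into a textured mesh.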

It’s pretty amazing stuff for anyone who has a bit of maker in them. Until recently, building detailed photogrammetric models of everyday objects wasn’t possible without a battery of expensive laser scanners. But 123D Catch is just part of Autodesk’s larger plan to reach beyond its traditional audience of professional architects and designers with tools that can help advanced amateurs create, explore, and build their own 3D objects. And it’s a first step toward a future world where small-scale custom design and manufacturing may be widespread—and where Autodesk hopes to stake a big claim.

The “things industry” is gradually going the way of Netflix, argues Autodesk Labs vice president Brian Mathews. “We used to use money to buy things—shoes, glasses—but now we will effectively buy ideas,” Mathews says. “That is our prediction.”

And since the ideas will be digital, it will be easy to tweak them to our own tastes before they’re brought to life. Autodesk describes this as the “scan/modify/print” worldview. “In the music industry, people rip songs and deejays put them together in new ways,” Mathews observes. “That is also going to happen with the things industry. We’ve got the ability to modify things with 123D and do 3D printing with 123D Make. But what we haven’t shown is the scan part, and that’s what [123D Catch] is one aspect of—bringing laser scanning down to the consumer level.”

Autodesk first shared a preview version of 123D Catch under the code name Photofly in early 2010. I visited Mathews at Autodesk’s San Francisco offices this fall to learn more about Autodesk Labs, and we ended up focusing on Photofly as a soup-to-nuts illustration of the group’s mission and working pattern. “Everyone [at Autodesk] is inventing and improving, but an invention is not an innovation,” Mathews says. “An innovation has to be more in the practical realm; it has to work. We make real-world prototypes instead of research stuff, and our key differentiating feature is that we involve our customers. When we have something really new like Photofly, we are involving the customers in the R&D process from the beginning.”

Indeed, makers using early versions of Photofly have come up with some pretty stunning creations. One of the most impressive is this music video from the Brisbane, Australia-based electronic-pop band Hunz; it’s populated by haunting Photofly models of lead singer-composer-programmer Hans Van Vliet. But users have also employed Photofly to model more mundane scenes, from archaeological digs to ratty jogging shoes.

Photogrammetry—the process of measuring objects from their images—is a science that dates back nearly to the invention of photography in the mid-1800s. But it’s gotten a huge boost in the last decade from the introduction of digital photography and …

Wade Roush is a contributing editor at Xconomy.

