Autodesk Labs Builds Tools for Capturing Reality—And Improving On It

11/28/11

If you had to boil down Autodesk's business to a few simple words, it might be “helping people create new realities”—whether that means constructing new objects or structures first envisioned on the company’s computer-aided design (CAD) programs or generating new Avatar-like movie worlds using its modeling and animation software. But increasingly, the first step in the process of modeling a new product or environment is capturing an existing reality, then building on it. And a new cloud service hatched by Autodesk Labs, the company’s San Rafael, CA-based experimental design group, helps professionals and amateurs alike do exactly that, by synthesizing eerily accurate 3D computer models of almost any object or space from a few dozen conventional photographs.

Released in early November as an official Autodesk (NASDAQ: ADSK) beta product, the service is called 123D Catch, reflecting its place in a growing family of amateur-accessible design tools under the 123D brand. It uses a technique called photogrammetry to identify common features in a series of photos snapped from multiple angles. From those reference points, Autodesk’s servers can recreate the scene as a 3D mesh, like the model of my head shown below. The 3D models can then be modified using simple CAD programs like 123D, or even printed out and reassembled as real-world sculptures using yet another Autodesk program, 123D Make.
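For the technically curious, the first step of photogrammetry—finding features that two photos of the same object share—can be sketched in a few lines of Python using the open-source OpenCV library. This is purely illustrative: Autodesk's pipeline is proprietary, and the filenames below are hypothetical stand-ins for two snapshots of one subject.

```python
# Illustrative sketch: find features that two photos of the same object share.
# Not Autodesk's code; the image filenames are hypothetical placeholders.
import cv2

img1 = cv2.imread("head_front.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("head_side.jpg", cv2.IMREAD_GRAYSCALE)

# Detect up to 4,000 distinctive keypoints in each image and describe them.
orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Pair up similar-looking keypoints; cross-checking discards one-sided matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two photos")
```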

It’s pretty amazing stuff for anyone who has a bit of maker in them. Until recently, building detailed photogrammetric models of everyday objects wasn’t possible without a battery of expensive laser scanners. But 123D Catch is just part of Autodesk’s larger plan to reach beyond its traditional audience of professional architects and designers with tools that can help advanced amateurs create, explore, and build their own 3D objects. And it’s a first step toward a future world where small-scale custom design and manufacturing may be widespread—and where Autodesk hopes to stake a big claim.

The “things industry” is gradually going the way of Netflix, argues Autodesk Labs vice president Brian Mathews. “We used to use money to buy things—shoes, glasses—but now we will effectively buy ideas,” Mathews says. “That is our prediction.”

And since the ideas will be digital, it will be easy to tweak them to our own tastes before they’re brought to life. Autodesk describes this as the “scan/modify/print” worldview. “In the music industry, people rip songs and deejays put them together in new ways,” Mathews observes. “That is also going to happen with the things industry. We’ve got the ability to modify things with 123D and do 3D printing with 123D Make. But what we haven’t shown is the scan part, and that’s what [123D Catch] is one aspect of—bringing laser scanning down to the consumer level.”

Autodesk first shared a preview version of 123D Catch under the code name Photofly in early 2010. I visited Mathews at Autodesk’s San Francisco offices this fall to learn more about Autodesk Labs, and we ended up focusing on Photofly as a soup-to-nuts illustration of the group’s mission and working pattern. “Everyone [at Autodesk] is inventing and improving, but an invention is not an innovation,” Mathews says. “An innovation has to be more in the practical realm; it has to work. We make real-world prototypes instead of research stuff, and our key differentiating feature is that we involve our customers. When we have something really new like Photofly, we are involving the customers in the R&D process from the beginning.”

Indeed, makers using early versions of Photofly have come up with some pretty stunning creations. One of the most impressive is this music video from the Brisbane, Australia-based electronic-pop band Hunz; it’s populated by haunting Photofly models of lead singer-composer-programmer Hans Van Vliet. But users have also employed Photofly to model more mundane scenes, from archaeological digs to ratty jogging shoes.

Photogrammetry—the process of measuring objects from their images—is a science that dates back nearly to the invention of photography in the mid-1800s. But it’s gotten a huge boost in the last decade from the introduction of digital photography and cloud computing. Microsoft was the first to bring a 3D photogrammetry tool to consumers back in 2008, in the form of a demo system called Photosynth. Using a Windows computer or the Photosynth app on an iPhone, users can snap multiple images of a scene, then stitch them together into explorable, wraparound panoramas.

But Photosynth, which was developed into a Web service by Microsoft’s now-defunct Live Labs, was conceived more as a 3D photo browser than as a tool for generating high-resolution 3D models. The 3D “point cloud” that Photosynth creates by matching common features in multiple images “isn’t particularly useful,” Mathews asserts, because the resolution is too low and up to 20 percent of the matches are bogus.
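Weeding out those bogus matches is precisely where modern pipelines spend their effort. One standard technique—not a claim about Photosynth's or Autodesk's internals—is RANSAC: keep only the matches consistent with a single rigid camera geometry. A minimal sketch, continuing from the matching code above:

```python
# Illustrative sketch, continuing from the matching example above (kp1, kp2,
# matches). RANSAC keeps only matches consistent with one camera geometry;
# a standard technique, not necessarily the one either company uses.
import cv2
import numpy as np

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the fundamental matrix with RANSAC; outliers get a zero in the mask.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print(f"{int(inlier_mask.sum())} of {len(matches)} matches survive the geometric check")
```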

Autodesk got into photogrammetry in 2008 by acquiring a French company called Realviz, which made high-end 3D modeling tools used mainly for special-effects-laden movies like Superman Returns and Harry Potter and the Goblet of Fire. “We wanted to democratize this down, so we put it into the Labs,” says Mathews.

But what is Autodesk Labs, exactly? Created in 2006, it’s a porous collection of 20 to 50 experts who rotate in from other parts of the 6,800-employee corporation to create software prototypes and previews that can be shared and tested on the Web—from Project Vasari for modeling new buildings in real cityscapes to Project Falcon for simulated wind tunnel testing of vehicle designs. To adapt the Realviz technology, which was rechristened Photofly, the labs assembled a group of Autodesk engineers with backgrounds in big-data analytics, fault-tolerant cloud computing environments, mathematics, and user interfaces. They did some rapid prototyping of their own and released Version 1.0 of Photofly at the TEDGlobal conference in Oxford, England, in July 2010.

“The first thing we did was make it more user-friendly,” says Mathews. “The second thing was that we put all the heavy-duty computation up on the cloud. Photosynth and Realviz would tie up your computer for hours. In the cloud we can split the problem up into the different parts and send them to data centers with different specialties.” Mathews showed me how quickly the process works by sitting me down in a chair and hopping in circles around me with a digital camera, shooting about 50 images and then uploading them to Autodesk’s cloud. The finished mesh in the image above was ready well before we finished our hour-long interview.
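Why does the cloud help so much? Because the matching stage decomposes into many independent jobs: every pair of photos can be compared separately. Here's a hedged sketch of that decomposition using Python's multiprocessing on a single machine; Autodesk's actual job-distribution scheme isn't public, and match_pair is a hypothetical stand-in for the real work.

```python
# Illustrative sketch: pairwise photo matching splits into independent jobs,
# which is what makes it easy to farm out to a cloud. This version just runs
# the jobs on local CPU cores; match_pair is a hypothetical stub.
from itertools import combinations
from multiprocessing import Pool

def match_pair(pair):
    i, j = pair
    # ...feature-match photo i against photo j, as in the earlier sketch...
    return (i, j)

if __name__ == "__main__":
    photos = range(50)                       # ~50 snapshots, as in the demo
    pairs = list(combinations(photos, 2))    # 1,225 independent matching jobs
    with Pool() as pool:
        results = pool.map(match_pair, pairs)
```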

The math problem of turning a bunch of photographs into a 3D model is a pretty gnarly one, so moving the computation into the cloud made sense. The challenge starts with comparing the millions of pixels in a pair of images to identify visual features that might correspond—”a freckle here, a corner of a collar there,” as Mathews puts it. Once you’ve got enough common points, you can do a bit of trigonometry to guess where the camera must have been for each shot. The difficulty is that this process must be repeated for dozens or sometimes hundreds of photos, with up to 4,000 points being tracked across the images. “You are building this probabilistic matrix, and the math is very complicated,” says Mathews. But the more images that contain a single feature, the more confidence the software can build up in its guesses.
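In code, the "where was the camera?" step looks something like the sketch below: estimate the relative pose of two cameras from their matched points, then triangulate the points' 3D positions. Again this is illustrative OpenCV, not Autodesk's math, and the camera matrix K is an assumed placeholder; a real pipeline calibrates it, often starting from the photos' EXIF data.

```python
# Illustrative sketch of the trigonometry Mathews describes: recover the two
# camera positions from matched points, then triangulate a sparse 3D cloud.
# Uses pts1/pts2 from the earlier sketch; K is an assumed, uncalibrated guess.
import cv2
import numpy as np

K = np.array([[1000.0, 0, 640],   # focal length and image center: placeholder
              [0, 1000.0, 360],   # values; real pipelines calibrate these
              [0, 0, 1]])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Camera 1 sits at the origin; camera 2 at rotation R and translation t.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T   # N x 3 sparse point cloud
```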

The end product is a 3D mesh made of thousands of tiny triangles, just like the models in video games or CGI movies. The resolution of the mesh varies depending on the object represented and the distance from which the original photos were taken, but for a human head, each triangle might represent a patch of skin just a few millimeters across.
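Under the hood, a triangle mesh is nothing more than a list of shared vertex positions plus triples of indices that knit them into triangles. As a toy illustration, this sketch writes a one-triangle "mesh" in the plain-text Wavefront OBJ format, a common interchange format for meshes of this kind:

```python
# Illustrative: a mesh is shared vertices plus index triples. This writes a
# one-triangle mesh in the Wavefront OBJ format (OBJ indices are 1-based).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
triangles = [(1, 2, 3)]

with open("patch.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in triangles:
        f.write(f"f {a} {b} {c}\n")
```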

But that’s not the last step. Once the mesh is complete, Photofly goes back to the original images and extracts colors to create a “texture map” that can be laid over the mesh. The program even compensates for differences in lighting and exposure, blending textures so that the model isn’t simply the right shape, but also has a photo-realistic surface.
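The exposure compensation Mathews mentions can be approximated crudely in a few lines: scale one photo's brightness to match the other's before averaging the overlapping region. A toy sketch follows; the filenames are hypothetical crops of the same surface patch seen in two photos, and real texture blending works per-triangle in texture space rather than on whole images.

```python
# Illustrative sketch of exposure-compensated blending: match one crop's overall
# brightness to the other's, then average. Filenames are hypothetical crops of
# the same surface patch; real pipelines blend per-triangle in texture space.
import cv2
import numpy as np

a = cv2.imread("patch_from_photo1.png").astype(np.float32)
b = cv2.imread("patch_from_photo2.png").astype(np.float32)

gain = a.mean() / b.mean()                         # crude global exposure match
blended = np.clip((a + b * gain) / 2.0, 0, 255).astype(np.uint8)
cv2.imwrite("blended_texture.png", blended)
```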

The finished model can be exported in various formats and used to make movies and fly-throughs. (Just search YouTube for “Photofly” to see about 800 examples.) You can also modify the mesh using programs like Autodesk’s 123D Sculpt. If I were unhappy with the Jimmy Durante nose and the Dumbo ears in the model of my own head, for example, I could do a bit of virtual plastic surgery.

When Autodesk put a Labs team on the Photofly project, it didn’t necessarily have photo-realistic 3D busts in mind. “When we first started doing this, the main use case we had in mind was sustainable architecture,” says Mathews. When you’re renovating an existing building to make it more energy-efficient, it helps to start with a digital model—but sending a crew to do a centimeter-scale laser scan isn’t always practical or affordable. “You don’t need a laser-quality scan to do an energy model,” Mathews says. “We thought we would use Photofly to get rough dimensions. How many square feet is the roof and what is the orientation to the sun and how much of a shadow does that tree cast? But the next thing we knew, music videos were coming out using this thing.”

The video game industry “has glommed onto this the most,” says Mathews. “These guys are making entire worlds, and they spend a lot of time on the artistry of the characters and the environment. But it can take a week to make a single object, so these guys are starting to use Photofly. Take a few pictures, and you’ve got a tree stump.”

Autodesk isn’t charging anything—yet—for the 123D Catch Windows software that’s used to prepare images for comparison, or for the cloud computing resources that the process chews up. It’s not even a sure thing that 123D Catch, which is still in beta, will graduate to formal product status. Mathews says his goal at Autodesk Labs is to generate a finished product every one or two years, and there are plenty of other projects in the pipeline. “The point of Autodesk Labs is not to make sure that everything survives and goes into product,” says Mathews. “If some of these things don’t fail, we are not stretching ourselves enough.”

While Mathews clearly relishes the engineering details that go into systems like 123D Catch, he’s also a big-picture guy. And the big picture at Autodesk Labs, he says, has to do with some fundamental shifts that are changing the world of design. Cloud computing is putting nearly infinite computational power at the fingertips of designers. Imaging technologies like Photosynth and Photofly are bringing analog objects and spaces into the digital realm, where they can be analyzed, adjusted, and endlessly optimized. And the technologies themselves are going down-market, allowing anyone with a creative itch and a few digital tools to solve design problems.

“Photofly is an example of what we call ‘reality capture disruption,’” says Mathews. “But infinite computing is probably the most important thing going on. If you just look at the cloud as better, faster, and cheaper, you miss the point, which is about doing new things that you couldn’t have done before. Think of computer-based evolution of structures—matching video game technology with building construction with evolutionary algorithms. This isn’t like CAD, which was a lever for humans. We are now using the computer to do things that a human can’t do.”

Here’s an Autodesk video introducing 123D Catch.

Wade Roush is a contributing editor at Xconomy.
