When Tel Aviv, Israel-based PrimeSense came out with its first depth-sensitive, near-infrared camera-on-a-chip in 2010, nobody could have predicted how many uses hardware makers would dream up for the technology within a few short years.
The first and most famous was Microsoft’s Kinect sensor, which lets gamers move their bodies to interact with video games. But now PrimeSense chips are also starting to turn up in robots, PC peripherals, and interactive displays.
And here are two more applications that might surprise you: vacation rental marketing and interior design.
Those are two of the use cases being targeted by Matterport, a Mountain View, CA-based startup that uses PrimeSense chips in a new 3D camera being prepped for release this summer. Place the motorized, tripod-mounted Matterport camera in the center of a room and it spins in place, capturing depth and image data that can be uploaded to Matterport’s cloud servers and assembled into a high-resolution 3D model of the space.
Viewers can then move through the model as if it were a scene in a video game. In fact, the viewing software runs on the same Unity engine used to power hundreds of popular video games.
In 3D demos that you can check out at Matterport’s website, the company shows how it has used its prototype cameras to capture the interiors of a modern condo, a wood-paneled library, a pre-school classroom, a construction site, a vacation cabin, and a C-130 cargo plane. Unlike models produced using professional 3D laser-scanning systems such as Trimble RealWorks, Matterport’s can be created in just minutes, using tablet-based controls that anyone can figure out.
The commercial possibilities are obvious: a real-estate company could use a Matterport model to show off a home or a vacation spot; interior designers could use a Matterport system to capture an empty space in its “blank canvas” state and then show how it would look filled with furniture and art. A 3D services firm called Planet 9 Studios, which serves architectural and construction firms, has already used Matterport to document sections of the Stanford University Medical Center to help with renovations. Matterport’s cameras could have plenty of other creative applications, too—imagine being able to create settings for new video games or virtual worlds without having to hire computer-graphics experts.
PrimeSense isn’t the only maker of 3D sensing chips, and Matterport isn’t the only company working on cheaper, faster 3D scanning technology for interior spaces. But co-founder and CEO Matt Bell hopes his crew of 16 has enough of a head start over its competitors that Matterport will be the brand of choice by the time 3D imaging becomes a standard tool in areas like real estate, home improvement, and insurance.
Already, Bell notes, PrimeSense is working on a sensing device called Capri that’s small enough to embed in a smartphone. “Once it shows up in a tablet or a phone, people will be able to make these models with the hardware they have in their pocket all the time,” Bell says. “At that point, the possibilities for this just really explode.”
So while Matterport is working hard to get the newest version of its own camera out the door—so that early adopters have a chance to start making their own Matterport models—the company’s vision isn’t necessarily to make and sell its own cameras forever. The real plan is to become the default app and cloud service for generating 3D models of interiors. And to help with that, the company just collected $5.6 million in Series A funding from Felicis Ventures, Lux Capital, Red Swan Ventures, Greylock Partners, Qualcomm Ventures, and Rothenberg Ventures. (That comes on top of a $1.6 million seed round from Felicis last year.)
Bell’s interest in using visual applications of computing to transform old industries goes back to his days at Stanford, where he graduated with a degree in computer science in 2001. “Computer vision at that time was not very mature,” Bell says. “I remember thinking, ‘The building blocks for this just aren’t there yet.’” So he worked on machine-learning algorithms for a short time at Google—this was back when the search company employed only 200 people—then left to start his own company.
Called Reactrix, the company built software for interactive advertising for public spaces. If you’ve ever been to a mall like the Metreon in San Francisco and played with the video projections on the floor that let you kick a virtual ball or make ripples in a virtual koi pond, it was Reactrix’s software under the hood.
The hard part about these installations wasn’t generating the images, so much as sensing how people were interacting with them. “That taught me how to get computer vision into the real world,” Bell says. “We were basically teaching people how to use these gestural interfaces in 10 seconds, in a public setting, with other people watching. That was a very interesting accomplishment, and we could see the age of computers being able to pervade the physical world was coming.”
After seeing an early demo of PrimeSense’s 3D infrared camera in 2007, “I realized that there were going to be a number of industries founded on this new sensor technology,” Bell says. Together with his friend David Gausebeck—a former technical architect at PayPal, where he’s credited with being the first person to use a CAPTCHA to combat fraud in a commercial application—Bell set out to see if there might be a way to stitch together the 3D snapshots that PrimeSense’s camera could capture.
“We felt like there was going to be a whole new industry around 3D capture, and we wanted to be the first to provide a tool that was accessible to everyone,” Bell says. “But it turned out that stitching together Kinect images was way harder than we thought.”
Think of it like a giant 3D jigsaw puzzle, with thousands of pieces. The algorithm needed to work quickly, and it had to work as well on the data from a crowded engine room inside a ship as it did in an empty apartment. After about six months of hard work, Bell and Gausebeck had the barest demo, using a hand-held scanner. “But we realized we were on to something,” Bell says.
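The puzzle Bell describes is what computer-vision researchers call rigid registration: finding the rotation and translation that best snap one 3D snapshot onto its neighbor. Matterport hasn’t published its algorithm, but the textbook starting point is the iterative closest point (ICP) method. Here is a minimal, deliberately naive numpy sketch of that idea—not Matterport’s actual pipeline:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given known point correspondences (the Kabsch/SVD method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iters=20):
    """Naive ICP: pair each source point with its nearest neighbor in
    dst, solve for the best rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force O(n*m) nearest-neighbor search -- fine for a sketch
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Real systems replace the brute-force matching with spatial data structures and add outlier rejection—exactly the kind of engineering that makes a crowded engine room as tractable as an empty apartment.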
With little more to show for their work than a goofy-looking Kinect-on-a-stick device, Bell and Gausebeck managed to recruit a third co-founder, Mike Beebe, an expert in 3D modeling and medical-device design. And then they got accepted to Y Combinator, the Mountain View, CA-based venture incubator.
That’s where they got a crash course in what lean-startup black belts call product-market fit. “We realized that it was very important to get this in front of customers and figure out what their needs were, and rapidly iterate,” Bell says. This meant abandoning the handheld scanner and building a first-generation tripod-mounted camera—something that made it easy for a non-expert to scan a room at the push of a button.
In 2012 the startup distributed a handful of these units to beta testers, and by March 2013 they’d used them to scan more than 1,000 locations. (Y Combinator’s old office at 320 Pioneer Way was the first to get the treatment.) Now the company is working on a second-generation camera that’s smaller and more durable and captures images at even higher resolution. It’s got three PrimeSense sensors inside, with overlapping fields of view.
A full Matterport scan consists of hundreds of snapshots, and each snapshot consists of a “point cloud” with more than 300,000 points. To create a final 3D model, then, hundreds of millions of points must be joined into polygons. That’s a job that would choke a typical laptop or desktop computer, so the data from a Matterport scan must be uploaded to the company’s cloud servers, which do the hard math and send back a model with a polygon count low enough for a computer or mobile device to handle. (Other 3D modeling tools such as Autodesk’s 123D Catch use the same approach.)
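One standard way such pipelines shrink hundreds of millions of points before meshing is voxel-grid downsampling: snap the cloud to a coarse 3D grid and keep one averaged point per occupied cell. Matterport hasn’t disclosed its server-side math, so treat this numpy sketch purely as an illustration of the general technique:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse a dense (n, 3) point cloud onto a voxel grid of the
    given cell size, keeping one averaged point per occupied voxel."""
    # integer grid coordinates identify each point's voxel
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points that share a voxel, then average each group
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inverse, points)   # accumulate points per voxel
    return sums / counts[:, None]
```

Run over a 300,000-point snapshot with a few-centimeter voxel, a routine like this cuts the data by an order of magnitude or more before the heavier polygon-fitting step.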
Matterport’s models can be exported to 3D editing tools such as Maya, AutoCAD, Revit, or SolidWorks. Or they can simply be explored in a Web browser with the Unity viewer—for some potential users, it might be enough just to be able to check out a 3D space online.
“In real estate there’s a use case where you can’t visit the site prior to paying them a lot of money, and that’s vacation rentals,” Bell says. That’s why the company sent one of its beta units to Lake Tahoe, where it was used to scan a rustic cabin.
Police investigators have told Matterport they’d love to use its devices to freeze complex crime scenes in time. Then there’s home sales and home improvement. “Buying, selling, renting, remodeling, furniture, insurance—all of those decisions are very large and would be aided substantially by having a 3D model of the space,” Bell says. “We can help people make better decisions.”
In most of these scenarios, Matterport’s actual customers would be the real estate agents, landlords, interior designers, or forensic scientists buying its cameras (which will probably be priced similarly to professional digital SLR cameras, i.e., around $3,000) and subscribing to its cloud rendering service. But the user base could grow enormously if PrimeSense’s Capri sensor and similar chips make their way into the next generation of mobile gadgets.
“Suppose you’re going shopping for furniture,” Bell says. “Before you go, you take a quick scan of your living room, and you have that in your pocket. And then you go to the furniture store, scan that stuff, and drag-and-drop it into your living room model. Or say you want to share what your dorm room looks like with your parents—you can quickly make a model, send it off to them, and they could walk around it virtually. The same way people relentlessly share images today, they will be able to share the spaces they visit and inhabit in a couple of years.”
To get to that future, Matterport first has to finish its second-generation camera. Bell says he expects to have it in the hands of early adopters starting in July, with broader availability expected in the fall. As a buyer of PrimeSense chips, the startup probably won’t rival Microsoft anytime soon—but then again, Matterport isn’t just playing around.
Wade Roush is a contributing editor at Xconomy.