Five Views of the Microsoft Research Silicon Valley TechFair

4/28/14

Microsoft, with a new CEO and a new corporate structure, is pushing hard to burnish its image as a source of innovation in business and consumer computing. Satya Nadella, Steve Ballmer’s replacement (and only the third CEO in the company’s 39-year history), continues to talk about Microsoft’s future as both a “mobile first” and a “cloud first” company, which means building on the now-completed Nokia acquisition and moving farther beyond its traditional emphasis on desktop operating systems and productivity software.

Meanwhile, Peter Lee, who took over last fall as managing director of Microsoft Research, says Microsoft is now more dependent than ever on new ideas from his 1,100-employee R&D operation. He says MSR must deliver both mission-focused innovations that address immediate business needs, and more disruptive or blue-sky innovations that could redefine Microsoft’s product lines years or decades down the road. Tellingly, Lee also thinks MSR researchers should be less secretive about what they’re working on—or at least, that’s what he told Xconomy CEO and editor-in-chief Bob Buderi in an interview last year.

In that spirit, the company invited a few journalists to MSR’s Silicon Valley TechFair at its Mountain View, CA, campus a couple of weeks ago. I was there to check out the 18 projects on offer, which ranged from new types of 3D displays to improved mobile interfaces and data visualization methods.

True to Lee’s four-quadrant map for representing computer research at Microsoft, some of the technologies had obvious and immediate uses in products like Office or Windows Phone 8.1 (which came out this month), while others didn’t seem to relate to Microsoft’s existing business areas in any discernible way. Which is fine, in Lee’s world, since theoretical or curiosity-driven projects can turn out to have foundational implications.

I was able to spend time investigating five of the projects in particular. Here are a few details, in order of most mission-focused to least (in my eyes, anyway).

1. Texting Without Lifting a Finger

Years ago, Seattle startup Swype pioneered the concept of a virtual QWERTY keyboard for touchscreen phones that would let you spell words by sliding your finger from key to key, rather than tapping; its software recognized the resulting squiggles. Nuance bought Swype in 2011 and continues to offer the Swype app for Android phones, but the idea has also been percolating elsewhere in the mobile world. At MSR, a team led by Jason Abacible, Tim Paek, Bongshin Lee, and Asela Gunawardana used machine learning to develop an extremely error-tolerant shape writing system called Word Flow.

The researchers say it’s all about letting users trace out words using muscle memory, and then applying machine learning to disambiguate the resulting letter groupings. That means the system works even if the user is blindfolded or looking away from the screen. I wasn’t blindfolded when I tried Word Flow (which was included as a feature of Windows Phone 8.1), but I was able to write text messages like “The quick brown fox jumped over the lazy dog’s back” with nearly 100 percent accuracy on my first try. In January, a Seattle high school student using Word Flow on a Windows Phone achieved a Guinness World Record for the fastest text message on a touchscreen phone.
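
Microsoft hasn’t published the details of Word Flow’s models here, but the basic shape-writing idea the researchers describe can be sketched in a few lines: map each sampled touch point to its nearest key, collapse repeats, and score dictionary words against the resulting letter sequence. The key layout, tiny dictionary, and similarity scoring below are simplified stand-ins for illustration, not the production system.

```python
# Illustrative sketch only -- not Microsoft's Word Flow algorithm, which uses
# trained machine-learning models. It shows the basic shape-writing pipeline:
# map a finger trace to nearest keys, then pick the dictionary word whose
# letter sequence best explains that trace.
from difflib import SequenceMatcher

# Hypothetical key centers for a few QWERTY keys, in arbitrary screen units.
KEY_CENTERS = {
    'q': (0, 0), 'w': (1, 0), 'e': (2, 0), 'r': (3, 0), 't': (4, 0),
    'a': (0.3, 1), 's': (1.3, 1), 'd': (2.3, 1),
    'x': (1.6, 2), 'c': (2.6, 2),
}

DICTIONARY = ["was", "saw", "west", "wet", "create"]

def nearest_key(point):
    """Return the key whose center is closest to a sampled trace point."""
    x, y = point
    return min(KEY_CENTERS, key=lambda k: (KEY_CENTERS[k][0] - x) ** 2 +
                                          (KEY_CENTERS[k][1] - y) ** 2)

def trace_to_letters(trace):
    """Convert a swipe trace into a de-duplicated sequence of nearest keys."""
    letters = []
    for p in trace:
        k = nearest_key(p)
        if not letters or letters[-1] != k:
            letters.append(k)
    return "".join(letters)

def best_word(trace):
    """Score dictionary words by similarity to the traced key sequence."""
    keys = trace_to_letters(trace)
    return max(DICTIONARY, key=lambda w: SequenceMatcher(None, keys, w).ratio())

# A sloppy trace that starts near 'w', wanders past 'e' and 's', ends near 't':
print(best_word([(1.1, 0.2), (2.0, 0.5), (1.5, 0.9), (3.2, 0.1), (4.0, 0.3)]))
# -> "west", even though the raw trace also brushed other keys
```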

2. Making Cortana Smarter

Halo players know Cortana as the holographic AI character who supplies tactical tips to Master Chief. That made her (it?) a good model for the virtual personal assistant baked into Windows Phone 8.1; in this form, Cortana is a Siri-style, Bing-powered interface that lets owners query their phones about things like weather and news reports, flight and traffic information, and reminders and appointments. Microsoft engineers have strived to make Cortana a little more personalized than Siri or Google Now; for example, the system learns over time who’s in the user’s “inner circle” of friends and should therefore be allowed to interrupt designated “quiet hours” with incoming calls or texts.

But what Cortana knows about the outside world is largely limited to the domains where Bing’s search index is strong. If you ask a question like “Are there any cheap Japanese restaurants open right now within five miles?” it can assemble a pretty accurate list, just based on Web data such as Yelp restaurant listings. If you ask what constellations will be visible from Tasmania tonight, it might have a harder time.

Cortana is a virtual personal assistant service included in Windows Phone 8.1.

At the TechFair, I talked about that problem with Larry Heck, a distinguished engineer at MSR and one of the contributors to the Cortana product. He says the number-one job for the Cortana team right now is to gather user data. The more people who use Windows Phone 8.1, the more clearly developers will see which tasks and subjects Cortana users are asking about, and which answers they’re clicking on; this clickstream feedback will help Microsoft tune future responses and prioritize which new domains to bring into Cortana’s knowledge base.
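
Heck didn’t describe the pipeline itself, but the kind of aggregation he’s talking about is easy to sketch. The log format, domain labels, and scoring below are hypothetical; the point is simply ranking domains by how often users ask about them versus how often the returned answer actually gets clicked.

```python
# A minimal sketch of clickstream aggregation -- hypothetical log format and
# field names, not Cortana's actual telemetry pipeline.
from collections import Counter

# Each entry: (query domain inferred by the assistant, whether the user
# clicked the answer that came back).
query_log = [
    ("weather", True), ("restaurants", True), ("restaurants", False),
    ("flights", True), ("astronomy", False), ("astronomy", False),
    ("restaurants", True), ("weather", True),
]

asked = Counter(domain for domain, _ in query_log)
clicked = Counter(domain for domain, ok in query_log if ok)

# Domains users ask about often but rarely get a useful answer for are
# candidates to bring into the knowledge base next.
for domain in asked:
    success = clicked[domain] / asked[domain]
    print(f"{domain:12s} asked={asked[domain]}  answer clicked={success:.0%}")
```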

3. Manipulating Data in 3-D

Last year I wrote about an MSR project called GeoFlow, designed to help Microsoft Excel users generate and explore interactive maps showing complex geospatial and temporal data. The technology was eventually incorporated into Office 365 under the name Power Map, and it’s all based on ideas about information display and storytelling that principal researcher Curtis Wong first developed for WorldWide Telescope, Microsoft’s virtual planetarium program.

Holograph is a software platform that lets users of large, immersive touchscreens interact with data in 3-D.

Power Map is great for visualizing data on a flat, 2-D display, but such data would often be even easier to explore in 3-D. At the TechFair, Wong and his colleague Dave Brown, a senior research development engineer at MSR, were showing off a graphics platform code-named Holograph that makes special use of Microsoft’s giant Perceptive Pixel or “PPI” displays, touchscreen devices that come in 55-inch and 82-inch sizes. Brown developed virtual dials and buttons that let PPI users control where 3-D datasets or objects will be rendered in relation to the plane of the screen itself: above it, below it, or resting right “on” the screen.

If an object is “below” the screen, you get a sense that the screen is a window onto the data—think of peering down at Earth from a porthole on the International Space Station. If it’s “above,” it might lend itself to 3-D exploration or manipulation by engineers—think Tony Stark’s lab in the Iron Man movies. By donning red-cyan anaglyph glasses, Holograph users can even see data in true 3-D. The whole idea, Brown says, is to help users glean insights from data that might be harder to uncover if the data were presented in flatter, 2-D form.
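
Brown didn’t walk through the rendering math, but the stereo trick behind putting data “above” or “below” the glass is straightforward to sketch: each eye’s view of a point is projected onto the screen plane separately, and the horizontal offset between the two projections is what the brain reads as depth. The viewer geometry below (eye separation, viewing distance) is an assumption for illustration, not Microsoft’s code.

```python
# A minimal sketch of the stereo geometry behind rendering a point in front of
# or behind a flat display, as in Holograph's anaglyph mode. Simplified setup:
# viewer centered in front of the screen, eyes on a horizontal line.

def screen_projection(point, eye_x, viewer_distance):
    """Project a 3-D point onto the screen plane (z = 0) as seen from one eye.

    point: (x, y, z) where z > 0 floats in front of the screen ("above" it)
           and z < 0 sinks behind it ("below" the glass).
    eye_x: horizontal position of the eye, e.g. -3.2 cm and +3.2 cm.
    viewer_distance: distance from the eyes to the screen plane.
    """
    px, py, pz = point
    t = viewer_distance / (viewer_distance - pz)  # where the eye ray hits z = 0
    return (eye_x + t * (px - eye_x), t * py)

EYE_SEPARATION = 6.4      # centimeters, a typical interpupillary distance
VIEWER_DISTANCE = 60.0    # centimeters from the panel (assumed)

for label, z in [("on the screen", 0.0), ("above it", +10.0), ("below it", -10.0)]:
    left = screen_projection((0.0, 0.0, z), -EYE_SEPARATION / 2, VIEWER_DISTANCE)
    right = screen_projection((0.0, 0.0, z), +EYE_SEPARATION / 2, VIEWER_DISTANCE)
    # The left-eye image is drawn in red and the right-eye image in cyan;
    # red-cyan glasses then route each image to the matching eye.
    print(f"{label:14s} left-eye x={left[0]:+.2f} cm  right-eye x={right[0]:+.2f} cm")
```

When the two projections cross (the left-eye image lands to the right of the right-eye image), the point appears to float in front of the screen; when they don’t, it appears to sit behind it.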

4. Making Every Surface Into a Screen

Microsoft senior researcher Eyal Ofek showed me a prototype called SurroundWeb that, in a way, turns the Holograph idea inside out: it offers a smarter way to project 2-D data onto every available surface in a physical 3-D space.

It’s already possible, using commercial technology such as Kinect sensors, to scan a room and figure out which surfaces are available for display (say, an empty coffee table or part of a wall). SurroundWeb is a “3D Browser” protocol that can grab multiple images or videos from a Web page and separate them for rendering in up to 25 separate locations around a room.

SurroundWeb looks for displays and “projectable surfaces” in a room and chooses the best place to show Web-based data.

Ofek’s team thinks “immersive room” technology might someday be useful inside homes: if someone is cooking at home and a Web-based sensor program detects that water is boiling on the stove (to use an example from the group’s research paper), a warning might be projected on the cabinet near her head.

The key to delivering these experiences via the Web, Ofek said, is to write code that can use scanner data to construct a “skeleton” model of the room, which is then used to decide where to project content. But all of this has to happen within strict privacy controls: nobody really wants scans of themselves or their homes being uploaded to a public Web server. Ofek calls this the “least privilege” approach: gathering just enough data about a space to render content flexibly, while revealing as little as possible about what’s actually there.
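
As a rough sketch of what that least-privilege constraint might look like in practice: the renderer only ever sees a skeleton of projectable rectangles with sizes and distances, never the scan itself, and content is assigned to whichever surface best fits it. The data structures and placement rule below are illustrative assumptions, not the SurroundWeb protocol.

```python
# A minimal sketch of the "least privilege" idea: a room scan reduced to a
# skeleton of projectable surfaces (position and size only, no imagery), and a
# placement rule that picks where a piece of Web content should be rendered.
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    width_cm: float
    height_cm: float
    distance_to_user_cm: float   # all the renderer is told about the room

@dataclass
class Content:
    label: str
    min_width_cm: float
    min_height_cm: float

# Hypothetical skeleton produced from the scan of a kitchen.
skeleton = [
    Surface("wall above stove", 80, 60, 120),
    Surface("cabinet door", 40, 50, 90),
    Surface("coffee table", 60, 40, 300),
]

def place(content, surfaces):
    """Pick the closest surface that is large enough for the content."""
    candidates = [s for s in surfaces
                  if s.width_cm >= content.min_width_cm
                  and s.height_cm >= content.min_height_cm]
    return min(candidates, key=lambda s: s.distance_to_user_cm, default=None)

warning = Content("boiling-water alert", 30, 20)
target = place(warning, skeleton)
print(f"Project '{warning.label}' onto: {target.name}")  # -> cabinet door
```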

5. Using Big Data to Gauge Urban Air Quality

You can’t blame researchers at Microsoft’s Beijing facility for being concerned about air quality: the Chinese capital’s smog problem is so bad at times that it impedes photosynthesis in the city’s greenhouses, prompting comparisons to “nuclear winter.”

Air pollution data from a network of monitoring stations around Beijing.

There are only a few dozen air-quality monitoring stations around Beijing, a huge city of 21 million people. But it’s possible to infer air quality levels almost anywhere in the city using neural-network techniques, according to Eric Chang, senior director of technology strategy and communications for MSR Asia. Chang told me his team’s software uses data from existing monitoring stations as well as real-time meteorological data, traffic data, highway maps, and other inputs to estimate and predict pollution levels at any given spot around the city, helping people figure out whether to go outside and when it’s safe to exercise.
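
Chang’s team trains neural-network models on all of those inputs; as a much simpler stand-in that shows the same inference step, the sketch below estimates PM2.5 at an arbitrary point purely by inverse-distance weighting of nearby station readings. The coordinates and readings are made up for illustration.

```python
# A hedged, simplified stand-in for the inference Chang describes: estimate air
# quality at a point with no monitoring station from nearby station readings.
# The real MSR Asia system uses trained neural networks over traffic,
# meteorological, and map features; this baseline only interpolates readings.
import math

# (longitude, latitude, current PM2.5 reading) -- illustrative values only.
stations = [
    (116.31, 39.99, 160.0),
    (116.47, 39.95, 210.0),
    (116.36, 39.87, 95.0),
]

def estimate_pm25(lon, lat, readings, power=2.0):
    """Inverse-distance-weighted estimate of PM2.5 at (lon, lat)."""
    num = den = 0.0
    for s_lon, s_lat, value in readings:
        d = math.hypot(lon - s_lon, lat - s_lat)
        if d < 1e-9:
            return value            # we are standing on a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

print(f"Estimated PM2.5 at a point between the stations: "
      f"{estimate_pm25(116.397, 39.918, stations):.0f} µg/m³")
```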

The cloud-based models built by Chang’s team can be accessed from a Windows Phone app, and the team is also releasing the data for research purposes. For Beijing residents and people in other smog-ridden cities, carrying a smartphone could soon become as important as wearing a face mask.

Wade Roush is a contributing editor at Xconomy.
