At CHI Meeting, Microsoft Turns Computing Interfaces on Their Head, and Side, and Back

4/10/09

I spent a couple of days this week at CHI, the big annual meeting of the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (ACM SIGCHI). It was the first time since 1994 that the conference—the main international gathering for scholars and practitioners in user interface design—had come to Boston. But it wasn’t the first time that a single company, namely Microsoft, had dominated the proceedings. In keeping with other recent CHI meetings, authors from Microsoft Research supplied nearly one out of every eight papers presented at the conference, and researchers Ken Hinckley and Meredith Ringel Morris from MSR’s Adaptive Systems and Interaction Group in Redmond were co-chairs of the technical program.

I’m a sucker for this stuff, so I thought almost all of Microsoft’s 25 CHI papers were interesting. But two of the talks in particular, presented back-to-back on the closing day of the conference, contained enticing new ideas about how we might use computing devices in the future. One of them was Hinckley’s own paper on Codex, a prototype dual-screen computer system. The other was a paper by Patrick Baudisch on “back-of-device” interfaces, an intriguing alternative to today’s touch-screen-based devices.

Baudisch, a German native, is a former Xerox PARC researcher who joined Microsoft in 2002 and recently accepted a joint position at the Hasso Plattner Institute at the University of Potsdam in Germany. One of the questions he’s been studying over the past few years is whether it’s feasible to move the main touch interface for small mobile devices (think phones, mini-tablet computers, iPods, Zunes, and the like) from the front—where your fingers occlude your view of the screen—to the back.

After all, the smaller devices get, the less screen real estate they’ll offer, and the larger the fraction of the screen that’s covered up by your finger when you try to manipulate it. “The scientific term for this is the fat finger problem,” Baudisch deadpanned during his talk.

If the touch-sensitive surface on a mobile device were on the back instead, gestures like pointing, tapping, and selecting wouldn’t get in the way of the screen. At least, that’s the idea. But that creates a new challenge—seeing where your finger is going. So Baudisch’s team has been experimenting with a variety of approaches, including using transparent screens (which, unfortunately, don’t leave room for the electronic guts of most devices) and attaching a boom with a camera to a device’s backside (which is predictably clunky).

[Image: Microsoft nanoTouch prototype]
Baudisch’s newest prototype, and the one he described yesterday, is called nanoTouch. It’s a squarish little gadget resembling an iPod nano, with a 2.4-inch screen that dominates the front and a capacitive trackpad, similar to the touchpad on a laptop computer, attached to the back.

The nanoTouch is designed to be held by the edges in one hand while you operate the trackpad with the index finger of your other hand. The cleverest touch, so to speak, is that the device uses “pseudotransparency” to provide visual feedback—basically, the “cursor” is a life-size picture of a finger that tracks with the position of your actual finger, as if you were looking through the device with X-ray glasses.
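To make that mapping concrete, here’s a minimal sketch in Python (my own illustration, not code from the nanoTouch project) of how a touch on the rear pad might be translated into front-screen coordinates. The key assumption is that the pad reports normalized positions in its own frame, so the horizontal axis has to be mirrored to line up with what you see from the front; the pixel dimensions are hypothetical too.

# Minimal sketch, not nanoTouch's actual code: mapping a rear-pad contact
# to front-screen coordinates. Assumes the pad reports normalized (0..1)
# positions in its own frame, so the x-axis must be mirrored for the front view.
SCREEN_W, SCREEN_H = 320, 240  # hypothetical pixel dimensions for the 2.4-inch screen

def pad_to_screen(pad_x: float, pad_y: float) -> tuple[int, int]:
    """Map a normalized rear-pad touch to front-screen pixel coordinates."""
    x = round((1.0 - pad_x) * SCREEN_W)  # mirror left/right for the front view
    y = round(pad_y * SCREEN_H)          # the vertical axis carries over directly
    return x, y

# A touch near the pad's upper-left shows up near the screen's upper-right,
# which is where the finger actually sits behind the device.
print(pad_to_screen(0.1, 0.1))  # (288, 24)

The pseudotransparent finger is then just an image drawn at that mapped position, scaled to life size.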

It’s a nifty effect that neatly captures the concept of back-of-device interaction; the tip of the simulated finger even turns white when you press harder against the pad on the back, as if the blood were rushing away from that spot. Baudisch’s nanoTouch demo provoked a little flurry of publicity back in December, with coverage by Engadget and New Scientist, among others (I’ve embedded a nanoTouch video from New Scientist below). But as Baudisch explained yesterday, the finger is just for show—it’s there to quickly train the user on what’s happening. “You never see the finger in an application,” he said. “For any real application, we reduce the touch to a single point—which is how we get the finger out of the equation and enable high precision.”
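Baudisch didn’t spell out how the contact area gets collapsed to one point, but a common approach, and the assumption in this little sketch, is to take the signal-weighted centroid of the readings under the finger:

# Hedged sketch: collapsing a capacitive contact "blob" to a single point.
# Weighting by signal strength is an assumption, not nanoTouch's documented method.
def contact_centroid(samples):
    """samples: iterable of (x, y, strength) readings from the pad's sensor grid."""
    total = sum(s for _, _, s in samples)
    if total == 0:
        return None  # no contact
    cx = sum(x * s for x, _, s in samples) / total
    cy = sum(y * s for _, y, s in samples) / total
    return cx, cy

print(contact_centroid([(10, 10, 0.2), (11, 10, 0.5), (11, 11, 0.3)]))  # roughly (10.8, 10.3)

However it’s computed, the result is a single coordinate that can be mapped to the screen with far more precision than the full width of a fingertip.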

By masking the screen of the nanoTouch prototype and leaving less and less of the trackpad active, Baudisch’s group has been studying just how tiny manufacturers might be able to make future devices without sacrificing usability. They’ve found that as long as a target (meaning, say, an onscreen button) is more than about 3 millimeters across, it’s possible to accurately manipulate a device with a screen measuring as little as 8 millimeters diagonally—less than the size of the fingernail on your pinky.

Baudisch suggests that such devices might be made into pendants, wristbands, or belt buckles—all of which would surely be more fashionable than wearing your smartphone on a geeky belt holster. “Back-of-device interaction is the key to making extremely small pointing devices,” Baudisch concluded.

But while certain types of devices such as music players may keep shrinking (as this classic Saturday Night Live skit about the “iPod Invisa” predicted), we’ll probably still want to use laptop-sized or book-sized devices for most of our information-gathering, reading, note-taking, and so forth. A group at MSR led by Ken Hinckley has been studying the new user-interface possibilities that arise when two such devices are paired.

The idea of a computer or an e-book reading device with side-by-side screens isn’t new, of course. MIT’s Vannevar Bush envisioned a dual-screen reading device as part of his Memex machine in 1945; Apple constructed a mockup two-screen computer for a 1987 demo of its Knowledge Navigator software; various now-defunct e-book startups fiddled with two-screen designs in the 1990s; and last year, the One Laptop Per Child Foundation announced that its second-generation machine would have a clamshell “handbook” design. But unless you count the Nintendo DS, no manufacturer has ever come out with a serious dual-screen device, perhaps because making full use of two screens would require rethinking so many of the user-interface conventions we’ve developed for our single-screen devices.

That rethinking is what Hinckley is doing. His prototype, called Codex, isn’t so much a potential Microsoft product as it is a test platform for the new types of computing activities that become possible when two small-to-medium-sized screens are used in conjunction. “I would suggest that the reading and writing experience together is what’s cool,” said Hinckley, who ought to know—he’s also the guy at MSR who, in 2007, came up with InkSeine (a note-taking application that’s so freaking cool it’s got me thinking about buying a tablet PC just so I can use it).

[Image: Codex prototype]
Codex consists of a pair of OQO mini-tablet PCs, each with a 3-inch-by-5-inch screen, mounted in a hinged device with built-in sensors that can detect how the hinges are oriented. The sensors are important because Hinckley’s whole concept is that a dual-screen device should be able to switch configurations on the fly depending on what “posture” it’s in. For example, there’s the “book-in-hand” posture, where the device is being held open in front of you the same way you’d read a hardcover book; the “laptop” posture where one screen is flat on a table and the other is propped up at an angle like a laptop screen; the “flat” posture where both screens are flat on the same surface, with two people using them side by side or across a table; and even the “battleship” posture (named after the Milton Bradley game) where the two screens are leaning against each other like a teepee.
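Hinckley describes the postures in terms of what the sensors report; the thresholds and decision logic below are my own guesses at what such a classifier might look like, not code from the Codex prototype. The “is this screen lying flat” flags are the sort of thing a per-screen accelerometer could supply.

# Hedged sketch of posture detection from a hinge-angle sensor plus a
# per-screen "lying flat" flag. Threshold values are illustrative guesses.
def classify_posture(hinge_deg: float, screen_a_flat: bool, screen_b_flat: bool) -> str:
    """hinge_deg: opening angle between the screens (0 = closed, 180 = opened flat)."""
    if hinge_deg < 10:
        return "closed"
    if hinge_deg > 170 and screen_a_flat and screen_b_flat:
        return "flat"            # both screens face-up on a table
    if screen_a_flat != screen_b_flat:
        return "laptop"          # one screen flat, the other propped up
    if hinge_deg > 180:
        return "battleship"      # folded past flat, screens leaning like a tent
    return "book-in-hand"        # held open in front of the reader

print(classify_posture(120, False, False))  # book-in-hand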

[Image: Codex prototype, split navigation example]
The Codex’s hinges, together with the devices’ built-in accelerometers, sense the prototype’s posture, which allows the displays to adopt the proper orientation (landscape or portrait, right-side-up or upside-down) automatically. While the two OQO devices used in the Codex prototype are technically separate computers, Hinckley wrote software to synchronize them, so that objects can be shared across screens. Rather than just treating the two screens as if they’re facing pages of a conventional book, many of Hinckley’s experimental scenarios involve split-page navigation, where one screen is being used for a task such as reading and the other is being used to collect notes (he calls this the “hunter-gatherer workflow”). When used collaboratively, the screens can act as a shared, continuous whiteboard, or content can be “beamed” from one screen to the other. The screens can also be detached from the hinging mechanism—which Hinckley said his usability testers appreciated, since it let them use the device collaboratively without being in one another’s personal space all the time.
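Hinckley didn’t go into the plumbing of that synchronization, but the basic idea of beaming an object between two networked tablets can be sketched with a simple length-prefixed message over a socket. The JSON-over-TCP scheme and the address below are assumptions for illustration, not a description of the Codex software.

# Hedged sketch of "beaming" a shared object between two networked screens.
# The protocol and peer address are illustrative assumptions, not Codex's design.
import json
import socket

PEER = ("192.168.0.2", 9000)  # hypothetical address of the other screen's computer

def beam(obj: dict) -> None:
    """Serialize a shared object (say, an ink note) and push it to the peer screen."""
    payload = json.dumps(obj).encode("utf-8")
    with socket.create_connection(PEER) as conn:
        conn.sendall(len(payload).to_bytes(4, "big"))  # length prefix
        conn.sendall(payload)

def receive(listener: socket.socket) -> dict:
    """Accept one beamed object from the paired screen."""
    conn, _ = listener.accept()
    with conn:
        size = int.from_bytes(conn.recv(4), "big")
        data = b""
        while len(data) < size:
            data += conn.recv(size - len(data))
    return json.loads(data.decode("utf-8"))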

Overall, Hinckley’s work on dual-screen computing, like Baudisch’s studies of back-of-device interfaces, is in the early stages—he says the Codex prototype isn’t even robust enough to allow extensive user studies. But he says he has learned enough to be able to assert that “dual-screen devices have a well-motivated role to play in the ecosystem” of computing.

The technology isn’t quite there to put dual-screen devices into production. Indeed, the second-generation OLPC device, while sexy, has all the signs of being vaporware. But Microsoft and other companies have poured too much money into tablet- and pen-based computing to let the technology’s development stop now. As Hinckley put it to me after his talk, “This is eventually going to happen. If Microsoft doesn’t do it, somebody else will. So it’s really important to understand what the issues are.”

Wade Roush is a contributing editor at Xconomy.
