At CHI Meeting, Microsoft Turns Computing Interfaces on Their Head, and Side, and Back

(Page 2 of 2)

use laptop-sized or book-sized devices for most of our information-gathering, reading, note-taking, and so forth. A group at MSR led by Ken Hinckley has been studying the new user-interface possibilities that arise when two such devices are paired.

The idea of a computer or an e-book reading device with side-by-side screens isn’t new, of course. MIT’s Vannevar Bush envisioned a dual-screen reading device as part of his Memex machine in 1945; Apple constructed a mockup two-screen computer for a 1987 demo of its Knowledge Navigator software; various now-defunct e-book startups fiddled with two-screen designs in the 1990s; and last year, the One Laptop Per Child Foundation announced that its second-generation machine would have a clamshell “handbook” design. But unless you count the Nintendo DS, no manufacturer has ever come out with a serious dual-screen device, perhaps because making full use of two screens would require rethinking so many of the user-interface conventions we’ve developed for our single-screen devices.

That rethinking is what Hinckley is doing. His prototype, called Codex, isn’t so much a potential Microsoft product as it is a test platform for the new types of computing activities that become possible when two small-to-medium-sized screens are used in conjunction. “I would suggest that the reading and writing experience together is what’s cool,” said Hinckley, who ought to know—he’s also the guy at MSR who, in 2007, came up with InkSeine (a note-taking application that’s so freaking cool it’s got me thinking about buying a tablet PC just so I can use it).

Codex consists of a pair of OQO mini-tablet PCs, each with a 3-inch-by-5-inch screen, mounted in a hinged device with built-in sensors that can detect how the hinges are oriented. The sensors are important because Hinckley’s whole concept is that a dual-screen device should be able to switch configurations on the fly depending on what “posture” it’s in. For example, there’s the “book-in-hand” posture, where the device is being held open in front of you the same way you’d read a hardcover book; the “laptop” posture where one screen is flat on a table and the other is propped up at an angle like a laptop screen; the “flat” posture where both screens are flat on the same surface, with two people using them side by side or across a table; and even the “battleship” posture (named after the Milton Bradley game) where the two screens are leaning against each other like a teepee.

The Codex’s hinges, together with the devices’ built-in accelerometers, sense the prototype’s posture, which allows the displays to adopt the proper orientation (landscape or portrait, right-side-up or upside-down) automatically. While the two OQO devices used in the Codex prototype are technically separate computers, Hinckley wrote software to synchronize them, so that objects can be shared across screens. Rather than just treating the two screens as if they’re facing pages of a conventional book, many of Hinckley’s experimental scenarios involve split-page navigation, where one screen is being used for a task such as reading and the other is being used to collect notes (he calls this the “hunter-gatherer workflow”). When used collaboratively, the screens can act as a shared, continuous whiteboard, or content can be “beamed” from one screen to the other. The screens can also be detached from the hinging mechanism—which Hinckley said his usability testers appreciated, since it let them use the device collaboratively without being in one another’s personal space all the time.
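To make the posture-sensing idea concrete, here is a minimal sketch of the kind of logic the article describes: hinge-angle and accelerometer readings classify the device into a posture, and the posture picks an orientation for each display. All function names, thresholds, and posture rules here are illustrative assumptions on my part, not Codex’s actual software.

```python
def detect_posture(hinge_angle, screen_a_flat, screen_b_flat):
    """Classify the device posture from the hinge angle (in degrees)
    and whether each screen's accelerometer reports it lying flat.
    Thresholds are invented for illustration."""
    if screen_a_flat and screen_b_flat:
        return "flat"            # both screens resting on the table
    if hinge_angle > 270:
        return "battleship"      # screens leaning against each other, teepee-style
    if screen_a_flat or screen_b_flat:
        return "laptop"          # one screen flat, the other propped up
    return "book-in-hand"        # held open like a hardcover book


def orientation_for(posture):
    """Pick a display orientation for each of the two screens
    given the detected posture."""
    if posture == "flat":
        return ("landscape", "landscape")
    if posture == "battleship":
        # each screen faces a different user, so one is flipped
        return ("portrait", "portrait-flipped")
    return ("portrait", "portrait")
```

In a real device this classification would run continuously off the sensor stream, so the displays reorient the moment the user folds the hinge into a new posture.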

Overall, Hinckley’s work on dual-screen computing, like Baudisch’s studies of back-of-device interfaces, is in the early stages—he says the Codex prototype isn’t even robust enough to allow extensive user studies. But he says he has learned enough to be able to assert that “dual-screen devices have a well-motivated role to play in the ecosystem” of computing.

The technology isn’t quite there to put dual-screen devices into production. Indeed, the second-generation OLPC device, while sexy, has all the signs of being vaporware. But Microsoft and other companies have poured too much money into tablet- and pen-based computing to let the technology’s development stop now. As Hinckley put it to me after his talk, “This is eventually going to happen. If Microsoft doesn’t do it, somebody else will. So it’s really important to understand what the issues are.”


Wade Roush is the producer and host of the podcast Soonish and a contributing editor at Xconomy. Follow @soonishpodcast


  • Joe Torre (Amiga)

    I see white (pink) fingers in the demo – will tan, brown, and black fingers be available? – just asking –
    Romulan green? Tattoos?

  • Very bad typo!!

    “The Haso Plattner Institute” is wrong, it is called “the Hasso Plattner Institute”

  • Robert Mehlschau

    This is MY IDEA, the pseudotransparency …


    I sent it to Apple, in their suggestion box, the idea EXACTLY as MS is showing it here.

    I sent the idea to Apple before the iPhone came out saying that this was THE solution to many problems that would occur with touchscreen.

    The least they could do is give credit to the inventor !!!

  • sure

    SURE! What everyone wants in a device is to see their fingers underneath it. how stupid.