Expect Labs Anticipates a Day when the Computer Is Always Listening

8/20/13


create and continuously update a statistical model of the user’s situation and likely interests. (In Tuttle’s words, this model is “essentially a collection of all the important concepts, entities, or words that have been said during a conversation; the relationships between them that we can draw from our knowledge graph; and numerical weights that represent our confidence in these relationships.”) Third, the platform acts on the model to proactively search for relevant material across the Web, the user’s social graph, and personal documents.
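Tuttle's description suggests a data structure along these lines: a set of concepts heard so far, links between them drawn from a knowledge graph, and numerical weights expressing confidence in those links. The sketch below is purely illustrative — the class, the update rule, and every number are assumptions, not Expect Labs' actual design.

```python
# Hypothetical sketch of the conversation model Tuttle describes: concepts,
# relationships between them, and confidence weights on those relationships.
# All names and values here are illustrative assumptions.

class ConversationModel:
    def __init__(self):
        self.concepts = set()   # entities/terms mentioned so far
        self.relations = {}     # (concept_a, concept_b) -> confidence weight

    def observe(self, concept, related_to=None, confidence=0.5):
        """Add a concept; optionally link it to an earlier one with a weight."""
        self.concepts.add(concept)
        if related_to in self.concepts and related_to != concept:
            key = tuple(sorted((concept, related_to)))
            # Reinforce links that keep co-occurring in the conversation.
            self.relations[key] = min(1.0, self.relations.get(key, 0.0) + confidence)

    def strongest_links(self, n=3):
        """Top-weighted relationships -- candidates for proactive search."""
        return sorted(self.relations.items(), key=lambda kv: -kv[1])[:n]

model = ConversationModel()
model.observe("Apple")
model.observe("Tim Cook", related_to="Apple", confidence=0.6)
model.observe("iOS 7", related_to="Apple", confidence=0.4)
print(model.strongest_links(1))  # the Apple--Tim Cook link, weight 0.6
```

In this toy version, the "proactive search" step would simply query the Web and the user's documents for the highest-weighted links.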

In MindMeld, you can get together with as many as seven friends for an audio conversation, with show-and-tell items automatically provided by the Anticipatory Computing Engine. When Tuttle demonstrated the app for me at Expect’s office in downtown San Francisco, he connected over MindMeld with a colleague, then launched into a rambling monologue about Apple, iOS 7, Tim Cook, a mobile conference he had just attended, the NBA finals, cooking, and the relative merits of lasagna, clam chowder, and strawberry shortcake. As he spoke, topic headings appeared in a scrolling timeline on the left side of the iPad screen, and links to related material such as news articles and recipes popped up on cards on the right. To show the materials to his colleague, all Tuttle had to do was drag and drop the cards to the sharing area. (For more details on the app, watch this video.)

“We are trying to extract what we believe are the high-level topics or points based on the language,” Tuttle explained afterward. “The timeline serves as a set of conversational bookmarks, or an annotated thread of what your conversation was about. It’s also a navigational aid, if there are certain things from the past conversation you want to drill down on.”

The screen-share area of the Expect Labs MindMeld app.

MindMeld—which was still in early “alpha” release with a few customers when I visited Expect Labs in June—only cares about topics identified in the last five to ten minutes. But it would be easy to make the engine go back and prepare a summary of the last hour’s conversation, Tuttle says. “The models are very tunable depending on the use case.” The startup is also experimenting with a smartphone version of MindMeld that doesn’t get in the way during a phone call, but presents the user with three salient pieces of information after a call is finished—including, for example, social-networking profiles for the other parties on the call.
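The tunable recency window Tuttle mentions can be sketched as a simple expiring queue of topic events: only topics detected in the last N seconds are "active," and summarizing the last hour is just a larger N. This is a minimal illustration under my own assumptions, not MindMeld's implementation.

```python
# A minimal sketch of a tunable topic-recency window: only topics seen within
# the last `window_seconds` feed the timeline. Illustrative only.

from collections import deque

class TopicWindow:
    def __init__(self, window_seconds=600):   # default: a ~10-minute window
        self.window = window_seconds
        self.events = deque()                 # (timestamp, topic) pairs, oldest first

    def add(self, timestamp, topic):
        self.events.append((timestamp, topic))

    def active_topics(self, now):
        """Topics mentioned within the window -- what the engine 'cares about'."""
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()             # expire stale topics
        return [topic for _, topic in self.events]

w = TopicWindow(window_seconds=600)
w.add(0, "NBA finals")
w.add(500, "clam chowder")
print(w.active_topics(now=700))  # "NBA finals" (t=0) has aged out; chowder remains
```

Swapping `window_seconds=600` for `3600` would give the hour-long summary behavior Tuttle describes, without changing anything else.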

It’s not hard to imagine how such tools might be useful in business contexts such as sales or customer support—and those are exactly the sorts of application areas Tuttle hopes third-party developers will explore once the Anticipatory Computing Engine’s APIs are released.

Tuttle says Expect Labs has 10 patents covering its continuous, context-driven search engine technology. But none of them have much to do with old-school natural language processing of the sort that Siri, and its parent, the defense-funded CALO project at SRI International, are built around.

“I think we have a philosophical difference with the direction they took,” Tuttle says. “At the CALO project they believed that solving the problem of how you understand conversation can be done by understanding the construction of the English language, the grammar and syntax, and ultimately the meaning of the words. What we have believed from the beginning, and what the industry is starting to come around to understand, is that this approach only gets you so far. You need to be able to complement natural language understanding with large-scale statistical search and information retrieval—what Google has pioneered. If you know the content of every document on the Web, or in your personal document collection, that ends up providing stronger signals than a basic understanding of English.”

As an example, Tuttle cites the sentence “Did you hear about those tornadoes in Oklahoma?” A program parsing that sentence using a traditional natural language processing engine might eventually figure out that Oklahoma is a place and that a tornado is a type of weather pattern. “But it will have no idea that this is an important question because there was massive destruction from tornadoes this month,” Tuttle says. In other words, a search engine doesn’t need a semantic understanding of the word “tornado” to be able to match it with thousands of Web articles containing the same word. “That second signal, you only get from the search side of the situation.”
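The "search side" signal Tuttle contrasts with semantic parsing can be illustrated with a toy corpus: a spike in recent documents matching "tornado" flags the term as currently important, with no model of what a tornado actually is. The corpus and scoring function below are made up for illustration.

```python
# Toy illustration of a purely statistical importance signal: the fraction of
# recent documents matching a term. No language understanding involved.
# The corpus is fabricated for the example.

recent_docs = [
    "Massive destruction from tornadoes in Oklahoma this month",
    "Oklahoma tornado relief efforts continue",
    "Tornado damage assessment underway",
    "Recipe: strawberry shortcake for summer",
]

def importance(term, docs):
    """Fraction of recent documents mentioning the term."""
    hits = sum(1 for d in docs if term.lower() in d.lower())
    return hits / len(docs)

print(importance("tornado", recent_docs))    # 0.75 -- a strong current signal
print(importance("shortcake", recent_docs))  # 0.25
```

A grammar-based parser would rank both sentences as equally well-formed questions; only the document statistics reveal that "tornado" is the term that matters right now.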

With 12 employees, Expect Labs has raised somewhere north of $4.8 million in venture funding (it hasn’t reported the exact sum). Backers like Samsung and Telefonica have more than a passing interest in better search technology for mobile devices; the Google Ventures connection is especially interesting, given that Google, like Apple, is investing deeply in technologies to make its mobile devices and operating systems smarter and more responsive.

If it turns out that lots of developers want to tap into the Anticipatory Computing Engine, Expect Labs could quickly end up colliding with (or being courted by) both of these Silicon Valley giants. “We are in a space where there are a lot of very big companies that care a lot about this technology, which is a bit frightening,” says Tuttle. “So we will see. Hopefully we can get there before they do.”

Wade Roush is a contributing editor at Xconomy.

