Update: Paul Allen’s Artificial Intelligence Push, Research Grants

Researchers at the Allen Institute for Artificial Intelligence (AI2), led by CEO Oren Etzioni, recently scored a D on a fourth-grade science test—a notable accomplishment in the field.

OK, it was actually their flagship artificial intelligence effort, Project Aristo, which combines leading-edge technologies in machine reading, reasoning, and something approaching common sense to tackle progressively harder science tests.

Etzioni, the former University of Washington computer science professor and entrepreneur, was appointed a little more than a year ago to head the AI2—one of Microsoft co-founder Paul Allen’s several ambitious, large-scale research projects. He shared Aristo’s latest report card in an interview with Xconomy, condensed below, on the occasion of a new round of grants awarded to artificial intelligence researchers by the Paul G. Allen Family Foundation.

The Allen Distinguished Investigator grants in artificial intelligence (AI) total $5.7 million over three years and will support research teams focused on machine reading, diagram interpretation, and spatial and temporal reasoning at Virginia Tech, the University of California, the University of Washington, and University College London. The grants are part of some $79.1 million Allen has committed to research in the field, with the AI2—located on the north shore of Seattle’s Lake Union—as the centerpiece of the effort.

Xconomy: It’s been a little more than a year now since you were tapped to head the AI2. Can you give us a brief update on the institute as it’s constituted today?

Oren Etzioni: We’re even more excited than when we started. It’s been a great year. We’re at 30 people, most of them technical. These are researchers and engineers working shoulder-to-shoulder. We have PhD-level researchers from Wisconsin, the University of Washington, Carnegie Mellon, Princeton, the University of Texas at Austin, UCLA—really the best computer science schools in the country.

We set out goals for ourselves—long-term goals, but also specific quantitative goals. One of them is to perform well on a fourth-grade science test. This is kind of our benchmark task. We do text processing. We do fancy AI. But at the end of the day, can you measure how much progress you’re making? And the answer is, yes. We take the New York Regents test, unseen questions, for fourth graders.

And for technical reasons, that’s plenty hard. That’s actually a very challenging problem. It involves fancy natural-language processing to understand the questions, and common-sense background knowledge. And on the non-diagram portion of the test—some of the questions have diagrams in them, which we are not yet handling—but on the ones that don’t, we’ve achieved 66 percent, and that’s a very recent result. It’s the first time a system has done that on this test, and this is fully automated. So, there have been some announcements about various things, and you dig under the covers, and there’s a human in the loop helping out, or the questions are being interpreted and written down in a different language. These are unseen tests, taken as-is, and [Project Aristo] is the first system in the world to achieve this kind of performance.

X: I had the notion that artificial intelligence is so hot that it would get lots of research funding, but part of the stated purpose of the Allen Distinguished Investigator (ADI) grants is to back research in fields that “may face significant funding challenges.” So what’s going on here?

OE: The fact of the matter is there is funding for research, but it’s of two kinds: Either it’s of the sprinkle kind, which means it’s just a tiny bit, and that comes from the National Science Foundation, or from Google research funding, and those are often at $100,000 a year or even less. The other kind is major funding, and that typically comes from DARPA (Defense Advanced Research Projects Agency) or IARPA (Intelligence Advanced Research Projects Activity)—so from the Defense department or the intelligence community. Those are much larger grants or contracts, but they also come with a lot of strings attached. So that directs the research in very complicated ways.

There’s a tremendous dearth of funding for researchers who want to do exciting, speculative things, and to do that at the level of a million dollars or so. And so, when we put out the announcement, even though it’s the first year of the program, more than 100 teams worldwide responded to the call for proposals. This is very high visibility within the community, and considered a tremendous achievement for the people and teams who have won.

This is not a MacArthur Genius grant, where you can really do anything with it, including go on a vacation in the Bahamas, but it is very much funding for research that’s unfettered by all kinds of constraints, or Department of Defense constraints of various kinds. And also, the focus is on particular areas that have not necessarily been in the mainstream or received a lot of funding. So, for example, in natural-language processing, a huge amount has been [for] simpler tasks like machine translation or syntactic processing. Diagram understanding, which was one focus [of the ADI grants], is not something that’s been studied very heavily, so the idea is to define things where there’s this dearth of funding, both because of the shape of the funding and the area itself, and find the very best people interested in those problems.

X: One of the grant recipients at Virginia Tech is working on imparting machines with “common sense.” I know that’s shorthand for something more complicated, but that kind of language makes me think about other human traits that AI might be built to possess. As you surveyed the field of research, are there people working on AI systems with characteristics we might consider more human, like compassion, sympathy, or altruism? Or is that too far down the road yet?

OE: The world is split between AI pessimists like Elon Musk and Stephen Hawking, who say that we’re unleashing the demon and this could spell the end of humanity, and AI optimists like Kevin Kelly, who say AI is going to be like electricity—you’ll just plug in and intelligence will flow from the wall—or people like Ray Kurzweil, who say the singularity is just around the corner. Paul Allen and I are what I would call AI realists, which is, we look at the state of the art today, and we think that neither party is correct, because both the fears and the optimism are overblown. The truth is that there are hard, fundamental, and interesting research problems that need to be solved, and we need to do the work without a preponderance of hype.

To your specific question about compassion and so on, it is very far beyond us because we’re still trying to build computers that can understand simple sentences. We’re still trying to build computers that can make sense of simple diagrams. We’re still trying to build computers that can take fourth-grade tests. And so that’s where the state of the art is at. We’re making progress. So I’m not a pessimist. But I want to make sure that things don’t get overblown.

X: When we talked last year, you mentioned the adage in the field that anything that is actually accomplished suddenly is no longer considered real AI. Knowing that, is there anything that’s come out recently under the banner of AI that’s particularly interesting, or conversely an egregious masquerade under that title?

OE: I think people have gotten better about recognizing AI contributions, and what you see is deep learning, which is not an area that I work in personally. Neural networks is another phrase for it. These methods have been getting terrific performance in tasks like speech recognition, object detection in computer vision, and so on. And so those are very impressive successes in the last year or two. The error rates in things like facial recognition, speech recognition have gone way down, which is impressive. And of course that results in a lot of attention and a lot of funding directed at those.

The questions that we’re asking are, OK, can we build on these successes and then tie them into these bigger challenges, like being able to read a textbook, being able to have common-sense knowledge? That’s very much a greenfield for future research, and that’s what we’re working on.

Benjamin Romano is editor of Xconomy Seattle. Email him at bromano [at] xconomy.com. Follow @bromano
