Dan Reed, Microsoft’s Resident Futurist, Thinks Past Windows to the Fusion of Mobile and Cloud Computing; Meet Him Next Week at Beyond Mobile

5/10/11

Dan Reed heads a crew within Microsoft that may have the coolest name of any division in the company: the eXtreme Computing Group, or XCG. Formed a little less than two years ago, the group is part of Microsoft Research, but its mandate goes beyond R&D: it’s to help the company as a whole look into the future and question its assumptions about the nature of computing.

Which makes Reed the perfect panelist for Beyond Mobile: Computing in 2021, Xconomy San Francisco’s marquee information technology event this spring. Set for next Tuesday, May 17, on the campus of SRI International in Menlo Park, this evening forum will give audience members a chance to interact with Reed and a cast of brilliant thinkers, researchers, and entrepreneurs on a question we’re all usually too busy to think about: just where is all this technology taking us? Given the current pace of progress in semiconductor and software technology, what sorts of capabilities will computers have 10 years from now? And what can we do now to plan for the entrepreneurial opportunities—and social and political challenges—that these capabilities will create?

Reed—a former UNC Chapel Hill professor and a onetime member of the President’s Council of Advisors on Science and Technology—is uniquely qualified to talk about those questions, because he’s not just a technologist. He’s also Microsoft’s corporate vice president of technology strategy and policy, meaning he spends a lot of his time thinking and talking about how trends in computing technology translate into issues that politicians and businesspeople are going to have to work through at some point.

I spoke on the phone with Reed last week, getting his views on everything from smart radios to data center design to AI. We’ll delve even deeper into many of these issues at next week’s event, so be sure to get your ticket now. Meanwhile, as a preview, here’s an edited summary of my talk with Reed.

Xconomy: Being a futurist is notoriously hazardous. Bill Gates himself said, “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” What do you do in your own work to try to work around all the uncertainties and biases involved in technology forecasting?

Dan Reed: One of the things I do is look at exponentials that have been in flight for a long time and ask how long they can continue. I think our semiconductor roadmaps are an example of that, looking at transistor size and some of the quantum-mechanical issues that are starting to rear their heads, and the power issues. When people start to struggle with something, then there is a chance that a phase transition is about to occur. It doesn’t mean that some breakthrough won’t occur to obviate the conventional approach. But when you see the bag of tricks starting to look empty, then I ask questions. It’s time to point out that this field may be in for a disruption.

I also talk to people outside technology. I talk to my friends in the arts. What do they find interesting? I try to listen to users of technology who aren’t technologically sophisticated to see what unexpected things they’re doing. And I do a lot of general-interest reading and trying to understand societal trends, and looking at where some of our social structures are being challenged.

X: What benefit do you get from talking to people outside of technology or engineering?

DR: I think sometimes people outside a domain are actually better at predicting than people inside it. They're usually less technologically knowledgeable, so they can make wildly inaccurate assumptions. But at the same time, they are not encumbered by the conventional wisdom that those of us in the discipline may have.

X: How would you describe the mission of the eXtreme Computing Group?

DR: Craig Mundie [Microsoft's chief research and strategy officer] and Rick Rashid [senior vice president of research] originally asked me to come to Microsoft to take a blank sheet of paper on how to build a data center. They said, “Don’t take any of the existing constraints into account, don’t talk about incremental change. Start by asking what is the right thing to do.”

One of the things that grew out of that was a deep look at ways to reduce cooling costs. Another aspect was finding the sweet spots for microprocessor design. How much performance do you really need? How do you optimize operations per joule? Those are all long-term explorations that are not going to affect the next generation of x86 hardware.
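To make the operations-per-joule question concrete, here is a minimal sketch with invented numbers (neither design point comes from XCG's actual studies): it simply divides throughput by power draw, which is why a slower, cooler part can come out ahead for data-center workloads.

```python
# Hypothetical processor design points: (name, operations per second, watts drawn).
# The numbers are illustrative only, not from Microsoft's work.
designs = [
    ("high-frequency core", 4.0e9, 95.0),
    ("low-power core", 1.2e9, 12.0),
]

for name, ops_per_sec, watts in designs:
    # One watt is one joule per second, so ops/sec divided by watts
    # gives operations per joule.
    ops_per_joule = ops_per_sec / watts
    print(f"{name}: {ops_per_joule:.2e} operations per joule")
```

On these made-up figures the low-power core delivers more than twice the work per joule, even though it is far slower in raw terms.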

X: From that focus on data centers, I take it XCG is mainly concerned with the future of cloud services?

DR: Yes and no. There are two kinds of interesting innovations taking place right now. Cloud is one, but there is an enormous amount of ferment in the mobile device world. There could be 50 billion of these interconnected things in a few years, most of which are not computers.

One of the things I’ve learned is that the pendulum swings back and forth. The questions don’t change, but the answers do. Right now the best cloud data center would be this massive thing occupying tens of thousands of square feet. But the pendulum has swung back and forth between centralized and distributed many times before, so one of the things we are looking at is how to build “micro data centers,” and how to reach the other four billion people [in the developing world], so that we’re not just deploying infrastructure where it’s already available, but can bring information resources to anyone, anywhere.

Maybe I would answer your cloud question this way. I believe there is huge power in local sensors and devices with low-latency response and context awareness. I believe there is also huge value in extracting insights from aggregate data that is consumer- and organization-generated. The fusion of those two things is where the revolution is. The devil is in the details about how we design each of these. Cloud data centers are just one solution point.

X: How is it that one person at Microsoft came to have your two jobs—directing the eXtreme Computing Group and also being a vice president of technology policy and strategy?

DR: It sort of evolved that way. The eXtreme Computing Group is really trying to take a long-term but integrated view of where we think technologies are going, rather than the traditional research view of looking at a specific piece of technology. The mission is to step back and say, “If we took this and this and this and put them together, we could make that happen,” rather than just focusing on advancing the state of the art in one area.

Every one of those issues raises a counterpart in the policy space. What are the implications of security in the massive cloud infrastructure? That’s already in the news frequently. Craig Mundie was thinking about both of these things, and he needed help, and I was in the right place at the right time.

X: Can you give me a concrete example of how technology research and forecasting and making policy recommendations overlap in your job?

DR: Let’s start with security. Part of what XCG does is look at next-generation hardware standards for trusted platforms—the hardware that supports basic cryptography in microprocessors. If you think about the chain of trust required to run secure software, it begins with the hardware. You want to be able to manage keys and verify that the software that’s running on the hardware is the software you think it is. As we work on next-generation cryptography and mechanisms for managing public and private keys, those technical developments and the software and hardware standards we produce naturally focus attention on the corresponding policy issues.
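As a rough illustration of that “is the software what you think it is” step, here is a minimal sketch: it measures a binary with SHA-256 and compares it to a known-good value. Real trusted-platform hardware does this below the operating system, in firmware and dedicated registers; the expected digest here is a placeholder, not anything Microsoft ships.

```python
import hashlib

def measure(path: str) -> str:
    """Return the hex SHA-256 digest of a file (a software 'measurement')."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical known-good measurement, e.g. published by the software's vendor.
EXPECTED_DIGEST = "placeholder-known-good-sha256-value"

# Measure this script itself as a stand-in for a boot loader or application binary.
if measure(__file__) == EXPECTED_DIGEST:
    print("Measurement matches: extend the chain of trust and keep running.")
else:
    print("Measurement mismatch: do not trust this software.")
```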

Say, for example, you’re managing medical records and you want to make those records available to a research group, or you want to make a subset of them available to a pharmacist. You may be willing to share information about physical attributes, but you don’t want to share information about mental health history. How do you give different groups differential access to information? That’s a policy question people would like answered, but it requires a technical implementation to deliver it.
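Here is a minimal sketch of that differential-access idea, assuming a simple field-level policy per requesting group; the record fields, group names, and policy are all hypothetical, meant only to show that the policy question needs a technical mechanism behind it.

```python
# One patient record and a per-group policy listing the fields each group may see.
# All names and values are invented for illustration.
RECORD = {
    "patient_id": "p-102",
    "height_cm": 178,
    "weight_kg": 82,
    "current_prescriptions": ["atorvastatin"],
    "mental_health_history": ["(sensitive, withheld by default)"],
}

POLICY = {
    "research_group": {"height_cm", "weight_kg"},
    "pharmacist": {"current_prescriptions"},
}

def view_for(group: str, record: dict) -> dict:
    """Return only the fields the requesting group's policy allows."""
    allowed = POLICY.get(group, set())
    return {field: value for field, value in record.items() if field in allowed}

print(view_for("pharmacist", RECORD))       # sees prescriptions only
print(view_for("research_group", RECORD))   # sees physical attributes only
```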

Or in terms of telecommunications—I was reading your interview with Bill Mark at SRI. Many of the “intelligent agent” technologies they’re working on bring requirements for quality of service—a certain bandwidth that the world of devices will depend on for anytime, anyplace access to communications. With prototypes, we run directly into those quality-of-service issues, which leads to the policy side. If you don’t think about managing the spectrum available for those devices, it’s potentially going to make these kinds of experiences difficult or impossible to realize.

X: The wireless spectrum issue is interesting, because it’s pretty easy to predict that it is still going to be important and contentious 10 years from now.

DR: Yes. We’re going to have to embrace cognitive radio technologies. We can’t make more spectrum, but we can use the existing spectrum more efficiently, in a couple of ways. Obviously we can increase the density of cells and slice things up spatially. But there isn’t going to be any alternative but to do more nimble negotiation.

Right now, [buying a mobile device] is like getting assigned to one lane on the highway at the time you get your license, and that’s the only lane you can drive in, whether the highway is busy or not. That just doesn’t make sense in a world with an exploding number of wireless devices. There needs to be a much more nimble way to negotiate real-time access to that finite resource.

That one is definitely going to be a 10-year trend, both because the technologies are rapidly evolving, and because that’s a place where the policy side will take a decade or more to resolve: how we license spectrum, which parts are unlicensed, how one adjudicates priorities, the economics associated with that, and then all the issues around public safety and national security, balanced against consumer and business desires.
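As a toy sketch of the nimble negotiation Reed describes, the loop below senses how busy each channel is and transmits on the least-occupied one, which is the basic move behind cognitive radio. The channel list and the random “occupancy” readings are stand-ins, not any real radio API.

```python
import random

CHANNELS = range(8)  # hypothetical channel indices

def sense_occupancy(channel: int) -> float:
    """Stand-in for spectrum sensing: fraction of time the channel looks busy."""
    return random.random()

def negotiate_channel(channels):
    """Pick the channel with the lowest observed occupancy."""
    readings = {ch: sense_occupancy(ch) for ch in channels}
    best = min(readings, key=readings.get)
    return best, readings[best]

channel, load = negotiate_channel(CHANNELS)
print(f"Transmitting on channel {channel} (observed occupancy {load:.0%})")
```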

X: Last question. Microsoft has a big team of people looking at artificial intelligence. Do you think we’re any closer today to convincing AI than we were, say, in 1968 when Kubrick and Clarke dreamed up HAL?

DR: I’m sure you’ve heard the saying that once you figure out how to do something using computers, the rest of the world says, “That’s not AI,” so by definition AI is the stuff you don’t know how to do yet. They’re right at some level. We are not on the verge of the grand AI of science fiction. That is the holy grail, and we’re still looking for that. I think what has changed is the amount of data we have. This is one of the places where statistics becomes your friend. That has led to substantial amounts of progress in areas like natural language processing.

It would also be a mistake to dismiss the power of local sensors. One of the historical challenges we’ve had is inferring things from a scarcity of data. But this cloud of data is one of the consequences of the sensor explosion [including mobile devices]. We are just in the early days of what could be done at the point of sale in retail trades, for example. A lot of these near-magical “I know what you want” things occur simply because you are not the only one who wants that thing—there are millions of people who want that thing and are asking the same questions.
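To illustrate the “millions of people asking the same questions” point, here is a minimal sketch of aggregate-data recommendation: it counts which items co-occur in invented shopping baskets and suggests the most frequent companions of an item. The data is made up; the point is only that the apparent magic comes from counting over many people, not from understanding any one of them.

```python
from collections import Counter
from itertools import combinations

# Invented purchase baskets standing in for aggregate consumer data.
baskets = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card"},
    {"camera", "camera bag", "memory card"},
    {"phone", "charger"},
]

# Count how often each ordered pair of items appears in the same basket.
cooccurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

def recommend(item: str, k: int = 2):
    """Return the k items most often bought alongside `item`."""
    companions = Counter({b: n for (a, b), n in cooccurrence.items() if a == item})
    return [b for b, _ in companions.most_common(k)]

print(recommend("camera"))  # e.g. ['memory card', 'tripod']
```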

Wade Roush is a contributing editor at Xconomy.
