Can We Be Too Connected? A Harvard Scholar Explores Interoperability

6/22/12

If you’re fond of delicious ironies, as I am, there’s a new book that will leave you positively gorged. It’s called Interop: The Promise and Perils of Highly Interconnected Systems, and last week I got to speak with one of its authors, Harvard Law School professor John Palfrey.

The central irony that fascinates Palfrey and his co-author Urs Gasser, who directs the Berkman Center for Internet & Society at Harvard, is that complex systems—especially those designed to support a high degree of interoperability—often bite back, achieving the reverse of what we intended. Think of the Challenger disaster as an example. The way NASA designed the shuttle system, more than 80 percent of the thrust required for liftoff came from the reusable solid rocket boosters, each of which consisted of four segments (an arrangement that made it easier to refill the boosters with solid propellant between missions). Rubber O-rings between the segments were supposed to seal the joints and keep hot gas from escaping during launch. But January 28, 1986, was a very cold day at Cape Canaveral, and the O-rings turned out to be so brittle under those conditions that they failed to hold the seal after ignition, allowing a jet of hot gas to escape, burn through the external fuel tank, and destroy the vehicle.

Put another way, one of the very components designed to make the shuttle’s elements interoperable—so that NASA could refuel the boosters between missions and mix and match boosters and orbiters—turned out to be Challenger’s undoing.

I’ve been thinking about the unanticipated risks built into complex technological systems for a long time—ever since I wrote my MIT PhD thesis on technological disasters back in 1994. What I like about Interop is that it provides a new framework for thinking about these risks, one that’s been updated for the network-saturated world we live in today. When you hear the word “interoperability,” it’s usually in some context having to do with computing, such as document portability. For example, the spreadsheets you create in Google Docs will usually show up correctly when you export them to Microsoft Excel, because both companies have committed to a certain level of interoperability. But Palfrey and Gasser point out that thanks to the Internet, mobile networks, and other systems, interoperability issues pop up in many other areas of our daily lives, and have a big impact on our productivity, privacy, and safety.
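To make “document portability” concrete: the simplest interoperability mechanism is a shared, open format that both vendors implement. Here is a minimal illustrative sketch in Python, using only the standard library (the file name and figures are invented):

```python
# Interoperability through a shared, open format: a CSV file written by
# one program can be opened by Google Sheets, Excel, LibreOffice, or any
# other tool that honors the same convention (RFC 4180), without either
# vendor knowing anything about the other.
import csv

rows = [
    ["quarter", "revenue"],   # header row
    ["Q1", 125000],
    ["Q2", 143500],
]

# Write the data once, in the agreed-upon format...
with open("report.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# ...and any standards-respecting program can read it back.
with open("report.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)
```

Richer formats such as OOXML spreadsheets work on the same principle at larger scale: the tools agree on the format, so they never have to agree on anything else.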

In some situations, we’re frustrated by a lack of interoperability: don’t expect to plug your 110-volt U.S. appliance into a 220-volt outlet in Europe without dire consequences. In others, we’re endangered by what might seem to be excessive interoperability. To take a recent example, it was all too easy for hackers to connect to LinkedIn’s computer systems two weeks ago, steal 6.4 million hashed customer passwords, post them to a Russian hacker forum, and invite the world to help crack them. At the moment, all of Europe seems to be in the grip of an interoperability problem. By adopting a common currency back in 1999, European politicians simplified commerce and boosted growth. But now the Euro Zone faces a cascading debt crisis as countries like Greece fail to adopt the kind of fiscal discipline required to sustain a currency union.
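A technical footnote on the LinkedIn episode: the leaked file contained unsalted SHA-1 password hashes, which is why outside “help” was even useful; anyone holding the dump can hash candidate passwords offline and look for matches. A minimal illustrative sketch in Python (the hashes and wordlist below are invented stand-ins, not real leaked data):

```python
# Why an unsalted hash dump is crackable: the same password always
# produces the same SHA-1 digest, so attackers can hash a wordlist
# and compare digests directly against the leak.
import hashlib

def sha1_hex(password: str) -> str:
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Stand-ins for the leaked digests (invented for illustration).
leaked_hashes = {sha1_hex("linkedin123"), sha1_hex("sunshine")}

# A tiny "wordlist"; real attacks use millions of common passwords.
wordlist = ["password", "letmein", "linkedin123", "sunshine"]

for guess in wordlist:
    if sha1_hex(guess) in leaked_hashes:
        print("cracked:", guess)
```

Per-user salts plus a deliberately slow hash such as bcrypt would have forced attackers to attack each account separately, at far greater cost.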

The central message of Interop is that interoperability can be dialed up or down, and that it’s up to business executives, government policymakers, and consumers—not just engineers—to think about how much interoperability is appropriate for each situation. The challenge, as Gasser and Palfrey put it in the book, is “creating better, more useful connectivity while simultaneously finding better ways to manage its inherent risks.” Palfrey also happens to be the chair of a controversial non-profit project called the Digital Public Library of America, which aims to increase interoperability among the digital collections of many different institutions; our conversation ranged from air traffic control to nuclear power, the Euro Zone crisis, and the future of literature in a world where companies like Amazon and Apple get to decide where and how you can read the books you buy.

Wade Roush: Talk about the threads in your work at the Berkman Center that brought you and Urs Gasser to the point of writing this book.

John Palfrey: We started out with a really basic question seven or eight years ago: was everyone’s presumption that higher levels of interoperability lead to more innovation actually true? In policy and business discussions, people talk about interoperability like they do about motherhood and apple pie—as if everything about it is great. Generally that is the right approach, but it’s a more complicated topic than that. So we decided to dig into a bunch of case studies to determine whether there is a relationship between higher levels of interoperability and higher levels of innovation. And generally the answer came back yes.

Then the idea of interoperability got into our heads in an interesting way. We started to see interoperability issues everywhere, sometimes in the context of IT and sometimes with things that had nothing to do with IT, in areas as far-reaching as transportation and economics. Increasingly, we began to see larger social issues as related to this. What turned it into a book-length project was the combination of two types of problems that we think interoperability speaks to. One is the development of business models where interoperability is central—Facebook, Twitter, and Google are all companies where an interoperability strategy is essential to corporate success. Second, many of the large social issues we are seeking to address—and here I would mention climate change and healthcare—have interoperability issues buried within them. Interoperability at first sounds like a really geeky, IT-specific topic, but we felt that this was a conversation that mattered to people in many aspects of business and politics. So we wrote this book for people who are trying to determine business strategy in a broad sense, and technology policy in a broad sense. It’s a sleeper topic at first, but once you see its applications it can become a really useful theory.

WR: Before we dive into that theory, I wanted to ask you to lay out some examples of interoperability working well, and interoperability gone bad.

[John Palfrey, co-author of Interop. Photo by Asia Kepka.]

JP: That’s a great question, because at the outset we realized that interoperability is not a one-size-fits-all topic. It’s not that you either have interoperability or you don’t. It’s more a question of whether you have a useful form of interoperability that is close to optimal, or a kind that is suboptimal in some way. There is a lot of nuance there.

To take one example that shows both the good parts and the not-so-good parts, consider air traffic control. This is a topic where interoperability has been key from the beginning. When the Wright brothers took off, there was only one plane in the sky. Now there are many more. One of the ways we have sought to protect the safety of those planes is to coordinate them through the complex system of air traffic control. We initially passed a set of rules that helped people coordinate when a plane should take off. Then we needed a communication system, so we added radio. Then language became really important, and eventually we settled on English as a common language. But even that was not good enough, so we had to settle on a standardized form of English.

It turns out that we got locked in several decades ago. The system was fine at the time, but it isn’t sufficient to handle all of today’s traffic, and it has been very difficult to incorporate innovations such as GPS. The system has been very effective, up to a point, but it’s not optimal.

Another example that shows both some great aspects of interoperability and some that are less so is the social media landscape today. It is a fabulous thing that from an Apple iPhone, one can access one’s Twitter and Facebook accounts and connect all of these bits of data about oneself and other people. But increasingly that is also giving rise to some new problems, particularly concerns about privacy and information security. So I think there are levels of interoperability that are fantastic for consumers and innovation, but they are also helping to introduce new versions of old problems that we will need to solve.

WR: You don’t dwell much in the book on the areas of mechanical engineering, software engineering, and networking where interoperability first became a big theme. But it strikes me that there are many useful observations there about different levels or categories of interoperability. For example, many analysts, most famously Charles Perrow in Normal Accidents, have argued that the partial meltdown at the Three Mile Island nuclear plant in 1979 was the result of interoperability gone awry: the plant’s various systems were so tightly coupled that there was no room for error. Did you deliberately steer away from discussing these technical and engineering aspects of interoperability?

JP: Most of the really interesting literature so far on interoperability has been in the technical fields like engineering and computer science. I think nuclear energy is a fantastic example of that. And yes, we chose examples that were relatively non-technical to try to make a book that could be read by ordinary humans. But I think your example is a fantastic one.

To pick up on two aspects of it: first, I think the Internet, the Web, and the emerging mobile Internet are examples of interoperability that has occurred in an organic way and has been enormously generative. It is the core case in favor of interoperability in many ways. The open standards that underlie the Web, e-mail, et cetera are essential to the story, and that is, at its core, an engineering issue. But there are policy issues that occur along the way. Patents were never asserted over many of the core protocols and standards of the Web, and I think that has been a very good thing. We have had great innovation because people didn’t claim strong intellectual property interests in these protocols. One thing I fear is that in the next iteration of the Web, we may not have that. I am concerned that the Tim Berners-Lee vision of the Web is not going to play out on the mobile Web.

The second aspect, which your Three Mile Island example draws out, is the smart grid. This is where we end the book—on the big architectures that are relevant to the future. The tight coupling of the electric grid to information systems holds out enormous promise for improving efficiency. On the other hand, it reintroduces the kind of tight-coupling problems that contributed to Three Mile Island, along with a set of national security and consumer privacy concerns.

This, in a way, is the core payoff of our theory: when we are designing these systems, we need to design for these known bugs in highly interoperable systems. We need to start thinking about complexity and its relationship to security and privacy and build those in from an engineering and a policy perspective.

WR: You do have a lot of examples in the book of interoperability playing out in areas like music sharing and privacy and standards, but in the end you don’t really offer a strong thesis or set of recommendations about how much interoperability is the right amount. Is that because the answer is different for every type of system? Or were you more interested in providing a framework than specific prescriptions?

JP: We do think there are optimal levels of interoperability. They just happen to be context-specific. Rather than write a book that said “This is how to do it,” we wanted to develop a broad theory that we thought could apply to lots of situations. In the introduction, right on page 15, we include a chart that shows 10 different ways to get to interoperability. This may seem academic and overly nuanced, but we actually think it’s the right answer: we prefer complex answers to complex problems, because that is how you get to a balanced and effective outcome in a specific instance. It may seem unfulfilling, but I think it’s part and parcel of the story.

When we look into specific problems, we do give specific answers. In the case of developing standards for electronic health records, for example, we give fairly specific answers for what we think should happen. In some cases it means a greater role for certain private players, and in some cases it means a greater role for actors in government. We’re trying to give a general theory and show how we would apply that theory in some specific instances. So I think your critique is fair, but it’s also core to the payoff of the book.

WR: Let’s look at a specific example that was probably just starting to boil over as you were finishing the book—the financial crisis in Europe and specifically in Greece, where it seems that the debate over bailouts and austerity is bringing the whole logic of the European common currency into question. In your framework, is this a case of too much interoperability, or too little?

JP: I think the Greek example is a perfect one for seeking to understand interoperability at its most complex. To me the benefit of a highly interoperable system in Europe and having a common currency is that there has been economic growth over a period of time. But what I think the current crisis shows is that we have not been as good about creating the firebreaks between the different parts of a system as we have been about creating the connections. We tend to rush into interoperable systems and make them more interoperable than they ought to be, because we get carried away by the benefits without thinking about the downsides. In the Greek example, I think we need to develop ways to compartmentalize the trouble that comes from one part of a complex system, so that when it goes bad it doesn’t affect the entire thing.

WR: I’m not an economist, but one of the major prescriptions I’m hearing for saving the Euro Zone is to have an even higher level of interoperability, with rules for borrowing and taxation set centrally. In other words, the idea would be to put in place a fiscal union to go along with the common currency. What would your theory say about whether that’s the right direction?

JP: I’m not an economist either, and this is reaching quite a bit further than the core of our book. But I would say there are at least two answers for Europe. One would be to become more interoperable and have a system that’s more harmonized, with stricter rules about what it means to be in the system. Or you could say that Greece is different enough that they cannot be part of the system. You can have a highly interoperable system with one less member. That wouldn’t mean that the Euro Zone is over, it would just mean that there was a member who was not willing to play by the rules that are necessary to maintain the union.

Is there a third way? That is ultimately the creative challenge for policymakers. Can they come up with a plan that will enable Greece to stay in the union but also create enough of a firebreak between Greece and the rest of the system so that there is not a continued series of bailouts? That is the question the EU has to answer.

WR: You were saying earlier that once you started thinking deeply about interoperability, you saw examples of it everywhere. And I think it would be really easy to get carried away and start saying that in every case where a technology or a system has had great benefits, it’s an example of interoperability being optimized correctly, and in every case where we have a bad outcome, it was because of too much or too little interoperability. But at that point, the framework would become almost meaningless. Have you thought about whether your theory is falsifiable? Are there areas where it just doesn’t apply?

JP: The way I think about it is, the topics where interoperability theory is strongest are those involving complex systems that are highly dependent on information and data flows. The relationship between interoperability and innovation is a good core case. Another one that is really core is the relationship between interoperability and consumer choice. On the downside, two examples that are really important are privacy and security. Those are the core cases where interoperability really makes a lot of difference. I think the theory starts to break down in areas where the flow of information is not as essential to the story, and where the complexity one is trying to explain has less to do with that kind of connectedness. We set aside a lot of examples as we did the study because they fit less well.

WR: You have a section on standards bodies and how common standards such as the Internet Protocol or USB usually evolve through negotiations between big companies and government agencies. We don’t think of individuals as having a lot of influence over these processes, with Tim Berners-Lee as, perhaps, the biggest recent exception. If interoperability is generally something shaped by large groups over time, what can individuals really do to change things?

JP: That’s an incredibly good point. It’s right to say that generally speaking, standards processes work better for big companies than for individuals. I think where the theory of interoperability is most useful to individuals is through the frame of decision-making in our own lives.

One decision we have to make, for instance, is how we want to keep our digital media and what kind of relationship we want to have with the companies that sell digital music or digital books. If you were to go out and buy a Kindle and decide to use the Amazon platform for all of your books, and then decide at a certain point a decade from now that you want to switch over, that may or may not be an easy thing for you to do. Even if you trust the company now, you may not trust them later. That is a hard problem. With respect to innovation, as we put more of our lives into Web-based applications such as Facebook and Google, our lives are rendered more interoperable, and in most cases that interoperability works very well to ensure seamless communication. But it also means that we are putting lots of information about ourselves into private hands.

WR: Well, this is an interesting example. Jeff Bezos may decide 10 years from now that you should be able to read every Kindle edition you’ve bought on any e-reader device. Or Amazon may get even more restrictive about DRM. How is an individual supposed to anticipate those kinds of changes in interoperability?

JP: That is the problem of interoperability over time. We may get it right on day one, but what really matters is whether over time that interoperability holds. That is the point at which, to make the best personal decisions possible, we may need the government or the market to help out, to make sure that our preference holds up.

In the context of Jeff Bezos and books, that is why we need a model at this point to ensure that there is a way to update books and other items that we think of as core to our democracy. I fear exactly what you describe. We can’t rely on individual leaders. We may love the founders of Google and Facebook and Amazon today, but a decade from now they may act in ways that are not in the public interest.

WR: Who do you hope will read your book?

JP: I think there are two core audiences that we would love to see engage with the book. One is people who are running the most important companies today, who need to see that interoperability is a core part of their business strategy and that generally speaking more interoperability is a good thing both for the company and for the public, but that it’s not totally clear cut. Second, I would say people who are in key policy and decision-making positions need to see the importance of optimal interoperability to the outcomes of major public policy debates. Those range from climate change to healthcare to intellectual property and innovation in general. From an impact perspective, I hope the book is useful to those two core communities.

Wade Roush is a contributing editor at Xconomy.
