Xconomist of the Week: Stefan Savage on Computer Security

The evolution of computer security is not merely some dark mirror, passively reflecting advances in technology. While technology provides new opportunities for threats, these become true dangers only when there is a motivation to exploit them and a means to do so.

Stefan Savage, writing in The New York Times, Dec. 5, 2011.

By his own admission, Stefan Savage’s interests are all over the map.

Savage is a professor of computer science at the University of California, San Diego, who works on computer and network security issues with researchers at the University of Washington, where he got his PhD, as well as UC Berkeley, the University of Illinois at Urbana-Champaign, and elsewhere.

Last year, a team led by Savage and UW’s Tadayoshi Kohno showed that a hacker with physical access to an automotive electronic control unit could alter software to stop the engine, disable the brakes, and carry out other nefarious tasks. In follow-up research published earlier this year, Savage and company said they had succeeded in performing similar tasks remotely—using a car’s built-in cellular connection to insert malicious software that enabled them to override various vehicle controls. (Their findings can be found at the website of the Center for Automotive Embedded Systems Security, a UW-UCSD collaboration.)

Savage also has helped lead wide-ranging studies of Internet spam, outlining the global “ecosystem” that supports compromised accounts, spam mailers, credit cards, e-mail lists, and other tools of the trade. This work led to a comprehensive study of just how much revenue spam advertising can generate, even when most of the spam is blocked. In a recently published paper, the scientists from Berkeley and San Diego counted more than 100,000 orders a month in just one spam network. The group also offered a “rough but well-founded” estimate that revenue generated from spam-advertised pharmaceutical drugs amounts to tens of millions of dollars a year.

He recently fielded some questions from Xconomy:

Xconomy: You’ve been involved in so many different aspects of cyber-security. What do you see as the single biggest danger in computer and network security today?

Stefan Savage: I think the answer here is relative to who you are, what the real threat is and what resources of value you need to protect. For most small-to-medium businesses, I suspect that the problem with the biggest potential for direct losses is still going to be ACH fraud. [The Automated Clearing House (ACH) network is used by financial institutions to handle electronic deposits, checks, bill payments, and cash transfers between businesses and individuals.]

There is a vibrant ecosystem of attackers going after such accounts, and in many cases the small and medium businesses carry full liability for such losses—unlike consumer credit card losses. Still, businesses with valuable IP portfolios may face greater dangers from targeted data exfiltration.

Thankfully, attacks on cyber-physical systems (i.e., computer systems that control “real world” components: electricity, transportation, etc.) are still more in the latent risk phase of evolution rather than a true “danger” today. While it’s fairly clear that these systems are vulnerable to attack, it’s not yet clear if there is a capable constituency whose immediate goals would be served by actually mounting such attacks.

X: Who came up with the idea for creating a Center for Automotive Embedded Systems Security?

SS: The genesis of this effort goes back about five years. Yoshi Kohno and I had been observing how automotive systems were becoming increasingly computerized and then networked to the outside world. Our experience has been that this evolution inevitably leads to security issues, and we figured that it was an ideal time to explore the issues in automobiles since the transformation is still in progress and thus there is much more potential to influence how security is introduced.

As for how the collaboration was born, we have a long-standing close relationship with the University of Washington. For example, I was a Ph.D. student there before coming to UCSD, and Yoshi (now a professor at Washington) was originally a Ph.D. student here at UCSD. [More questions and answers about their automotive research are here.]

X: There was a dustup earlier this year between the hacker group Anonymous and a government defense contractor, HBGary Federal. Who is at higher risk—companies or individuals—from organizations that seek to mine information from social networks?

SS: I think the WikiLeaks meme, that online data leaking can be used as a kind of weapon, cannot be put back in the bottle. The effect that Anonymous, AnonOps, LulzSec and others have had in publishing data leaks makes this clear. I suspect that companies face more risk from this phenomenon than individuals because they are larger targets and have a broader set of damages that can be experienced.

Now the second question is about who faces risk from mining social networks, and again I think this depends on what kind of risk you’re talking about. Most of these companies are doing this mining in support of marketing activity. While I don’t particularly care for this, I’m hard pressed to identify it as a major security risk. However, depending on who you are, there may be individual risks that could be exposed (e.g., your true address to a stalker, suggestions of infidelity for someone involved in an affair, evidence of drug use to an employer, etc.). The extent to which this is happening is an open question. It clearly is happening, but it’s hard to quantify the risk.

X: Do you have any concerns about organizations that have developed software that enables them to create fake online personas to mislead and manipulate social media sites?

SS: I think fake identities are part and parcel of undercover investigations and so I’m not fundamentally concerned that this capability exists. It’s a bit more interesting when you consider that this capability might be scaled to create millions of fake identities that interact automatically, i.e., social-bots. This potential for scale, combined with our increasing trust in online identities does create interesting new security issues.

X: Where do you think the biggest opportunities are for improving security?

SS: I think in order to best address cyber-attacks we really need to understand the attacker’s world better. While we’re used to thinking about cyber-attacks as technical endeavors, that’s only part of the picture.

For example, most large-scale attacks today are commercial in nature—the attacker is profit-seeking. While we invest a great deal of money and effort (rightly so) in trying to technically harden our systems against attack, it is rare for us to consider how these defenses actually impact the attacker’s bottom line. In most cases, the underlying business model has already “priced in” the impact of defenses, and the end-system is not in fact the most critical part of the attacker’s value chain. In fact, compromised U.S. hosts are available in bulk for $100 per thousand, and Asian hosts for about a tenth as much.

When you invest the time to understand how the attacker’s value chain works, this provides pointers to where their true weak points are. In our examination of the spam ecosystem, it became clear that there was just no way that spam filtering, blacklisting, or takedowns were ever going to cause enough financial drag to undermine the spam advertising channel. However, it turns out that the payment systems by which advertised goods and services accept consumer credit cards are a huge weak link that has no cheap substitute. That is going to be a far more effective place to intervene. This kind of analysis is appropriate for a wide variety of security situations, but it’s rarely undertaken because it requires considerable time and effort, and it doesn’t necessarily lend itself to selling a product.

Bruce V. Bigelow is the editor of Xconomy San Diego. You can e-mail him at bbigelow@xconomy.com or call (619) 669-8788.
