Human Error: Living With the Weakest Link

Opinion

“We have met the enemy, and he is us.”
– Walt Kelly’s Pogo

Computer security breaches have become so common as to seem like a force of nature we can’t stop or control, like hurricanes or epidemics. After each one, experts scramble to plug holes, rewrite security plans, and explain at length why that particular problem will never happen again. We want to believe that with just a few more bug fixes, our systems will be truly secure.

Unfortunately, perfect security will elude us for as long as human beings are involved, because humans—for all our greatness—are imperfect. From the data entry clerk to the Director of Security to the CEO, everyone makes mistakes, and sometimes those mistakes result in security breaches, either immediately or long after the original mistakes were made.

The truth is, our security systems are indeed improving steadily, and tend to get better after each breach. But even if we believe we might create perfect security technology, human nature doesn’t fundamentally change. Sometimes we’re fooled by social engineering attacks (like spear phishing), sometimes we misunderstand what we’re supposed to do when confronted with a cyber threat, and sometimes we just plain mess up. Software can be more or less perfected, but not human behavior. In fact, the more complex, stressful, or fast-paced the environments we work in, the more likely we are to make mistakes; this “human factor” has been well documented in industries that rely on human performance for safety, such as aviation.

Given that human error is inevitable, we need to start supplementing our important efforts to educate users about security with more explicit plans for handling the next security-threatening human error. Just as we can prepare for the next tsunami by building higher sea walls and adopting zoning that discourages construction in flood-prone areas, we can anticipate the ways some users will inevitably err, and plan around them.

Pre-Empting Human Behavior

One thing we can do is deflect and redirect errors to where they’ll do the least harm. Workers in nuclear power plants have often replaced generic-looking but potentially hazardous switches with beer tap handles or other things that stand out and warn workers that this is a particularly dangerous switch. This may not decrease the likelihood of a worker throwing the wrong switch, but it may decrease the likelihood of throwing the worst possible switch, and it certainly helps us outwit our own tired, stressed, or panicked subconscious, which might otherwise throw the switch without thinking.

Paradoxically, an overall system is often safer if we …


Nathaniel Borenstein is chief scientist at e-mail management firm Mimecast. Based in Michigan, he is the co-creator of the MIME e-mail standard and previously co-founded First Virtual Holdings and NetPOS. Follow @drmime
