Secrecy Rarely Works

Last month some researchers at England's Cambridge University made a disturbing discovery about certain bank ATMs: it's possible for a thief to steal money from your account. Don't panic--the flaw they found could be exploited only by an insider, and many U.S. banks don't use the hardware systems in question. But this was small comfort to a Diners Club cardholder and his wife, who were shocked to find themselves charged for about $80,000 in withdrawals from London ATMs. The cardholders were at home--in South Africa--on the March 2000 weekend when the machines gave out cash in their names, 190 times. Diners Club and its owner, Citibank, are suing, saying the charges should stick because the system is infallible. The defendants have turned to the Cambridge researchers to challenge that conclusion. But, claiming a threat to ATM security, the plaintiffs have asked for--and received--an order that technical testimony can be given only in closed chambers and cannot be made public, ever.

Ross Anderson--the Cambridge reader in security engineering whose students performed the research and who is one of the witnesses--thinks the decision is a travesty, and not only because it could hamper his students' careers. In the bigger picture, the stance taken by Citibank represents a retreat to a model of protecting information that many believe has long been discredited. It all hearkens back to an old debate in the security community, one that has particular pungency in the jittery post-9/11 world.

Here's the dilemma: should information about systems that protect property be closely held or widely circulated? Should flaws be reported widely so people can fix them, or should they be buried in hopes they won't be exploited? Sunshine or darkness--which is the better shield? The knee-jerk reaction is to keep things as quiet as possible, so that rogues and scoundrels will be denied any scuttlebutt that might help them plunder an inadequately secure operation. This is known as "security by obscurity," a model apparently embraced by the British judge in the ATM case. The alternative is transparency, where the workings of a system are open and the protection comes from passwords, cryptographic keys and other built-in controls.

Hard-won experience has shown one model to be a clear winner. "Security through obscurity is a bad idea," says Vincent Weafer, director of security response at Symantec. "The bad guys wind up getting the information anyway." The best way to nail down a system is to be open about its workings, and let researchers have at 'em. That way the smartest people trying to crack your system will probably be the good guys, most likely grad students in search of glory and a dissertation topic. When they find a flaw they should first notify the company, which should fix it, but they should also make it clear that the information is going to be published. A deadline increases the chances that a fix will be implemented, and publication helps notify users they should make sure the fix is in.

Countless systems have benefited from this method. In the early days of the Netscape browser, which pioneered a secure means of protecting credit-card numbers online, some Berkeley researchers discovered an elementary error. Netscape wisely admitted its mistake--and instituted a program to reward future discoveries.

Though Citibank says it doesn't want to suppress research, its actions put it squarely in the obscurity camp. It's not alone. In January, Matt Blaze, a scientist at AT&T Labs Research, released a paper about a weakness in some lock systems (like the ones on doors). Some experts warned that burglars would use his technique, which makes it easy to create an unauthorized master key. But Blaze says he didn't get a single complaint from a person charged with safeguarding a facility. The criticism came from locksmiths, who felt that only they should have such information. Now Blaze's work will force lockmakers to fix the problem.

After September 11, of course, the security-by-obscurity debate has taken on added urgency. It's tougher to encourage open discussion about security weaknesses when you know terrorists may be Googling for that very stuff. That was the impetus for a recent article in Polygraph, a journal devoted to lie detectors: its author, a federal expert on "countermeasures" that help people pass the tests, called for a ban on disseminating information about such techniques. He cited news reports of countermeasure information being found in a Qaeda safe house in Kabul.

It sounds so tempting--outlaw the discussion and the problem will go away. Then you won't have to confront the reality: if countermeasures make it easy to spoof the tests, we've got to fix the problem by improving the tests. Otherwise, who are we kidding?

Ultimately, the lure of security by obscurity is a form of self-deception--just a high-stakes form of sweeping a problem under the rug. The only thing that's really obscured is the necessity to work harder at protecting ourselves. And that's a lesson we shouldn't have to learn twice.
