Since the revelation in recent weeks that al Qaeda has been planning attacks on the “information superhighway,” some of you have expressed concern about the outdated nature of the security models we use.
William Wulf, professor of engineering and applied science in the Department of Computer Science at the University of Virginia, has a good name for the current approach, calling it the “Maginot Line” model. I came across the term in a speech he gave before the House Science Committee on Oct. 10, 2001, a month after the attacks on the Twin Towers and the Pentagon.
It surprises me that his analysis and that of others at the hearings did not get more play in the press. But I guess unpleasant truths are like secrets in that way: If you want to be sure no one sees them, put them in plain sight.
My thanks to those of you who brought the hearings to my attention. I can recommend them as thoughtful analyses of the situation. In addition to Wulf, others presenting their views included Eugene H. Spafford, Professor of Computer Science and Director of Purdue University's Center For Education and Research in Information Assurance and Security (CERIAS), and Terry C. Vickers Benzel, Vice President of Advanced Security Research, Network Associates, Inc.
To find out more about what they said, go to the full list of the House Committee hearings for the 107th Congress to listen to the testimony or read the transcripts listed under the Oct. 10 meeting of the Science Committee.
According to Wulf, the current system is flawed because the strategic assumptions on which it's based are outdated, as are the responses to security breaches. Most cyber security, he points out, is based on what he calls the “Maginot Line” model: the assumption that what we need to protect is inside the system, behind firewalls, cryptographic mechanisms, and intrusion and virus detectors that are supposed to keep outside attackers from gaining access and taking control. But, he said, in a net-centric computing and communications environment where there is no clear boundary between “in here” and “out there,” this view has outlived its usefulness, if it had any in the first place.
“The immediate problems of cyber systems can be patched by implementing 'best practices,' but not the fundamental problems,” he testified. Moreover, no one has questioned the underlying assumptions about cyber security established in the 1960s mainframe environment out of which the Maginot Line Model emerged. As a result, he said, the little research that is being done is focused on answering the wrong questions.
“In WWII, France fell in 35 days because of its reliance on this model,” he said. “No matter how formidable the defenses, the attacker can make an end run around them, and once inside, the entire system is compromised.”
Wulf also objects to the model because it fails to recognize that many security flaws are “designed in.” In other words, a system may fail by performing exactly as specified. Flaws are not always “bugs” or errors; they can also result when a system behaves as designed, but in ways the designers did not anticipate. “It is impossible to defend or provide a firewall against security flaws that were conceived of as perfectly legitimate; that were, in fact, considered requirements of correct system behavior,” he said.
A third concern he has is that the Maginot Line cannot protect against insider attacks. “If we only direct our defenses outward, we ignore our greatest vulnerability, the legitimate insider,” he testified.
An even more serious flaw he sees is that it may not be necessary to “penetrate” a system to do major damage. “This was demonstrated by the distributed denial-of-service attacks on Yahoo,” Wulf said, “which showed that expected behavior can be disrupted or prevented without any form of penetration. Simply by flooding a system with false requests for service, it became impossible to respond to legitimate requests.” (I understand that this was one of the ploys al Qaeda might have been considering if it had succeeded in penetrating one of the many control systems monitoring power, water, and fuel distribution.)
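The flooding ploy Wulf describes can be illustrated with a minimal sketch. This is a hypothetical simulation, not a model of any real server: a service has a bounded request queue, and an attacker who never penetrates the system simply fills that queue with bogus requests, leaving no capacity for legitimate ones.

```python
# Hypothetical sketch of denial-of-service by flooding: a server with a
# bounded request queue. The attacker penetrates nothing; bogus requests
# simply arrive first and exhaust capacity, crowding out legitimate traffic.

QUEUE_CAPACITY = 100  # illustrative limit on pending requests

def simulate(attack_requests, legit_requests):
    """Fill a bounded queue in arrival order; return the number of
    legitimate requests that actually get queued for service."""
    queue = []
    # Attack traffic arrives first and faster, consuming the capacity.
    for _ in range(attack_requests):
        if len(queue) < QUEUE_CAPACITY:
            queue.append("bogus")
    served = 0
    for _ in range(legit_requests):
        if len(queue) < QUEUE_CAPACITY:
            queue.append("legit")
            served += 1
    return served

print(simulate(attack_requests=0, legit_requests=50))       # 50: all served
print(simulate(attack_requests=100000, legit_requests=50))  # 0: queue saturated
```

The point of the sketch is that every defensive mechanism aimed at penetration — firewalls, cryptography, intrusion detection — is irrelevant here; availability fails even though the perimeter holds.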
Probably the most serious objection Wulf has to the current Maginot Line approach to computer security is that it has never worked. “Every system built to protect a Maginot Line-type system has been compromised, including the systems I built in the 1970s,” he said. “After 40 years of trying to develop a foolproof system, it's time we realized that we are not likely to succeed. It's time to change the flawed inside-outside model of security.”
While I find many of Wulf's arguments compelling, what do we have to replace this model? Wulf points to a couple of possibilities. They include models based on biological immune responses, along with models that distribute the responsibility for defining and enforcing security to every object in the system. The goal is to ensure that the compromise of one object would not compromise the whole system.
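To make the second idea concrete, here is a minimal sketch, with made-up names, of what per-object security enforcement might look like: each object carries and checks its own access policy at the point of use, so there is no single perimeter whose breach exposes everything.

```python
# Illustrative sketch (not any particular product's API): every object
# enforces its own access policy, instead of trusting a perimeter check.
# Compromising or bypassing one object grants nothing about the others.

class SecureObject:
    def __init__(self, name, allowed_principals):
        self._name = name
        self._allowed = set(allowed_principals)

    def read(self, principal):
        # Each object makes its own access decision at the moment of use.
        if principal not in self._allowed:
            raise PermissionError(f"{principal} may not read {self._name}")
        return f"contents of {self._name}"

payroll = SecureObject("payroll", {"alice"})
inventory = SecureObject("inventory", {"alice", "bob"})

print(inventory.read("bob"))   # bob's access to inventory is legitimate
try:
    payroll.read("bob")        # even an insider is stopped at the object itself
except PermissionError as e:
    print(e)
```

Note that this also addresses the insider problem raised earlier: the check happens at every object, not only at the boundary between “in here” and “out there.”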
What are the alternatives to the traditional model? How robust are they? Are they applicable across the board to all connected computing environments, from those based on desktops to embedded devices, or are they limited to just one segment?
Bernard Cole is the managing editor for embedded design and net-centric computing at EE Times. He welcomes contact. You can reach him at 928-525-9087.
I believe that the best approach we have now is still the old-fashioned way of not giving a user more access than he, she or, in the case of automation, it needs to perform the tasks at hand. There are computer systems that implement methods like this. However, enforcing security at the object level is expensive and unpopular. Feel-good developers and short-term-results-oriented middle managers need to understand that a security compromise is far more expensive than implementing a good system.
David J. Liu
Director of Technology
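The least-privilege approach Mr. Liu describes can be sketched in a few lines. The roles and permission names below are invented for illustration: each account, human or automated, is granted only the permissions its tasks require, and every operation is checked against that grant.

```python
# Illustrative least-privilege sketch: accounts hold only the permissions
# their tasks require. Role and permission names are made up for the example.

ROLE_PERMISSIONS = {
    "operator":   {"read_sensor"},
    "engineer":   {"read_sensor", "update_config"},
    "automation": {"read_sensor"},  # a machine account gets no more than it needs
}

def authorize(role, permission):
    """Return True only if the role was explicitly granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(authorize("engineer", "update_config"))    # True: granted for the task
print(authorize("operator", "update_config"))    # False: not needed, not granted
print(authorize("automation", "shutdown_plant")) # False: unknown permission denied
```

The design choice here is default deny: anything not explicitly granted is refused, which limits the damage any one compromised account can do.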