Active Networks and Cyber-terrorism
In the aftermath of Sept. 11, the concept of active networking, in which data traffic on the worldwide information superhighway could be dynamically re-routed in real time around massive blockages and damage, deserves serious consideration. And it deserves it now, not five or ten years out.

If implemented, this capability will make the information superstructure on which our economy depends more resilient and resistant to cyber-attacks by terrorists. The shift to such a network topology will require rethinking the design and implementation of network processors embedded in the network's switches and routers, as well as the very nature of the operating systems and applications that will be needed there.

While there has been considerable product development, even before Sept. 11, aimed at making our net-centric information superstructure much hardier, much of it has been localized: individual servers and clusters of servers, individual switches and routers or groups of them, and individual corporate or organizational intranets. The degree of hardiness has also varied with the specific needs of each organization. But no set of technologies or products is in place today to do what Sept. 11 has shown us must be done: build an Internet and World Wide Web that will survive a concentrated, focused terrorist cyber-attack and that can repair and reconfigure themselves on a network-wide, real-time basis.

President Dwight D. Eisenhower championed the interstate highway system in the mid-1950s to ensure that physical traffic would still move in the event of an attack by our enemies during the Cold War, and the original inter-network grew out of the same Cold War imperative. Since then, the Defense Department's Advanced Research Projects Agency (DARPA) has actively funded development of hardware and software technology to ensure the network's ability to survive such an attack.

The result is a body of university research on what is broadly categorized as active networks. An active network is one that can be configured on the fly in the event of data traffic jams, crashes, and outages. While current network processor schemes in the control plane of today's switches and routers use some of the concepts developed for active networking, they are being deployed only incrementally and piecemeal in segments such as wireless Internet connectivity and networked multimedia.

In a smart, or active, network, a high degree of intelligence is built into the data packets and into the data-flow network processors at the core of the network infrastructure's routers and switches. Using dynamically updated code carried in the header of each packet, network processors in intelligent networks could modify, store, or redirect data around blockages or outages, replacing today's high-bandwidth but ultimately passive inter-network.
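The idea can be sketched in a few lines. In this hypothetical Python sketch, the packet fields, node structure, and policy names are all invented for illustration and are not drawn from any real active-network protocol; the point is simply that each packet carries a tiny per-packet policy that a node consults when its preferred next hop is blocked:

```python
from dataclasses import dataclass

@dataclass
class ActivePacket:
    dest: str
    payload: bytes
    # "Code" carried in the header: here reduced to a one-word rule
    # the node evaluates when it hits congestion.
    on_congestion: str = "reroute"   # or "store", or "drop"

class ActiveNode:
    def __init__(self, routes, congested=None):
        self.routes = routes             # dest -> preferred next hop
        self.alt_routes = {}             # dest -> backup next hop
        self.congested = congested or set()
        self.buffer = []                 # packets held until a path recovers

    def forward(self, pkt):
        hop = self.routes.get(pkt.dest)
        if hop in self.congested:        # blockage: consult the packet's own policy
            if pkt.on_congestion == "reroute" and pkt.dest in self.alt_routes:
                return self.alt_routes[pkt.dest]
            if pkt.on_congestion == "store":
                self.buffer.append(pkt)  # hold in the network, not at the edge
                return None
            return None                  # drop
        return hop
```

In a passive network, only the node's static routing table matters; here the packet itself helps decide what happens when that table fails.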

There are at least three reasons that active networking has not yet transformed the Internet. First, in the absence of any compelling external pressure such as the Cold War and the threat of physical damage, there was no compelling reason to implement such a scheme broadly. Second, many network builders believed that simply throwing more bandwidth at traffic problems would solve most of them. Third, before the introduction of the present first- and second-generation network processors, the CPUs available were not adequate to the job.

The first reason is no longer relevant, obviously. And as far as the second is concerned, more bandwidth will not solve the kinds of problems a cyber-attack would bring. Everything now hinges on the third perceived roadblock. The big question is whether even the new generation of network processors now being deployed in 1 to 100 Gbps routers and switches is advanced enough, programmable enough, and powerful enough to give the Internet the kind of real-time reconfigurability and survivability needed.

With the new NPs embedded in switches and routers, we are certainly much closer than when the concept was first being researched. But we are still very far from the dynamically reconfigurable hardware we will need. Right now, switches and routers manage security and Quality of Service attributes according to a set of policies. Policy is distributed throughout the network via a server and is basically static, configured once by a network manager. The networking equipment examines the header of each incoming packet and performs a complex comparison between the received header and a database of destinations, applications, and individual users. Depending on the result, the packet may be denied access to the network for security reasons, or it may be granted higher priority to network bandwidth for mission-critical applications.
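That static, first-match policy lookup can be sketched as follows. The field names, addresses, pool of actions, and policy layout here are invented for illustration; real equipment implements the equivalent in hardware lookup tables:

```python
# Each policy is a (match-predicate, action) pair; first match wins,
# with an empty predicate as the catch-all default.
POLICIES = [
    ({"src": "10.0.0.66"},                   ("deny", None)),            # blocked host
    ({"dst_port": 443, "app": "payments"},   ("permit", "high")),        # mission-critical
    ({},                                     ("permit", "best-effort")), # default
]

def classify(header):
    """Compare a packet header against the policy database; first match wins."""
    for match, (action, priority) in POLICIES:
        if all(header.get(k) == v for k, v in match.items()):
            return action, priority
    return "deny", None
```

The key limitation the column describes is visible here: the table is fixed by an administrator, and nothing in the data path can rewrite it on the fly.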

From what I have been told, we are slowly moving to the next stage, in which the application residing on the originating host server takes a role in managing the network by setting specific bits in the packet headers to indicate requested priorities to the network equipment. These signals may be sent in response to data received about congestion in the network. The implication for networking equipment is that as applications become more intelligent and network-aware, the header-comparison functions may change significantly. Indeed, it may become necessary to examine the payload of the packet in addition to the header.
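One real-world form of such application-set priority bits is the DiffServ code point carried in the six high bits of the IP TOS/Traffic Class byte. A minimal sketch of how an application could mark its own traffic (the helper function is mine, not a standard API):

```python
# DiffServ marking: DSCP occupies the top 6 bits of the TOS byte,
# ECN the bottom 2 (per RFC 2474 / RFC 3168).
DSCP_EF = 46  # Expedited Forwarding: commonly requested for latency-sensitive traffic

def mark_tos(dscp, ecn=0):
    """Build the TOS byte: DSCP in the top 6 bits, ECN in the bottom 2."""
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

# An application would typically apply this via a socket option, e.g. on Linux:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, mark_tos(DSCP_EF))
```

Whether downstream equipment honors the mark is exactly the policy question the column raises: the bits are a request, not a command.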

The final step, and it is still a big one, is the move to an active rather than largely passive environment, in which the application includes network-programming instructions along with the transmitted data to customize network paths for specific application tasks. In this case the network equipment has to be dynamically programmable. But when the security mandates of this new environment are combined with the continuing drive to higher and higher bandwidths, getting there will not be easy. It is now common for a single networking platform to support access links of OC-3 (155 Mbps) and trunk links of OC-48 (2.5 Gbps). In the near future, thanks to advances in the optical networking domain, equipment providers will need to support OC-192 (10 Gbps) connections.

Regardless of bandwidth, the packet-parsing function becomes more complex in an active network as the provisioning of policies becomes more dynamic. The network processing engines will need to be able to change the received packet header fields that are used to determine security or Quality of Service attributes. In some cases it may be necessary to examine data from the packet payload in addition to the packet header, for example in Web or content-switching applications where the URL is used to determine the route of the packet. Ideally this function will be software-programmable, ensuring that the platform is future-proofed against constantly evolving requirements.
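A toy sketch of such URL-based content switching, peeking past the headers into an HTTP/1.x request line (the pool names and URL prefixes are invented for illustration):

```python
# Map URL prefixes to back-end server pools; anything else goes to the default.
POOLS = {"/images/": "image-servers", "/api/": "app-servers"}

def route_by_url(payload: bytes, default="web-servers"):
    """Extract the path from a raw HTTP/1.x request and map it to a server pool."""
    try:
        request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
        method, path, _version = request_line.split(" ")
    except ValueError:            # not parseable as an HTTP request line
        return default
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return pool
    return default
```

Doing this in Python at OC-3 is plausible; doing the equivalent at OC-192 line rate is what demands the programmable network processors the column is arguing for.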

This will be difficult enough even at OC-48 data rates, but at OC-192 and beyond? Aside from coming up with the kind of processor engine that can give us both reconfigurability and performance, we will all have to make some decisions: Do we want security and reconfigurability first and performance second? Or vice versa? I know what the general public would want, if they understood the issues. And I certainly know what the U.S. government will want.

Provisioning the net-centric information superstructure for not only performance but also security and after-the-fact fault tolerance and resilience will require new processor concepts and new distributed operating systems unlike any that have preceded them. And if that is not possible, I would hope that some sort of industry consensus could be reached (yeah, sure), one way or the other. If not, a standard or set of specifications and requirements not necessarily to the liking of the industry will certainly be imposed.

In the 1950s, Republican President Eisenhower and a Democratic Congress not only passed the laws that launched the nation-wide interstate highway system and created the research agency that would eventually give us the Internet, they also initiated the Federal government's involvement in setting standards and rules for both. So I do not see why Washington would hesitate to do so now, especially in the face of what the Federal government considers a clear and present danger.
