You'd have to be blind and deaf not to have noticed the excitement over net-centric peer-to-peer computing, thanks to the various MP3 audio download and sharing sites on the World Wide Web.
But I suspect that peer-to-peer connectivity will have profound effects on net-centric computing beyond allowing Web-enabled information appliances and desktop personal computers to connect more or less directly with one another to share files.
What most fascinates me is its impact on embedded computing, particularly resource-limited 8- and 16-bit devices. Direct peer-to-peer connectivity means that such devices, no matter how dumb, may be able to talk to each other and share resources without the impact on real-time performance and determinism that most other controller-area networking schemes impose.
This kind of microcontroller-based net-centric democracy is not without its problems. However, these problems seem to me to be purely technical.
The key to peer-to-peer connectivity in embedded devices is the ubiquitous TCP/IP protocol suite. The Transmission Control Protocol portion provides reliable end-to-end connectivity across one or more networks and runs over the Internet Protocol. The combination provides a simultaneous two-way link between a socket on the source and a socket on the destination, each of which has a globally unique identifier consisting of the IP address plus a port number assigned by the host computer. To provide reliable performance, TCP also includes other services such as flow control, sequencing, error checking, acknowledgements, retransmission and multiplexing. IP, for its part, is a connectionless packet-switched protocol: data is broken up into packets, each of which carries the unique addresses of both the source and the destination.
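As a minimal sketch of the socket model described above (in desktop Python purely for illustration, not embedded code), the following sets up a TCP connection over the loopback interface; the echo behavior and port assignment are assumptions for the demo, not part of any standard:

```python
# A TCP connection is a pair of endpoints, each identified by (IP address, port).
import socket
import threading

def run_server(server_sock):
    conn, addr = server_sock.accept()   # wait for the peer to connect
    data = conn.recv(1024)              # TCP delivers bytes reliably, in order
    conn.sendall(data.upper())          # echo back, transformed
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the host assign a free port
server.listen(1)
host, port = server.getsockname()       # this (IP, port) pair identifies the endpoint

t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))            # TCP's three-way handshake happens here
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```

Note that the reliability services the column lists (acknowledgement, retransmission, sequencing) are all happening invisibly inside `connect()`, `sendall()` and `recv()`; that hidden machinery is exactly what a pared-down embedded stack must decide whether to keep.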
In peer-to-peer on the desktop, a server acts as a mediator that adds a peer to a list of peers and allows one peer to access the files and resources of another. In connection schemes such as Napster, Gnutella and Freenet, those resources are usually music files. To varying degrees, none of these is true peer-to-peer, because no location talks entirely directly to another, but all are several steps removed from true client/server connections. While there are no clients and servers as such among the various peers, some are more equal than others.
It seems to me that a true peer-to-peer, or more accurately device-to-device, architecture would be the key to making confederations of cooperating intelligent devices, in this case controllers, a practical reality. At least two proprietary schemes, Sun Microsystems' Java-based Jini and Microsoft's Universal Plug and Play (UPnP), aim in this direction.
For embedded controller-based confederations of devices that share both data and resources in a deterministic way, most such proposals seem to be ruled out. Jini is restricted to devices that use Java, which in itself is pretty much a determinism-killer. Jini also assumes the use of a server or gateway. UPnP is less client/server-based, but it too assumes that most of the devices in a cooperating confederation have 32-bit processors with sufficient speed, processing power and memory space to handle the full TCP/IP specification.
Other proprietary schemes, such as EmWare's Emit device networking software infrastructure, are designed specifically for a client/server architecture in which at least one 32-bit device, usually a personal computer, acts as the gateway for devices to talk to each other and to the Internet. In all of these schemes, 8- and 16-bit microcontrollers, which still constitute the largest installed base of CPUs, are second-class citizens because of their inability to handle the full protocols.
If there were a way to pare down any of these schemes, especially TCP/IP, to its absolute minimum features, we would have a true peer-to-peer architecture that is both real-time and deterministic.
There has been a lot of activity aimed at paring down the TCP/IP stack in the embedded Internet device space. Most of this effort assumes that the main task of a net-centric control device is to provide connectivity to a browser on a desktop, or to a PC-based gateway, that gives a person a view into the internal operations of the controller. As a result, almost everything you would want in a desktop stack is still necessary, merely written in space-saving machine code or as sparse, pared-down, non-standard implementations of the stack.
However, if you assume that most of the devices would be operating in a relatively closed environment, a lot more could be pared away — for example, almost anything that is involved in presenting information to a browser. Probably the minimum you could get away with would be a first-among-equals configuration, in which at least one controller would act as an ultra-thin gateway server to a browser-based fat client somewhere.
You'd also probably need to go several steps further if you used anything smaller than a 16-bit microcontroller, or wanted any degree of deterministic response amongst the sharing devices.
What else is there? Well, if the devices are operating in a relatively closed environment in a single network rather than across the Internet, is it necessary to use an IP scheme based on 32-bit address space permitting about four billion unique addresses?
More suitable might be a setup much closer to the original precursor of the Internet: the Arpanet, which used an 8-bit address space and was designed to interconnect about 20 computers.
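The arithmetic behind that trade-off is simple. The sketch below (illustrative only; the one-byte device-ID framing is a hypothetical example, not any standard) compares the two address spaces:

```python
# Address-space sizes for the two schemes discussed above.
ipv4_addresses = 2 ** 32   # standard IP: about 4.29 billion unique addresses
tiny_addresses = 2 ** 8    # an Arpanet-style 8-bit space: 256 addresses

print(ipv4_addresses)
print(tiny_addresses)

# In a small, closed confederation of controllers, a single-byte header
# field is enough to address every node (hypothetical framing):
device_id = 42
header = bytes([device_id])   # one byte instead of the four an IP address needs
```

For a confederation of a few dozen microcontrollers on a single closed network, 256 addresses is ample, and shaving three bytes per address out of every packet header matters on an 8-bit part.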
Thus pared down, such a configuration would be appropriate for most confederations of microcontrollers as long as all they need is soft real-time response. For more than that, it would be necessary to go even further and replace the TCP half of the TCP/IP stack. TCP was designed to operate in an asynchronous environment and ensures reliable connections by establishing each one through a time-consuming three-way handshake. Thus, its ability to operate in any way that is remotely deterministic is severely limited.
What about going to that often-ignored member of the protocol suite, the User Datagram Protocol (UDP)? Unlike TCP, it is connectionless and was designed for environments where reliability is not required, or where the underlying links are already reliable. By eliminating the entire handshaking and acknowledgement overhead, deterministic sharing between devices should be achievable.
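To make the contrast concrete, here is a minimal sketch of a UDP exchange (again desktop Python for illustration; the sensor-reading payload is a made-up example). No connection is set up: each datagram simply carries its payload to the destination address.

```python
# UDP is connectionless: no handshake, no acknowledgements, no retransmission.
# Each sendto() just launches a datagram at the destination (IP, port).
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # OS assigns a free port
receiver.settimeout(5)                       # don't block forever if the datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"sensor-reading:23.5", addr)  # no connect(), no handshake

payload, source = receiver.recvfrom(1024)    # one datagram, delivered whole
sender.close()
receiver.close()
print(payload.decode())
```

On the loopback interface this datagram arrives reliably in practice; on a real network it could simply be lost, which is exactly the reliability-for-determinism trade-off the column describes.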
The downside to all of this slicing and dicing is that it sacrifices one of the main advantages of TCP/IP: its operation as a global peer-to-peer mechanism entirely independent of the underlying physical transport. Depending on how the microcontroller connection is implemented, it may or may not be able to communicate with an external browser to allow monitoring of activity, which is one of the main attractions of TCP/IP to engineers considering its use in industrial control and home networking.
Perhaps the solution is to implement the entire TCP/IP suite of services in hardware, which would allow deterministic peer-to-peer operation among cooperating devices locally while remaining available for connections on a global basis. The key, however, will be cost. Can the entire TCP/IP stack be implemented in silicon at a price (less than 50 cents) that would make it attractive in the 8- and 16-bit microcontroller world?
When all is said and done, however, I still have some questions that would make all of this moot: Is there any practical reason for having a confederation of cooperating microcontroller peers that share both data and resources? And if so, is it worth the effort it will take to get there?
Bernard Cole is the managing editor for embedded design and net-centric computing at EE Times. He welcomes contact; you can reach him at 520-525-9087.
Hmmm. My big concern would be opening up all those 8- and 16-bit micros to network hackers. Do you really want the controller in your respirator (which would now be sharing data over a LAN to the nurses' station) to be open to standard Internet hacking techniques? I wouldn't! Some things need to stay isolated.
Bernie replies: I agree, some designs will and should remain isolated. But the temptation to add connectivity to solve the problem of providing resources the controller might not have locally could be overwhelming. This is especially the case when marketing issues are factored in, or when the customer for the design insists on it. At an engineering conference a while back I was told about the design of a system that would allow power meter readers to collect information about power consumption via a small transmitter on the meter that could be scanned from a few hundred feet. The engineering firm came up with an excellent design based on C that met the requirements. But the customer turned down the design because it was not based on the "latest" technology, in this case Java, even though the Java version would cost more, would not be deterministic, would require more memory and would be slower.
I've recently started studying JXTA. It's touted as a way to implement peer-to-peer connections on anything with a "digital heartbeat." Did you consider JXTA when preparing this article?
Michael G. Marriam
Sr. Software Engineer
Detection Systems Inc.
It seems that the SoC trend (leading into multi-core technology) will be in need of some protocol for inter-core communication. This may be too much, but the environment is closed (impervious to hackers) and the code exists (mostly).
Firmware Systems Engineer
What's the point of TCP/IP in such a setting? We have excellent embedded network schemes like CAN, ByteFlight and FlexRay, all of which are designed for determinism and for on-chip implementation in microcontrollers.
The addition of TCP/IP strikes me as overhead, unless Internet connectivity is absolutely needed. But the idea of a "closed" system that does NOT interface to a PC gives us complete freedom in choosing our networking architecture, and that freedom should be used.
And then it makes sense to choose stuff designed and built for predictable and real-time performance.