P2P, embedded sensors and deterministic wireless networks
You'd have to be blind and deaf not to have noticed the excitement over
peer-to-peer (P2P) networking, particularly at venues such as last
week's Sensor Expo. For a long time, P2P was used predominantly for transferring
and sharing music over the Internet in violation of licensing terms,
but P2P file-sharing systems now generate a significant portion of
legitimate Internet traffic.
Service providers, too, are using P2P mobile ad hoc networks in 2.5/3G mobile devices to give subscribers a controlled and monitored way to communicate and share files device-to-device without a server intermediary.
In embedded systems, P2P is getting a lot of attention as a means to allow servers, blade computers and telecom boards to share information and resources. It is also being considered as a way for wirelessly connected MCUs and sensors to do the same. But for now, embedded P2P usage is limited to applications where deterministic response is not important or where such responses can be carefully bounded and calculated.
Soon, however, P2P connectivity will have a much greater impact on embedded computing, particularly on resource-limited 8- and 16-bit devices connected to sensors. Direct peer-to-peer connectivity means that such devices will be able to talk to each other and share resources without the negative impact on real-time performance and determinism that most server-mediated and controller-area networking schemes impose.
At the recent Sensor Expo there were several presentations on using P2P as a linking mechanism in wireless networks of sensors and MCUs. One approach described peer-to-peer clustering, which links servers according to their geographical or interest-based locality so that requests can be answered faster.
Another presentation described wireless networks of microcontrollers and sensors distributed throughout a military aircraft. These networks use a technique reminiscent of how the Ethernet protocol was adapted to the real-time needs of industrial networks. In this approach, the number of devices is limited to groups of closely linked nodes that are isolated from the broader network so response times, delivery times, noise and other issues can be carefully calculated and constrained.
Since the emergence of the IEEE 1588 Precision Time Protocol (PTP),
this kind of bounded, isolated application of the Ethernet protocol is
no longer necessary in the wired industrial Ethernet market: IEEE 1588
can be used to synchronize clocks in networks running industrial
Ethernet protocols such as EtherNet/IP and Ethernet Powerlink.
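The synchronization IEEE 1588 provides rests on a simple four-timestamp exchange between a master and a slave clock. A minimal sketch of the arithmetic (the function name is mine, and a symmetric path delay is assumed, as in the standard's basic delay request-response mechanism):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response arithmetic.

    t1: master sends Sync        (master clock)
    t2: slave receives Sync      (slave clock)
    t3: slave sends Delay_Req    (slave clock)
    t4: master receives Delay_Req (master clock)

    Assumes the one-way path delay is the same in both directions.
    Returns (slave clock offset from master, one-way path delay).
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: a slave clock running 100 units ahead of the master,
# over a link with a one-way delay of 5 units.
offset, delay = ptp_offset_and_delay(t1=0, t2=105, t3=200, t4=105)
```

Once the slave knows its offset, it can discipline its local clock toward the master's timebase; the asymmetry of real wireless links is exactly what makes applying this in the air interface harder than on a wire.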
As far as I know, no similar protocol has been considered for wireless control networks, and it is still uncertain whether IEEE 1588 can be widely applied there. At the Sensor Expo I spotted only one paper describing the use of 1588 in wireless control networks: it covered multiple wireless zones scattered throughout a large aircraft, using a piconet developed by Boeing for harsh airborne environments.
Until 1588 or some other real-time, deterministic mechanism is developed, the key to peer-to-peer connectivity in embedded devices is what embedded developers come up with to make the ubiquitous TCP/IP protocol do deterministic tricks.
The Transmission Control Protocol (TCP) portion of TCP/IP provides for reliable end-to-end connectivity across one or more networks over the Internet. In order to provide reliable performance, TCP also includes other services such as flow control, sequencing, error checking, acknowledgements, retransmission and multiplexing.
In most so-called peer-to-peer networks on desktop and mobile devices, a server still acts as a mediator that allows one peer to access the files and resources of another. It seems to me that a true peer-to-peer (or device-to-device) architecture would be the key to making confederations of cooperating controllers and sensors a practical reality.
If there were a way to pare down TCP/IP to its absolute minimum of features, the result would be a true peer-to-peer architecture that is both real-time and deterministic. A number of proprietary schemes have focused on exactly that.
Most of these schemes assume that if the devices are operating in a relatively closed environment in a single network rather than across the Internet, it's not necessary to use an IP scheme based on a 32-bit address space that permits about four billion unique addresses. Nor is it necessary to carry the overhead to ensure delivery of packets. Thus pared down, such a configuration would be appropriate for most confederations of microcontrollers as long as all they need is soft real-time response.
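To make the savings concrete, here is a hypothetical frame format of the kind such a pared-down scheme might use on a closed, single-segment network: single-byte node addresses instead of 32-bit IP addresses, and no checksum, sequencing, or retransmission state, since the link layer is assumed reliable. The layout and names are mine, purely for illustration:

```python
import struct

# Hypothetical 4-byte header for a closed network of at most 256 nodes:
# source node, destination node, message type, payload length.
HEADER = struct.Struct("!BBBB")

def pack_frame(src, dst, msg_type, payload: bytes) -> bytes:
    """Build a frame: 4-byte header followed by the raw payload."""
    return HEADER.pack(src, dst, msg_type, len(payload)) + payload

def unpack_frame(frame: bytes):
    """Split a frame back into (src, dst, msg_type, payload)."""
    src, dst, msg_type, length = HEADER.unpack_from(frame)
    return src, dst, msg_type, frame[4:4 + length]
```

Four bytes of overhead per message, versus the 20-byte minimum IP header plus a 20-byte minimum TCP header, is the sort of trade these proprietary schemes make when global addressing and guaranteed delivery are not needed.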
For anything more demanding, it is often necessary to go even further and replace the TCP half of the TCP/IP stack. TCP was designed to operate in an asynchronous environment. It establishes reliable connections through a time-consuming three-way handshake, so its ability to operate deterministically is severely limited.
What about turning to that often-ignored member of the protocol suite, the User Datagram Protocol (UDP)? It is connectionless, designed for environments where reliable links already exist or for closed environments where a reliable link can be taken as a given. Because it eliminates the asynchronous handshaking overhead entirely, deterministic sharing between devices should be achievable.
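The contrast with TCP is easy to see in code. In this sketch, two UDP "peers" on the loopback interface exchange datagrams directly, with no connection setup or handshake of any kind before the first byte of data moves:

```python
import socket

# Two UDP peers on the loopback interface. Neither listens for
# connections; each simply binds a port and can send or receive
# datagrams at any time.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))          # let the OS pick a free port
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))

# Peer b sends a reading straight to peer a -- no SYN/SYN-ACK/ACK.
b.sendto(b"sensor-reading:42", a.getsockname())
data, sender = a.recvfrom(1024)

# Peer a replies directly to the source address of the datagram.
a.sendto(b"ack", sender)
reply, _ = b.recvfrom(1024)

a.close()
b.close()
```

Any reliability the application needs (the "ack" above, for instance) is layered on explicitly, which is precisely what lets a designer bound it, rather than inheriting TCP's unbounded retransmission behavior.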
The downside is that it eliminates one of the main advantages of TCP/IP: a global peer-to-peer mechanism that is entirely independent of the underlying physical transport layer.
Perhaps the solution is to implement the entire TCP/IP suite of services in hardware. That would allow deterministic peer-to-peer operation among cooperating devices locally while keeping the devices available for connection on a global basis. Similar TCP/IP Offload Engines (TOEs) are regularly used to reduce the load on the main network processor in routers and switches.
The key, however, will be cost. Can the entire TCP/IP stack be implemented in silicon at a price (in terms of both currency and die area) that would make it attractive in the 8-, 16- and 32-bit microcontroller world?
And if so, is it worth the effort it will take to get there? Given
what I have been reading about the Department of Defense's plans for
extensive use of sensors in its Global Information Grid, I believe it is.