In the conclusion of a four-part series, the authors of “Ad hoc wireless networks” look at the remaining challenges in wireless sensor network design relating to energy-efficient hardware design, synchronization, transport layer protocols, and real-time communications.
The purpose of a sensor network is to monitor and report events or phenomena taking place in a particular area. Hence, the main parameters which define how well the network observes a given area are “coverage” and “exposure.”
Coverage is a measure of how well the network can observe or cover an event. It depends on the range and sensitivity of the sensing nodes, and on their location and density in the given region.
The worst-case coverage defines areas of breach, that is, where coverage is the poorest. This can be used to determine whether additional sensors need to be deployed to improve the network. The best-case coverage, on the other hand, defines the areas of best coverage. A path along the areas of best coverage is called a maximum support path or maximum exposure path.
A mathematical technique to solve the coverage problem is the Voronoi diagram. It can be proved that the breach path PB will be composed of line segments that belong to the Voronoi diagram corresponding to the sensor graph.
In two dimensions, the Voronoi diagram of a set of sites is a partitioning of the plane into a set of convex polygons such that all points inside a polygon are closest to the site enclosed by the polygon, and the polygons have edges equidistant from the nearby sites. A Voronoi diagram for a sensor network, and a breach path from I to F, are shown in Figure 12.14 below.
As shown in Figure 12.14 above, the algorithm to find the breach path PB is:
1) Generate the Voronoi diagram, with the set of vertices V and the set of edges E. This is done by drawing the perpendicular bisectors of every line segment joining two sites, and using their points of intersection as the vertices of the convex polygons.
2) Create a weighted graph with vertices from V and edges from E, such that the weight of each edge in the graph is the minimum distance from all sensors in S. The edge weights represent the distance from the nearest sensor. Smaller edge weights imply better coverage along the edge.
3) Determine the maximum cost path from I to F, using breadth-first search.
The maximum cost implies least coverage. Hence, the required breach path is along this maximum-cost path determined from the Voronoi diagram. The breach path shows the region of maximum vulnerability in a sensor network, where the coverage provided by the sensors is the weakest.
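Since the breach path maximizes the smallest distance to any sensor along the way, step 3 amounts to a widest-path (max-min) search over the weighted Voronoi graph. The following Python sketch illustrates this with a Dijkstra-style search on a small hand-made graph; the node names, edge weights, and search method are illustrative assumptions, as the algorithm statement above leaves the search technique open.

```python
import heapq

def maximal_breach_path(graph, start, goal):
    """Find the path from start to goal that maximizes the smallest
    edge weight encountered (the breach distance to the nearest sensor).
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    best = {start: float("inf")}   # largest achievable bottleneck per node
    prev = {start: None}
    heap = [(-float("inf"), start)]  # max-heap via negation
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if b < best.get(u, float("-inf")):
            continue               # stale heap entry
        for v, w in graph[u]:
            cand = min(b, w)       # bottleneck of the path through u
            if cand > best.get(v, float("-inf")):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    path, node = [], goal          # reconstruct the winning path
    while node is not None:
        path.append(node)
        node = prev[node]
    return best[goal], path[::-1]

# Toy graph standing in for the Voronoi edges; weights are the
# minimum distances from each edge to the sensors (hypothetical values).
G = {
    "I": [("a", 3.0), ("b", 1.0)],
    "a": [("I", 3.0), ("F", 2.5)],
    "b": [("I", 1.0), ("F", 4.0)],
    "F": [("a", 2.5), ("b", 4.0)],
}
bottleneck, path = maximal_breach_path(G, "I", "F")
print(bottleneck, path)  # 2.5 ['I', 'a', 'F']
```

Here the route I -> a -> F wins because its smallest edge weight (2.5) exceeds that of the alternative route through b (1.0), so it stays farther from the sensors.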
A related problem is that of finding the best-case coverage. The problem is formally stated as finding the path which offers the maximum coverage, that is, the maximum support path PS in S, from I to F. The solution is obtained by a mathematical technique called Delaunay triangulation, shown in Figure 12.15 below.
This is obtained from the Voronoi diagram by connecting the sites whose polygons share a common edge. The best path PS will be a set of line segments from the Delaunay triangulation, connecting some of the sensor nodes.
The algorithm is again similar to that used to find the maximum breach path, replacing the Voronoi diagram by the Delaunay triangulation, and defining the edge costs proportional to the line segment lengths. The maximum support path is hence formed by a set of line segments connecting some of the sensor nodes.
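Dually, the maximum support path can be computed as a minimax path over the Delaunay edges: minimize the largest line-segment length traversed, so the path never strays far from some sensor. A sketch under the same illustrative assumptions (hand-made graph, hypothetical segment lengths):

```python
import heapq

def maximal_support_path(graph, start, goal):
    """Minimax path: minimize the largest edge length along the way.
    graph: dict mapping node -> list of (neighbor, segment_length)."""
    best = {start: 0.0}   # smallest achievable max-edge to each node
    heap = [(0.0, start, [start])]
    while heap:
        b, u, path = heapq.heappop(heap)
        if u == goal:
            return b, path
        if b > best.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            cand = max(b, w)              # worst segment on this route
            if cand < best.get(v, float("inf")):
                best[v] = cand
                heapq.heappush(heap, (cand, v, path + [v]))
    return float("inf"), []

# Toy graph standing in for Delaunay edges between sensor sites.
G = {
    "I":  [("s1", 2.0), ("s2", 5.0)],
    "s1": [("I", 2.0), ("F", 3.0)],
    "s2": [("I", 5.0), ("F", 1.0)],
    "F":  [("s1", 3.0), ("s2", 1.0)],
}
cost, path = maximal_support_path(G, "I", "F")
print(cost, path)  # 3.0 ['I', 's1', 'F']
```

The route through s1 wins because its longest segment (3.0) is shorter than the 5.0 segment any route through s2 must traverse.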
Exposure is defined as the expected ability of observing a target in the sensor field. It is formally defined as the integral of the sensing function on a path from source node Ps to destination node Pd. The sensing power of a node s at point p is usually modeled as

S(s, p) = γ / [d(s, p)]^k

where γ and k are constants, and d(s, p) is the distance of p from s. Consider a network with sensors s1, s2, …, sn. The total intensity at point p, called the all-sensor field intensity, is given by

IA(F, p) = Σ (i = 1 to n) S(si, p)

The closest-sensor field intensity at p is

IC(F, p) = S(smin, p)

where smin is the closest sensor to p. The exposure during travel of an event along a path p(t) is defined by the exposure function

E(p(t), t1, t2) = ∫ (t1 to t2) I(F, p(t)) |dp(t)/dt| dt

where |dp(t)/dt| is the elemental arc length, and t1, t2 are the time instants between which the path is traversed. For conversion from Cartesian coordinates (x(t), y(t)),

|dp(t)/dt| = √[(dx(t)/dt)² + (dy(t)/dt)²]
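Assuming the closest-sensor intensity model, the exposure integral can be evaluated numerically; the sketch below uses a midpoint rule and a finite-difference arc-length element. The constants γ = 1 and k = 2, the path, and the step count are illustrative choices. For a single sensor at the origin and the vertical line x = 1 traversed from (1, -1) to (1, 1), the integral has the closed form 2·arctan(1) = π/2, which the numerical result reproduces.

```python
import math

def intensity(p, sensors, gamma=1.0, k=2.0):
    """Closest-sensor field intensity: gamma / d(smin, p)^k."""
    d = min(math.dist(p, s) for s in sensors)
    return gamma / d ** k

def exposure(path_fn, t1, t2, sensors, steps=1000):
    """Midpoint-rule approximation of the exposure integral
    of I(F, p(t)) * |dp/dt| over [t1, t2]."""
    total, dt = 0.0, (t2 - t1) / steps
    for i in range(steps):
        t = t1 + (i + 0.5) * dt
        p = path_fn(t)
        q = path_fn(t + 1e-6)             # finite-difference arc length
        speed = math.dist(p, q) / 1e-6
        total += intensity(p, sensors) * speed * dt
    return total

# One sensor at the origin; straight path from (1, -1) to (1, 1).
sensors = [(0.0, 0.0)]
line = lambda t: (1.0, -1.0 + 2.0 * t)    # parameterized over t in [0, 1]
val = exposure(line, 0.0, 1.0, sensors)
print(round(val, 3))  # 1.571, i.e. pi/2
```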
In the simplest case of having one sensor node at (0, 0) in a unit field, the breach path or minimum exposure path (MEP) from (-1, -1) to (1, 1) is shown in Figure 12.16 below.
The MEP consists of the line segment from vi to ui, part of the inscribed circle from ui to uj, and the line segment from uj to vj. This is shown in Figure 12.17 below.
The exposure problem is still unsolved for two points in the same corner, or for points within the inscribed circle. For the generic exposure problem of determining the MEP for randomly placed sensor nodes in the network, the network is tessellated with grid points. An example is shown in Figure 12.18 below.
Within each square, all vertices are connected to obtain a grid. Higher-order grids have greater accuracy. For each edge in the grid network, the exposure function is used to determine the edge weights, and the MEP is defined as the shortest path, determined by Dijkstra's algorithm.
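The grid-based MEP computation can be sketched as follows; the grid resolution, the midpoint-rule edge weights, and the corner-to-corner endpoints are assumptions of this illustration rather than prescriptions of the algorithm.

```python
import heapq, math

def min_exposure_path(sensors, n=10, gamma=1.0, k=2.0):
    """Tessellate the unit square into an n x n grid, weight each
    4-neighbor edge by the intensity at its midpoint times its length,
    and run Dijkstra from corner (0, 0) to corner (n, n)."""
    def intensity(x, y):
        d = min(math.dist((x, y), s) for s in sensors)
        return gamma / max(d, 1e-9) ** k      # guard the singularity
    def weight(a, b):
        (i1, j1), (i2, j2) = a, b
        mx, my = (i1 + i2) / (2 * n), (j1 + j2) / (2 * n)
        return intensity(mx, my) * (1.0 / n)  # midpoint rule
    start, goal = (0, 0), (n, n)
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        i, j = u
        for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= v[0] <= n and 0 <= v[1] <= n:
                nd = d + weight(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))

# A single sensor at the center: the MEP hugs the field boundary,
# and its exposure is close to 2*pi for this intensity model.
mep = min_exposure_path([(0.5, 0.5)])
print(round(mep, 3))
```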
The mathematical concept of exposure is important for evaluating the target detection capability of a sensor network. Sensors are deployed in a given area to detect events occurring in the field of interest.
The nodes collaborate among themselves (perform data fusion) through the exchange of localized information, and reach a decision about the location and movement of a given event or target. Clouqueur et al. discuss a probabilistic protocol for target detection, where the observations made by individual sensors are combined, and the presence or movement of a target is probabilistically determined by data fusion, with allowance for noise in data recording. The network topology which gives maximum exposure is also determined analytically.
Standards for sensor networks are at an incipient stage. The IEEE 802.15.4 low-rate wireless personal area networks (LR-WPANs) standard targets a low-data-rate solution with multi-month to multi-year battery life and very low complexity. It is intended to operate in an unlicensed, international frequency band. Potential applications of this standard include sensor networks, home automation, and remote controls. The eighteenth draft of this standard was accepted in May 2003.
This standard aims to define the physical and MAC layer specifications for sensor and other WPAN networks. Low power consumption is an important feature targeted by the standard. This requires reduced transmission rate, power-efficient modulation techniques, and strict power management techniques such as sleep modes.
Different network configurations and topologies were compared, and star and mesh networks were found to be favorable. The standard also proposes a generic frame structure whose length can be varied according to the application.
Other standards under development include the SensIT project by the Defense Advanced Research Projects Agency (DARPA), which focuses on large distributed military systems, and the ZigBee Alliance, which addresses industrial and vehicular appliances.
The IEEE 1451.5 wireless smart transducer interface standard is still under review. It is proposed to include multiple combinations of MAC and physical layers, using the IEEE 802 approach as a model.
The following are some of the issues currently being explored in sensor networks: energy-efficient hardware and architecture, real-time communication, transport layer protocols, and security. Because these are mostly at the research stage, there are many improvements to be made on these fronts.
As has been emphasized throughout the chapter, sensor nodes have a very stringent energy constraint. Energy optimization in sensor networks must prolong the life of a single node as well as of the entire network. Power saving in the micro-controller unit has been analyzed by comparing the power required by different processors.
The choice of the processor should be application-specific, such that performance requirements are met with the least power consumption. Computation can be carried out in a power-aware manner using dynamic power management (DPM).
One of the basic DPM techniques is to shut down several components of the sensor node when no events take place. The processor has a time-varying computational load, hence the voltage supplied to it can be scaled to meet only the instantaneous processing requirement. This is called dynamic voltage scaling (DVS).
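The energy payoff of DVS comes from the quadratic dependence of CMOS switching energy on supply voltage. A back-of-the-envelope sketch, with hypothetical capacitance and voltage figures, shows the scale of the saving when a task is stretched out at a lower voltage instead of racing ahead at full voltage and idling:

```python
def switching_energy(c_eff, v_dd, cycles):
    """Dynamic (switching) energy of CMOS logic: roughly
    C_eff * Vdd^2 per cycle, summed over the task's cycle count."""
    return c_eff * v_dd ** 2 * cycles

# Hypothetical figures: a 10^6-cycle task, 1 nF effective capacitance.
full = switching_energy(1e-9, 3.3, 1e6)    # run fast at 3.3 V, then idle
scaled = switching_energy(1e-9, 1.8, 1e6)  # stretch the task at 1.8 V
print(round(full / scaled, 2))  # 3.36: over 3x saving from voltage alone
```

The task takes longer at the lower voltage, which is exactly why the scheduler must predict the computational load: DVS only helps when the stretched task still meets its deadline.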
The software used for sensor networks such as the operating system, application software, and network software can also be made energy-aware. The real-time task scheduler should actively support DVS by predicting the computation and communication loads.
Sensor applications can use a trade-off between energy and accuracy by performing the most significant operations first, so that premature termination of the computation due to energy constraints does not affect the result by a large margin.
The communications subsystem should also perform energy-aware packet forwarding. The use of intelligent radio hardware enables packets to be forwarded directly from the communication subsystem, without processing them through the micro-controller.
Techniques similar to DVS are used for modulation, to transmit data using a simpler modulation scheme, thereby consuming less energy, when the required data transmission rate is lower. This is called modulation scaling.
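A rough model often used to reason about modulation scaling is that the required symbol energy grows on the order of 2^b - 1 for a b-bit constellation at a fixed error rate, while each symbol carries b bits; under that simplifying assumption, smaller constellations yield cheaper bits at the price of a lower data rate.

```python
def energy_per_bit(b, e_base=1.0):
    """Energy per bit for a 2^b-point constellation, assuming symbol
    energy scales like (2^b - 1) at a fixed error rate while each
    symbol carries b bits (a simplifying assumption of this sketch)."""
    return e_base * (2 ** b - 1) / b

# Fewer bits per symbol -> simpler modulation -> cheaper bits.
for b in (2, 4, 6):
    print(b, energy_per_bit(b))  # 1.5, 3.75, 10.5 energy units per bit
```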
Besides incorporating energy-efficient algorithms at the node level, there should be a network-wide cooperation among nodes to conserve energy and increase the overall network lifetime. The computation-communication trade-off determines how much local computation is to be performed at each node and what level of aggregated data should be communicated to neighboring nodes or BSs.
Traffic distribution and topology management algorithms exploit the redundancy in the number of sensor nodes to use alternate routes so that energy consumption all over the network is nearly uniform.
Synchronization among nodes is essential to support TDMA schemes on multi-hop wireless networks. Also, time synchronization is useful for determining the temporal ordering of messages sent from sensors and the proximity of the sensors.
Usually, sensor nodes are dropped into the environment from which data has to be collected, and their exact positions are not fixed before deployment. Hence, synchronization is the only way by which the nodes can determine their relative positions.
Further, in order to furnish aggregate data to the monitor node, the sensors must evolve a common timescale using their synchronized clocks, to judge the speed of a moving target or phenomenon.
Sensors must be able to recognize duplicate reports of the same event by different nodes and discard them, which means that the node must be able to precisely determine the instant of time at which the event occurred.
There are two major kinds of synchronization algorithms: one which achieves long-lasting global synchronization, that is, lasts throughout the network for its entire lifetime, and one which achieves a short-lived or pulse synchronization where the nodes are synchronized only for an instant.
Synchronization protocols typically involve delay measurements of control packets. The delays experienced during a packet transmission can be split into four major components: send time, access time, propagation time, and receive time. The send time is the time spent at the sender to construct the message. The access time is the time taken by the MAC layer to access the medium, which is appreciable in a contention-based MAC protocol.
The propagation time reflects the time taken by the bits to be physically transmitted through the medium over the distance separating the sender and receiver. The receive time is the time for processing required in the receiver’s network interface to receive the message from the channel and notify the host of its arrival.
If the arrival time is time-stamped at a low layer, overheads of context switches and system calls are avoided, and the arrival time-stamp closely reflects the actual arrival time, with the only non-determinism introduced being due to reception of the first bit.
Many existing synchronization algorithms for sensor networks rely on the time information obtained through the GPS to provide coarse time synchronization. The accuracy of time synchronization provided by GPS depends on the number of satellites observed by the GPS receiver.
In the worst case, with only one observed satellite, GPS offers an accuracy of 1 microsecond. However, GPS is not a suitable choice for sensor networks because GPS receivers cannot be used inside large buildings and basements, or underwater, or in other satellite-unreachable environments where sensor networks may have to be deployed.
A low-power synchronization scheme called post facto synchronization has been proposed by Elson and Estrin for wireless sensor networks. In this scheme, the clocks of the nodes are normally unsynchronized. When an event is observed, a synchronization pulse is broadcast by a beacon node, with respect to which all nodes normalize their time-stamps for the observation of the event.
This scheme offers short-lived synchronization, creating only an “instant” of synchronization among the nodes which are within transmission range of the beacon node. The propagation delay of the synchronization pulse is assumed to be the same for all nodes.
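The normalization step is simple enough to state in two lines: each node reports its event observation relative to its own reception of the pulse, so absolute clock offsets cancel. A minimal sketch, with made-up timestamps:

```python
def normalize(event_ts_local, pulse_ts_local):
    """Post facto sketch: report the event time relative to the
    arrival of the beacon's synchronization pulse, so unsynchronized
    local clock offsets cancel out."""
    return event_ts_local - pulse_ts_local

# Two nodes with wildly different clock readings observe the same
# event 5 time units before the pulse arrives; both report -5.
print(normalize(100, 105), normalize(2030, 2035))  # -5 -5
```

The agreement holds only to within the differing propagation delays of the pulse, which the scheme assumes to be equal for all nodes in range.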
Yoram Ofek proposed a global synchronization protocol based on exchange of control signals between neighboring nodes. A node becomes a leader when elected by a majority of nodes in the network. A distributed election protocol is used which ensures the presence of a unique leader for the network.
The leader then periodically sends synchronization messages to its neighbors. These messages are broadcast in turn to all nodes of the network. The time-difference bounds have been theoretically analyzed, and fault-tolerance techniques have been added to account for errors in the synchronization messages.
A long-lasting synchronization protocol has also been proposed, which ensures global synchronization of a connected network, or synchronization within connected partitions of a network. Each node in the network maintains its own local clock (real clock) and a virtual clock to keep track of its leader’s clock.
A unique leader is elected for each partition in the network, and virtual clocks are updated to match the leader’s real clock. The leader election process occurs as follows. On power-up, every node makes an attempt to either locate a leader in its partition or claims to be a leader itself. A node decides, with a small probability, to stake a claim for leadership and announces its claim with a random number sent on the claim packet.
This Leader Announcement packet also contains the transmission power used by the node. A node which receives this claim applies a correction for the propagation delay experienced by the claim packet (calculated based on received power), and updates its virtual clock to the expected value of the leader’s real clock at that instant. Time-stamping of claims is performed at the physical layer, to avoid the variable queuing and medium access delays introduced by the MAC layer.
The claim is flooded throughout the partition, bounded by a TTL field. In case two nodes within a partition stake a leadership claim, the one whose Leader Announcement has a higher random number resynchronizes to the leader whose Leader Announcement has the lower random number, and then rebroadcasts the Leader Announcement of the node that generated the lower random number.
In the highly unlikely case of two leaders generating the same random number, node ID is used for resolution. Periodic beaconing ensures that synchronization is maintained throughout the partition, and nodes which join it later also synchronize their clocks.
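The claim-resolution rule reduces to an ordering on (random number, node ID) pairs; a minimal sketch with hypothetical claims:

```python
def winning_claim(claims):
    """Resolve competing Leader Announcements: the lowest random
    number wins; the node ID breaks the (unlikely) tie of two
    leaders generating the same random number."""
    return min(claims, key=lambda c: (c["rand"], c["node_id"]))

claims = [
    {"node_id": 7, "rand": 42},
    {"node_id": 3, "rand": 17},
    {"node_id": 9, "rand": 17},   # same random number: lower ID wins
]
print(winning_claim(claims)["node_id"])  # 3
```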
Resynchronization is the process of synchronizing different network partitions that are independently synchronized to different clocks to a common clock. In dynamic networks such as sensor networks, frequent changes in topology make resynchronization an important issue. Resynchronization takes place in situations such as the merging of two partitions due to mobility, where all clocks in a partition may need to be updated to match the leader of the other partition, as shown in Figure 12.19 below.
The typical TDMA superframe structure is shown in Figure 12.20 below. Presynch frames define the start and end of a superframe, control frames transmit control information, and data frames are the TDMA time slots allotted to the nodes involved in data transfer. A positive shift in resynchronization is defined as the transmission of a data packet at an absolute time later than the slot in the current frame structure.
Negative shift is defined as advancing the start of a superframe to transmit the data packets earlier than the start of transmission in the current frame structure. Resynchronization maintains slot assignment to routes through the node, but shifts the start of the superframe. If the clocks of nodes of partition 1 have to be updated, the superframe can be shifted without loss of data or reconfiguration.
However, if the clocks of partition 2 have to be shifted, as shown in Figure 12.20 (b), some data frames are lost due to the negative shift. If the policy of positive shift is followed uniformly, the nodes must have the capacity to buffer up to an entire superframe’s data packets to start afresh with the new timing, as shown in Figure 12.20 (c).
Buffering alleviates the problem of data loss on the link whose end-points are being resynchronized, but neighboring links may suffer collisions when they follow different clocks. Hence, as the resynchronization proceeds radially from the new leader, there is data loss along the head of the resynchronization wave.
This remains for a time period proportional to the time taken for the Leader Announcement packet to propagate from the leader to the farthest node, which in turn depends on the diameter of the network, until the entire network is resynchronized.
Also, different methods for transmitting the synchronization information have been studied. Out-of-band synchronization uses a separate control channel for sending claim and beacon packets. Collisions are reduced to a great extent for the control packets.
However, the available bandwidth for data transmission is reduced, and the cost of the mobile nodes increases because of the need for an additional radio interface. In in-band synchronization, control information for synchronization shares the same channel with the data packets, as shown in Figure 12.21 (a) below.
This leads to a greater number of collisions, but avoids an additional channel or bandwidth reservation. Piggy-backing can be used to reduce explicit control packets. Control information is piggy-backed onto outgoing data packets, as in Figure 12.21 (b).
This involves very low overhead with each packet and leads to considerable bandwidth saving. A control packet carrying the synchronization information is originated only if there are no data packets to be sent from the node. The scheme can also be applied with piggy-backing on the link-level acknowledgments.
In sensor networks, data usually flows from all sensors to the monitor, which is a fixed node with greater computing and power resources than the sensors.
If the monitor is forced to be the leader, the synchronization information moves in the reverse direction, that is, along the link-level acknowledgments sent by the nodes for each hop of the data packets, as shown in Figure 12.21 (c). Using simulation studies, this has been observed to be the most efficient mechanism.
Transport Layer Issues
The major issue in transport layer protocols for sensor networks is the provision of reliable data delivery. This assumes special significance in the design of general-purpose sensor networks, where groups of nodes may need to be reconfigured or reprogrammed to suit an evolving application. This may require disseminating a code segment to some nodes, where loss of even a single line of code would render the retasking operation a failure.
A reliable, robust, scalable, and customizable transport protocol, pump slowly fetch quickly (PSFQ), has been proposed. The key concept behind the protocol is that a source node distributes data at a slow rate (pump slowly), and a receiver node which experiences data loss retrieves the missing data from immediate neighbors quickly (fetch quickly). PSFQ assumes that data loss is due to poor link conditions rather than traffic congestion.
It proposes a hop-by-hop error recovery scheme, rather than holding only the destination node responsible for error detection. The overhead of requiring intermediate nodes to keep track of forwarded data is justified in sensor networks, because most transmissions are intended for groups of sensors, so intermediate nodes are also intended receivers.
PSFQ consists of three functions: message relaying (pump), error recovery (fetch), and selective status reporting (report). The pump operation disseminates data to all target nodes, performs flow control, and localizes loss by ensuring caching at intermediate nodes.
Hence, the errors on one link are rectified locally without propagating them down the entire path. When a receiver detects gaps in the received sequence numbers, a loss is indicated, and it goes into fetch mode, requesting a retransmission from neighboring nodes. An attempt is made to aggregate losses, that is, many message losses are batched into a single fetch operation, which is especially appropriate for bursty losses.
PSFQ supports a report operation to provide feedback on data delivery status to the source. The farthest target node initiates its report on the reverse path of data, and all intermediate nodes append their reports to the same. Hence, PSFQ ensures that data segments are delivered to all intended receivers in a scalable and reliable manner, even in an environment where the radio link quality is poor. It has been observed that the ratio between the fetch and pump rates should be around 5 for maximum effectiveness.
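The fetch-side loss aggregation can be sketched as a scan over the received sequence numbers, batching consecutive gaps into ranges so that one fetch covers a whole burst of losses; the sequence numbers below are illustrative.

```python
def missing_ranges(received, last_expected):
    """Batch the gaps in the received sequence numbers into
    (lo, hi) ranges, so a single fetch request can ask for an
    entire burst of lost messages."""
    have = set(received)
    ranges, lo = [], None
    for seq in range(1, last_expected + 1):
        if seq not in have:
            lo = seq if lo is None else lo   # open (or extend) a gap
        elif lo is not None:
            ranges.append((lo, seq - 1))     # close the gap
            lo = None
    if lo is not None:
        ranges.append((lo, last_expected))   # gap runs to the end
    return ranges

print(missing_ranges([1, 2, 5, 6, 9], 10))  # [(3, 4), (7, 8), (10, 10)]
```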
A recent protocol, event-to-sink reliable transport (ESRT), provides a new perspective on reliability in sensor networks. It defines event-to-sink reliability in place of the traditional end-to-end reliability provided by the transport layer; that is, data about the event is to be carried reliably to the sink, with minimum energy expenditure.
The sink is required to track reliably only the collective report about an event and not individual reports from each sensor. This enables a relaxation in stringent end-to-end reliability for each flow.
The salient features of ESRT are its self-configuring capability, energy awareness, and congestion control. ESRT defines the term observed reliability as the number of packets that are routed from event to sink, and required reliability as the desired number of such packets for the event to be successfully tracked.
If the observed reliability of an event falls below the requirement, ESRT increases the reporting frequency. On the other hand, if the reliability level required has been exceeded, ESRT decreases the reporting frequency in order to conserve energy.
The frequency at which sensors must send their reports is conveyed to them through broadcasts from the sink, after appropriate calculations, so that the required reliability is achieved. Congestion control is achieved by monitoring buffer levels at forwarding sensors.
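A loose sketch of the reporting-frequency adjustment follows; the actual ESRT protocol defines several operating regions with distinct update rules, so the multiplicative updates, the congestion back-off factor, and the numbers here are simplifying assumptions.

```python
def next_reporting_freq(f, observed, required, congested=False):
    """Sketch of an ESRT-style update: adjust the sensors' reporting
    frequency toward the required reliability, and back off hard
    when forwarding buffers indicate congestion."""
    if congested:
        return f * observed / (2 * required)  # aggressive decrease
    return f * required / observed            # track the target

f = next_reporting_freq(10.0, observed=50, required=100)
print(f)  # 20.0: under-reliable, so report twice as often
print(next_reporting_freq(f, observed=200, required=100))  # 10.0
```

The sink performs this calculation centrally and broadcasts the new frequency, so the sensors themselves need no per-flow state.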
Reliability in the reverse direction, from sink to sensors, has also been discussed, and the different kinds of reliability required in this direction have been listed. For instance, small queries are sent in a single packet, whereas software to be updated in the sensors may be sent across multiple packets. Accordingly, reliability should be ensured for single or multiple packets, depending on the content.
Further classification is based on the intended set of receivers. A message may need to be sent to all sensors of the network, or to all within a sub-area (as in a location-based query), or to a subset of sensors which, among themselves, cover a certain area.
In the last case, not all sensors in the area need to receive the message, but only a small subset, the union of whose coverage areas adds up to the required area, needs to reliably receive the message.
One of the ways to ensure any of these forms of reliability is to use some nodes as recovery servers, which retransmit the message to sensors which did not receive it.
Sensor networks, based on an inherently broadcast wireless medium, are vulnerable to a variety of attacks. Security is of prime importance in sensor networks because nodes assume a large amount of trust among themselves during data aggregation and event detection.
From a set of sensor nodes in a given locality, only one final aggregated message may be sent to the base station (BS), so it is necessary to ensure that communication links are secure for data exchange.
The basic kinds of attacks on sensor networks at the network layer have been categorized in the literature. Cryptographic solutions based on public key cryptography are not suitable for sensor networks, due to the high processing requirements of the algorithms.
Routing protocols can be affected by spoofing or altering the routing information exchanged between nodes. This can lead to errors in routing, higher latency, or even partitioning of the network.
Sybil attacks occur when a single node presents itself as multiple entities to the network. This can affect the fault tolerance of the network and mislead geographic routing algorithms. Encryption and authentication using a globally shared key can prevent these attacks caused by an outsider trying to corrupt the messages in the network.
A selective forwarding attack is a situation when certain nodes do not forward many of the messages they receive. The sensor networks depend on repeated forwarding by broadcast for messages to propagate throughout the network.
Sinkhole attacks are those which make a malicious node seem very favorable to the routing algorithm so that most data is routed through it. This node then performs selective forwarding, or acts as a “sink.” Sensor networks are especially vulnerable to sinkhole attacks because most traffic is toward the BS.
So, providing a single “favorable” route is likely to influence a large number of nodes to route their data through the malicious node. The wormhole attack lures traffic through a very long path by giving false information to the nodes about the distance between them. This increases latency by avoiding some other possible shorter paths.
Wormhole and sinkhole attacks are difficult to counter because routing information supplied by a node is difficult to verify. However, geographic routing protocols are not affected by these attacks, since routes are constructed on demand using the location coordinates, and false distances can be verified.
Hello flood attacks can be caused by a node which broadcasts a Hello packet with very high power, so that a large number of nodes even far away in the network choose it as the parent. All messages now need to be routed multi-hop to this parent, which increases delay.
This can be avoided by checking the bidirectionality of a link, so that the nodes ensure that they can reach their parent within one hop. The rest of this section deals with some protocols that have been proposed to improve security in sensor networks.
Localized Encryption and Authentication Protocol (LEAP)
LEAP is a key management protocol (a protocol to distribute cryptographic keys) for sensor networks based on symmetric key algorithms; that is, the same key is used by sender and receiver. In a network, requiring every pair of nodes to have a shared key to be used for communication between them is ideal for security, because an attack on any one node does not compromise the security of other nodes.
However, in sensor networks, the neighbors of a node may not be known in advance, hence this sharing of keys must take place after the network is deployed, which will cause a high overhead.
Also, sensor networks may employ certain processing optimizations such as a node’s deciding not to report an event if it overhears its neighbor reporting the same. Such optimizations will be precluded by the usage of a separate key for each neighboring pair.
On the other hand, having a common key for all nodes in the network has lower overhead, but compromise of any node affects the entire system.
LEAP uses different keying mechanisms for different packets depending on their security requirements. For instance, routing information, which is usually in broadcast mode, does not require confidentiality, whereas aggregated data sent to the BS must be confidential.
Every sensor node maintains four types of keys: an individual key which it shares with the BS; a group key shared with all nodes of the network and the BS; a cluster key shared between a node and its neighbors; and a pairwise shared key with each of its neighbors.
The individual key is preloaded into the node before deployment, and is used for transmission of any special information between the BS and the node, such as exclusive instructions to a node, or report from a node to BS about the abnormal behavior of a neighboring node.
It is assumed that the time required to attack a node is greater than the network establishment time, during which a node can detect all its immediate neighbors. A common initial key is loaded into each node before deployment. Each node derives a master key which depends on the common key and its unique identifier.
Nodes then exchange Hello messages, which are authenticated by the receivers (since the common key and identifier are known, the master key of the neighbor can be computed). The nodes then compute a shared key based on their master keys. The common key is erased in all nodes after the establishment, and by assumption, no node has been compromised up to this point.
Since no adversary can get the common key, it is impossible to inject false data or decrypt the earlier exchange messages. Also, no node can later forge the master key of any other node. In this way, pairwise shared keys are established between all immediate neighbors. The cluster key is established by a node after the pairwise key establishment: a node generates a cluster key and sends it encrypted to each neighbor with its pairwise shared key.
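The key derivations can be sketched with a keyed pseudo-random function. In this sketch, HMAC-SHA256 stands in for the PRF a real mote would use, and deriving a direction-independent pairwise key by sorting the two node IDs is a simplification rather than LEAP's exact construction.

```python
import hmac, hashlib

def prf(key: bytes, data: bytes) -> bytes:
    """Keyed pseudo-random function; HMAC-SHA256 is an assumption
    of this sketch, standing in for a mote-friendly PRF."""
    return hmac.new(key, data, hashlib.sha256).digest()

INITIAL_KEY = b"network-wide initial key"   # erased after establishment

def master_key(node_id: bytes) -> bytes:
    """Master key derived from the common initial key and node ID."""
    return prf(INITIAL_KEY, node_id)

def pairwise_key(own_id: bytes, neighbor_id: bytes) -> bytes:
    """Pairwise key binding both identities. Computable only while
    the common initial key is still held; sorting the IDs lets both
    endpoints derive the same value independently."""
    a, b = sorted([own_id, neighbor_id])
    return prf(master_key(b), a)

# Both neighbors compute the same pairwise key without exchanging it.
k1 = pairwise_key(b"node-A", b"node-B")
k2 = pairwise_key(b"node-B", b"node-A")
print(k1 == k2)  # True
```

Once the initial key is erased, a later intruder can neither recompute any master key nor derive the pairwise keys already in use, which is the property the establishment-time assumption buys.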
The group key can be preloaded, but it should be updated once any compromised node is detected. This could be done, in a naive way, by the BS sending the new group key to each node using its individual key, or on a hop-by-hop basis using cluster keys. More sophisticated algorithms have been proposed for the same purpose. Further, methods have been proposed for establishing shared keys between multi-hop neighbors.
Intrusion Tolerant Routing in Wireless Sensor Networks (INSENS)
INSENS adopts a routing-based approach to security in sensor networks. It constructs routing tables at each node, bypassing malicious nodes in the network. The protocol cannot totally rule out attacks on nodes, but it minimizes the damage caused to the network. The computation, communication, storage, and bandwidth requirements at the nodes are reduced, at the cost of greater computation and communication at the BS.
To prevent DoS attacks, individual nodes are not allowed to broadcast to the entire network. Only the BS is allowed to broadcast, and no individual node can masquerade as the BS, since it is authenticated using one-way hash functions (i.e., hash functions whose inverse is not easy to obtain).
Control information pertaining to routing must be authenticated by the BS in order to prevent injection of false routing data. The BS computes and disseminates routing tables, since it does not face the computation and energy constraints that the nodes do. Even if an intruder takes over a node and does not forward packets, INSENS uses redundant multipath routing, so that the destination can still be reached without passing through the malicious node.
INSENS has two phases: route discovery and data forwarding. During the route discovery phase, the BS sends a request message to all nodes in the network by multi-hop forwarding (not using its broadcast). Any node receiving a request message records the identity of the sender and sends the message to all its immediate neighbors if it has not already done so.
Subsequent request messages are used to identify the senders as neighbors, but repeated flooding is not performed. The nodes respond with their local topology by sending feedback messages. The integrity of the messages is protected using encryption with a shared key mechanism. A malicious node can inflict damage only by not forwarding packets, but since messages are sent through different neighbors, a message is likely to reach its destination along at least one path.
Hence, the effect of malicious nodes is not totally eliminated, but it is restricted to only a few downstream nodes in the worst case. Malicious nodes may also send spurious messages and cause battery drain for a few upstream nodes.
Finally, the BS calculates forwarding tables for all nodes, with two independent paths for each node, and sends them to the nodes. The second phase of data forwarding takes place based on the forwarding tables computed by the BS.
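As an illustration of the kind of computation the BS performs, the sketch below finds two node-disjoint paths by running a breadth-first search, banning the intermediate nodes of the first path, and searching again. This greedy two-pass approach is an assumption made only for illustration; it can miss a second disjoint path that a flow-based algorithm would find, and INSENS specifies its own path computation.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest hop-count path from src to dst that avoids `banned`
    intermediate nodes; returns None if dst is unreachable."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:      # walk the predecessor links back
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev and (v == dst or v not in banned):
                prev[v] = u
                queue.append(v)
    return None

def two_disjoint_paths(adj, src, dst):
    """First path by plain BFS; second path with the first path's
    intermediate nodes banned, forcing node-disjointness."""
    p1 = bfs_path(adj, src, dst)
    if p1 is None:
        return None, None
    p2 = bfs_path(adj, src, dst, banned=set(p1[1:-1]))
    return p1, p2
```

With two such paths in every forwarding table, a single malicious or failed node on one path still leaves the other path available during the data-forwarding phase.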
Security Protocols for Sensor Networks (SPINS)
SPINS is a suite of security protocols optimized for highly resource-constrained sensor networks. It consists of two main modules: the sensor network encryption protocol (SNEP) and a micro-version of the timed, efficient, streaming, loss-tolerant authentication protocol (µTESLA). SNEP provides data authentication, protection from replay attacks, and semantic security, all with a low communication overhead of eight bytes per message.
Semantic security means that an adversary cannot get any idea about the plaintext even by seeing multiple encrypted versions of the same plaintext. Encryption of the plaintext uses a shared counter (shared between sender and receiver). Hence, the same message is encrypted differently at different instances in time. Message integrity and confidentiality are maintained using a message authentication code (MAC).
This is similar to a checksum derived by applying an authentication scheme with a secret shared key to the message. The message can be decrypted only if the same shared key is present. The message also carries the counter value at the instance of transmission (like a time-stamp), to protect against replay attacks.
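The counter-based structure described above can be sketched as follows. SNEP itself builds on RC5-based primitives; the HMAC-derived keystream and truncated 8-byte MAC here are stand-ins chosen only to show the shape of the scheme: a keystream derived from the shared counter, a MAC over the counter and ciphertext, and a monotonic-counter replay check.

```python
import hmac
import hashlib

def keystream(key: bytes, counter: int, n: int) -> bytes:
    """Counter-derived keystream: the same plaintext encrypts
    differently at different counter values (semantic security)."""
    out = b""
    block = 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big") +
                        block.to_bytes(4, "big"), hashlib.sha256).digest()
        block += 1
    return out[:n]

def snep_send(k_enc, k_mac, counter, plaintext):
    ct = bytes(p ^ s for p, s in
               zip(plaintext, keystream(k_enc, counter, len(plaintext))))
    # 8-byte MAC over counter || ciphertext, matching SNEP's stated overhead.
    mac = hmac.new(k_mac, counter.to_bytes(8, "big") + ct,
                   hashlib.sha256).digest()[:8]
    return counter, ct, mac

def snep_recv(k_enc, k_mac, last_counter, counter, ct, mac):
    if counter <= last_counter:
        raise ValueError("replayed message")       # stale counter
    expected = hmac.new(k_mac, counter.to_bytes(8, "big") + ct,
                        hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad MAC")                # integrity failure
    return bytes(c ^ s for c, s in
                 zip(ct, keystream(k_enc, counter, len(ct))))
```

Note that the counter itself travels with the message, like a time-stamp; the receiver only needs to remember the highest counter accepted so far.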
µTESLA ensures an authenticated broadcast, that is, nodes which receive a packet can be assured of its sender’s identity. It requires a loose time synchronization between BS and nodes, with an upper bound on maximum synchronization error.
The MAC keys are derived from a chain of keys, obtained by applying a one-way function F (a one-way function is one whose inverse is not easily computable). All nodes have an initial key K0, which is some key in the key-chain.
The key to be used changes periodically, and since nodes are synchronized to a common time within a bounded error, they can detect which key is to be used to encrypt/decrypt a packet at any time instant. The BS periodically discloses the next verification key to all the nodes and this period is known to all nodes. There is also a specified lag of certain intervals between the usage of a key for encryption and its disclosure to all the receivers.
When the BS transmits a packet, it uses a MAC key which is still secret (not yet disclosed). The nodes which receive this packet buffer it until the appropriate verification key is disclosed. But, as soon as a packet is received, the MAC is checked to ensure that the key used in the MAC has not yet been disclosed, which implies that only the BS which knows that yet undisclosed key could have sent the packet.
The packets are decrypted once the key-disclosure packet is received from the BS. If one of the key-disclosure packets is missed, the data packets are buffered till the next time interval, and then authenticated. For instance, suppose the disclosure packet of Kj does not reach a node; it waits till it receives Kj+1, then computes Kj = F(Kj+1) and decrypts the packets received in the previous time interval.
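The one-way key chain and the recovery step above can be sketched directly. SHA-256 stands in for the one-way function F; the chain length and seed are illustrative.

```python
import hashlib

def F(key: bytes) -> bytes:
    """One-way function for the key chain (SHA-256 stand-in)."""
    return hashlib.sha256(key).digest()

def make_chain(seed: bytes, n: int):
    """BS generates the chain by applying F repeatedly to a secret seed.
    After reversing, chain[0] = K0 (the preloaded commitment) and keys
    are disclosed in the order K1, K2, ..., so F(K_{i+1}) == K_i."""
    chain = [seed]
    for _ in range(n):
        chain.append(F(chain[-1]))
    chain.reverse()
    return chain

def verify(known_key: bytes, disclosed_key: bytes, gap: int) -> bool:
    """A node holding K_i authenticates a disclosed K_{i+gap} by
    applying F `gap` times; this also covers missed disclosures."""
    k = disclosed_key
    for _ in range(gap):
        k = F(k)
    return k == known_key
```

Disclosing keys in the reverse order of generation is what makes a missed interval harmless: a node that missed Kj simply verifies Kj+1 against its last known key with gap 2, then recomputes Kj = F(Kj+1).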
Real-Time Communication in Sensor Networks

Support for real-time communication is often essential in sensor networks used for surveillance or in safety-critical systems. The communication delay between sensing an intrusion and taking appropriate action greatly affects the quality of tracking provided by a surveillance system.
Similarly, in a nuclear power plant, the detection of an abnormality in temperature or pressure must be conveyed in real-time to the control system in order to take immediate action. Hence, delay guarantees on routing would be extremely useful for such systems. Two protocols which support real-time communication in sensor networks are SPEED and RAP.
SPEED. SPEED is a stateless protocol that has been proposed to support real-time communication in sensor networks. It is a localized algorithm which provides real-time unicast, real-time area-multicast (multicast to all nodes in a particular region), and real-time anycast support for packet transmission.
SPEED has minimal overheads, as it does not require routing tables. It is compatible with a best-effort MAC layer, not requiring any special MAC support. It also distributes traffic and load evenly across the network using non-deterministic forwarding.
The SPEED protocol requires periodic beacon transmissions between neighbors. It also uses two specific on-demand beacons for delay estimation and congestion (back-pressure) detection. These are used to adapt to changes in the network. The load at a node is approximated using single-hop delay.
The measurement is made using data packets which pass by a node, instead of separate probe packets. This minimizes the overhead. Nodes also respond using the delay estimation beacon to inform neighbors of the estimated delay. Routing of packets is performed by stateless non-deterministic geographic forwarding (SNGF).
Using geographic information, packets are forwarded only to the nodes which are closer to the destination. Among the eligible closer nodes, the ones which have least estimated delay have a higher probability of being chosen as an intermediate node. If there are no nodes that satisfy the delay constraint, the packet is dropped.
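The SNGF forwarding decision can be sketched as below. The eligibility test (per-hop progress over estimated delay against a speed setpoint) and the inverse-delay weighting are illustrative assumptions about how "least estimated delay has a higher probability" might be realized; SPEED defines its own exact rule.

```python
import random

def sngf_next_hop(neighbors, my_dist, speed_setpoint, rng=random):
    """neighbors: list of (node_id, dist_to_dest, est_delay).
    Returns a probabilistically chosen next hop, or None (drop)."""
    # Keep only neighbors that make geographic progress toward the destination.
    closer = [(n, d, delay) for n, d, delay in neighbors if d < my_dist]
    # Keep those whose per-hop speed (progress / delay) meets the setpoint.
    eligible = [(n, d, delay) for n, d, delay in closer
                if delay > 0 and (my_dist - d) / delay >= speed_setpoint]
    if not eligible:
        return None  # no path meets the constraint: drop, trigger back-pressure
    # Lower estimated delay => proportionally higher selection probability.
    weights = [1.0 / delay for _, _, delay in eligible]
    return rng.choices([n for n, _, _ in eligible], weights=weights)[0]
```

Returning None here is the point at which the neighbor feedback loop described next takes over, reducing the sending rate toward the congested region.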
SPEED uses a neighbor feedback loop (NFL) to keep the estimated delay fairly constant, so that frequent updates of delay estimates are not required. When a packet has to be dropped, that is, when there is no path which can meet the delay constraint, the sending rate to the downstream nodes (nodes which are closer to the receiver) is reduced to avoid congestion, thereby maintaining the delay.
The NFL issues a back-pressure beacon indicating the average delay. The increased delay is noted, and SNGF accordingly reduces the probability of selecting the congested downstream nodes for routing, until eventually the congestion eases and the delay is reduced.
This can continue recursively, propagating the back-pressure from downstream to upstream nodes (nodes which are closer to the sender), to relieve congestion in a hotspot.
Many geographic routing algorithms may encounter a situation in which there is no neighboring node closer to the destination to which a packet can be forwarded. This is called a “void.” SPEED uses a void-avoidance technique: it issues a back-pressure beacon with the estimated delay set to infinity.
This will trigger a search for alternative paths. Hence, if there exists any path to a destination satisfying the delay constraint, it will be detected by SPEED. The protocol provides support for real-time communication over sensor networks by providing guarantees on the maximum delay.
RAP. This approach provides APIs for applications to address their queries. An application layer program in the BS can specify the kind of event information required, the area to which the query is addressed, and the deadline within which information is required.
The underlying layers of RAP ensure that the query is sent to all nodes in the specified area, and the results are sent back to the BS. The protocol stack of RAP consists of location addressed protocol (LAP) in the transport layer, velocity monotonic scheduling (VMS) as the geographic routing protocol, and a contention-based MAC scheme that supports prioritization.
LAP is a connectionless transport layer protocol which uses location to address nodes instead of a unique addressing scheme such as an IP address. It supports three kinds of communication: unicast, area multicast, and area anycast. VMS is based on the concept of packet-requested velocity, which reflects both the timing and the distance constraints. Hence, requested velocity is a measure of the urgency of the packet.
If a packet can travel at its requested velocity, that is, can cover the required distance within a specified time, then it can meet its deadline. VMS gives higher priority to packets which have requested higher velocities.
The velocity of a packet is calculated as the ratio of the geographic distance between sender and receiver, to the deadline. Dynamic VMS recalculates the velocity at each intermediate node, so that a packet which has been slower than its requested velocity until then can be given higher priority.
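The velocity computations above are simple enough to state directly. The function names and the use of wall-clock remaining time in the dynamic variant are illustrative assumptions about how per-hop recalculation might be implemented.

```python
import math
import time

def requested_velocity(sender_pos, dest_pos, deadline_s):
    """Static requested velocity: straight-line (geographic) distance
    between sender and receiver, divided by the end-to-end deadline."""
    return math.dist(sender_pos, dest_pos) / deadline_s

def dynamic_velocity(current_pos, dest_pos, deadline_time):
    """Dynamic VMS variant, recomputed at each intermediate node:
    remaining distance over remaining time. A packet that has been
    slower than requested so far ends up asking for a higher velocity,
    and therefore a higher MAC-layer priority."""
    remaining = deadline_time - time.monotonic()
    if remaining <= 0:
        return math.inf   # deadline already missed
    return math.dist(current_pos, dest_pos) / remaining
```

For example, a packet that must cover 5 units of distance within a 2-second deadline requests a velocity of 2.5 units per second; if it stalls at an intermediate hop, the dynamic recalculation raises that figure and with it the packet's priority.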
This is mapped onto a MAC layer priority, which is handled by the contention-based MAC layer. The protocol hence provides convenient services for application layer programs that require real-time support.
Sensor networks realize an all-pervasive distributed network to create an intelligent environment. The possible applications of sensor networks are wide-ranging, from intelligent buildings and sensor-controlled chemical plants, to habitat-monitoring and covert military operations.
The direction of research in sensor networks is toward overcoming the challenges of scalability, reliability, robustness, and power-efficiency, so that a variety of applications can be implemented in highly constrained scenarios.
Hardware design of sensor nodes needs to be further miniaturized, and power-efficient hardware and operating systems need to be developed. On the MAC layer, provisions need to be made for mobility of sensor nodes. New ideas for routing, network establishment, and maintenance are still in the development stage.
Similarly, a transport layer protocol with limited power consumption and computational costs, and which is capable of interfacing with TCP or UDP, is still on the drawing board. Handling the sensed data, and development of application-specific query languages, would greatly help in fine-tuning the performance of sensor networks for a variety of applications. With open problems at every layer of the sensor network protocol stack, this is a highly promising concept still in its incipient stages.
To read Part 1, go to: Issues and challenges
To read Part 2, go to: Sensor data dissemination
To read Part 3, go to: Sensor data gathering and location detection
C. Siva Ram Murthy received a B.Tech degree in electronics and communications engineering from the National Institute of Technology, an M.Tech degree in computer engineering from the Indian Institute of Technology, and a Ph.D. in computer science from the Indian Institute of Science. He has to his credit over 100 research papers in international journals and conference publications, and is the author of several textbooks on parallel computing, real-time systems and networks, and optical networks. His research interests include parallel and distributed computing, real-time systems, lightwave networks, and, of course, wireless sensor networks.
B.S. Manoj received an M.Tech degree in electronics and communications engineering from the Institution of Engineers, India. His research interests include ad hoc wireless networks, next-generation wireless architectures, and wireless sensor networks.
This series of articles is from the book “Ad Hoc Wireless Networks,” by C. Siva Ram Murthy and B.S. Manoj, Copyright 2011, used by permission of Pearson Education, Inc. Written permission from Pearson Education, Inc. is required for all other uses.
 I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “A Survey on Sensor Networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102-114, August 2002.
 J. Ding, “Design and Analysis of an Integrated MAC and Routing Protocol Framework for Large-Scale Multi-Hop Wireless Sensor Networks,” Technical Report, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore, July 2002.
 “Integrated MAC and Routing Protocol Framework for Large-Scale Multi-Hop Wireless Sensor Networks,” Department of Electrical Engineering and Computer Science, Washington State University, http://jaguar.eecs.wsu.edu/~jding1/presentations/poster.ppt
 W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “Energy-Efficient Communication Protocol for Wireless Microsensor Networks,” Proceedings of HICSS 2000, pp. 4-7, January 2000.
 W. R. Heinzelman, J. Kulik, and H. Balakrishnan, “Adaptive Protocols for Information Dissemination in Wireless Sensor Networks,” Proceedings of ACM MOBICOM 1999, pp. 174-185, August 1999.
 D. Braginsky and D. Estrin, “Rumor Routing Algorithm for Sensor Networks,” Proceedings of ACM Workshop on Wireless Sensor Networks and Applications 2002, pp. 22-31, September 2002.
 K. Sohrabi, J. Gao, V. Ailawadhi, and G. J. Pottie, “Protocols for Self-Organization of a Wireless Sensor Network,” IEEE Personal Communications Magazine, vol. 7, no. 5, pp. 16-27, October 2000.
 C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks,” Proceedings of ACM MOBICOM 2000, pp. 56-67, August 2000.
 F. Ye, A. Chen, S. Lu, and L. Zhang, “A Scalable Solution to Minimum Cost Forwarding in Large Sensor Networks,” Proceedings of IEEE ICCCN 2001, pp. 304-309, October 2001.
 S. Ratnasamy et al., “GHT: A Geographic Hash Table for Data-Centric Storage,” Proceedings of ACM Workshop on Wireless Sensor Networks and Applications 2002, pp. 78-87, September 2002.
 I. Stoica, R. Morris, D. Liben-Nowell, D. Karger, M. F. Kaashoek, F. Dabek, and H. Balakrishnan, “Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications,” Proceedings of ACM SIGCOMM 2001, pp. 149-160, August 2001.
 B. Y. Zhao, J. Kubiatowicz, and A. D. Joseph, “Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing,” Technical Report UCB/CSD-01-1141, Computer Science Division, U. C. Berkeley, April 2001.
 B. Karp and H. T. Kung, “GPSR: Greedy Perimeter Stateless Routing for Wireless Sensor Networks,” Proceedings of ACM MOBICOM 2000, pp. 243-254, August 2000.
 L. Li and J. Y. Halpern, “Minimum Energy Mobile Wireless Networks Revisited,” Proceedings of IEEE ICC 2001, pp. 278-283, June 2001.
 S. Lindsey and C. S. Raghavendra, “PEGASIS: Power-Efficient Gathering in Sensor Information Systems,” Proceedings of IEEE ICC 2001, vol. 3, pp. 1125-1130, June 2001.
 S. Lindsey, C. S. Raghavendra, and K. M. Sivalingam, “Data-Gathering Algorithms in Sensor Networks Using Energy Metrics,” IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 9, pp. 924-935, September 2002.
 E. Shih, S. Cho, N. Ickes, R. Min, A. Sinha, A. Wang, and A. Chandrakasan, “Physical Layer Driven Protocol and Algorithm Design for Energy-Efficient Wireless Sensor Networks,” Proceedings of ACM MOBICOM 2001, pp. 272-286, July 2001.
 A. Woo and D. Culler, “A Transmission Control Scheme for Media Access in Sensor Networks,” Proceedings of ACM MOBICOM 2001, pp. 221-235, July 2001.
 K. M. Sivalingam, “Tutorial on Wireless Sensor Network Protocols,” International Conference on High-Performance Computing 2002, Bangalore, India, December 2002.
 A. Savvides, C. Han, and M. B. Srivastava, “Dynamic Fine-Grained Localization in Ad Hoc Networks of Sensors,” Proceedings of ACM MOBICOM 2001, pp. 166-179, July 2001.
 A. Nasipuri and K. Li, “A Directionality-Based Localization Scheme for Wireless Sensor Networks,” Proceedings of ACM Workshop on Wireless Sensor Networks and Applications 2002, pp. 105-111, September 2002.
 Y. Shang, W. Ruml, Y. Zhang, and M. P. J. Fromherz, “Localization from Mere Connectivity,” Proceedings of ACM MOBIHOC 2003, pp. 201-212, June 2003.
 T. Clouqueur, V. Phipatanasuphorn, P. Ramanathan, and K. K. Saluja, “Sensor Deployment Strategy for Target Detection,” Proceedings of ACM Workshop on Wireless Sensor Networks and Applications 2002, pp. 42-48, September 2002.
 IEEE 802.15 Working Group for Wireless Personal Area Networks, www.ieee802.org/15/pub/TG4.html
 DARPA SensIT Program Homepage, http://dtsn.darpa.mil/ixo/sensit
 The official Web site of the ZigBee Alliance, www.zigbee.org/
 V. Raghunathan et al., “Energy-Aware Wireless Microsensor Networks,” IEEE Signal Processing Magazine, vol. 19, no. 2, pp. 40-50, March 2002.
 H. Kopetz and W. Schwabl, “Global Time in Distributed Real-Time Systems,” Technical Report 15/89, Technische Universität Wien, 1989.
 A. Ebner, H. Rohling, R. Halfmann, and M. Lott, “Synchronization in Ad Hoc Networks Based on UTRA TDD,” Proceedings of IEEE PIMRC 2002, vol. 4, pp. 1650-1654, September 2002.
 J. Elson and D. Estrin, “Time Synchronization for Wireless Sensor Networks,” Proceedings of IEEE IPDPS Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing 2001, pp. 1965-1970, April 2001.
 Y. Ofek, “Generating a Fault-Tolerant Global Clock Using High-Speed Control Signals for the MetaNet Architecture,” IEEE/ACM Transactions on Networking, vol. 3, no. 2, pp. 169-180, April 1995.
 S. Archana, B. S. Manoj, and C. Siva Ram Murthy, “A Novel Solution for Synchronization in Wireless Ad Hoc and Sensor Networks,” Technical Report, Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India, October 2003.
 C. Y. Wan, A. T. Campbell, and L. Krishnamurthy, “PSFQ: A Reliable Transport Protocol for Wireless Sensor Networks,” Proceedings of ACM Workshop on Wireless Sensor Networks and Applications 2002, pp. 1-11, September 2002.
 Y. Sankarasubramaniam, O. B. Akan, and I. F. Akyildiz, “ESRT: Event-to-Sink Reliable Transport in Wireless Sensor Networks,” Proceedings of ACM MOBIHOC 2003, pp. 177-188, June 2003.
 S. J. Park and R. Sivakumar, “Sink-to-Sensor Reliability in Sensor Networks,” Poster presentation at ACM MOBIHOC 2003, June 2003.
 C. Karlof and D. Wagner, “Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures,” Proceedings of IEEE Workshop on Sensor Network Protocols and Applications 2003, pp. 113-127, May 2003.
 J. Douceur, “The Sybil Attack,” Proceedings of IPTPS 2002, March 2002.
 S. Zhu, S. Setia, and S. Jajodia, “LEAP: Efficient Security Mechanisms for Large-Scale Distributed Sensor Networks,” Proceedings of ACM Conference on Computer and Communications Security 2003, pp. 62-72, October 2003.
 J. Deng, R. Han, and S. Mishra, “INSENS: Intrusion Tolerant Routing in Wireless Sensor Networks,” Poster presentation at IEEE ICDCS 2003, May 2003.
 A. Perrig, R. Szewczyk, V. Wen, D. E. Culler, and J. D. Tygar, “SPINS: Security Protocols for Sensor Networks,” Proceedings of ACM MOBICOM 2001, pp. 189-199, July 2001.
 T. He, J. A. Stankovic, C. Lu, and T. Abdelzaher, “SPEED: A Stateless Protocol for Real-Time Communication in Sensor Networks,” Proceedings of IEEE ICDCS 2003, pp. 46-57, May 2003.
 C. Lu, B. M. Blum, T. F. Abdelzaher, J. A. Stankovic, and T. He, “RAP: A Real-Time Communication Architecture for Large-Scale Wireless Sensor Networks,” Proceedings of IEEE RTAS 2002, pp. 55-66, September 2002.
 S. Archana and C. Siva Ram Murthy, “A Survey of Protocols and Algorithms for Wireless Sensor Networks,” Technical Report, Department of Computer Science and Engineering, Indian Institute of Technology, Madras, India, July 2003.