At the transport layer, it's often difficult to decide between TCP and UDP, with their respective benefits of reliability and efficiency. Fortunately, there's a third choice: a little-known standard called T/TCP offers the best features of both protocols.
In today's world, the quantity and variety of network-connected embedded devices are expanding rapidly. The data communication requirements of many of these applications involve frequent query-response or transaction-type exchanges of small chunks of data (small enough to fit into a single, un-fragmented IP packet) between client and server devices. (In this article, “client” refers to the device that initiates such an exchange, and “server” to the device that responds. The client-to-server relationship may be many-to-one, one-to-many, or many-to-many.)
In network-connected devices, it is often a given that the Internet Protocol (IP) will serve as the network layer protocol. A crucial choice for transaction applications is which protocol to use at the transport layer. In this article, we're going to take a look at several issues and concerns that bear on this choice, especially (but not exclusively) in the context of embedded devices: memory use, network bandwidth, response time, reliability, and interoperability. In light of these issues, we'll consider three options for the transport layer protocol: UDP, TCP, and T/TCP.
It is assumed that the reader has a working familiarity with the characteristics of both the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). However, because the TCP extensions for Transactions (T/TCP) are not yet very well-known, this section provides a partial summary of that protocol.
T/TCP is not a new protocol. RFC 1644, which defines it, came into being in 1994. T/TCP is standard TCP with some new features added to optimize it for transaction applications. All of standard TCP's mechanisms and infrastructure are used in T/TCP.
T/TCP is backward-compatible with TCP. Whether T/TCP is on the client or server side, it will automatically adapt to behave as standard TCP when communicating with a peer that does not support T/TCP.
Each IP packet (also known as a segment in TCP lingo) within a T/TCP connection is tagged with a 32-bit connection count that is unique to that connection. Under most conditions, this eliminates the risk of old segments corrupting a subsequent connection. The T/TCP connection count is carried as a new TCP header option, called “CC.”
A benefit of the CC option is that, for connections of short duration (less than a maximum segment lifetime, or MSL), it is possible to shorten or eliminate TCP's TIME_WAIT state. The TIME_WAIT state is a mechanism by which TCP datastreams are protected from corruption by the arrival of old duplicate segments from an earlier incarnation of the same connection. One end of a TCP connection stays in the TIME_WAIT state for a period of two MSLs (an MSL typically lasts two minutes), thus preventing a new incarnation of the same connection from being established until all such duplicate segments have expired on the network. T/TCP's CC option allows the TIME_WAIT state to be truncated to, at most, eight times the measured round-trip time (RTT) of a connection. More importantly, a new incarnation of the same connection can be established immediately (that is, the TIME_WAIT state can be eliminated entirely).
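The difference between the two TIME_WAIT durations can be sketched with a little arithmetic. This is illustrative only; the MSL figure is the typical two-minute value mentioned above, and the RTT is an assumed example, not a measurement.

```python
MSL = 120  # typical maximum segment lifetime, in seconds

def tcp_time_wait(msl=MSL):
    """Standard TCP holds one end of a closed connection in TIME_WAIT for 2 * MSL."""
    return 2 * msl

def ttcp_time_wait(rtt, msl=MSL):
    """T/TCP truncates TIME_WAIT to at most 8 * RTT."""
    return min(8 * rtt, 2 * msl)

print(tcp_time_wait())       # 240 seconds
print(ttcp_time_wait(0.05))  # 0.4 seconds, for a 50 ms round-trip time
```

On a LAN-class round-trip time, the truncated TIME_WAIT is shorter by roughly three orders of magnitude.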
T/TCP maintains a cache of protocol information about each remote host with which it interacts. This cache includes such information as the round-trip time, the maximum segment size, and the congestion window. Thus, this information does not have to be rediscovered within each connection, and a T/TCP connection can immediately begin to operate according to these parameters.
Standard TCP begins each connection with a three-segment synchronization (SYN) sequence that carries no data, and ends it with a three-segment finishing (FIN) sequence that also typically carries no data. In between, the segments that carry data are exchanged. If the total amount of data to be sent will fit into a single segment, then T/TCP allows the SYN, FIN, and data functions to be combined into a single segment. Thus, a complete T/TCP connection can be opened and closed with just three segments.
The sockets API for T/TCP is a minimal extension of the sockets API for standard TCP. Two existing API functions (sendto and send) are augmented to allow the application to combine the requests to open a connection, transmit data, and close the transmit direction of the connection into a single API call.
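The standard client-side call sequence can be demonstrated with a minimal loopback echo transaction; this is a sketch using ordinary sockets (the server thread and port here are my own scaffolding), with the single-call T/TCP form shown as a comment. Per RFC 1644's FreeBSD implementation, the combined open/send/close request is expressed by passing the MSG_EOF flag to sendto.

```python
import socket
import threading

def echo_server(srv):
    """Stand-in transaction server: echo one request back to the client."""
    conn, _ = srv.accept()
    data = conn.recv(4096)
    conn.sendall(data)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
srv.listen(1)
addr = srv.getsockname()
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

# Standard TCP client: three separate steps.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(addr)             # 1. open the connection
cli.sendall(b"query")         # 2. transmit the request
cli.shutdown(socket.SHUT_WR)  # 3. close the transmit direction
reply = cli.recv(4096)
cli.close()

# Under T/TCP (FreeBSD), steps 1-3 collapse into a single call:
#   sendto(sock, b"query", MSG_EOF, addr)
# which opens the connection, sends the data, and closes the transmit
# direction, all of which can ride in one segment.
print(reply)  # b'query'
```

Because only the opening call sequence changes, porting a standard TCP client to T/TCP is largely a matter of replacing the connect/send/shutdown triple.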
On the client side, the duration of a minimal, three-segment connection is on the order of a single round-trip time. Of course, this assumes that the time it takes for the server application to process a request and emit a response is small compared to the round-trip time.
To keep manufacturing costs down, embedded devices often have limited memory; their software must be designed with this constraint in mind.
One of the characteristics that makes standard TCP reliable is that the side that initiates the termination of a connection (usually the client side) goes into the TIME_WAIT state once the connection is terminated, and remains in that state for a lengthy period (two times the MSL). During that period, no new instances of the same connection can be established. This mechanism protects a TCP connection from being corrupted by old, duplicate segments from an earlier connection between the same endpoints. Presumably, by the time the TIME_WAIT state expires, all such “old” segments will have expired on the network.
While a connection remains in the TIME_WAIT state, TCP must maintain a control block with state information for the connection. Typically, the TCP control block for a connection occupies several hundred bytes of memory.
Meanwhile, if the client application wishes to conduct subsequent transactions with the same server, a new local TCP port must be allocated for each one, and memory allocated for a TCP control block for that connection. If the client application is running on a hardware platform on which memory is tight, the TCP layer will probably be designed with the capacity for a limited number of simultaneous connections. Once all of this connection capacity is tied up with connections in the TIME_WAIT state, no new connections can be made until one of these TIME_WAIT states expires. This could severely limit the transaction rate. For example, if the TIME_WAIT state duration is two minutes, and the client device has capacity for 20 simultaneous connections, the maximum transaction rate that could be achieved is 10 transactions per minute.
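The transaction-rate ceiling described above follows directly from the connection capacity and the TIME_WAIT duration. A quick calculation, using the figures from the example:

```python
def max_transactions_per_minute(connection_capacity, time_wait_seconds):
    """Each transaction ties up one connection slot for the full TIME_WAIT period."""
    return connection_capacity * 60 / time_wait_seconds

# 20 simultaneous connections, two-minute TIME_WAIT: 10 transactions per minute.
print(max_transactions_per_minute(20, 120))  # 10.0
```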
Because UDP is connectionless, with no connection state to maintain, memory usage is not much of an issue for UDP.
Because of T/TCP's TIME_WAIT truncation feature, an application can re-use a previous connection's local TCP port immediately, thus avoiding an additional TCP control block allocation.
Even if an application does not re-use the same local TCP port, since T/TCP's TIME_WAIT state duration is usually very short compared to that of standard TCP, the constriction on transaction rate due to a limited simultaneous connection capacity is much less severe.
If the network components in the paths between the clients and servers are heavily loaded relative to their capacity, congestion-induced packet drops may result. Using a transport layer protocol with less overhead can minimize this risk.
A minimal transaction using a standard TCP implementation usually requires nine TCP segments, even though only two of these carry application data. The seven extra segments are for the three-way handshakes that open and close the connection and an acknowledgment. That's a lot of overhead.
Furthermore, if any of these segments are dropped due to congestion, retransmissions will occur, adding to the packet count. Of course, the more overhead inherent in the transport protocol, the more likely it is that a network component along the data path will become congested in the first place.
A minimal transaction using UDP requires only two UDP datagrams, one in each direction. Thus, UDP minimizes the load on the network.
The initial T/TCP transaction between a pair of hosts requires six segments. Subsequent transactions between the same pair of hosts require just three segments.
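The segment counts above can be generalized to a run of back-to-back transactions. This sketch assumes no losses or retransmissions, using the per-transaction figures from the text:

```python
def tcp_segments(n):
    """Standard TCP: nine segments per transaction."""
    return 9 * n

def ttcp_segments(n):
    """T/TCP: six segments for the first transaction, three for each one after."""
    return 6 + 3 * (n - 1) if n > 0 else 0

# For a run of 100 transactions between the same pair of hosts:
print(tcp_segments(100), ttcp_segments(100))  # 900 303
```

Over a long run, T/TCP settles at one-third of standard TCP's segment count.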
If one or more of the data links in the network path between the clients and servers use transmission media that are low speed (such as a serial PPP link) or characterized by a large transmission delay (such as satellite links using satellites in geosynchronous orbit), the initiator of a transaction could experience a relatively large response time. To a human user, this could merely be annoying. To a client program with significant real-time constraints, this may be a serious problem.
In a transaction using standard TCP, the client side experiences a response time of at least two times the round-trip time of the connection path, plus the time it takes the server-side application to process the request/query and generate a response. Let's call the latter server processing time (SPT). So the total time is at least 2*RTT + SPT.
In a minimal transaction using UDP, the client side experiences a response time of RTT + SPT.
The initial T/TCP transaction between a pair of hosts has the same response time as TCP (2*RTT + SPT). All subsequent transactions between the same pair of hosts have a response time that is similar to that of UDP (RTT + SPT).
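The response-time models above are easy to compare numerically. The RTT and SPT values here are assumed for illustration (a geosynchronous satellite hop has an RTT on the order of 500 ms), and the model ignores transmission and queuing delays, which dominate only on slow links:

```python
def tcp_response(rtt, spt):
    """Standard TCP, and the first T/TCP transaction: 2 * RTT + SPT."""
    return 2 * rtt + spt

def udp_or_ttcp_response(rtt, spt):
    """UDP, and subsequent T/TCP transactions: RTT + SPT."""
    return rtt + spt

# Satellite hop (RTT = 500 ms) with 10 ms of server processing time:
print(tcp_response(0.5, 0.01))          # roughly 1.01 s
print(udp_or_ttcp_response(0.5, 0.01))  # roughly 0.51 s
```

The longer the round trip, the more the extra RTT of standard TCP costs the client.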
IP networks are unreliable by definition. For example, individual packets may be dropped by congested nodes. If end-to-end data transfer reliability is required by an application, it must be provided either by the transport layer or by the application itself.
Because standard TCP is a reliable data transport protocol, it recovers gracefully from congestion-induced dropped segments and the transaction completes. Except for an increase in response time, this happens transparently to the application software. If the transaction cannot complete because a network node crashes and/or the data path between the client and server is broken with no alternative routes available, the application is notified of the error.
UDP is not a reliable transport protocol. If a UDP datagram is dropped due to congestion, the transaction does not complete. If it is necessary for the data transfer to be reliable, this reliability must be built into the application layer.
T/TCP has the same reliability as standard TCP.
An application may be intended to communicate with peer applications running on many and various types of hardware and software platforms. This interoperability is easier to achieve with a protocol stack that conforms to widely accepted and implemented standards than by using a proprietary protocol stack, or a new one that has not yet become widespread.
Standard TCP poses no obstacles to interoperability.
If reliability is required, a proprietary mechanism must be used on top of UDP. This could present an obstacle to interoperability.
Because of T/TCP's backward compatibility with standard TCP, its use poses little or no obstacle to interoperability. (An exception might be if a client-side T/TCP attempts to communicate with a defective standard TCP that cannot silently ignore the CC option.) While the performance benefits of T/TCP could not be realized in this “interoperability” mode, at least the transactions themselves could still take place.
If an application requires reliable data transfer, it must either use a reliable transport layer protocol, or provide the reliability itself. Including data transfer reliability mechanisms within an application can significantly increase the complexity and development cost of the application.
If the transport layer protocol implementation used is widely employed by other applications, the code that interfaces with the transport layer protocol can likely be "cloned" from other applications, rather than developed from scratch. Such an advantage would not be realized by using a proprietary or a new protocol stack that has not yet been used by many applications.
Because TCP is a reliable data transfer protocol, reliability mechanisms need not be included in applications that use TCP. Most implementations of TCP provide the de-facto standard BSD sockets API.
Most implementations of UDP provide the de-facto standard BSD sockets API. On the other hand, UDP provides no reliability, so the application must provide its own.
T/TCP provides the same reliability that TCP provides, so reliability mechanisms aren't necessary in applications that use T/TCP. Because the TCP API is augmented slightly to support T/TCP, application code to interface with T/TCP can't quite be “cloned” from an application that uses standard TCP. But the differences aren't significant, and that code can still serve as a starting point for the small modifications specific to utilizing the advantages of T/TCP.
I conducted a series of trials to compare the performance of T/TCP and standard TCP. I didn't test UDP because performance isn't the issue for that protocol; the lack of reliability is. The two test-bed configurations used for these trials are illustrated in Figure 1.
Figure 1: Test-bed configurations
The client machine in these tests was a PC running FreeBSD 4.4 Release 0 with a T/TCP patch. The router was a PC running Linux 2.2.14-5.0. The server was a PC running Windows 98 with the Fusion 6.5 TCP/IP stack.
In some of the tests, a tool was used on the router to impose a 250ms delay on each packet before being routed out the egress port. The purpose of this was to emulate the delay characteristics of a “long” link, such as a geosynchronous satellite link. In effect, there were four test-bed configurations, representing the following four types of network paths:
- HB-LD: high-bandwidth, low-delay
- HB-HD: high-bandwidth, high-delay
- LB-LD: low-bandwidth, low-delay
- LB-HD: low-bandwidth, high-delay
In each transaction, the client application sent a 1,000-byte request to the server application, which responded by echoing the request back to the client. Different client/server application programs were used for the T/TCP and the standard TCP tests because of the minor API differences.
In each trial, 100 transactions were run in rapid succession. The client application initiated each transaction immediately upon completion of the previous. A network monitor captured the traffic for offline analysis. Three trials were run in each of the four configurations, and, for each trial, the total elapsed time between the first packet of the first transaction and the final packet of the 100th transaction was recorded. Also, the number of TCP segments passing through the network in each trial was noted. The results of the three trials in each configuration were averaged (though, in fact, there was negligible variation in the results). The results are presented in Table 1.
Table 1: TCP vs. T/TCP performance comparison
In most of the configurations, the test results show the expected difference in segment count: three per transaction for T/TCP, nine per transaction for standard TCP. The exception is standard TCP in the low-bandwidth configurations, where the packet count is even higher than expected. This is because of unnecessary retransmissions of the 1,000-byte query by the client, caused by an initial retransmission timer value that is less than the time it takes for the client to receive the acknowledgement. With T/TCP this doesn't happen (except possibly on the first transaction), because T/TCP caches the measured round-trip time (upon which the retransmission timer is based) for use in subsequent connections. Standard TCP does not remember RTTs across connections.
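The spurious-retransmission condition can be illustrated numerically. The link speed, propagation delay, and initial retransmission timeout (RTO) below are assumed values chosen for illustration (the article does not give the test bed's link speed); the point is only that on a slow link, serializing a 1,000-byte segment and waiting for the ACK can outlast an uncached initial RTO.

```python
def ack_arrival_time(payload_bytes, link_bps, propagation_rtt):
    """Rough time to serialize the query onto a slow link and get the ACK back."""
    return payload_bytes * 8 / link_bps + propagation_rtt

# Assumed: a 1,000-byte query on a 2,400 bps serial link, near-zero propagation RTT.
t_ack = ack_arrival_time(1000, 2400, 0.01)  # about 3.34 seconds

initial_rto = 3.0  # assumed initial RTO for a stack with no cached RTT measurement
spurious_retransmit = initial_rto < t_ack
print(spurious_retransmit)  # True: the timer fires before the ACK can arrive
```

A T/TCP client that has cached the RTT from a prior connection starts with a realistic timer and avoids this entirely.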
In all of the configurations that were tested, the T/TCP transactions completed faster than the standard TCP transactions. This difference was most dramatic in the HB-HD configuration (“long, fat pipe”), where standard TCP transactions took about twice as long as T/TCP transactions. In this configuration, the delay is the dominant factor in the round-trip time; standard TCP requires two round-trip times, compared to one for T/TCP. The least significant difference is seen in the LB-LD configuration (“short, thin pipe”), where the transmission time of the 1,000-byte queries and responses is the dominant time-consuming component.
T/TCP and the future
As network-connected devices running query/response types of applications proliferate and compete for bandwidth that is limited compared to demand, it will be important to accomplish the individual transactions efficiently. Since many of these applications are interactive, and may operate over “long, fat networks,” response time is an important factor, too.
The choice of transport layer protocol affects bandwidth efficiency, and it can also be a significant factor in response time. While UDP is the most bandwidth-efficient, widely deployed, transport layer protocol, it's clearly not suitable for applications in which reliability of data transfer is essential. If you're looking for reliability, standard TCP is a better choice. But as this article demonstrates, T/TCP outperforms TCP in both bandwidth efficiency and response time, while retaining reliability.
Since the API of T/TCP is a relatively small extension of standard TCP's API, developers' experiences with TCP will serve them well in developing T/TCP applications.
For these reasons, the networking world is likely to see a significant number of T/TCP-enabled devices deployed in the coming years.
Michael Mansberg is a principal software engineer with NetSilicon Softworks Group. He has a degree in computer science from the University of Maryland and extensive experience implementing data communications protocols. Contact him at .