What is used by TCP and UDP to track multiple individual conversations between clients and servers?

This is where your network really gets moving! The transport layer uses two protocols: TCP and UDP. Think of TCP as getting a registered letter in the mail: you have to sign for it, so the sender knows the letter was received.

UDP is more like a regular, stamped letter. It arrives in your mailbox and, if it does, it is probably intended for you, but it might actually be for someone else who does not live there.

14.0.2 — What will I learn from this module?

Module Title: Transport Layer

Module Objective: Compare the operations of transport layer protocols in supporting end-to-end communication.

The transport layer is responsible for logical communications between applications running on different hosts. This may include services such as establishing a temporary session between two hosts and the reliable transmission of information for an application.

The transport layer has no knowledge of the destination host type, the type of media over which the data must travel, the path taken by the data, the congestion on a link, or the size of the network.

The transport layer includes two protocols:

  • Transmission Control Protocol (TCP)
  • User Datagram Protocol (UDP)

14.1.2 — Transport Layer Responsibilities

The transport layer has many responsibilities.

Tracking Individual Conversations

Each set of data flowing between a source application and a destination application is known as a conversation and is tracked separately.

Most networks have a limitation on the amount of data that can be included in a single packet. Therefore, data must be divided into manageable pieces.

Segmenting Data and Reassembling Segments

The transport layer divides the application data into smaller, appropriately sized blocks (i.e., segments or datagrams) that are easier to manage and transport.
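
As a rough illustration, the following Python sketch chunks a byte stream into fixed-size blocks, the way application data is divided before each block gets a transport layer header. The 1,460-byte block size is only an assumption based on a typical TCP maximum segment size.

    # Minimal sketch (not an actual TCP/UDP implementation): splitting an
    # application-layer byte stream into fixed-size blocks, the way the
    # transport layer segments data before adding a header to each block.
    def segment_data(data: bytes, block_size: int = 1460) -> list[bytes]:
        """Divide application data into blocks no larger than block_size."""
        return [data[i:i + block_size] for i in range(0, len(data), block_size)]

    if __name__ == "__main__":
        payload = b"x" * 4000                    # pretend application data
        blocks = segment_data(payload)
        print([len(b) for b in blocks])          # [1460, 1460, 1080]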

Add Header Information

The transport layer protocol also adds header information containing binary data organized into several fields to each block of data.

Identifying the Applications

The transport layer must be able to separate and manage multiple communications with different transport requirements. To identify each application, the transport layer assigns it a port number.

Conversation Multiplexing

Sending some types of data (e.g., a streaming video) across a network as one complete communication stream can consume all of the available bandwidth. Segmenting the data into smaller chunks enables many different communications, from many different users, to be interleaved (multiplexed) on the same network.

14.1.3 — Transport Layer Protocols

IP does not specify how the delivery or transportation of the packets takes place. The transport layer includes the TCP and UDP protocols. Different applications have different transport reliability requirements.

14.1.4 — Transmission Control Protocol (TCP)

TCP is considered a reliable, full-featured transport layer protocol, which ensures that all of the data arrives at the destination. TCP includes fields which ensure the delivery of the application data. These fields require additional processing by the sending and receiving hosts.

TCP divides data into segments.

TCP provides reliability and flow control using these basic operations:

  • Number and track data segments transmitted to a specific host from a specific application
  • Acknowledge received data
  • Retransmit any unacknowledged data after a certain amount of time
  • Sequence data that might arrive in the wrong order
  • Send data at an efficient rate that is acceptable to the receiver

TCP must first establish a connection between the sender and the receiver. This is why TCP is known as a connection-oriented protocol.
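
The sketch below uses Python's standard socket API to show this connection-oriented behavior; the host, port, and request are placeholder values. The operating system's TCP implementation performs the handshake, acknowledgments, retransmissions, and sequencing on the application's behalf.

    # Minimal sketch of TCP's connection-oriented behavior. The host and
    # port are placeholders; the OS TCP stack handles reliability.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as s:
        # The connection (three-way handshake) is already established here.
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = s.recv(4096)   # TCP delivers the reply bytes reliably and in order
        print(reply.decode(errors="replace"))
    # Leaving the "with" block closes the socket, which terminates the session.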

14.1.5 — User Datagram Protocol (UDP)

UDP is a simpler transport layer protocol than TCP. It does not provide reliability and flow control, which means it requires fewer header fields.

Because the sender and the receiver UDP processes do not have to manage reliability and flow control, UDP datagrams can be processed faster than TCP segments.

UDP divides data into datagrams that are also referred to as segments.

UDP is a connectionless protocol. Because UDP does not track information sent or received between the client and server, UDP is also known as a stateless protocol.

UDP is also known as a best-effort delivery protocol because there is no acknowledgment that the data is received at the destination.
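
For contrast, this minimal Python sketch sends a single UDP datagram with no connection setup; the address and port are placeholders, and nothing tells the sender whether the datagram ever arrived.

    # Minimal sketch of UDP's connectionless, best-effort behavior.
    # There is no handshake and no acknowledgment from the receiver.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello", ("192.0.2.10", 5005))   # one datagram, fire and forget
    sock.close()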

14.1.6 — The Right Transport Layer Protocol for the Right App

For some applications, UDP is the better choice because it requires less network overhead. UDP is preferable for applications such as Voice over IP (VoIP), where acknowledgments and retransmission would slow down delivery and make the voice conversation unacceptable.

Application developers must choose which transport protocol type is appropriate based on the requirements of the applications. Video may be sent over TCP or UDP. Applications that stream stored audio and video typically use TCP. The application uses TCP to perform buffering, bandwidth probing, and congestion control.

Real-time video and voice usually use UDP, but may also use TCP, or both UDP and TCP. A video conferencing application may use UDP by default, but because many firewalls block UDP, the application can also send its traffic over TCP.

The figure summarizes differences between UDP and TCP.

In addition to supporting the basic functions of data segmentation and reassembly, TCP also provides the following services:

  • Establishes a Session — TCP is a connection-oriented protocol that negotiates and establishes a permanent connection (or session) between source and destination devices prior to forwarding any traffic.
  • Ensures Reliable Delivery — For many reasons, it is possible for a segment to become corrupted or lost completely, as it is transmitted over the network. TCP ensures that each segment that is sent by the source arrives at the destination.
  • Provides Same-Order Delivery — Because networks may provide multiple routes that can have different transmission rates, data can arrive in the wrong order. By numbering and sequencing the segments, TCP ensures segments are reassembled into the proper order.
  • Supports Flow Control — Network hosts have limited resources (i.e., memory and processing power). When TCP is aware that these resources are overtaxed, it can request that the sending application reduce the rate of data flow. Flow control can prevent the need for retransmission of the data when the resources of the receiving host are overwhelmed.

To learn more about TCP, search the internet for RFC 793.

14.2.2 — TCP Header

TCP is a stateful protocol which means it keeps track of the state of the communication session. TCP records which information it has sent and which information has been acknowledged.

A TCP segment adds 20 bytes (i.e., 160 bits) of overhead when encapsulating the application layer data.

14.2.3 — TCP Header Fields

The table identifies and describes the ten fields in a TCP header.
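
As an illustration only (not the course table), the following Python sketch unpacks those ten fixed fields from the first 20 bytes of a segment; the sample values are invented for the demonstration.

    # Illustrative sketch: unpacking the ten fixed TCP header fields from
    # the first 20 bytes of a segment with Python's struct module.
    import struct

    def parse_tcp_header(raw: bytes) -> dict:
        (src_port, dst_port, seq, ack,
         offset_reserved, flags, window, checksum, urgent) = struct.unpack("!HHIIBBHHH", raw[:20])
        return {
            "source_port": src_port,
            "destination_port": dst_port,
            "sequence_number": seq,
            "acknowledgment_number": ack,
            "header_length_bytes": (offset_reserved >> 4) * 4,  # data offset in 32-bit words
            "control_bits": flags & 0x3F,                       # URG/ACK/PSH/RST/SYN/FIN
            "window_size": window,
            "checksum": checksum,
            "urgent_pointer": urgent,
        }

    # Made-up sample segment header: port 49152 -> 80, PSH+ACK set (0x18).
    sample = struct.pack("!HHIIBBHHH", 49152, 80, 1, 1, 5 << 4, 0x18, 65535, 0, 0)
    print(parse_tcp_header(sample))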

14.2.4 — Apps that use TCP

TCP is a good example of how the different layers of the TCP/IP protocol suite have specific roles.

TCP handles all tasks associated with dividing the data stream into segments, providing reliability, controlling data flow, and reordering segments.

14.3.1 — UDP Overview

UDP is a best-effort, lightweight transport protocol.

UDP is such a simple protocol that it is usually described in terms of what it does not do compared to TCP.

UDP features include the following:

  • Data is reconstructed in the order that it is received.
  • Any segments that are lost are not resent.
  • There is no session establishment.
  • The sender is not informed about resource availability.

To learn more about UDP, search the internet for the UDP RFC (RFC 768).

14.3.2 — UDP Header

UDP is a stateless protocol, meaning neither the client nor the server tracks the state of the communication session.

The blocks of communication in UDP are called datagrams, or segments. These datagrams are sent as best effort by the transport layer protocol.

The UDP header is far simpler than the TCP header and requires only 8 bytes (i.e., 64 bits).

14.3.3 — UDP Header Fields

The table identifies and describes the four fields in a UDP header.
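
Again as an illustration, this sketch unpacks the four 16-bit UDP header fields from an 8-byte header; the sample values are invented.

    # Illustrative sketch: the four UDP header fields (source port,
    # destination port, length, checksum) are each 16 bits, 8 bytes total.
    import struct

    def parse_udp_header(raw: bytes) -> dict:
        src_port, dst_port, length, checksum = struct.unpack("!HHHH", raw[:8])
        return {"source_port": src_port, "destination_port": dst_port,
                "length": length, "checksum": checksum}

    # Made-up sample: port 49152 -> 53, header plus a 32-byte payload.
    sample = struct.pack("!HHHH", 49152, 53, 8 + 32, 0)
    print(parse_udp_header(sample))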

14.3.4 — Apps that use UDP

There are three types of applications that are best suited for UDP:

  • Live video and multimedia applications — These applications can tolerate some data loss, but require little or no delay. Examples include VoIP and live streaming video.
  • Simple request and reply applications — Applications with simple transactions where a host sends a request and may or may not receive a reply. Examples include DNS and DHCP.
  • Applications that handle reliability themselves — Unidirectional communications where flow control, error detection, acknowledgments, and error recovery are not required, or can be handled by the application. Examples include SNMP and TFTP.

DNS and SNMP use UDP by default, but both can also use TCP.

DNS will use TCP if the DNS request or DNS response is more than 512 bytes.

Similarly, under some situations the network administrator may want to configure SNMP to use TCP.

The TCP and UDP transport layer protocols use port numbers to manage multiple, simultaneous conversations.

The source port number is associated with the originating application on the local host, whereas the destination port number is associated with the destination application on the remote host.

A server can offer more than one service simultaneously, such as web services on port 80 while offering File Transfer Protocol (FTP) connection establishment on port 21.

14.4.2 — Socket Pairs

The combination of the source IP address and source port number, or the destination IP address and destination port number, is known as a socket.

The socket is used to identify the server and service being requested by the client. A client socket might look like this, with 1099 representing the source port number: 192.168.1.5:1099

The socket on a web server (destination) might be 192.168.1.7:80.

Sockets enable multiple processes, running on a client, to distinguish themselves from each other, and multiple connections to a server process to be distinguished from each other.
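
A small Python sketch of a socket pair is shown below; example.com and port 80 are placeholder values, and the local port is whatever dynamic port the operating system assigns.

    # Minimal sketch of a socket pair: after connecting, the local socket
    # (source IP:port) and the remote socket (destination IP:port)
    # together identify this one conversation.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as s:
        local_ip, local_port = s.getsockname()     # client IP and dynamic source port
        remote_ip, remote_port = s.getpeername()   # server IP and well-known port 80
        print(f"client socket: {local_ip}:{local_port}")
        print(f"server socket: {remote_ip}:{remote_port}")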

14.4.3 — Port Number Groups

The 16 bits used to identify the source and destination port numbers provide a range of ports from 0 through 65535.

The IANA has divided the range of numbers into the following three port groups:

  • Well-known Ports (0 to 1,023) — Reserved for common applications and services such as web, email, and file transfer.
  • Registered Ports (1,024 to 49,151) — Assigned by IANA to specific processes or applications at the request of an entity.
  • Private and/or Dynamic Ports (49,152 to 65,535) — Also known as ephemeral ports; usually assigned dynamically by the client operating system when a connection is initiated.

The table displays some common well-known port numbers and their associated applications.

Well-Known Port Numbers

Some applications may use both TCP and UDP. For example, DNS uses UDP when clients send requests to a DNS server. However, communication between two DNS servers always uses TCP.

14.4.4 — The netstat Command

Sometimes it is necessary to know which active TCP connections are open and running on a networked host.

Netstat is an important network utility that can be used to verify those connections.

Enter the command netstat to list the protocols in use, the local address and port numbers, the foreign address and port numbers, and the connection state.

The -n option can be used to display IP addresses and port numbers in their numerical form.
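
If you prefer to capture that output from a script, the following sketch simply runs netstat -n with Python's subprocess module; it assumes the netstat utility is installed and on the PATH.

    # Small helper (a sketch, not part of the course): run "netstat -n"
    # and print the raw output it produces.
    import subprocess

    result = subprocess.run(["netstat", "-n"], capture_output=True, text=True)
    print(result.stdout)   # protocol, local address, foreign address, state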

Each application process running on a server is configured to use a port number. The port number is either automatically assigned or configured manually by a system administrator.

Clients Sending TCP Requests

Client 1 is requesting web services and Client 2 is requesting email service.

Request Destination Ports

Client 1 is requesting web services using well-known destination port 80 (HTTP) and Client 2 is requesting email service using well-known port 25 (SMTP).

Request Source Ports

Client requests dynamically generate a source port number. In this case, Client 1 is using source port 49152 and Client 2 is using source port 51152.

Response Destination Ports

Notice that the server response to the web request now has destination port 49152 and the email response now has destination port 51152.

Response Source Ports

The source port in the server response is the original destination port in the initial requests.

14.5.2 — TCP Connection Establishment (Greeting/Request Connection)

Step 1. SYN

The initiating client requests a client-to-server communication session with the server.

Step 2. ACK and SYN

The server acknowledges the client-to-server communication session and requests a server-to-client communication session.

Step 3. ACK

The initiating client acknowledges the server-to-client communication session.

The three-way handshake validates that the destination host is available to communicate. In this example, host A has validated that host B is available.

14.5.3 — Session Termination (End the Session)

To close a connection, the Finish (FIN) control flag must be set in the segment header. To end each one-way TCP session, a two-way handshake, consisting of a FIN segment and an Acknowledgment (ACK) segment, is used.

Step 1. FIN

When the client has no more data to send in the stream, it sends a segment with the FIN flag set.

Step 2. ACK

The server sends an ACK to acknowledge the receipt of the FIN to terminate the session from client to server.

Step 3. FIN

The server sends a FIN to the client to terminate the server-to-client session.

Step 4. ACK

The client responds with an ACK to acknowledge the FIN from the server.

When all segments have been acknowledged, the session is closed.

14.5.4 — TCP Three-way Handshake Analysis

These are the functions of the three-way handshake:

  • It establishes that the destination device is present on the network.
  • It verifies that the destination device has an active service and is accepting requests on the destination port number that the initiating client intends to use.
  • It informs the destination device that the source client intends to establish a communication session on that port number.

The connection and session mechanisms enable the TCP reliability functions.

Control Bits Field

The six bits in the Control Bits field of the TCP segment header are also known as flags.

The six control bit flags are as follows:

  • URG — Urgent pointer field significant
  • ACK — Acknowledgment flag used in connection establishment and session termination
  • PSH — Push function
  • RST — Reset the connection when an error or timeout occurs
  • SYN — Synchronize sequence numbers used in connection establishment
  • FIN — No more data from sender and used in session termination

Search the internet to learn more about the PSH and URG flags.
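
As a small illustration, the sketch below decodes those six flags from the flags byte of a TCP header using the RFC 793 bit positions; the sample value 0x12 represents a SYN-ACK.

    # Illustrative sketch: decoding the six control bit flags from the
    # flags byte of a TCP header (bit positions per RFC 793).
    FLAG_BITS = {"URG": 0x20, "ACK": 0x10, "PSH": 0x08,
                 "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

    def decode_flags(flags_byte: int) -> list[str]:
        return [name for name, bit in FLAG_BITS.items() if flags_byte & bit]

    print(decode_flags(0x12))   # ['ACK', 'SYN'] -- the server's reply in the handshake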

14.5.5 — Video — TCP 3-Way Handshake

TCP can also help maintain the flow of packets so that devices do not become overloaded.

For the original message to be understood by the recipient, all the data must be received and the data in these segments must be reassembled into the original order.

During session setup, an initial sequence number (ISN) is set. This ISN represents the starting value of the bytes that are transmitted to the receiving application.

14.6.2 — Video — TCP Reliability — Sequence Number and Acknowledgments

14.6.3 — TCP Reliability — Data Loss and Retransmission

The sequence (SEQ) number and acknowledgement (ACK) number are used together to confirm receipt of the bytes of data contained in the transmitted segments.

Prior to later enhancements, TCP could only acknowledge the next byte expected.

For example, in the figure, using segment numbers for simplicity, host A sends segments 1 through 10 to host B. If all the segments arrive except for segments 3 and 4, host B would reply with an acknowledgment specifying that the next segment expected is segment 3. Host A has no idea if any other segments arrived or not. Host A would, therefore, resend segments 3 through 10. If all the resent segments arrived successfully, segments 5 through 10 would be duplicates. This can lead to delays, congestion, and inefficiencies.

With selective acknowledgment (SACK), negotiated during the three-way handshake, if all the segments arrive except for segments 3 and 4, host B can acknowledge that it has received segments 1 and 2 (ACK 3) and selectively acknowledge segments 5 through 10 (SACK 5–10). Host A would only need to resend segments 3 and 4.
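
The difference can be sketched with segment numbers, as in the example above; this is only an illustration of the bookkeeping, not how a real TCP stack stores SACK blocks.

    # Illustrative sketch: what the sender must retransmit with cumulative
    # acknowledgment only versus with selective acknowledgment (SACK).
    sent = set(range(1, 11))          # host A sent segments 1-10
    received = sent - {3, 4}          # host B received everything except 3 and 4

    # Cumulative ACK: host B can only say "next expected is 3",
    # so host A resends 3 through 10.
    cumulative_ack = min(sent - received)            # 3
    resend_cumulative = {s for s in sent if s >= cumulative_ack}

    # SACK: host B also reports the ranges it did receive (5-10),
    # so host A resends only the gaps.
    resend_sack = sent - received                    # {3, 4}

    print(sorted(resend_cumulative))  # [3, 4, 5, 6, 7, 8, 9, 10]
    print(sorted(resend_sack))        # [3, 4]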

14.6.4 — Video — TCP Reliability — Data Loss and Retransmission

14.6.5 — TCP Flow Control — Window Size and Acknowledgments

Flow control helps maintain the reliability of TCP transmission by adjusting the rate of data flow between source and destination for a given session. The amount of data that the destination can receive and process reliably is advertised in the window size field of the TCP header.

TCP Window Size Example

The window size determines the number of bytes that can be sent before expecting an acknowledgment. The acknowledgment number is the number of the next expected byte.

For example, it is typical that PC B would not wait until all 10,000 bytes have been received before sending an acknowledgment. This means PC A can adjust its send window as it receives acknowledgments from PC B. As shown in the figure, when PC A receives an acknowledgment with acknowledgment number 2,921 (the next expected byte), its send window slides forward by 2,920 bytes. This moves the right edge of the send window from byte 10,000 to byte 12,920. PC A can now continue to send up to another 10,000 bytes to PC B, as long as it does not send past byte 12,920.
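
The arithmetic from this example, written out as a short sketch:

    # Worked arithmetic for the window example above (values from the text):
    # the send window starts at byte 1 with a window size of 10,000 bytes.
    window_size = 10_000
    ack_number  = 2_921            # next byte PC B expects, so bytes 1-2,920 arrived

    bytes_acknowledged = ack_number - 1            # 2,920 bytes slide out of the window
    new_right_edge = ack_number + window_size - 1  # 12,920: highest byte PC A may now send
    print(bytes_acknowledged, new_right_edge)      # 2920 12920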

14.6.6 — TCP Flow Control — Maximum Segment Size (MSS)

The Maximum Segment Size (MSS) is the largest amount of data, in bytes, that the destination device can receive in a single TCP segment. The MSS is specified in the Options field of the TCP header.

A common MSS is 1,460 bytes when using IPv4. A host determines the value of its MSS field by subtracting the IP and TCP headers from the Ethernet maximum transmission unit (MTU). On an Ethernet interface, the default MTU is 1500 bytes. Subtracting the IPv4 header of 20 bytes and the TCP header of 20 bytes, the default MSS is 1460 bytes, as shown in the figure.
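
The same calculation as a one-line sketch:

    # Worked arithmetic for the MSS figure above: the default Ethernet MTU
    # minus the fixed IPv4 and TCP header sizes gives the default MSS.
    mtu         = 1500   # default Ethernet MTU in bytes
    ipv4_header = 20     # bytes, without options
    tcp_header  = 20     # bytes, without options

    mss = mtu - ipv4_header - tcp_header
    print(mss)           # 1460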

14.6.7 — TCP Flow Control — Congestion Avoidance

Whenever there is congestion, retransmission of lost TCP segments from the source will occur. If the retransmission is not properly controlled, the additional retransmission of the TCP segments can make the congestion even worse.

To avoid and control congestion, TCP employs several congestion handling mechanisms, timers, and algorithms.

As illustrated in the figure, PC A senses there is congestion and therefore, reduces the number of bytes it sends before receiving an acknowledgment from PC B.

Notice that it is the source that is reducing the number of unacknowledged bytes it sends and not the window size determined by the destination.

Explanations of actual congestion handling mechanisms, timers, and algorithms are beyond the scope of this course.

UDP is perfect for communications that need to be fast, like VoIP.

As shown in the figure, UDP does not establish a connection. UDP provides low overhead data transport because it has a small datagram header and no network management traffic.

14.7.2 — UDP Datagram Reassembly

When UDP datagrams are sent to a destination, they often take different paths and arrive in the wrong order. UDP does not track sequence numbers the way TCP does, so UDP has no way to reorder the datagrams into their transmission order.

UDP: Connectionless and Unreliable

14.7.3 — UDP Server Processes and Requests

UDP-based server applications are assigned well-known or registered port numbers.

When UDP receives a datagram destined for one of these ports, it forwards the application data to the appropriate application based on its port number.
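
A minimal Python sketch of this server-side behavior is shown below; port 5005 is just a placeholder, not a real well-known or registered service port. Note how the reply is addressed to the client's source port, which is what reverses the source and destination ports in the response.

    # Minimal sketch of a UDP server process: it binds to one port, and
    # the OS hands it every datagram addressed to that port.
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("0.0.0.0", 5005))                 # listen on the chosen port

    data, client_addr = server.recvfrom(1024)      # client_addr = (client IP, client source port)
    print(f"request from {client_addr}: {data!r}")

    # The reply goes back to the client's source port, so the request's
    # source and destination ports are reversed in the response.
    server.sendto(b"reply", client_addr)
    server.close()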

The Remote Authentication Dial-in User Service (RADIUS) server shown in the figure provides authentication, authorization, and accounting services to manage user access. The operation of RADIUS is beyond the scope of this course.

14.7.4 — UDP Client Processes

Clients Sending UDP Requests

Client 1 is sending a DNS request while Client 2 is requesting RADIUS authentication services of the same server.

UDP Request Destination Ports

Client 1 is sending a DNS request using the well-known destination port 53 while Client 2 is requesting RADIUS authentication services using the registered destination port 1812.

UDP Request Source Ports

The requests of the clients dynamically generate source port numbers. In this case, Client 1 is using source port 49152 and Client 2 is using source port 51152.

UDP Response Destination

When the server responds to the client requests, it reverses the destination and source ports of the initial request. In the figure, the server response to the DNS request now has destination port 49152, and the RADIUS authentication response now has destination port 51152.

UDP Response Source Ports

The source ports in the server response are the original destination ports in the initial requests.

References: