TCP: Overview
Transport Layer
TCP Congestion Control, Connection Management, etc.
TCP, Part 3
Multiple senders, routers with finite buffers, multihop paths
C-to-A TCP connection
B-to-D TCP connection
lost packets (buffer overflow at routers)
long delays (queuing in router buffers)
a top-10 problem!
Manifestations of Congestion
For a demo of congestion in the network, see the QUEUING & LOSS Applet
Example: Congestion scenario
A particularly interesting case is when the emission and transmission rates are the same, for example when both are 500 packets/sec. If you let the applet run for a very long time, you’ll eventually see the queue fill up and overflow. Indeed when the two rates are the same (that is, ρ = 1), the queue grows without bound (with random inter-arrival times), as described in the text.
Emission rate=500 packets/sec, Transmission rate=1000 packets/sec
Emission rate=500 packets/sec, Transmission rate=350 packets/sec
Emission rate=500 packets/sec, Transmission rate=500 packets/sec
Approaches towards congestion control
End-to-end congestion control:
no explicit feedback from network
congestion inferred by end-systems from observed packet loss & delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to End Systems in the form of:
single bit indicating link congestion
explicit transmission rate the sender should send at
(ATM ABR, IBM SNA, DECbit, TCP/IP ECN)
Two broad approaches towards congestion control:
Explicit Congestion Notification (ECN)
TCP Congestion Control
How does the TCP sender limit the rate at which it sends traffic into its connection?
Amount of unACKed data at sender = LastByteSent − LastByteAcked ≤ min(CongWin, RcvWindow)
This constraint indirectly limits the sender’s send rate
Assumptions:
TCP receive buffer is very large – no RcvWindow constraint
Amt. of unACKed data at sender is solely limited by CongWin
Packet loss and packet transmission delays are negligible
Sending rate ≈ CongWin / RTT bytes/sec (approx.)
By adjusting CongWin, sender can therefore adjust the rate at which it sends data into its connection
New variable! – Congestion Window
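As a sanity check on the rate relationship above, here is a minimal sketch; the window size and RTT values are illustrative assumptions, not from the slides:

```python
# Sketch: approximate TCP sending rate as cwnd / RTT.
# The cwnd and RTT values below are illustrative, not from a real connection.

MSS = 1460                      # bytes per segment
RTT = 0.1                       # seconds

def sending_rate_bps(cwnd_bytes, rtt_s):
    """Send cwnd bytes, then wait roughly one RTT for the ACKs."""
    return cwnd_bytes * 8 / rtt_s

cwnd = 10 * MSS                 # assume a 10-segment congestion window
print(sending_rate_bps(cwnd, RTT))   # 1168000.0 bps, about 1.17 Mbps
```

Doubling cwnd roughly doubles this rate, which is how adjusting CongWin adjusts the send rate.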
TCP Congestion Control
TCP uses ACKs to trigger (“clock”) its increase in congestion window size – “self-clocking”
Arrival of ACKs – indication to the sender that all is well
Slow Rate of ACK arrival
Congestion window will be increased at a relatively slow rate
High rate of ACK arrival
Congestion window will be increased more quickly
TCP Congestion Control
How does TCP perceive that there is congestion on the path?
“Loss Event” – under excessive congestion, router buffers along the path overflow, causing datagrams to be dropped, which in turn results in a “loss event” at the sender
Timeout – no ACK is received after segment loss
Receipt of three duplicate ACKs – segment loss is followed by three duplicate ACKs received at the sender
Challenges faced by TCP
How can TCP senders adjust their sending rate so that they make use of the available bandwidth without congesting the network?
Limitations:
TCP senders are not explicitly coordinated by some server.
Each TCP sender acts only on local information, asynchronously from other TCP senders
There is no explicit signaling of congestion state by the network
Major Components
TCP Congestion Control Details
Slow-start
Congestion-avoidance
Fast recovery
recommended, but not required component of TCP
TCP Slow-Start
when connection begins, increase rate exponentially until first loss event:
initially cwnd = 1 MSS
double cwnd every RTT
done by incrementing cwnd by 1 MSS for every ACK received
summary: initial rate is slow but ramps up exponentially fast (doubling of the sending rate every RTT)
one segment
two segments
four segments
the increase is actually fast, but since the sender starts with 1 segment, it takes several RTTs (on the order of a hundred msecs) to get the throughput high
Time to reach a cwnd of N segments ≈ log2(N) RTTs
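Under the idealised doubling-per-RTT model, the time to reach a window of N segments can be counted directly; the segment counts below are illustrative:

```python
# Sketch: RTTs needed for slow start to reach a target window of N segments,
# starting from 1 MSS and doubling once per RTT (idealised model).

import math

def rtts_to_reach(n_segments):
    cwnd, rtts = 1, 0
    while cwnd < n_segments:
        cwnd *= 2               # one doubling per RTT
        rtts += 1
    return rtts

print(rtts_to_reach(8))         # 3 RTTs: 1 -> 2 -> 4 -> 8
print(math.log2(8))             # closed form: log2(N) = 3.0
```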
End of Slow-Start’s Exponential Growth
After 3 dup ACKs:
Fast recovery
cwnd is cut approximately in half
window then grows linearly
But after timeout event:
Start slow-start anew
threshold = cwnd / 2
cwnd is set to 1 MSS
cwnd then grows exponentially
Up to a threshold, then grows linearly
3 dup ACKs indicate that the network is capable of delivering some segments
timeout indicates a “more alarming” congestion scenario
Philosophy:
Q: after timeout event, when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.
CA Implementation:
variable ssthresh (slow-start threshold)
When a new ACK arrives, the TCP sender increases its cwnd by MSS * (MSS/cwnd) bytes.
End of Slow-Start’s Exponential Growth
(A) Receipt of 3 duplicate ACKs:
TCP Reno – Fast recovery: after exceeding the threshold, switch to the Congestion-avoidance (CA) phase
TCP Tahoe – cwnd is cut to 1 MSS (no fast recovery)
Congestion Avoidance (CA) Implementation:
variable ssthresh (slow-start threshold)
When a new ACK arrives, the TCP sender increases its cwnd by MSS * (MSS/cwnd).
Fast recovery
Receipt of 3 duplicate ACKs
MSS=1,460 bytes
Cwnd=11,680 bytes (=8MSS)
Each arriving ACK (assuming that 1 ACK is received per segment) increases cwnd by (1/8)*MSS.
e.g. After receipt of 1 ACK,
cwnd = 11,680 + ((1460)*(1/8))=11,863 bytes
Therefore, cwnd would increase by 1MSS after all 8 ACKs have been received.
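The worked example above can be reproduced directly; note that the growth over 8 ACKs is slightly less than a full MSS, because the per-ACK increment shrinks as cwnd grows:

```python
# Sketch reproducing the worked example above: in congestion avoidance,
# each ACK grows cwnd by MSS * (MSS / cwnd) bytes.

MSS = 1460
cwnd = 8 * MSS                                # 11,680 bytes

# After the first ACK:
cwnd_after_one = cwnd + MSS * (MSS / cwnd)    # 11,680 + 182.5
print(cwnd_after_one)                         # 11862.5, i.e. ~11,863 bytes

# After all 8 ACKs for this window, cwnd has grown by roughly one MSS
# (slightly less, since the increment shrinks as cwnd grows).
cwnd2 = 8 * MSS
for _ in range(8):
    cwnd2 += MSS * (MSS / cwnd2)
print(cwnd2 - 8 * MSS)
```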
TCP Sender Congestion Control
State: Slow Start (SS) | Event: ACK receipt for previously unACKed data
Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to Congestion Avoidance (CA)
Commentary: Resulting in the doubling of CongWin every RTT

State: Congestion Avoidance (CA) | Event: ACK receipt for previously unACKed data
Action: CongWin = CongWin + (MSS * (MSS/CongWin))
Commentary: Additive increase, resulting in an increase of CongWin by 1 MSS every RTT

State: SS or CA | Event: Loss event detected by triple duplicate ACK
Action: Threshold = CongWin / 2; CongWin = Threshold + 3*MSS; set state to Congestion Avoidance (CA)
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

State: SS or CA | Event: Timeout
Action: Threshold = CongWin / 2; CongWin = 1 MSS; set state to Slow Start
Commentary: Enter Slow Start

State: SS or CA | Event: Duplicate ACK
Action: Increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
Summary: TCP Congestion Control
Initialisation: cwnd = 1 MSS; ssthresh = 64 KB; dupACKcount = 0 → enter Slow Start

Slow Start (SS):
new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
duplicate ACK: dupACKcount++
cwnd > ssthresh: move to Congestion Avoidance
dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3·MSS; retransmit missing segment; move to Fast Recovery
timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; stay in Slow Start

Congestion Avoidance (CA):
new ACK: cwnd = cwnd + MSS·(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
duplicate ACK: dupACKcount++
dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3·MSS; retransmit missing segment; move to Fast Recovery
timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to Slow Start

Fast Recovery (FR):
duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
new ACK: cwnd = ssthresh; dupACKcount = 0; move to Congestion Avoidance
timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to Slow Start
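The transitions summarised above can be collected into a toy TCP Reno state machine. The class, event names and driver below are an illustrative sketch, not a real TCP implementation (which tracks bytes and retransmission timers rather than abstract events); units are simplified to segments, with ssthresh initialised to 64 segments:

```python
# Sketch: a toy TCP Reno congestion-control state machine following the
# slow-start / congestion-avoidance / fast-recovery transitions above.

MSS = 1

class RenoSender:
    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64          # initial threshold, in segments (simplified)
        self.dup_acks = 0
        self.state = "slow_start"

    def on_new_ack(self):
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh            # deflate window
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += MSS                     # exponential growth
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                    # congestion avoidance
            self.cwnd += MSS * MSS / self.cwnd   # ~1 MSS per RTT
        self.dup_acks = 0

    def on_dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += MSS                     # window inflation
            return
        self.dup_acks += 1
        if self.dup_acks == 3:                   # fast retransmit
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh + 3 * MSS
            self.state = "fast_recovery"

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1 * MSS
        self.dup_acks = 0
        self.state = "slow_start"

s = RenoSender()
for _ in range(5):              # five new ACKs in slow start: cwnd 1 -> 6
    s.on_new_ack()
print(s.state, s.cwnd)          # slow_start 6

for _ in range(3):              # triple duplicate ACK: ssthresh=3, cwnd=3+3
    s.on_dup_ack()
print(s.state, s.cwnd)          # fast_recovery 6.0

s.on_timeout()
print(s.state, s.cwnd)          # slow_start 1
```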
Sample PhD thesis on Congestion
Link: http://www.aciri.org/tfrc/tcp-friendly.TR.ps.gz
J. Padhye, “Towards a Comprehensive Congestion Control Framework for Continuous Media Flows in Networks”, Ph.D. Thesis, University of Massachusetts Amherst, March 3, 2000.
Papers on Analytical Modeling of TCP/IP
Link: http://www.wu.ece.ufl.edu/books/CS/networks/TCPIP.html
TCP congestion control algorithm:
additive increase, multiplicative decrease (AIMD)
General approach: increase transmission rate (window size), probing for usable bandwidth, until loss occurs
additive increase: increase cwnd by 1 MSS every RTT until loss is detected
multiplicative decrease: cut cwnd in half after loss
cwnd: congestion window size
sawtooth behavior:
– probing for bandwidth,
– ignoring the Slow-Start phase (typically short!);
– considering only 3 duplicate ACKs
TCP congestion control acts as a distributed, asynchronous optimisation algorithm in which several aspects of user and network performance are optimised simultaneously.
Highly-Simplified Macroscopic Description of TCP throughput
what’s the average throughput of TCP as a function of window size and RTT?
ignore slow start (typically a very short phase)
let W be the window size when loss occurs.
when window is W, throughput is W/RTT
just after loss, window drops to W/2, throughput to W/(2·RTT)
Throughput then increases linearly (by 1 MSS every RTT)
Average Throughput: 0.75·W/RTT
(Based on Idealised model for the steady-state dynamics of TCP)
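The 0.75·W/RTT figure follows from averaging the sawtooth; a numeric check under the idealised model (W and RTT below are illustrative):

```python
# Sketch: average the idealised sawtooth. The window ramps linearly from
# W/2 to W (one MSS-sized step per RTT), then halves on loss.

W = 80            # window, in segments, at which loss occurs (illustrative)
RTT = 0.1         # seconds

cycle = list(range(W // 2, W + 1))      # cwnd values over one sawtooth cycle
avg_window = sum(cycle) / len(cycle)    # mean of a linear ramp: (W/2 + W)/2
print(avg_window / W)                   # 0.75

avg_throughput = avg_window / RTT       # segments per second
print(avg_throughput)                   # 600.0
```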
TCP Futures: TCP over “long, fat pipes”
Example: GRID & CLOUD computing applications
1500-byte segments, 100 ms RTT, desired average throughput of 10 Gbps
requires window size W = 111,111 in-flight segments
Throughput in terms of loss rate L: Throughput ≈ (1.22 · MSS) / (RTT · √L)
➜ L = 2·10⁻¹⁰ – a very small loss rate! (1 loss event every 5 billion segments)
new versions of TCP are needed for high-speed environments!
Computed from 0.75W/RTT
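The relation used here is the standard macroscopic TCP throughput formula, throughput ≈ 1.22·MSS/(RTT·√L); solving it for L with the slide's numbers gives the quoted loss rate (a sketch assuming that formula):

```python
# Sketch: solve throughput ≈ 1.22 * MSS / (RTT * sqrt(L)) for the loss rate L,
# using the slide's numbers (1500-byte segments, 100 ms RTT, 10 Gbps target).

MSS_bits = 1500 * 8
RTT = 0.1
target_bps = 10e9

sqrt_L = 1.22 * MSS_bits / (RTT * target_bps)
L = sqrt_L ** 2
print(L)            # ~2.1e-10: roughly one loss event per 5 billion segments
```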
TCP Fairness
Fairness goal: if N TCP sessions share same bottleneck link, each should get an average transmission rate of R/N , an equal share of the link’s bandwidth
[Figure: TCP connections 1 and 2 sharing a bottleneck link of capacity R]
The two connections may have different start times and different window sizes; both have a large amount of data to send
Analysis of 2 connections sharing a link
Assumptions:
Link with transmission rate of R
Each connection has the same MSS and RTT, and has a large amount of data to send
No other TCP connections or UDP datagrams traverse the shared link
Ignore the slow-start phase of TCP; operating in congestion-avoidance mode (AIMD) at all times
Goal: adjust the sending rate of the two connections to allow for equal bandwidth sharing
Why is TCP fair?
(Congestion Control Mechanism)
Two competing sessions:
Additive increase gives a slope of 1 as throughput increases
multiplicative decrease: decreases throughput proportionally
[Figure: throughput of Connection 1 (x-axis) vs Connection 2 (y-axis), showing the equal bandwidth share line, the full bandwidth utilisation line, additive-increase segments (congestion avoidance) and a halving of the window at each loss. A point on the graph depicts the amount of link bandwidth jointly consumed by the two connections.]
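The convergence argument behind this diagram can be checked with a tiny synchronous AIMD simulation; the link capacity, starting rates and step count below are illustrative assumptions:

```python
# Sketch: two synchronised AIMD flows sharing a link of capacity R converge
# toward an equal split, regardless of starting rates (illustrative model).

R = 100.0                  # link capacity, arbitrary units (assumed)
x1, x2 = 10.0, 70.0        # deliberately unequal starting throughputs

for _ in range(200):
    if x1 + x2 > R:        # loss: both flows halve (multiplicative decrease)
        x1, x2 = x1 / 2, x2 / 2
    else:                  # additive increase: +1 unit per RTT for each flow
        x1, x2 = x1 + 1, x2 + 1

print(abs(x1 - x2))        # gap shrinks toward 0: each flow approaches R/2
```

Each multiplicative decrease halves the gap between the flows while additive increase preserves it, which is exactly the geometric argument for why AIMD drives the flows toward the equal-bandwidth-share line.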
TCP Latency Modeling
In practice, client/server applications with smaller RTTs grab the available bandwidth more quickly as it becomes free, and therefore achieve higher throughputs
Multiple parallel TCP connections allow one application to get a bigger share of the bandwidth
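Under per-connection fairness, an application's share grows with the number of connections it opens. The helper below is a hypothetical illustration of the arithmetic only (the function name and numbers are assumptions, not part of any TCP API):

```python
# Sketch: if a bottleneck link of rate R is shared roughly equally per
# connection, an application holding k of the n connections gets R * k / n.
# app_share is a hypothetical helper for this arithmetic.

def app_share(R, k, n):
    return R * k / n

R = 10e6                        # assume a 10 Mbps bottleneck link
print(app_share(R, 1, 4))       # 1 of 4 connections: 2500000.0 (2.5 Mbps)
print(app_share(R, 3, 6))       # 3 of 6 connections: 5000000.0 (half the link)
```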
Multiple End Systems sharing a link
R bps – link’s transmission rate
Loopholes in TCP:
[Figure: three applications each using 1 TCP connection, and one application opening 3 TCP connections (e.g. a multithreaded implementation), all sharing the same link]
[Figure: cwnd sawtooth oscillating between 16 Kbytes and 24 Kbytes, with congestion at each peak; average throughput of a connection]