
Transport Layer
All material copyright 1996-2012
J.F Kurose and K.W. Ross, All Rights Reserved
George Parisis
School of Engineering and Informatics
University of Sussex

Transport Layer
Outline
transport-layer services
multiplexing and demultiplexing
connectionless transport: UDP
principles of reliable data transfer
connection-oriented transport: TCP

segment structure
reliable data transfer
flow control
connection management
principles of congestion control
TCP congestion control

TCP flow control

[figure: receiver protocol stack — application process on top of TCP code on top of IP code; segments arrive from the sender into TCP socket receiver buffers, which sit between the OS and the application]

the application may remove data from the TCP socket buffers more slowly than the TCP receiver is delivering it (i.e., more slowly than the sender is sending)

flow control

receiver controls sender, so sender won’t overflow receiver’s buffer by transmitting too much, too fast

TCP flow control
[figure: receiver-side buffering — RcvBuffer holds buffered data plus free buffer space (rwnd); TCP segment payloads arrive from the sender and are delivered to the application process]
receiver “advertises” free buffer space by including rwnd value in TCP header of receiver-to-sender segments

RcvBuffer size set via socket options (typical default is 4096 bytes)
many operating systems autoadjust RcvBuffer
sender limits amount of unacked (“in-flight”) data to receiver’s rwnd value
guarantees receive buffer will not overflow
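The interplay above can be sketched as a toy model (the names `Receiver` and `sender_allowance` are illustrative, not a real socket API): the receiver advertises rwnd = RcvBuffer − buffered bytes, and the sender keeps in-flight data within rwnd, so the buffer cannot overflow by construction.

```python
# Toy model of TCP flow control: sender limits in-flight data to the
# receiver's advertised window (rwnd). Illustrative names, not a real API.

RCV_BUFFER = 4096  # typical default receive-buffer size, in bytes

class Receiver:
    def __init__(self):
        self.buffered = 0                      # bytes waiting for the application

    def rwnd(self):
        return RCV_BUFFER - self.buffered      # advertised free buffer space

    def deliver(self, nbytes):                 # segment payload arrives from network
        assert self.buffered + nbytes <= RCV_BUFFER, "buffer overflow!"
        self.buffered += nbytes

    def app_read(self, nbytes):                # (slow) application drains the buffer
        self.buffered -= min(nbytes, self.buffered)

def sender_allowance(in_flight, rwnd):
    """Bytes the sender may still transmit without overflowing the receiver."""
    return max(0, rwnd - in_flight)

rx = Receiver()
sent = sender_allowance(0, rx.rwnd())   # whole 4096-byte window available
rx.deliver(sent)                        # never overflows, by construction
rx.app_read(1000)                       # slow application reads a little
print(rx.rwnd())                        # 1000 bytes advertised again
```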


Principles of congestion control
congestion:
informally: “too many sources sending too much data too fast for network to handle”
different from flow control!
manifestations:

lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!


Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
output link capacity: R
no retransmission

maximum per-connection throughput: R/2

[figure: Hosts A and B send original data at rate λin through one router with unlimited shared output-link buffers; per-connection throughput is λout]

large delays as arrival rate λin approaches capacity

Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of timed-out packet

transport-layer input includes retransmissions: λ'in ≥ λin

[figure: Host A offers original data at λin (λ'in including retransmitted data) to a router with finite shared output-link buffers; throughput to Host B is λout]

Causes/costs of congestion: scenario 2
idealization: perfect knowledge
sender sends only when router buffers are available – no loss
sending rate cannot exceed R/2

[figure: Host A sends a copy only when there is free buffer space at the router's finite shared output-link buffers; λout tracks λin]


Causes/costs of congestion: scenario 2
idealization: known loss — packets can be lost (dropped at the router due to full buffers)
sender resends only if a packet is known to be lost

[figure: a copy of the packet arrives at the router when there is no buffer space and is dropped]

Causes/costs of congestion: scenario 2
idealization: known loss — packets can be lost (dropped at the router due to full buffers)
sender resends only if a packet is known to be lost

[figure: the retransmission is sent once free buffer space opens up at the router]


Causes/costs of congestion: scenario 2
realistic: duplicates — packets can be lost (dropped at the router due to full buffers)
sender times out prematurely, sending two copies, both of which are delivered

[figure: a premature timeout triggers a duplicate copy even though free buffer space exists]
Causes/costs of congestion: scenario 2
“costs” of congestion:
more work (retransmissions) for a given “goodput”
unneeded retransmissions: link carries multiple copies of a packet, decreasing goodput

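A rough numeric sketch of this cost (an idealized model, not a protocol trace): if near saturation each packet is forwarded twice on average (original plus retransmission), only half of the carried load is goodput, which gives the R/4 asymptote of scenario 2.

```python
# Idealized scenario-2 cost model: when every packet is (on average) sent
# twice, goodput is half the carried load, capped by the link rate R.

R = 1.0  # normalized link capacity

def goodput(offered_load, copies_per_packet=2.0):
    """Goodput when each packet is forwarded `copies_per_packet` times on average."""
    carried = min(offered_load, R)       # link cannot carry more than R
    return carried / copies_per_packet   # only one copy per packet is useful

# Offered load per sender reaches at most R/2 in scenario 2:
print(goodput(R / 2))    # 0.25 == R/4
```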
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit

Q: what happens as λin and λ'in increase?

[figure: Hosts A, B, C, and D send over overlapping two-hop paths through routers with finite shared output-link buffers; λin original data, λ'in original plus retransmitted data, throughput λout]

A: as red λ'in increases, all arriving blue packets at the upper queue are dropped; blue throughput → 0


another “cost” of congestion:
when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
Fairness?

Causes/costs of congestion: scenario 3

Approaches towards congestion control
two broad approaches towards congestion control:

end-end congestion control:
no explicit feedback from network
congestion inferred from end-system. How?
approach taken by TCP

network-assisted congestion control:
routers provide feedback to end systems

single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate for sender to send at
choke packet


TCP Congestion Control
how does a TCP sender…

… perceive that there is congestion on the path between itself and the destination?
… limit the rate at which it sends traffic into its connection?
what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?

TCP Congestion Control: details
sender limits transmission:

cwnd is dynamic, function of perceived network congestion

TCP sending rate:
roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes

[figure: sender sequence number space — last byte ACKed, bytes sent but not-yet ACKed (“in-flight”), last byte sent; cwnd covers the in-flight bytes]

LastByteSent – LastByteAcked ≤ min(cwnd, rwnd)
rate ≈ cwnd / RTT bytes/sec

TCP congestion control: additive increase, multiplicative decrease
approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
additive increase: increase cwnd by 1 MSS every RTT until loss detected
multiplicative decrease: cut cwnd in half after loss
cwnd: TCP sender congestion window size
AIMD sawtooth behavior: probing for bandwidth — additively increase window size … until loss occurs (then cut window in half)

TCP Slow Start
when connection begins, increase rate exponentially until first loss event:
initially cwnd = 1 MSS
double cwnd every RTT
done by incrementing cwnd by 1 MSS for every ACK received
summary: initial rate is slow but ramps up exponentially fast
[figure: Host A sends one segment, then two, then four — doubling every RTT as ACKs return from Host B]

TCP: detecting, reacting to loss
loss indicated by timeout: cwnd set to 1 MSS; window then grows exponentially (as in slow start) to threshold, then grows linearly
loss indicated by 3 duplicate ACKs (TCP Reno, fast recovery): dup ACKs indicate network capable of delivering some segments; cwnd is cut in half, window then grows linearly
TCP Tahoe always sets cwnd to 1 MSS (on timeout or 3 duplicate ACKs)

TCP: from slow start to congestion avoidance
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.
implementation: variable ssthresh; on a loss event, ssthresh is set to 1/2 of cwnd just before the loss event
during congestion avoidance, increase cwnd by MSS·(MSS/cwnd) for each new ACK

Summary: TCP Congestion Control (FSM)
initialization: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0

slow start:
new ACK: cwnd = cwnd + MSS, dupACKcount = 0, transmit new segment(s), as allowed
duplicate ACK: dupACKcount++
cwnd ≥ ssthresh: enter congestion avoidance
timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment
dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3 MSS, retransmit missing segment, enter fast recovery

congestion avoidance:
new ACK: cwnd = cwnd + MSS·(MSS/cwnd), dupACKcount = 0, transmit new segment(s), as allowed
duplicate ACK: dupACKcount++
timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment, enter slow start
dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3 MSS, retransmit missing segment, enter fast recovery

fast recovery:
duplicate ACK: cwnd = cwnd + MSS, transmit new segment(s), as allowed
new ACK: cwnd = ssthresh, dupACKcount = 0, enter congestion avoidance
timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment, enter slow start
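The FSM above can be exercised with a small event-driven sketch (window kept in MSS units for readability; a simplification for illustration, not a full TCP implementation — no pacing, timers, or sequence numbers):

```python
# Minimal TCP-Reno congestion FSM in MSS units (illustrative sketch).

class RenoFSM:
    def __init__(self):
        self.cwnd = 1.0          # congestion window, in MSS
        self.ssthresh = 64.0     # initial slow-start threshold
        self.dup_acks = 0
        self.state = "slow_start"

    def on_new_ack(self):
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh          # deflate the window
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += 1.0                   # +1 MSS per ACK: doubles per RTT
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                  # congestion avoidance
            self.cwnd += 1.0 / self.cwnd       # ~ +1 MSS per RTT
        self.dup_acks = 0

    def on_dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += 1.0                   # inflate for each extra dup ACK
            return
        self.dup_acks += 1
        if self.dup_acks == 3:                 # triple duplicate ACK
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh + 3      # fast retransmit / fast recovery
            self.state = "fast_recovery"

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0
        self.dup_acks = 0
        self.state = "slow_start"

tcp = RenoFSM()
for _ in range(8):
    tcp.on_new_ack()                 # slow start: cwnd 1 -> 9
for _ in range(3):
    tcp.on_dup_ack()                 # triple dup ACK at cwnd = 9
print(tcp.state, tcp.cwnd, tcp.ssthresh)   # fast_recovery 7.5 4.5
```

A subsequent new ACK deflates the window to ssthresh and returns to congestion avoidance, matching the FSM transitions above.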
TCP throughput
avg. TCP throughput as function of window size, RTT?

ignore slow start, assume always data to send
W: window size (measured in bytes) when loss occurs

avg. window size (# in-flight bytes) is 3/4 W
avg. TCP throughput = (3/4) · W / RTT bytes/sec
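Plugging illustrative numbers into this formula (the window value below is an assumption for the example, not from the slide):

```python
# Average TCP throughput under the idealized AIMD sawtooth:
# the window oscillates between W/2 and W, so the mean window is 3W/4.

def avg_tcp_throughput(W, rtt):
    """W: window size in bytes when loss occurs; rtt in seconds; returns bytes/sec."""
    return 0.75 * W / rtt

# e.g. W = 100 kB at loss, 100 ms RTT:
print(avg_tcp_throughput(100_000, 0.1))   # 750000.0 bytes/sec
```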

TCP over “long, fat pipes”
example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
requires W = 83,333 in-flight segments
throughput in terms of segment loss probability, L [Mathis 1997]:

TCP throughput = (1.22 · MSS) / (RTT · √L)

➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a very small loss rate!
new versions of TCP are needed for high-speed links
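The slide's numbers can be checked directly (a sketch assuming the Mathis et al. form, throughput = 1.22·MSS/(RTT·√L), with MSS expressed in bits so the result is in bits/sec):

```python
# Check the "long, fat pipe" example: 1500-byte segments, 100 ms RTT, 10 Gbps.
from math import sqrt

MSS_BITS = 1500 * 8       # 1500-byte segments, in bits
RTT = 0.1                 # 100 ms
TARGET = 10e9             # 10 Gbps

# in-flight window needed to sustain the target rate:
W = TARGET * RTT / MSS_BITS
print(round(W))           # 83333 segments

# loss rate required by the Mathis formula: rate = 1.22 * MSS / (RTT * sqrt(L))
L = (1.22 * MSS_BITS / (RTT * TARGET)) ** 2
print(f"{L:.1e}")         # 2.1e-10
```

Both slide figures (W ≈ 83,333 in-flight segments and L ≈ 2·10⁻¹⁰) fall out of the same two relations.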

TCP Fairness
fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K

[figure: TCP connections 1 and 2 share a bottleneck router of link capacity R]

Why is TCP fair?
two competing sessions:
additive increase gives slope of 1, as throughput increases
multiplicative decrease decreases throughput proportionally

[figure: connection 1 throughput vs. connection 2 throughput, each bounded by R; alternating congestion avoidance (additive increase) and loss (window cut by factor of 2) moves the operating point toward the equal bandwidth share line]
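The convergence argument can be demonstrated numerically with a toy synchronized-loss model (an assumption of this sketch: both flows detect loss in the same round whenever their combined rate exceeds R):

```python
# Two AIMD flows sharing a bottleneck of capacity R with synchronized loss:
# additive increase (+1 each round) until x1 + x2 > R, then both halve.
# Halving shrinks the difference |x1 - x2| while increase preserves it,
# so an unequal split converges toward the equal share R/2 each.

R = 100.0
x1, x2 = 10.0, 70.0          # unequal initial throughputs

for _ in range(500):
    if x1 + x2 > R:          # bottleneck overloaded: both see loss
        x1, x2 = x1 / 2, x2 / 2     # multiplicative decrease
    else:
        x1, x2 = x1 + 1, x2 + 1     # additive increase

print(abs(x1 - x2) < 5.0)    # True: shares are nearly equal
```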

Fairness (more)
Fairness and UDP
multimedia apps often do not use TCP

do not want rate throttled by congestion control
instead use UDP:

send audio/video at constant rate, tolerate packet loss
Fairness, parallel TCP connections
application can open multiple parallel connections between two hosts
web browsers do this
e.g., link of rate R with 9 existing connections:

new app asks for 1 TCP, gets rate R/10
new app asks for 11 TCPs, gets R/2
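Under the idealized assumption that the bottleneck splits R equally among all TCP connections, the example works out as:

```python
# Idealized equal-share model: each of N connections on a bottleneck of
# rate R gets R/N, so an app opening k of those connections gets k*R/N.

def app_rate(k_own, n_other, R=1.0):
    """Rate obtained by an app with k_own connections alongside n_other existing ones."""
    return k_own * R / (k_own + n_other)

print(app_rate(1, 9))    # 0.1  -> R/10
print(app_rate(11, 9))   # 0.55 -> about R/2
```

This is why opening parallel connections lets an application grab more than its "fair" per-session share.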

Summary
principles behind transport layer services:

multiplexing, demultiplexing
reliable data transfer
flow control
congestion control
instantiation, implementation in the Internet

UDP
TCP
next:
leaving the network “edge” (application, transport layers)
into the network “core”


3.6 • PRINCIPLES OF CONGESTION CONTROL 261

average number of queued packets in the router is unbounded, and the average delay
between source and destination becomes infinite (assuming that the connections
operate at these sending rates for an infinite period of time and there is an infinite
amount of buffering available). Thus, while operating at an aggregate throughput of
near R may be ideal from a throughput standpoint, it is far from ideal from a delay
standpoint. Even in this (extremely) idealized scenario, we’ve already found one
cost of a congested network—large queuing delays are experienced as the packet-
arrival rate nears the link capacity.

Scenario 2: Two Senders and a Router with Finite Buffers

Let us now slightly modify scenario 1 in the following two ways (see Figure 3.45).
First, the amount of router buffering is assumed to be finite. A consequence of this
real-world assumption is that packets will be dropped when arriving to an already-
full buffer. Second, we assume that each connection is reliable. If a packet contain-
ing a transport-level segment is dropped at the router, the sender will eventually
retransmit it. Because packets can be retransmitted, we must now be more careful
with our use of the term sending rate. Specifically, let us again denote the rate at
which the application sends original data into the socket by λin bytes/sec. The rate at
which the transport layer sends segments (containing original data and retransmit-
ted data) into the network will be denoted λ'in bytes/sec. λ'in is sometimes referred to
as the offered load to the network.

The performance realized under scenario 2 will now depend strongly on
how retransmission is performed. First, consider the unrealistic case that Host A is
able to somehow (magically!) determine whether or not a buffer is free in the router
and thus sends a packet only when a buffer is free. In this case, no loss would occur,

Figure 3.44: Congestion scenario 1 — throughput and delay as a function of host
sending rate λin: (a) throughput λout rises to at most R/2; (b) delay grows without
bound as λin approaches R/2.

3.6 • PRINCIPLES OF CONGESTION CONTROL 263

course, the receiver needs but one copy of this packet and will discard the retrans-
mission. In this case, the work done by the router in forwarding the retransmitted
copy of the original packet was wasted, as the receiver will have already received
the original copy of this packet. The router would have better used the link trans-
mission capacity to send a different packet instead. Here then is yet another cost of
a congested network—unneeded retransmissions by the sender in the face of large
delays may cause a router to use its link bandwidth to forward unneeded copies of a
packet. Figure 3.46 (c) shows the throughput versus offered load when each packet
is assumed to be forwarded (on average) twice by the router. Since each packet is
forwarded twice, the throughput will have an asymptotic value of R/4 as the offered
load approaches R/2.

Scenario 3: Four Senders, Routers with Finite Buffers, and
Multihop Paths

In our final congestion scenario, four hosts transmit packets, each over overlap-
ping two-hop paths, as shown in Figure 3.47. We again assume that each host
uses a timeout/retransmission mechanism to implement a reliable data transfer
service, that all hosts have the same value of λin, and that all router links have
capacity R bytes/sec.

Let’s consider the connection from Host A to Host C, passing through routers
R1 and R2. The A–C connection shares router R1 with the D–B connection and
shares router R2 with the B–D connection. For extremely small values of λin, buffer
overflows are rare (as in congestion scenarios 1 and 2), and the throughput approxi-
mately equals the offered load. For slightly larger values of λin, the corresponding
throughput is also larger, since more original data is being transmitted into the

Figure 3.46: Scenario 2 performance with finite buffers — throughput λout versus
offered load λ'in: (a) perfect knowledge, λout tracks λin up to R/2; (b) retransmission
only of known-lost packets, λout ≈ R/3 at offered load R/2; (c) with duplicate
retransmissions, λout approaches R/4 as the offered load approaches R/2.


276 CHAPTER 3 • TRANSPORT LAYER

the triple duplicate ACKs received) and records the value of ssthresh to be half
the value of cwnd when the triple duplicate ACKs were received. The fast-recovery
state is then entered.

Fast Recovery

In fast recovery, the value of cwnd is increased by 1 MSS for every duplicate ACK
received for the missing segment that caused TCP to enter the fast-recovery state.
Eventually, when an ACK arrives for the missing segment, TCP enters the
congestion-avoidance state after deflating cwnd. If a timeout event occurs, fast
recovery transitions to the slow-start state after performing the same actions as in
slow start and congestion avoidance: The value of cwnd is set to 1 MSS, and the
value of ssthresh is set to half the value of cwnd when the loss event occurred.

Fast recovery is a recommended, but not required, component of TCP [RFC
5681]. It is interesting that an early version of TCP, known as TCP Tahoe, uncondi-
tionally cut its congestion window to 1 MSS and entered the slow-start phase after
either a timeout-indicated or triple-duplicate-ACK-indicated loss event. The newer
version of TCP, TCP Reno, incorporated fast recovery.

Figure 3.53 illustrates the evolution of TCP’s congestion window for both Reno
and Tahoe. In this figure, the threshold is initially equal to 8 MSS. For the first eight
transmission rounds, Tahoe and Reno take identical actions. The congestion window
climbs exponentially fast during slow start and hits the threshold at the fourth round
of transmission. The congestion window then climbs linearly until a triple duplicate-
ACK event occurs, just after transmission round 8. Note that the congestion window
is 12 • MSS when this loss event occurs. The value of ssthresh is then set to
0.5 • cwnd = 6 • MSS.

Figure 3.53: Evolution of TCP's congestion window (Tahoe and Reno). [plot:
congestion window in segments (y-axis, 2–16) versus transmission round (x-axis,
0–15), with the ssthresh levels marked for both TCP Tahoe and TCP Reno]
