M.Sc. Computer Science
Computer Systems

Additional Questions on Week 9 Materials – Part III

Question #1: Consider a planet where everyone belongs to a family of six, every family lives in its
own house, each house has a unique address, and each person in a given house has a unique name.
Suppose this planet has a mail service that delivers letters from source house to destination house. The
mail service requires that (1) the letter be in an envelope, and that (2) the address of the destination
house (and nothing more) be clearly written on the envelope. Suppose each family has a delegate
family member who collects and distributes letters for the other family members. The letters do not
necessarily provide any indication of the recipients of the letters.
a) Describe a protocol that the delegate can use to deliver letters from a sending family member to a
receiving family member.
b) In your protocol, does the mail service ever have to open the envelope and examine the letter in
order to provide its service?

a) For sending a letter, the family member is required to give the delegate the letter itself, the address of
the destination house, and the name of the recipient. The delegate clearly writes the recipient’s name on
the top of the letter. The delegate then puts the letter in an envelope and writes the address of the
destination house on the envelope. The delegate then gives the letter to the planet’s mail service. At the
receiving side, the delegate receives the letter from the mail service, takes the letter out of the envelope,
and takes note of the recipient name written at the top of the letter. The delegate then gives the letter to
the family member with this name.

b) No, the mail service does not have to open the envelope; it only examines the address on the
envelope.
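
As an illustration only (none of the names below appear in the question), the protocol in part (a) can be sketched as two layers of wrapping: the recipient's name written on the letter plays the role of a transport-layer header, the envelope with the house address plays the role of a network-layer datagram, and the mail service reads only the outside of the envelope.

def delegate_send(letter_text, dest_address, recipient_name):
    """Sending-side delegate: write the name on the letter, then envelope it."""
    letter = {"to_person": recipient_name, "body": letter_text}   # name on top of the letter
    envelope = {"to_house": dest_address, "letter": letter}       # only the house address outside
    return envelope

def mail_service(envelope, houses):
    """The planet's mail service: looks only at the house address on the envelope."""
    return houses[envelope["to_house"]]                           # never opens the envelope

def delegate_receive(envelope, family):
    """Receiving-side delegate: open the envelope, read the name, deliver the letter."""
    letter = envelope["letter"]
    family[letter["to_person"]].append(letter["body"])

# Example: two houses, each a dict mapping member name -> letters received so far.
houses = {"12 Elm St": {"Ada": []}, "9 Oak Ave": {"Bob": []}}
env = delegate_send("Hello Ada!", "12 Elm St", "Ada")
delegate_receive(env, mail_service(env, houses))
print(houses["12 Elm St"]["Ada"])   # ['Hello Ada!']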

Question #2: Describe why an application developer might choose to run an application over UDP
rather than TCP.

An application developer may not want its application to use TCP’s congestion control, which can
throttle the application’s sending rate at times of congestion. Often, designers of IP telephony and IP
videoconference applications choose to run their applications over UDP because they want to avoid
TCP’s congestion control. Also, some applications do not need the reliable data transfer provided by
TCP.
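
As a minimal illustration (an assumed example, not part of the question), the choice shows up directly in the socket API: a UDP socket is created with SOCK_DGRAM and sends datagrams with no handshake, no retransmission and no congestion control, while a TCP socket (SOCK_STREAM) gets all three from the kernel. The address 127.0.0.1:5005 below is an arbitrary placeholder.

import socket

# UDP: no connection setup and no congestion control; the application's send
# rate is whatever the application chooses (datagrams may simply be lost).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"audio frame", ("127.0.0.1", 5005))   # fire-and-forget datagram
udp_sock.close()

# TCP: connect() performs the three-way handshake, and subsequent sends are
# paced by the kernel's congestion control and flow control.
# tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp_sock.connect(("example.org", 80))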

Question #3: Suppose that a Web server runs in Host C on port 80. Suppose this Web server uses
persistent connections, and is currently receiving requests from two different Hosts, A and B. Are all of
the requests being sent through the same socket at Host C? If they are being passed through different
sockets, do both of the sockets have port 80? Discuss and explain.

For each persistent connection, the Web server creates a separate “connection socket”. Each connection
socket is identified with a four-tuple: (source IP address, source port number, destination IP address,
destination port number). When Host C receives an IP datagram, it examines these four fields in the
datagram/segment to determine to which socket it should pass the payload of the TCP segment. Thus,
the requests from A and B pass through different sockets. The identifiers of both of these sockets have 80
as the destination port; however, they have different values for the source IP
addresses. Unlike UDP, when the transport layer passes a TCP segment’s payload to the application
process, it does not specify the source IP address, as this is implicitly specified by the socket identifier.
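
A small localhost sketch of this demultiplexing (an assumed setup: port 8080 stands in for port 80 so the script runs unprivileged, and two local clients stand in for Hosts A and B): each call to accept() returns a separate connection socket, and the two connection sockets share the same local port but differ in the remote half of their four-tuples.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))   # stand-in for the Web server's port 80
server.listen()

# Two clients stand in for Hosts A and B.
clients = []
for _ in range(2):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", 8080))
    clients.append(c)

# accept() hands back one *connection socket* per client.
conns = [server.accept()[0] for _ in range(2)]
for conn in conns:
    # The local end (port 8080) is the same for both connection sockets; the
    # remote (source IP, source port) end differs. With both clients on one
    # machine it is the source port that differs; with real Hosts A and B it
    # would be the source IP address as well.
    print("local", conn.getsockname(), "remote", conn.getpeername())

for s in conns + clients + [server]:
    s.close()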

Question #4: Suppose that the round-trip delay between sender and receiver is constant and known to
the sender. Would a timer still be necessary in protocol rdt 3.0, assuming that packets can be lost?
Explain.

A timer would still be necessary in rdt 3.0. If the round-trip time is known, the only advantage is that,
when the timer expires, the sender knows for certain that either the packet or its ACK (or NACK) has
been lost, whereas in the real scenario the ACK (or NACK) might still be on its way to the sender when
the timer expires. However, to detect the loss in the first place, a timer of constant duration is still
necessary at the sender for each packet.
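
A hedged sketch of that point (the lossy channel below is a made-up stand-in, not the rdt 3.0 FSM itself): even with the round-trip time known exactly, the sender can only detect a loss by the absence of an ACK within that fixed interval, so it arms a constant-duration timer for every packet and retransmits when it fires.

import random

LOSS_PROB = 0.3   # assumed probability that a packet or an ACK is lost

def send_with_timer(seq):
    """Keep sending packet `seq` until an ACK arrives within one (known) RTT."""
    transmissions = 0
    while True:
        transmissions += 1
        packet_lost = random.random() < LOSS_PROB   # data packet lost in transit?
        ack_lost = random.random() < LOSS_PROB      # or its ACK lost on the way back?
        if not packet_lost and not ack_lost:
            return transmissions                    # ACK arrived: cancel the timer
        # Timer of constant duration (= the known RTT) expires: retransmit.

for seq in range(5):
    print(f"packet {seq} delivered after {send_with_timer(seq)} transmission(s)")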

Question #5: The sender side of rdt 3.0 simply ignores (that is, takes no action on) all received packets
that are either in error or have the wrong value in the acknum field of an acknowledgment packet.
Suppose that in such circumstances, rdt 3.0 were simply to retransmit the current data packet. Would
the protocol still work? (Hint: Consider what would happen if there were only bit errors; there are no
packet losses but premature timeouts can occur. Consider how many times the nth packet is sent, in the
limit as n approaches infinity.)

The protocol would still work, since a retransmission is exactly what would happen if the packet received
in error (here, the ACK at the sender) had actually been lost; from the receiver's standpoint, it never
knows which of these two events, if either, has occurred.
To get at the more subtle issue behind this question, one has to allow for premature timeouts to occur.
In this case, if each extra copy of the packet is ACKed and each received extra ACK causes another
extra copy of the current packet to be sent, the number of times packet n is sent will increase without
bound as n approaches infinity.
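
A small numerical illustration of that subtle issue (an assumed model, not taken from the question): suppose there are no losses, every copy of packet n is ACKed, the modified sender retransmits the current packet on every duplicate or garbled ACK it receives, and each packet additionally suffers exactly one premature timeout. The first ACK for packet n moves the sender on to packet n+1, and each of the remaining copies(n) - 1 ACKs carries the "wrong" acknum for packet n+1 and so triggers one more copy of it, giving copies(n+1) = 1 (original) + 1 (premature timeout) + (copies(n) - 1) = copies(n) + 1, which grows without bound.

copies = 2   # packet 1: original transmission + one premature-timeout retransmission
for n in range(1, 11):
    print(f"packet {n} is sent {copies} times")
    copies = copies + 1   # the recurrence copies(n+1) = copies(n) + 1 described above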

Question #6: Suppose Host A sends two TCP segments back to back to Host B over a TCP connection.
The first segment has sequence number 90; the second has sequence number 110.
a) How much data is in the first segment?
b) Suppose that the first segment is lost but the second segment arrives at B. In the acknowledgment
that Host B sends to Host A, what will be the acknowledgment number?

a) 20 bytes (the first segment carries bytes 90 through 109).
b) Acknowledgment number = 90, since TCP acknowledgments are cumulative and Host B is still waiting
for the missing bytes starting at byte 90.
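
A tiny check of the arithmetic (this is just sequence-number bookkeeping, nothing beyond the answer above): the first segment covers bytes 90..109, and a cumulative ACK names the next byte expected, which is still 90 after the first segment is lost.

seq_first, seq_second = 90, 110
print("data in first segment:", seq_second - seq_first, "bytes")            # 20
print("ACK sent by B after receiving only the second segment:", seq_first)  # 90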

Question #7: Suppose that the five measured SampleRTT values are 106 ms, 120 ms, 140 ms, 90 ms
and 115 ms. Compute the EstimatedRTT after each of these SampleRTT values is obtained, using
a value of α = 0.125 and assuming that the value of EstimatedRTT was 100 ms just before the first
of these five samples was obtained. Compute also the DevRTT after each sample is obtained,
assuming a value of β = 0.25 and assuming the value of DevRTT was 5 ms just before the first of these
five samples was obtained. Lastly, compute the TCP TimeoutInterval after each of these samples
is obtained.

DevRTT = (1 – β) * DevRTT + β * | SampleRTT – EstimatedRTT |
EstimatedRTT = (1 – α) * EstimatedRTT + α * SampleRTT
TimeoutInterval = EstimatedRTT + 4 * DevRTT

After obtaining first SampleRTT 106ms:
DevRTT = 0.75*5 + 0.25 * | 106 – 100 | = 5.25ms
EstimatedRTT = 0.875 * 100 + 0.125 * 106 = 100.75 ms
TimeoutInterval = 100.75+4*5.25 = 121.75 ms

After obtaining 120ms:
DevRTT = 0.75*5.25 + 0.25 * | 120 – 100.75 | = 8.75 ms
EstimatedRTT = 0.875 * 100.75 + 0.125 * 120 = 103.16 ms
TimeoutInterval = 103.16+4*8.75 = 138.16 ms

After obtaining 140ms:
DevRTT = 0.75*8.75 + 0.25 * | 140 – 103.16 | = 15.77 ms
EstimatedRTT = 0.875 * 103.16 + 0.125 * 140 = 107.76 ms
TimeoutInterval = 107.76+4*15.77 = 170.84 ms

After obtaining 90ms:
DevRTT = 0.75*15.77 + 0.25 * | 90 – 107.76 | = 16.27 ms
EstimatedRTT = 0.875 * 107.76 + 0.125 * 90 = 105.54 ms
TimeoutInterval = 105.54+4*16.27 = 170.62 ms

After obtaining 115ms:
DevRTT = 0.75*16.27 + 0.25 * | 115 – 105.54 | = 14.57 ms
EstimatedRTT = 0.875 * 105.54 + 0.125 * 115 = 106.72 ms
TimeoutInterval = 106.72+4*14.57 = 165 ms
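
The working above can be reproduced with a few lines of Python (a verification sketch; it carries full precision between steps, so the last decimal place of a TimeoutInterval can differ slightly from the hand-rounded figures above). As in the working, DevRTT is updated using the previous EstimatedRTT before EstimatedRTT itself is updated.

alpha, beta = 0.125, 0.25
est_rtt, dev_rtt = 100.0, 5.0          # values just before the first sample

for sample in [106, 120, 140, 90, 115]:
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample - est_rtt)
    est_rtt = (1 - alpha) * est_rtt + alpha * sample
    timeout = est_rtt + 4 * dev_rtt
    print(f"SampleRTT={sample:3d} ms  EstimatedRTT={est_rtt:6.2f} ms  "
          f"DevRTT={dev_rtt:5.2f} ms  TimeoutInterval={timeout:6.2f} ms")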

Question #8: Hosts A and B are directly connected with a 100 Mbps link. There is one TCP connection
between the two hosts, and Host A is sending to Host B an enormous file over this connection. Host A
can send its application data into its TCP socket at a rate as high as 120 Mbps but Host B can read out
of its TCP receive buffer at a maximum rate of 50 Mbps. Describe the effect of TCP flow control.

Since the link capacity is only 100 Mbps, Host A's sending rate can be at most 100 Mbps. Still, Host A
sends data into the receive buffer faster than Host B can remove it, so the receive buffer fills up at a rate
of roughly 50 Mbps. When the buffer is full, Host B signals Host A to stop sending by setting
RcvWindow = 0. Host A then stops sending until it receives a TCP segment with RcvWindow > 0. Host
A will thus repeatedly stop and start sending as a function of the RcvWindow values it receives from
Host B. On average, the long-term rate at which Host A sends data to Host B over this connection is no
more than 50 Mbps.
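
A hedged localhost sketch of this behaviour (the port, chunk size and artificial read delay are arbitrary assumptions, not taken from the question): the receiver drains its socket slowly, its advertised window closes, and the sender's blocking send calls stall until the window reopens, so the sender's long-term rate ends up pinned near the receiver's read rate rather than the rate at which it could write.

import socket, threading, time

def slow_receiver(server_sock):
    conn, _ = server_sock.accept()
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        time.sleep(0.01)   # artificial read delay: plays the role of Host B's 50 Mbps read limit
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9009))
server.listen()
threading.Thread(target=slow_receiver, args=(server,), daemon=True).start()

sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("127.0.0.1", 9009))

start, sent = time.time(), 0
while time.time() - start < 2.0:
    sender.sendall(b"x" * 65536)   # blocks whenever flow control (RcvWindow) stalls the sender
    sent += 65536
elapsed = time.time() - start
sender.close()
print(f"long-term send rate ~ {sent * 8 / elapsed / 1e6:.1f} Mbit/s, far below what send() could write")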