Advanced Networks
Advanced Network Technologies
Week 2:
Network performance
Network application
School of Computer Science
Dr. | Lecturer
Network Performance:
Throughput
Throughput
throughput: rate (bits/time unit) at which bits transferred between sender/receiver
instantaneous: rate at given point in time
average: rate over longer period of time
[Figure: a server with a file of F bits to send to a client. The server's link has capacity Rs bits/sec and the client's link has capacity Rc bits/sec; the server pushes bits (fluid) into a pipe that carries fluid at rate Rs bits/sec, which feeds a pipe that carries fluid at rate Rc bits/sec.]
Throughput (cont’d)
Rs < Rc: What is average end-end throughput? Answer: Rs (the sender-side link is the limit)
Rs > Rc: What is average end-end throughput? Answer: Rc (the receiver-side link is the limit)
bottleneck link: the link on the end-end path that constrains end-end throughput
In general, average end-end throughput = min(Rs, Rc)
Throughput (cont’d)
Internet Scenario
per-connection end-end throughput: min(Rc,Rs,R/10)
10 connections (fairly) share backbone bottleneck link R bits/sec
[Figure: 10 client-server connections; each server has an access link of rate Rs, each client an access link of rate Rc, and all connections share a backbone link of rate R bits/sec]
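The min() relationship is easy to check numerically. A minimal sketch (the rates Rs, Rc, R below are made-up values, not from the slides):

```python
# End-end throughput is set by the bottleneck (minimum-rate) link on the path.
def end_to_end_throughput(*link_rates_bps):
    return min(link_rates_bps)

Rs, Rc, R = 2e6, 1e6, 5e6                      # example rates in bits/sec (assumed)
print(end_to_end_throughput(Rs, Rc))           # two-link path: min(Rs, Rc) = 1 Mbps
print(end_to_end_throughput(Rs, Rc, R / 10))   # 10 connections share backbone R: min(Rs, Rc, R/10) = 0.5 Mbps
```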
Bit and byte
bit: basic unit, abbreviated "b"
byte: 8 bits, abbreviated "B"
bps: bits per second
Network/Telecom (the default in this course):
Kb/Mb/Gb: 10^3, 10^6, 10^9 bits
Kbps/Mbps/Gbps: 10^3, 10^6, 10^9 bits per second
File system:
KB/MB/GB: 2^10, 2^20, 2^30 bytes (1024, 1024^2, 1024^3 bytes)
Quick check: 1 KB = 2^10 bytes; 1 Gbps = 10^9 bits per second
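Because the prefixes differ between the two conventions, mixing them matters. A small worked example (my own numbers):

```python
# How long does a "1 GB" file (file-system convention) take over a "1 Gbps" link (network convention)?
file_bits = 8 * 2**30         # 1 GB = 2^30 bytes = 8 * 2^30 bits
link_bps  = 10**9             # 1 Gbps = 10^9 bits per second
print(file_bits / link_bps)   # ~8.59 s, not 8 s: the two prefixes are not the same
```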
Network Performance:
Fairness
Network Fairness and Bandwidth Allocation
In reality, there are two considerations:
Efficiency
Fairness
However, the two can conflict!
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
[Figure: linear topology A - B - C; each of the two links has capacity 1 Mbps]
Q: How can we allocate the link bandwidths to the three flows?
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
[Figure: topology A - B - C, each link 1 Mbps]
Allocation: each of the three flows gets 0.5 Mbps
Very fair! However, network throughput is only 1.5 Mbps
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
[Figure: topology A - B - C, each link 1 Mbps]
Allocation: A-B gets 1 Mbps, B-C gets 1 Mbps, A-C gets 0 Mbps
Very unfair! However, network throughput is 2 Mbps
Fairness
Bottleneck for a flow: The link that limits the data rate of the flow
[Figure: topology A - B - C; link A-B is 1 Mbps, link B-C is 10 Mbps]
The 1 Mbps link is the bottleneck for flows A-B and A-C; the 10 Mbps link is the bottleneck for flow B-C.
Max-min Fairness
Maximize the minimum
Try to increase the “poorest” as much as possible
A richer flow can be sacrificed.
Try to increase the second “poorest” as much as possible
A richer flow can be sacrificed.
A poorer flow cannot be sacrificed.
Try to increase the third “poorest” as much as possible
…
Max-min Fairness
Max-min Fairness criteria: if we want to improve one flow, we can only achieve this by sacrificing a poorer or equal flow.
Max-min Fairness
Bottleneck for a flow: the link that limits the data rate of the flow
[Figure: topology A - B - C; link A-B is 1 Mbps, link B-C is 10 Mbps]
Max-min allocation: A-B = 0.5 Mbps, A-C = 0.5 Mbps, B-C = 9.5 Mbps
Although 9.5 Mbps is large, it does not hurt the poorer flows.
Bottleneck approach
1. Start with all flows at zero; potential flow set = {all flows}
2. Slowly increase all flows in the potential flow set together until some (new) link becomes saturated ("pouring water into the network")
3. Fix the flows that are bottlenecked by the saturated link and remove them from the potential flow set
4. If the potential flow set is not empty, go to step 2 (the remaining flows still have potential to increase)
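The four steps above can be turned into a small simulation. This is a minimal sketch under my own assumptions: link capacities are given as a dict, each flow lists the links it crosses, and the "pouring" is approximated with a fixed increment.

```python
# Progressive filling ("pouring water"): max-min fair allocation.
def max_min_allocation(capacity, flow_paths, step=0.001):
    rate = {f: 0.0 for f in flow_paths}        # step 1: all flows start at zero
    potential = set(flow_paths)                # potential flow set = {all flows}
    while potential:                           # step 4: repeat while flows can still grow
        for f in potential:                    # step 2: increase all potential flows together
            rate[f] += step
        load = {l: sum(rate[f] for f, path in flow_paths.items() if l in path)
                for l in capacity}
        saturated = {l for l, c in capacity.items() if load[l] >= c - 1e-9}
        # step 3: flows crossing a saturated link are bottlenecked; fix and remove them
        potential -= {f for f in potential
                      if any(l in saturated for l in flow_paths[f])}
    return rate

# The A-B-C example above: two 1 Mbps links, flows A-B, B-C, A-C.
caps  = {"AB": 1.0, "BC": 1.0}                       # capacities in Mbps
paths = {"A-B": ["AB"], "B-C": ["BC"], "A-C": ["AB", "BC"]}
print(max_min_allocation(caps, paths))               # ~0.5 Mbps each, as on the earlier slide
```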
Bottleneck approach
[Figure: four flows A, B, C, D routed over a network of routers; each link between two routers has capacity 1]
All flows start at 0; potential flow set = {A, B, C, D}
Increase all four flows together; at 1/3 each, the link shared by flows B, C, and D saturates: bottleneck!
Bottleneck approach
[Figure: same network]
Flows B, C, D are held at 1/3; potential flow set = {A}
Continue increasing flow A; at A = 2/3 another link saturates: bottleneck!
Bottleneck approach
[Figure: same network]
Potential flow set = {} (empty), so the procedure stops
Final max-min allocation: A = 2/3, B = C = D = 1/3
Can you solve the following problem?
[Figure: triangle topology with nodes A, B, C; link rates: AB = BC = 1, CA = 2]
Can you solve the following problem?
[Figure: same triangle topology; link rates: AB = BC = 1, CA = 2]
Answer: demands 1, 2, 3 = 1/3; demand 4 = 2/3; demand 5 = 4/3
More comments
[Figure: topology A - B - C, each link 1 Mbps; max-min allocation: A-B = 0.5 Mbps, B-C = 0.5 Mbps, A-C = 0.5 Mbps]
The A-C flow uses two links, yet gets the same share as the single-link flows.
Comment: max-min fairness may be too fair!
More comments
[Figure: topology A - B - C, each link 1 Mbps; alternative allocation: A-B = 2/3 Mbps, B-C = 2/3 Mbps, A-C = 1/3 Mbps]
Longer routes are penalized.
Another form of fairness
proportional fairness
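One common formalization, not spelled out on the slide: a proportionally fair allocation maximizes the sum of the logarithms of the flow rates subject to the link capacities. For the A-B-C example this reproduces the allocation shown above:

maximize  log x_AB + log x_BC + log x_AC
subject to  x_AB + x_AC <= 1  and  x_BC + x_AC <= 1  (both links 1 Mbps)

By symmetry x_AB = x_BC = x, and at the optimum both links are full, so x_AC = 1 - x. Maximizing 2 log x + log(1 - x) gives 2/x = 1/(1 - x), i.e. x = 2/3 and x_AC = 1/3, matching the 2/3, 2/3, 1/3 Mbps allocation above.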
The Application Layer
Some network applications
e-mail
web
text messaging
remote login
P2P file sharing
multi-user network games
streaming stored video (YouTube, Netflix)
voice over IP (e.g., Skype)
real-time video conferencing
social networking
search
…
…
Creating a network app
write programs that:
run on (different) end systems
communicate over network
e.g., web server software communicates with browser software
no need to write software for network-core devices
network-core devices do not run user applications
applications on end systems allow for rapid app development and propagation
[Figure: three end systems, each running the full protocol stack (application, transport, network, data link, physical), communicating across the network]
No modification needed at routers or switches.
Application architectures
Possible structure of applications
Client-server
Peer-to-peer (P2P)
Client-server architecture
server:
always-on
permanent IP address
data centers for scaling
clients:
communicate with server
may be intermittently connected
may have dynamic IP addresses
do not communicate directly with each other
[Figure: clients communicating with a server (client/server architecture)]
P2P architecture
no always-on server
arbitrary end systems directly communicate
peers request service from other peers, provide service in return to other peers
self scalability – new peers bring new service capacity, as well as new service demands
peers are intermittently connected and change IP addresses
complex management
[Figure: peers, each running the full protocol stack (application, transport, network, data link, physical), communicating directly with one another]
Processes communicating
process: program running within a host
within same host, two processes communicate using inter-process communication (defined by OS)
processes in different hosts communicate by exchanging messages
client process: process that initiates communication
server process: process that waits to be contacted
aside: applications with P2P architectures have both client processes & server processes
Sockets
process sends/receives messages to/from its socket
socket analogous to door
sending process shoves message out door
sending process relies on transport infrastructure on other side of door to deliver message to socket at receiving process
Shove: to push with force
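A minimal sketch of the "door" analogy in Python (the host name and port below are placeholders, not a real service):

```python
import socket

HOST, PORT = "example.com", 12345                # hypothetical server-process address

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:   # a TCP socket: the "door"
    s.connect((HOST, PORT))                      # client process initiates contact
    s.sendall(b"hello, server process")          # shove the message out the door
    reply = s.recv(4096)                         # the reply arrives at our socket
    print(reply.decode())
```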
Addressing processes
to receive messages, a process must have an identifier
host device has a unique 32-bit IP address (or 128-bit in IPv6)
Q: does the IP address of the host on which the process runs suffice for identifying the process?
A: no, many processes can be running on the same host
identifier includes both the IP address and the port number associated with the process on the host
example port numbers:
HTTP server: 80
mail server: 25
to send an HTTP message to the gaia.cs.umass.edu web server:
IP address: 128.119.245.12
port number: 80
more shortly…
App-layer protocol defines
types of messages exchanged,
e.g., request, response
message syntax:
what fields in messages & how fields are delineated
e.g., first line: method, URL, and version; following lines: header fields
message semantics
meaning of information in fields
e.g. 404 means “not found”
rules for when and how processes send & respond to messages
open protocols:
defined in RFCs
allows for interoperability
e.g., HTTP, SMTP
proprietary protocols:
e.g., Skype
What transport service does an app need?
data integrity
some apps (e.g., file transfer, web transactions) require 100% reliable data transfer
other apps (e.g., audio) can tolerate some loss
timing
some apps (e.g., Internet telephony, interactive games) require low delay to be “effective”
throughput
some apps (e.g., multimedia) require minimum amount of throughput to be “effective”
other apps (“elastic apps”) make use of whatever throughput they get
Internet transport protocols services
TCP service:
reliable transport between sending and receiving process
flow control: sender won’t overwhelm receiver
congestion control: throttle sender when network overloaded
does not provide: timing, minimum throughput guarantee
connection-oriented: setup required between client and server processes
UDP service:
unreliable data transfer between sending and receiving process
does not provide: reliability, flow control, congestion control, timing, throughput guarantee, or connection setup
Internet apps: application, transport protocols

application              application-layer protocol            underlying transport protocol
e-mail                   SMTP [RFC 2821]                       TCP
remote terminal access   Telnet [RFC 854]                      TCP
Web                      HTTP [RFC 2616]                       TCP
file transfer            FTP [RFC 959]                         TCP
streaming multimedia     HTTP, RTP [RFC 1889]                  TCP or UDP
Internet telephony       SIP, RTP, proprietary (e.g., Skype)   TCP or UDP
Web and HTTP
Web and HTTP
First, a review…
web page consists of base HTML-file which includes several referenced objects
HTML: HyperText Markup Language
object can be JPEG image, Java applet, audio file,…
each object is addressable by a URL (Uniform Resource Locator), e.g.,
www.someschool.edu/someDept/pic.gif
host name
path name
Web and HTTP
[Figure: the base HTML file (HyperText Markup Language) contains text (xxx…, yyy…, zzz…) plus references to objects www.aaa.edu/Obj1.jpg and www.aaa.edu/Obj2.jpg; the browser fetches the referenced objects and shows the assembled page]
HTTP overview
HTTP: hypertext transfer protocol
Web’s application layer protocol
client/server model
client: browser that requests, receives (using HTTP protocol), and "displays" Web objects
server: Web server sends (using HTTP protocol) objects in response to requests
[Figure: a PC running a Firefox browser and an iPhone running a Safari browser each send HTTP requests to, and receive HTTP responses from, a server running Apache Web server]
HTTP overview (cont’d)
uses TCP:
client initiates TCP connection (creates socket) to server, port 80
How to know IP address?
DNS (Domain Name System)
server accepts TCP connection from client
HTTP messages (application-layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server)
TCP connection closed
aside: HTTP is "stateless": the server maintains no information about past client requests
protocols that maintain "state" are complex!
past history (state) must be maintained
if server/client crashes, their views of "state" may be inconsistent and must be reconciled
HTTP connections
non-persistent HTTP
at most one object sent over TCP connection
connection then closed
downloading multiple objects requires multiple connections
persistent HTTP
multiple objects can be sent over single TCP connection between client, server
Non-persistent HTTP
suppose user enters URL www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants page someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested page, and sends message
4. HTTP server closes TCP connection.
5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects to download
Non-persistent HTTP
suppose user enters URL www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants object someDepartment/object1.jpg
3. HTTP server receives request message, forms response message containing requested object, and sends message
4. HTTP server closes TCP connection.
5. HTTP client receives response message containing the object, displays the object.
6. Steps 1-5 repeated for each of the 10 jpeg objects
HTTP: response time
RTT (definition): time for a small packet to travel from client to server and back
HTTP response time:
one RTT to initiate TCP connection
one RTT for HTTP request and first few bytes of HTTP response to return
file transmission time
non-persistent HTTP response time =
2RTT+ file transmission time
[Figure: timeline. The client spends one RTT to initiate the TCP connection, one RTT to request the file and receive the first bytes of the response, and then the file transmission time until the file is fully received.]
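A quick numeric check of the formula, with made-up numbers (the RTT, file size, and link rate are assumptions, not from the slides):

```python
rtt  = 0.050                                  # 50 ms round-trip time
size = 1_000_000                              # 1 Mbit file
rate = 10e6                                   # 10 Mbps bottleneck link
transmission_time = size / rate               # 0.1 s to push the file onto the link
print(2 * rtt + transmission_time)            # non-persistent response time: 0.2 s
```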
Persistent HTTP
suppose user enters URL www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants page someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested page, and sends message
4. (the server does not close the connection: TCP is still on)
5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects to download
Persistent HTTP
suppose user enters URL www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants object someDepartment/object1.jpg
3. HTTP server receives request message, forms response message containing requested object, and sends message
4. HTTP client receives response message containing the object, displays the object.
Repeated for each of the 10 jpeg objects; 10 rounds later, the HTTP server closes the TCP connection.
Non-persistent vs. persistent
[Figure: message timelines for fetching a base file and one referenced object.
Non-persistent: initiate TCP connection, OK; request file; file arrives; initiate a second TCP connection, OK; request obj 1; obj1 arrives.
Persistent: initiate TCP connection, OK; request file; file arrives; request obj 1 over the same connection; obj1 arrives.]
Persistent HTTP
non-persistent HTTP issues:
requires 2 RTTs + file transmission time per object
persistent HTTP:
server leaves connection open after sending response
subsequent HTTP messages between same client/server sent over open connection
client sends requests as soon as it encounters a referenced object
as little as one RTT + file transmission time for all the referenced objects
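The difference shows up once a page references many objects. An illustrative back-of-the-envelope comparison (my own numbers, transmission times ignored for clarity):

```python
rtt, n_objects = 0.050, 10                       # 50 ms RTT, 10 referenced objects

non_persistent = (1 + n_objects) * 2 * rtt       # 2 RTTs (connect + request) per object
persistent     = 2 * rtt + n_objects * rtt       # connect once, then 1 RTT per object
pipelined      = 2 * rtt + rtt                   # persistent + pipelining: as little as 1 extra RTT
print(non_persistent, persistent, pipelined)     # 1.1 s, 0.6 s, 0.15 s
```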
HTTP request message
two types of HTTP messages: request, response
HTTP request message:
ASCII (human-readable format)
request line
(GET, POST,
HEAD commands)
header
lines
carriage return,
line feed at start
of line indicates
end of header lines
GET /index.html HTTP/1.1\r\n
Host: www-net.cs.umass.edu\r\n
User-Agent: Firefox/3.6.10\r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7\r\n
Keep-Alive: 115\r\n
Connection: keep-alive\r\n
\r\n
carriage return character
line-feed character
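For illustration, a request like the one above can be sent "by hand" over a TCP socket. A minimal sketch (in practice the server may redirect or require HTTPS):

```python
import socket

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www-net.cs.umass.edu\r\n"
    "Connection: close\r\n"
    "\r\n"                                  # blank CRLF line ends the header lines
)
with socket.create_connection(("www-net.cs.umass.edu", 80)) as s:
    s.sendall(request.encode("ascii"))
    response = b""
    while chunk := s.recv(4096):
        response += chunk
print(response.split(b"\r\n")[0])           # the status line, e.g. b'HTTP/1.1 200 OK'
```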
HTTP request message: general format
request line: method sp URL sp version cr lf
header lines: header field name: value cr lf (one per header)
blank line: cr lf (marks the end of the header lines)
entity body
Method is GET, POST, HEAD, PUT, or DELETE.
sp: space, cr: carriage return character, lf: line feed
Connection: keep-alive (persistent connection)
Uploading form input
POST method:
web page often includes form input
input is uploaded to server in entity body
GET method:
input is carried in the URL field of the request line, e.g.:
www.somesite.com/animalsearch?monkeys&banana
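For illustration (hypothetical messages, following the slide's example), the same form input can travel in the URL with GET or in the entity body with POST:

GET /animalsearch?monkeys&banana HTTP/1.1\r\n
Host: www.somesite.com\r\n
\r\n

POST /animalsearch HTTP/1.1\r\n
Host: www.somesite.com\r\n
Content-Type: application/x-www-form-urlencoded\r\n
Content-Length: 14\r\n
\r\n
monkeys&banana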
Method types
HTTP/1.0:
GET
POST
HEAD
asks server to leave requested object out of response
HTTP/1.1:
GET, POST, HEAD
PUT
uploads file in entity body to path specified in URL field
DELETE
deletes file specified in the URL field
status line (protocol, status code, status phrase)
header
lines
data, e.g.,
requested
HTML file
HTTP/1.1 200 OK\r\n
Date: Sun, 26 Sep 2010 20:09:20 GMT\r\n
Server: Apache/2.0.52 (CentOS)\r\n
Last-Modified: Tue, 30 Oct 2007 17:00:02 GMT\r\n
ETag: "17dc6-a5c-bf716880"\r\n
Accept-Ranges: bytes\r\n
Content-Length: 2652\r\n
Keep-Alive: timeout=10, max=100\r\n
Connection: Keep-Alive\r\n
Content-Type: text/html; charset=ISO-8859-1\r\n
\r\n
data data data data data …
HTTP response message
HTTP response status codes
status code appears in the 1st line of the server-to-client response message.
some sample codes:
200 OK: request succeeded, requested object later in this msg
301 Moved Permanently: requested object moved, new location specified later in this msg (Location:)
400 Bad Request: request msg not understood by server
404 Not Found: requested document not found on this server
505 HTTP Version Not Supported
Cookies: keeping "state" (cont'd)
[Figure: client, Amazon server, and backend database]
1. client sends usual http request msg; the Amazon server creates ID 1678 for this user and creates an entry in its backend database
2. server sends usual http response with set-cookie: 1678; the browser adds "amazon 1678" to its cookie file
3. client sends usual http request msg with cookie: 1678; the server accesses the backend database and takes cookie-specific action
4. one week later: client sends usual http request msg with cookie: 1678; again the server takes cookie-specific action
Responses can be customized depending on your interests, since the server knows who you are.
User-server state: cookies
many Web sites use cookies
four components:
1) cookie header line of HTTP response message
2) cookie header line in next HTTP request message
3) cookie file kept on user’s host, managed by user’s browser
4) back-end database at Web site
Cookies (cont’d)
what cookies can be used for:
authorization
shopping carts
recommendations
user session state (Web e-mail)
how to keep “state”:
protocol endpoints: maintain state at sender/receiver over multiple transactions
cookies: http messages carry state
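At the message level, the exchange in the earlier figure looks roughly like this (the value 1678 is the slide's example; the host name is illustrative, and real cookies are name=value pairs with extra attributes):

first response from the server:
HTTP/1.1 200 OK\r\n
Set-cookie: 1678\r\n

every later request from the same browser:
GET /index.html HTTP/1.1\r\n
Host: www.amazon.com\r\n
Cookie: 1678\r\n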
Web caches (proxy server)
user sets browser: Web accesses via cache
browser sends all HTTP requests to cache
if object in cache:
then cache returns object
else cache requests object from origin server, then returns object to client
goal: satisfy client request without involving origin server
[Figure: clients send HTTP requests to the proxy server; on a miss, the proxy sends its own HTTP request to the origin server, receives the HTTP response, and returns the object to the client]
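The if/else logic above, as a minimal sketch (my own simplified code; a real proxy must also honor expiry, headers, and errors):

```python
import urllib.request

cache = {}                                   # url -> cached object (bytes)

def handle_request(url):
    if url in cache:                         # object in cache: return it directly
        return cache[url]
    with urllib.request.urlopen(url) as r:   # else fetch from the origin server
        obj = r.read()
    cache[url] = obj                         # keep a copy for the next client
    return obj
```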
More about Web caching
Q: Does the cache act as a client or a server?
More about Web caching
A: the cache acts as both client and server
server for original requesting client
client to origin server
typically cache is installed by ISP (university, company, residential ISP)
why Web caching?
reduce response time for client request
reduce traffic on an institution’s access link
Caching example
[Figure: institutional network with a 1 Gbps LAN, connected to the public Internet and origin servers via a 1.54 Mbps access link]
assumptions:
avg object size: 100K bits
avg request rate from browsers to origin servers: 15/sec (i.e., 1.5 Mbps of demand)
RTT from institutional router to any origin server: 2 sec
access link rate: 1.54 Mbps
consequences:
LAN utilization = avg req rate * avg object size / LAN bandwidth = 0.15%
access link utilization = avg req rate * avg object size / access link bandwidth = 99% -> problem!
total delay = Internet delay + access link delay + LAN delay = 2 sec + minutes + usecs
Q: what happens with a fatter access link?
Caching example: fatter access link
[Figure: same institutional network; the access link is upgraded from 1.54 Mbps to 154 Mbps]
assumptions: as before, except access link rate: 154 Mbps
consequences:
LAN utilization: 0.15%
access link utilization: 99% -> 0.99%
total delay: 2 sec + minutes + usecs -> 2 sec + msecs + usecs
Cost: increased access link speed (not cheap!)
Caching example: install local cache
[Figure: same institutional network (1 Gbps LAN), now with a local web cache, connected via the 1.54 Mbps access link to the public Internet and origin servers]
assumptions: as in the original example (avg object size 100K bits, 15 requests/sec, 2 sec RTT to origin, 1.54 Mbps access link)
consequences:
LAN utilization: 0.15%
access link utilization: 0% and total delay: usecs in the ideal case where every request hits the cache; the realistic calculation is on the next slide
Cost: web cache (cheap!)
Calculating access link utilization, delay with cache:
suppose cache hit rate is 0.4
40% requests satisfied at cache,
60% requests satisfied at origin
access link utilization:
60% of requests use the access link, so link utilization is around 60% and queueing delay at the access link is small
average total delay
= 0.6 * (delay from origin servers) + 0.4 * (delay when satisfied at cache)
= 0.6 * (~2.x seconds) + 0.4 * (~usecs) ≈ 1.2 sec
less than with the 154 Mbps link (and cheaper too!)
[Figure: same institutional network with the local web cache and the 1.54 Mbps access link]
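The numbers on the last few slides can be reproduced directly (values as stated in the assumptions; the "2 sec" origin delay and the negligible cache delay are the slides' approximations):

```python
obj_bits   = 100e3       # avg object size: 100K bits
req_rate   = 15          # requests/sec from browsers
access_bps = 1.54e6      # access link rate
hit_rate   = 0.4

demand_bps = req_rate * obj_bits                    # 1.5 Mbps of demand
print(demand_bps / access_bps)                      # ~0.97 -> ~99% utilization, no cache
print((1 - hit_rate) * demand_bps / access_bps)     # ~0.58 -> ~60% utilization, with cache
print(0.6 * 2.0 + 0.4 * 0.0)                        # ~1.2 s average total delay with cache
```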
Conditional GET
Goal: don’t send object if client has up-to-date cached version
no object transmission delay
lower link utilization
client: specify date of cached copy in HTTP request
If-modified-since:
server: response contains no object if cached copy is up-to-date:
HTTP/1.0 304 Not Modified
[Figure: two cases.
Object not modified: client sends HTTP request msg with If-modified-since: <date>; server replies HTTP/1.0 304 Not Modified (no object).
Object modified after <date>: client sends the same request; server replies HTTP/1.0 200 OK with the object.]
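For illustration (hypothetical object and date), a conditional GET and the two possible replies look like:

GET /fruit.jpg HTTP/1.1\r\n
Host: www.someSchool.edu\r\n
If-modified-since: Thu, 01 Feb 2024 10:00:00 GMT\r\n
\r\n

if the object has not changed since that date:  HTTP/1.0 304 Not Modified (no object in the body)
if the object was modified after that date:     HTTP/1.0 200 OK, with the (new) object in the body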