Advanced Network Technologies
Week 2:
Network Performance, Network Applications
Dr. Wei Bao | Lecturer, School of Computer Science
Network Performance: Throughput
Throughput
› throughput: rate (bits/time unit) at which bits are transferred between sender and receiver
– instantaneous: rate at a given point in time
– average: rate over a longer period of time
server has a file of F bits to send to the client; server-side link capacity Rs bits/sec, client-side link capacity Rc bits/sec
Throughput (cont’d)
› Rs < Rc: What is the average end-end throughput? Rs bits/sec
› Rs > Rc: What is the average end-end throughput? Rc bits/sec
bottleneck link: the link on the end-end path that constrains end-end throughput
Throughput (cont’d)
Internet scenario
› 10 connections (fairly) share a backbone bottleneck link of R bits/sec; each connection also traverses a server access link of Rs bits/sec and a client access link of Rc bits/sec
› per-connection end-end throughput: min(Rc, Rs, R/10)
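A minimal Python sketch of this formula (the function name and the example rates are illustrative, not from the slides):

```python
def per_connection_throughput(rs_bps, rc_bps, backbone_bps, n_connections):
    """End-to-end throughput of one connection: the tightest of the server
    access link, the client access link, and a fair share of the backbone."""
    return min(rs_bps, rc_bps, backbone_bps / n_connections)

# Example: Rs = 2 Mbps, Rc = 1 Mbps, backbone R = 5 Mbps shared by 10 connections
print(per_connection_throughput(2e6, 1e6, 5e6, 10))  # 500000.0 -> backbone share is the bottleneck
```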
Network Performance: Fairness
Network Fairness and Bandwidth Allocation
In reality there are two considerations:
› Efficiency
› Fairness
› However, the two can conflict!
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
Topology: A --(1 Mbps)-- B --(1 Mbps)-- C
Q: How can we allocate the link bandwidths to the three flows?
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
Allocation: A-B = 0.5 Mbps, B-C = 0.5 Mbps, A-C = 0.5 Mbps (links A-B and B-C are each 1 Mbps)
Very fair!
However: network throughput is only 1.5 Mbps
Network Fairness, Bandwidth allocation
Three flows: A-B, B-C, A-C
Allocation: A-B = 1 Mbps, B-C = 1 Mbps, A-C = 0 Mbps
Very unfair!
However: network throughput is 2 Mbps
Fairness
Bottleneck for a flow: The link that limits the data rate of the flow
Topology: A --(1 Mbps)-- B --(10 Mbps)-- C
The 1 Mbps link is the bottleneck for flows A-B and A-C; the 10 Mbps link is the bottleneck for flow B-C.
Max-min Fairness
› Maximize the minimum
› Try to increase the "poorest" flow as much as possible
– A richer flow can be sacrificed.
› Then try to increase the second "poorest" as much as possible
– A richer flow can be sacrificed.
– A poorer flow cannot be sacrificed.
› Then the third "poorest", and so on…
› Max-min fairness criterion: an allocation is max-min fair if the only way to increase any flow is to sacrifice a flow that is already poorer or equal.
Max-min Fairness
Bottleneck for a flow: the link that limits its data rate
Topology: A --(1 Mbps)-- B --(10 Mbps)-- C
Max-min fair allocation: A-B = 0.5 Mbps, A-C = 0.5 Mbps, B-C = 9.5 Mbps
Even though the B-C flow is large, it only consumes spare capacity on the 10 Mbps link, so it does not hurt the poorer flows.
Bottleneck approach
› 1. Start with all flows at zero; potential flow set = {all flows}
› 2. Slowly increase all flows in the potential flow set together until a (new) link saturates
– "pouring water into the network"
› 3. Freeze the flows that are now bottlenecked and remove them from the potential flow set
› 4. If the potential flow set is not empty, go to step 2 (there is still potential to increase)
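A minimal Python sketch of this progressive-filling procedure, under the assumption that links and flow paths are given as plain dictionaries (the names and the step size are illustrative, not from the slides):

```python
def max_min_fair(links, flows, step=1e-3):
    """Progressive filling ("pouring water"):
    links: link name -> capacity; flows: flow name -> list of links on its path."""
    rate = {f: 0.0 for f in flows}
    active = set(flows)
    while active:
        for f in active:                 # step 2: raise all active flows together
            rate[f] += step
        load = {l: sum(rate[f] for f in flows if l in flows[f]) for l in links}
        saturated = {l for l, c in links.items() if load[l] >= c - 1e-9}
        # step 3: freeze every flow that crosses a saturated link
        active -= {f for f in active if saturated & set(flows[f])}
    return rate

# Example from the slides: A --(1 Mbps)-- B --(1 Mbps)-- C with flows A-B, B-C, A-C
links = {"AB": 1.0, "BC": 1.0}
flows = {"A-B": ["AB"], "B-C": ["BC"], "A-C": ["AB", "BC"]}
print(max_min_fair(links, flows))   # roughly {'A-B': 0.5, 'B-C': 0.5, 'A-C': 0.5}
```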
Bottleneck approach: example
Each link between two routers has capacity 1; four flows A, B, C, D.
› Potential flow set {A, B, C, D}: all four flows increase together to 1/3 each, at which point the link shared by B, C, and D saturates (bottleneck!). B, C, and D are frozen at 1/3 and removed.
› Potential flow set {A}: A keeps increasing to 2/3, at which point the link it shares with one frozen 1/3 flow saturates (bottleneck!). A is frozen at 2/3 and removed.
› Potential flow set {}: done. Final allocation: A = 2/3, B = C = D = 1/3.
Can you solve the following problem?
Triangle topology with nodes A, B, C; link rates: AB = BC = 1, CA = 2
More comments
One more comment: max-min fairness can be too fair!
Topology: A --(1 Mbps)-- B --(1 Mbps)-- C; max-min allocation: A-B = 0.5 Mbps, B-C = 0.5 Mbps, A-C = 0.5 Mbps
The A-C flow uses two links, yet it gets the same share as the single-link flows. Should it?
More comments
Another form of fairness: proportional fairness
Same topology; proportionally fair allocation: A-B = 2/3 Mbps, B-C = 2/3 Mbps, A-C = 1/3 Mbps
Longer routes are penalized.
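Proportional fairness maximizes the sum of the logarithms of the flow rates. A small brute-force sketch for this two-link example (the grid granularity and variable names are mine, not from the slides) recovers the 2/3, 2/3, 1/3 allocation:

```python
import math

best_utility, best_alloc = -math.inf, None
steps = 300                              # grid granularity of the search
for i in range(1, steps):
    c = i / steps                        # rate of the two-link flow A-C
    a = 1.0 - c                          # A-B takes the rest of link A-B (1 Mbps)
    b = 1.0 - c                          # B-C takes the rest of link B-C (1 Mbps)
    utility = math.log(a) + math.log(b) + math.log(c)
    if utility > best_utility:
        best_utility, best_alloc = utility, (a, b, c)

print(best_alloc)                        # approximately (0.667, 0.667, 0.333)
```

Setting a = b = 1 - c is valid here because utility increases with a and b, so at the optimum the single-link flows fill whatever capacity the two-link flow leaves.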
Can you solve the following problem?
Triangle topology with nodes A, B, C; link rates: AB = BC = 1, CA = 2
Flow demands: flows 1, 2, 3 = 1/3 each; flow 4 = 2/3; flow 5 = 4/3
The Application Layer
Some network applications
› e-mail
› web
› text messaging
› remote login
› P2P file sharing
› multi-user network games
› streaming stored video (YouTube, Netflix)
› voice over IP (e.g., Skype)
› real-time video conferencing
› social networking
› search
› …
Creating a network app
[Figure: end systems at the network edge, each running the full protocol stack: application, transport, network, data link, physical]
write programs that:
› run on (different) end systems
› communicate over network
› e.g., web server software communicates with browser software
no need to write software for network- core devices
› network-core devices do not run user applications
› applications on end systems allow for rapid app development and propagation
Application architectures
Possible structure of applications
› Client-server
› Peer-to-peer (P2P)
Client-server architecture
server:
› always-on
› permanent IP address
› data centers for scaling
clients:
› communicate with server
› may be intermittently connected
› may have dynamic IP addresses
› do not communicate directly with each other
P2P architecture
› no always-on server
› arbitrary end systems directly communicate
› peers request service from other peers, provide service in return to other peers
– self scalability – new peers bring new service capacity, as well as new service demands
› peers are intermittently connected and change IP addresses
– complex management
Processes communicating
process: program running within a host
› within same host, two processes communicate using inter-process communication (defined by OS)
› processes in different hosts communicate by exchanging messages
clients, servers
client process: process that initiates communication
server process: process that waits to be contacted
aside: applications with P2P architectures have client processes & server processes
Sockets
› process sends/receives messages to/from its socket
› socket analogous to a door
– sending process shoves message out door
– sending process relies on transport infrastructure on other side of door to deliver message to socket at receiving process
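To make the door analogy concrete, here is a minimal Python sketch (the port number and the message are illustrative): the client process shoves a message out of its socket, and the transport infrastructure delivers it to the server process's socket.

```python
import socket
import threading
import time

PORT = 12000  # illustrative port, not a standard service port

def server():
    # server process: waits to be contacted at its socket ("door")
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            print("server received:", conn.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # give the server a moment to start listening

# client process: initiates communication, shoves a message out its socket
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("localhost", PORT))
    c.sendall(b"hello through the socket door")

t.join()
```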
Addressing processes
› to receive messages, process must have identifier
› host device has a unique 32-bit IP address (or 128-bit in IPv6)
› Q: does IP address of host on which process runs suffice for identifying the process?
A: no, many processes can be running on same host
› identifier includes both the IP address and the port number associated with the process on the host
› example port numbers:
– HTTP server: 80
– mail server: 25
› to send HTTP message to gaia.cs.umass.edu web server:
– IP address: 128.119.245.12
– port number: 80
› more shortly…
App-layer protocol defines
› types of messages exchanged
– e.g., request, response
› message syntax:
– what fields in messages & how fields are delineated
– e.g., first line: method; second line: URL
› message semantics:
– meaning of information in fields
– e.g., 404 means "not found"
› rules for when and how processes send & respond to messages
open protocols:
› defined in RFCs
› allows for interoperability
› e.g., HTTP, SMTP
proprietary protocols:
› e.g., Skype
What transport service does an app need?
data integrity
› some apps (e.g., file transfer, web transactions) require 100% reliable data transfer
› other apps (e.g., audio) can tolerate some loss
timing
› some apps (e.g., Internet telephony, interactive games) require low delay to be “effective”
throughput
› some apps (e.g., multimedia) require a minimum amount of throughput to be "effective"
› other apps ("elastic apps") make use of whatever throughput they get
Internet transport protocols services
TCP service:
› reliable transport between sending and receiving process
› flow control: sender won't overwhelm receiver
› congestion control: throttle sender when network overloaded
› connection-oriented: setup required between client and server processes
› does not provide: timing, minimum throughput guarantee
UDP service:
› unreliable data transfer between sending and receiving process
› does not provide: reliability, flow control, congestion control, timing, throughput guarantee, or connection setup
Internet apps: application, transport protocols
application               application layer protocol             underlying transport protocol
e-mail                    SMTP [RFC 2821]                        TCP
remote terminal access    Telnet [RFC 854]                       TCP
Web                       HTTP [RFC 2616]                        TCP
file transfer             FTP [RFC 959]                          TCP
streaming multimedia      HTTP, RTP [RFC 1889]                   TCP or UDP
Internet telephony        SIP, RTP, proprietary (e.g., Skype)    TCP or UDP
Web and HTTP
Web and HTTP
First, a review…
› web page consists of base HTML-file which includes several referenced objects
– HTML: HyperText Markup Language
› object can be JPEG image, Java applet, audio file,…
› each object is addressable by a URL (Uniform Resource Locator), e.g.,
www.someschool.edu/someDept/pic.gif
(host name: www.someschool.edu; path name: /someDept/pic.gif)
Web and HTTP
File: usually a base HTML file (HyperText Markup Language)
Example: the base file contains text (xxxxxxxxx, yyyyyyyyyyyy, zzzzzzzzz) interleaved with references to objects such as www.aaa.edu/Obj1.jpg and www.aaa.edu/Obj2.jpg; the browser fetches those objects and shows the fully rendered page.
HTTP overview
HTTP: hypertext transfer protocol
› Web’s application layer protocol
› client/server model
– client: browser that requests, receives, (using HTTP protocol) and “displays” Web objects
– server: Web server sends (using HTTP protocol) objects in response to requests
(e.g., a PC running a Firefox browser or an iPhone running a Safari browser, talking to a server running an Apache Web server)
HTTP overview (cont’d)
uses TCP:
› client initiates TCP connection (creates socket) to server, port 80
– How to know IP address?
– DNS (Domain Name System)
› server accepts TCP connection from client
› HTTP messages (application- layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server)
› TCP connection closed
HTTP is “stateless”
› server maintains no information about past client requests
aside: protocols that maintain "state" are complex!
› past history (state) must be maintained
› if server/client crashes, their views of "state" may be inconsistent and must be reconciled
HTTP connections
non-persistent HTTP
› at most one object sent over TCP connection
– connection then closed
› downloading multiple objects requires multiple connections
persistent HTTP
› multiple objects can be sent over single TCP connection between client, server
Non-persistent HTTP
suppose user enters URL: www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants page someDepartment/home.index
3. HTTP server receives the request message, forms a response message containing the requested page, and sends the message
4. HTTP server closes the TCP connection.
5. HTTP client receives the response message containing the html file and displays the html. Parsing the html file, it finds 10 referenced jpeg objects to download
Non-persistent HTTP (cont'd)
1a. HTTP client initiates a new TCP connection to the HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants object someDepartment/object1.jpg
3. HTTP server receives the request message, forms a response message containing the requested object, and sends the message
4. HTTP server closes the TCP connection.
5. HTTP client receives the response message containing the object and displays the object.
6. Steps 1-5 are repeated for each of the 10 jpeg objects
HTTP: response time
RTT (definition): time for a small packet to travel from client to server and back
HTTP response time:
› one RTT to initiate TCP connection
› one RTT for HTTP request and first few bytes of HTTP response to return
› file transmission time
› non-persistent HTTP response time = 2·RTT + file transmission time
[Timeline: initiate TCP connection (one RTT); request file, first bytes of response arrive (one RTT); time to transmit file; file received]
Persistent HTTP
suppose user enters URL: www.someSchool.edu/someDepartment/home.index (contains text, references to 10 jpeg images)
1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for a TCP connection at port 80, "accepts" the connection, notifying the client
2. HTTP client sends HTTP request message into the TCP connection socket. Message indicates that client wants page someDepartment/home.index
3. HTTP server receives the request message, forms a response message containing the requested page, and sends the message
4. The TCP connection is still open (the server does not close it)
5. HTTP client receives the response message containing the html file and displays the html. Parsing the html file, it finds 10 referenced jpeg objects to download
Persistent HTTP (cont'd)
2. HTTP client sends HTTP request message into the (still open) TCP connection socket. Message indicates that client wants object someDepartment/object1.jpg
3. HTTP server receives the request message, forms a response message containing the requested object, and sends the message
4. HTTP client receives the response message containing the object and displays the object.
Steps 2-4 are repeated for each of the 10 jpeg objects; 10 rounds later, the HTTP server closes the TCP connection.
Non-persistent vs. persistent
Non-persistent: initiate TCP connection, OK, request file, file; then a new TCP connection, OK, request obj 1, obj 1; and so on, one connection per object.
Persistent: initiate TCP connection, OK, request file, file, request obj 1, obj 1, and so on, all over the same connection.
Persistent HTTP
non-persistent HTTP issues:
› requires 2 RTTs + file transmission time per object
persistent HTTP:
› server leaves connection open after sending response
› subsequent HTTP messages between same client/server sent over open connection
› client sends requests as soon as it encounters a referenced object
› as little as one RTT + file transmission time for all the referenced objects
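A small sketch of the arithmetic (the RTT, per-object transmission time, and object count are illustrative values), comparing the total time for a base file plus N referenced objects under non-persistent HTTP and persistent HTTP without pipelining:

```python
def non_persistent_time(rtt, tx, n_objects):
    # each object (and the base file) needs: 1 RTT for TCP setup +
    # 1 RTT for request/response + its transmission time
    return (n_objects + 1) * (2 * rtt + tx)

def persistent_time(rtt, tx, n_objects):
    # one TCP setup for everything, then 1 RTT + transmission time per object
    return (2 * rtt + tx) + n_objects * (rtt + tx)

rtt, tx, n = 0.1, 0.01, 10   # 100 ms RTT, 10 ms transmission per object, 10 objects
print(non_persistent_time(rtt, tx, n))  # 2.31 s
print(persistent_time(rtt, tx, n))      # 1.31 s
```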
HTTP request message
› two types of HTTP messages: request, response
› HTTP request message:
– ASCII (human-readable format)
Annotations: "\r\n" marks the carriage return and line-feed characters; the first line is the request line (GET, POST, HEAD commands); the following lines are header lines; a carriage return + line feed at the start of a line indicates the end of the header lines.
GET /index.html HTTP/1.1\r\n
Host: www-net.cs.umass.edu\r\n
User-Agent: Firefox/3.6.10\r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7\r\n
Keep-Alive: 115\r\n
Connection: keep-alive\r\n
\r\n
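A minimal sketch that sends a request in exactly this format over a TCP socket, using example.com as an illustrative host (the course's example server may not be reachable) and only a few of the headers shown above:

```python
import socket

host = "example.com"                       # illustrative host
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "User-Agent: demo-client\r\n"
    "Connection: close\r\n"
    "\r\n"                                 # blank line ends the header lines
)

with socket.create_connection((host, 80)) as s:
    s.sendall(request.encode("ascii"))     # HTTP/1.1 requests are ASCII text
    response = b""
    while chunk := s.recv(4096):           # read until the server closes the connection
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```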
HTTP request message: general format
request line: method [sp] URL [sp] version [cr][lf]
header lines: header field name: value [cr][lf], repeated for each header
end of headers: a blank line, i.e., [cr][lf] on its own
entity body: the (optional) message body
Uploading form input
POST method:
› web page often includes form input
› input is uploaded to server in entity body
GET method:
› form input is uploaded in the URL field of the request line (as a query string)
Method types
HTTP/1.0:
› GET
› POST
› HEAD
– asks server to leave requested object out of response
HTTP/1.1:
› GET, POST, HEAD
› PUT
– uploads file in entity body to path specified in URL field
› DELETE
– deletes file specified in the URL field
HTTP response message
status line (protocol, status code, status phrase), followed by header lines, followed by the data (e.g., the requested HTML file):
HTTP/1.1 200 OK\r\n
Date: Sun, 26 Sep 2010 20:09:20 GMT\r\n
Server: Apache/2.0.52 (CentOS)\r\n
Last-Modified: Tue, 30 Oct 2007 17:00:02 GMT\r\n
ETag: "17dc6-a5c-bf716880"\r\n
Accept-Ranges: bytes\r\n
Content-Length: 2652\r\n
Keep-Alive: timeout=10, max=100\r\n
Connection: Keep-Alive\r\n
Content-Type: text/html; charset=ISO-8859-1\r\n
\r\n
data data data data data …
HTTP response status codes
› status code appears in 1st line in server-to-client response message.
› some sample codes:
200 OK
– request succeeded, requested object later in this msg
301 Moved Permanently
– requested object moved, new location specified later in this msg (Location:)
400 Bad Request
– request msg not understood by server
404 Not Found
– requested document not found on this server
505 HTTP Version Not Supported
Cookies: keeping “state” (cont’d)
client (keeps a cookie file) and server (keeps a backend database):
› client sends a usual http request msg; the Amazon server creates ID 1678 for the user and creates an entry in its backend database
› server sends a usual http response with "set-cookie: 1678"; the client adds "amazon 1678" to its cookie file
› client's later usual http request msgs carry "cookie: 1678"; the server takes a cookie-specific action, accessing the backend database, and sends the usual http response msg
› one week later: the client's usual http request msg again carries "cookie: 1678"; the server takes a cookie-specific action and replies with the usual http response msg
User-server state: cookies
many Web sites use cookies
four components:
1) cookie header line of HTTP response message
2) cookie header line in next HTTP request message
3) cookie file kept on user's host, managed by user's browser
4) back-end database at Web site
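A minimal sketch of components 1 and 2 using Python's standard library (example.com is an illustrative host and may not actually set a cookie): the Set-Cookie header from the first response is echoed back as the Cookie header of the next request.

```python
import http.client

HOST = "example.com"  # illustrative host; a real shopping site would set a session cookie

conn = http.client.HTTPConnection(HOST, 80)

# first request: no cookie yet
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
cookie = resp.getheader("Set-Cookie")      # component 1: cookie header line in the response
print("server set cookie:", cookie)

# next request: echo the cookie back (component 2: cookie header line in the request)
headers = {"Cookie": cookie.split(";")[0]} if cookie else {}
conn.request("GET", "/", headers=headers)
print("second response status:", conn.getresponse().status)
conn.close()
```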
Cookies (cont’d)
what cookies can be used for:
› authorization
› shopping carts
› recommendations
› user session state (Web e-mail)
how to keep “state”:
• protocol endpoints: maintain state at sender/receiver over multiple transactions
• cookies: http messages carry state
Web caches (proxy server)
goal: satisfy client request without involving origin server
› user sets browser: Web accesses via cache
› browser sends all HTTP requests to cache
› if object in cache:
– then cache returns object
– else cache requests object from origin server, then returns object to client
More about Web caching
› Q: Does the cache act as a client or a server?
More about Web caching
› A: the cache acts as both client and server
– server for original requesting client
– client to origin server
› typically cache is installed by ISP (university, company, residential ISP)
why Web caching?
› reduce response time for client request
› reduce traffic on an institution’s access link
Caching example
assumptions:
› avg object size: 100K bits
› avg request rate from browsers to origin servers: 15/sec (i.e., 1.5 Mbps of requested data)
› RTT from institutional router to any origin server: 2 sec
› access link rate: 1.54 Mbps
› LAN rate: 1 Gbps
consequences:
› LAN utilization = avg request rate × object size / LAN bandwidth = 0.15%
› access link utilization = avg request rate × object size / access link bandwidth = 99%: problem!
› total delay = 2 sec + minutes + usecs
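A quick sketch of the utilization arithmetic behind these numbers (the variable names are mine):

```python
object_size_bits = 100e3      # 100K bits per object
request_rate = 15             # requests per second
lan_rate = 1e9                # 1 Gbps LAN
access_rate = 1.54e6          # 1.54 Mbps access link

traffic_bps = request_rate * object_size_bits          # 1.5 Mbps of requested data
print(traffic_bps / lan_rate)      # LAN utilization ~= 0.0015 (0.15%)
print(traffic_bps / access_rate)   # access link utilization ~= 0.97 (the slide rounds this to ~99%)
```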
Q: what happens with fatter access link?
Caching example: fatter access link
assumptions:
› avg object size: 100K bits
› avg request rate from browsers to origin servers: 15/sec
› RTT from institutional router to any origin server: 2 sec
› access link rate: 1.54 Mbps, upgraded to 154 Mbps
consequences:
› LAN utilization: 0.15% (unchanged; the LAN is 1 Gbps)
› access link utilization: 99% → 0.99%
› total delay = 2 sec + minutes + usecs → 2 sec + msecs + usecs
Cost: increased access link speed (not cheap!)
Caching example: install local cache
assumptions:
› avg object size: 100K bits
› avg request rate from browsers to origin servers: 15/sec
› RTT from institutional router to any origin server: 2 sec
› access link rate: 1.54 Mbps
› a local web cache is installed on the 1 Gbps LAN
consequences (for requests satisfied at the cache):
› LAN utilization: 0.15%
› access link utilization = 0%
› total delay = usecs
Cost: web cache (cheap!)
Caching example: install local cache
Calculating access link utilization, delay with cache:
› suppose cache hit rate is 0.4
– 40% of requests satisfied at cache
– 60% of requests satisfied at origin
› access link utilization:
– only 60% of requests use the access link, so utilization drops to around 60%, and queueing delay at the access link becomes small
› average total delay
= 0.6 × (delay from origin servers) + 0.4 × (delay when satisfied at cache)
= 0.6 × (~2.x seconds) + 0.4 × (~usecs)
≈ 1.2 seconds
less than with the 154 Mbps link (and cheaper too!)
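A small sketch of this calculation (the exact origin and cache delays are illustrative placeholders for "~2.x seconds" and "~usecs"):

```python
hit_rate = 0.4
origin_delay = 2.01     # ~2.x seconds when fetched from the origin (illustrative)
cache_delay = 1e-5      # ~usecs when satisfied at the local cache (illustrative)

access_link_utilization = (1 - hit_rate) * (15 * 100e3) / 1.54e6
avg_delay = (1 - hit_rate) * origin_delay + hit_rate * cache_delay

print(round(access_link_utilization, 2))  # ~0.58, i.e., around 60%
print(round(avg_delay, 2))                # ~1.21 s, well below the no-cache case
```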
Conditional GET
› Goal: don't send object if client has up-to-date cached version
– no object transmission delay
– lower link utilization
› client: specifies the date of its cached copy in the HTTP request
If-modified-since: <date>
› server: response contains no object if cached copy is up-to-date:
HTTP/1.0 304 Not Modified
Case 1 (object not modified after <date>): client sends HTTP request msg with "If-modified-since: <date>"; server replies "HTTP/1.0 304 Not Modified" with no object body
Case 2 (object modified after <date>): client sends the same request; server replies "HTTP/1.0 200 OK" with the (new) object
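A minimal sketch of a conditional GET with Python's standard library (the host and the cached-copy date are illustrative): a 304 means the cached copy can be reused, while a 200 carries a fresh object.

```python
import http.client

HOST = "example.com"                             # illustrative host
CACHED_DATE = "Sat, 01 Jan 2022 00:00:00 GMT"    # illustrative date of our cached copy

conn = http.client.HTTPConnection(HOST, 80)
conn.request("GET", "/", headers={"If-Modified-Since": CACHED_DATE})
resp = conn.getresponse()

if resp.status == 304:
    print("Not Modified: reuse the cached copy")  # no object body sent
else:
    body = resp.read()
    print(resp.status, "fresh object of", len(body), "bytes")
conn.close()
```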