
File Servers
• Basic idea is simple
– Clients open/read/write/close files indirectly, by going through servers, which access the disk
– A client must be matched with a server
– Clients do not store (permanent) files; servers do
– Clients cache parts of (or whole) files

Picture of File Server, Many Channels

Basic Idea—Client
Note: step numbers span this slide and the next
(1) send request to single global channel (open)
(4) receive reply on a client-specific channel
– reply contains a server id (openreply[clientId])
(5) thereafter, send requests to the channel indexed by server id (filerequest[serverId])
(9) receive results on a different client-specific channel (filereply[clientId])
– eventually send a close message
Note: could instead have one channel on which all servers send to clients (but that costs extra argument marshalling)

Basic Idea—Server
while (TRUE) {
  (2) receive client id on global channel (open)
  (3) send server id to that client's channel (openreply[clientId])
  while close message not yet received {
    (6) receive request on server-specific channel (filerequest[servId])
    (7) process request
    (8) send reply to channel indexed by client (filereply[clientId])
  }
}

Notes on File Server
• Our solution used conversational continuity
– Client talks to same server throughout the lifetime of the file
• Advantage: caching on the server side
• Other options
– One server per disk
• Harder to program: server must keep track of file information for all clients
– Stateless (client sends all info on every request)
• Example: on a file read, must send file name, file offset (this is what NFS does)

Interacting Peers Picture

Interacting Peers
• Each process has an integer
– Goal: find max and min integers
• Approaches
– Centralized: use coordinator
• 2 * (P-1) messages and a bottleneck
– Symmetric: every process sends its value to every other
• P * (P-1) messages (but easy to program)
– Ring: form a circle; send values around
• 2 * (P-1) messages; no bottleneck, but sequential
– Tree: create a binary (in general, n-ary) tree
• 2 * (P-1) messages; less of a bottleneck, and only log(P) steps

Interacting Peers
• How to know which implementation to choose?
• Can use analytical models
– LogP is the most widely used such model
– L (latency), o (overhead), g (gap), P (number of processors)
• Allows mostly architecture-independent analysis of parallel algorithms

Picture of LogP Model for Barriers

Interacting Peers
• Generalizations
– Example: all-to-all
• Implementations are not at all clear here
• Must worry about contention
• May want intelligent scheduling