
Most concepts are drawn from Chapter 12
Distributed File Systems (DFS)
Updated by
* Introduction


* File service architecture
* Sun Network File System (NFS)
* Andrew File System (AFS) (personal study)
* Recent advances

Learning objectives
 Understand the requirements that affect the design of distributed services
 NFS: understand how a relatively simple, widely-used service is designed
– Obtain knowledge of file systems, both local and networked
– Caching as an essential design technique
– Remote interfaces are not the same as APIs
– Security requires special consideration
 Recent advances: appreciate the ongoing research
that often leads to major advances (e.g., the creation of
widely used storage infrastructure like Dropbox).

Introduction
 Why do we need a DFS?
– Primary purpose of a Distributed System…
Connecting Users and Resources
– Resources…
 … can be inherently distributed
 … can actually be data (files, databases, …) and…
 … their availability becomes a crucial issue for the performance of a Distributed System and applications.

Introduction
 A case for DFS
Uhm… perhaps time has come to buy a rack of servers….
I want to store my thesis on the server!
My boss wants…
I need to store my analysis and reports safely…
I need to have my book always available..
I need storage for my reports

Introduction
 A Case for DFS
Uhm… … maybe we need a DFS?… Well after the paper and a nap…
Same here… I don’t remember..
Hey… but where did I put my docs?
I am not sure whether server A, or B, or C…
Wow… now I can store a lot more documents…

Introduction
 A Case for DFS
Distributed File System
It is reliable, fault tolerant, highly available, location transparent…. I hope I can finish my newspaper now…
Nice… my boss will promote me!
Good… I can access my folders from anywhere…
Wow! I do not have to remember which server I stored the data into…

Storage systems and their properties
 In first generation of distributed systems (1974-95), file systems (e.g. NFS) were the only networked storage systems.
 With the advent of distributed object systems (CORBA, Java) and the web, the picture has become more complex.
 Current focus is on large-scale, scalable storage:
– 1995 – 2010: Google File System (GFS), Amazon S3 (Simple Storage Service)
– 2010 – now: cloud storage (e.g., Dropbox, Google Drive, Microsoft OneDrive)

Storage systems and their properties
                             Sharing  Persis-  Distributed      Consistency  Example
                                      tence    cache/replicas   maintenance
Main memory                    X        X          X                1        RAM
File system                    X        √          X                1        UNIX file system
Distributed file system        √        √          √                √        Sun NFS
Web                            √        √          √                X        Web server
Distributed shared memory      √        X          √                √        Ivy (Ch. 16)
Remote objects (RMI/ORB)       √        X          X                1        CORBA
Persistent object store        √        √          X                1        CORBA Persistent Object Service
Peer-to-peer storage store     √        √          √                2        OceanStore

Types of consistency between copies:
1 – strict one-copy consistency
√ – approximate/slightly weaker guarantees
X – no automatic consistency
2 – considerably weaker guarantees

What is a file system?
 Persistent stored data sets
 Hierarchic name space visible to all processes
 API with the following characteristics:
– access and update operations on persistently stored data sets
– Sequential access model (with additional random facilities)
 Sharing of data between users, with access control
 Concurrent access:
– certainly for read-only access
– what about updates?
 Other features:
– mountable file stores
– more? …

What is a file system?
UNIX file system operations

filedes = open(name, mode)        Opens an existing file with the given name.
filedes = creat(name, mode)       Creates a new file with the given name.
                                  Both operations deliver a file descriptor referencing the open
                                  file. The mode is read, write or both.
status = close(filedes)           Closes the open file filedes.
count = read(filedes, buffer, n)  Transfers n bytes from the file referenced by filedes to buffer.
count = write(filedes, buffer, n) Transfers n bytes to the file referenced by filedes from buffer.
                                  Both operations deliver the number of bytes actually transferred
                                  and advance the read-write pointer.
pos = lseek(filedes, offset, whence)  Moves the read-write pointer to offset (relative or
                                  absolute, depending on whence).
status = unlink(name)             Removes the file name from the directory structure. If the file
                                  has no other names, it is deleted.
status = link(name1, name2)       Adds a new name (name2) for a file (name1).
status = stat(name, buffer)       Gets the file attributes for file name into buffer.

What is a file system?
Class Exercise A
Write a simple C program to copy a file using the UNIX file system operations:
void copyfile(char *oldfile, char *newfile) {
}
Note: remember that read() returns 0 when you attempt to read beyond the end of the file.

A code in C – Copy File program
Write a simple C program to copy a file using the UNIX file system operations.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define BUFSIZE 1024
#define READ 0
#define FILEMODE 0644

void copyfile(char *oldfile, char *newfile)
{
    char buf[BUFSIZE];
    int n = 1, fdold, fdnew;

    if ((fdold = open(oldfile, READ)) >= 0) {
        fdnew = creat(newfile, FILEMODE);
        while (n > 0) {
            n = read(fdold, buf, BUFSIZE);
            if (write(fdnew, buf, n) < 0) break;
        }
        close(fdold);
        close(fdnew);
    } else
        printf("Copyfile: couldn't open file: %s\n", oldfile);
}

main(int argc, char **argv)
{
    copyfile(argv[1], argv[2]);
}

What is a file system?
(a typical module structure for implementation of a non-DFS)

File system modules
Directory module:      relates file names to file IDs
File module:           relates file IDs to particular files
Access control module: checks permission for operation requested
File access module:    reads or writes file data or attributes
Block module:          accesses and allocates disk blocks
Device module:         disk I/O and buffering

What is a file system?
File attribute record structure
updated by system: File length, Creation timestamp, Read timestamp, Write timestamp, Attribute timestamp, Reference count
updated by owner: Access control list (e.g. for UNIX: rw-rw-r--)

Distributed file system/service requirements
 Transparency
 Concurrency
 Replication
 Heterogeneity
 Fault tolerance
 Consistency
 Security
 Efficiency

Transparencies
– Access: same operations (client programs are unaware of distribution of files)
– Location: same name space after relocation of files or processes (client programs should see a uniform file name space)
– Mobility: automatic relocation of files is possible (neither client programs nor system admin tables in client nodes need to be changed when files are moved)
– Performance: satisfactory performance across a specified range of system loads
– Scaling: service can be expanded to meet additional loads or growth

Concurrency properties
– Changes to a file by one client should not interfere with the operation of other clients simultaneously accessing or changing the same file
– File-level or record-level locking
– Other forms of concurrency control to minimise contention

Replication properties
– File service maintains multiple identical copies of files
– Load-sharing between servers makes the service more scalable
– Local access has better response (lower latency)
– Fault tolerance
– Full replication is difficult to implement
– Caching (of all or part of a file) gives most of the benefits (except fault tolerance)

Heterogeneity properties
– Service can be accessed by clients running on (almost) any OS or hardware platform
– Design must be compatible with the file systems of different OSes
– Service interfaces must be open – precise specifications of APIs are published

Fault tolerance
– Service must continue to operate even when clients make errors or crash
– Service must resume after a server machine crashes
– If the service is replicated, it can continue to operate even during a server crash

Consistency
– UNIX offers one-copy update semantics for operations on local files – caching is completely transparent
– Difficult to achieve the same for distributed file systems while maintaining good performance and scalability

Security
– Must maintain access control and privacy as for local files: based on identity of user making request; identities of remote users must be authenticated; privacy requires secure communication
– Service interfaces are open to all processes not excluded by a firewall: vulnerable to impersonation and other attacks

Efficiency
– Goal for a distributed file system is usually performance comparable to a local file system
– File service is the most heavily loaded service in an intranet, so its functionality and performance are critical

File Service Architecture
 An architecture that offers a clear separation of the main concerns in providing access to files is obtained by structuring the file service as three components:
– A flat file service
– A directory service
– A client module
 The relevant modules and their relationship is shown next.
 The client module implements the interfaces exported by the flat file and directory services on the server side.
Model file service architecture

Figure: a client computer runs application programs and the client module; a server computer runs the directory service (Lookup, AddName, UnName, GetNames) and the flat file service (Create, Delete, Read, Write, GetAttributes, SetAttributes).

Responsibilities of various modules
 Flat file service:
– Concerned with the implementation of operations on the contents of files. Unique File Identifiers (UFIDs) are used to refer to files in all requests for flat file service operations. UFIDs are long sequences of bits chosen so that each file has a UFID that is unique among all of the files in a distributed system.
 Directory service:
– Provides a mapping between text names for the files and their UFIDs. Clients may obtain the UFID of a file by quoting its text name to the directory service. The directory service supports the functions needed to generate directories and to add new files to directories.
 Client module:
– It runs on each computer and provides an integrated service (flat file and directory) as a single API to application programs. For example, in UNIX hosts, a client module emulates the full set of UNIX file operations.
– It holds information about the network locations of the flat-file and directory server processes, and achieves better performance through implementation of a cache of recently used file blocks at the client.

Server operations/interfaces for the model file service

Flat file service
Read(FileId, i, n) -> Data      (i = position of first byte)
Write(FileId, i, Data)          (i = position of first byte)
Create() -> FileId
Delete(FileId)
GetAttributes(FileId) -> Attr
SetAttributes(FileId, Attr)

Directory service
Lookup(Dir, Name) -> FileId
AddName(Dir, Name, File)
UnName(Dir, Name)
GetNames(Dir, Pattern) -> NameSeq

FileId: a unique identifier for files anywhere in the network. Similar to the remote object references described in Section 4.3.3.

Pathname lookup: pathnames such as ‘/usr/bin/tar’ are resolved by iterative calls to Lookup(), one call for each component of the path, starting with the ID of the root directory ‘/’ which is known in every client.

File Group
A collection of files that can be located on any server or moved between servers while maintaining the same names.
– Similar to a UNIX filesystem
– Helps with distributing the load of file
serving between several servers.
– File groups have identifiers which are unique throughout the system (and hence for an open system, they must be globally unique).
 Used to refer to file groups and files.

File Group ID:
To construct a globally unique ID we use some unique attribute of the machine on which the file group is created, e.g. its IP address, even though the file group may move subsequently.

DFS: Case Studies
 NFS (Network File System)
– Developed by Sun Microsystems (in 1985)
– Most popular, open, and widely used.
– NFS protocol standardised through IETF (RFC 1813)
 AFS (Andrew File System)
– Developed by Carnegie Mellon University as part of the Andrew
distributed computing environment (in 1986)
– A research project to create a campus-wide file system.
– Public domain implementation is available on Linux (LinuxAFS)
– It was adopted as a basis for the DCE/DFS file system of the Open Software Foundation (OSF, www.opengroup.org) DCE (Distributed Computing Environment)

Case Study: Sun NFS
 An industry standard for file sharing on local networks since the 1980s
 An open standard with clear and simple interfaces
 Closely follows the abstract file service model defined above
 Supports many of the design requirements already mentioned:
– transparency
– heterogeneity
– efficiency
– fault tolerance
 Limited achievement of:
– concurrency
– replication
– consistency
– security

NFS – History
 1985: Original Version (in-house use)
 1989: NFSv2 (RFC 1094)
– Operated entirely over UDP
– Stateless protocol (the core)
– Support for 2GB files
 1995: NFSv3 (RFC 1813)
– Support for 64 bit (> 2GB files)
– Support for asynchronous writes
– Support for TCP
– Support for additional attributes
– Other improvements
 2000-2003: NFSv4 (RFC 3010, RFC 3530)
– Collaboration with IETF
– Sun hands over the development of NFS
 2010: NFSv4.1
– Adds Parallel NFS (pNFS) for parallel data access
– RFC 5661 – NFS Version 4.1 Protocol
– Unlike earlier versions, it supports traditional file access while integrating support for file locking and the MOUNT protocol. It makes NFS operate well in an Internet environment.

NFS architecture
Figure: on each client computer, application programs issue UNIX system calls to the kernel; the virtual file system layer routes operations on local files to the UNIX file system and operations on remote files to the NFS client module. The NFS client communicates with the NFS server on the server computer using the NFS protocol (remote operations); on the server, the virtual file system passes requests from the NFS server to the local UNIX file system.

NFS architecture:
does the implementation have to be in the system kernel?
 No:
– there are examples of NFS clients and servers that run at application level as libraries or processes (e.g. early Windows and MacOS implementations, current PocketPC, etc.)
 But, for a UNIX implementation there are advantages:
– Binary code compatible – no need to recompile applications
 Standard system calls that access remote files can be routed through the NFS client module by the kernel
– Shared cache of recently-used blocks at client
– Kernel-level server can access i-nodes and file blocks directly
 but a privileged (root) application program could do almost the same
– Security of the encryption key used for authentication

NFS server operations (simplified)
• read(fh, offset, count) -> attr, data
• write(fh, offset, count, data) -> attr
• create(dirfh, name, attr) -> newfh, attr
• remove(dirfh, name) -> status
• getattr(fh) -> attr
• setattr(fh, attr) -> attr
• lookup(dirfh, name) -> fh, attr
• rename(dirfh, name, todirfh, toname)
• link(newdirfh, newname, dirfh, name)
• readdir(dirfh, cookie, count) -> entries
• symlink(newdirfh, newname, string) -> status
• readlink(fh) -> string
• mkdir(dirfh, name, attr) -> newfh, attr
• rmdir(dirfh, name) -> status
• statfs(fh) -> fsstats
Model flat file service
Read(FileId, i, n) -> Data
Write(FileId, i, Data)
Create() -> FileId
Delete(FileId)
GetAttributes(FileId) -> Attr
SetAttributes(FileId, Attr)

NFS file handle: | Filesystem identifier | i-node number | i-node generation |

Model directory service
Lookup(Dir, Name) -> FileId
AddName(Dir, Name, File)
UnName(Dir, Name)
GetNames(Dir, Pattern) -> NameSeq

NFS access control and authentication
 Stateless server, so the user’s identity and access rights must be checked by the server on each request.
– In the local file system they are checked only on open()
 Every client request is accompanied by the userID and groupID
– which are inserted by the RPC system
 Server is exposed to imposter attacks unless the userID and groupID are protected by encryption
 Kerberos has been integrated with NFS to provide a stronger and more comprehensive security solution

Architecture Components (UNIX / Linux)
– nfsd: NFS server daemon that services requests from clients.
– mountd: NFS mount daemon that carries out the mount request passed on by nfsd.
– rpcbind: RPC port mapper used to locate the nfsd daemon.
– /etc/exports: configuration file that defines which portion of the file systems are exported through NFS and how.
– mount: standard file system mount command.
– /etc/fstab: file system table file.
– nfsiod: (optional) local asynchronous NFS I/O server.

Mount service
 Mount operation:
mount(remotehost, remotedirectory, localdirectory)
 Server maintains a table of clients who have mounted filesystems at that server
 Each client maintains a table of mounted file systems holding:
< IP address, port number, file handle>
 Hard versus soft mounts

Local and remote file systems accessible on an NFS client
Figure: the client's file tree includes remote mounts at /usr/students and /usr/staff from Server 1 and Server 2 respectively.
Note: The file system mounted at /usr/students in the client is actually the sub-tree located at /export/people in Server 1; the file system mounted at /usr/staff in the client is actually the sub-tree located at /nfs/users in Server 2.

Automounter
NFS client catches attempts to access ’empty’ mount
points and routes them to the Automounter
– Automounter has a table of mount points and multiple candidate servers for each
– it sends a probe message to each candidate server and then uses the mount service to mount the filesystem at the first server to respond
 Keeps the mount table small
 Provides a simple form of replication for read-only
filesystems
– E.g. if there are several servers with identical copies of /usr/lib then each server will have a chance of being mounted at some clients.

Kerberized NFS
 Kerberos protocol is too costly to apply on each file access request
 Kerberos is used in the mount service:
– to authenticate the user’s identity
– the user’s UserID and GroupID are stored at the server with the client’s IP address
 For each file request:
– The UserID and GroupID sent must match those stored at the server
– IP addresses must also match
 This approach has some problems
– can’t accommodate multiple users sharing the same client computer
– all remote file stores must be mounted each time a user logs in

New design approaches
 Distribute file data across several servers
– Exploits high-speed networks (InfiniBand, Gigabit Ethernet)
– Layered approach, lowest level is like a ’distributed virtual disk’
– Achieves scalability even for a single heavily-used file
 ‘Serverless’ architecture
– Exploits processing and disk resources in all available network nodes
– Service is distributed at the level of individual files
 xFS: experimental implementation demonstrated a substantial performance gain over NFS and AFS
 Peer-to-peer systems: Napster, OceanStore (UCB), Farsite (MSR), Publius (AT&T Research) – see the web for documentation on these systems
Cloud-based File Systems: DropBox

DropBox Cloud Storage Architecture
Figure: a Dropbox folder on each client device is automatically synchronized with the others through the cloud storage service.

Summary
 Distributed file systems provide the illusion of a local file system and hide complexity from end users.
 NFS is an excellent example of a distributed service designed to meet many important design requirements
 Effective client caching can produce file service performance equal to or better than local file systems
 Consistency versus update semantics versus fault tolerance remains an issue
 Most client and server failures can be masked
