
Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications
Ion Stoica†, Robert Morris‡, David Liben-Nowell‡, David R. Karger‡, M. Frans Kaashoek‡, Frank Dabek‡, Hari Balakrishnan‡
†University of California, Berkeley. ‡MIT Laboratory for Computer Science, {rtm, dln, karger, kaashoek, fdabek, …}
Authors in reverse alphabetical order. This research was sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Space and Naval Warfare Systems Center, San Diego, under contract N66001-00-1-8933.
Abstract—A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
I. INTRODUCTION


Peer-to-peer systems and applications are distributed systems without any centralized control or hierarchical organization, in which each node runs software with equivalent functionality. A review of the features of recent peer-to-peer applications yields a long list: redundant storage, permanence, selection of nearby servers, anonymity, search, authentication, and hierarchical naming. Despite this rich set of features, the core operation in most peer-to-peer systems is efficient location of data items. The contribution of this paper is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.
The Chord protocol supports just one operation: given a key, it maps the key onto a node. Depending on the application using Chord, that node might be responsible for storing a value associated with the key. Chord uses consistent hashing [12] to assign keys to Chord nodes. Consistent hashing tends to balance load, since each node receives roughly the same number of keys, and requires relatively little movement of keys when nodes join and leave the system.
Previous work on consistent hashing assumes that each node is aware of most of the other nodes in the system, an approach that does not scale well to large numbers of nodes. In contrast, each Chord node needs “routing” information about only a few other nodes. Because the routing table is distributed, a Chord node communicates with other nodes in order to perform a lookup. In the steady state, in an N-node system, each node maintains information about only O(log N) other nodes, and resolves all lookups via O(log N) messages to other nodes. Chord maintains its routing information as nodes join and leave the system.
A Chord node requires information about O(log N) other nodes for efficient routing, but performance degrades gracefully when that information is out of date. This is important in practice because nodes will join and leave arbitrarily, and consistency of even O(log N) state may be hard to maintain. Only one piece of information per node need be correct in order for Chord to guarantee correct (though possibly slow) routing of queries; Chord has a simple algorithm for maintaining this information in a dynamic environment.
The contributions of this paper are the Chord algorithm, the proof of its correctness, and simulation results demonstrating the strength of the algorithm. We also report some initial results on how the Chord routing protocol can be extended to take into account the physical network topology. Readers interested in an application of Chord and how Chord behaves on a small Internet testbed are referred to Dabek et al. [9]. The results reported by Dabek et al. are consistent with the simulation results presented in this paper.
The rest of this paper is structured as follows. Section II compares Chord to related work. Section III presents the system model that motivates the Chord protocol. Section IV presents the Chord protocol and proves several of its properties. Section V presents simulations supporting our claims about Chord’s performance. Finally, we summarize our contributions in Section VII.
II. RELATED WORK
Three features that distinguish Chord from many other peer-to-peer lookup protocols are its simplicity, provable correctness, and provable performance.
To clarify comparisons with related work, we will assume in this section a Chord-based application that maps keys onto values. A value can be an address, a document, or an arbitrary data item. A Chord-based application would store and find each value at the node to which the value’s key maps.
DNS provides a lookup service, with host names as keys and IP addresses (and other host information) as values. Chord could provide the same service by hashing each host name to a key [7]. Chord-based DNS would require no special servers, while ordinary DNS relies on a set of special root servers. DNS requires manual management of the routing information (NS records) that allows clients to navigate the name server hierarchy; Chord automatically maintains the correctness of the analogous routing information. DNS only works well when host names are structured to reflect administrative boundaries; Chord imposes no naming structure. DNS is specialized to the task of finding named hosts or services, while Chord can also be used to find data objects that are not tied to particular machines.
The Freenet peer-to-peer storage system [5], [6], like Chord, is decentralized and symmetric and automatically adapts when hosts leave and join. Freenet does not assign responsibility for documents to specific servers; instead, its lookups take the form of searches for cached copies. This allows Freenet to provide a degree of anonymity, but prevents it from guaranteeing retrieval of existing documents or from providing low bounds on retrieval costs. Chord does not provide anonymity, but its lookup operation runs in predictable time and always results in success or definitive failure.
The Ohaha system uses a consistent hashing-like algorithm to map documents to nodes, and Freenet-style query routing [20]. As a result, it shares some of the weaknesses of Freenet. Archival Intermemory uses an off-line computed tree to map logical addresses to machines that store the data [4].
The Globe system [2] has a wide-area location service to map object identifiers to the locations of moving objects. Globe arranges the Internet as a hierarchy of geographical, topological, or administrative domains, effectively constructing a static world-wide search tree, much like DNS. Information about an object is stored in a particular leaf domain, and pointer caches provide search shortcuts [25]. The Globe system handles high load on the logical root by partitioning objects among multiple physical root servers using hash-like techniques. Chord performs this hash function well enough that it can achieve scalability without also involving any hierarchy, though Chord does not exploit network locality as well as Globe.
The distributed data location protocol developed by Plaxton et al. [21] is perhaps the closest algorithm to the Chord protocol. The Tapestry lookup protocol [26], used in OceanStore [13], is a variant of the Plaxton algorithm. Like Chord, it guarantees that queries make no more than a logarithmic number of hops and that keys are well-balanced. The Plaxton protocol’s main advantage over Chord is that it ensures, subject to assumptions about network topology, that queries never travel further in network distance than the node where the key is stored. Chord, on the other hand, is substantially less complicated and handles concurrent node joins and failures well. Pastry [23] is a prefix-based lookup protocol that has properties similar to Chord. Like Tapestry, Pastry takes into account network topology to reduce the routing latency. However, Pastry achieves this at the cost of a more elaborate join protocol which initializes the routing table of the new node by using the information from nodes along the path traversed by the join message.
CAN uses a d-dimensional Cartesian coordinate space (for some fixed d) to implement a distributed hash table that maps keys onto values [22]. Each node maintains O(d) state, and the lookup cost is O(dN^(1/d)). Thus, in contrast to Chord, the state maintained by a CAN node does not depend on the network size N, but the lookup cost increases faster than log N. If d = log N, CAN lookup times and storage needs match Chord’s. However, CAN is not designed to vary d as N (and thus log N) varies, so this match will only occur for the “right” N corresponding to the fixed d. CAN requires an additional maintenance protocol to periodically remap the identifier space onto nodes. Chord also has the advantage that its correctness is robust in the face of partially incorrect routing information.
Chord’s routing procedure may be thought of as a one-dimensional analogue of the Grid location system (GLS) [15]. GLS relies on real-world geographic location information to route its queries; Chord maps its nodes to an artificial one-dimensional space within which routing is carried out by an algorithm similar to Grid’s.
Napster [18] and Gnutella [11] provide a lookup operation to find data in a distributed set of peers. They search based on user-supplied keywords, while Chord looks up data with unique identifiers. Use of keyword search presents difficulties in both systems. Napster uses a central index, resulting in a single point of failure. Gnutella floods each query over the whole system, so its communication and processing costs are high in large systems.
Chord has been used as a basis for a number of subsequent research projects. The Chord File System (CFS) stores files and meta-data in a peer-to-peer system, using Chord to locate storage blocks [9]. New analysis techniques have shown that Chord’s stabilization algorithms (with minor modifications) maintain good lookup performance despite continuous failure and joining of nodes [16]. Chord has been evaluated as a tool to serve DNS [7] and to maintain a distributed public key database for secure name resolution [1].
III. SYSTEM MODEL
Chord simplifies the design of peer-to-peer systems and applications based on it by addressing these difficult problems:
Load balance: Chord acts as a distributed hash function, spreading keys evenly over the nodes; this provides a degree of natural load balance.
Decentralization: Chord is fully distributed: no node is more important than any other. This improves robustness and makes Chord appropriate for loosely-organized peer-to-peer applications.
Scalability: The cost of a Chord lookup grows as the log of the number of nodes, so even very large systems are feasible. No parameter tuning is required to achieve this scaling.
Availability: Chord automatically adjusts its internal tables to reflect newly joined nodes as well as node failures, ensuring that, barring major failures in the underlying network, the node responsible for a key can always be found. This is true even if the system is in a continuous state of change.
Flexible naming: Chord places no constraints on the structure of the keys it looks up: the Chord key-space is flat. This gives applications a large amount of flexibility in how they map their own names to Chord keys.
The Chord software takes the form of a library to be linked with the applications that use it. The application interacts with Chord in two main ways. First, the Chord library provides a lookup(key) function that yields the IP address of the node responsible for the key. Second, the Chord software on each node notifies the application of changes in the set of keys that the node is responsible for. This allows the application software to, for example, move corresponding values to their new homes when a new node joins.
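A minimal sketch of these two interaction points, using hypothetical Python names for the library interface (the paper does not specify them):

```python
# Hypothetical sketch of an application using the Chord library interface
# described above: a lookup(key) call plus a callback invoked when the set
# of keys this node is responsible for changes. All names are illustrative.

class ChordLibrary:
    """Stub standing in for the Chord library linked into the application."""

    def __init__(self, on_keys_changed):
        # Chord invokes this callback when responsibility for keys changes,
        # e.g. because a node joined or failed.
        self.on_keys_changed = on_keys_changed

    def lookup(self, key: int) -> str:
        # The real protocol resolves this with O(log N) messages to other
        # nodes; this stub pretends one node is responsible for all keys.
        return "10.0.0.1"


def keys_changed(gained: set, lost: set) -> None:
    # Application-level reaction: ship values for lost keys to their new
    # home, and prepare to receive values for newly gained keys.
    for key in lost:
        print(f"transfer value for key {key} to its new successor")


chord = ChordLibrary(on_keys_changed=keys_changed)
addr = chord.lookup(42)  # IP address of the node responsible for key 42
print(f"read or write the value for key 42 at {addr}")
```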
Fig. 1. Structure of an example Chord-based distributed storage system: a file-system layer on one host built above block stores running on several Chord nodes.

The application using Chord is responsible for providing any desired authentication, caching, replication, and user-friendly naming of data. Chord’s flat key-space eases the implementation of these features. For example, an application could authenticate data by storing it under a Chord key derived from a cryptographic hash of the data. Similarly, an application could replicate data by storing it under two distinct Chord keys derived from the data’s application-level identifier.
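As an illustration of both ideas, here is a sketch of key derivation for self-authenticating and replicated storage; the use of SHA-1 and the particular derivation strings are assumptions made for the example, not something this section prescribes:

```python
import hashlib

M = 160  # identifier bits; SHA-1-sized, matching the hash used later for IDs

def chord_key(data: bytes) -> int:
    """Map arbitrary bytes to an m-bit Chord key (illustrative)."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big") % (2 ** M)

# Authentication: store a block under the hash of its content, so a reader
# can verify what it fetched by re-hashing it and comparing to the key.
block = b"application data"
auth_key = chord_key(block)

# Replication: derive two distinct keys from the same application-level
# identifier and store one copy of the data under each.
name = b"/photos/2001/chord.pdf"
replica_keys = [chord_key(name + b"#replica" + bytes([i])) for i in range(2)]
```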
The following are examples of applications for which Chord can provide a good foundation:
Cooperative mirroring, in which multiple providers of content cooperate to store and serve each others’ data. The participants might, for example, be a set of software development projects, each of which makes periodic releases. Spreading the total load evenly over all participants’ hosts lowers the total cost of the system, since each participant need provide capacity only for the average load, not for that participant’s peak load. Dabek et al. describe a realization of this idea that uses Chord to map data blocks onto servers; the application interacts with Chord to achieve load balance, data replication, and latency-based server selection [9].
Time-shared storage for nodes with intermittent connectivity. If someone wishes their data to be always available, but their server is only occasionally available, they can offer to store others’ data while they are connected, in return for having their data stored elsewhere when they are disconnected. The data’s name can serve as a key to identify the (live) Chord node responsible for storing the data item at any given time. Many of the same issues arise as in the cooperative mirroring application, though the focus here is on availability rather than load balance.
Distributed indexes to support Gnutella- or Napster-like keyword search. A key in this application could be derived from the desired keywords, while values could be lists of machines offering documents with those keywords.
Large-scale combinatorial search, such as code breaking. In this case keys are candidate solutions to the problem (such as cryptographic keys); Chord maps these keys to the machines responsible for testing them as solutions.
We have built several peer-to-peer applications using Chord. The structure of a typical application is shown in Figure 1. The highest layer implements application-specific functions such as file-system meta-data. The next layer implements a general-purpose distributed hash table that multiple applications use to insert and retrieve data blocks identified with unique keys. The distributed hash table takes care of storing, caching, and replication of blocks. The distributed hash table uses Chord to identify the node responsible for storing a block, and then communicates with the block storage server on that node to read or write the block.
IV. THE CHORD PROTOCOL
This section describes the Chord protocol. The Chord protocol specifies how to find the locations of keys, how new nodes join the system, and how to recover from the failure (or planned departure) of existing nodes. In this paper we assume that communication in the underlying network is both symmetric (if A can route to B, then B can route to A), and transitive (if A can route to B and B can route to C, then A can route to C).
A. Overview
At its heart, Chord provides fast distributed computation of a hash function mapping keys to nodes responsible for them. Chord assigns keys to nodes with consistent hashing [12], [14], which has several desirable properties. With high probability the hash function balances load (all nodes receive roughly the same number of keys). Also with high probability, when an Nth node joins (or leaves) the network, only an O(1/N) fraction of the keys are moved to a different location—this is clearly the minimum necessary to maintain a balanced load.
Chord improves the scalability of consistent hashing by avoiding the requirement that every node know about every other node. A Chord node needs only a small amount of “routing” information about other nodes. Because this information is distributed, a node resolves the hash function by communicating with other nodes. In an N-node network, each node maintains information about only O(log N) other nodes, and a lookup requires O(log N) messages.
B. Consistent Hashing
The consistent hash function assigns each node and key an m-bit identifier using SHA-1 [10] as a base hash function. A node’s identifier is chosen by hashing the node’s IP address, while a key identifier is produced by hashing the key. We will use the term “key” to refer to both the original key and its image under the hash function, as the meaning will be clear from context. Similarly, the term “node” will refer to both the node and its identifier under the hash function. The identifier length m must be large enough to make the probability of two nodes or keys hashing to the same identifier negligible.
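A sketch of this identifier assignment; reducing the SHA-1 digest modulo 2^m is one plausible reading of “base hash function,” and the inputs below are illustrative:

```python
import hashlib

M = 160  # identifier bits; with SHA-1 the full 160-bit digest can be used

def identifier(value: bytes) -> int:
    """Hash a node's IP address or a key to an m-bit identifier."""
    return int.from_bytes(hashlib.sha1(value).digest(), "big") % (2 ** M)

node_id = identifier(b"192.0.2.17")        # node identifier from its IP address
key_id = identifier(b"my-document-name")   # key identifier from the key itself
```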
Consistent hashing assigns keys to nodes as follows. Identifiers are ordered on an identifier circle modulo 2^m. Key k is assigned to the first node whose identifier is equal to or follows (the identifier of) k in the identifier space. This node is called the successor node of key k, denoted by successor(k). If identifiers are represented as a circle of numbers from 0 to 2^m − 1, then successor(k) is the first node clockwise from k. In the remainder of this paper, we will also refer to the identifier circle as the Chord ring.
Figure 2 shows a Chord ring with m = 6. The Chord ring has 10 nodes and stores five keys. The successor of identifier 10 is node 14, so key 10 would be located at node 14. Similarly, keys 24 and 30 would be located at node 32, key 38 at node 38, and key 54 at node 56.

Fig. 2. An identifier circle (ring) consisting of 10 nodes storing five keys.
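The placements in this example can be reproduced with a small sketch; the full list of node identifiers below is our reading of Figure 2 (the text names only some of them), so treat it as illustrative:

```python
import bisect

M = 6                      # identifier bits in the example
RING = 2 ** M              # identifier space 0..63
# Node identifiers as depicted in Figure 2 (illustrative reading of the figure).
nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]

def successor(k: int) -> int:
    """First node whose identifier is equal to or follows k on the circle."""
    i = bisect.bisect_left(nodes, k % RING)
    return nodes[i] if i < len(nodes) else nodes[0]  # wrap past 2^m - 1 to the start

for key in (10, 24, 30, 38, 54):
    print(key, "->", successor(key))
# 10 -> 14, 24 -> 32, 30 -> 32, 38 -> 38, 54 -> 56, matching the text above
```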
Consistent hashing is designed to let nodes enter and leave the network with minimal disruption. To maintain the consistent hashing mapping when a node n joins the network, certain keys previously assigned to n’s successor now become assigned to n. When node n leaves the network, all of its assigned keys are reassigned to n’s successor. No other changes in assignment of keys to nodes need occur. In the example above, if a node were to join with identifier 26, it would capture the key with identifier 24 from the node with identifier 32.
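Continuing the same illustrative ring, a sketch of the reassignment when a node with identifier 26 joins: only key 24 changes hands, moving from node 32 to node 26, as stated above.

```python
import bisect

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]  # illustrative Figure 2 ring
keys = [10, 24, 30, 38, 54]

def successor(ids, k):
    # First node identifier equal to or following k (with wrap-around).
    i = bisect.bisect_left(ids, k)
    return ids[i] if i < len(ids) else ids[0]

before = {k: successor(nodes, k) for k in keys}
bisect.insort(nodes, 26)                        # node 26 joins the ring
after = {k: successor(nodes, k) for k in keys}

moved = {k: (before[k], after[k]) for k in keys if before[k] != after[k]}
print(moved)  # {24: (32, 26)}: node 26 captures key 24 from node 32
```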
The following results are proven in the papers that introduced consistent hashing [12], [14]:
Theorem IV.1: For any set of N nodes and K keys, with high probability:
1. Each node is responsible for at most (1 + ε)K/N keys.
2. When an (N + 1)st node joins or leaves the network, responsibility for O(K/N) keys changes hands (and only to or from the joining or leaving node).
When consistent hashing is implemented as described above, the theorem proves a bound of ε = O(log N). The consistent hashing paper shows that ε can be reduced to an arbitrarily small constant by having each node run Ω(log N) virtual nodes, each with its own identifier. In the remainder of this paper, we will analyze all bounds in terms of work per virtual node. Thus, if each real node runs v virtual nodes, all bounds should be multiplied by v.
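A sketch of the virtual-node idea; deriving each virtual identifier by hashing the node's address together with a virtual-node index is an assumption made for the example, not a detail fixed by the theorem:

```python
import hashlib
import math

M = 160  # SHA-1-sized identifiers

def identifier(value: bytes) -> int:
    return int.from_bytes(hashlib.sha1(value).digest(), "big") % (2 ** M)

def virtual_identifiers(ip: str, n_estimate: int) -> list:
    """Give a real node on the order of log N virtual nodes, each with its own identifier."""
    v = max(1, math.ceil(math.log2(n_estimate)))
    return [identifier(f"{ip}#{i}".encode()) for i in range(v)]

ids = virtual_identifiers("192.0.2.17", n_estimate=1000)  # roughly 10 virtual nodes
```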
The phrase “with high probability” bears some discussion. A simple interpretation is that the nodes and keys are randomly chosen, which is plausible in a non-adversarial model of the world.
