
Last modified: Sun Oct 6 14:43:32 2019
CS61B Lecture #17


• Overview of standard Java Collections classes.
– Iterators, ListIterators
– Containers and maps in the abstract
• Amortized analysis of implementing lists with arrays.

Data Types in the Abstract
• Most of the time, should not worry about implementation of data structures, search, etc.
• What they do for us—their specification—is important.
• Java has several standard types (in java.util) to represent collections of objects
– Six interfaces:
∗ Collection: General collections of items.
∗ List: Indexed sequences with duplication
∗ Set, SortedSet: Collections without duplication
∗ Map, SortedMap: Dictionaries (key → value)
– Concrete classes that provide actual instances: LinkedList, ArrayList, HashSet, TreeSet.
– To make change easier, purists would use the concrete types only with new, and the interface types for parameter types, local variables, etc.
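For example, a minimal sketch (the class and variable names are invented for illustration):

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    class InterfaceTypesDemo {
        public static void main(String[] args) {
            // The concrete type appears only at the point of creation...
            List<String> words = new ArrayList<>();
            words.add("axolotl");

            // ...so switching representations later is a one-line change.
            List<String> sameWords = new LinkedList<>(words);
            System.out.println(sameWords); // [axolotl]
        }
    }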

Collection Structures in java.util
[Class diagram: the interfaces from the previous slide (Collection, List, Set, SortedSet, Map, SortedMap) together with concrete classes such as LinkedList, ArrayList, HashSet, TreeSet, and WeakHashMap, with edges showing which classes implement which interfaces.]

The Collection Interface
• Collection interface. Main functions promised:
– Membership tests: contains (∈), containsAll (⊆)
– Other queries: size, isEmpty
– Retrieval: iterator, toArray
– Optional modifiers: add, addAll, clear, remove, removeAll (set difference), retainAll (intersect)
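A small sketch of these operations (the example values are chosen arbitrarily):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collection;

    class CollectionDemo {
        public static void main(String[] args) {
            Collection<Integer> c = new ArrayList<>(Arrays.asList(1, 4, 9));

            System.out.println(c.contains(4));                       // true (membership)
            System.out.println(c.containsAll(Arrays.asList(1, 9)));  // true (subset test)
            System.out.println(c.size() + " " + c.isEmpty());        // 3 false

            for (int x : c) {              // implicitly uses c.iterator()
                System.out.print(x + " "); // 1 4 9
            }
            System.out.println();

            c.removeAll(Arrays.asList(1, 2));  // set difference: now [4, 9]
            c.retainAll(Arrays.asList(4, 16)); // intersection: now [4]
            System.out.println(c);             // [4]
        }
    }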

Side Trip about Library Design: Optional Operations
• Not all Collections need to be modifiable; often makes sense just to get things from them.
• So some operations are optional (add, addAll, clear, remove, removeAll, retainAll)
• The library developers decided to have all Collections implement these, but allowed implementations to throw an exception:
UnsupportedOperationException
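For instance, the read-only view returned by Collections.unmodifiableList supports all the queries but throws on the optional modifiers (a small sketch):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    class OptionalOpsDemo {
        public static void main(String[] args) {
            List<Integer> base = new ArrayList<>(Arrays.asList(1, 4, 9));
            List<Integer> view = Collections.unmodifiableList(base);

            System.out.println(view.contains(4)); // queries work: true
            try {
                view.add(16);                     // optional operation: refused
            } catch (UnsupportedOperationException e) {
                System.out.println("add is not supported on this Collection");
            }
        }
    }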
• An alternative design would have created separate interfaces:
interface Collection { contains, containsAll, size, iterator, ... }
interface Expandable extends Collection { add, addAll }
interface Shrinkable extends Collection { remove, removeAll, ... }
interface ModifiableCollection
    extends Collection, Expandable, Shrinkable { }
• You’d soon have lots of interfaces. Perhaps that’s why they didn’t do it that way.

The List Interface
• Extends Collection
• Intended to represent indexed sequences (generalized arrays)
• Adds new methods to those of Collection:
– Membership tests: indexOf, lastIndexOf.
– Retrieval: get(i), listIterator(), subList(B, E).
– Modifiers: add and addAll with additional index to say where to add. Likewise for removal operations. set operation to go with get.
• Type ListIterator extends Iterator:
– Adds previous and hasPrevious.
– add, remove, and set allow one to iterate through a list, inserting, removing, or changing as you go (see the sketch below).
– Important Question: What advantage is there to saying List L rather than LinkedList L or ArrayList L?
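A sketch of these List and ListIterator operations (values chosen arbitrarily):

    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;

    class ListDemo {
        public static void main(String[] args) {
            List<Integer> L = new LinkedList<>(Arrays.asList(1, 4, 9, 16));

            System.out.println(L.get(2));        // 9
            System.out.println(L.indexOf(16));   // 3
            L.add(1, 2);                         // insert at index 1: [1, 2, 4, 9, 16]
            L.set(0, 0);                         // replace index 0:  [0, 2, 4, 9, 16]
            System.out.println(L.subList(1, 3)); // [2, 4]

            // Walk the list, removing odd values and doubling the rest as we go.
            ListIterator<Integer> p = L.listIterator();
            while (p.hasNext()) {
                int x = p.next();
                if (x % 2 == 1) {
                    p.remove();
                } else {
                    p.set(2 * x);
                }
            }
            System.out.println(L); // [0, 4, 8, 32]
        }
    }

Note that nothing above depends on whether L is a LinkedList or an ArrayList—only the new expression would have to change.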

Implementing Lists (I): ArrayLists
• The main concrete types in Java library for interface List are ArrayList and LinkedList:
• As you might expect, an ArrayList, A, uses an array to hold data. For example, a list containing the three items 1, 4, and 9 might be represented like this:
• After adding four more items to A, its data array will be full, and the value of data will have to be replaced with a pointer to a new, bigger array that starts with a copy of its previous values.
• Question: For best performance, how big should this new array be?
• If we increase the size by 1 each time it gets full (or by any constant value), the cost of N additions will scale as Θ(N²), which makes ArrayList look much worse than LinkedList (which uses an IntList-like implementation).
[Diagram: A’s data field points to an array whose first three slots hold 1, 4, and 9, with unused slots after them; its count field is 3.]
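A stripped-down sketch of this representation (illustration only—not the real java.util.ArrayList; the field names and starting capacity are made up):

    /** A minimal ArrayList-like container of ints, for illustration. */
    class SimpleIntList {
        private int[] data = new int[4]; // storage; may be longer than the list
        private int count = 0;           // number of slots actually in use

        /** Append X to the end, growing DATA if it is full. */
        public void add(int x) {
            if (count == data.length) {
                // One possible growth policy (see the next slide): double the capacity.
                int[] bigger = new int[2 * data.length];
                System.arraycopy(data, 0, bigger, 0, count);
                data = bigger;
            }
            data[count] = x;
            count += 1;
        }

        public int get(int i) {
            return data[i];
        }

        public int size() {
            return count;
        }
    }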

Expanding Vectors Efficiently
• When using array for expanding sequence, best to double the size of array to grow it. Here’s why.
• If array is size s, doubling its size and moving s elements to the new array takes time proportional to 2s.
• In all cases, there is an additional Θ(1) cost for each addition to account for actually assigning the new value into the array.
• When you add up these costs for inserting a sequence of N items, the total cost turns out to be proportional to N, as if each addition took constant time, even though some of the additions actually take time proportional to N all by themselves!
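As a concrete check (assuming for simplicity that we start from a one-element array and that N is a power of two): copying happens only at the doublings, when the array holds 1, 2, 4, ..., N/2 items, so the total number of items moved is

    1 + 2 + 4 + ··· + N/2 = N − 1 < N.

Adding the N unit-cost assignments gives fewer than 2N units of work in all—about 2 units per insertion, i.e., Θ(N) for the whole sequence.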

Amortized Time
• Suppose that the actual costs of a sequence of N operations are c_0, c_1, ..., c_{N−1}, which may differ from each other by arbitrary amounts and where c_i ∈ O(f(i)).
• Consider another sequence a_0, a_1, ..., a_{N−1}, where a_i ∈ O(g(i)). If

        Σ_{0 ≤ i ≤ k} a_i  ≥  Σ_{0 ≤ i ≤ k} c_i    for all k < N,

we say that the operations all run in O(g(i)) amortized time.
• That is, the actual cost of a given operation, c_i, may be arbitrarily larger than the amortized time, a_i, as long as the total amortized time is always greater than or equal to the total actual time, no matter where the sequence of operations stops—i.e., no matter what k is.
• In cases of interest, the amortized time bounds are much less than the actual individual time bounds: g(i) ≪ f(i).
• E.g., for the case of insertion with array doubling, f(i) ∈ O(N) and g(i) ∈ O(1).
• One bookkeeping device: a potential, Φ_i ≥ 0, representing time saved up so far, with Φ_0 = 0 and Φ_{i+1} = Φ_i + (a_i − c_i).
• On cheap operations, we typically choose a_i > c_i so that we can increase Φ (Φ_{i+1} > Φ_i).
• On expensive ones, we typically have a_i ≪ c_i and greatly decrease Φ (but don’t let it go negative—it may not be “overdrawn”).
• We try to do all this so that a_i remains as we desired (e.g., O(1) for the expanding array), without allowing Φ_i < 0.
• Requires that we choose a_i so that Φ_i always stays ahead of c_i.

Application to Expanding Arrays
• When adding to our array, the cost, c_i, of adding element #i when the array already has space for it is 1 unit.
• The array does not initially have space when adding items 1, 2, 4, 8, 16, ...—in other words at item 2^n for all n ≥ 0. So,
– c_i = 1 if i ≥ 0 and i is not a power of 2; and
– c_i = 2i + 1 when i is a power of 2 (copy i items, clear another i items, and then add item #i).
• So on each operation #2^n we’re going to need to have saved up at least 2·2^n = 2^(n+1) units of potential to cover the expense of expanding the array, and we have this operation and the preceding 2^(n−1) − 1 operations in which to save up this much potential (everything since the preceding doubling operation).
• So choose a_0 = 1 and a_i = 5 for i > 0. Apply Φ_{i+1} = Φ_i + (a_i − c_i), and here is what happens:

     i  |  0  1  2  3  4  5  6   7   8  9 10 11 12 13 14 15  16 17
    c_i |  1  3  5  1  9  1  1   1  17  1  1  1  1  1  1  1  33  1
    a_i |  1  5  5  5  5  5  5   5   5  5  5  5  5  5  5  5   5  5
    Φ_i |  0  0  2  2  6  2  6  10  14  2  6 10 14 18 22 26  30  2
Pretending each cost is 5 never underestimates true cumulative time.
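As a sanity check, here is a tiny sketch that recomputes the table above and verifies that the potential never goes negative (the constants match the slide’s choice a_0 = 1, a_i = 5):

    /** Recompute the c_i, a_i, and Phi_i columns above; run with java -ea to enable the assertion. */
    class AmortizedCheck {
        public static void main(String[] args) {
            int phi = 0; // Phi_0 = 0
            for (int i = 0; i < 18; i += 1) {
                int c = isPowerOfTwo(i) ? 2 * i + 1 : 1; // actual cost of operation i
                int a = (i == 0) ? 1 : 5;                // amortized cost of operation i
                System.out.printf("i=%2d  c=%2d  a=%d  phi=%2d%n", i, c, a, phi);
                phi += a - c;                            // Phi_{i+1} = Phi_i + (a_i - c_i)
                assert phi >= 0 : "potential went negative after operation " + i;
            }
        }

        /** True iff K is a positive power of two (1, 2, 4, 8, ...). */
        static boolean isPowerOfTwo(int k) {
            return k > 0 && (k & (k - 1)) == 0;
        }
    }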
