
CS61B
Lecture 30: Sorting II: Quicksort
• Insertion Sort (Continued)
• Backstory, Partitioning
• Quicksort
• Quicksort Performance
• Caveats and Conclusion
datastructur.es

Insertion Sort Runtime: http://yellkey.com/process
What is the runtime of insertion sort?
A. Ω(1), O(N)   B. Ω(N), O(N)   C. Ω(1), O(N²)   D. Ω(N), O(N²)   E. Ω(N²), O(N²)
[Demo: insertion sort performing 36 swaps]

Insertion Sort Runtime
What is the runtime of insertion sort?
A. Ω(1), O(N)   B. Ω(N), O(N)   C. Ω(1), O(N²)   D. Ω(N), O(N²)   E. Ω(N²), O(N²)
You may recall that Ω is not "best case".
So technically you could also say Ω(1).
[Demo: insertion sort performing 36 swaps]

Picking the Best Sort: http://yellkey.com/standard
Suppose we do the following:
• Read 1,000,000 integers from a file into an array of length 1,000,000.
• Mergesort these integers.
• Select one integer randomly and change it.
• Sort using algorithm X of your choice.
Which sorting algorithm would be the fastest choice for X?
A. Selection Sort: O(N²)
B. Heapsort: O(N log N)
C. Mergesort: O(N log N)
D. Insertion Sort: O(N²)

Observation: Insertion Sort on Almost Sorted Arrays
For arrays that are almost sorted, insertion sort does very little work.
• Left array: 5 inversions, so only 5 swaps.
• Right array: 3 inversions, so only 3 swaps.
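The inversion/swap correspondence above can be checked directly. Below is a minimal sketch (the class and method names are mine, not from the lecture) of insertion sort that counts its swaps; each swap removes exactly one inversion, so the count equals the number of inversions in the input.

```java
// Sketch (names are mine, not the lecture's): insertion sort that counts
// its swaps. Each swap removes exactly one inversion, so the swap count
// equals the number of inversions in the input array.
public class InsertionSortDemo {
    /** Sorts a in place and returns the number of swaps performed. */
    public static int sortCountingSwaps(int[] a) {
        int swaps = 0;
        for (int i = 1; i < a.length; i++) {
            // Swim a[i] leftward until its left neighbor is no larger.
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int tmp = a[j];
                a[j] = a[j - 1];
                a[j - 1] = tmp;
                swaps++;
            }
        }
        return swaps;
    }
}
```

For an array with K inversions this performs exactly K swaps, matching the Θ(N + K) runtime claim made later in the lecture.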

Picking the Best Sort (Poll Everywhere)
Suppose we do the following:
• Read 1,000,000 integers from a file into an array of length 1,000,000.
• Mergesort these integers.
• Select one integer randomly and change it.
• Sort using algorithm X of your choice.
• In the worst case, we have 999,999 inversions: Θ(N) inversions.
Which sorting algorithm would be the fastest choice for X? Worst-case runtimes:
A. Selection Sort: Θ(N²)
B. Heapsort: Θ(N log N)
C. Mergesort: Θ(N log N)
D. Insertion Sort: Θ(N)

Insertion Sort Sweet Spots
On arrays with a small number of inversions, insertion sort is extremely fast.
• One exchange per inversion (and the number of comparisons is similar). Runtime is Θ(N + K), where K is the number of inversions.
• Define an almost sorted array as one in which the number of inversions ≤ cN for some c. Insertion sort is excellent on these arrays.
Less obvious: For small arrays (N < 15 or so), insertion sort is fastest.
• More of an empirical fact than a theoretical one.
• Theoretical analysis is beyond the scope of the course.
• Rough idea: Divide-and-conquer algorithms like heapsort / mergesort spend too much time dividing, but insertion sort goes straight to the conquest.
• The Java implementation of Mergesort does this (Link).

Sorts So Far

                            Best Case Runtime   Worst Case Runtime   Space   Demo   Notes
Selection Sort              Θ(N²)               Θ(N²)                Θ(1)    Link
Heapsort (in place)         Θ(N)*               Θ(N log N)           Θ(1)    Link   Bad cache (61C) performance.
Mergesort                   Θ(N log N)          Θ(N log N)           Θ(N)    Link   Fastest of these.
Insertion Sort (in place)   Θ(N)                Θ(N²)                Θ(1)    Link   Best for small N or almost sorted.

Backstory, Partitioning

Sorting So Far
Core ideas:
• Selection sort: Find the smallest item and put it at the front.
  ○ Heapsort variant: Use a MaxPQ to find the max element and put it at the back.
• Merge sort: Merge two sorted halves into one sorted whole.
• Insertion sort: Figure out where to insert the current item.
Quicksort:
• Much stranger core idea: Partitioning.
• Invented by Sir Tony Hoare in 1960, at the time a novice programmer.

Context for Quicksort's Invention (Source)
1960: Tony Hoare was working on a crude automated translation program for Russian and English.
"The cat wore a beautiful hat." (N words)  →  "Кошка носил красивая шапка."
[Diagram: dictionary of D English words: ... beautiful → красивая ... cat → кошка ...]
How would you do this?
• Binary search for each word.
  ○ Find "the" in log D time.
  ○ Find "cat" in log D time...
• Total time: N log D

Context for Quicksort's Invention (Source)
1960: Tony Hoare was working on a crude automated translation program for Russian and English.
Algorithm: N binary searches of a D-length dictionary.
• Total runtime: N log D
• ASSUMES log-time binary search!
Limitation at the time:
• Dictionary stored on a long piece of tape; the sentence is an array in RAM.
  ○ Binary search of tape is not log time (requires physical movement!).
• Better: Sort the sentence and scan the dictionary tape once. Takes N log N + D time.
  ○ But Tony had to figure out how to sort an array (without Google!)...

The Core Idea of Tony's Sort: Partitioning [no yellkey]
To partition an array a[] on element x = a[i] is to rearrange a[] so that:
• x moves to position j (may be the same as i)
• All entries to the left of x are <= x.
• All entries to the right of x are >= x.
x is called the 'pivot'.
[Diagram: the array 5 550 10 4 10 9 330, with i pointing at the pivot, and four candidate rearrangements (j marks where the pivot landed):
A. 4 5 9 10 10 330 550
B. 5 9 10 4 10 550 330
C. 5 4 9 10 10 550 330
D. 5 9 10 4 10 550 330]
Which partitions are valid?
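The definition above can be made concrete with a small validity checker. This is a sketch I am adding (the names `PartitionCheck` and `isValidPartition` are hypothetical, not from the lecture): it tries each occurrence of the pivot value x as position j and tests whether everything to its left is <= x and everything to its right is >= x.

```java
// Sketch (names are mine, not the lecture's): a checker for the partition
// definition. candidate is a valid partition on pivot value x if some
// occurrence of x sits at an index j with all entries to its left <= x
// and all entries to its right >= x.
public class PartitionCheck {
    public static boolean isValidPartition(int[] candidate, int x) {
        for (int j = 0; j < candidate.length; j++) {
            if (candidate[j] != x) {
                continue;
            }
            boolean ok = true;
            for (int k = 0; k < j; k++) {
                ok = ok && candidate[k] <= x;   // left side must be <= x
            }
            for (int k = j + 1; k < candidate.length; k++) {
                ok = ok && candidate[k] >= x;   // right side must be >= x
            }
            if (ok) {
                return true;                    // x works at position j
            }
        }
        return false;
    }
}
```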


Interview Question (Partitioning)
Given an array of colors where the 0th element is white, and the remaining elements are red (less) or blue (greater), rearrange the array so that all red squares are to the left of the white square, and all blue squares are to the right. Your algorithm must complete in Θ(N) time (no space restriction).
• Relative order of reds and blues does NOT need to stay the same.
Input: 6 8 3 1 2 7 4
Example of a valid output: 3 1 2 4 6 8 7
Another example of a valid output: 3 4 1 2 6 7 8

Interview Question, Student Answer #1
Input: 6 8 3 1 2 7 4
Output: 3 1 2 4 6 8 7
Scan from the right edge of the list. If anything is smaller, stick it in the leftmost position. Skip larger (blue) items.
• Very natural use for a double-ended queue.
• Maybe I'll replace palindrome with this next semester.

Interview Question, Student Answer #2
Input: 6 8 3 1 2 7 4
Insert 6 into a BST. Then 8, then 3, then …, then 4.
• All the small items are on the left.
• All the large items are on the right.
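Here is a sketch of how the BST idea above might look in Java (class and method names are mine, not from the lecture). The first item inserted becomes the root, so every smaller item lands in its left subtree and every larger one in its right subtree; an in-order traversal then reads out a valid partition (in fact, a fully sorted array).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the BST answer (names are mine). a[0] becomes the root, so
// smaller items fill the left subtree and larger items fill the right;
// an in-order traversal yields a valid partition (indeed a sorted array).
public class BstPartition {
    private static class Node {
        int key;
        Node left;
        Node right;
        Node(int k) {
            key = k;
        }
    }

    public static int[] partitionViaBst(int[] a) {
        Node root = null;
        for (int x : a) {
            root = insert(root, x);   // a[0] becomes the root / pivot
        }
        List<Integer> out = new ArrayList<>();
        inorder(root, out);
        int[] result = new int[out.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = out.get(i);
        }
        return result;
    }

    private static Node insert(Node t, int k) {
        if (t == null) {
            return new Node(k);
        }
        if (k < t.key) {
            t.left = insert(t.left, k);
        } else {
            t.right = insert(t.right, k);
        }
        return t;
    }

    private static void inorder(Node t, List<Integer> out) {
        if (t == null) {
            return;
        }
        inorder(t.left, out);
        out.add(t.key);
        inorder(t.right, out);
    }
}
```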

Simplest (but not fastest) Answer: 3 Scan Approach
Given an array of colors where the 0th element is white, and the remaining elements are red (less) or blue (greater), rearrange the array so that all red squares are to the left of the white square, and all blue squares are to the right. Your algorithm must complete in Θ(N) time (no space restriction).
• Relative order of reds and blues does NOT need to stay the same.
Algorithm: Create another array. Scan and copy all the red items to the first R spaces. Then scan for and copy the white item. Then scan and copy the blue items to the last B spaces.
Input: 6 8 3 1 2 7 4
Output: 3 1 2 4 6 8 7
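The 3-scan approach can be sketched in Java as follows (the class name is my own). One scan copies the smaller ("red") items into a fresh array, then the pivot ("white") is placed, then a final scan copies the larger ("blue") items. Relative order within each group is preserved, and the total work is three Θ(N) passes.

```java
// Sketch of the 3-scan partition (class name is mine, not the lecture's).
// Partitions around a[0] using a second array: smaller items first, then
// the pivot, then larger items; relative order within groups is preserved.
public class ThreeScanPartition {
    public static int[] partition(int[] a) {
        int pivot = a[0];
        int[] out = new int[a.length];
        int k = 0;
        for (int i = 1; i < a.length; i++) {   // scan 1: items < pivot
            if (a[i] < pivot) {
                out[k++] = a[i];
            }
        }
        out[k++] = pivot;                      // scan 2: the pivot itself
        for (int i = 1; i < a.length; i++) {   // scan 3: items >= pivot
            if (a[i] >= pivot) {
                out[k++] = a[i];
            }
        }
        return out;
    }
}
```

On the slide's input 6 8 3 1 2 7 4 this produces 3 1 2 4 6 8 7, the first valid output shown earlier.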

Quicksort

Partition Sort, a.k.a. Quicksort
5 3 2 1 7 8 4 6

Q: How would we use this operation for sorting?

Partitioning on 5:
3 2 1 4 5 7 8 6

Observations:
• 5 is "in its place." Exactly where it'd be if the array were sorted.
• Can sort the two halves separately, e.g. through recursive use of partitioning:
  3 2 1 4 | 5 | 7 8 6
  2 1 3 4 | 5 | 6 7 8
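The recursive idea above can be sketched directly. This is my own illustration (`SimpleQuicksort`, not the lecture's demo code): partition on the leftmost item into "smaller" and "larger" arrays, recursively sort each side, then reassemble with the pivot in its place. It is out-of-place for clarity; real implementations partition in place.

```java
// Sketch (my own SimpleQuicksort, not the lecture demo): partition on the
// leftmost item into smaller/larger arrays, recursively sort each side,
// then glue them back around the pivot, which is now "in its place".
public class SimpleQuicksort {
    public static int[] sort(int[] a) {
        if (a.length <= 1) {
            return a;
        }
        int pivot = a[0];
        int less = 0;
        for (int i = 1; i < a.length; i++) {
            if (a[i] < pivot) {
                less++;
            }
        }
        int[] left = new int[less];
        int[] right = new int[a.length - 1 - less];
        int l = 0;
        int r = 0;
        for (int i = 1; i < a.length; i++) {
            if (a[i] < pivot) {
                left[l++] = a[i];
            } else {
                right[r++] = a[i];   // duplicates of the pivot go right
            }
        }
        left = sort(left);
        right = sort(right);
        int[] out = new int[a.length];
        System.arraycopy(left, 0, out, 0, left.length);
        out[left.length] = pivot;    // pivot lands in its sorted position
        System.arraycopy(right, 0, out, left.length + 1, right.length);
        return out;
    }
}
```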

Partition Sort, a.k.a. Quicksort
Quicksorting N items: (Demo)
• Partition on the leftmost item.
• Quicksort the left half.
• Quicksort the right half.
[Demo: partition(32) on 32 15 2 17 19 26 41 17 17 yields 15 2 17 19 26 17 17 32 41, with 32 "in its place": everything to its left is <= 32 and everything to its right is >= 32.]

Quicksort
Quicksort was the name chosen by Tony Hoare for partition sort.
• For most common situations, it is empirically the fastest sort.
  ○ Tony was lucky that the name was correct.
How fast is Quicksort? Need to count the number and difficulty of partition operations.
Theoretical analysis:
• Partitioning costs Θ(K) time, where K is the number of elements being partitioned (as we saw in our earlier "interview question").
• The interesting twist: Overall runtime will depend crucially on where the pivot ends up.

Quicksort Runtime

Best Case: Pivot Always Lands in the Middle
Only size 1 problems remain, so we're done.

Best Case Runtime?
Total work at each level:
≈N
≈N/2 + ≈N/2 = ≈N
≈N/4 * 4 = ≈N
Only size 1 problems remain, so we're done.
Overall runtime: Θ(NH), where H = Θ(log N), so: Θ(N log N)

Worst Case: Pivot Always Lands at Beginning of Array
Give an example of an array that would follow the pattern to the right.
• 1 2 3 4 5 6
What is the runtime Θ(·)?
• N²

Quicksort Performance
Theoretical analysis:
• Best case: Θ(N log N)
• Worst case: Θ(N²)
Compare this to Mergesort.
• Best case: Θ(N log N)
• Worst case: Θ(N log N)
Recall that Θ(N log N) vs. Θ(N²) is a really big deal. So how can Quicksort be the fastest sort empirically? Because on average it is Θ(N log N).
• A rigorous proof requires probability theory + calculus, but intuition + empirical analysis will hopefully convince you.

Argument #1: 10% Case
Suppose the pivot always ends up at least 10% from either edge (not to scale).
[Diagram: recursion tree with subproblem sizes N; then N/10 and 9N/10; then N/100, 9N/100, 9N/100, 81N/100; ...]
Work at each level: O(N)
• Runtime is O(NH).
  ○ H is approximately log10/9 N = O(log N)
• Overall: O(N log N).
Punchline: Even if you are unlucky enough to have a pivot that never lands anywhere near the middle, but always lands at least 10% from the edge, the runtime is still O(N log N).

Argument #2: Quicksort is BST Sort
[Diagram: quicksorting 5 3 2 1 7 8 4 6 (partitioned into 3 2 1 4 | 5 | 7 8 6) alongside a BST built by inserting 5, 3, 2, 1, 7, 8, 4, 6.]
Key idea: compareTo calls are the same for BST insert and Quicksort.
• Every number gets compared to 5 in both.
• 3 gets compared to only 1, 2, 4, and 5 in both.
Reminder: Random insertion into a BST takes O(N log N) time.

Empirical Quicksort Runtimes
For N items:
• Mean number of compares to complete Quicksort: ~2N ln N
• Standard deviation:
[Empirical histogram of Quicksort compare counts, 10,000 trials with N = 1000.]
Lots of arrays take 12,000ish compares to sort with Quicksort. A very small number take 15,000ish compares. The chance of taking 1,000,000ish compares is effectively zero.
For more, see: http://www.informit.com/articles/article.aspx?p=2017754&seqNum=7

Quicksort Performance
Theoretical analysis, for our pivot/partitioning strategies:
• Best case: Θ(N log N)
• Worst case: Θ(N²) (e.g. sorted or close to sorted arrays)
• Randomly chosen array case: Θ(N log N) expected, with extremely high probability!
Compare this to Mergesort.
• Best case: Θ(N log N)
• Worst case: Θ(N log N)
Why is it faster than mergesort?
• Requires empirical analysis. No obvious reason why.

Sorting Summary (so far)
Listed by mechanism:
• Selection sort: Find the smallest item and put it at the front.
• Insertion sort: Figure out where to insert the current item.
• Merge sort: Merge two sorted halves into one sorted whole.
• Partition (quick) sort: Partition items around a pivot.
Listed by memory and runtime:

             Memory                  Time                  Notes
Heapsort     Θ(1)                    Θ(N log N)            Bad caching (61C)
Insertion    Θ(1)                    Θ(N²)                 Θ(N) if almost sorted
Mergesort    Θ(N)                    Θ(N log N)
Quicksort    Θ(log N) (call stack)   Θ(N log N) expected   Fastest sort

Avoiding the Quicksort Worst Case

Quicksort Performance
The performance of Quicksort (both order of growth and constant factors) depends critically on:
• How you select your pivot.
• How you partition around that pivot.
• Other optimizations you might add to speed things up.
Bad choices can be very bad indeed, resulting in Θ(N²) runtimes.

Avoiding the Worst Case
If the pivot always lands somewhere "good", Quicksort is Θ(N log N). However, the very rare Θ(N²) cases do happen in practice, e.g.
• Bad ordering: Array already in sorted order (or almost sorted order).
• Bad elements: Array with all duplicates.
What can we do to avoid worst case behavior? Recall, our version of Quicksort has the following properties:
• The leftmost item is always chosen as the pivot.
• Our partitioning algorithm preserves the relative order of <= and >= items.
Example: partitioning 6 8 3 1 2 7 4 on 6 gives 3 1 2 4 6 8 7.

Avoiding the Worst Case
If the pivot always lands somewhere "good", Quicksort is Θ(N log N). However, the very rare Θ(N²) cases do happen in practice, e.g.
• Bad ordering: Array already in sorted order (or almost sorted order).
• Bad elements: Array with all duplicates.
What can we do to avoid worst case behavior?
• Always use the median as the pivot: this works.
• Randomly swap two indices occasionally.
  ○ Sporadic randomness. Maybe works?
• Shuffle before quicksorting.
  ○ This definitely works, and is a more hardcore version of the above.
• Partition from the center of the array: does not work; one can still find bad cases.
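The "shuffle before quicksorting" fix can be sketched as follows. This is my own illustration, not the course's implementation: a Fisher-Yates shuffle followed by an in-place quicksort that always picks the leftmost item as pivot. The in-place partition here is the Lomuto scheme, which differs in detail from the lecture's stable 3-scan partition but has the same leftmost-pivot weakness: without the shuffle, a sorted input would drive it to Θ(N²).

```java
import java.util.Random;

// Sketch (my own illustration, not the lecture's code): shuffle, then
// quicksort with the leftmost item as pivot. The Lomuto in-place partition
// used here is not the lecture's stable partition, but it shares the
// leftmost-pivot weakness that the shuffle is defending against.
public class ShuffledQuicksort {
    public static void sort(int[] a) {
        shuffle(a, new Random());   // defend against adversarial orderings
        quicksort(a, 0, a.length - 1);
    }

    /** Fisher-Yates shuffle: uniformly random permutation in Theta(N). */
    static void shuffle(int[] a, Random rng) {
        for (int i = a.length - 1; i > 0; i--) {
            swap(a, i, rng.nextInt(i + 1));   // uniform index in [0, i]
        }
    }

    private static void quicksort(int[] a, int lo, int hi) {
        if (lo >= hi) {
            return;
        }
        int pivot = a[lo];
        int j = lo;                            // a[lo+1..j] holds items < pivot
        for (int i = lo + 1; i <= hi; i++) {
            if (a[i] < pivot) {
                swap(a, ++j, i);
            }
        }
        swap(a, lo, j);                        // pivot lands "in its place"
        quicksort(a, lo, j - 1);
        quicksort(a, j + 1, hi);
    }

    private static void swap(int[] a, int i, int j) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}
```

Note that the sort's output is deterministic even though the shuffle is random; only the running time distribution changes.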

Citations
Quickman from Mega Man 2

Deleted Slides

More Quicksort Origins
Amusingly, Quicksort was the wrong tool for the job. Two issues:
• The language Tony was using didn't support recursion (so he couldn't easily implement Quicksort).
• Sentences are usually shorter than 15 words.
