Prefix Sums
CMPSC 450
Definition:
The all-prefix-sums operation takes a binary associative operator ⊕ and an ordered set of n elements
[a0, a1, …, an−1]
and returns the ordered set
[a0, (a0 ⊕ a1), …, (a0 ⊕ a1 ⊕ … ⊕ an−1)].
Serial example
• Use the binary associative operator ‘+’
b[0] = a[0];
for (i = 1; i < n; i++)
b[i] = a[i] + b[i-1];
• Work: O(n) • Time: O(n)
How can we parallelize this?
Parallel prefix-sum
• Balanced binary tree
• Do two passes
• Build the tree bottom-up
• Then traverse the tree top-down
• Work: O(n)
• Time: O(log n)
Historical note:
• Original algorithm due to R. Ladner and M. Fischer at the University of Washington in 1977
Example
[Figure: balanced binary tree over input [6, 4, 16, 10, 16, 14, 2, 8] after the up pass. Each node records its range, its sum, and a still-empty fromleft slot. Leaf sums are the inputs; internal sums: [0,2) 10, [2,4) 26, [4,6) 30, [6,8) 10, [0,4) 36, [4,8) 40, root [0,8) 76.]
Sophomoric Parallelism and Concurrency, Lecture 3
Example
[Figure: the same tree after the down pass, with fromleft values filled in: root [0,8) gets 0; [0,4) 0, [4,8) 36; [0,2) 0, [2,4) 10, [4,6) 36, [6,8) 66; leaves get 0, 6, 10, 26, 36, 52, 66, 68. Input [6, 4, 16, 10, 16, 14, 2, 8] produces output [6, 10, 26, 36, 52, 66, 68, 76].]
PRAM algorithm for prefix sums
Input: n = 2^k numbers stored in an array A
Output: An array C such that C(0, j) is the jth prefix sum
begin
1. for 1 ≤ j ≤ n pardo
     Set B(0, j) := A(j)
2. for h = 1 to log n do
     for 1 ≤ j ≤ n/2^h pardo
       Set B(h, j) := B(h−1, 2j−1) ⊕ B(h−1, 2j)
3. for h = log n downto 0 do
     for 1 ≤ j ≤ n/2^h pardo
       j even:    Set C(h, j) := C(h+1, j/2)
       j = 1:     Set C(h, 1) := B(h, 1)
       j odd > 1: Set C(h, j) := C(h+1, (j−1)/2) ⊕ B(h, j)
end
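The PRAM algorithm can be simulated sequentially to check the index arithmetic. The sketch below (a stand-in, not a real parallel implementation) turns each pardo loop into an ordinary loop, uses ‘+’ as the operator, keeps the pseudocode's 1-based j, and assumes n is a power of two:

```c
#include <stdlib.h>

/* Sequential simulation of the PRAM prefix-sums algorithm above.
   B[h][j] and Cw[h][j] are indexed j = 1..n/2^h, as in the pseudocode. */
void pram_prefix_sums(const int *A, int *C, int n) {
    int logn = 0;
    while ((1 << logn) < n) logn++;

    int **B  = malloc((logn + 1) * sizeof *B);
    int **Cw = malloc((logn + 1) * sizeof *Cw);
    for (int h = 0; h <= logn; h++) {
        B[h]  = malloc((n / (1 << h) + 1) * sizeof **B);
        Cw[h] = malloc((n / (1 << h) + 1) * sizeof **Cw);
    }

    for (int j = 1; j <= n; j++)                 /* step 1 */
        B[0][j] = A[j - 1];

    for (int h = 1; h <= logn; h++)              /* step 2: up sweep */
        for (int j = 1; j <= n / (1 << h); j++)
            B[h][j] = B[h - 1][2 * j - 1] + B[h - 1][2 * j];

    for (int h = logn; h >= 0; h--)              /* step 3: down sweep */
        for (int j = 1; j <= n / (1 << h); j++) {
            if (j == 1)          Cw[h][1] = B[h][1];
            else if (j % 2 == 0) Cw[h][j] = Cw[h + 1][j / 2];
            else                 Cw[h][j] = Cw[h + 1][(j - 1) / 2] + B[h][j];
        }

    for (int j = 1; j <= n; j++)
        C[j - 1] = Cw[0][j];

    for (int h = 0; h <= logn; h++) { free(B[h]); free(Cw[h]); }
    free(B); free(Cw);
}
```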
Partitioning-based algorithm for prefix sums
• Partition array into p parts
• Each processor computes prefix sums on its part
• One processor does prefix sums of boundary values (p-1 of them)
• The boundary values are propagated to each partition concurrently
• T(n) = Θ(n/p + p), W(n) = Θ(n + p)
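A sketch of the partitioning-based algorithm in C, simulated sequentially: each of the three phases below would run across p processors in the real algorithm (phases 1 and 3 fully in parallel, phase 2 on one processor):

```c
#include <stdlib.h>

/* Partitioning-based prefix sums, simulated sequentially.
   Assumes + as the operator; p need not divide n evenly. */
void partitioned_prefix_sums(const int *a, int *b, int n, int p) {
    int *boundary = malloc(p * sizeof *boundary);

    /* Phase 1: each "processor" r scans its own contiguous block */
    for (int r = 0; r < p; r++) {
        int lo = r * n / p, hi = (r + 1) * n / p, s = 0;
        for (int i = lo; i < hi; i++) { s += a[i]; b[i] = s; }
        boundary[r] = s;                       /* block total */
    }

    /* Phase 2: one processor does an exclusive scan of the block totals */
    int off = 0;
    for (int r = 0; r < p; r++) { int t = boundary[r]; boundary[r] = off; off += t; }

    /* Phase 3: each block adds its offset (concurrently in the real version) */
    for (int r = 1; r < p; r++)
        for (int i = r * n / p; i < (r + 1) * n / p; i++)
            b[i] += boundary[r];

    free(boundary);
}
```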
Pack
[Non-standard terminology]
Given an array input, produce an array output containing only elements such that f(elt) is true
Example: input [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
         f: is elt > 10
         output [17, 11, 13, 19, 24]
Parallelizable?
• Finding elements for the output is easy
• But getting them in the right place seems hard
Parallel prefix to the rescue
1. Parallel map to compute a bit-vector for true elements
   input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
   bits   [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
2. Parallel-prefix sum on the bit-vector
   bitsum [1, 1, 1, 1, 2, 2, 3, 4, 4, 5]
3. Parallel map to produce the output
   output [17, 11, 13, 19, 24]
output = new array of size bitsum[n-1]
FORALL (i=0; i < input.length; i++) {
  if (bits[i] == 1)
    output[bitsum[i]-1] = input[i];
}
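The three steps above can be sketched in C. This is a sequential stand-in (the prefix sum is done serially where the parallel version would be used); the predicate elt > threshold is hard-coded for the slide's example:

```c
#include <stdlib.h>

/* Pack: keep elements greater than threshold, preserving order.
   Returns the output size and stores a malloc'd array in *output. */
int pack_greater_than(const int *input, int n, int threshold, int **output) {
    int *bits   = malloc(n * sizeof *bits);
    int *bitsum = malloc(n * sizeof *bitsum);

    for (int i = 0; i < n; i++)            /* step 1: map to bit-vector */
        bits[i] = input[i] > threshold;

    for (int i = 0; i < n; i++)            /* step 2: (serial) prefix sum */
        bitsum[i] = bits[i] + (i ? bitsum[i - 1] : 0);

    int m = n ? bitsum[n - 1] : 0;
    *output = malloc(m * sizeof **output);
    for (int i = 0; i < n; i++)            /* step 3: map into final slots */
        if (bits[i]) (*output)[bitsum[i] - 1] = input[i];

    free(bits); free(bitsum);
    return m;
}
```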
Pack comments
• First two steps can be combined into one pass
  • Just using a different base case for the prefix sum
  • No effect on asymptotic complexity
• Can also combine third step into the down pass of the prefix sum
  • Again no effect on asymptotic complexity
• Analysis: O(n) work, O(log n) span
  • 2 or 3 passes, but 3 is a constant
• Parallelized packs will help us parallelize quicksort...
Prefix sum applications
• The optimal PRAM prefix sum algorithm is used as a subroutine in several other algorithms
• Counting sort algorithm
• Algorithms for list ranking
• Graph algorithms
• Parallel polynomial interpolation, divided differences
• Very important subroutine on GPUs
• Scan operation in functional programming
Circuit implementation
Further Reading
• Guy Blelloch, Prefix Sums and Their Applications, http://www.cs.cmu.edu/~blelloch/papers/Ble93.pdf
• S. Sengupta et al., Scan Primitives for GPU Computing, Graphics Hardware, 2007
• D. Merrill and A. Grimshaw, Parallel Scan for Stream architectures, Tech report, Univ of Virginia, 2009
• Uzi Vishkin lecture notes
• http://www.umiacs.umd.edu/~vishkin/PUBLICATIONS/classnotes.pdf
Quicksort review
Recall quicksort was sequential, in-place, expected time O(n log n)
1. Pick a pivot element (best / expected case work: O(1))
2. Partition all the data into: (O(n))
   A. The elements less than the pivot
   B. The pivot
   C. The elements greater than the pivot
3. Recursively sort A and C (2T(n/2))

How should we parallelize this?
Quicksort
1. Pick a pivot element (best / expected case work: O(1))
2. Partition all the data into: (O(n))
   A. The elements less than the pivot
   B. The pivot
   C. The elements greater than the pivot
3. Recursively sort A and C (2T(n/2))

Easy: Do the two recursive calls in parallel
• Work: unchanged of course, O(n log n)
• Span: now T(n) = O(n) + 1T(n/2) = O(n)
• So parallelism (i.e., work / span) is O(log n)
Doing better
• O(log n) speed-up with an infinite number of processors is okay, but a bit underwhelming
• Sort 10^9 elements 30 times faster
• Google searches strongly suggest quicksort cannot do better because the partition cannot be parallelized
• The Internet has been known to be wrong
• But we need auxiliary storage (no longer in place)
• In practice, constant factors may make it not worth it, but remember Amdahl’s Law
• Already have everything we need to parallelize the partition...
Parallel partition (not in place)
Partition all the data into:
A. The elements less than the pivot
B. The pivot
C. The elements greater than the pivot
• This is just two packs!
• We know a pack is O(n) work, O(log n) span
• Pack elements less than pivot into left side of aux array
• Pack elements greater than pivot into right size of aux array
• Put pivot between them and recursively sort
• With a little more cleverness, can do both packs at once but no effect on asymptotic complexity
• With O(log n) span for partition, the total best-case and expected-case span for quicksort is
  T(n) = O(log n) + 1T(n/2) = O(log² n)
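The two packs can be sketched in C as follows. Each loop below is a sequential stand-in for a parallel pack; the sketch assumes distinct keys and that the pivot value occurs exactly once in the input:

```c
/* Partition into an auxiliary array via two pack-style passes.
   Returns the pivot's final index. */
int partition_with_packs(const int *in, int *out, int n, int pivot) {
    int k = 0;
    for (int i = 0; i < n; i++)      /* pack elements < pivot to the left */
        if (in[i] < pivot) out[k++] = in[i];
    int mid = k;
    out[k++] = pivot;                /* pivot sits between the two packs */
    for (int i = 0; i < n; i++)      /* pack elements > pivot to the right */
        if (in[i] > pivot) out[k++] = in[i];
    return mid;
}
```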
Example
• Step 1: pick pivot as median of three
• Steps 2a and 2c (combinable): pack less than, then pack greater than into a second array
– Fancy parallel prefix to pull this off not shown
input [8, 1, 4, 9, 0, 3, 5, 2, 7, 6]   (pivot = 6)
pack less-than: [1, 4, 0, 3, 5, 2]
after both packs plus pivot: [1, 4, 0, 3, 5, 2, 6, 8, 9, 7]
• Step 3: Two recursive sorts in parallel
– Can sort back into original array (like in mergesort)
Now mergesort
Recall mergesort: sequential, not-in-place, worst-case O(n log n)
1. Sort left half and right half 2T(n/2)
2. Merge results O(n)
Just like quicksort, doing the two recursive sorts in parallel changes the
recurrence for the span to T(n) = O(n) + 1T(n/2) = O(n)
• Again, parallelism is O(log n)
• To do better, need to parallelize the merge
– The trick won’t use parallel prefix this time
Parallelizing the merge
Need to merge two sorted subarrays (may not have the same size)
[0, 1, 4, 8, 9]   [2, 3, 5, 6, 7]
Idea: Suppose the larger subarray has m elements. In parallel:
• Merge the first m/2 elements of the larger half with the
“appropriate” elements of the smaller half
• Merge the second m/2 elements of the larger half with the rest of the smaller half
Parallelizing the merge
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]
Parallelizing the merge
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]
1. Get median of bigger half: O(1) to compute middle index
Parallelizing the merge
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]
1. Get median of bigger half: O(1) to compute middle index
2. Find how to split the smaller half at the same value as the left-half
split: O(log n) to do binary search on the sorted small half
Parallelizing the merge
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]
1. Get median of bigger half: O(1) to compute middle index
2. Find how to split the smaller half at the same value as the left-half
split: O(log n) to do binary search on the sorted small half
3. Size of two sub-merges conceptually splits output array: O(1)
Parallelizing the merge
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]
output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   (the two submerges fill [lo, hi))
1. Get median of bigger half: O(1) to compute middle index
2. Find how to split the smaller half at the same value as the left-half
split: O(log n) to do binary search on the sorted small half
3. Size of two sub-merges conceptually splits output array: O(1)
4. Do two submerges in parallel
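The four steps can be sketched in C. This is a sequential stand-in: the two recursive calls at the bottom are exactly the submerges that would be forked in parallel. The function name and base-case handling are this sketch's own choices, not the slides':

```c
/* Merge sorted a[0..na) and b[0..nb) into out[0..na+nb). */
void pmerge(const int *a, int na, const int *b, int nb, int *out) {
    if (na < nb) { pmerge(b, nb, a, na, out); return; }  /* a is the bigger half */
    if (na == 0) return;
    if (na == 1) {                                       /* tiny base case */
        if (nb == 0) { out[0] = a[0]; }
        else if (a[0] < b[0]) { out[0] = a[0]; out[1] = b[0]; }
        else { out[0] = b[0]; out[1] = a[0]; }
        return;
    }
    int ma = na / 2;                 /* step 1: median of bigger half, O(1) */
    int lo = 0, hi = nb;             /* step 2: binary search in smaller half */
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (b[mid] < a[ma]) lo = mid + 1; else hi = mid;
    }
    int mb = lo;
    out[ma + mb] = a[ma];            /* step 3: split position in the output */
    /* step 4: the two submerges (run in parallel in the real algorithm) */
    pmerge(a, ma, b, mb, out);
    pmerge(a + ma + 1, na - ma - 1, b + mb, nb - mb, out + ma + mb + 1);
}
```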
The Recursion
[0, 4, 6, 8, 9]   [1, 2, 3, 5, 7]

When we do each merge in parallel, we split the bigger one in half and use binary search to split the smaller one:
[0, 4] with [1, 2, 3, 5]      6      [8, 9] with [7]
Analysis
• Sequential recurrence for mergesort:
T(n) = 2T(n/2) + O(n) which is O(n log n)
• Doing the two recursive calls in parallel but a sequential merge:
  Work: same as sequential
  Span: T(n) = 1T(n/2) + O(n), which is O(n)
• Parallel merge makes work and span harder to compute
• Each merge step does an extra O(log n) binary search to find how to split the
smaller subarray
• To merge n elements total, do two smaller merges of possibly different sizes
• But worst-case split is (1/4)n and (3/4)n
• When the subarrays are the same size and the “smaller” one splits “all”/“none”
Analysis continued
For just a parallel merge of n elements:
• Work is T(n) = T(3n/4) + T(n/4) + O(log n), which is O(n)
• Span is T(n) = T(3n/4) + O(log n), which is O(log² n)
• (neither bound is immediately obvious, but “trust me”)
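For the span bound, a sketch of the intuition (not a full proof): the recursion follows the worst-case 3/4-size branch, each level adds an O(log n) binary search, and the size shrinks geometrically, so there are O(log n) levels:

```latex
\begin{align*}
T(n) &= T(\tfrac{3}{4}n) + O(\log n)\\
     &= \underbrace{O(\log_{4/3} n)}_{\text{levels}} \cdot O(\log n)
      = O(\log^2 n).
\end{align*}
```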
So for mergesort with parallel merge overall:
• Work is T(n) = 2T(n/2) + O(n), which is O(n log n)
• Span is T(n) = 1T(n/2) + O(log2 n), which is O(log3 n)
So parallelism (work / span) is O(n / log2 n)
• Not quite as good as quicksort’s O(n / log n)
• But it is a worst-case guarantee
• And as always this is just the asymptotic result
Supplemental Material
• Prefix Sums and Their Applications, Blelloch, https://www.cs.cmu.edu/~guyb/papers/Ble93.pdf
• https://homes.cs.washington.edu/~djg/teachingMaterials/spac/gross manSPAC_lec3.pdf
Parallel prefix, generalized
Just as sum-array was the simplest example of a common pattern, prefix-sum illustrates a pattern that arises in many, many problems
• Minimum, maximum of all elements to the left of i
• Is there an element to the left of i satisfying some property?
• Count of elements to the left of i satisfying some property
  • This last one is perfect for an efficient parallel pack...
• Perfect for building on top of the “parallel prefix trick”
• We did an inclusive sum, but exclusive is just as easy
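The inclusive/exclusive distinction in C, as a small sketch: the exclusive scan shifts everything right by one position and seeds with the operator's identity (0 for ‘+’):

```c
/* Inclusive scan: b[i] = a[0] + ... + a[i] */
void inclusive_scan(const int *a, int *b, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) { s += a[i]; b[i] = s; }
}

/* Exclusive scan: b[i] = a[0] + ... + a[i-1], with b[0] = 0 */
void exclusive_scan(const int *a, int *b, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) { b[i] = s; s += a[i]; }
}
```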
Example
[Figure: the same range-sum tree shown earlier, repeated before and after the down pass: input [6, 4, 16, 10, 16, 14, 2, 8], output [6, 10, 26, 36, 52, 66, 68, 76].]
Pass 1: Up
1. Up: Build a binary tree where
   • Root has sum of the whole range [0,n)
• If a node has sum of [lo,hi) and hi>lo,
• Left child has sum of [lo,middle)
• Right child has sum of [middle,hi)
• A leaf has sum of [i,i+1), i.e., input[i]
This is an easy fork-join computation: combine results by actually building a binary tree with all the range-sums
• Tree built bottom-up in parallel
• Could be more clever with an array like with heaps
Analysis: O(n) work, O(log n) span
Pass 2: Down
2. Down: Pass down a value fromLeft
   • Root is given a fromLeft of 0
   • Each node takes its fromLeft value and
     • Passes its left child the same fromLeft
     • Passes its right child its fromLeft plus its left child’s sum (as stored in pass 1)
   • At the leaf for array position i, output[i] = fromLeft + input[i]
This is an easy fork-join computation: traverse the tree built in step 1 and produce no result
• Leaves assign to output
• Invariant: fromLeft is sum of elements left of the node’s range
Analysis: O(n) work, O(log n) span
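Both passes can be sketched in C using an implicit tree in an array (the “could be more clever with an array like with heaps” idea: node i’s children are 2i and 2i+1, leaves sit at indices n..2n−1). The loops are sequential stand-ins for the fork-join passes; n is assumed to be a power of two:

```c
#include <stdlib.h>

/* Two-pass tree-based inclusive prefix sums. */
void tree_prefix_sums(const int *input, int *output, int n) {
    int *sum      = malloc(2 * n * sizeof *sum);
    int *fromleft = malloc(2 * n * sizeof *fromleft);

    for (int i = 0; i < n; i++)            /* leaves hold the input */
        sum[n + i] = input[i];
    for (int i = n - 1; i >= 1; i--)       /* pass 1: up, build range sums */
        sum[i] = sum[2 * i] + sum[2 * i + 1];

    fromleft[1] = 0;                       /* pass 2: down, root gets 0 */
    for (int i = 1; i < n; i++) {
        fromleft[2 * i]     = fromleft[i];              /* left child: same */
        fromleft[2 * i + 1] = fromleft[i] + sum[2 * i]; /* right: + left sum */
    }
    for (int i = 0; i < n; i++)            /* leaf: output = fromLeft + input */
        output[i] = fromleft[n + i] + input[i];

    free(sum); free(fromleft);
}
```

On the slides' running example this reproduces the tree's fromleft values exactly.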