
Week 11: Scalable Algorithmic Templates

COMMONWEALTH OF AUSTRALIA Copyright Regulations 1969 WARNING
This material has been reproduced and communicated to you by or on behalf of the University of Sydney pursuant to Part VB of the Copyright Act 1968 (the Act).


The material in this communication may be subject to copyright under the Act. Any further copying or communication of this material by you may be the subject of copyright protection under the Act.
Do not remove this notice.

§ Recursion
§ Divide and Conquer
§ How to parallelize ‘Divide and Conquer’ algorithms
§ Example: Mergesort
§ Reductions

Recursion (informally)
‘’To understand recursion, you need to understand recursion first.’’
Definition of the acronym ‘GNU’:
GNU’s Not Unix
This is a recursive acronym (geeky!).
From Merriam-Webster:
Recursion: a computer programming technique involving the use of a procedure, subroutine, function, or algorithm that calls itself one or more times until a specified condition is met, at which time the rest of each repetition is processed from the last one called to the first — compare iteration.

What is Recursion?
§ Recursion occurs when a procedure calls itself to solve a simpler version of the problem. With each recursive call, the problem is different from, and simpler than, the original problem.
§ At some stage, the problem is so simple that we can solve it—the recursion terminates.
§ this is the base-case of the recursion
§ The previous problem instance is then solved in terms of the simpler problem(s).
§ And so on, back to the initial procedure call…

Recursion Example – Solving Factorials
§ The problem of solving factorials is our first example of recursion.
§ The factorial operation (!) in mathematics is illustrated below.
1! = 1                          (base case)
2! = 2 * 1           or 2 * 1!
3! = 3 * 2 * 1       or 3 * 2!
4! = 4 * 3 * 2 * 1   or 4 * 3!
n! = n * (n-1)!                 (recursive case, for n > 1)
Notice that each successive line can be solved in terms of the previous line. For example, 4! is equivalent to the problem 4 * 3!.

Using A Recursive Method
§ A recursive procedure to solve the factorial problem is given below. Notice the recursive call in the last line.
§ The procedure calls itself to solve a smaller version of the factorial problem.
§ The base case is a fundamental situation for which we know the answer.
§ In the case of finding factorials, the answer of 1! is by definition 1. No further work is needed for the base-case.
int fact(int n) {
    // returns the value of n!
    // precondition: n >= 1
    if (n == 1)
        return 1;
    else
        return n * fact(n - 1);
}

Recursion – Step By Step
Suppose we call the method to solve fact(4).
§ This will result in four calls of method fact:
fact(4): This is not the base case (n==1) so we return the result of 4 * fact(3). This multiplication will not be carried out until an answer is found for fact(3). This leads to the second call of fact to solve fact(3).
fact(3): Again, this is not the base case and we return 3 * fact(2). This leads to another recursive call to solve fact(2).
fact(2): Still, this is not the base case; we solve 2 * fact(1).
fact(1): Finally we reach the base case, which returns the value 1.
int fact(int n) {
    if (n == 1)
        return 1;
    else
        return n * fact(n - 1);
}
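Putting the four calls together, the suspended multiplications unwind from the base case back up to the original call:

    fact(4) = 4 * fact(3)
            = 4 * (3 * fact(2))
            = 4 * (3 * (2 * fact(1)))
            = 4 * (3 * (2 * 1))
            = 24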

Recursion – The Big Picture
§ Each box in the diagram represents a call of procedure fact(). To solve fact(4) requires four calls of procedure fact():
  § Here we compute fact(4) from the solution for fact(3).
  § Here we compute fact(3) from the solution for fact(2).
  § Here we compute fact(2) from the solution for the base-case fact(1).
  § This last call is the base-case, for which we know the solution.
§ Notice that when the recursive calls were made inside the else statement, the value fed to the recursive call was (n-1). This is where the problem is getting smaller and simpler, with the eventual goal of solving 1!.

Recursion – Step By Step
§ When a recursive call is made, the current computation is temporarily suspended with all its current information available for later use.
§ Same as for any other function call.
§ A completely new copy of the procedure’s data is used to evaluate the recursive call:
  § a fresh set of local variables,
  § its own set of procedure arguments.
§ When the recursive call is completed, the value returned by the recursive call is used to complete the suspended computation. The suspended computation’s work now proceeds.
§ When the base case is encountered, the recursion terminates by returning a value to the caller. The caller then returns a value to the caller’s caller, and so on, until the final answer becomes available in the initial recursive call.

Pitfalls Of Recursion
§ If the recursion never reaches the base case, the recursive calls will continue until the computer runs out of memory and the program crashes. The message “stack overflow error” or “heap storage exhaustion” indicates a possible runaway recursion.
§ When programming recursively, you need to make sure that the algorithm is moving towards the base case. Each successive call of the algorithm must be solving a simpler version of the problem.
§ Any recursive algorithm can be implemented iteratively, but sometimes only with great difficulty. However, a recursive solution will typically run more slowly than an iterative one because of the overhead of procedure calls.
int fact(int n) {
    if (n == 1)
        return 1;
    else
        return n * fact(n - 1);
}

printf("%d\n", fact(-1));   // n starts below the base case and only decreases: runaway recursion
§ Tail recursion and related compiler optimizations can help.
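To illustrate both points, here is a sketch of an iterative version and a tail-recursive version of the same function (the names fact_iter and fact_tail are illustrative, not from the slides):

    /* Iterative factorial: same result as the recursive fact(), but with no
       function-call overhead and no risk of running out of stack space.     */
    int fact_iter(int n) {
        int result = 1;
        for (int i = 2; i <= n; i++)
            result *= i;
        return result;
    }

    /* Tail-recursive factorial: the recursive call is the last thing the
       function does, so a compiler that performs tail-call optimization can
       compile it into a loop. Call it as fact_tail(n, 1).                    */
    int fact_tail(int n, int acc) {
        if (n <= 1)
            return acc;
        return fact_tail(n - 1, n * acc);
    }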

§ Recursion ✓
§ Divide and Conquer
§ How to parallelize ‘Divide and Conquer’ algorithms
§ Example: Mergesort
§ Reductions

Divide and Conquer
§ Break the problem into several subproblems that are similar to the original problem but smaller in size, solve subproblems recursively, and combine solutions to create solution to the original problem.
At each level of the recursion:
1) Divide the problem into a number of subproblems.
2) Conquer subproblems:
§ directly, if the subproblem is simple enough (base case)
§ recursively, otherwise
3) Combine subproblem solutions into solution for the original problem.
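As a small, self-contained illustration of the template (the problem and the name SumDC are invented for this sketch, not taken from the slides), summing an array by splitting it in half follows exactly these three steps:

    #include <stdio.h>

    /* Sum of A[lo..hi] by divide and conquer. */
    int SumDC(int A[], int lo, int hi) {
        if (lo == hi)                       /* base case: one element      */
            return A[lo];
        int mid = (lo + hi) / 2;            /* 1) divide into two halves   */
        int left  = SumDC(A, lo, mid);      /* 2) conquer each half        */
        int right = SumDC(A, mid + 1, hi);  /*    recursively              */
        return left + right;                /* 3) combine the two results  */
    }

    int main(void) {
        int A[] = {3, 1, 4, 1, 5, 9, 2, 6};
        printf("%d\n", SumDC(A, 0, 7));     /* prints 31 */
        return 0;
    }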

Mergesort
Assume we want to sort an array of characters, integers or floats.
1) Divide the array into 2 halves.
2) Conquer: recursively sort each half.
3) Combine: merge the two sorted halves to make a sorted whole.
The Merge Step
§ Keep track of the smallest remaining element in each sorted half.
§ Insert the smaller of the two elements into the auxiliary array.
§ Repeat until done; once one half is exhausted (e.g. the first half), copy the remaining elements of the other half.
Mergesort (Divide)
§ Divide until we reach the 1-element base-case.
§ A one-element array is trivially sorted.

Mergesort (Combine)
§ Merge, starting from the 1-element base-cases, …
§ … merging the 2-element arrays, …
§ … until we merge the final result array.

Implementing Mergesort
void MergeSort(float A[], int p, int r) {
    int q;
    if (p < r) {
        q = (p+r)/2;
        MergeSort(A, p, q);
        MergeSort(A, q+1, r);
        Merge(A, p, q, r);
    }
}

[Figure: recursion tree over array indices 0..7. MergeSort(A, 0, 7) calls MergeSort(A, 0, 3) and MergeSort(A, 4, 7), and so on down to single elements.]

Implementing Mergesort (cont.)
§ MergeSort is a recursive procedure.
§ p and r are indices that mark the range of array A[] to be sorted.
§ If the to-be-sorted range is larger than 1:
  § compute the index q of the middle element
  § recursively sort the halves (p..q) and (q+1..r), then merge the sorted halves.
§ If the to-be-sorted range == 1:
  § do nothing (base case: arrays of length 1 are trivially sorted!)

Implementing the Merge Step of Mergesort (p. 1)

void Merge(float A[], int p, int q, int r) {
    int n1 = q - p + 1;   /* length of left half  A[p..q]   */
    int n2 = r - q;       /* length of right half A[q+1..r] */
    ...
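The rest of the Merge code did not survive extraction; below is a minimal sketch that matches the declarations above, assuming the two halves are copied into temporary arrays L and R and then merged back into A:

    void Merge(float A[], int p, int q, int r) {
        int n1 = q - p + 1;               /* length of left half  A[p..q]   */
        int n2 = r - q;                   /* length of right half A[q+1..r] */
        float L[n1], R[n2];               /* auxiliary arrays (C99 VLAs)    */
        int i, j, k;
        for (i = 0; i < n1; i++) L[i] = A[p + i];       /* copy left half  */
        for (j = 0; j < n2; j++) R[j] = A[q + 1 + j];   /* copy right half */
        i = 0; j = 0; k = p;
        while (i < n1 && j < n2)          /* take the smaller front element */
            A[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
        while (i < n1) A[k++] = L[i++];   /* one half exhausted:            */
        while (j < n2) A[k++] = R[j++];   /* copy the rest of the other     */
    }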
Parallelizing Mergesort
§ Each recursive call is handed to its own thread: the two halves are sorted concurrently by PMSort threads, and the parent merges them after joining both.

    ...
    if (p < r) {
        q = (p+r)/2;
        LeftArg->A = A;  LeftArg->p = p;    LeftArg->r = q;  LeftArg->depth = depth+1;
        LeftArg->max_depth = max_depth;
        RightArg->A = A; RightArg->p = q+1; RightArg->r = r; RightArg->depth = depth+1;
        RightArg->max_depth = max_depth;
        pthread_create(&LThread, NULL, PMSort, (void *)&LeftArg);
        pthread_create(&RThread, NULL, PMSort, (void *)&RightArg);
        pthread_join(LThread, NULL);
        pthread_join(RThread, NULL);
        Merge(A, p, q, r);
    }
    ... (MyArg->A, MyArg->p, MyArg->r, MyArg->depth, MyArg->max_depth);
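The slide's argument struct and function header did not survive extraction; here is a self-contained sketch of the same idea, where the struct name SortArgs, the depth cutoff, and the sequential fall-back are assumptions rather than the slide's exact code:

    #include <pthread.h>

    /* Assumed argument struct: one per task, since a pthread start routine
       receives only a single void* argument.                               */
    typedef struct {
        float *A;
        int p, r;          /* range A[p..r] to sort             */
        int depth;         /* current recursion depth           */
        int max_depth;     /* stop spawning threads below this  */
    } SortArgs;

    void Merge(float A[], int p, int q, int r);      /* as sketched above  */
    void MergeSort(float A[], int p, int r);         /* sequential version */

    void *PMSort(void *arg) {
        SortArgs *MyArg = (SortArgs *) arg;
        float *A = MyArg->A;
        int p = MyArg->p, r = MyArg->r;
        int depth = MyArg->depth, max_depth = MyArg->max_depth;

        if (p < r) {
            int q = (p + r) / 2;
            if (depth >= max_depth) {
                /* Enough threads already: fall back to the sequential code. */
                MergeSort(A, p, q);
                MergeSort(A, q + 1, r);
            } else {
                pthread_t LThread, RThread;
                SortArgs LeftArg  = { A, p,     q, depth + 1, max_depth };
                SortArgs RightArg = { A, q + 1, r, depth + 1, max_depth };
                pthread_create(&LThread, NULL, PMSort, (void *)&LeftArg);
                pthread_create(&RThread, NULL, PMSort, (void *)&RightArg);
                pthread_join(LThread, NULL);   /* wait for both halves ...  */
                pthread_join(RThread, NULL);
            }
            Merge(A, p, q, r);                 /* ... before merging them   */
        }
        return NULL;
    }

To sort a whole array, fill in a SortArgs for the full range with depth 0 and call PMSort on it. With max_depth = 2, for example, the leaf-level sequential sorts run in at most four threads, so the number of threads stays bounded instead of growing with every level of the recursion.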

Divide & Conquer Parallelization
[Figure: divide-and-conquer recursion tree. The independent base-case solves (four in the diagram) run in parallel; their results are combined as the recursion unwinds.]
