
ECS 20 — Lecture 14 — Fall 2013 — 12 Nov 2013
Phil Rogaway

Today:
 o Asymptotic notation
 o Solving recurrence relations

Announcements:
 o It’s dog day – and there came to class one (1) dog.
   [photo: not the actual dog who visited, but a reasonable approximation]
Asymptotic notation and viewpoint
Note: below, I am showing O and Θ together. In class, might do one and then the other.

Note: some people throw absolute value signs, | |, around the f’s. I am omitting them since, almost always, f(n) is a nonnegative function. I find it “weird” to consider negative f’s in this context.

Last time I defined:

   O(g) = { f: N → R : ∃ C, N s.t. f(n) ≤ C g(n) for all n ≥ N }
   Θ(g) = { f: N → R : ∃ c, C, N s.t. c g(n) ≤ f(n) ≤ C g(n) for all n ≥ N }

Here’s an almost-equivalent form:

   O(g) = { f: N → R : ∃ C, C′ s.t. f(n) ≤ C g(n) + C′ for all n ∈ N }
   Θ(g) = { f: N → R : ∃ c, C, N s.t. c g(n) ≤ f(n) ≤ C g(n) for all n ≥ N }

Or, how about:

   O(g) = { f: N → R : ∃ C, N s.t. f(n)/g(n) ≤ C as long as n ≥ N }
   Θ(g) = { f: N → R : ∃ c, C, N s.t. c ≤ f(n)/g(n) ≤ C as long as n ≥ N }

People often use “is” for “is a member of” or “is an anonymous element of”.
They even define things that way, never regarding O(g) or Θ(g) as a defined “thing”, but only defining what it means to say that “f(n) is O(g(n))” or “f(n) ∈ O(g(n))”.
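To make the quantifiers concrete, here is a minimal worked example (my own addition, not from the lecture); the witnesses C = 4 and N = 5 are one valid choice among many:

   % Certifying 3n + 5 \in O(n) straight from the definition:
   3n + 5 \;\le\; 4n \iff n \ge 5,
   \qquad\text{so } C = 4,\ N = 5 \text{ certify } 3n + 5 \in O(n).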
The “qualitative behavior” of practical computation – where, very roughly, things go from “practical” to “impractical” – is often determined more by asymptotic growth rates than by constants.
See http://www.csupomona.edu/~ftang/courses/CS240/lectures/analysis.htm for some nice stuff on big-O.
Suppose 1 step = 1 nsec (10^-9 sec):

      n   |  n lg n  |   n^2    |   n^3    |    2^n
   ----------------------------------------------------
     10   |   30 ns  |  100 ns  |   1 us   |   1 us
    100   |  700 ns  |   10 us  |   1 ms   |  10^13 yrs
   1000   |   10 us  |    1 ms  |   1 sec  |  10^284 yrs
   10^4   |  100 us  |   0.1 s  |  17 mins |    —
   10^5   |    2 ms  |   10 s   |  12 days |    —
   10^6   |   20 ms  |  17 mins |  32 yrs  |    —
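These entries are easy to regenerate. A small sketch (my own addition; the helper names are mine), assuming one step per nanosecond as above:

   import math

   SECONDS_PER_STEP = 1e-9  # one step = 1 ns

   def fmt(sec):
       """Render a duration in a human-friendly unit."""
       for unit, scale in [("ns", 1e-9), ("us", 1e-6), ("ms", 1e-3),
                           ("sec", 1), ("min", 60), ("day", 86400),
                           ("yr", 86400 * 365)]:
           if sec < scale * 1000 or unit == "yr":
               return f"{sec / scale:.3g} {unit}"

   for n in [10, 100, 1000, 10**4, 10**5, 10**6]:
       times = {"n": n, "n lg n": n * math.log2(n), "n^2": n**2, "n^3": n**3}
       if n <= 1000:                  # 2^n overflows a float beyond this
           times["2^n"] = 2.0**n
       print(n, {k: fmt(v * SECONDS_PER_STEP) for k, v in times.items()})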
The simplicity afforded by dealing with asymptotics:
   O(n^2) + O(n^2) = O(n^2)
   O(n^2) + O(n^3) = O(n^3)
   O(n log n) + O(n) = O(n log n)
   etc.
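The middle identity unpacks to a one-line argument (my own addition): if f(n) ≤ C_1 n^2 and g(n) ≤ C_2 n^3 for n ≥ N, then

   f(n) + g(n) \;\le\; C_1 n^2 + C_2 n^3 \;\le\; (C_1 + C_2)\, n^3
   \qquad\text{for } n \ge \max(N, 1),

so any function in O(n^2) plus any function in O(n^3) lies in O(n^3).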
True/False:
   5n^3 + 100n^2 + 100 = O(n^3)?   TRUE
   If f ∈ Θ(n^2) then f ∈ O(n^2)?  TRUE
   n! = O(2^n)?   NO
   n! = O(n^n)?   YES
(Truth: n! = Θ((n/e)^n sqrt(n)) — indeed, n! ~ sqrt(2 pi n) (n/e)^n by Stirling’s formula.)
Claim: H_n = 1/1 + 1/2 + … + 1/n = O(lg n).
Proof: upper-bound the sum by 1 + integral_1^n (1/x) dx = 1 + ln n = O(lg n).
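A numeric check of this bound — my own sketch, not part of the notes:

   import math

   def harmonic(n):
       """H_n = 1/1 + 1/2 + ... + 1/n, summed directly."""
       return sum(1.0 / k for k in range(1, n + 1))

   for n in [10, 100, 1000, 10**4, 10**5]:
       bound = 1 + math.log(n)   # the integral upper bound above
       print(n, round(harmonic(n), 4), round(bound, 4))
   # H_n stays below 1 + ln n; the difference tends to
   # 1 - gamma ≈ 0.4228 (gamma = the Euler–Mascheroni constant).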
Draw picture showing common growth rates:
   Theta(n!)
   Theta(2^n)
   Theta(n^3)
   Theta(n^2)
   Theta(n log n log log n)
   Theta(n lg n)
   Theta(n)
   Theta(sqrt(n))
   Theta(log n)
   Theta(1)
Exercise: where does sqrt(n) go in this picture?
The highest degree term in a polynomial is the term that determines the asymptotic growth rate of that polynomial.
General rules: Characterizing Functions in Simplest Terms — material from URL above
In general, we should use big-Oh notation to characterize a function as closely as possible. For example, while it is true that f(n) = 4n^3 + 3n^2 is O(n^5) or even O(n^4), it is more accurate to say that f(n) is O(n^3).
It is also considered poor taste to include constant factors and lower-order terms in big-Oh notation. For example, it is unfashionable to say that the function 2n^3 is O(4n^3 + 8n log n), although it is completely correct. We should strive to describe the function inside the big-Oh in the simplest terms.
Rules for using big-Oh:
 If f(n) is a polynomial of degree d, then f(n) is O(n^d). We can drop the lower-order terms and constant factors.
 Use the smallest/closest possible class of functions: for example, “2n is O(n)” instead of “2n is O(n^2)”.
 Use the simplest expression of the class: for example, “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”.
Example usages and recurrence relations
Intertwine examples with the analysis of the resulting recurrence relation.
1. How long will the following fragment of code take [nested loops, second loop a nontrivial function of the first]? — something O(n^2).
2. How long will a computer program take, in the worst case, to run binary search? T(n) = T(n/2) + 1. Reminder: we have seen recurrence relations before, as with the Towers of Hanoi problem. Then do another recurrence, say T(n) = 3T(n/2) + 1. Solution (repeated substitution): Θ(n^(log_2 3)) = Θ(n^(1.5849…)). What about T(n) = 3T(n/2) + n? Or T(n) = 3T(n/2) + n^2? [recursion tree; see the numeric sketch after this list]
3. How many gates do you need to multiply two n-bit numbers using grade-school multiplication?
4. How many comparisons to “selection sort” a list of n elements? T(n) = T(n−1) + (n−1)
5. How many comparisons to “merge sort” a list of n elements? T(n) = 2T(n/2) + n
6. What’s the running time of deciding SAT using the obvious algorithm?
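A minimal numeric sketch of item 2 (my own addition, not from the lecture; the helper name T is mine), comparing T(n) = 3T(n/2) + 1 against n^(log_2 3):

   import math

   def T(n):
       """T(n) = 3 T(n/2) + 1 with T(1) = 1; n a power of 2."""
       if n == 1:
           return 1
       return 3 * T(n // 2) + 1

   alpha = math.log2(3)   # = 1.5849..., the exponent from repeated substitution
   for k in [1, 5, 10, 15, 20]:
       n = 2**k
       # The ratio tends to a constant (3/2), confirming T(n) = Theta(n^(log_2 3)).
       print(n, T(n), round(T(n) / n**alpha, 4))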
Warning: don’t think that asymptotic notation is only for talking about the running time or work of algorithms; it is a convenient way of dealing with functions in lots of domains.
algorithm BS(X, A, low, high)   // Look for X in A[low..high]; low, high nonneg ints. Return -1 if absent.
   if (low > high) return -1                   // range of A is empty – element not found
   m ← floor((low + high)/2)
   if (A[m] = X) return m
   if (A[m] < X) return BS(X, A, m + 1, high)  // X not in A[low..m]
   if (A[m] > X) return BS(X, A, low, m – 1)   // X not in A[m..high]
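A runnable translation of the pseudocode above — a sketch in Python, my own, assuming a sorted list and inclusive bounds:

   def binary_search(x, a, low, high):
       """Look for x in sorted list a[low..high]; return an index or -1."""
       if low > high:
           return -1                    # empty range: x is absent
       m = (low + high) // 2
       if a[m] == x:
           return m
       if a[m] < x:
           return binary_search(x, a, m + 1, high)   # x can only be right of m
       return binary_search(x, a, low, m - 1)        # x can only be left of m

   # Each call halves the range, giving T(n) = T(n/2) + 1 = O(lg n).
   print(binary_search(7, [1, 3, 5, 7, 9, 11], 0, 5))   # -> 3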
From Wikipedia: Karatsuba algorithm (1960/1962). The basic step of Karatsuba’s algorithm is a formula that allows us to compute the product of two large numbers x and y using three multiplications of smaller numbers, each with about half as many digits as x or y, plus some additions and digit shifts.
Let x and y be represented as n-digit strings in some base B – say B = 10. For any positive integer m less than n, one can write the two given numbers as

   x = x1 · 10^m + x0
   y = y1 · 10^m + y0,

where x0 and y0 are less than 10^m. The product is then

   xy = (x1 · 10^m + x0)(y1 · 10^m + y0)
      = z2 · 10^(2m) + z1 · 10^m + z0,

where

   z2 = x1 y1
   z1 = x1 y0 + x0 y1
   z0 = x0 y0.

These formulae require four multiplications, and were known to Charles Babbage. Karatsuba observed that xy can be computed in only three multiplications, at the cost of a few extra additions. With z0 and z2 as before, we can calculate

   z1 = (x1 + x0)(y1 + y0) – z2 – z0,

which holds since

   z1 = x1 y0 + x0 y1 = (x1 + x0)(y1 + y0) – x1 y1 – x0 y0.
Example. To compute the product of 12345 and 6789, choose B = 10 and m = 3. Then we decompose the input operands using the resulting base (B^m = 1000) as:

   12345 = 12 · 1000 + 345
   6789  =  6 · 1000 + 789

Only three multiplications, operating on smaller integers, are used to compute three partial results:

   z2 = 12 × 6 = 72
   z0 = 345 × 789 = 272205
   z1 = (12 + 345) × (6 + 789) − z2 − z0 = 357 × 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538

We get the result by just adding these three partial results, shifted accordingly (and then taking carries into account by decomposing these three inputs in base 1000, like for the input operands):

   result = z2 · 10^(2m) + z1 · 10^m + z0, i.e.
   result = 72 · 1000^2 + 11538 · 1000 + 272205 = 83810205.
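The whole method fits in a few lines of code. A recursive sketch (my own, not Wikipedia’s; the helper name karatsuba is mine), which redoes the example above:

   def karatsuba(x, y):
       """Multiply nonnegative ints x, y using three half-size multiplications."""
       if x < 10 or y < 10:
           return x * y                     # base case: a single-digit operand
       m = max(len(str(x)), len(str(y))) // 2
       B_m = 10**m
       x1, x0 = divmod(x, B_m)              # x = x1 * 10^m + x0
       y1, y0 = divmod(y, B_m)              # y = y1 * 10^m + y0
       z2 = karatsuba(x1, y1)
       z0 = karatsuba(x0, y0)
       z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
       return z2 * B_m**2 + z1 * B_m + z0

   print(karatsuba(12345, 6789))            # -> 83810205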
Then: solve the recurrence
   T(n) = 1              if n = 1
   T(n) = 3T(n/2) + n    if n > 1
(will do afresh next class)
There is more than O and Θ. (Table modified from Wikipedia; in the informal definitions, the stated inequality holds for sufficiently large n.)

   Notation    Intuition                                          Informal definition
   -----------------------------------------------------------------------------------------------------
   f ∈ O(g)    f is bounded above by g (up to constant factor)    f(n) ≤ k g(n) for some positive k
   f ∈ Ω(g)    f is bounded below by g (up to constant factor)    f(n) ≥ k g(n) for some positive k
   f ∈ Θ(g)    f is bounded above and below by g                  k1 g(n) ≤ f(n) ≤ k2 g(n) for some positive k1, k2
   f ∈ o(g)    f is dominated by g asymptotically                 f(n) ≤ k g(n) for every fixed positive number k
   f ∈ ω(g)    f dominates g asymptotically                       f(n) ≥ k g(n) for every fixed positive number k
   f ~ g       f is equal to g asymptotically                     f(n)/g(n) → 1

(The formal definitions just add the quantifiers, as in the definitions of O and Θ given earlier.)
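For o, ω, and ~, the conditions above are often stated in limit form (standard definitions, included here for convenience, assuming g(n) > 0):

   f \in o(g)      \iff \lim_{n\to\infty} f(n)/g(n) = 0
   f \in \omega(g) \iff \lim_{n\to\infty} f(n)/g(n) = \infty
   f \sim g        \iff \lim_{n\to\infty} f(n)/g(n) = 1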