
EECS 3101
Prof. Andy Mirzaian

STUDY MATERIAL:
• [CLRS] chapters 1, 2, 3
• Lecture Note 2

Example
Time complexity shows the dependence of an algorithm's running time on input size.
Let's assume: computer speed = 10^6 IPS, input: a database of size n = 10^6.

Time Complexity     Execution time
n                   1 sec.
n log n             20 sec.
n^2                 12 days
2^n                 40 quadrillion (10^15) years
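A quick back-of-the-envelope check of these figures in Python (an illustrative sketch of mine, not part of the original notes). Note that the 2^n row matches n ≈ 100 rather than n = 10^6; for n = 10^6 the time would be unimaginably larger still.

    import math

    SPEED = 10**6                              # instructions per second, as assumed above

    n = 10**6
    print(n / SPEED)                           # 1.0 second
    print(n * math.log2(n) / SPEED)            # ~20 seconds
    print(n**2 / SPEED / 86400)                # ~11.6 days
    # Already at n = 100, 2^n is hopeless:
    years = 2**100 / SPEED / (365.25 * 86400)
    print(years)                               # ~4.0e16 years = 40 quadrillion years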

Machine Model
Algorithm Analysis:
• should reveal intrinsic properties of the algorithm itself.
• should not depend on any computing platform, programming language, compiler, computer speed, etc.
Random Access Machine (RAM): an idealized computer model.
Elementary steps:
• arithmetic: +, −, ×, ÷
• logic: and, or, not
• comparison: <, ≤, =, ≠, ≥, >
• assigning a value to a scalar variable; input/output of a scalar variable
• following a pointer or array indexing
• on rare occasions: √, sin, cos, …

Time Complexity
Time complexity shows the dependence of an algorithm's running time on input size.
• Worst-case
• Average or expected-case
• Amortized (studied in EECS 4101)
What is it good for?
• Tells us how efficient our design is before its costly implementation.
• Reveals inefficiency bottlenecks in the algorithm.
• Can be used to compare the efficiency of different algorithms that solve the same problem.
• Is a tool to figure out the true complexity of the problem itself: how fast is the "fastest" algorithm for the problem?
• Helps us classify problems by their time complexity.

Input size & Cost measure
Input size:
• Bit size: # bits in the input. This is often cumbersome but sometimes necessary when running time depends on numeric data values, e.g., factoring an n-bit integer.
• Combinatorial size: e.g., # items in an array, a list, a tree; # vertices and # edges in a graph, …
Cost measure:
• Bit cost: charge one unit of time to each bit operation.
• Arithmetic cost: charge one unit of time to each elementary RAM step.
Complexity:
• Bit complexity
• Arithmetic complexity
Unless stated otherwise, we choose the arithmetic cost measure and arithmetic complexity.


Time Complexity Analysis
Worst-case time complexity is a function of input size, T: ℕ → ℝ⁺.
T(n) = maximum # steps by the algorithm on any input of size n.

Algorithm InversionCount(A[1..n])
    count ← 0
    for i ← 1 .. n do
        for j ← i+1 .. n do
            if A[i] > A[j] then count++
    return count
end

T(n) = 3n² + 6n + 4 = Θ(n²).
Which form do you prefer? The 2nd is simpler and platform independent.
Incidentally, this can be improved to Θ(n log n) time by divide-&-conquer.
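For concreteness, here is a runnable Python sketch of both versions (my illustration; the pseudocode above is the notes' form): the Θ(n²) count, and a Θ(n log n) divide-&-conquer variant that counts inversions while merge-sorting.

    def inversion_count_quadratic(A):
        # Direct translation of InversionCount: Theta(n^2) comparisons.
        count = 0
        n = len(A)
        for i in range(n):
            for j in range(i + 1, n):
                if A[i] > A[j]:
                    count += 1
        return count

    def inversion_count_mergesort(A):
        # Theta(n log n): whenever an element of the right half is merged before
        # the remaining left-half elements, it is inverted with all of them.
        def sort_count(a):
            if len(a) <= 1:
                return a, 0
            mid = len(a) // 2
            left, cl = sort_count(a[:mid])
            right, cr = sort_count(a[mid:])
            merged, i, j, cross = [], 0, 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
                    cross += len(left) - i   # right[j] precedes all remaining left items
            merged += left[i:] + right[j:]
            return merged, cl + cr + cross
        return sort_count(list(A))[1]

    A = [11, 4, 5, 2, 15, 7]
    assert inversion_count_quadratic(A) == inversion_count_mergesort(A) == 7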

Algorithm SUM(A[1..n])
    S ← 0
    for i ← 1 .. n do
        S ← S + A[i]
    return S
end
Worst case: Θ(n) time. LINEAR.

Caution!

Algorithm PRIME(P)    § integer P > 1
    for i ← 2 .. P−1 do
        if P mod i = 0 then return NO
    return YES
end
Worst case: Θ(P) time. Linear?
# input bits n ≈ log P, so P ≈ 2ⁿ
⟹ T(n) = Θ(P) = Θ(2ⁿ). EXPONENTIAL!

PRIMALITY TESTING: used in cryptography …
It was a long-standing open problem whether there is any deterministic algorithm that solves this problem in time polynomial in the input bit size. This was eventually answered affirmatively by:
M. Agrawal, N. Kayal, N. Saxena, "PRIMES is in P," Annals of Mathematics 160, pp. 781-793, 2004.
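A hedged Python sketch of the two routines above (mine, not the notes'). The point: PRIME's loop length is governed by the numeric value P ≈ 2ⁿ, not by the bit count n.

    def SUM(A):
        # Theta(n) elementary steps for n = len(A): linear in combinatorial size.
        s = 0
        for x in A:
            s += x
        return s

    def PRIME(P):
        # Trial division as on the slide: Theta(P) iterations in the worst case.
        # In bit size n = P.bit_length(), that is Theta(2^n): exponential.
        for i in range(2, P):
            if P % i == 0:
                return False
        return True

    P = 104729                       # a prime, so the loop runs in full
    print(P.bit_length(), PRIME(P))  # 17 True -- after P - 2 (about 2^17) divisions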

Asymptotic Notations

T(n) = Θ( f(n) )
T(n) = 23n³ + 5n² log n + 7n log² n + 4 log n + 6
  → drop lower order terms:                23n³
  → drop constant multiplicative factor:   T(n) = Θ(n³)
Why do we want to do this?
1. Asymptotically (at very large values of n) the leading term largely determines function behaviour.
2. With a new computer technology (say, 10 times faster) the leading coefficient will change (be divided by 10). So, that coefficient is technology dependent anyway!
3. This simplification is still capable of distinguishing between important but distinct complexity classes, e.g., linear vs. quadratic, or polynomial vs. exponential.

Asymptotic Notations: Θ, O, Ω, o, ω
Rough, intuitive meaning worth remembering:

Theta           f(n) = Θ(g(n))    f(n) ≈ c·g(n)
Big Oh          f(n) = O(g(n))    f(n) ≤ c·g(n)
Big Omega       f(n) = Ω(g(n))    f(n) ≥ c·g(n)
Little Oh       f(n) = o(g(n))    f(n) ≪ c·g(n)
Little Omega    f(n) = ω(g(n))    f(n) ≫ c·g(n)

Asymptotics by ratio limit
Let L = lim_{n→∞} f(n)/g(n). If L exists, then:

Theta           f(n) = Θ(g(n))    0 < L < ∞
Big Oh          f(n) = O(g(n))    L < ∞
Big Omega       f(n) = Ω(g(n))    0 < L
Little Oh       f(n) = o(g(n))    L = 0
Little Omega    f(n) = ω(g(n))    L = ∞

∀ & ∃ quantifiers
Logic: "and" commutes with "and", "or" commutes with "or", but "and" & "or" do not commute. Similarly, same type quantifiers commute, but different types do not:
∀x ∀y: P(x,y) = ∀y ∀x: P(x,y) = ∀x,y: P(x,y)
∃x ∃y: P(x,y) = ∃y ∃x: P(x,y) = ∃x,y: P(x,y)
∀x ∃y: P(x,y) ≠ ∃y ∀x: P(x,y)
Counter-example for the 3rd:
LHS: for every person x, there is a date y, such that x is born on date y.
RHS: there is a date y, such that for every person x, x is born on date y.
Each person has a birth date, but not every person is born on the same date!
Give natural examples for the following to show their differences:
1. ∀x ∃y ∀z: P(x,y,z)
2. ∀x ∀z ∃y: P(x,y,z)
3. ∃y ∀x ∀z: P(x,y,z)

Theta: Asymptotic Tight Bound
f(n) = Θ(g(n)) ⟺ ∃c₁, c₂ ∈ ℝ⁺, ∃n₀ > 0: ∀n ≥ n₀, c₁·g(n) ≤ f(n) ≤ c₂·g(n).
[Figure: for all n ≥ n₀, the curve f(n) lies between c₁·g(n) and c₂·g(n).]

Theta: Example
f(n) = Θ(g(n)) ⟺ ∃c₁, c₂, n₀ > 0: ∀n ≥ n₀, c₁·g(n) ≤ f(n) ≤ c₂·g(n)

f(n) = 23n³ − 10n² log n + 7n + 6.  Claim: f(n) = Θ(n³).
Factor out the leading term:
f(n) = [23 − (10 log n)/n + 7/n² + 6/n³]·n³
∀n ≥ 10: f(n) ≤ (23 + 0 + 7/100 + 6/1000)·n³ = 23.076·n³ < 24·n³
∀n ≥ 10: f(n) ≥ (23 − log 10 + 0 + 0)·n³ > (23 − log 16)·n³ = 19·n³
∀n ≥ 10: 19n³ ≤ f(n) ≤ 24n³
(Take n₀ = 10, c₁ = 19, c₂ = 24, g(n) = n³.)
Now you see why we can drop lower order terms & the constant multiplicative factor.
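The claimed sandwich is also easy to check numerically; a small Python sketch (my own check, with log taken base 2 as in the slide's arithmetic):

    import math

    def f(n):
        return 23*n**3 - 10*n**2*math.log2(n) + 7*n + 6

    for n in range(10, 100001):
        assert 19*n**3 <= f(n) <= 24*n**3
    print("19 n^3 <= f(n) <= 24 n^3 holds for all 10 <= n <= 100000")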

Big Oh: Asymptotic Upper Bound
f(n) = O(g(n)) ⟺ ∃c ∈ ℝ⁺, ∃n₀ > 0: ∀n ≥ n₀, f(n) ≤ c·g(n).
[Figure: for all n ≥ n₀, the curve f(n) stays below c·g(n).]

Big Omega: Asymptotic Lower Bound
f(n) = Ω(g(n)) ⟺ ∃c ∈ ℝ⁺, ∃n₀ > 0: ∀n ≥ n₀, c·g(n) ≤ f(n).
[Figure: for all n ≥ n₀, the curve f(n) stays above c·g(n).]

Little oh: Non-tight Asymptotic Upper Bound
f(n) = o(g(n)) ⟺ ∀c ∈ ℝ⁺ (no matter how small), ∃n₀ > 0: ∀n ≥ n₀, f(n) < c·g(n).
[Figure: f(n) eventually falls below c·g(n), for every positive c.]

Little omega: Non-tight Asymptotic Lower Bound
f(n) = ω(g(n)) ⟺ ∀c ∈ ℝ⁺ (no matter how large), ∃n₀ > 0: ∀n ≥ n₀, f(n) > c·g(n).
[Figure: f(n) eventually rises above c·g(n), for every positive c.]

Definitions of Asymptotic Notations

f(n) = Θ(g(n))  ⟺  ∃c₁, c₂ > 0, ∃n₀ > 0: ∀n ≥ n₀, c₁·g(n) ≤ f(n) ≤ c₂·g(n)
f(n) = O(g(n))  ⟺  ∃c > 0, ∃n₀ > 0: ∀n ≥ n₀, f(n) ≤ c·g(n)
f(n) = Ω(g(n))  ⟺  ∃c > 0, ∃n₀ > 0: ∀n ≥ n₀, c·g(n) ≤ f(n)
f(n) = o(g(n))  ⟺  ∀c > 0, ∃n₀ > 0: ∀n ≥ n₀, f(n) < c·g(n)
f(n) = ω(g(n))  ⟺  ∀c > 0, ∃n₀ > 0: ∀n ≥ n₀, c·g(n) < f(n)

Example Proof
CLAIM: n² ≠ o(n).
Recall: f(n) = o(g(n)) ⟺ ∀c > 0, ∃n₀ > 0: ∀n ≥ n₀, f(n) < c·g(n).
Proof: Need to show the negation: ¬( ∀c > 0, ∃n₀ > 0: ∀n ≥ n₀ (n² < cn) ).
Move ¬ inside: ∃c > 0, ∀n₀ > 0, ∃n ≥ n₀ (n² ≥ cn).
Work inside-out: n² ≥ cn ⟺ n ≥ c.
∃n (n ≥ n₀ and n ≥ c): choose n = max{n₀, c}.
∀n₀ > 0: the choice above works for every n₀.
∃c > 0: choose c = 1.
Now browse outside-in. Everything OK!

Any short cuts?
f(n) = o(g(n)) ⟺ ∀c > 0, ∃n₀ > 0: ∀n ≥ n₀, f(n) < c·g(n)
If you are a starter on these notations, you are advised to exercise your predicate logic muscles for a while to get the hang of it! However, in the next few slides, we will study some FACTS & RULES that help us manipulate and reason about asymptotic notations in most common situations with much ease.

A Classification of Functions
Recall: X < Y ⟺ log X < log Y. Also, log(X^Y) = Y·log X.
• o(1): e.g., 1/ω(1), (log n)⁻³, n⁻², 2^(−3n), 0.
• O(1): asymptotically upper-bounded by a constant, e.g., f(n) = 5 + 1/n, (2n + 1)/(6n + log n).
• Θ(1)
• Poly-Logarithmic: f(n) = log^Θ(1) n, e.g., log³ n.
• Polynomial: f(n) = n^Θ(1), i.e., log f(n) = Θ(1)·log n = Θ(log n), e.g., f(n) = 3n + 1, n^1.5·log⁻³ n, n^100.
• Super-Polynomial but Sub-Exponential: f(n) = n^ω(1) ∩ 2^o(n), i.e., log f(n) = ω(log n) ∩ o(n), e.g., f(n) = n^(log n), 2^√n.
• Exponential or more: f(n) = 2^Ω(n), i.e., log f(n) = Ω(n), e.g., f(n) = 2^(3n log n), 2ⁿ, 2^(2ⁿ).

1. Order of Growth Rules
Below X ≪ Y means X = o(Y):
o(1) ≪ Θ(1) ≪ Poly-Log ≪ Polynomial ≪ Exponential or more
o(1) ≪ Θ(1) ≪ log^Θ(1) n ≪ n^Θ(1) ≪ 2^Ω(n)
Examples: log^100 n = o(n^0.01), n^100 = o(1.1^(3n+1)).

Asymptotic Relation Rules
2. Reflexive: f(n) = Θ(f(n)), f(n) = O(f(n)), f(n) = Ω(f(n)); but f(n) ≠ o(f(n)), f(n) ≠ ω(f(n)).
3. Symmetric [only Θ]: f(n) = Θ(g(n)) ⟺ g(n) = Θ(f(n)).
4. Skew Symmetric: f(n) = O(g(n)) ⟺ g(n) = Ω(f(n)); f(n) = o(g(n)) ⟺ g(n) = ω(f(n)).
5. Transitive: f(n) = O(g(n)) & g(n) = O(h(n)) ⟹ f(n) = O(h(n)); works for Θ, Ω, o, ω too.
6. f(n) = Θ(g(n)) ⟺ f(n) = O(g(n)) & f(n) = Ω(g(n)).
7. f(n) = Θ(g(n)) ⟺ f(n) = O(g(n)) & g(n) = O(f(n)).
8. f(n) = o(g(n)) ⟹ f(n) = O(g(n)) & f(n) ≠ Ω(g(n)) ⟹ f(n) ≠ Θ(g(n));
   f(n) = ω(g(n)) ⟹ f(n) = Ω(g(n)) & f(n) ≠ O(g(n)) ⟹ f(n) ≠ Θ(g(n)).
9. f(n) = g(n) + o(g(n)) ⟹ f(n) = Θ(g(n))   [can hide lower order terms].

10. Arithmetic Rules
Given: f(n) = O(F(n)), g(n) = O(G(n)), h(n) = Θ(H(n)), constant d > 0.
Sum:      f(n) + g(n) = O(F(n)) + O(G(n)) = O(max{ F(n), G(n) }).
Product:  f(n) · g(n) = O(F(n)) · O(G(n)) = O( F(n) · G(n) );
          f(n) · h(n) = O(F(n)) · Θ(H(n)) = O( F(n) · H(n) ).
Division: f(n)/h(n) = O(F(n))/Θ(H(n)) = O( F(n)/H(n) ).
Power:    f(n)^d = O( F(n)^d ).
In addition to O, these rules also apply to Θ, Ω, o, ω.
Examples:
• 16n² + 9n log n = O(n²) + O(n log n) = O(max{n², n log n}) = O(n²).
• n² = Θ(n²), log n = o(n^0.01) ⟹ n² log n = o(n^2.01).
• (3n + 7 log n)/(4n log n + 9) = Θ(n)/Θ(n log n) = Θ(n/(n log n)) = Θ(1/log n).
• 3n + 1 = Θ(n) ⟹ (3n + 1)^3.7 = Θ(n^3.7).
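The third example can be sanity-checked numerically; in this Python sketch (mine, not the notes'), f(n)·log₂ n approaches 3/4, the ratio of the leading coefficients, consistent with f(n) = Θ(1/log n):

    import math

    def f(n):
        return (3*n + 7*math.log2(n)) / (4*n*math.log2(n) + 9)

    for n in [10**3, 10**6, 10**9, 10**12]:
        print(n, f(n) * math.log2(n))   # tends to 0.75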

Proof of Rule of Sum
f(n) = O(F(n)) & g(n) = O(G(n)) ⟹ f(n) + g(n) = O(max{F(n), G(n)})
Proof: From the premise we have:
∃n₁, c₁ > 0: ∀n ≥ n₁, f(n) ≤ c₁·F(n)
∃n₂, c₂ > 0: ∀n ≥ n₂, g(n) ≤ c₂·G(n)
Now define n₀ = max{n₁, n₂} > 0 and c = c₁ + c₂ > 0. We get: ∀n ≥ n₀,
f(n) + g(n) ≤ c₁·F(n) + c₂·G(n)
            ≤ c₁·max{F(n), G(n)} + c₂·max{F(n), G(n)}
            = (c₁ + c₂)·max{F(n), G(n)} = c·max{F(n), G(n)}
We have shown:
∃n₀, c > 0: ∀n ≥ n₀, f(n) + g(n) ≤ c·max{F(n), G(n)}.
Therefore, f(n) + g(n) = O( max{F(n), G(n)} ).

Where did we go wrong!
Θ(n) = Θ(1) + Θ(1) + Θ(1) + ⋯ + Θ(1) + Θ(1) + Θ(1)      [n Θ(1)'s]
     = Θ(1) + Θ(1) + Θ(1) + ⋯ + Θ(1) + Θ(1)
     = Θ(1) + Θ(1) + Θ(1) + ⋯ + Θ(1)
     = ⋯
     = Θ(1) + Θ(1)
     = Θ(1)
Do you buy this? Where did we go wrong?
META RULE: Within an expression, you can apply the asymptotic simplification rules only a constant number of times!
WHY?

Algorithm Complexity
vs
Problem Complexity

Algorithm Time Complexity
T(n) = worst-case time complexity of algorithm ALG.
• T(n) = O(f(n)):
You must prove that on every input of size n, for all sufficiently large n, ALG takes at most O(f(n)) time. That is, even the worst input of size n cannot force ALG to require more than O(f(n)) time.
• T(n) = Ω(f(n)):
Demonstrate the existence of an input instance Iₙ of size n, for infinitely many n, for which ALG requires at least Ω(f(n)) time.
Just one n or a finite # of sample n's won't do, because for all n beyond those samples, ALG could go on cruise speed!
• T(n) = Θ(f(n)): Do both!

Problem Time Complexity
T(n) = worst-case time complexity of problem P is the time complexity of the "best" algorithm that solves P.
• T(n) = O(f(n)):
Need to demonstrate (the existence of) an algorithm ALG that correctly solves problem P, and that the worst-case time complexity of ALG is at most O(f(n)).
• T(n) = Ω(f(n)):
Prove that for every algorithm ALG that correctly solves every instance of problem P, the worst-case time complexity of ALG must be at least Ω(f(n)). This is usually the much harder task.
• T(n) = Θ(f(n)): Do both!

Example Problem: Sorting
Some sorting algorithms and their worst-case time complexities (there are infinitely many sorting algorithms!):

Quick-Sort:      Θ(n²)
Insertion-Sort:  Θ(n²)
Selection-Sort:  Θ(n²)
Merge-Sort:      Θ(n log n)
Heap-Sort:       Θ(n log n)

Shown in Slide 5: essentially every algorithm that solves the SORTING problem requires at least Ω(n log n) time in the worst case.
So, Merge-Sort and Heap-Sort are worst-case optimal, and SORTING complexity is Θ(n log n).

Insertion Sort
an incremental algorithm

11  4  5  2 15  7
 4 11  5  2 15  7
 4  5 11  2 15  7
 2  4  5 11 15  7
 2  4  5 11 15  7
 2  4  5  7 11 15

Insertion Sort: Time Complexity

Algorithm InsertionSort(A[1..n])
Pre-Condition: A[1..n] is an array of n numbers
Post-Condition: A[1..n] permuted in sorted order
1.  for i ← 2 .. n do
2.      LI: A[1..i−1] is sorted, A[i..n] is untouched.
        § insert A[i] into sorted prefix A[1..i−1] by right-cyclic-shift:
3.      key ← A[i]
4.      j ← i − 1
5.      while j > 0 and A[j] > key do
6.          A[j+1] ← A[j]
7.          j ← j − 1
8.      end-while
9.      A[j+1] ← key
    end-for
end

Let tᵢ = # iterations of the while loop in the i-th iteration of the for loop; that iteration costs Θ(1 + tᵢ). So
T(n) = Θ( n + Σ_{i=2}^{n} tᵢ ).
Worst case: tᵢ = i iterations (reverse sorted input). Then
T(n) = Θ( n + Σ_{i=2}^{n} i ) = Θ(n + n²) = Θ(n²),
since (★) Σ_{i=2}^{n} i = n(n+1)/2 − 1 = Θ(n²).
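For concreteness, a runnable Python version of the same algorithm (an illustrative sketch; the pseudocode above is the authoritative form), instrumented to count the total while-loop iterations Σ tᵢ. On reverse-sorted input the count is n(n−1)/2 = Θ(n²); on sorted input it is 0 and the run is Θ(n).

    def insertion_sort(A):
        # Sorts A in place; returns the total # of while-loop iterations (sum of t_i).
        shifts = 0
        for i in range(1, len(A)):            # the slide's i = 2..n, 0-indexed here
            key = A[i]
            j = i - 1
            while j >= 0 and A[j] > key:      # t_i iterations
                A[j + 1] = A[j]               # right-cyclic shift
                j -= 1
                shifts += 1
            A[j + 1] = key
        return shifts

    n = 100
    print(insertion_sort(list(range(n, 0, -1))))   # 4950 = n(n-1)/2, worst case
    print(insertion_sort(list(range(n))))          # 0, best case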

Exercises

1. We listed a number of facts and showed the proofs of some of them. Prove the rest.
2. Give the most simplified answer to each of the following with a brief explanation.
a) f(n) = 4n³ + 6n² log n + 1200 = Θ(?)
b) f(n) = (5n⁶ + O(n³ log n)) / (3n² log n + O(n)) = Θ(?)
c) f(n) = 10⁹·n² + 10^(10^10)·n log n + 3·2ⁿ = Θ(?)
d) f(n) = (n log n)^(log log n) = Θ(?)
e) Does f(n) = o(n) + Θ(n² log n) imply f(n) = Ω(n log log n)?
f) Indicate whether or not each function below is ω(n log n): 3n + 2, 5n²/log³ n, log( (n²)! ).
g) Let f(n) = 3n⁶/log¹⁵ n + 7n√n. Is f(n) = n^Θ(1)?
3. Are there any differences between Θ(1)ⁿ, Θ(2ⁿ), and 2^Θ(n)? Explain.
4. Are any of the following implications always true? Prove or give a counter-example.
a) f(n) = Θ(g(n)) ⟹ f(n) = c·g(n) + o(g(n)), for some real constant c > 0.
b) f(n) = Θ(g(n)) ⟹ f(n) = c·g(n) + O(g(n)), for some real constant c > 0.
5. Show that 2^O(log log n) = log^O(1) n.
6. Prove or disprove: n⁵ = Θ( n^(5 + o(1)) ).

7. [CLRS: Problem 3-4, p. 62] Asymptotic notation properties
Let f(n) and g(n) be asymptotically positive functions. Prove or disprove each of the following conjectures.
(a) f(n) = O(g(n)) implies g(n) = O(f(n)).
(b) f(n) + g(n) = Θ( min( f(n), g(n) ) ).
(c) f(n) = O(g(n)) implies log(f(n)) = O(log(g(n))), where log(g(n)) ≥ 1 and f(n) ≥ 1 for all sufficiently large n.
(d) f(n) = O(g(n)) implies 2^f(n) = O( 2^g(n) ).
(e) f(n) = O( (f(n))² ).
(f) f(n) = O(g(n)) implies g(n) = Ω(f(n)).
(g) f(n) = Θ( f(n/2) ).
(h) f(n) + o(f(n)) = Θ(f(n)).

8. We have: 2^(2^(log log n)) = 2^(log n) = n, and 2^(2^((log log n) − 1)) = 2^((2^(log log n))/2) = 2^((log n)/2) = √n.
So, is 2^(2^⌊log log n⌋) equal to Θ(n) or Θ(√n)? Which one?

9. Prove or disprove: The following implications hold.
(a) f(n) = Θ(g(n)) ⟹ f(n)^h(n) = Θ( g(n)^h(n) ).
(b) f(n) = Θ(g(n)) and h(n) = Θ(1) ⟹ f(n)^h(n) = Θ( g(n)^h(n) ).
[Note: Compare with the "Power Rule".]

10. Is it true that for every pair of functions f(n) and g(n) we must have at least one of: f(n) = O(g(n)) or g(n) = O(f(n))?
[Hint: Consider f(n) = n for odd n, n² for even n; g(n) = n³ for odd n, n for even n.]

11. Demonstrate two functions f: ℕ → ℕ and g: ℕ → ℕ with the following properties: both f and g are strictly monotonically increasing, f ∉ O(g), and g ∉ O(f).
12. Show that (1 + 1/Θ(n))^Θ(n) = Θ(1).
[Hint: lim_{x→∞} (1 + 1/x)^x = e ≈ 2.7182 is Napier's constant.]
13. Rank the following functions by asymptotic order of growth. Put them in equivalence classes based on Θ-equality. That is, f and g are in the same class iff f = Θ(g), and f is in an earlier class than g if f = o(g). [Hint: see also the previous exercise.]

5,  log n,  √(log n),  n log n,  (log n)ⁿ,  log(n!),  (log n)!,
n³ log n,  n^(log log n),  6n³ + 5,
4ⁿ,  4^√n,  2^(2n³),  2^(2^(n³)),  n^(log n),
(n⁴ + 4n + 1)/(n² + 1),  (n⁴ + 1)/(2n),  (2n² + 4n + 1)/(2n + 1),  (1 + 1/n)ⁿ,  2^(n−4)

14. We say a function f: ℕ → ℝ⁺ is asymptotically additive if f(n) + f(m) = Θ(f(n+m)). Which of the following functions are asymptotically additive? Justify your answer.
i) Logarithm: log n. (Assume n > 1.)
ii) Monomial: n^d, for any real constant d ≥ 0 (this includes, e.g., √n with d = 0.5).
iii) Harmonic: 1/n. (Assume n > 0.)
iv) Exponential: aⁿ, for any real constant a > 1.
Prove or disprove: If f and g are asymptotically additive and are positive, then so is:
i) a·f, for any real constant a > 0,
ii) f + g,
iii) f − g, assuming (f − g)(n) is always positive. Extra credit is given if f − g is monotonically increasing.
iv) f·g.
v) f^g.
15. The Set Disjointness Problem is defined as follows: We are given two sets A and B, each containing n arbitrary numbers. We want to determine whether their intersection is empty, i.e., whether they have any common element.
a) Design the most efficient algorithm you can to solve this problem and analyze worst-case time complexity of your algorithm.
b) What is the time complexity of the Set Disjointness Problem itself?
[You are not yet equipped to answer part (b).
The needed methods will be covered in Lecture Slide 5.]

END