
1 Data Structures and Algorithm Analysis

Defining Efficiency
Asymptotic complexity – how running time scales as a function of the size of the input
Two problems:
What is the “input size”?
How do we express the running time? (the big-O notation and its associates)

Input Size
Usually: length (in characters) of input
Sometimes: number of “items” (Ex: numbers in an array, nodes in a graph)
Input size is denoted n.
Which inputs (because there are many inputs at a given size)?
Worst case: look at the input of size n that causes the longest running time; tells us how well the algorithm works on the hardest inputs
Best case: look at the input on which the algorithm is fastest; rarely relevant
Average case: useful in practice, but there are technical problems here (next)

Input Size
Average Case Analysis
Assume inputs are randomly distributed according to some “realistic” distribution 
Compute expected running time

Drawbacks
Often hard to define realistic random distributions
Usually hard to perform math

Input Size
Example: the function find(x, v, n) (find v in the array x[1..n])
Input size: n (the length of the array)
T(n) = “running time for size n”
But T(n) needs clarification:
Worst case T(n): the algorithm runs in at most T(n) time for any x, v
Best case T(n): it takes at least T(n) time for any x, v
Average case T(n): average time over all v and x
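A minimal sketch of such a find makes the three cases concrete (hypothetical signature; the slide's array is 1-indexed, this sketch uses 0-indexing):

```cpp
#include <cassert>
#include <cstddef>

// Linear search: return the index of v in x[0..n-1], or -1 if absent.
// Worst case:  v absent or in the last slot -> n comparisons, T(n) = O(n)
// Best case:   v sits in x[0]               -> 1 comparison,  T(n) = O(1)
// Average case (v present, position uniform) -> about n/2 comparisons
int find(const int x[], int v, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (x[i] == v) return static_cast<int>(i);
    return -1;
}
```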

Amortized analysis

Definition of Order Notation
Upper bound: T(n) = O(f(n))  “big-O”

Exist constants c and n′ such that
T(n) ≤ c·f(n) for all n ≥ n′

Lower bound: T(n) = Ω(g(n))  “Omega”

Exist constants c and n′ such that
T(n) ≥ c·g(n) for all n ≥ n′

Tight bound: T(n) = Θ(f(n))  “Theta”

When both hold:
T(n) = O(f(n))
T(n) = Ω(f(n))

Other notations: o(f), ω(f) – later
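The definition is existential: exhibiting any one valid pair (c, n′) suffices. A quick numeric spot-check of the defining inequality, using T(n) = n^2 + 100n, f(n) = n^2, and the candidate constants c = 2, n′ = 100:

```cpp
#include <cassert>

// Big-O is an existential claim: n^2 + 100n = O(n^2) holds because ONE
// valid pair of constants exists, e.g. c = 2, n' = 100.
// bound_holds(n) checks the defining inequality n^2 + 100n <= 2 * n^2.
bool bound_holds(long long n) {
    return n * n + 100 * n <= 2 * (n * n);
}
```

The inequality may fail below n′ (e.g. at n = 50) without affecting the asymptotic claim.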

We’ll use some specific terminology to describe asymptotic behavior.

There are some analogies here that you might find useful.

Warning about the notation T(n) = O(f(n)): the “=” here is not true equality; it says that T(n) belongs to the class O(f(n)), so it can only be read left to right.

Example: Upper Bound


Using a Different Pair of Constants

Example: Lower Bound


Conventions of Order Notation

Common Functions and Their Names
constant: O(1)
logarithmic: O(log n)
linear: O(n)
log-linear: O(n log n)
quadratic: O(n^2)
exponential: O(c^n) (c is a constant > 1)
hyperexponential: O(2^(2^(···^2))) (a tower of n exponentials)

Other names:
superlinear: Ω(n^c) (c is a constant > 1)
polynomial: O(n^c) (c is a constant > 0)
(the list above is ordered from slowest to fastest growth)


Execution time comparison

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n), for sufficiently large n. BUT, which one of these is really better?
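A numeric spot-check of this ordering (a sketch, not a proof; the inequalities already hold at n = 32, and the gaps only widen as n grows):

```cpp
#include <cassert>
#include <cmath>

// Values of the common growth functions at a given n.
double f_log(double n)   { return std::log2(n); }      // logarithmic
double f_nlogn(double n) { return n * std::log2(n); }  // log-linear
double f_quad(double n)  { return n * n; }             // quadratic
double f_exp(double n)   { return std::pow(2.0, n); }  // exponential
```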

Race III
n + 100n^0.1
vs.
2n + 10 log n

Notice that these just look like n and 2n once n gets large; the larger terms dominate. So the left side is smaller, but not asymptotically smaller.

It’s a TIE!

Race IV
5n^5
vs.
n!

n! is BIG!!!

Race V
n^(-15) · 2^(n/100)
vs.
1000n^15

No matter how you put it, any exponential eventually beats any polynomial, though for these constants the crossover input size is quite large.
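To see where the exponential actually overtakes the polynomial here, compare the two sides in log2 space (a sketch; the exact crossover depends on the constants): n^(-15) · 2^(n/100) exceeds 1000 · n^15 once n/100 - 15·log2(n) > log2(1000) + 15·log2(n).

```cpp
#include <cassert>
#include <cmath>

// Compare n^-15 * 2^(n/100) with 1000 * n^15 via their base-2 logs,
// which avoids floating-point overflow for large n.
bool exponential_ahead(double n) {
    double lhs = n / 100.0 - 15.0 * std::log2(n);          // log2 of n^-15 * 2^(n/100)
    double rhs = std::log2(1000.0) + 15.0 * std::log2(n);  // log2 of 1000 * n^15
    return lhs > rhs;
}
```

For these particular constants the crossover lies in the tens of thousands.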

Race VI
8^(2 log n)
vs.
3n^7 + 7n

We can reduce the left-hand term to n^6 (taking the log base 2), so both sides are polynomial and it's an open-and-shut case.
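Spelling out that reduction (assuming the slide's log is base 2):

```latex
8^{2\log_2 n} = \left(2^{3}\right)^{2\log_2 n} = 2^{6\log_2 n}
             = \left(2^{\log_2 n}\right)^{6} = n^{6}
```

so the race is n^6 versus 3n^7 + 7n, and the degree-6 polynomial grows more slowly.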

The big-O arithmetic
Cont. of big-O rules
Rule 4: If T(n) is a polynomial of degree k, then T(n) = O(n^k),
where n is the variable and k is the highest power
Example: n^5 + 4n^4 + 8n^2 + 9 = O(n^5)
7n^3 + 2n + 19 = O(n^3)
Remarks:
2n^2 = O(n^3) and 2n^2 = O(n^2) are both correct, but O(n^2) is more accurate
The following notations are to be avoided:
– O(2n^2)
– O(5)
– O(n^2 + 10n)
The big-O notation is an upper bound

Big-O – by taking limits
Determine the relative growth rates of f(n) vs. g(n) by computing
lim (n→∞) f(n)/g(n)
(1) if the limit is 0,
then f(n) = O(g(n));
more precisely: f(n) = o(g(n))
(2) if the limit is C ≠ 0,
then f(n) = O(g(n)) and g(n) = O(f(n));
more precisely: f(n) = Θ(g(n))
(3) if the limit is ∞,
then f(n) = Ω(g(n));
more precisely: f(n) = ω(g(n))
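A worked instance of case (2), using f(n) = n^2 + 100n and g(n) = n^2:

```latex
\lim_{n\to\infty}\frac{n^{2}+100n}{n^{2}}
  = \lim_{n\to\infty}\left(1+\frac{100}{n}\right) = 1 \neq 0
\quad\Longrightarrow\quad n^{2}+100n = \Theta(n^{2}).
```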

Eliminate low order terms
Eliminate constant coefficients


General Rules for Computing Time Complexity
(1) O(1) if the execution time does not depend on the input data

(2) Apply Addition if the code segments are placed consecutively

(3) Apply Multiplication if the code segments are nested

(4) O(log n) if the number of executions is reduced in half repeatedly

(5) O(2^n) if the number of executions is doubled repeatedly

(6) Always consider the worst case

Big-O examples
(1)
cin >> n;
for (k = 1; k <= n; k++)
    for (j = 1; j <= k; j++)
        for (m = 1; m <= n + 10; m++)
            a = 10;
Time complexity: O(n^3)

(2)
cin >> n;
for (j = 0; j < myfunc(n); j++) {
    cout << j;
    for (k = 1; k <= n; k++)
        cout << j << k;
}
Time complexity: (a) O(n^2) if myfunc(n) returns n; (b) O(n log n) if myfunc(n) returns log n

Big-O examples (2)
(3)
cin >> n; total = 0;
for (j = 1; j <= n; j++) {
    cin >> x;
    total = x + total;
    if ((total % 2) == 1)
        cout << total;
    else
        for (k = 1; k <= j; k++)
            cout << k;
}
Time complexity: O(n^2)

Big-O examples (3)
(4) float f(float n) { sum = 0; for (j = 1; j …

cin >> n; power = 1;
for (j = 1; j < n; j++)
    power = power * 2;
for (k = 1; k < power; k++)
    cout << k;
Time complexity: O(2^n)

Big-O examples (5)
(8)
cin >> n;
x = 1;
while (x < n)
    x = 2 * x;
Time complexity: O(log n)

Expected running time for the average case:
E[T(n)] = Σ over x ∈ Inputs(n) of Prob(x) · RunTime(x)

Example: Upper Bound
Claim: n^2 + 100n = O(n^2)
Proof: Must find c, n′ such that for all n > n′, n^2 + 100n ≤ c·n^2.
Let’s try setting c = 2. Then
n^2 + 100n ≤ 2n^2
100n ≤ n^2
100 ≤ n
So we can set n′ = 100 and reverse the steps above.

Example: Lower Bound
Claim: n^2 + 100n = Ω(n^2)
Proof: Must find c, n′ such that for all n > n′, n^2 + 100n ≥ c·n^2.
Let’s try setting c = 1. Then
n^2 + 100n ≥ n^2
100n ≥ 0
So we can set n′ = 0 and reverse the steps above.
Thus we can also conclude n^2 + 100n = Θ(n^2).

Conventions of Order Notation
Order notation is not symmetric: write 2n^2 + 4n = O(n^2), but never O(n^2) = 2n^2 + 4n
The expression O(f(n)) = O(g(n)) is equivalent to f(n) = O(g(n))
The expression Ω(f(n)) = Ω(g(n)) is equivalent to f(n) = Ω(g(n))
The right-hand side is a “cruder” version of the left:
18n^2 = O(n^2) = O(n^3)
18n^2 log n = Ω(n^2 log n) = Ω(n^2) = Ω(n)

Simplifying 16n^3 log_8(10n) + 100n^2 (eliminate low-order terms, then constant coefficients):
16n^3 log_8(10n) + 100n^2
⟹ 16n^3 log_8(10n)
⟹ n^3 log_8(10n)
⟹ n^3 [log_8(10) + log_8(n)]
⟹ n^3 log_8(n)
⟹ n^3 log(n) / log(8)
⟹ n^3 log n
Hence 16n^3 log_8(10n) + 100n^2 = O(n^3 log n).