COMP9024: Data Structures and Algorithms
Analysis of Algorithms
1
Contents
• Big-oh notation
• Big-theta notation
• Big-omega notation
• Asymptotic algorithm analysis
2
3
Analysis of Algorithms
[Diagram: Input → Algorithm → Output]
An algorithm is a step-by-step procedure for solving a problem
in a finite amount of time.
Running Time
• Most algorithms transform
input objects into output
objects.
• The running time of an
algorithm typically grows with
the input size.
• Average case time is often
difficult to determine.
• We focus on the worst-case
running time.
• Easier to analyze
• Crucial to applications such as
games, finance and robotics
[Chart: running time vs input size (1000–4000), showing best-case, average-case and worst-case curves]
4
Experimental Studies
• Write a program
implementing the algorithm
• Run the program with inputs
of varying size and
composition
• Use a method like
System.currentTimeMillis() to get
an accurate measure of the
actual running time
• Plot the results
[Chart: measured time in ms (0–9000) vs input size (0–100)]
5
Limitations of Experiments
• It is necessary to implement the algorithm, which may be
difficult
• Results may not be indicative of the running time on other
inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and
software environments must be used
6
Theoretical Analysis
• Uses a high-level description of the algorithm instead of an
implementation
• Characterizes running time as a function of the input size,
n.
• Takes into account all possible inputs
• Allows us to evaluate the speed of an algorithm
independent of the hardware/software environment
7
Pseudocode
• High-level description of
an algorithm
• More structured than
English prose
• Less detailed than a
program
• Preferred notation for
describing algorithms
• Hides program design
issues
Algorithm arrayMax(A, n)
{
    Input: array A of n integers
    Output: maximum element of A

    currentMax = A[0]
    for (i = 1; i < n; i++)
        if (A[i] > currentMax)
            currentMax = A[i]
    return currentMax
}
Example: find max
element of an array
8
C-Like Pseudocode Details
• Control flow
• if … [else …]
• while …
• do … while …
• for …
• Function declaration
Algorithm functionName (arg [, arg…])
Input …
Output …
• Function call
functionName (arg [, arg…])
• Return value
return expression
• Expressions
= Assignment
== Equality testing
Superscripts and other mathematical formatting (e.g., n²) are allowed
9
The Random Access Machine (RAM) Model
• A CPU
• A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
[Diagram: memory cells numbered 0, 1, 2, …]
Memory cells are numbered, and accessing any cell in memory takes unit time.
10
Seven Important Functions
• Seven functions that
often appear in
algorithm analysis:
• Constant ≈ 1
• Logarithmic ≈ log n
• Linear ≈ n
• N-Log-N ≈ n log n
• Quadratic ≈ n²
• Cubic ≈ n³
• Exponential ≈ 2ⁿ
• In a log-log chart, the
slope of the line
corresponds to the
growth rate of the
function
[Log-log chart of T(n) vs n: linear, quadratic and cubic growth appear as straight lines of increasing slope]
11
Primitive Operations
• Basic computations performed by
an algorithm
• Identifiable in pseudocode
• Largely independent from the
programming language
• Exact definition not important (we
will see why later)
• Assumed to take a constant amount
of time in the RAM model
• Examples:
• Evaluating an
expression
• Assigning a value to a
variable
• Indexing into an array
• Calling a method
• Returning from a
method
12
Counting Primitive Operations
• By inspecting the pseudocode, we can determine the
maximum number of primitive operations executed by an
algorithm, as a function of the input size
Algorithm arrayMax(A, n)
{                                          # operations
    currentMax = A[0]                      1
    for (i = 1; i < n; i++)                n
        if (A[i] > currentMax)             n − 1
            currentMax = A[i]              n − 1
    // increment counter i                 n − 1
    return currentMax                      1
}                                          Total: 4n − 1
13
Estimating Running Time
• Algorithm arrayMax executes 4n − 1 primitive operations in
the worst case. Define:
a = Time taken by the fastest primitive operation
b = Time taken by the slowest primitive operation
• Let T(n) be worst-case time of arrayMax. Then
a (4n − 1) ≤ T(n) ≤ b(4n − 1)
• Hence, the running time T(n) is bounded by two linear
functions
14
Growth Rate of Running Time
• Changing the hardware/ software environment
• Affects T(n) by a constant factor, but
• Does not alter the growth rate of T(n)
• The linear growth rate of the running time T(n) is an
intrinsic property of algorithm arrayMax
15
Constant Factors
• The growth rate is not
affected by
• constant factors or
• lower-order terms
• Examples
• 10²n + 10⁵ is a linear function
• 10⁵n² + 10⁸n is a quadratic function
[Log-log chart: 10²n + 10⁵ grows parallel to n; 10⁵n² + 10⁸n grows parallel to n²]
16
Big-Oh Notation
• Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that
f(n) ≤ c·g(n) for n ≥ n₀
• Example: 2n + 10 is O(n)
• 2n + 10 ≤ cn
• (c − 2)n ≥ 10
• n ≥ 10/(c − 2)
• Pick c = 3 and n₀ = 10
[Log-log chart of n, 2n + 10 and 3n: 3n dominates 2n + 10 for n ≥ 10]
17
Big-Oh Example
• Example: the function n² is not O(n)
• n² ≤ cn
• n ≤ c
• The above inequality cannot be satisfied, since c must be a constant
[Log-log chart of n², 100n, 10n and n: n² eventually exceeds cn for any constant c]
18
More Big-Oh Examples
7n − 2
7n − 2 is O(n)
need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀
this is true for c = 7 and n₀ = 1
3n³ + 20n² + 5
3n³ + 20n² + 5 is O(n³)
need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀
this is true for c = 4 and n₀ = 21
3 log n + 5
3 log n + 5 is O(log n)
need c > 0 and n₀ ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n₀
this is true for c = 8 and n₀ = 2
19
Big-Oh and Growth Rate
• The big-Oh notation gives an upper bound on the growth
rate of a function
• The statement “f(n) is O(g(n))” means that the growth rate
of f(n) is no more than the growth rate of g(n)
• We can use the big-Oh notation to rank functions according
to their growth rate
                     f(n) is O(g(n))    g(n) is O(f(n))
g(n) grows more      Yes                No
f(n) grows more      No                 Yes
Same growth          Yes                Yes
20
Big-Oh Rules (1/3)
• If f(n) is a polynomial of degree d, then f(n) is O(nᵈ), i.e.,
1. Drop lower-order terms
2. Drop constant factors
• Use the smallest possible class of functions
• Say “2n is O(n)” instead of “2n is O(n²)”
• Use the simplest expression of the class
• Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
21
Big-Oh Rules (2/3)
5n⁵ + 20n⁴ − 3n³log²n + 10⁷n is O(n⁵)
Step 1: Drop lower-order terms: 5n⁵
Step 2: Drop constant factors: n⁵
Therefore, 5n⁵ + 20n⁴ − 3n³log²n + 10⁷n is O(n⁵).
10n⁵ + 200n⁴log n − 3n³log²n + 5000n
Step 1: Drop lower-order terms: 10n⁵
Step 2: Drop constant factors: n⁵
Therefore, 10n⁵ + 200n⁴log n − 3n³log²n + 5000n is O(n⁵).
10·2ⁿ + 200n⁴⁰⁰ − 3n³log²n + 500n
Step 1: Drop lower-order terms: 10·2ⁿ
Step 2: Drop constant factors: 2ⁿ
Therefore, 10·2ⁿ + 200n⁴⁰⁰ − 3n³log²n + 500n is O(2ⁿ). 22
Big-Oh Rules (3/3)
1 + 2³ + 3³ + … + n³
Step 1: Drop lower-order terms: n³
Step 2: Drop constant factors: n³
This would suggest that 1 + 2³ + 3³ + … + n³ is O(n³), but that is wrong:
the drop rules are only applicable to an arithmetic expression with a
constant number of terms, and here the number of terms grows with n.
The correct bound: 1 + 2³ + 3³ + … + n³ < n·n³ = n⁴, so the sum is O(n⁴).
23
More Examples of Big-Oh
• 100m²n + 20mn log n is O(m²n + mn log n)
• 10mn + 2m²n² + mn log(mn) is O(m²n²)
• O(mn) + O(m log n) + O(n log m) is O(mn)
• O(mn) + O(mn(log m + log n)) + O(n³) is O(mn(log m + log n) + n³)
24
Asymptotic Algorithm Analysis
• The asymptotic analysis of an algorithm determines the
running time in big-Oh notation.
• To perform the asymptotic analysis
• We find the worst-case number of primitive operations
executed as a function of the input size
• We express this function with big-Oh notation
• Example:
• We determine that algorithm arrayMax executes at most 4n −
1 primitive operations
• We say that algorithm arrayMax “runs in O(n) time”
• Since constant factors and lower-order terms are eventually
dropped anyhow, we can disregard them when counting
primitive operations
25
Computing Prefix Averages
• We further illustrate
asymptotic analysis with two
algorithms for prefix averages
• The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
A[i] = (X[0] + X[1] + … + X[i])/(i + 1)
• Computing the array A of prefix
averages of another array X has
applications to financial
analysis
[Bar chart: array X = (21, 23, 25, 31, 20, 18, 16) and its prefix averages A = (21, 22, 23, 25, 24, 23, 22)]
26
Prefix Averages (Quadratic)
The following algorithm computes prefix averages in
quadratic time by applying the definition
Algorithm prefixAverages1(X, n)
{
    Input: array X of n integers
    Output: array A of prefix averages of X     # operations
    A = new array of n integers                 n
    for (i = 0; i < n; i++)                     n + 1
    {
        s = X[0]                                n
        for (j = 1; j ≤ i; j++)                 1 + 2 + … + (n − 1) + n
            s = s + X[j]                        1 + 2 + … + (n − 1)
        A[i] = s / (i + 1)                      n
    }
    return A                                    1
}
27
Arithmetic Progression
• The total number of primitive operations of prefixAverages1 is
n + (n + 1) + n + (1 + 2 + … + n) + (1 + 2 + … + (n − 1)) + n + 1
= n² + 4n + 2 = O(n²).
• Thus, algorithm prefixAverages1 runs in O(n²) time; we say the
time complexity of prefixAverages1 is O(n²).
28
Prefix Averages (Linear)
The following algorithm computes prefix averages in
linear time by keeping a running sum
Algorithm prefixAverages2(X, n)
{
    Input: array X of n integers
    Output: array A of prefix averages of X     # operations
    A = new array of n integers                 n
    s = 0                                       1
    for (i = 0; i < n; i++)                     n
    {
        s = s + X[i]                            n
        A[i] = s / (i + 1)                      n
    }
    return A                                    1
}
Algorithm prefixAverages2 runs in O(n) time
29
Binary Search (1/4)
The following recursive algorithm searches for a value in a sorted array:
BinarySearch(v, a, lo, hi)
Input value v
array a[lo..hi] of values
Output true if v in a[lo..hi]
false otherwise
if lo > hi return false
mid = (lo + hi) / 2
if a[mid] = v return true
else if a[mid] < v return BinarySearch(v, a, mid + 1, hi)
else return BinarySearch(v, a, lo, mid − 1)
Computing Powers (3/3)
Algorithm Power(x, n)
{
Input: a number x and an integer n ≥ 0
Output: the value xⁿ
if n = 0 return 1
if n is odd
{ y = Power(x, (n – 1)/ 2)
return x*y*y }
else
{ y = Power(x, n/ 2)
return y*y }
}
Time complexity analysis:
• Each call of Power() takes O(1)
time
• There are O(log n) calls
• Time complexity: O(log n)
37
Computing Fibonacci Numbers (1/3)
• Fibonacci numbers are defined recursively:
F₀ = 0
F₁ = 1
Fᵢ = Fᵢ₋₁ + Fᵢ₋₂ for i > 1.
• As a recursive algorithm (first attempt):
Algorithm BinaryFib(k)
{ Input: a nonnegative integer k
Output: the kth Fibonacci number Fₖ
if (k = 0 or k = 1) return k;
else
return BinaryFib(k – 1) + BinaryFib(k – 2);
}
38
Computing Fibonacci Numbers (2/3)
• Let nₖ denote the number of recursive calls made by BinaryFib(k). Then
o n₀ = 1
o n₁ = 1
o n₂ = n₁ + n₀ + 1 = 1 + 1 + 1 = 3
o n₃ = n₂ + n₁ + 1 = 3 + 1 + 1 = 5
o n₄ = n₃ + n₂ + 1 = 5 + 3 + 1 = 9
o n₅ = n₄ + n₃ + 1 = 9 + 5 + 1 = 15
o n₆ = n₅ + n₄ + 1 = 15 + 9 + 1 = 25
o n₇ = n₆ + n₅ + 1 = 25 + 15 + 1 = 41
o n₈ = n₇ + n₆ + 1 = 41 + 25 + 1 = 67.
• Note that the value at least doubles for every other index, i.e., nₖ > 2^(k/2).
It is exponential!
• Time complexity: O(2ᵏ)
39
Computing Fibonacci Numbers (3/3)
To compute each Fₖ only once, we remember the value of each Fₖ:
Algorithm LinearFibonacci(k):
{ Input: a positive integer k
Output: the pair of Fibonacci numbers (Fₖ, Fₖ₋₁)
if (k = 1) return (k, 0);
else
{
(i, j) = LinearFibonacci(k – 1);
return (i +j, i);
}
}
Time complexity: O(k)
40
Stack Frames (1/3)
41
• A stack frame is used to store:
return address,
local variables,
parameters, and
the values of some registers.
• When a function is called, a new frame is created for the function.
• When a function returns, its frame is removed (freed up).
Stack Frames (2/3)
42
void foo(void);
void bar(void);

int main()
{
    foo();
    bar();
    return 1;
}

void foo()
{
    // Do something
    bar();
}

void bar()
{
    // Do something
}
Stack Frames (3/3)
43
The Executable and Linkable Format (ELF, formerly named Extensible Linking Format), is a common
standard file format for executable files, object code, shared libraries, and core dumps.
* Space Complexity Analysis for Recursive Algorithms (1/9)
• In general, space complexity analysis is easier than time complexity
analysis.
• The hard part in analyzing the space complexity of a recursive
algorithm is analyzing the stack space it uses.
• We need to understand how a recursive method is executed on
computers.
• Key concept: stack frame or activation record.
44
* Space Complexity Analysis for Recursive Algorithms (2/9)
• A stack frame for a function stores the local variables, some
parameters, the return address, and some other stuff such as values of
some registers.
• A stack frame is created in the stack space whenever a function is
called.
• A stack frame is freed when the function returns.
• The frame size of each frame can be determined at compile time.
45
* Space Complexity Analysis for Recursive Algorithms (3/9)
• A call graph is a weighted directed graph G = (V, E, W) where
V = {v₁, v₂, …, vₙ} is a set of nodes, each of which denotes an execution of a function;
E = {vᵢ → vⱼ : vᵢ calls vⱼ} is a set of directed edges, each of which denotes a caller–callee relationship; and
W = {wᵢ (i = 1, 2, …, n) : wᵢ is the frame size of vᵢ} is a set of stack frame sizes.
• The maximum size of stack space needed for function calls can be
derived from the call graph.
46
* Space Complexity Analysis for Recursive Algorithms (4/9)
• How to compute the maximum size of stack space needed for a method
call?
Step 1: Draw the call graph.
Step 2: Find the longest weighted path in the call graph.
The total weight of the longest weighted path is the maximum stack
size needed for the function calls.
47
* Space Complexity Analysis for Recursive Algorithms (5/9)
int main(void)
{ …
func1();
…
func2();
}
void func1()
{ …
func3();
…
}
void func2()
{ …
func4();
…
func5();
…
}
int func3()
{ …
x=func3();
…
}
Assumptions:
• func3() is called 20 times
• Frame sizes (bytes):
main(): 10
func1(): 20
func2(): 60
func3(): 80
func4(): 10
func5(): 30
48
* Space Complexity Analysis for Recursive Algorithms (6/9)
The longest path is main() → func1() → func3() → … → func3(), with a length (total
weight) of 10 + 20 + 80×20 = 1630. So the maximum stack space needed for main() is
1630 bytes.
49
* Space Complexity Analysis for Recursive Algorithms (7/9)
The above approach can be generalized to recursive algorithms.
The frame size of each algorithm is represented by big O.
Compute the longest path length in terms of big O.
Consider the previous example.
Assume that the frame sizes of all the methods except func4() are O(1),
and the frame size of func4() is O(n). The space complexity of main() is
O(n).
Assume that the frame sizes of all the methods are O(1). The space
complexity of main() is O(1).
50
* Space Complexity Analysis for Recursive Algorithms (8/9)
51
Recursive algorithm for Fibonacci numbers:
Algorithm Fib(k)
{ Input : Nonnegative integer k
Output : The kth Fibonacci number Fk
if (k = 0 or k = 1) return k;
else
return Fib(k – 1) + Fib(k – 2);
}
What is the space complexity of Fib(n) in terms of big-O?
* Space Complexity Analysis for Recursive Algorithms (9/9)
52
What is the space complexity of Fib(n) in terms of big-O?
[Recursion tree for Fib(n): the root Fib(n) has children Fib(n−1) and Fib(n−2), each of which branches in the same way; the longest path Fib(n) → Fib(n−1) → Fib(n−2) → … → Fib(1) has n nodes]
The space complexity is the number of nodes on the longest path × the frame size
= n·c = O(n)
53
Math you need to Review
• Properties of logarithms:
log_b(xy) = log_b x + log_b y
log_b(x/y) = log_b x − log_b y
log_b(xᵃ) = a·log_b x
log_b a = log_x a / log_x b
• Properties of exponentials:
a^(b+c) = aᵇ·aᶜ
a^(bc) = (aᵇ)ᶜ
aᵇ/aᶜ = a^(b−c)
b = a^(log_a b)
bᶜ = a^(c·log_a b)
• Summations
• Proof techniques
• Basic probability
54
Relatives of Big-Oh
Big-Omega
f(n) is Ω(g(n)) if there is a constant c > 0
and an integer constant n₀ ≥ 1 such that
f(n) ≥ c·g(n) for n ≥ n₀
Big-Theta
f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0
and an integer constant n₀ ≥ 1 such that
c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n₀
55
Intuition for Asymptotic Notation
Big-Oh
f(n) is O(g(n)) if f(n) is asymptotically
less than or equal to g(n)
Big-Omega
f(n) is Ω(g(n)) if f(n) is asymptotically
greater than or equal to g(n)
Big-Theta
f(n) is Θ(g(n)) if f(n) is asymptotically
equal to g(n)
56
Example Uses of the Relatives of Big-Oh
5n² is Ω(n²)
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1
such that f(n) ≥ c·g(n) for n ≥ n₀
let c = 5 and n₀ = 1
5n² is Ω(n)
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1
such that f(n) ≥ c·g(n) for n ≥ n₀
let c = 1 and n₀ = 1
5n² is Θ(n²)
f(n) is Θ(g(n)) if it is Ω(n²) and O(n²). We have already seen the former;
for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an
integer constant n₀ ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n₀
Let c = 5 and n₀ = 1
57
Summary
• Big-Oh, big-theta and big-omega notations
• Asymptotic analysis of algorithms
• Examples of algorithms with logarithmic, linear, polynomial,
exponential time complexity
• Suggested reading:
Sedgewick, Ch.2.1-2.4,2.6