Quiz 4 at 9:00 today.
Notes March 17
March 17 2022
Today: a small taste of Chapter 3, which is on algorithms. Primarily, big-O notation.

An algorithm is kind of an abstraction of a computer program: it’s a completely well-defined sequence of steps to follow to solve some general problem. For instance, you learned algorithms for addition and multiplication in elementary school.
The same algorithm can behave very differently on different machines and in different languages. For instance, this machine is several orders of magnitude faster than the first machine I ever owned. Even with this in mind, some algorithms will be faster on any given machine than others. We want a way of studying this.
For today, we’ll consider only algorithms whose inputs can be measured by length. For instance, for an algorithm that takes in an integer, the input can be measured by its number of bits; for an algorithm that takes two integers, the length of the input is the sum of the numbers of bits of the two inputs. (The number of bits in an integer is its length when written in binary notation; see next week.)
Given these assumptions, any implementation of a given algorithm has a couple of associated functions: the time complexity function f : N → N is defined so that f(n) is the number of steps the computer takes to execute the algorithm on an input of length n (eg to multiply a k-bit number with an (n − k)-bit number). Note that since this number of steps could vary depending on the input, we should be more precise: usually f(n) is defined as the worst possible number of steps the program could take on an input of length n. eg the worst case with multiplication or addition comes when you have to do the maximal number of carries. What’s a “number of steps”, by the way? It will turn out not to really matter, but you could take a step to be a basic operation of the CPU.
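To make the step counting concrete, here is a minimal sketch in Python (my illustration; the cost model of “one step per bit addition” is an assumption, not something fixed in lecture) of grade-school binary addition:

    # Hypothetical illustration (my cost model, not fixed by the lecture):
    # grade-school binary addition, counting one step per bit addition.
    # A column that receives a carry needs a second bit addition, so inputs
    # with long carry chains are the worst case, as mentioned above.

    def add_with_step_count(a_bits, b_bits):
        """Add two binary numbers given as bit lists, least significant
        bit first; return (sum_bits, steps)."""
        n = max(len(a_bits), len(b_bits))
        a_bits = a_bits + [0] * (n - len(a_bits))
        b_bits = b_bits + [0] * (n - len(b_bits))
        carry, out, steps = 0, [], 0
        for i in range(n):
            total = a_bits[i] + b_bits[i]
            steps += 1                  # one bit addition per column
            if carry:
                total += carry
                steps += 1              # a carry costs an extra addition
            out.append(total % 2)
            carry = total // 2
        if carry:
            out.append(carry)
        return out, steps

    # 15 + 1: the carry ripples through every column (worst case).
    print(add_with_step_count([1, 1, 1, 1], [1, 0, 0, 0]))  # ([0,0,0,0,1], 7)
    # 10 + 5: no carries at all, so fewer steps on the same input length.
    print(add_with_step_count([0, 1, 0, 1], [1, 0, 1, 0]))  # ([1,1,1,1], 4)

Under this model the count on n-bit inputs is between n and about 2n, so f(n) is O(n), and the input that ripples a carry through every column really is the worst case, as claimed above.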
There is also g : N → N, the space complexity function, with g(n) the number of bits of memory the computer needs to run your algorithm on an input of length n. eg addition needs much less storage than multiplication, since in multiplication you write down several rows of partial products before summing.
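To see where multiplication’s extra space goes, here is a companion sketch (again my own illustration, under a simplified model that just counts the bits written down in the partial-product rows): for two n-bit inputs the storage grows like n^2, versus O(n) working space for addition.

    # Hypothetical illustration (not from lecture): the grade-school method
    # multiplies a by each bit of b, storing one shifted partial-product row
    # per 1-bit of b before summing. The stored bits grow like n^2.

    def partial_product_bits(a: int, b: int) -> int:
        """Return total bits written down for all partial products of a * b."""
        total_bits = 0
        for shift, bit in enumerate(bin(b)[2:][::-1]):  # bits of b, LSB first
            if bit == '1':
                row = a << shift                 # one row of the tableau
                total_bits += row.bit_length()
        return total_bits

    a = b = (1 << 64) - 1   # two 64-bit numbers, all ones (worst case)
    print(partial_product_bits(a, b))   # 6112: 64 rows of 64..127 bits each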
Both of these functions are important ways to measure how good your algorithm is. The problem is, again, how to distinguish an algorithm that is really better from an algorithm that’s just running 1000x faster on a better computer. There are two simplifications we make to compare the f’s and g’s for different algorithms in a meaningful way.
• We treat two f’s as the same if they are within a constant multiple of each other.
• We also ignore differences on any finite number of small inputs. Algorithms can be weird on small inputs but end up much better for arbitrarily large inputs. These large inputs are where efficiency really matters, after all; alternatively: a fast enough computer will always handle small inputs as fast as you want, but with a slow algorithm it will eventually get too slow for really huge inputs.
Definition: Given functions f1, f2 : N → N, we say that f1 is O(f2) (f1 is “big O” of f2) if there are constants C and k such that, if n > k, then f1(n) ≤ C·f2(n). Formally, f1 is O(f2) if

∃C, k ∈ R ∀n ∈ N (n > k → f1(n) ≤ C·f2(n)).
This means, intuitively, that f1 is within a constant factor of f2, at least as long as you ignore the first k inputs: eg f1(n) ≤ 10·f2(n) as long as n > 1000. If f1, f2 are (say) time complexity functions for some programs, this says that f2 is not coming from a more efficient algorithm than f1: in the example just given, f2 is at most 10 times better than f1 for inputs larger than 1000.
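To make the definition concrete, here is a small spot-checker (a sketch of my own, not from lecture): given claimed witnesses C and k, it tests f1(n) ≤ C·f2(n) on a range of sample n. A finite check can refute bad witnesses, though of course it cannot prove the universal statement.

    # Hypothetical helper (not from lecture): spot-check big-O witnesses C, k
    # by testing f1(n) <= C * f2(n) on a range of sample n.

    def check_big_o_witnesses(f1, f2, C, k, n_max=10_000):
        for n in range(k + 1, n_max + 1):
            if f1(n) > C * f2(n):
                return False, n       # counterexample found
        return True, None             # consistent with f1 being O(f2)

    # 2x + 1 is O(x) with C = 3, k = 1 (the first example below):
    print(check_big_o_witnesses(lambda x: 2*x + 1, lambda x: x, C=3, k=1))
    # (True, None)

    # But C = 2 fails immediately, since 2x + 1 > 2x for every x:
    print(check_big_o_witnesses(lambda x: 2*x + 1, lambda x: x, C=2, k=1))
    # (False, 2)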
• 2x+1 is O(x). It’s close to true that 2x+1 ≤ 2x, but not quite, so we can’t take C = 2. Instead, maybe 2x+1 ≤ 3x? True as long as x ≥ 1! So for k = 1, C = 3, we have 2x + 1 ≤ Cx as long as x ≥ k, which shows 2x + 1 is O(x).
• Of course, x is O(2x + 1) too, since x ≤ 1 · (2x + 1) for all x ∈ N, so we can take k = 0, C = 1. Since these two functions are O of each other, we write 2x + 1 is Θ(x) and vice versa: they’re “basically the same as each other” for our purposes.
• 2n^2 + 1 is O(n^3 − 1). In fact, n^3 − 1 > 2n^2 + 1 when n = 3, and it never stops being bigger after that, so we can take k = 3 and C = 1.
• But n^3 − 1 is not O(2n^2 + 1)! How to prove such a thing? The negation of the formal definition is: ∀C, k ∈ R ∃n ∈ N (n > k ∧ f1(n) > C·f2(n)). That is, we need to show that no matter what C, k you choose, there’s some n > k such that n^3 − 1 > C(2n^2 + 1) = 2Cn^2 + C. In other words, we need to show that for big enough n, we have n^3 − 2Cn^2 − C − 1 > 0. This follows because this is a cubic with positive leading coefficient, so its limit as n → ∞ is +∞. (To see this more directly, divide by n^3: we need 1 − 2C/n − (C + 1)/n^3 > 0. Take n so big that 2C/n < 1/2 and (C + 1)/n^3 < 1/2, and you’re good.)

So a quadratic is O of a cubic but not vice versa. More generally, x^m is O(x^n) if and only if n ≥ m. Thus one way to make an algorithm truly better is to reduce the exponent on n in its time or space complexity function. This is how some parts of algorithms research work: the elementary school algorithm for multiplication takes O(n^2) steps, and there are cutting-edge algorithms that actually reduce this exponent.

Even more important is matrix multiplication. The standard algorithm to multiply two n × n matrices together takes O(n^3) steps: it takes O(n) steps to take a single dot product, and you have to take n^2 such dot products. (A sketch of this algorithm appears at the end of these notes.) This exponent has been improved repeatedly over the years: to O(n^2.8074) in 1969, then to O(n^2.496) in 1981, ..., to O(n^2.3728639) in 2014, and in 2020, to O(n^2.3728596).

Other important facts: for any f : N → N, we have log f is O(f) and f is O(2^f). In particular, log n is O(n) and n is O(2^n), but not vice versa. Indeed, any polynomial is O(2^n), while log n is O(n^x) for any x > 0. Why is this? The easiest approach is with l’Hopital: observe that

lim_{n→∞} (log n)/n = lim_{n→∞} (1/n)/1 = lim_{n→∞} 1/n = 0.

This means that n > log n, say, for sufficiently big n, since otherwise the limit couldn’t be smaller than 1. Similar stories work for 2^n and for polynomials in place of n.
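A quick numerical check (my addition, not from lecture) matches the l’Hopital computation:

    # Watching (log n)/n tend to 0, as the limit computation predicts.
    import math

    for n in (10, 100, 10_000, 10**8):
        print(n, math.log(n) / n)   # roughly 0.23, 0.046, 0.00092, 1.8e-07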
In general: if lim_{n→∞} f(n)/g(n) = 0, then f is O(g). (More precisely, by definition this means f is o(g).)
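Finally, here is the sketch of the standard O(n^3) matrix multiplication algorithm promised above (mine, for illustration): n^2 entries to fill, each computed by an O(n) dot product.

    # Minimal sketch (not from lecture) of the standard matrix multiplication
    # algorithm: n^2 dot products, each taking O(n) steps, so O(n^3) overall.

    def mat_mul(A, B):
        """Multiply two n x n matrices given as lists of rows."""
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):              # n^2 entries to fill...
            for j in range(n):
                for k in range(n):      # ...each one an O(n) dot product
                    C[i][j] += A[i][k] * B[k][j]
        return C

    # 2x2 example: [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
    print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))

The faster algorithms cited above (Strassen’s 1969 algorithm and its successors) beat this by cleverly reusing products across entries, which is what pushes the exponent below 3.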
