
EECS 376: Foundations of Computer Science
University of Michigan, Winter 2022
Discussion Notes 12

1 Chernoff Bounds
When estimating a quantity with a Monte-Carlo algorithm, we are usually trying to get a result that is as close to an expected value as possible. An important question is: how likely are we to actually get a result that is close to the expected value? While we can apply Markov's inequality, the bound it gives is very loose, and it tells us nothing about how the number of samples affects the quality of the estimate. The law of large numbers states that the estimate converges to the expected value as the number of samples n increases, but how fast does it converge? There are many types of concentration bounds that allow us to reason about the deviation of a random variable from its expectation; Markov's inequality is just the simplest one. Chernoff bounds are another such tool. There are multiple variants of Chernoff bounds, but the ones we will use are as follows.
Let X = X1 + X2 + ··· + Xn be the sum of independent indicator random variables, where the indicator Xi has the expectation E[Xi] = Pr[Xi = 1] = pi.


Let μ be the expected value of X:
μ = E[X] = Σi E[Xi] = Σi pi
Here, we've applied linearity of expectation to relate the expectation of X to that of the indicators Xi. The Chernoff bounds then are as follows.

1.1 Chernoff Bound – Upper Tail
Let X = X1 + X2 + ··· + Xn, where the Xi are independent indicator random variables with E[Xi] = pi, and μ = Σi pi. Suppose we wish to bound the probability of X exceeding its expectation μ by at least a factor of 1 + δ, where δ > 0. Then:
Pr[X ≥ (1+δ)μ] ≤ ( e^δ / (1+δ)^(1+δ) )^μ

1.2 Chernoff Bound – Lower Tail
Let X = X1 + X2 + ··· + Xn, where the Xi are independent indicator random variables with E[Xi] = pi, and μ = Σi pi. Suppose we wish to bound the probability of X being below its expectation μ by at least a factor of 1 − δ, where 0 < δ < 1. Then:
Pr[X ≤ (1−δ)μ] ≤ ( e^(−δ) / (1−δ)^(1−δ) )^μ
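As a quick illustration, the following Python sketch draws X as a sum of n independent Bernoulli(p) indicators (for simplicity all pi equal the same p), estimates Pr[X ≥ (1+δ)μ] empirically, and compares it to the Chernoff upper-tail bound above; the parameters n, p, and δ are arbitrary illustrative choices.

import math
import random

def chernoff_upper_bound(mu, delta):
    # Chernoff upper tail: Pr[X >= (1+delta)*mu] <= (e^delta / (1+delta)^(1+delta))^mu
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def empirical_upper_tail(n, p, delta, trials=20000):
    # Estimate Pr[X >= (1+delta)*mu] by sampling X = X1 + ... + Xn with Xi ~ Bernoulli(p)
    mu = n * p
    hits = 0
    for _ in range(trials):
        x = sum(1 for _ in range(n) if random.random() < p)
        if x >= (1 + delta) * mu:
            hits += 1
    return hits / trials

n, p, delta = 100, 0.5, 0.2    # illustrative parameters: mu = 50, threshold (1+delta)*mu = 60
print("Chernoff bound:", chernoff_upper_bound(n * p, delta))   # about 0.39
print("empirical tail:", empirical_upper_tail(n, p, delta))    # about 0.03, well below the bound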
2 Fast Modular Exponentiation

2.1 Power of 2
Suppose we want to calculate a^b mod n, where b is a power of 2. Notice that
a^b = a^(b/2) · a^(b/2) = (a^(b/2))^2.
Since b is a power of 2, we can keep on pulling 2's out of the exponent:
a^b = a^(2^(log b)) = (((a^2)^2)···)^2
and thus a^b is computable in O(log b) multiplications (note that the input size of this algorithm is O(log b), so this algorithm is efficient in the input size). The name of this algorithm is repeated squaring, since we repeatedly square the number.

Example 2.1. We can compute 7^256 mod 13 using only log2(256) = 8 multiplications:
7^2 ≡ 7 · 7 ≡ 49 ≡ 10 (mod 13)
7^4 ≡ 7^2 · 7^2 ≡ 10 · 10 ≡ 100 ≡ 9 (mod 13)
7^8 ≡ 7^4 · 7^4 ≡ 9 · 9 ≡ 81 ≡ 3 (mod 13)
7^16 ≡ 7^8 · 7^8 ≡ 3 · 3 ≡ 9 (mod 13)
...
7^256 ≡ 7^128 · 7^128 ≡ 3 · 3 ≡ 9 (mod 13)
so 7^256 mod 13 = 9.

2.2 Calculate a^b mod n in O(log b) multiplications
We have just shown that if b is a power of 2, we can calculate a^b mod n in O(log b) multiplications. We now extend this algorithm to work for any non-negative integer b.

Claim 2.2. a^b mod n can be computed in O(log b) multiplications by composing modular exponentiations of powers of 2.

Proof. Let br br−1 ··· b1 b0 be the binary representation of b. Note that this means that
b = b0 · 2^0 + b1 · 2^1 + ··· + br · 2^r
a^b = a^(b0·2^0 + b1·2^1 + ··· + br·2^r) = a^(b0·2^0) · a^(b1·2^1) ··· a^(br·2^r)
Note that in the process of computing a^(2^r) by the fast exponentiation algorithm, we compute a^(2^i) for i = 0, 1, ..., r. Thus, the number of multiplications it takes to compute a^(b0·2^0) · a^(b1·2^1) ··· a^(br·2^r) is the same as the number of multiplications it takes to compute a^(2^r), plus O(r) more to compute the final product. Observing that r = O(log b) and that the number of multiplications to compute a^(2^r) is O(log b) by the fast exponentiation algorithm, we conclude that we can compute a^b in O(log b) multiplications.

Example 2.3. Compute 7^117 mod 13. First we calculate the powers of 2 up to 64:
7^1 mod 13 = 7 mod 13 = 7
7^2 mod 13 = 49 mod 13 = 10
7^4 mod 13 = 100 mod 13 = 9
7^8 mod 13 = 81 mod 13 = 3
7^16 mod 13 = 9 mod 13 = 9
7^32 mod 13 = 81 mod 13 = 3
7^64 mod 13 = 9 mod 13 = 9
Since 117 = 1110101_2 = 64 + 32 + 16 + 4 + 1, we have:
7^117 mod 13 = (7^1 · 7^4 · 7^16 · 7^32 · 7^64) mod 13
= (7 · 9 · 9 · 3 · 9) mod 13
= (63 · 27 · 9) mod 13
= (11 · 1 · 9) mod 13
= 99 mod 13
= 8.
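The repeated-squaring idea translates directly into code. Below is a minimal Python sketch in the spirit of Claim 2.2: it repeatedly squares to obtain a^(2^i) mod n and multiplies in exactly those factors whose bit of b is 1. (Python's built-in pow(a, b, n) already performs this computation; the explicit loop is shown only to mirror the algorithm described above.)

def modpow(a, b, n):
    # Compute a^b mod n with O(log b) multiplications (repeated squaring).
    result = 1
    power = a % n            # invariant: power = a^(2^i) mod n at the start of iteration i
    while b > 0:
        if b & 1:            # bit b_i is 1, so the factor a^(2^i) contributes to the product
            result = (result * power) % n
        power = (power * power) % n
        b >>= 1
    return result

print(modpow(7, 256, 13))    # 9, as in Example 2.1
print(modpow(7, 117, 13))    # 8, as in Example 2.3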
3 (For reference) Diffie-Hellman Key Exchange
Problem setup: Alice wants to share a secret key with Bob, but there is an eavesdropper Eve that eavesdrops on their communications. In fact, many encryption schemes require a preshared secret key. How can two people who have never met generate a shared secret key?

Definition 3.1. For any n ∈ N, we define Zn = {0, 1, 2, ..., n−1} as the set of equivalence classes modulo n.

Definition 3.2. The group Z∗n is the subset of Zn consisting of the elements that have an inverse modulo n (in other words, the set of elements which we can divide by when working with an expression about integers modulo n).

Proposition 3.3. A nonzero integer a has a modular inverse modulo n if and only if a and n are relatively prime (coprime). In other words, Z∗n = {x ∈ Zn | gcd(x, n) = 1}.

Fact 3.4. When we consider an arbitrary prime p, we have Z∗p = {1, 2, ..., p−1} = Zp \ {0}, because all positive numbers less than p are relatively prime to p.

Definition 3.5. Let p be prime. g ∈ Z∗p is a generator of Z∗p if for every h ∈ Z∗p, there exists x ∈ {0, 1, ..., p−2} such that h = g^x mod p. In other words, Z∗p = {g^x mod p : x ∈ {0, 1, ..., p−2}}.

Fact 3.6. For any prime p, there is some g ∈ Z∗p that is a generator of Z∗p.

The key exchange proceeds as follows, where the prime p and a generator g of Z∗p are public:
• Alice chooses some secret a ← Z∗p at random
• Bob chooses some secret b ← Z∗p at random
• Alice sends A = g^a mod p to Bob
• Bob sends B = g^b mod p to Alice
• Alice computes B^a = g^(ab) mod p as the secret key
• Bob computes A^b = g^(ab) mod p as the secret key

Eve, an eavesdropper, knows the public information p, g, A = g^a mod p, and B = g^b mod p, and wants to learn the shared key k = g^(ab) mod p. One way she can do this is to find a from g^a mod p (i.e. solve the discrete log problem) by trying consecutive powers of g, but as we have seen before, this is inefficient (O(p)) with respect to the input size (O(log2 p)). The Baby-step Giant-step algorithm is a faster algorithm, with runtime on the order of O(√p log p log log p), but this is still exponential in the input size. Therefore, for Diffie-Hellman to be safe, we hope that no algorithm can solve the discrete logarithm problem (DLOG) efficiently. Unfortunately, with the dawn of quantum computing, efficient quantum algorithms to solve DLOG (such as Shor's algorithm) have been developed.

4 Diffie-Hellman Example
Example 4.1. A Diffie-Hellman key exchange where the prime is p = 23 and the generator is g = 10.
• Alice chooses some secret a ← Z∗p at random; say Alice picks a = 15.
• Bob chooses some secret b ← Z∗p at random; say Bob picks b = 3.
• Alice sends A = g^a mod p, i.e., A = 10^15 mod 23 = 5.
• Bob sends B = g^b mod p, i.e., B = 10^3 mod 23 = 11.
• Alice computes B^a = g^(ab) mod p as the secret key, i.e., 11^15 mod 23 = 10.
• Bob computes A^b = g^(ab) mod p as the secret key, i.e., 5^3 mod 23 = 10, which is indeed the same key that Alice computes.
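Because the numbers in Example 4.1 are tiny, the whole exchange can be checked directly. The following sketch replays it using Python's built-in three-argument pow, and also verifies that g = 10 is indeed a generator of Z∗23 by checking that its powers hit every element of {1, ..., 22}.

p, g = 23, 10                       # public values from Example 4.1

# g is a generator of Z*_p iff {g^x mod p : x = 0, ..., p-2} = {1, ..., p-1}
assert {pow(g, x, p) for x in range(p - 1)} == set(range(1, p))

a, b = 15, 3                        # Alice's and Bob's secret exponents
A = pow(g, a, p)                    # Alice sends A = 10^15 mod 23 = 5
B = pow(g, b, p)                    # Bob sends   B = 10^3  mod 23 = 11
key_alice = pow(B, a, p)            # Alice computes B^a mod 23
key_bob = pow(A, b, p)              # Bob computes   A^b mod 23
print(A, B, key_alice, key_bob)     # 5 11 10 10; both parties derive the same key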
5 Cryptography and NP-Completeness
RSA and Diffie-Hellman both rely on what are thought to be hard problems. In order to crack RSA, one needs an efficient algorithm for Integer Factorization. To crack Diffie-Hellman, one needs an efficient solution to the Discrete Logarithm problem. It is unknown whether these two problems are in P. Additionally, it is unknown whether or not these problems are NP-Hard. However, both of these problems are in NP, so if P = NP then both of these problems would be solvable in polynomial time, which would be very problematic for modern cryptographic applications.

5.1 The NP-Intermediate complexity class
Recall Figure 1 from Section Notes 6, which shows two possible structures for the relationship between NP-Hard, NP and P.

Figure 1: Two possible structures depending on the relationship between NP-Hard, NP and P.

It is thought that the left case is more likely, even though we do not have a complete proof yet. If that is true, then it is reasonable to say that there are problems that belong to NP but are neither in P nor NP-Complete. We speculate that such problems form a class that is referred to as NP-Intermediate. So far, the only thing we can say about a particular problem is that we “expect it to belong to this set.”

Question: why is NP-Intermediate an interesting class at all? What if P ≠ NP but all of the languages in NP \ P also happen to be NP-Complete? In fact, that is impossible: Ladner's theorem states that if P ≠ NP, then NP-Intermediate is nonempty. This theorem implies that there are very likely some difficult problems (not in P) that are not NP-Complete, i.e. problems that are not “powerful enough” for a polynomial-time algorithm solving them to imply P = NP. As mentioned before, the attack on Diffie-Hellman key exchange (recovering the exponent a from g^a mod p) is the Discrete Logarithm problem (DLOG). DLOG is “expected to be” NP-Intermediate; in fact, if DLOG were proven to be NP-Complete, then NP = coNP. Here is a list of other interesting problems currently expected to be in NP-Intermediate, compiled by a professor from UIC, his students, and others: https://cstheory.stackexchange.com/questions/79/problems-between-p-and-npc/237#237