
CS-282A / MATH-209A
Foundations of Cryptography
Draft Lecture Notes Winter 2010
Copyright © 2003-2010


Acknowledgements: These lecture notes were produced while I was teaching this graduate course at UCLA. After each lecture, I asked students to read old notes and update them according to the lecture. I am grateful to all of the students who participated in this massive endeavor throughout the years, for asking questions, taking notes and improving these lecture notes each time.

Table of contents

PART 1: Overview, Complexity classes, Weak and Strong One-way functions, Number Theory Background.
PART 2: Hard-Core Bits.
PART 3: Private Key Encryption, Perfectly Secure Encryption and its limitations, Semantic Security, Pseudo-Random Generators.
PART 4: Implications of Pseudo-Random Generators, Pseudo-random Functions and its applications.
PART 5: Digital Signatures.
PART 6: Trapdoor Permutations, Public-Key Encryption and its definitions, Goldwasser-Micali, El-Gamal and Cramer-Shoup cryptosystems.
PART 7: Interactive Proofs and Zero-Knowledge.
PART 8: Bit Commitment Protocols in different settings.
PART 9: Non-Interactive Zero Knowledge (NIZK).
PART 10: CCA1 and CCA2 Public Key Encryption from general Assumptions: Naor-Yung, DDN.
PART 11: Non-Malleable Commitments, Non-Malleable NIZK.
PART 12: Multi-Server PIR.
PART 13: Single-Server PIR, OT, 1-2-OT, MPC, Program Obfuscation.
PART 14: More on MPC including GMW and BGW protocols in the honest-but-curious setting.
PART 15: Yao’s garbled circuits, Efficient ZK Arguments, Non-Black-Box Zero Knowledge.
PART 16: Remote Secure Data Storage (in a Cloud), Oblivious RAMs.

CS 282A/MATH 209A: Foundations of Cryptography © 2006-2010
Part 1
1 Overview of Cryptography
This section gives an overview of the various branches of cryptography.
1.1 Brief History
While cryptography has been in use since the times of the Romans, it has only recently been formalized into a science.
• (1940s)
– Formalized “Perfect Security” while working on the ENIGMA project
– Defined the private key communication model
• (1970s)
– Laid foundations for cryptographic techniques based on theoretical complexity assumptions; if a technique is cracked, this directly implies a solution to a long-standing problem in mathematics
– Defined public-key cryptography for secure transmissions over an insecure channel
• Rivest, Shamir, and Adleman (1978)
– RSA public-key cryptosystem
• Blum, Micali and Yao (in early 1980s)
– Developed rigorous theoretical foundations of pseudo-random generators.
• Goldwasser and Micali (1980s)
– Provided formal and first satisfactory definition of encryption
– Developed the definition of probabilistic encryption: the encrypted message should not reveal any partial information about the unencoded message

1.2 Example Problem Spaces in Cryptography
Secrecy in Communication
Two parties (Alice and Bob) want to be able to communicate with each other in some manner such that an adversary (Eve) intercepting their communication will not be able to get any information about the message being sent. One simple protocol is as follows:
• Alice locks her message with lock LA and sends it to Bob.
• Bob locks the message with lock LB and sends it back to Alice.
• Alice unlocks LA and sends the message back again.
• Bob unlocks LB and reads the message.
Non-Malleability
A non-malleable encryption scheme has the property that it is difficult to transform an encrypted message into an encryption of a different, related message.
As an example, consider a blind auction in which the auction-goers Gi send their bids bidi to an auctioneer A. Without any encryption, A can collude with G1 so that G1 submits a bid equal to maxi>1(bidi) + 1. Simply sending commitments f(bidi) is not sufficient either: if f is malleable, it may be possible for A to produce f(maxi>1(bidi) + 1) from f(maxi>1(bidi)) without ever learning the underlying bid. A non-malleable encryption scheme would prevent this attack.
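To see concretely why malleability matters, here is a toy sketch (not from the notes): one-time-pad style XOR encryption hides the bid perfectly, yet an adversary who merely knows the bid is even can raise it by one without ever decrypting. The variable names and the 4-byte encoding are illustrative assumptions.

```python
import os

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    # One-time-pad style encryption: ciphertext = key XOR message.
    # XOR is an involution, so the same function also decrypts.
    return bytes(k ^ m for k, m in zip(key, msg))

key = os.urandom(4)
bid = 100                                    # Alice's (even) bid
ct = xor_encrypt(key, bid.to_bytes(4, "big"))

# The adversary never sees `key` or `bid`: flipping the last ciphertext
# bit flips the last plaintext bit, turning the even bid 100 into 101.
tampered = ct[:3] + bytes([ct[3] ^ 1])

recovered = int.from_bytes(xor_encrypt(key, tampered), "big")
assert recovered == bid + 1                  # controlled change, no decryption
```

The attack works because each ciphertext bit depends on exactly one plaintext bit; non-malleable schemes are designed to destroy any such usable relationship.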
Authentication/Data Integrity
In this problem Alice would like to send Bob a message in such a way that Eve cannot replace the message with her own message without Bob’s knowledge. A common form of authentication is a digital signature, which is a piece of data that certifies a document so that the acceptor can be sure that it was issued by the correct sender.
1.3 Cryptographic Primitives
Cryptographic primitives are used as building blocks to build more advanced security protocols and cryptosystems.

Interactive/Zero Knowledge Proofs
In an interactive proof (due to Goldwasser, Micali, and Rackoff, GMR) there exist two parties, a prover P and a verifier V. P has a statement that he wishes to prove to V. P and V would like to run a protocol at the end of which V is convinced that the statement is true (to within any arbitrary degree of certainty) if it is true, but P (no matter how computationally powerful) is unable to prove a false statement to V.
In a zero-knowledge proof, V should additionally gain no more knowledge than the validity of the statement; that is, V must accept the statement as true but does not know why it is true.
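As a toy illustration (a sketch, not part of the notes), consider one round of the classic zero-knowledge protocol for graph isomorphism: the prover knows a secret isomorphism pi between G0 and G1, sends a fresh random relabeling H of G0, and answers a random one-bit challenge by exhibiting a permutation mapping the challenged graph to H. The verifier learns nothing about pi, while a cheating prover survives each round with probability at most 1/2. The graphs and helper names below are illustrative.

```python
import random

n = 5
G0 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}   # a 5-cycle
pi = [2, 0, 3, 1, 4]                            # prover's secret isomorphism

def apply_perm(p, G):
    # Relabel every edge, normalizing each edge as a sorted pair.
    return {tuple(sorted((p[u], p[v]))) for (u, v) in G}

G1 = apply_perm(pi, G0)                         # public: G1 = pi(G0)

def compose(p, q):
    # (p o q)(x) = p[q[x]]
    return [p[q[x]] for x in range(n)]

def round_of_protocol() -> bool:
    # Prover: send a fresh random relabeling H of G0.
    sigma = random.sample(range(n), n)
    H = apply_perm(sigma, G0)
    # Verifier: random one-bit challenge.
    b = random.randrange(2)
    # Prover: reveal tau with tau(Gb) = H, never revealing pi itself.
    if b == 0:
        tau = sigma
    else:
        pi_inv = [0] * n
        for x in range(n):
            pi_inv[pi[x]] = x
        tau = compose(sigma, pi_inv)            # sigma o pi^(-1) maps G1 to H
    # Verifier checks the revealed permutation.
    return apply_perm(tau, G1 if b else G0) == H

assert all(round_of_protocol() for _ in range(40))
```

Repeating the round k times drives a cheating prover's success probability down to 2^(-k), matching the "arbitrary degree of certainty" above.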
Interactive proofs are far more powerful than conventional proofs, being able to prove any true statement in PSPACE. Regular proofs, by comparison, can only prove statements in NP.
Coin-Flipping Protocols
A coin-flipping protocol is one in which Alice and Bob, who do not trust each other, run a protocol at the end of which they agree on a fair coin flip such that neither of them can cheat.
Commitment Protocols
A commitment protocol is a protocol in which both parties commit to a decision in a way that appears simultaneous. This requires that once one party’s decision is committed, he is unable to modify it, and also that the other party is unable to make an unfair decision based on the committed decision. Commitment protocols can form the basis for coin-flipping protocols.
Secure Two-Party Computation
In this protocol Alice and Bob have inputs xa and xb and they wish to compute some function f(xa,xb). At the end of the protocol, Alice and Bob should learn the computed value f(xa,xb) without learning the value of the other’s input.

Private Information Retrieval
Here, there is a database D of bits and a user wants to find out the value of bit i without revealing to the database program the index i. While this can be done by sending the user the entire database, there are more efficient ways to provide the appropriate data without revealing this index i.
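For intuition, here is a hedged sketch (not the notes' construction) of the classic two-server trick, assuming two non-colluding servers each hold a copy of D: each server sees only a uniformly random subset of indices, so neither learns anything about i, yet XORing the two one-bit answers recovers D[i]. The names below are illustrative, and the queries here are still n bits long; genuinely communication-efficient PIR needs more machinery.

```python
import secrets

D = [1, 0, 1, 1, 0, 0, 1, 0]   # database of bits, replicated on two servers
n, i = len(D), 5               # the user wants bit i

# User: pick a uniformly random subset S1 (as a 0/1 mask); send S1 to
# server 1 and S1 with position i flipped to server 2. Each query alone
# is a uniformly random subset, independent of i.
S1 = [secrets.randbelow(2) for _ in range(n)]
S2 = S1[:]
S2[i] ^= 1

def server_answer(mask):
    # Each server XORs together the database bits its mask selects.
    acc = 0
    for bit, m in zip(D, mask):
        acc ^= bit & m
    return acc

# The two masks differ exactly at position i, so the answers differ by D[i].
assert server_answer(S1) ^ server_answer(S2) == D[i]
```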
2 Background in Complexity Theory
2.1 Basing Cryptography on Complexity Theory
Modern cryptography is based on assumptions from complexity theory. One of these assumptions is the existence of a one-way function. Informally, a one-way function is a function that is easy to compute on any input, but given the output it is hard to come up with an inverse. Computation is computationally “easy” if it takes polynomial time in the size of the input. Computation is computationally “hard” if it takes super-polynomial time in the size of the input.
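As a toy illustration of this asymmetry (a sketch with unrealistically small numbers, not a proven one-way function): multiplying two primes takes time polynomial in their bit length, while the naive inversion, trial division, takes time exponential in the bit length of the product.

```python
import math

def f(p: int, q: int) -> int:
    # Easy direction: multiplication is polynomial in the bit length.
    return p * q

def invert_by_trial_division(N: int) -> tuple[int, int]:
    # Naive inversion: about sqrt(N) divisions, i.e. exponential
    # in the number of bits of N.
    for d in range(2, math.isqrt(N) + 1):
        if N % d == 0:
            return d, N // d
    raise ValueError("no nontrivial factor")

N = f(999983, 1000003)               # computed instantly
p, q = invert_by_trial_division(N)   # already ~10**6 divisions
assert (p, q) == (999983, 1000003)
```

At cryptographic sizes (hundreds of digits), the forward direction stays instantaneous while even the best known factoring algorithms become infeasible.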
It is easy to see that if P = NP then no one-way function can exist. Thus we must make the assumption that P ≠ NP. While there is no proof that these complexity assumptions are true, many researchers believe them to be true.
Once we have made some basic complexity assumptions, we can then build protocols on these assumptions by showing that if the protocol is broken we can break the underlying complexity assumption. In other words, if there is a program that can break the protocol, we can use this program as a subroutine for breaking the underlying complexity assumption. Thus the protocol is at least as strong as the underlying complexity assumption.
For example, one complexity assumption is that it is hard to factor the product of two large primes. We can show that if we assume this to be hard then there exists a One-way Function. In turn, this implies that there exists a pseudo-random number generator, which implies that there exists a secure commitment scheme. Finally, existence of a secure commitment scheme implies that there exists a coin-flipping protocol.
Thus if there exists an adversary that can break the coin flipping protocol, we can use this adversary to factor large primes which would violate the complexity assumption.
2.2 Complexity Classes
A language is a set of strings over some alphabet that adheres to some rules. An algorithm is a procedure that decides whether a given input is in the language.

The class of languages P is the set of all languages that can be recognized in deterministic polynomial time.
The class of languages NP is the set of all languages that can be recognized in non- deterministic polynomial time.
A Probabilistic Turing Machine is a non-deterministic Turing machine which randomly chooses transitions based on some underlying probability distribution. A probabilistic Turing machine may give its output with certain error probabilities. Thus we can define complexity classes based on these error probabilities.
A function g(n) : N → N is negligible if ∀c > 0, ∃Nc > 0 such that ∀n > Nc, |g(n)| < 1/n^c. Otherwise, g(n) is said to be non-negligible.

Randomized Polytime (RP) is the set of all languages that can be recognized in probabilistic polynomial time with one-sided error. Any RP algorithm satisfies the following properties:
1. it always runs in polynomial time as a function of the input size.
2. if x is not in the language, the algorithm returns that x is not in the language.
3. if x is in the language, then it returns that x is in the language with probability greater than 2/3 (false negatives are possible).
Points 2 and 3 can be represented mathematically as

Prw[Mw(x) = Yes | x ∈ L] > 2/3
Prw[Mw(x) = Yes | x ∉ L] = 0

where w is the coin tosses of the Turing machine M. The probability is taken over all coin tosses of M. Note that by running M many times using fresh randomness on every run, we can boost the probability of success to 1 − ε(n), where ε(n) is negligible. If M outputs “Yes” on any run, we can conclude with certainty that x ∈ L. If M outputs “No” on all runs, we can conclude that x ∉ L with probability 1 − ε(n).
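This one-sided amplification can be simulated directly. In the sketch below, the toy "machine" never accepts inputs outside L, so a single accepting run is conclusive, and k independent runs drive the false-negative probability down to (1/3)^k. The 2/3 constant and trial counts are illustrative assumptions.

```python
import random

def rp_machine(x_in_L: bool) -> bool:
    # Toy RP machine: never accepts x outside L; accepts x in L w.p. 2/3.
    if not x_in_L:
        return False                  # property 2: no false positives
    return random.random() < 2/3      # property 3: false negatives possible

def amplified(x_in_L: bool, k: int = 50) -> bool:
    # Any single "Yes" is conclusive, so accept iff some run accepts;
    # the false-negative probability drops to (1/3)**k.
    return any(rp_machine(x_in_L) for _ in range(k))

assert amplified(False) is False                 # always correct outside L
assert all(amplified(True) for _ in range(100))  # errs w.p. <= 100 * 3**-50
```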
Co-RP is the set of all languages that can be recognized in probabilistic polynomial time with the properties:
1. If x is in the language, then the algorithm will indicate that x is in the language (with probability 1).
2. If x is not in the language, then the algorithm will indicate that x is in the language (a false positive) with probability less than 1/3, and will indicate that x is not in the language with probability greater than 2/3.

These conditions can be represented as

Prw[Mw(x) = Yes | x ∈ L] = 1
Prw[Mw(x) = Yes | x ∉ L] < 1/3

Bounded Probabilistic Polytime (BPP or PPT) is the set of all languages that can be recognized in probabilistic polynomial time with the following properties:

Prw[Mw(x) = Yes | x ∈ L] > 2/3
Prw[Mw(x) = Yes | x ∉ L] < 1/3

We can reduce the error probability in BPP to a negligible function by running M many times with fresh randomness each time.

The Chernoff Bound states that given n independent random variables X1, X2, ..., Xn with identical probability distributions, if X = X1 + X2 + ... + Xn, then

Pr[X ≥ (1 + β)E(X)] < e^(−β²E(X)/2)

where E(X) is the expectation of X. The Chernoff Bound implies that the probability of X being greater than or equal to values farther and farther away from the mean decreases exponentially as a function of the distance β. This result is intuitive and provides an upper bound on the probability.

We make a new machine M′ that executes M a total of k times. M′ will output “Yes” if the majority of the executions output “Yes”, and “No” otherwise. Let Xi = 1 if M makes a mistake on its ith run, and 0 otherwise. Since each run of M is independent, Pr[Xi = 1] = 1 − 2/3 = 1/3. If we run the machine k times, then E(X) = k/3. M′’s output will be wrong if more than half the outcomes of M are incorrect, that is, if X = X1 + ... + Xk ≥ k/2. Setting β to 1/2 in the Chernoff bound gives us an upper bound on this probability, as (1 + β)E(X) is then equal to k/2:

Pr[X ≥ (3/2) · (k/3)] < e^(−k/24)
Pr[X ≥ k/2] < e^(−k/24)

This shows that the probability of M′ making a mistake can be made negligible.

It is the case that P ⊆ RP ⊆ NP. Clearly any problem in P can be solved in RP by simply ignoring all randomness. Any problem in RP can be solved in NP by guessing the randomness.

3 Comparison of Uniform and Non-Uniform Complexity Classes

Uniform algorithms are those that use Turing Machines to make their decision. The class of problems that can be solved by these algorithms is known as the Uniform Complexity class. Non-Uniform algorithms are those that can be solved by sequences {Ci} of circuits, where each Ci has i inputs, one output, and a number of gates polynomial with respect to i, as shown in the following figure.
Figure 1: Example circuit sequence {Ci} for a P/poly decision algorithm

It is important to note that each Ci is a circuit for inputs of length i; for example, C4 correctly determines whether 0101 ∈ L but not necessarily whether 101 ∈ L. These circuits can also be thought of as Turing machines which each get a single polynomial-length advice string for all inputs of the same length. These definitions can be shown to be equivalent. The class of problems that can be solved by these algorithms is known as the Non-Uniform Complexity class, denoted by P/poly.

3.1 P/poly ⊆ NP?

The following is a wrong proof for the claim that P/poly ⊆ NP: Construct an NP machine that guesses the advice that was given to a P/poly machine. Now, any problem in P/poly is in NP. What is wrong with this?

The problem with this proof is that although the NP machine uses the advice to make a decision, it does not know if the advice is correct for every instance of size |x|. In the circuit analogue, this is equivalent to allowing the NP machine to guess a circuit Ci. All 2^i input cases must still be tested for Ci to determine whether the circuit is correct. It is not clear how an NP machine would be able to come up with a short certificate that guarantees the circuit works for all 2^i input cases.

To give an idea of the power of P/poly, we will in fact show that it can recognize undecidable languages. Consider some standard enumeration of the Turing machines M1, M2, ..., Mn, such as Gödel numbering. Now consider the following language L: x ∈ L iff machine number |x| (in the above enumeration) halts. Since we are allowed magic advice for each input length, the magic advice could tell us which machines halt and which do not in the above enumeration. Hence a P/poly algorithm can solve the above problem, which is undecidable.

3.2 BPP ⊆ P/poly

P/poly circuits are so powerful that they do not require coin flips to make decisions. In fact, Adleman proved that BPP ⊆ P/poly.
(In contrast, it is currently an important open question whether coin flips are really useful for polynomial-time languages; that is, whether BPP = P. For example, the current best deterministic algorithm for primality testing runs in time O(n^6) (the Lenstra-Pomerance improvement over AKS), whereas there is a randomized test, Miller-Rabin, that runs in time O(n^3).)

Theorem 1 BPP ⊆ P/poly. [Adleman]

Proof Given a BPP machine M that falsely rejects an input with probability at most 1/3 and falsely accepts an input with probability at most 1/3, we can easily create another BPP machine M′ that runs M many times on the input (with new randomness every time) to reduce the probability of error to a very small value [see previous subsection]. Let this new machine M′ on input x, where n = |x|, be characterized in the following way:

Prw[M′(x) = Yes | x ∈ L] > 1 − 2^(−(n+1))
Prw[M′(x) = Yes | x ∉ L] < 2^(−(n+1))

Figure 2: Input space for a BPP machine M′ with r coin flips and input string of size n

Let r be the number of coin flips used by the machine. Now let us construct a matrix with 2^n rows and 2^r columns, where the rows are all possible inputs and the columns are all possible coin-flip sequences. This matrix is the space of all possible inputs (including coin flips) to the machine M′. Put in a cell of the matrix the value 1 when there is an error corresponding to the input and the coin flips of the cell’s position, and the value 0 when there is no error (see Figure 2). Now, the probability of a 1 in a given cell, i.e. the probability that machine M′ makes a mistake, is < 2^(−(n+1)). Hence, the number of ones in the matrix is < 2^n · 2^r · 2^(−(n+1)) = 2^(r−1). The number of columns in the matrix is 2^r. By the pigeonhole principle, at least half the columns in the matrix have no ones in them. In other words, at least half the coin-flip sequences give the correct result on all 2^n inputs. Choose one of these coin-flip sequences and hardwire the P/poly circuit with this set of coin flips.
By doing this, the P/poly circuit will be able to solve any problem in BPP without making use of any randomness.

4 Introduction to One-Way Functions

4.1 Overview

In a secure cryptosystem the user should have the ability to encrypt its data in probabilistic polynomial time, but the adversary should fail to decrypt the encrypted data in probabilistic polynomial time.

Figure 3: An illustration of a secure cryptosystem

As a motivating example, suppose we wish to construct a cryptosystem consisting of an encryption E and a decryption D, both of which are poly-time. The encryption takes a clear-text message x and some random bits r (e.g. coin flips), and gives y = E(x, r), where D(E(x, r)) = x. A polynomial-time adversary has access only to the cipher-text y. For the cryptosystem to be secure, it should be hard for the adversary to recover the clear-text, i.e. a poly-time adversary who is given E(x, r) should not be able to figure out any information about x.

Figure 4: An example of a cryptosystem

This brings up two questions: what assumptions do we need to design such a cryptosystem, and what is meant by security of the cryptosystem? In this lecture we will answer only the first question.

4.2 Necessary assumptions

For one-way functions to exist, our first assumption must be that P ≠ NP. If we allow the adversary to use non-deterministic polynomial time machines, then A can always break our encryption. If P ≠ NP, then we know that there is some function f such that f⁻¹(y) is hard to compute by a poly-time adversary A on some number of instances of y. All we can say about the number of those hard instances is that they are infinitely many. If there were only finitely many hard instances, then one could create a look-up table for those hard pairs, and thus get a polynomial time algorithm that easily inverts f on all instances.

BPP and NP

Next, we want to assume that the adversary A is as ‘smart’ as possible.
Since we are allowing our machine to flip coins, we want to assume that A can do that as well, and that A can occasionally make mistakes. Thus, let A be any BPP (bounded probabilistic polynomial time) machine. Our next assumption becomes BPP ≠ NP. Since P ⊆ BPP, this assumption is at least as strong as the assumption P ≠ NP. Now we are guaranteed that there is some f such that a BPP adversary fails to invert f on some infinite number of instances. By “fails to invert” we mean that the probability that A finds some z so that f(z) = y is very small. We make this notion precise in the next section.

Ave-P and Ave-NP

It is important to note that the assumption BPP ≠ NP, while necessary, is not sufficient to guarantee the existence of one-way functions. Even though we know that infinitely many hard instances exist, we may not have a way of actually finding them. Consider the function f defined below, which is built from the well-known NP-complete problem of determining whether a given graph G has a Hamiltonian cycle:

f(G, H) = G if H is a Hamiltonian cycle of G, and f(G, H) = 00...0 (a string of |G| zeros) otherwise.

If BPP ≠ NP, there may be infinitely many pairs (G, H) for which f is hard to invert,