A Proof of Security of Yao’s Protocol for Two-Party Computation
Yehuda Lindell∗ Benny Pinkas† June 26, 2006
Abstract
In the mid-1980s, Yao presented a constant-round protocol for securely computing any two-party functionality in the presence of semi-honest adversaries (FOCS 1986). In this paper, we provide a complete description of Yao's protocol, along with a rigorous proof of security. Despite the importance of Yao's protocol to the theory of cryptography, and in particular to the field of secure computation, to the best of our knowledge, this is the first time that an explicit proof of security has been published.
1 Introduction
In the setting of two-party computation, two parties with respective private inputs x and y wish to jointly compute a functionality f(x,y) = (f1(x,y), f2(x,y)), such that the first party receives f1(x,y) and the second party receives f2(x,y). This functionality may be probabilistic, in which case f(x,y) is a random variable. Loosely speaking, the security requirements are that nothing is learned from the protocol other than the output (privacy), and that the output is distributed according to the prescribed functionality (correctness). The definition of security that has become standard today [10, 11, 1, 4] blends these two conditions. In this paper, we consider the problem of achieving security in the presence of semi-honest (or passive) adversaries who follow the protocol specification, but attempt to learn additional information by analyzing the transcript of messages received during the execution.
The first general solution for the problem of secure two-party computation in the presence of semi-honest adversaries was presented by Yao [15]. Later, solutions were provided for the multiparty and malicious adversarial cases by Goldreich et al. [9]. These ground-breaking results essentially began the field of secure multiparty computation and served as the basis for countless papers. In addition to its fundamental theoretic contribution, Yao's protocol is remarkably efficient in that it has only a constant number of rounds and uses one oblivious transfer per input bit only (with no additional oblivious transfers in the rest of the computation). Unfortunately, to the best of our knowledge, a full proof of security of Yao's protocol has never been published. Our motivation for publishing such a proof is twofold. First, Yao's result is central to the field of secure computation. This is true both because of its historic importance as the first general solution to the two-party problem, and because many later results have relied on it in their constructions. As such, having a rigorous proof of the result is paramount. Second, the current situation is very frustrating for those who wish to study secure multiparty computation, but are unable to find a complete presentation of one of the most basic results in the field. We hope to correct this situation in this paper.
∗Department of Computer Science, Bar-Ilan University, Israel. email: lindell@cs.biu.ac.il. Most of this work was carried out while at IBM T.J.Watson Research, New York.
†Department of Computer Science, Haifa University, Israel. email: benny@pinkas.net. Most of this work was carried out while at HP Labs, New Jersey.
Yao's protocol [15]. Let f be a polynomial-time functionality (assume for now that it is deterministic), and let x and y be the parties' respective inputs. The first step is to view the function f as a boolean circuit C. In order to describe Yao's protocol, it is helpful to first recall how such a circuit is computed. Let x and y be the parties' inputs. Then, the circuit C(x,y) is computed gate-by-gate, from the input wires to the output wires. Once the incoming wires to a gate g have obtained values α, β ∈ {0, 1}, it is possible to give the outgoing wires of the gate the value g(α, β). The output of the circuit is given by the values obtained in the output wires of the circuit. Thus, essentially, computing a circuit involves allocating appropriate zero-one values to the wires of the circuit. In the description below, we refer to four different types of wires in a circuit: circuit-input wires (that receive the input values x and y), circuit-output wires (that carry the value C(x,y)), gate-input wires (that enter some gate g), and gate-output wires (that leave some gate g).
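To make this gate-by-gate evaluation concrete, the following minimal Python sketch computes a plain (un-garbled) circuit. The representation used here (gates listed in topological order as (operation, input wire, input wire, output wire) tuples) is our own illustrative choice and not part of Yao's construction.

```python
# A minimal sketch of plain (un-garbled) circuit evaluation, gate by gate.
# The circuit representation is illustrative only: each gate is a tuple
# (op, in1, in2, out) of wire labels, listed in topological order.

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b, "XOR": lambda a, b: a ^ b}

def eval_circuit(gates, inputs, output_wires):
    wires = dict(inputs)               # circuit-input wires get the bits of x and y
    for op, in1, in2, out in gates:    # process gates from the inputs towards the outputs
        wires[out] = GATES[op](wires[in1], wires[in2])
    return [wires[w] for w in output_wires]

# Example: C(x, y) = (x1 AND y1) XOR (x2 OR y2) with x = 10, y = 11.
circuit = [("AND", "x1", "y1", "g1"), ("OR", "x2", "y2", "g2"), ("XOR", "g1", "g2", "out")]
print(eval_circuit(circuit, {"x1": 1, "x2": 0, "y1": 1, "y2": 1}, ["out"]))  # -> [0]
```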
We now present a high-level description of Yao's protocol. The construction is actually a "compiler" that takes any polynomial-time functionality f, or actually a circuit C that computes f, and constructs a protocol for securely computing f in the presence of semi-honest adversaries. In a secure protocol, the only value learned by a party should be its output. Therefore, the values that are allocated to all wires that are not circuit-output, should not be learned by either party (these values may reveal information about the other party's input that could not be otherwise learned from the output). The basic idea behind Yao's protocol is to provide a method of computing a circuit so that values obtained on all wires other than circuit-output wires are never revealed. For every wire in the circuit, two random values are specified such that one value represents 0 and the other represents 1. For example, let w be the label of some wire. Then, two values k_w^0 and k_w^1 are chosen, where k_w^σ represents the bit σ. An important observation here is that even if one of the parties knows the value k_w^σ obtained by the wire w, this does not help it to determine if σ = 0 or σ = 1 (because both k_w^0 and k_w^1 are identically distributed). Of course, the difficulty with such an idea is that it seems to make computation of the circuit impossible. That is, let g be a gate with incoming wires w1 and w2 and output wire w3. Then, given two random values k_1^σ and k_2^τ, it does not seem possible to compute the gate because σ and τ are unknown. We therefore need a method of computing the value of the output wire of a gate (also a random value k_3^0 or k_3^1), given the value of the two input wires to that gate. In short, this method involves providing "garbled computation tables" that map the random input values to random output values. However, this mapping should have the property that given two input values, it is only possible to learn the output value that corresponds to the output of the gate (the other output value must be kept secret). This is accomplished by viewing the four possible inputs to the gate k_1^0, k_1^1, k_2^0, k_2^1 as encryption keys. Then, the output values k_3^0 and k_3^1, which are also keys, are encrypted under the appropriate keys from the incoming wires. For example, let g be an OR gate. Then, the key k_3^1 is encrypted under the pairs of keys associated with the values (1,1), (1,0) and (0,1). In contrast, the key k_3^0 is encrypted under the pair of keys associated with (0,0). See Table 1 below.
Table 1: Garbled OR Gate

input wire w1 | input wire w2 | output wire w3 | garbled computation table
k_1^0         | k_2^0         | k_3^0          | E_{k_1^0}(E_{k_2^0}(k_3^0))
k_1^0         | k_2^1         | k_3^1          | E_{k_1^0}(E_{k_2^1}(k_3^1))
k_1^1         | k_2^0         | k_3^1          | E_{k_1^1}(E_{k_2^0}(k_3^1))
k_1^1         | k_2^1         | k_3^1          | E_{k_1^1}(E_{k_2^1}(k_3^1))
Notice that given the input-wire keys k_1^α and k_2^β corresponding to α and β, and the four table values (found in the fourth column of Table 1), it is possible to decrypt and obtain the output-wire key k_3^{g(α,β)}. Furthermore, as required above, this is the only value that can be obtained (the other keys on the input wires are not known and so only a single table value can be decrypted). In other words, it is possible to compute the output key k_3^{g(α,β)} of a gate, and only that key, without learning anything about the real values α, β or g(α, β). (We note that the values of the table are randomly ordered so that a key's position does not reveal anything about the value that it is associated with. Despite this random ordering, the specific construction is such that given a pair of input-wire keys, it is possible to locate the table entry that is encrypted by those keys.)
So far we have described how to construct a single garbled gate. A garbled circuit consists of garbled gates along with "output decryption tables". These tables map the random values on circuit-output wires back to their corresponding real values. That is, for a circuit-output wire w, the pairs (0, k_w^0) and (1, k_w^1) are provided. Then, after obtaining the key k_w^γ on a circuit-output wire, it is possible to determine the actual output bit by comparing the key to the values in the output decryption table.1 Notice that given the keys associated with inputs x and y, it is possible to (obliviously) compute the entire circuit gate-by-gate. Then, having obtained the keys on the circuit-output wires, these can be "decrypted" providing the result C(x,y).
The above construction can be described metaphorically using "locked boxes". The basic idea, as above, is that every wire is allocated two padlock keys; one key is associated with the bit zero and the other with the bit one. Then, for each gate four doubly-locked boxes are provided, where each box is associated with a row in the truth table computing the gate (i.e., one box is associated with inputs (0,0), another (0,1) and so on). The four boxes are locked so that each pair of keys (one from each input wire) opens exactly one box. Furthermore, in each box a single key relating to the output wire of the gate is stored. This key is chosen so that it correctly associates the input bits to the output bit of the gate. (For example, if the keys that open the box are associated with 0 and 1 and the gate computes the AND function, then the key inside the box is the key associated with 0 in the output wire.) The first important observation is that given the set of keys that are associated with the parties' inputs, it is possible to "compute the circuit" by opening the locked boxes one at a time (for each gate, only one box will open). The process concludes at the output-gate boxes, which can contain the actual output rather than a key. The second important observation is that the computation of the circuit reveals absolutely no information beyond the output itself. This is due to the fact that the keys are not labelled and so it is impossible to know if a given key is associated with zero or with one. This all holds under the assumption that the keys associated with the circuit-input wires are obtained in an "oblivious manner" that does not reveal the association with the parties' inputs. Furthermore, we must assume that only a single set of keys is provided (and so in each gate only a single box can be opened). Of course, in the actual garbled-circuit construction, double-encryption replaces doubly-locked boxes and decryption keys replace physical padlock keys.
We now proceed to informally describe Yao's protocol. In this protocol, one of the parties, henceforth the sender, constructs a garbled circuit and sends it to the other party, henceforth the receiver. The sender and receiver then interact so that the receiver obtains the input-wire keys that are associated with the inputs x and y (this interaction is described below). Given these keys, the receiver then computes the circuit as described, obtains the output and concludes the protocol. This description only shows how the receiver obtains its output, while ignoring the output of the sender. However, the receiver's output can include the sender's output in encrypted form (where only the sender knows the decryption key). Then, the receiver can just forward the sender its output at the end of the computation. Since the sender's output is encrypted, the receiver learns nothing more than its own output, as required.
1Alternatively, in the output gates it is possible to directly encrypt 0 or 1 instead of k_w^0 or k_w^1, respectively.
It remains for us to describe how the receiver obtains the keys for the circuit-input wires. Here we differentiate between the inputs of the sender and the inputs of the receiver. Regarding the sender, it simply sends the receiver the values that correspond to its input. That is, if its ith input bit is 0 and the wire w_i receives this input, then the sender just hands the receiver the string k_i^0. Notice that since all of the keys are identically distributed, the receiver can learn nothing about the sender's input from these keys. Regarding the receiver, this is more problematic. The sender cannot hand it all of the keys pertaining to its input (i.e., both the 0 and 1 keys on the receiver's input wires), because this would enable the receiver to compute more than just its output. (For a given input x of the sender, this would enable the receiver to compute C(x, ỹ) for every ỹ. This is much more information than a single value C(x, y).) On the other hand, the receiver cannot openly tell the sender which keys to send it, because then the sender would learn the receiver's input. The solution to this is to use a 1-out-of-2 oblivious transfer protocol [13, 6]. In such a protocol, a sender inputs two values x_0 and x_1 (in this case, k_w^0 and k_w^1 for some circuit-input wire w), and a receiver inputs a bit σ (in this case, corresponding to its appropriate input bit). The outcome of the protocol is that the receiver obtains the value x_σ (in this case, the key k_w^σ). Furthermore, the receiver learns nothing about the other value x_{1−σ}, and the sender learns nothing about the receiver's input σ. By having the receiver obtain its keys in this way, we obtain that (a) the sender learns nothing of the receiver's input value, and (b) the receiver obtains only a single set of keys and so can compute the circuit on only a single value, as required. This completes our high-level description of Yao's protocol.
Related work. Sketches of Yao’s protocol have appeared in a number of places; see, for example, [2, 12, 7]. In addition, an extension of Yao’s protocol to the multiparty case was presented in [3], with a full proof in [14]. This work also contains an implicit description (and proof) of Yao’s protocol. We remark also that a full proof of [9] has recently appeared in [7].
2 Definitions
We denote the length of the inputs and the security parameter by n. We say that a function μ(·) is negligible in n (or just negligible) if for every positive polynomial p(·) and all sufficiently large n's it holds that μ(n) < 1/p(n). Let S be an infinite set and let X = {X_s}_{s∈S} and Y = {Y_s}_{s∈S} be distribution ensembles. We say that X and Y are computationally indistinguishable, denoted X ≡c Y, if for every non-uniform probabilistic polynomial-time distinguisher D, every polynomial p(·), and all sufficiently large s ∈ S, it holds that |Pr[D(X_s) = 1] − Pr[D(Y_s) = 1]| < 1/p(|s|). Finally, for a probabilistic machine M, we denote by a ← M(x) the event of obtaining a by invoking M on input x and a uniformly chosen random tape.
2.1 Secure Two-Party Protocols for Semi-Honest Adversaries
The model that we consider here is that of two-party computation in the presence of static semi- honest adversaries. Such an adversary controls one of the parties (statically, and so at the onset of the computation) and follows the protocol specification exactly. However, it may try to learn more information than allowed by looking at the transcript of messages that it received. Since we only consider static semi-honest adversaries here, we will sometimes omit the qualification that security
is with respect to such adversaries only. The definitions presented here are according to Goldreich in [7].
Two-party computation. A two-party protocol problem is cast by specifying a random process that maps pairs of inputs to pairs of outputs (one for each party). We refer to such a process as a functionality and denote it f : {0, 1}∗ × {0, 1}∗ → {0, 1}∗ × {0, 1}∗, where f = (f1, f2). That is, for every pair of inputs x, y ∈ {0, 1}n, the output-pair is a random variable (f1(x, y), f2(x, y)) ranging over pairs of strings. The first party (with input x) wishes to obtain f1(x,y) and the second party (with input y) wishes to obtain f2(x,y). We often denote such a functionality by (x, y) → (f1(x, y), f2(x, y)). Thus, for example, the oblivious transfer functionality is specified by ((z0, z1), σ) → (λ, zσ), where λ denotes the empty string. When the functionality f is probabilistic, we sometimes use the notation f(x,y,r), where r is a uniformly chosen random tape used for computing f.
Privacy by simulation. Intuitively, a protocol is secure if whatever can be computed by a party participating in the protocol can be computed based on its input and output only. This is formalized according to the simulation paradigm. Loosely speaking, we require that a party's view in a protocol execution be simulatable given only its input and output.2 This then implies that the parties learn nothing from the protocol execution itself, as desired.
Definition of security. We begin with the following notation:
• Let f = (f1,f2) be a probabilistic polynomial-time functionality and let π be a two-party
protocol for computing f.
• The view of the ith party (i ∈ {1, 2}) during an execution of π on (x, y) is denoted view_i^π(x, y) and equals (x, r^i, m_1^i, ..., m_t^i), where r^i equals the contents of the ith party's internal random tape, and m_j^i represents the jth message that it received.
• The output of the ith party during an execution of π on (x, y) is denoted output_i^π(x, y) and can be computed from its own view of the execution. Denote output^π(x, y) = (output_1^π(x, y), output_2^π(x, y)).
Definition 1 (security w.r.t. semi-honest behavior): Let f = (f1,f2) be a functionality. We say that π securely computes f in the presence of static semi-honest adversaries if there exist probabilistic polynomial-time algorithms S1 and S2 such that
{(S1(x, f1(x,y)), f(x,y))}_{x,y∈{0,1}∗} ≡c {(view_1^π(x,y), output^π(x,y))}_{x,y∈{0,1}∗}    (1)
{(S2(y, f2(x,y)), f(x,y))}_{x,y∈{0,1}∗} ≡c {(view_2^π(x,y), output^π(x,y))}_{x,y∈{0,1}∗}    (2)
where |x| = |y|.
Equations (1) and (2) state that the view of a party can be simulated by a probabilistic polynomial-
time algorithm given access to the party’s input and output only. We emphasize that the adversary
2A different definition of security for multiparty computation compares the output of a real protocol execution to the output of an ideal computation involving an incorruptible trusted third party. This trusted party receives the parties’ inputs, computes the functionality on these inputs and returns to each their respective output. Loosely speaking, a protocol is secure if any real-model adversary can be converted into an ideal-model adversary such that the output distributions are computationally indistinguishable. We remark that in the case of semi-honest adversaries, this definition is equivalent to the (simpler) simulation-based definition presented here; see [7].
here is semi-honest and therefore the view is exactly according to the protocol definition. We note that it is not enough for the simulator Si to generate a string indistinguishable from view_i^π(x, y). Rather, the joint distribution of the simulator's output and the functionality output f(x, y) must be indistinguishable from (view_i^π(x, y), output^π(x, y)). This is necessary for probabilistic functionalities; see [4, 7] for a full discussion.
A simpler formulation for deterministic functionalities. In the case that the functionality f is deterministic, a simpler definition can be used. Specifically, we do not need to consider the joint distribution of the simulator's output with the protocol output. Rather, we separately require that
{output^π(x,y)}_{x,y∈{0,1}∗} ≡c {f(x,y)}_{x,y∈{0,1}∗}
and in addition, that there exist S1 and S2 such that:
{S1(x, f1(x,y))}_{x,y∈{0,1}∗} ≡c {view_1^π(x,y)}_{x,y∈{0,1}∗}    (3)
{S2(y, f2(x,y))}_{x,y∈{0,1}∗} ≡c {view_2^π(x,y)}_{x,y∈{0,1}∗}    (4)
The reason that this suffices is that when f is deterministic, output^π(x,y) must equal f(x,y). Furthermore, the distinguisher for the ensembles can compute f(x,y) by itself (because it is given x and y which are the indices of the ensemble). See [7, Section 7.2.2] for more discussion.
Deterministic same-output functionalities. We say that a functionality f = (f1, f2) is same-output if f1 = f2. In our presentation, we will show how to securely compute deterministic same-output functionalities only. This suffices for obtaining secure protocols for arbitrary probabilistic functionalities.
In order to see this, first note that given a protocol for securely computing any deterministic functionality, it is possible to construct a secure protocol for computing any probabilistic functionality as follows. Let f = (f1, f2) be a probabilistic functionality. Then, define a deterministic functionality f′((x, r), (y, s)) = f(x, y, r ⊕ s) and assume that we have a secure protocol π′ for computing f′. Now, the following is a secure protocol π for computing f. Upon respective inputs x, y ∈ {0,1}^n, parties P1 and P2 choose uniformly distributed strings r ∈_R {0,1}^{q(n)} and s ∈_R {0,1}^{q(n)}, respectively, where q(n) is an upper bound on the number of random bits used to compute f. They then invoke the protocol π′ for securely computing f′ in order to both obtain f′((x, r), (y, s)) = f(x, y, r ⊕ s). The fact that this yields a secure protocol for computing f was formally proved in [7, Section 7.3]. Note that the size of the circuit computing f′ is of the same order as the size of the circuit computing f. The only difference is that the circuit for f′ has |r| additional exclusive-or gates, where |r| is the length of f's random tape.
So far we have shown that it suffices to consider deterministic functionalities. Next, we show that the restriction to same-output functionalities is also not a limitation. That is, as above it is possible to construct a secure protocol for computing arbitrary functionalities from a secure protocol for computing same-output functionalities. In particular, let f = (f1, f2) be an arbitrary functionality and define the same-output functionality f′ as follows: f′((x, r), (y, s)) = (f1(x, y) ⊕ r ∥ f2(x, y) ⊕ s), where a∥b denotes the concatenation of a with b. Now, given a secure protocol π′ for computing the same-output functionality f′, it is possible to securely compute the functionality f = (f1, f2). As above, upon respective inputs x, y ∈ {0,1}^n, parties P1 and P2 choose uniformly distributed strings r ∈_R {0,1}^{q(n)} and s ∈_R {0,1}^{q(n)}, respectively, where q(n) is an upper bound on the output length of f on inputs of length n. They then invoke the protocol π′ for securely computing f′ in
order to both obtain f′((x,r),(y,s)); denote the first half of this output by v and the second half by w. Then, upon receiving (v, w), party P1 computes v ⊕ r and obtains f1(x, y). Likewise, upon receiving (v, w), party P2 computes w ⊕ s and obtains f2(x, y). It is easy to see that the resulting protocol securely computes f. This is due to the fact that r completely obscures f1(x,y) from P2 and likewise s completely obscures f2(x,y) from P1. Thus, neither party learns more than its own input. (In fact, the strings f1(x, y) ⊕ r and f2(x, y) ⊕ s are uniformly distributed and so are easily simulated.) As above, the size of the circuit computing f′ is of the same order as the size of the circuit computing f. The only difference is that f′ has one additional exclusive-or gate for every circuit-output wire.
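The two reductions just described can be summarized in a short sketch. This is purely illustrative: the function f_prime below is the derived same-output functionality f′, and its direct evaluation stands in for a secure protocol computing f′; all names are ours.

```python
# Illustrative sketch of the reductions above (not itself a secure protocol):
# each party contributes a random mask, the derived same-output functionality f'
# is computed once (here in the clear, standing in for a secure protocol for f'),
# and each party unmasks only its own half of the output.
import secrets

def xor_bytes(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def f_prime(f, x, r, y, s):
    # f'((x, r), (y, s)) = ( f1(x, y) XOR r , f2(x, y) XOR s ); both parties get both halves.
    # (For a probabilistic f, the coins would likewise be set to r XOR s.)
    out1, out2 = f(x, y)
    return xor_bytes(out1, r), xor_bytes(out2, s)

def run_reduction(f, x, y, out_len):
    r = secrets.token_bytes(out_len)           # P1's mask
    s = secrets.token_bytes(out_len)           # P2's mask
    v, w = f_prime(f, x, r, y, s)              # stand-in for the secure protocol for f'
    return xor_bytes(v, r), xor_bytes(w, s)    # P1 recovers f1(x, y); P2 recovers f2(x, y)

# Toy functionality: f1 = bitwise AND, f2 = bitwise OR of the two inputs.
f = lambda x, y: (bytes(a & b for a, b in zip(x, y)), bytes(a | b for a, b in zip(x, y)))
print(run_reduction(f, b"\x01\x00", b"\x01\x01", 2))   # -> (b'\x01\x00', b'\x01\x01')
```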
Since it suffices to consider deterministic same-output functions only, we will present Yao’s protocol for this simpler case. The generalization to arbitrary probabilistic functionalities will then be derived by corollary from the above arguments.
3 Tools
3.1 “Special” Private-Key Encryption
Our construction uses a private-key encryption scheme that has indistinguishable encryptions for multiple messages. Informally speaking, this means that for every two (known) vectors of messages x and y, no polynomial-time adversary can distinguish an encryption of the vector x from an encryption of the vector y. We stress that according to our construction of Yao's garbled circuit, the encryption scheme must be secure for multiple messages. Therefore one-time pads cannot be used. In our proof of security, we will actually use an encryption scheme that is secure under chosen-plaintext attacks (strictly speaking this is not necessary, but it does simplify the presentation). We refer the reader to [7, Chapter 5] for formal definitions of secure encryption.
We will require an additional property from the encryption scheme that we use. Loosely speaking, we require that an encryption under one key will fall in the range of an encryption under another key with negligible probability. We also require that given the key k, it is possible to efficiently verify if a given ciphertext is in the range of k. (These two requirements are very easily satisfied, as demonstrated below.) The reason that we require these additional properties is to enable the receiver to correctly compute the garbled circuit. Recall that in every gate, the receiver is given two random keys that enable it to decrypt and obtain the random key for the gate-output wire; see Table 1. A problem that immediately arises here is how the receiver can know which value is the intended decryption. (Notice that it may be the case that all strings can be decrypted.) By imposing the requirement that encryptions under one key will almost never be valid encryptions under another key, and requiring that this can also be efficiently verified, it will hold that only one of the values will be valid (except with negligible probability). The receiver will then take the (single) correctly-decrypted value as the key for the gate-output wire.
We now formally define the requirements on the encryption scheme:
Definition 2 Let (G, E, D) be a private-key encryption scheme and denote the range of a key in the scheme by Range_n(k) = {E_k(x)}_{x∈{0,1}^n}. Then,
1. We say that (G, E, D) has an elusive range if for every probabilistic polynomial-time machine A, every polynomial p(·) and all sufficiently large n’s
Pr_{k←G(1^n)}[A(1^n) ∈ Range_n(k)] < 1/p(n)
2. We say that (G, E, D) has an efficiently verifiable range if there exists a probabilistic polynomial-time machine M such that M(1^n, k, c) = 1 if and only if c ∈ Range_n(k).
By convention, for every c ∉ Range_n(k), we have that D_k(c) = ⊥.
Notice that the requirements for an "elusive range" are quite weak. In particular, the machine A is oblivious in that it is given no information on k and no examples of ciphertexts within Range_n(k). Thus, A must "hit" the range with no help whatsoever.
We now show that it is easy to construct encryption schemes with the above properties. Let F = {f_k} be a family of pseudorandom functions [8], where f_k : {0,1}^n → {0,1}^{2n} for k ∈ {0,1}^n. Then, define
E_k(x) = ⟨r, f_k(r) ⊕ x0^n⟩
where x ∈ {0,1}^n, r ∈_R {0,1}^n and x0^n denotes the concatenation of x and 0^n.3 The fact that this encryption scheme has indistinguishable encryptions under chosen-plaintext attacks is well-known. Regarding our additional requirements (a code sketch follows the two items below):
1. Elusive range: Notice that if a truly random function f_rand was used instead of f_k, then the probability that a value c output by the machine A is in the range of ⟨r, f_rand(r) ⊕ x0^n⟩ is negligible. This follows from the fact that obtaining such a c involves finding a value r and then predicting the last n bits of f_rand(r) (notice that these last n bits are fully revealed in f_rand(r) ⊕ x0^n). Since f_rand is random, this prediction can succeed with probability at most 2^{−n}. Now, by the assumption that f_k is pseudorandom, it follows that a polynomial-time machine A will also succeed in generating such a c with at most negligible probability. Otherwise, such an A could be used to distinguish f_k from a random function.
2. Efficiently verifiable range: Given k and c = ⟨r, s⟩, it is possible to compute f_k(r) and verify that the last n bits of f_k(r) equal the last n bits of s. If yes, then it follows that c ∈ Range_n(k), and if not then c ∉ Range_n(k).
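A minimal sketch of this construction follows, with HMAC-SHA256 standing in for the pseudorandom function f_k (an assumption made purely for illustration; any PRF of the stated input/output lengths would do). Decryption returns None, playing the role of ⊥, exactly when the ciphertext fails the range check.

```python
# Sketch of E_k(x) = <r, f_k(r) XOR (x || 0^n)>, with HMAC-SHA256 playing the
# role of the PRF f_k: {0,1}^n -> {0,1}^2n (here n = 128 bits, so 2n = 256 bits).
import hmac, hashlib, secrets

N = 16  # n in bytes: keys and plaintexts are n-bit strings

def prf(k, r):
    return hmac.new(k, r, hashlib.sha256).digest()   # 2n = 32 bytes of output

def keygen():
    return secrets.token_bytes(N)

def enc(k, x):
    assert len(x) == N
    r = secrets.token_bytes(N)
    pad = prf(k, r)
    ct = bytes(a ^ b for a, b in zip(x + bytes(N), pad))  # (x || 0^n) XOR f_k(r)
    return r, ct

def dec(k, c):
    r, ct = c
    pt = bytes(a ^ b for a, b in zip(ct, prf(k, r)))
    if pt[N:] != bytes(N):       # range check: the last n bits must decrypt to 0^n
        return None              # c is not in Range_n(k); None stands for "bottom"
    return pt[:N]

# A ciphertext almost never lies in the range of an independent key:
k1, k2 = keygen(), keygen()
c = enc(k1, b"A" * N)
print(dec(k1, c))   # -> b'AAAAAAAAAAAAAAAA'
print(dec(k2, c))   # -> None (except with negligible probability)
```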
We stress that there are many possible ways to ensure correctness in the decryption of a gate. For example, as described in [12], explicit (and randomly permuted) indices may be used instead.4
Double-encryption security. In Yao's protocol, the private-key encryption scheme is used in order to double-encrypt values. As we have described, the protocol works by double-encrypting four values, where each double encryption uses a different combination of the keys associated with the input wires. Intuitively, given only two keys, it is possible to decrypt only one of the values. However, formally, this must be proven. We define a double-encryption experiment here and prove that any encryption scheme that is secure under chosen plaintext attacks is secure for double-encryption here. We remark that the experiment does not look very natural. However, it is exactly what is needed in our proof of security. Let (G, E, D) be a private-key encryption scheme and assume without loss of generality that G(1^n) returns a string of length n (i.e., the length of a key generated with security parameter 1^n is exactly n). We denote E(k_0, k_1, m) = E_{k_0}(E_{k_1}(m)). The experiment definition is as follows:
3In fact, the string of 0’s can have any super-logarithmic length. We set it to be of length n for simplicity.
4We chose this method somewhat arbitrarily. We feel some preference due to the fact that the gate description and circuit construction is the simplest this way. As we will see, however, some price is paid in the proof of correctness.
Expt_A^double(n, σ):
1. The adversary A is invoked upon input 1^n and outputs two keys k_0 and k_1 of length n and two triples of messages (x_0, y_0, z_0) and (x_1, y_1, z_1) where all messages are of the same length.
2. Two keys k_0′, k_1′ ← G(1^n) are chosen for the encryption scheme.
3. A is given the challenge ciphertext ⟨E(k_0, k_1′, x_σ), E(k_0′, k_1, y_σ), E(k_0′, k_1′, z_σ)⟩ as well as oracle access to E(·, k_1′, ·) and E(k_0′, ·, ·).5
4. A outputs a bit b and this is taken as the output of the experiment.
Security under double encryption simply means that the adversary outputs 1 when σ = 0 with almost the same probability as it outputs 1 when σ = 1.
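The experiment can also be written out as a short sketch. It is parameterized by the key-generation and encryption algorithms of an arbitrary scheme (one whose E can also encrypt intermediate ciphertexts) and by an adversary with a hypothetical choose/guess interface; the interface names are ours. The main point is to make the oracle access of Footnote 5 explicit.

```python
# Sketch of Expt_double. G and E are the key-generation and encryption algorithms
# of some private-key scheme, passed in as Python callables; `adversary` is any
# object providing the hypothetical methods choose(n) and guess(...).

def double_E(E, k_outer, k_inner, m):
    # E(k0, k1, m) = E_{k0}(E_{k1}(m)) as defined in the text.
    return E(k_outer, E(k_inner, m))

def expt_double(G, E, adversary, n, sigma):
    # Step 1: A outputs two keys and two message triples.
    k0, k1, triples = adversary.choose(n)      # triples = ((x0,y0,z0), (x1,y1,z1))
    x, y, z = triples[sigma]
    # Step 2: two fresh keys are chosen honestly.
    k0p, k1p = G(n), G(n)
    # Step 3: the challenge ciphertext plus the two oracles of Footnote 5.
    challenge = (double_E(E, k0, k1p, x),
                 double_E(E, k0p, k1, y),
                 double_E(E, k0p, k1p, z))
    oracle1 = lambda k, m: double_E(E, k, k1p, m)    # E(., k1', .)
    oracle2 = lambda k, m: double_E(E, k0p, k, m)    # E(k0', ., .)
    # Step 4: A outputs its guess bit, which is the output of the experiment.
    return adversary.guess(challenge, oracle1, oracle2)
```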
Definition 3 An encryption scheme (G, E, D) is secure under chosen double encryption if for every non-uniform probabilistic polynomial-time machine A, every polynomial p(·) and all sufficiently
large n’s,
|Pr[Expt_A^double(n, 1) = 1] − Pr[Expt_A^double(n, 0) = 1]| < 1/p(n)
We now show that any encryption scheme that is secure (i.e., has indistinguishable encryptions) under chosen plaintext attacks is secure under chosen double encryption. We remark that all security here is in the non-uniform model (and so we assume security under chosen plaintext attacks for non-uniform adversaries). It is well known that under chosen plaintext attacks, security for a single message implies security for multiple messages, see [7, Section 5.4], and we will thus assume this in our proof. For the sake of completeness, we define the chosen-plaintext experiment for the case of multiple messages. In fact, we consider only the case of two messages, because this suffices for our proof later.
Expt_A^cpa(n, σ):
1. A key k ← G(1^n) is chosen and the adversary A is invoked with input 1^n and oracle access to E_k(·). The adversary A outputs two pairs of messages (x_0, y_0) and (x_1, y_1).
2. The challenge ciphertexts c_1 = E_k(x_σ) and c_2 = E_k(y_σ) are computed.
3. A is given the pair (c_1, c_2) as well as continued oracle access to E_k(·).
4. A outputs a bit b and this is taken as the output of the experiment.
The definition of security under chosen plaintext attacks is analogous to Definition 3 except that Expt_A^double is replaced with Expt_A^cpa. We are now ready to state the lemma.
Lemma 4 Let (G, E, D) be a private-key encryption scheme that has indistinguishable encryptions under chosen plaintext attacks in the presence of non-uniform adversaries. Then (G, E, D) is secure under chosen double encryption.
5Note that in the ciphertexts that A receives, at least one of the keys used is unknown to A. In addition, the oracle access here means that A can provide any k and m to E(·, k1′ , ·) and receive back E(k, k1′ , m); likewise for E(k0′ , ·, ·).
Proof Sketch: We do not provide a full proof of this lemma, but rather a detailed proof sketch only. The full proof can be derived from this sketch in a straightforward manner. In order to prove this lemma, we define a modified experiment, denoted Expt_A^mod(n, σ), which is exactly the same as Expt_A^double(n, σ) except that the y part of the challenge ciphertext does not depend on σ. That is, the challenge ciphertext equals ⟨E(k_0, k_1′, x_σ), E(k_0′, k_1, y_0), E(k_0′, k_1′, z_σ)⟩; note that x_σ and z_σ are encrypted as before, but y_0 is always encrypted (even if σ = 1). Clearly,
Pr[Expt_A^double(n, 0) = 1] = Pr[Expt_A^mod(n, 0) = 1]    (5)
because in both cases, the encrypted values are x_0, y_0 and z_0. We will prove that for every non-uniform probabilistic polynomial-time adversary and for some negligible function μ(·), the following two equations hold:
|Pr[Expt_A^mod(n, 0) = 1] − Pr[Expt_A^mod(n, 1) = 1]| < μ(n)    (6)
|Pr[Expt_A^mod(n, 1) = 1] − Pr[Expt_A^double(n, 1) = 1]| < μ(n)    (7)
Combining Equations (5) to (7), we obtain that (G, E, D) is secure under chosen double encryption. We will prove Eq. (6); Eq. (7) is proven in an analogous way.
We begin by modifying Expt_A^mod in the following way. First, we claim that indistinguishability holds even if the adversary A chooses k_0′ by itself. We can therefore let A choose k_0, k_1 and k_0′. Given that this is the case, we need not generate E(k_0′, k_1, y_0) as part of the challenge ciphertext (because given k_0′ and k_1, A can compute it by itself). For the same reason, we can remove the oracle E(k_0′, ·, ·) from the experiment. We therefore need only to prove that Eq. (6) holds for the further modified experiment Expt_A^{mod′} defined as follows:
Expt_A^{mod′}(n, σ):
1. The adversary A is invoked upon input 1^n and outputs three keys k_0, k_1 and k_0′ of length n and two pairs of messages (x_0, z_0) and (x_1, z_1) where all messages are of the same length.
2. A key k_1′ ← G(1^n) is chosen for the encryption scheme.
3. A is given ⟨E(k_0, k_1′, x_σ), E(k_0′, k_1′, z_σ)⟩ as well as oracle access to E(·, k_1′, ·).
4. A outputs a bit b and this is taken as the output of the experiment.
From what we have stated above, if we prove that the analogue of Eq. (6) holds for Expt_A^{mod′}, then Eq. (6) itself clearly also holds. However, Expt_A^{mod′} is now almost identical to Expt_A^cpa. The only differences are:
1. In Expt_A^{mod′} the challenge ciphertext is first encrypted with k_1′ (the secret key) and then with k_0 or k_0′, whereas in Expt_A^cpa the challenge ciphertext is encrypted with the secret key only. However, this clearly does not matter because the adversary knows k_0 and k_0′ and so can compute this itself.
2. In Expt_A^{mod′} the oracle given to the adversary is E(·, k_1′, ·) whereas in Expt_A^cpa it is E_k(·). However, since k and k_1′ play the same role as the secretly-chosen key, it is clear that given oracle E_{k_1′}(·) it is possible to efficiently emulate the oracle E(·, k_1′, ·). Therefore, this also makes no difference.
We conclude that Eq. (6) follows from the security of (G, E, D) under chosen plaintext attacks. As we have stated, Eq. (7) is proven in an analogous way, and thus we obtain that (G,E,D) is also secure under chosen double encryption. This concludes the proof sketch.
3.2 Oblivious Transfer
As we have mentioned, the 1-out-of-2 oblivious transfer functionality is defined by ((x0, x1), σ) → (λ,xσ) where λ denotes the empty string. For the sake of self-containment, we will briefly describe the oblivious transfer protocol of [6], that is secure in the presence of semi-honest adversaries. Our description will be for the case that x0, x1 ∈ {0, 1}; when considering semi-honest adversaries, the general case can be obtained by running the single-bit protocol many times in parallel.
Protocol 1 (oblivious transfer [6]):
• Inputs: P1 has x0,x1 ∈ {0,1} and P2 has σ ∈ {0,1}.
• The protocol:
1. P1 randomly chooses a permutation-trapdoor pair (f, t) from a family of enhanced trapdoor
permutations.6 P1 sends f (but not the trapdoor t) to P2.
2. P2 chooses a random v_σ in the domain of f and computes w_σ = f(v_σ). In addition, P2 chooses a random w_{1−σ} in the domain of f, using the "enhanced" sampling algorithm (see Footnote 6). P2 sends (w_0, w_1) to P1.
3. P1 uses the trapdoor t and computes v_0 = f^{−1}(w_0) and v_1 = f^{−1}(w_1). Then, it computes b_0 = B(v_0) ⊕ x_0 and b_1 = B(v_1) ⊕ x_1, where B is a hard-core bit of f. Finally, P1 sends (b_0, b_1) to P2.
4. P2 computes x_σ = B(v_σ) ⊕ b_σ and outputs x_σ.
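The following toy sketch instantiates Protocol 1 with textbook RSA standing in for the enhanced trapdoor permutation and the least-significant bit standing in for the hard-core predicate B; these substitutions, and the tiny parameters, are purely illustrative and carry no security claim.

```python
# Toy sketch of Protocol 1 for single-bit inputs, using textbook RSA in place of
# an enhanced trapdoor permutation and the least-significant bit in place of the
# hard-core bit B. Parameters are tiny and purely illustrative.
import secrets

# Step 1: P1 chooses a permutation-trapdoor pair (f, t) and sends f = (N, e) to P2.
p, q = 61, 53                         # toy primes; a real instantiation needs large ones
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))     # the trapdoor t, kept by P1

f = lambda v: pow(v, e, N)            # the (assumed) trapdoor permutation
f_inv = lambda w: pow(w, d, N)
B = lambda v: v & 1                   # assumed hard-core predicate (illustrative)

def oblivious_transfer(x0, x1, sigma):
    # Step 2: P2 sets w_sigma = f(v_sigma) and samples w_{1-sigma} directly from
    # the range (the "enhanced" sampling), then sends (w0, w1) to P1.
    v_sigma = secrets.randbelow(N - 1) + 1
    w = [0, 0]
    w[sigma] = f(v_sigma)
    w[1 - sigma] = secrets.randbelow(N - 1) + 1
    # Step 3: P1 inverts both values and masks its input bits with the hard-core bits.
    b0 = B(f_inv(w[0])) ^ x0
    b1 = B(f_inv(w[1])) ^ x1
    # Step 4: P2 recovers x_sigma; it learns nothing about x_{1-sigma}.
    return (b0, b1)[sigma] ^ B(v_sigma)

print(oblivious_transfer(x0=0, x1=1, sigma=1))   # -> 1
print(oblivious_transfer(x0=0, x1=1, sigma=0))   # -> 0
```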
The proof to the following theorem can be found in [7, Section 7.3.2].
Theorem 5 Assuming that (f,t) are chosen from a family of enhanced trapdoor permutations, Protocol 1 securely computes the 1-out-of-2 oblivious transfer functionality in the presence of static semi-honest adversaries.
Recall that since the oblivious transfer functionality is deterministic, it suffices to use the simplified definition of Equations (3) and (4). Thus, it is guaranteed that there exist simulators, denoted S_1^ot and S_2^ot, that generate the appropriate views of parties P1 and P2, respectively. We remark that simulator S_1^ot receives P1's input (x_0, x_1) and outputs a full view of P1 that includes the input (x_0, x_1), a random tape, and the incoming messages that P1 expects to see in a real execution (of course, this view output by S_1^ot is only computationally indistinguishable from a real view). Notice that P1 has no output in the oblivious transfer functionality, and so S_1^ot receives only P1's input. The simulator S_2^ot receives P2's input σ and output x_σ, and outputs a view, as described above.
4 Yao’s Two-Party Protocol
We are now ready to describe the protocol. We begin by formally describing how the garbled circuit is constructed. Then, we describe the protocol and prove its security.
6Informally speaking, an enhanced trapdoor permutation has the property that it is possible to sample from the range, so that given the coins used for sampling it is still hard to invert the value. See [7, Appendix C.1] for more details.
4.1 The Garbled Circuit Construction
In this section, we describe the garbled circuit construction. Let C be a boolean circuit that receives two inputs x, y ∈ {0, 1}n and outputs C(x, y) ∈ {0, 1}n (for simplicity, we assume that the input length, output length and the security parameter are all of the same length n). We also assume that C has the property that if a circuit-output wire comes from a gate g, then gate g has no wires that are input to other gates.7 (Likewise, if a circuit-input wire is itself also a circuit-output, then it is not input into any gate.)
We begin by describing the construction of a single garbled gate g in C. The circuit C is boolean, and therefore any gate is represented by a function g : {0,1} × {0,1} → {0,1}. Now, let the two input wires to g be labelled w1 and w2, and let the output wire from g be labelled w3. Furthermore, let k_1^0, k_1^1, k_2^0, k_2^1, k_3^0, k_3^1 be six keys obtained by independently invoking the key-generation algorithm G(1^n); for simplicity, assume that these keys are also of length n. Intuitively, we wish to be able to compute k_3^{g(α,β)} from k_1^α and k_2^β, without revealing any of the other three values k_3^{g(1−α,β)}, k_3^{g(α,1−β)}, k_3^{g(1−α,1−β)}. The gate g is defined by the following four values
c_{0,0} = E_{k_1^0}(E_{k_2^0}(k_3^{g(0,0)}))
c_{0,1} = E_{k_1^0}(E_{k_2^1}(k_3^{g(0,1)}))
c_{1,0} = E_{k_1^1}(E_{k_2^0}(k_3^{g(1,0)}))
c_{1,1} = E_{k_1^1}(E_{k_2^1}(k_3^{g(1,1)}))
where E is from a private-key encryption scheme (G, E, D) that has indistinguishable encryptions under chosen plaintext attacks, and has an elusive and efficiently verifiable range; see Section 3.1. The actual gate is defined by a random permutation of the above values, denoted as c_0, c_1, c_2, c_3; from here on we call them the garbled table of gate g. Notice that given k_1^α and k_2^β, and the values c_0, c_1, c_2, c_3, it is possible to compute the output of the gate k_3^{g(α,β)} as follows. For every i, compute D_{k_2^β}(D_{k_1^α}(c_i)). If more than one decryption returns a non-⊥ value, then output abort. Otherwise, define k_3^γ to be the only non-⊥ value that is obtained. (Notice that if only a single non-⊥ value is obtained, then this will be k_3^{g(α,β)} because it is encrypted under the given keys k_1^α and k_2^β. Later we will show that except with negligible probability, only one non-⊥ value is indeed obtained.)
We are now ready to show how to construct the entire garbled circuit. Let m be the number of wires in the circuit C, and let w1, ..., wm be labels of these wires. These labels are all chosen uniquely with the following exception: if wi and wj are both output wires from the same gate g, then wi = wj (this occurs if the fan-out of g is greater than one). Likewise, if an input bit enters more than one gate, then all circuit-input wires associated with this bit will have the same label.8 Next, for every label wi, choose two independent keys k_i^0, k_i^1 ← G(1^n); we stress that all of these keys are chosen independently of the others. Now, given these keys, the four garbled values of each gate are computed as described above and the results are permuted randomly. Finally, the output or decryption tables of the garbled circuit are computed. These tables simply consist of the values (0, k_i^0) and (1, k_i^1) where wi is a circuit-output wire. (Alternatively, output gates can just compute 0 or 1 directly. That is, in an output gate, one can define c_{α,β} = E_{k_1^α}(E_{k_2^β}(g(α, β))) for every α, β ∈ {0,1}.)
7This requirement is due to our labelling of gates described below; see Footnote 8. We note that this assumption on C increases the number of gates by at most n.
8This choice of labelling is not essential and it is possible to provide unique labels for all wires. However, in such a case, the table of a gate with fan-out greater than one will have to be redefined so that the keys of all of the wires leaving the gate are encrypted. We chose this labelling because it seems to make for a simpler gate definition. We note, however, that due to this choice, we must assume that if a gate g has an output wire exiting from it, then it does not have another wire that exits it and enters another gate. As we have mentioned, this increases the number of gates by at most n.
The entire garbled circuit of C, denoted G(C), consists of the garbled table for each gate and
the output tables. We note that the structure of C is given, and the garbled version of C is simply defined by specifying the output tables and the garbled table that belongs to each gate. This completes the description of the garbled circuit.
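To make the gate construction and the decryption procedure concrete, here is a minimal sketch for a single garbled gate. It reuses, in compact form, the HMAC-based encryption sketched in Section 3.1, extended in the natural way to plaintexts longer than n bits so that the outer encryption can encrypt an inner ciphertext; all helper names are ours.

```python
# Sketch of garbling and evaluating a single gate g with input wires w1, w2 and
# output wire w3, using a length-extended variant of the range-verifiable
# encryption from Section 3.1 (HMAC-SHA256 as the assumed PRF).
import hmac, hashlib, secrets, random

N = 16                                    # n in bytes: key length and tag length
keygen = lambda: secrets.token_bytes(N)

def pad(k, r, length):
    # PRF output stretched to the required length (counter mode over HMAC-SHA256).
    out, ctr = b"", 0
    while len(out) < length:
        out += hmac.new(k, r + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:length]

def enc(k, x):
    r = secrets.token_bytes(N)
    body = x + bytes(N)                   # append 0^n so the range can be verified
    return r + bytes(a ^ b for a, b in zip(body, pad(k, r, len(body))))

def dec(k, c):
    r, ct = c[:N], c[N:]
    pt = bytes(a ^ b for a, b in zip(ct, pad(k, r, len(ct))))
    return pt[:-N] if pt[-N:] == bytes(N) else None    # None plays the role of the ⊥ symbol

def garble_gate(g, k1, k2, k3):
    # k1, k2, k3 map a bit to the key of wires w1, w2, w3; g is the gate function.
    table = [enc(k1[a], enc(k2[b], k3[g(a, b)])) for a in (0, 1) for b in (0, 1)]
    random.shuffle(table)                 # the four values are randomly permuted
    return table

def eval_gate(table, ka, kb):
    hits = []
    for c in table:
        inner = dec(ka, c)                # decrypt the outer layer with the w1 key
        if inner is not None and dec(kb, inner) is not None:
            hits.append(dec(kb, inner))   # then the inner layer with the w2 key
    if len(hits) != 1:
        raise RuntimeError("abort: not exactly one value decrypted correctly")
    return hits[0]

# Example: an AND gate; the keys k1[1], k2[0] yield exactly the key k3[0].
k1, k2, k3 = ({0: keygen(), 1: keygen()} for _ in range(3))
table = garble_gate(lambda a, b: a & b, k1, k2, k3)
print(eval_gate(table, k1[1], k2[0]) == k3[0])   # -> True
```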
Correctness. We now claim that the above garbled circuit enables correct computation of the function. That is, given the appropriate input strings and the garbled table for each gate, it is possible to obtain the correct output. It is at this point that we use the “special” properties of the encryption scheme described in Section 3.1.
Claim 6 (correctness): Let x = x_1···x_n and y = y_1···y_n be two n-bit inputs for C. Furthermore, let w_{in_1}, ..., w_{in_n} be the labels of the circuit-input wires corresponding to x, and let w_{in_{n+1}}, ..., w_{in_{2n}} be the labels of the circuit-input wires corresponding to y. Finally, assume that the encryption scheme used to construct G(C) has an elusive and efficiently verifiable range. Then, given the garbled circuit G(C) and the strings k_{in_1}^{x_1}, ..., k_{in_n}^{x_n}, k_{in_{n+1}}^{y_1}, ..., k_{in_{2n}}^{y_n}, it is possible to compute C(x, y), except with negligible probability.
Proof: We begin by showing that every gate can be "decrypted" correctly. Specifically, let g be a gate with incoming wires w1, w2 and outgoing wire w3. Then, we show that for every α, β ∈ {0,1}, given k_1^α and k_2^β and the garbled table of g, it is possible to determine k_3^{g(α,β)}, except with negligible probability. More formally, let c_0, c_1, c_2, c_3 be the garbled table of gate g. We wish to find c_i such that c_i = E_{k_1^α}(E_{k_2^β}(k_3^{g(α,β)})). We claim that except with negligible probability, there exists a single i such that c_i ∈ Range_n(k_1^α) and D_{k_1^α}(c_i) ∈ Range_n(k_2^β). In other words, at most one of the values decrypts correctly (from here on we use this informal term to mean what is formally described above).
This follows from the fact that the encryption scheme has an elusive range. Specifically, recall that the gate was constructed by first choosing independent values for the gate-input and gate-output wires k_1^0, k_1^1, k_2^0, k_2^1, k_3^0, k_3^1. Next, the values c_0, c_1, c_2 and c_3 are computed. Now, assume that there are (at least) two values c such that for both of them c ∈ Range_n(k_1^α) and D_{k_1^α}(c) ∈ Range_n(k_2^β); denote these two values c_i and c_j. Without loss of generality, assume also that c_i = E_{k_1^α}(E_{k_2^β}(k_3^{g(α,β)})); i.e., assume that c_i should be correctly decrypted. There are two cases regarding c_j:
1. c_j = E_{k_1^α}(E_{k_2^{1−β}}(z)) for z ∈ {k_3^0, k_3^1}:
By our assumption regarding c_j, it follows that c_j ∈ Range_n(k_1^α) and D_{k_1^α}(c_j) ∈ Range_n(k_2^β). This means that E_{k_2^{1−β}}(z) ∈ Range_n(k_2^β). Next, as mentioned above, recall that k_2^{1−β}, k_3^0, and k_3^1 are all uniform and independent of k_2^β. Therefore, we can define a machine A that chooses two random keys k′, k′′ ← G(1^n) and outputs c = E_{k′}(k′′). The probability that c ∈ Range_n(k) for k ← G(1^n) equals the probability that E_{k_2^{1−β}}(z) ∈ Range_n(k_2^β) (recall that z ∈ {k_3^0, k_3^1}). Since the encryption scheme (G, E, D) has an elusive range, we conclude that the probability that c ∈ Range_n(k) is negligible. Therefore, the probability that E_{k_2^{1−β}}(z) ∈ Range_n(k_2^β) is also negligible. This concludes this case.
2. c_j = E_{k_1^{1−α}}(z) for z = E_{k′}(k′′) where k′ ∈ {k_2^0, k_2^1} and k′′ ∈ {k_3^0, k_3^1}:
In this case, we have that E_{k_1^{1−α}}(z) ∈ Range_n(k_1^α). Using the same arguments as above, and noticing once again that k_1^{1−α}, k′ and k′′ are all independent of k_1^α, we have that this case occurs also with at most negligible probability.
Now, given that in every gate at most one c_i decrypts correctly, we prove the claim. In order to do this, we define that the key k is correct for wire wi if k = k_i^α, where α ∈ {0,1} is the value obtained on wire wi when computing the un-garbled circuit C on inputs (x, y). By induction on the circuit, starting from the bottom and working up, we show that for every wire, the correct key for the wire is obtained. This holds for the circuit-input wires by the fact that the keys k_{in_1}^{x_1}, ..., k_{in_n}^{x_n}, k_{in_{n+1}}^{y_1}, ..., k_{in_{2n}}^{y_n} are given, and is the base case of the induction. Assume that it is true for a gate g with gate-input wires wi and wj, and let k_i^α and k_j^β be the respective keys held for these wires. Then, by the decryption procedure, it follows that the value k_l^{g(α,β)} = D_{k_j^β}(D_{k_i^α}(c_{α,β})) is obtained, where wl is the output wire of the gate.9 Furthermore, by the arguments shown above, this is the only value that is decrypted correctly. Therefore, the correct key for the output wire of gate g is also obtained. This concludes the inductive step.
9This holds if there are no decryption errors (i.e., if for every k and every x, D_k(E_k(x)) = x). If there is a negligible error in the decryption, then we will inherit a negligible error probability here.
It follows that the correct keys of the output wires of the circuit are obtained, except with
negligible probability. That is, the keys obtained for the circuit-output wires all correspond to the output value C(x, y). Therefore, the value obtained after using the output tables is exactly C(x, y), as required.
Removing the error probability. The above construction allows for a negligible probability of error. This is due to two possible events: (a) in some gate more than one value decrypts correctly, or (b) in some gate, the correct value does not decrypt correctly. As we have mentioned in Footnote 9, this second event can occur if the encryption scheme has decryption errors. This problem can be removed by using a scheme without decryption errors (this is not a limitation because decryption errors can always be removed [5]).
Regarding the first event causing error, this can be overcome in one of two ways. First, when constructing the circuit, it is possible to check that an error does not occur. Then, if an error has occurred, it is possible to reconstruct the garbled circuit again, repeating until no errors occur. (For this to work, we need to assume that the machine that verifies if a value is in the range of a key runs in deterministic polynomial-time, as is the case in our construction. Alternatively, it suffices to assume that it has only a one-sided error and never returns 1 when a value is not in the range.) The problem with this approach is that the construction of the circuit now runs in expected, and not strict, polynomial-time. Another approach is to use explicit randomly permuted indices, meaning that the decrypted values in the gates reveal exactly which item in the next table is to be opened. This approach was described in [12].
4.2 Yao’s Two-Party Protocol
As we have seen above, given the keys that correspond to the correct input, it is possible to obtain the correct output from the garbled circuit. Thus, the protocol proceeds by party P1 constructing the garbled circuit and giving it to P2. Furthermore, P1 hands P2 the keys that correspond to x = x_1···x_n. In addition, P2 must obtain the keys that correspond to its input y = y_1···y_n. However, this must be done carefully, ensuring the following:
1. P1 should not learn anything about P2’s input string y.
2. P2 should obtain the keys corresponding to y and no others. (Otherwise, P2 could compute C(x, y) and C(x, y′) for y′ ̸= y, in contradiction to the requirement that C(x, y) and nothing else is learned.)
The above two problems are solved by having P1 and P2 run 1-out-of-2 oblivious transfer protocols [13, 6]. That is, for every bit of P2's input, the parties run an oblivious transfer protocol where P1's input is (k_{n+i}^0, k_{n+i}^1) and P2's input is y_i. In this way, P2 obtains the keys k_{n+1}^{y_1}, ..., k_{2n}^{y_n} and only these keys. In addition, P1 learns nothing about y. We are now ready to formally describe the protocol.
Protocol 2 (Yao’s two-party protocol):
• Inputs: P1 has x ∈ {0,1}^n and P2 has y ∈ {0,1}^n.
• Auxiliary input: A boolean circuit C such that for every x, y ∈ {0,1}^n it holds that C(x, y) = f(x, y), where f : {0,1}^n × {0,1}^n → {0,1}^n. We require that C is such that if a circuit-output wire leaves some gate g, then gate g has no other wires leading from it into other gates (i.e., no circuit-output wire is also a gate-input wire). Likewise, a circuit-input wire that is also a circuit-output wire enters no gates.
• The protocol:
1. P1 constructs the garbled circuit G(C) as described in Section 4.1, and sends it to P2.
2. Let w1, ..., wn be the circuit-input wires corresponding to x, and let wn+1, ..., w2n be the circuit-input wires corresponding to y. Then,
(a) P1 sends P2 the strings k_1^{x_1}, ..., k_n^{x_n}.
(b) For every i, P1 and P2 execute a 1-out-of-2 oblivious transfer protocol in which P1's input equals (k_{n+i}^0, k_{n+i}^1) and P2's input equals y_i.
The above oblivious transfers can all be run in parallel.
3. Following the above, P2 has obtained the garbled circuit and 2n keys corresponding to the 2n input wires to C. Party P2 then computes the circuit, as described in Section 4.1, obtaining f(x,y). P2 then sends f(x,y) to P1 and they both output this value.
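An end-to-end toy run of Protocol 2 for the one-gate circuit C(x, y) = x AND y (single-bit inputs) is sketched below. Fernet, from the third-party cryptography package, stands in for the special encryption scheme (its authenticated decryption fails under a wrong key, which gives the efficiently verifiable range for free), and ideal_ot stands in for Protocol 1; all of these substitutions are illustrative only.

```python
# End-to-end sketch of Protocol 2 for a one-gate circuit C(x, y) = x AND y.
import random
from cryptography.fernet import Fernet, InvalidToken

def ideal_ot(pair, sigma):          # 1-out-of-2 OT treated as an ideal functionality
    return pair[sigma]

# Step 1: P1 garbles the circuit. Wire 1 is P1's input, wire 2 is P2's, wire 3 the output.
keys = {(w, b): Fernet.generate_key() for w in (1, 2, 3) for b in (0, 1)}
table = [Fernet(keys[1, a]).encrypt(Fernet(keys[2, b]).encrypt(keys[3, a & b]))
         for a in (0, 1) for b in (0, 1)]
random.shuffle(table)
output_table = {keys[3, 0]: 0, keys[3, 1]: 1}        # output decryption table
# P1 sends (table, output_table) to P2.

x, y = 1, 1                                           # the parties' inputs

# Step 2(a): P1 sends the key corresponding to its own input bit x.
kx = keys[1, x]
# Step 2(b): P2 obtains the key for its input bit y via oblivious transfer.
ky = ideal_ot((keys[2, 0], keys[2, 1]), y)

# Step 3: P2 evaluates the garbled gate: exactly one entry decrypts correctly.
results = []
for c in table:
    try:
        results.append(Fernet(ky).decrypt(Fernet(kx).decrypt(c)))
    except InvalidToken:
        pass
assert len(results) == 1
print(output_table[results[0]])    # -> 1; P2 sends f(x, y) back to P1
```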
We now provide a formal proof that Protocol 2 securely computes the functionality f. Our proof could be simplified by relying on a composition theorem, such as that found in [7, Section 7.3.1]. However, for the sake of self-containment, we provide a direct proof of the security of the protocol.
Theorem 7 Let f be a deterministic same-output functionality. Furthermore, assume that the oblivious transfer protocol is secure in the presence of static semi-honest adversaries, and that the encryption scheme has indistinguishable encryptions under chosen plaintext attacks, and has an elusive and efficiently verifiable range. Then, Protocol 2 securely computes f in the presence of static semi-honest adversaries.
Proof: Intuitively, since the oblivious transfer protocol is secure, party P2 receives exactly one key per circuit-input wire. Then, by the security of the encryption scheme, it is only able to decrypt one value in each gate. Furthermore, it has no idea if the value obtained in this decryption corresponds to a 0 or a 1. Therefore, it learns nothing from this computation, except for the output itself. We now formally prove this. Recall that since we consider deterministic functionalities, we can use the simpler formulation of security as stated in Equations (3) and (4). We separately prove the case that P1 is corrupted and the case that P2 is corrupted.
A simplifying convention. In the proof below, we will use the simulators S_1^ot and S_2^ot that exist for the oblivious transfer functionality in order to generate views for the corrupted parties. In general, a view is represented as the party's input followed by its random tape and concluding with the series of incoming messages. In order to simplify the presentation, we will present the view of a party in a different order. Specifically, we will write the view of a party in Protocol 2 – excluding the oblivious transfers – in the usual way. However, the view of the party in the oblivious transfers is written in full where it appears in the protocol transcript. That is, instead of splitting the view in the oblivious transfers into input, random-tape and incoming messages, the input and random-tape are written together with the incoming messages. This clearly makes no difference and is just to simplify notation (the standard way of writing the view of a party can be obtained by a trivial transformation of the view that we write below).
Case 1 – P1 is corrupted
Notice that P1’s view in an execution of π consists only of its view in the oblivious transfer protocols, and a single message that it receives from P2 at the end (that is supposedly the output). By the security of the oblivious transfer protocol, P1’s view in the oblivious transfer executions can be generated without knowing P2’s input. Furthermore, by the correctness of the construction of the garbled circuit (Claim 6), party P2 obtains the correct output f(x,y), except with negligible probability. Therefore, the message that P1 receives from P2 at the end of a real protocol execution equals f (x, y), except with negligible probability. A simulator that is given (x, f (x, y)) can therefore simulate the complete view of P1 by first simulating its view in the oblivious transfers and then writing f(x,y) at the end. The formal proof of this follows a rather standard hybrid argument.
We begin by describing the simulator S1: Upon input (x, f(x,y)), simulator S1 uniformly chooses a random-tape rC for P1 and generates the garbled circuit that P1 would generate with randomness rC. Then, let k_{n+1}^0, k_{n+1}^1, ..., k_{2n}^0, k_{2n}^1 be the keys that correspond to P2's input in the constructed garbled circuit, and let S_1^ot be the simulator that is guaranteed to exist for party P1 in the oblivious transfer protocol. For every i = 1, ..., n, simulator S1 invokes the simulator S_1^ot upon input (k_{n+i}^0, k_{n+i}^1) in order to obtain P1's view in the ith oblivious transfer (since P1 has no output from the oblivious transfer, the simulator is invoked with its input only). Recall that the view generated by S_1^ot is made up of the input (in this case (k_{n+i}^0, k_{n+i}^1)), a random tape, and a transcript of messages received. As we have mentioned, we will place the entire view of the party in the oblivious transfers together with the message transcript. In addition, S1 writes the output f(x,y) that P1 expects to receive at the end of the execution from P2. We have that S1 outputs
(x, rC, S_1^ot(k_{n+1}^0, k_{n+1}^1), ..., S_1^ot(k_{2n}^0, k_{2n}^1), f(x,y))    (8)
This concludes the description of S1. We now prove that
{S1(x, f(x,y))}_{x,y∈{0,1}∗} ≡c {view_1^π(x,y)}_{x,y∈{0,1}∗}
where S1(x, f(x,y)) is as shown in Eq. (8) and π denotes Protocol 2. We first prove a hybrid argument over the simulated views for the oblivious transfers. That is, we define a hybrid distribution H_i in which the first i oblivious transfers are simulated and the last n − i are real. Formally, let H_i(x, y, rC) denote the distribution:
(x, rC, S_1^ot(k_{n+1}^0, k_{n+1}^1), ..., S_1^ot(k_{n+i}^0, k_{n+i}^1), R_1^ot((k_{n+i+1}^0, k_{n+i+1}^1), y_{i+1}), ..., R_1^ot((k_{2n}^0, k_{2n}^1), y_n), f(x,y))
where R_1^ot((k_{n+j}^0, k_{n+j}^1), y_j) denotes the real transcript from view_1^ot((k_{n+j}^0, k_{n+j}^1), y_j). Notice that the keys k_{n+j}^0, k_{n+j}^1 here are as defined by the garbled circuit, when generated with the random tape rC. Notice also that when rC is uniformly chosen, H_n(x, y, rC) equals the distribution that appears in Eq. (8); i.e., it equals S1(x, f(x,y)). Furthermore, H_0(x, y, rC) is almost the same as view_1^π(x,y); the only difference is that the last component of H_0 equals f(x,y) whereas the last component of view_1^π(x,y) is the message that P2 would send P1 in the last message of the protocol. For simplicity, from here on we will assume that x, y, rC are all of the same length, and in particular, are of length n.
We now prove that {H_0(x, y, r_C)} ≡_c {H_n(x, y, r_C)}. By contradiction, assume that there exists a probabilistic polynomial-time distinguisher D and a polynomial p(·) such that for infinitely many n's (and x, y, r_C ∈ {0,1}^n),

    |Pr[D(H_0(x, y, r_C)) = 1] − Pr[D(H_n(x, y, r_C)) = 1]| > 1/p(n)

It follows that there exists an i such that for infinitely many x, y, r_C,

    |Pr[D(H_i(x, y, r_C)) = 1] − Pr[D(H_{i+1}(x, y, r_C)) = 1]| > 1/(n · p(n))
We now use D to contradict the security of the oblivious transfer protocol. First, notice that the only difference between H_i(x, y, r_C) and H_{i+1}(x, y, r_C) is that the random tape and transcript of the (i+1)th oblivious transfer are according to view1^{ot}((k_{n+i+1}^0, k_{n+i+1}^1), y_{i+1}) in H_i and according to S1^{ot}(k_{n+i+1}^0, k_{n+i+1}^1) in H_{i+1}. Furthermore, given x, y, r_C, i and a view v (which is either view1^{ot}((k_{n+i+1}^0, k_{n+i+1}^1), y_{i+1}) or S1^{ot}(k_{n+i+1}^0, k_{n+i+1}^1)), it is possible to construct a distribution H such that if v is from view1^{ot} then H = H_i(x, y, r_C), and if v is from S1^{ot} then H = H_{i+1}(x, y, r_C). It therefore follows that for infinitely many inputs, it is possible to distinguish the view of P1 in a real oblivious transfer execution from its simulated view with the same probability that it is possible to distinguish H_i(x, y, r_C) from H_{i+1}(x, y, r_C). However, this contradicts the security of the oblivious transfer protocol. We therefore conclude that {H_0(x, y, r_C)} ≡_c {H_n(x, y, r_C)}. (We remark that the distinguisher that we construct here is non-uniform because it needs to be given x, y, r_C and i. For this reason, we defined non-uniform indistinguishability; see the beginning of Section 2.) Until now, we have shown that

    {S1(x, f(x,y))} ≡_c (x, r_C, R1^{ot}((k_{n+1}^0, k_{n+1}^1), y_1), ..., R1^{ot}((k_{2n}^0, k_{2n}^1), y_n), f(x,y))        (9)
However, this does not quite suffice because we cannot just assume that P2 sends the correct value f(x,y) to P1 in a real execution (notice that in the right-hand distribution in Eq. (9), the last message received by P1 in the view is f(x,y)). We now show that
    (x, r_C, R1^{ot}((k_{n+1}^0, k_{n+1}^1), y_1), ..., R1^{ot}((k_{2n}^0, k_{2n}^1), y_n), f(x,y))
    ≡_c (x, r_C, R1^{ot}((k_{n+1}^0, k_{n+1}^1), y_1), ..., R1^{ot}((k_{2n}^0, k_{2n}^1), y_n), msg_3(P2 → P1))        (10)
where msg_3(P2 → P1) denotes the message that P2 sends to P1 in step 3 of the protocol. Notice that the only difference between these distributions is whether the last component equals f(x,y) or the message sent by P2 to P1 in step 3. Recall that this message sent by P2 is exactly the output that it obtains from the garbled circuit. Now, by Claim 6, the output obtained by P2 from the garbled circuit, when P1 sends it the keys corresponding to x and it receives the keys corresponding to y from the oblivious transfers, equals f(x,y) except with negligible probability. By the security of the oblivious transfer protocol, we have that P2 receives the keys corresponding to y, except with negligible probability. (This follows immediately from the correctness condition which is implied by the definition of security.) Therefore, msg_3(P2 → P1) = f(x,y) except with negligible probability, and Eq. (10) follows. Notice now that
    (x, r_C, R1^{ot}((k_{n+1}^0, k_{n+1}^1), y_1), ..., R1^{ot}((k_{2n}^0, k_{2n}^1), y_n), msg_3(P2 → P1)) ≡ {view1^π(x,y)}        (11)
and so by combining Equations (9) to (11), the proof of this case is concluded.
Case 2 – P2 is corrupted
In this case, we construct a simulator S2 that is given input (y, f(x,y)) and generates the view of P2 in Protocol 2. Notice that P2 expects to receive a garbled circuit, and so S2 must generate such a circuit. Furthermore, this circuit must be such that P2 would obtain f(x,y) when computing the circuit according to the protocol instructions. Of course, S2 cannot just honestly generate the circuit, because it does not know x. (Without knowing x, it would not know which of the keys k_1^0, k_1^1, ..., k_n^0, k_n^1 to hand to P2.) It therefore generates a "fake" garbled circuit that always evaluates to f(x,y), irrespective of which keys are used. This is achieved by using gate tables in which all four entries encrypt the same key, and therefore the values of the input wires do not affect the value of the output wire. The crux of the proof is in showing that this circuit is indistinguishable from the real garbled circuit that P2 receives in a real execution. In order to show this, we use a hybrid argument. We first show that P2's view in a real execution of the protocol is indistinguishable from a hybrid distribution H_{ot}(x,y) in which the real oblivious transfers are replaced with simulated ones. Next, we consider a series of hybrids H_i(x,y) in which one gate at a time is replaced in the real garbled circuit. The hybrid distributions are such that H_0(x,y) contains a real garbled circuit (and therefore equals H_{ot}(x,y)). In contrast, distribution H_{|C|}(x,y) contains the same fake circuit constructed by S2 (and, as we will see, therefore equals S2(y, f(x,y))). By a standard hybrid argument, it follows that a distinguisher between H_0(x,y) and H_{|C|}(x,y) can be used to distinguish between two successive hybrids. However, the security of the encryption scheme that is used for generating the gate tables ensures that neighboring hybrids are computationally indistinguishable. We conclude that H_0(x,y) is indistinguishable from H_{|C|}(x,y), and so {S2(y, f(x,y))} ≡_c {view2^π(x,y)}.
We now formally describe S2. Simulator S2 begins by constructing a fake garbled circuit, denoted G̃(C). This is accomplished as follows. For every wire w_i in the circuit C, simulator S2 chooses two random keys k_i and k_i'. Next, the gates are computed: let g be a gate with input wires w_i, w_j and output wire w_l. Then, g contains encryptions of the single key k_l under all four combinations of the keys k_i, k_i', k_j, k_j' that are associated with the input wires to g (in contrast, the key k_l' is not encrypted at all). That is, S2 computes the following values:

    c_{0,0} = E_{k_i}(E_{k_j}(k_l)),   c_{0,1} = E_{k_i}(E_{k_j'}(k_l)),   c_{1,0} = E_{k_i'}(E_{k_j}(k_l)),   c_{1,1} = E_{k_i'}(E_{k_j'}(k_l))
and writes them in random order. This is carried out for all of the gates of the circuit. It remains to describe how the output decryption tables are constructed. Denote the n-bit output f(x,y) by z_1 ··· z_n (recall that this is part of S2's input), and denote the circuit-output wires by w_{m−n+1}, ..., w_m. In addition, for every i = 1, ..., n, let k_{m−n+i} be the (single) key encrypted in the gate from which wire w_{m−n+i} exits, and let k_{m−n+i}' be the other key (as described above). Then, the output decryption table for wire w_{m−n+i} is given by: [(0, k_{m−n+i}), (1, k_{m−n+i}')] if z_i = 0, and [(0, k_{m−n+i}'), (1, k_{m−n+i})] if z_i = 1. This completes the description of the construction of the fake garbled circuit G̃(C). (Notice that the keys k_{m−n+1}, ..., k_m decrypt to exactly z_1 ··· z_n = f(x,y).)
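As an illustration of the construction just described, the following Python sketch builds one fake gate table and one output decryption table. The function E is a toy, non-cryptographic stand-in for the encryption scheme assumed by the protocol; only the structure of the tables is the point.

    import hashlib, os, random

    def E(key: bytes, msg: bytes) -> bytes:
        """Toy stand-in for the encryption scheme E (NOT secure; structure only)."""
        pad = hashlib.sha256(key + b"pad").digest()[:len(msg)]
        return bytes(a ^ b for a, b in zip(msg, pad))

    def fake_gate(k_i, k_i_p, k_j, k_j_p, k_l):
        """All four entries encrypt the same output key k_l; the key k_l' is never used."""
        entries = [E(ka, E(kb, k_l)) for ka in (k_i, k_i_p) for kb in (k_j, k_j_p)]
        random.shuffle(entries)       # the four ciphertexts are written in random order
        return entries

    def fake_output_table(z_i, k, k_p):
        """Output decryption table making the (single) encrypted key k decode to bit z_i."""
        return [(0, k), (1, k_p)] if z_i == 0 else [(0, k_p), (1, k)]

    # Example: one fake gate, plus the output table of a circuit-output wire with z_i = 1.
    ks = [os.urandom(16) for _ in range(5)]
    gate = fake_gate(*ks)
    table = fake_output_table(1, ks[4], os.urandom(16))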
Next, S2 generates the view of P2 in the phase where it obtains the keys. First, in the simulated view, it sets the keys that P2 receives from P1 in step 2a of Protocol 2 to be k_1, ..., k_n. (Recall that w_1, ..., w_n are the circuit-input wires associated with P1's input x and that the keys for these wires are k_1, k_1', ..., k_n, k_n'. Here, S2 takes the keys k_1, ..., k_n. However, it could have taken k_1', ..., k_n' or any other combination and this would make no difference.) Next, let S2^{ot} be the simulator that is guaranteed to exist for the oblivious transfer protocol. Then, for every i = 1, ..., n, simulator S2 invokes the simulator S2^{ot} upon input (y_i, k_{n+i}) in order to obtain P2's view in the ith oblivious transfer. (Here y_i and k_{n+i} are P2's respective input and output in the ith oblivious transfer. As above, we use the keys k_{n+1}, ..., k_{2n} associated with the input wires for y. However, this choice is arbitrary and we could have used k_{n+1}', ..., k_{2n}' or any other combination.) Recall that the view generated by S2^{ot} is made up of the input (in this case y_i), a random tape, and a transcript of messages received. Recall also that by convention, we place the entire view in the oblivious transfer (including the random tape) together. We therefore have that S2 outputs

    (y, G̃(C), k_1, ..., k_n, S2^{ot}(y_1, k_{n+1}), ..., S2^{ot}(y_n, k_{2n}))

This concludes the description of S2.
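Operationally, S2 can be sketched as follows. Here sim_ot2 is a hypothetical stand-in for the assumed oblivious-transfer receiver simulator S2^{ot}, and build_fake_circuit is a hypothetical callback implementing the construction of G̃(C) above (returning the fake circuit together with the single key chosen for each wire).

    def sim_ot2(y_bit, key_out):
        # Hypothetical stand-in for S2^ot: simulated receiver view built from P2's
        # input y_i and its output k_{n+i} in the ith oblivious transfer.
        return ("simulated-ot-view", y_bit, key_out)

    def S2(y_bits, fxy_bits, circuit, build_fake_circuit):
        """Sketch of the simulator S2 on input (y, f(x,y)).

        build_fake_circuit is a hypothetical callback implementing the fake
        garbled circuit G~(C) above; it returns G~(C) and one key per wire."""
        n = circuit["n"]
        G_tilde, key = build_fake_circuit(circuit, fxy_bits)
        x_keys = [key[w] for w in range(1, n + 1)]                  # keys sent for P1's input wires
        ot_views = [sim_ot2(y_bits[i - 1], key[n + i]) for i in range(1, n + 1)]
        return (y_bits, G_tilde, x_keys, ot_views)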
We now prove that

    {S2(y, f(x,y))}_{x,y ∈ {0,1}*} ≡_c {view2^π(x,y)}_{x,y ∈ {0,1}*}

First, observe that

    {view2^π(x,y)} ≡ (y, G(C), k_1^{x_1}, ..., k_n^{x_n}, R2^{ot}((k_{n+1}^0, k_{n+1}^1), y_1), ..., R2^{ot}((k_{2n}^0, k_{2n}^1), y_n))

where R2^{ot}((k_{n+i}^0, k_{n+i}^1), y_i) denotes the real transcript from view2^{ot}((k_{n+i}^0, k_{n+i}^1), y_i). We also denote by

    H_{ot}(x,y) = (y, G(C), k_1^{x_1}, ..., k_n^{x_n}, S2^{ot}(y_1, k_{n+1}^{y_1}), ..., S2^{ot}(y_n, k_{2n}^{y_n}))

the hybrid distribution in which the real oblivious transfers are replaced by simulated ones. We stress that in the hybrid distribution H_{ot}, the garbled circuit G(C) that appears is the real one and not the fake one. We first claim that

    {H_{ot}(x,y)}_{x,y ∈ {0,1}*} ≡_c {view2^π(x,y)}_{x,y ∈ {0,1}*}        (12)
The only difference between the distributions in Eq. (12) is due to the fact that simulated views of the oblivious transfers are provided instead of real ones. Indistinguishability therefore follows from the security of the oblivious transfer protocol. The formal proof of this is almost identical to the case that P1 is corrupted, and is therefore omitted.
Next, we consider a series of hybrid experiments H_i(x, y) in which one gate at a time is replaced in the real garbled circuit G(C) until the result is the fake garbled circuit G̃(C). Before we do this, we consider an alternative way of constructing the fake garbled circuit G̃(C). This alternative
construction uses knowledge of both inputs x and y, but results in exactly the same fake garbled circuit as that constructed by S2, which is given only y and f(x,y). (This is therefore just a mental experiment, or a different description of S2. Nevertheless, it is helpful in describing the proof.)
The alternative construction works by first traversing the circuit from the circuit-input wires to the circuit-output wires, and labelling all keys as active or inactive. Intuitively, a key is active if it is used in order to compute the garbled circuit upon input (x, y); otherwise it is inactive. Formally, a key k_a^α that is associated with wire w_a is active if, when computing the non-garbled circuit C on input (x, y), the bit that is obtained on wire w_a equals α. As expected, an inactive key is just any key that is not active. Now, the alternative construction of G̃(C) works by first constructing the real garbled circuit G(C). Next, using knowledge of both x and y, all keys in G(C) are labelled active or inactive (given x and y, it is possible to compute C(x,y) and obtain the real values on each wire). Finally, G̃(C) is obtained by replacing each gate g as follows: let w_a be the wire that exits gate g. Then, recompute g by encrypting the active key on wire w_a with all four combinations of the (active and inactive) keys that are on the wires that enter g. This completes the alternative construction. We claim that the circuit obtained in this alternative construction is identically distributed to the circuit constructed by S2(y, f(x,y)). First, in both constructions, all gates contain encryptions of a single key only. Second, in both constructions, the order of the ciphertexts in each gate is random. Finally, in both constructions, the output decryption tables yield the same result (i.e., exactly f(x,y)). This last observation is due to the fact that in the alternative construction, the output decryption table decrypts active keys to f(x,y), and these active keys are the only ones encrypted in the gates from which the circuit-output wires exit. Likewise, in the circuit G̃(C), the only keys encrypted in the gates from which the circuit-output wires exit are the keys that decrypt to f(x,y).
Before proceeding we order the gates g1, . . . , g|C| of the circuit C as follows: if the input wires to a gate gl come from gates gi and gj, then i < l and j < l; this is called a topological sort of the circuit. We are now ready to define the hybrid experiment Hi(x,y).
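Any standard topological-sort routine yields such an ordering; for concreteness, the following sketch uses Kahn's algorithm over a hypothetical adjacency representation of the gates (the hybrid experiment defined next assumes the gates are indexed in this order).

    from collections import deque

    def topological_order(gates):
        """Kahn's algorithm: gates maps each gate id to the set of gate ids feeding its inputs."""
        indeg = {g: len(preds) for g, preds in gates.items()}
        succs = {g: [] for g in gates}
        for g, preds in gates.items():
            for p in preds:
                succs[p].append(g)
        queue = deque(g for g, d in indeg.items() if d == 0)   # gates fed only by circuit-input wires
        order = []
        while queue:
            g = queue.popleft()
            order.append(g)
            for h in succs[g]:
                indeg[h] -= 1
                if indeg[h] == 0:
                    queue.append(h)
        return order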
Hybrid experiment H_i(x,y). In this experiment the view of P2 in the oblivious transfers is generated in exactly the same way as in H_{ot}(x,y). However, the garbled circuit is constructed differently. As in the alternative construction of G̃(C), the first step is to construct the real garbled circuit G(C) and then use x and y in order to label all keys in G(C) as active or inactive. Next, the first i gates g_1, ..., g_i are modified as in the alternative construction. That is, let w_a be the wire that exits gate g_j for 1 ≤ j ≤ i. Then, recompute g_j by encrypting the active key on wire w_a with all four combinations of the (active and inactive) keys that are on the wires that enter g_j. The remaining gates g_{i+1}, ..., g_{|C|} are left unmodified, and are therefore as in the real garbled circuit G(C).
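The circuit component of H_i(x,y) can be sketched as follows, reusing the toy E from the earlier sketch. The representation of gates and keys is hypothetical; the point is that gates g_1, ..., g_i encrypt only the active output key, while the remaining gates are real garbled gates (so i = 0 yields the real circuit and i = |C| the fake circuit of the alternative construction).

    import random   # E is the toy encryption stand-in defined in the earlier sketch

    def garbled_gate(op, key, a, b, c, fake_output_bit=None):
        """One garbled gate with input wires a, b and output wire c; key[(w, v)] is the
        key on wire w for bit v.  If fake_output_bit is set, every entry encrypts the
        active output key key[(c, fake_output_bit)], as in the alternative construction."""
        entries = []
        for va in (0, 1):
            for vb in (0, 1):
                out = op(va, vb) if fake_output_bit is None else fake_output_bit
                entries.append(E(key[(a, va)], E(key[(b, vb)], key[(c, out)])))
        random.shuffle(entries)
        return entries

    def hybrid_circuit(i, gates, wire_value, key):
        """gates: topologically ordered list of (op, a, b, c); wire_value[w]: the bit on
        wire w when evaluating the plain circuit C(x, y) (uses knowledge of x and y)."""
        out = []
        for idx, (op, a, b, c) in enumerate(gates, start=1):
            if idx <= i:    # first i gates: encrypt only the active output key (fake form)
                out.append(garbled_gate(op, key, a, b, c, fake_output_bit=wire_value[c]))
            else:           # remaining gates: real garbled gates
                out.append(garbled_gate(op, key, a, b, c))
        return out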
We claim that the distribution {H_0(x,y)} equals {H_{ot}(x,y)}. This follows from the fact that the only difference is that in H_0(x,y) the keys are labelled active or inactive. However, since nothing is done with this labelling, there is no difference in the resulting distribution. Next, notice that in H_{|C|}(x,y), the circuit that appears in the distribution is exactly the fake garbled circuit G̃(C) as constructed by S2. This follows immediately from the fact that in H_{|C|} all gates are replaced, and so the circuit obtained is exactly that of the full alternative construction described above.
We wish to show that {H_0(x,y)} ≡_c {H_{|C|}(x,y)}. Intuitively, this follows from the indistinguishability of encryptions. Specifically, the only difference between H_0 and H_{|C|} is that the circuit in H_0 is made up of gates that contain encryptions of active and inactive keys, whereas the circuit in H_{|C|} is made up of gates that contain encryptions of active keys only. Since only active keys
are seen by P2 during the computation of the garbled circuit, the difference between H_0 and H_{|C|} cannot be detected.
We prove that {H_0(x, y)} ≡_c {H_{|C|}(x, y)} using a hybrid argument. That is, assume that there exists a non-uniform probabilistic polynomial-time distinguisher D and a polynomial p(·) such that for infinitely many n's (and values x, y ∈ {0,1}^n), |Pr[D(H_0(x, y)) = 1] − Pr[D(H_{|C|}(x, y)) = 1]| > 1/p(n). Then, it follows that there exists an i such that |Pr[D(H_{i−1}(x, y)) = 1] − Pr[D(H_i(x, y)) = 1]| > 1/(|C| · p(n)). We use D and x, y, i in order to construct a non-uniform probabilistic polynomial-time distinguisher A_E for the encryption scheme (G, E, D). The high-level idea here is for A_E to receive some ciphertexts from which it will construct a partially real and partially fake garbled circuit G'(C). However, the construction will be such that if the ciphertexts received were of one "type", then the resulting circuit is distributed according to H_{i−1}(x, y), whereas if the ciphertexts received were of another "type", then the resulting circuit is distributed according to H_i(x,y). In this way, the ability to successfully distinguish H_{i−1}(x,y) from H_i(x,y) yields the ability to distinguish ciphertexts, in contradiction to the security of the encryption scheme. We now formally prove the above intuition, using Lemma 4 that states that (G, E, D) is secure under chosen double encryption.
A concrete case. First, let us consider the concrete case that g_i is an OR gate, that wires w_a and w_b enter g_i, and that wire w_c exits g_i. Furthermore, assume that the wires w_a and w_b enter gate g_i and no other gate. Finally, assume that when the inputs to the circuit are x and y, the wire w_a obtains the bit 0 and the wire w_b obtains the bit 1. Then, it follows that the keys k_a^0 and k_b^1 are active, and the keys k_a^1 and k_b^0 are inactive. Likewise, the key k_c^1 is active (because g_i(0,1) = 0 ∨ 1 = 1) and the key k_c^0 is inactive. The difference between a real garbled gate g_i and a fake garbled gate g_i is with respect to the encrypted values. Specifically, the real garbled OR gate g_i contains the following values:

    E_{k_a^0}(E_{k_b^0}(k_c^0)),  E_{k_a^0}(E_{k_b^1}(k_c^1)),  E_{k_a^1}(E_{k_b^0}(k_c^1)),  E_{k_a^1}(E_{k_b^1}(k_c^1))        (13)

In contrast, the fake garbled OR gate g_i contains the following values, which are all encryptions of the active value k_c^1 (recall that the inputs to g_i equal 0 and 1, and so the output is 1):

    E_{k_a^0}(E_{k_b^0}(k_c^1)),  E_{k_a^0}(E_{k_b^1}(k_c^1)),  E_{k_a^1}(E_{k_b^0}(k_c^1)),  E_{k_a^1}(E_{k_b^1}(k_c^1))        (14)
Thus, in this concrete case, the indistinguishability between the gates depends on the indistinguishability of a single encryption (of k_c^0 versus k_c^1) under the inactive key k_b^0. (In other cases, the indistinguishability may depend on both inactive keys k_a^1 and k_b^0, and may depend on more than one encryption under a key; see the general case below.) It is not difficult to show here that indistinguishability follows directly from the chosen-plaintext security of the encryption scheme E with key k_b^0. Nevertheless, it follows immediately from the definition of Expt^{double}_A and security under chosen double encryption (see Definition 3 in Section 3.1). Specifically, we construct a non-uniform probabilistic polynomial-time machine A_E for Expt^{double}_{A_E} such that

    |Pr[Expt^{double}_{A_E}(n, 1) = 1] − Pr[Expt^{double}_{A_E}(n, 0) = 1]| ≥ 1/(|C| · p(n))

Upon input 1^n, machine A_E outputs keys k_a^0, k_b^1 ← G(1^n) and message triples (k_c^0, k_c^1, k_c^1) and (k_c^1, k_c^1, k_c^1). By the experiment Expt^{double}, two keys are chosen. For the sake of consistency, we denote them k_a^1 and k_b^0. Then, machine A_E receives either the ciphertexts
    ⟨E_{k_a^0}(E_{k_b^0}(k_c^0)),  E_{k_a^1}(E_{k_b^1}(k_c^1)),  E_{k_a^1}(E_{k_b^0}(k_c^1))⟩        (15)

or the ciphertexts

    ⟨E_{k_a^0}(E_{k_b^0}(k_c^1)),  E_{k_a^1}(E_{k_b^1}(k_c^1)),  E_{k_a^1}(E_{k_b^0}(k_c^1))⟩        (16)
depending on whether σ = 0 (the first case) or σ = 1 (the second case); note that the only difference is that in the first case the first plaintext is k_c^0, and in the second case the first plaintext is k_c^1. Denote the ciphertexts received by A_E by (c_1, c_2, c_3). Now, A_E first computes the value c = E_{k_a^0}(E_{k_b^1}(k_c^1)); it can do this by itself because it knows both k_a^0 and k_b^1, as well as k_c^1. Next, given c, A_E generates the tuple ⟨c_1, c, c_3, c_2⟩. The important point to notice here is that if A_E received the ciphertexts in Eq. (15), then ⟨c_1, c, c_3, c_2⟩ is identical to the ciphertexts in Eq. (13). On the other hand, if A_E received the ciphertexts in Eq. (16), then ⟨c_1, c, c_3, c_2⟩ is identical to the ciphertexts in Eq. (14). Therefore, if it is possible to distinguish between the gates in Equations (13) and (14) with non-negligible probability, then A_E can succeed in Expt^{double}_{A_E} with non-negligible probability, in contradiction to the security of the encryption scheme.
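Concretely, the step in which A_E completes its challenge ciphertexts (c_1, c_2, c_3) into a full gate table could look as follows (again with the toy E standing in for the real scheme; k_a0, k_b1 and k_c1 denote the keys k_a^0, k_b^1, k_c^1 that A_E chose itself).

    def assemble_gate(c1, c2, c3, k_a0, k_b1, k_c1):
        """A_E completes the challenge ciphertexts into a four-entry gate table.
        If (c1, c2, c3) came from sigma = 0 the result matches the real garbled OR
        gate of Eq. (13); if from sigma = 1 it matches the fake gate of Eq. (14)."""
        c = E(k_a0, E(k_b1, k_c1))      # the one entry A_E can compute on its own
        return [c1, c, c3, c2]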
This does not yet suffice because we must still show how A_E can generate the rest of the H_{i−1} or H_i distributions. Notice that A_E knows the active keys that enter g_i (because it chose them itself), but does not know the inactive keys. We therefore show that the distributions can be constructed without knowledge of the inactive keys k_a^1 and k_b^0. In order to show this, we distinguish between two cases:
1. Case 1 – w_b is a circuit-input wire: In this case, the keys associated with wire w_b do not appear in any gate g_j for j < i. However, keys that are associated with circuit-input wires do appear in the distributions H_{i−1} and H_i: the keys k_i^{x_i} appear directly, and the keys k_{n+i}^{y_i} are used to generate the view of P2 in the oblivious transfers. Nevertheless, notice that the keys used here are all active. Therefore, A_E can construct the distributions, as required. We note that A_E uses the keys k_c^0 and k_c^1 from its experiment in order to construct the gates into which wire w_c enters.
2. Case 2 – w_b is not a circuit-input wire: In this case, the keys associated with wire w_b can appear only in the gate g_j from which w_b exits. However, by our ordering of the gates, j < i. Therefore, in both H_{i−1} and H_i, gate g_j contains encryptions only of the active key on wire w_b (namely k_b^1). It follows that A_E can construct the rest of the distribution, as required. (Again, as above, A_E uses the keys k_c^0 and k_c^1 in this construction.)
Now, as we have shown above, if A_E participates in Expt^{double}_{A_E}(n, 0), then the gate g_i is constructed as for a real garbled circuit. In contrast, if A_E participates in Expt^{double}_{A_E}(n, 1), then the gate g_i is constructed as for a fake garbled circuit. The only dependence between the gate g_i and the rest of the distribution H_{i−1} or H_i is with respect to the keys k_c^0 and k_c^1; however, these are known to A_E and used appropriately. We therefore conclude that if A_E participates in Expt^{double}_{A_E}(n, 0), then it generates a distribution H that equals H_{i−1}(x, y). In contrast, if it participates in Expt^{double}_{A_E}(n, 1), then it generates a distribution H that equals H_i(x, y). Distinguisher A_E concludes by running machine D on the distribution H and outputting whatever D does. By the contradicting assumption, machine D distinguishes H_{i−1}(x, y) from H_i(x, y) with probability 1/(|C| · p(n)). That is, we have that for infinitely many n's,

    |Pr[Expt^{double}_{A_E}(n, 0) = 1] − Pr[Expt^{double}_{A_E}(n, 1) = 1]| = |Pr[D(H_{i−1}(x, y)) = 1] − Pr[D(H_i(x, y)) = 1]| > 1/(|C| · p(n))
in contradiction to the security of the encryption scheme. It follows that {H_0(x, y)} ≡_c {H_{|C|}(x, y)}. Having proven the argument with respect to a concrete case, we now move to the general case.
The general case. Let g_i be an arbitrary gate, let w_a and w_b be the wires entering g_i, and let w_c be the wire that exits g_i. Furthermore, let α and β be the respective values obtained on w_a and w_b in C(x, y). Note that this means that k_a^α and k_b^β are active, and k_a^{1−α} and k_b^{1−β} are inactive. Then, the real garbled gate g_i contains the following values (in a random order):

    E_{k_a^α}(E_{k_b^β}(k_c^{g_i(α,β)})),  E_{k_a^α}(E_{k_b^{1−β}}(k_c^{g_i(α,1−β)})),  E_{k_a^{1−α}}(E_{k_b^β}(k_c^{g_i(1−α,β)})),  E_{k_a^{1−α}}(E_{k_b^{1−β}}(k_c^{g_i(1−α,1−β)}))        (17)

In contrast, the fake garbled gate g_i contains the following values, which are all encryptions of the active value k_c^{g_i(α,β)}:

    E_{k_a^α}(E_{k_b^β}(k_c^{g_i(α,β)})),  E_{k_a^α}(E_{k_b^{1−β}}(k_c^{g_i(α,β)})),  E_{k_a^{1−α}}(E_{k_b^β}(k_c^{g_i(α,β)})),  E_{k_a^{1−α}}(E_{k_b^{1−β}}(k_c^{g_i(α,β)}))        (18)

Thus, the indistinguishability between the gates depends on the indistinguishability of encryptions under the inactive keys k_a^{1−α} and k_b^{1−β}. As above, we use Expt^{double} and security under chosen double encryption. The gate is generated in exactly the same way here as in the concrete case. Now, in the restricted case that both wires w_a and w_b enter the gate g_i only, it is possible to proceed in the same way as in the concrete case above. However, in the more general case, wires w_a and w_b may enter multiple gates g^a_{i_1}, ..., g^a_{i_j} and g^b_{i_1}, ..., g^b_{i_l}, respectively. In this case, A_E cannot construct the rest of the circuit given only the active keys, because the inactive keys k_a^{1−α} and k_b^{1−β} are used in more than one gate. (We stress that in order to prove the indistinguishability of the neighboring hybrids H_{i−1} and H_i, it is crucial that A_E is not given these inactive keys. Therefore, it cannot construct these other gates itself.) This is solved by using the special chosen-plaintext attack of Expt^{double}. Recall that in Expt^{double}, the adversary has access to oracles E(·, k_1', ·) and E(k_0', ·, ·), where k_0' = k_a^{1−α} and k_1' = k_b^{1−β}. Here, this means that the adversary can ask for encryptions under these inactive keys, as needed for constructing all of the other gates g^a_{i_1}, ..., g^a_{i_j} and g^b_{i_1}, ..., g^b_{i_l} that use them. Once again, we have that in Expt^{double}_{A_E}(n, 0) the distribution generated by A_E is exactly that of H_{i−1}(x, y), whereas in Expt^{double}_{A_E}(n, 1) the distribution generated by A_E is exactly that of H_i(x, y). Therefore, as above, we conclude that H_{i−1}(x, y) is indistinguishable from H_i(x, y), and so {H_0(x, y)} ≡_c {H_{|C|}(x, y)}.
Concluding the proof. Having proven that {H_0(x, y)} ≡_c {H_{|C|}(x, y)}, we obtain that

    (y, G̃(C), k_1^{x_1}, ..., k_n^{x_n}, S2^{ot}(y_1, k_{n+1}^{y_1}), ..., S2^{ot}(y_n, k_{2n}^{y_n}))
    ≡_c (y, G(C), k_1^{x_1}, ..., k_n^{x_n}, S2^{ot}(y_1, k_{n+1}^{y_1}), ..., S2^{ot}(y_n, k_{2n}^{y_n})) = {H_{ot}(x,y)}        (19)

Notice that the first distribution in Eq. (19) looks almost the same as the distribution {S2(y, f(x,y))}. The only difference is that in S2(y, f(x,y)) the keys k_1, ..., k_n, k_{n+1}, ..., k_{2n} are used instead of the keys k_1^{x_1}, ..., k_n^{x_n}, k_{n+1}^{y_1}, ..., k_{2n}^{y_n}. That is, the keys that S2 takes for the circuit-input wires have no correlation to the actual input (unlike in a real execution). However, in the fake garbled circuit G̃(C), there is no difference between k_i and k_i' because all combinations of keys are used to encrypt the same (active) key. Thus, the distributions over the keys k_1, ..., k_n, k_{n+1}, ..., k_{2n} and k_1^{x_1}, ..., k_n^{x_n}, k_{n+1}^{y_1}, ..., k_{2n}^{y_n} are identical in the fake garbled-circuit construction. (Notice that there isn't even any distinction between k_i^0 and k_i^1 in the fake garbled circuit. Nevertheless, one could define k_i^0 = k_i and k_i^1 = k_i', and the result would still be that the distributions are identical.) We therefore obtain that the first distribution in Eq. (19) is actually identical to the distribution {S2(y, f(x,y))}, and so

    {S2(y, f(x,y))}_{x,y ∈ {0,1}*} ≡_c {H_{ot}(x,y)}_{x,y ∈ {0,1}*}

Recalling that by Eq. (12), {H_{ot}(x,y)} ≡_c {view2^π(x,y)}, we conclude that

    {S2(y, f(x,y))}_{x,y ∈ {0,1}*} ≡_c {view2^π(x,y)}_{x,y ∈ {0,1}*}
as required.
By Theorem 5 it is possible to securely compute the oblivious transfer functionality assuming the existence of enhanced trapdoor permutations. Furthermore, secure encryption schemes as required in Theorem 7 can be constructed from one-way functions, and so also from enhanced trapdoor permutations. Finally, recall that given a secure protocol for deterministic same-output functionalities, it is possible to obtain a secure protocol for arbitrary probabilistic functionalities. Combining these facts with Theorem 7, we obtain the following corollary:
Corollary 8 Let f = (f1 , f2 ) be a probabilistic functionality. Then, assuming the existence of enhanced trapdoor permutations, there exists a constant-round protocol that securely computes f in the presence of static semi-honest adversaries.
Acknowledgements
We would like to thank Yael Tauman Kalai and the anonymous referees for many helpful comments on the write-up.