
Prof. Mark Bun
CAS CS 591 B: Communication Complexity
Lecture Notes 4: Distributional Complexity, Yao’s Principle
Fall 2019
Reading.
• Rao-Yehudayoff Chapter 3, “Minimax”
We would like to develop techniques for proving lower bounds against randomized communication protocols. We saw one technique for doing this in the last lecture, which was to simulate a private-coin protocol with a deterministic protocol at the cost of an exponential blow-up in communication. In this lecture we will start to develop techniques which allow us to prove stronger lower bounds.
Recall that Newman’s Theorem showed that public coins do not give us substantially more power than private coins. It is often easier to prove both upper and lower bounds against the public coin model. The technique we will use to prove public coin lower bounds relies on a different way of introducing randomness into the communication model: Instead of considering distributions over protocols, we consider distributions over inputs.
Definition 1. Let f : X × Y → {0, 1} be a two-party function and let μ : X × Y → [0, 1] be a probability measure. The ε-error distributional complexity of f with respect to μ, denoted D^μ_ε(f), is the least cost of a deterministic protocol Π such that

Pr_{(x,y)∼μ}[Π(x, y) = f(x, y)] ≥ 1 − ε.
Example 2. Let μ be the uniform distribution over {0, 1}^n × {0, 1}^n. Then D^μ_{1/4}(GT_n) ≤ 2. Alice can send the most significant bit x_1 of her input, and Bob can check whether x_1 > y_1. This gives the correct answer on at least 3/4 of input pairs: if x_1 ≠ y_1, which happens with probability 1/2, the protocol always succeeds; otherwise, if x_1 = y_1, it succeeds with probability 1/2.
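As a sanity check, this success probability can be computed exhaustively for small n. A quick sketch (the function name is ours; recall GT_n(x, y) = 1 iff x > y as n-bit integers):

```python
from itertools import product

def gt_protocol_success(n):
    """Fraction of uniform input pairs on which the one-bit
    protocol (output [x1 > y1]) computes GT_n(x, y) = [x > y]."""
    correct = 0
    pairs = list(product(range(2 ** n), repeat=2))
    for x, y in pairs:
        msb_x = x >> (n - 1)     # Alice's most significant bit
        msb_y = y >> (n - 1)     # Bob's most significant bit
        guess = msb_x > msb_y    # the protocol's output
        if guess == (x > y):
            correct += 1
    return correct / len(pairs)

for n in range(1, 5):
    assert gt_protocol_success(n) >= 0.75
```

The success probability is in fact slightly above 3/4 because of ties (x = y), on which the protocol's output 0 is correct.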
To prove a public coin randomized communication lower bound, it suffices to identify a distribution μ for which D^μ_ε(f) is large.
Theorem 3. For every distribution μ : X × Y → [0, 1], we have BPP^pub_ε(f) ≥ D^μ_ε(f).
Proof. Fix a distribution μ. We show that a public coin randomized protocol for computing f can always be converted into a deterministic protocol which succeeds with respect to μ. Let Π(x, y; r) denote such a protocol. Since Π succeeds on every input (x, y), it in particular succeeds with respect
to (x, y) drawn from μ:
∀(x, y):  Pr_r[Π(x, y; r) = f(x, y)] ≥ 1 − ε
⇒  Pr_{r, (x,y)∼μ}[Π(x, y; r) = f(x, y)] ≥ 1 − ε
⇒  ∃r:  Pr_{(x,y)∼μ}[Π(x, y; r) = f(x, y)] ≥ 1 − ε.
Consider the deterministic protocol Π(·, ·; r) obtained by fixing the coins of Π to the random string guaranteed by the final inequality. Then this is a good protocol with respect to μ.
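The final step is a plain averaging argument: if the success probability, averaged over both r and (x, y) ∼ μ, is at least 1 − ε, then the best fixed r does at least as well as that average. The toy sketch below (the protocol table and all names are fabricated purely to exercise the argument) mechanizes this:

```python
import random

random.seed(0)
inputs = [(x, y) for x in range(4) for y in range(4)]
coins = range(8)

# A made-up predicate and a made-up randomized "protocol" table:
# on each (input, coins) pair the protocol is correct with probability 0.9.
f = {(x, y): int(x > y) for x, y in inputs}
table = {(xy, r): f[xy] if random.random() < 0.9 else 1 - f[xy]
         for xy in inputs for r in coins}

mu = {xy: 1 / len(inputs) for xy in inputs}   # a hard distribution (here: uniform)

def mu_success(r):
    """mu-weighted success probability of the deterministic protocol Pi(.,.;r)."""
    return sum(mu[xy] for xy in inputs if table[(xy, r)] == f[xy])

avg = sum(mu_success(r) for r in coins) / len(coins)
best_r = max(coins, key=mu_success)
assert mu_success(best_r) >= avg   # some fixing of the coins is at least average
```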
The proof of Theorem 3 appears lossy. It seems we've replaced a universal quantifier over (x, y) with an existential quantifier over r. Nevertheless, it turns out we haven't lost anything at all, and in fact distributional complexity is equivalent to BPP^pub complexity:
Theorem 4.
BPP^pub_ε(f) = max_μ D^μ_ε(f).
One way to interpret this theorem is that every lower bound against public coin protocols can be obtained by identifying a suitable hard distribution for deterministic protocols. This equivalence is known as Yao’s minimax principle. To describe the proof, we take a detour into game theory.
1 Zero Sum Games
In a two player game, a player Row holds a set of strategies R and a player Column holds a set of strategies C. To play the game, Row selects a strategy r ∈ R, Column (simultaneously) selects a strategy c ∈ C, and the players receive rewards UR(r,c) and UC(r,c) respectively. Such a game is zero-sum if the reward to Row is exactly the negative of the reward to Column, i.e., there exists a payoff function U(r,c) such that UR = U and UC = −U.
The goal of the Row player is to maximize U(r, c) and the goal of the Column player is to minimize it. In addition to playing pure strategies, the players can play mixed strategies, which are distributions over pure strategies. Let p ∈ [0, 1]^{|R|} be a vector with Σ_{r∈R} p_r = 1, denoted p ∈ Δ(R). Row playing the mixed strategy p corresponds to playing r ∈ R with probability p_r. Similarly, Column playing a mixed strategy q ∈ Δ(C) corresponds to playing c ∈ C with probability q_c. If Row and Column both play mixed strategies, the expected payoff to Row is
U(p, q) := Σ_{r∈R} Σ_{c∈C} p_r U(r, c) q_c.
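Concretely, U(p, q) is just the bilinear form pᵀU q. A minimal sketch (the function name and the matching-pennies matrix are ours, for illustration):

```python
def expected_payoff(U, p, q):
    """Expected payoff U(p, q) = sum over r, c of p_r * U[r][c] * q_c."""
    return sum(p[r] * U[r][c] * q[c]
               for r in range(len(p)) for c in range(len(q)))

# Matching pennies as a tiny example: uniform play has expected payoff 0.
pennies = [[1, -1], [-1, 1]]
print(expected_payoff(pennies, [0.5, 0.5], [0.5, 0.5]))   # 0.0
```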
(This is an abuse of notation. We could remove the abuse by replacing U(r, c) with U(e_r, e_c), where e_r and e_c are the indicator vectors for strategies r and c, respectively.) Let's consider two variations of this game. In the first variation, we (apparently) tip the scales in favor of Column.
Variant 1. Row has to announce a mixed strategy p and then Column is allowed to best respond with a strategy q attaining payoff
min_{q∈Δ(C)} U(p, q).
In this variation of the game, Row should choose the mixed strategy p maximizing this quantity, guaranteeing payoff
max_{p∈Δ(R)} min_{q∈Δ(C)} U(p, q).
Variant 2. Similarly, we can rig the game in favor of Row, and force Column to announce a mixed strategy q beforehand. In this case, Column should choose q so as to minimize max_{p∈Δ(R)} U(p, q), guaranteeing payoff
min_{q∈Δ(C)} max_{p∈Δ(R)} U(p, q).
Forcing Row to go first should always make that player worse off, so we have
max_{p∈Δ(R)} min_{q∈Δ(C)} U(p, q) ≤ min_{q∈Δ(C)} max_{p∈Δ(R)} U(p, q).
Von Neumann’s Minimax Theorem asserts that this inequality is actually an equality. Each player can commit to an “equilibrium strategy” (p∗,q∗) such that, even if they have to announce this strategy first, they can guarantee a particular payoff called the “value” of the game.
Theorem 5 (Minimax Theorem). For every two-player zero sum game with payoff function U, there exists a value
val = max_{p∈Δ(R)} min_{q∈Δ(C)} U(p, q) = min_{q∈Δ(C)} max_{p∈Δ(R)} U(p, q).
Example 6. Consider the game Rock-Paper-Scissors. We can conveniently organize the payoffs of pure strategies in the following matrix:
0 −1 1 U=1 0 −1.
−1 1 0
The equilibrium strategies for this game are for each player to play the uniform distribution over the rock, paper, and scissors pure strategies. If Row announces this strategy first, uniform is still an optimal strategy for Column and guarantees expected payoff zero. (Similarly if Column announces first.)
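We can verify this claim numerically: against the uniform mixed strategy, every pure response earns expected payoff exactly 0, for either player. (This checks the example, not the general theorem.)

```python
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]          # rows/columns: rock, paper, scissors

uniform = [1 / 3] * 3

# Payoff to Row when Row plays uniform and Column best-responds with a pure c:
row_guarantee = min(sum(uniform[r] * U[r][c] for r in range(3))
                    for c in range(3))
# Payoff to Row when Column plays uniform and Row best-responds with a pure r:
col_guarantee = max(sum(U[r][c] * uniform[c] for c in range(3))
                    for r in range(3))

assert abs(row_guarantee) < 1e-9 and abs(col_guarantee) < 1e-9   # value 0
```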
2 Proof of Theorem 4
Let c = max_μ D^μ_ε(f). We define a two-player zero-sum game between a protocol designer (Row player) and an adversary (Column player). Their available strategies are as follows:
Protocol Designer (Row): Let P_c denote the set of all deterministic protocols of cost at most c. Pure strategies are protocols Π ∈ P_c. Mixed strategies are randomized protocols Π̃, i.e., distributions in Δ(P_c).
Adversary (Column): Pure strategies are inputs (x, y). Mixed strategies are distributions μ over X×Y.
The payoff U (Π, (x, y)) to the protocol designer is 1 if Π(x, y) = f (x, y) and 0 otherwise. That is, the Row player wins iff the protocol chosen succeeds on the input chosen by the adversary.
By assumption, for every mixed strategy μ for the Column player, there exists a (pure) strategy for the Row player with expected payoff ≥ 1 − ε. This puts us in the setting of Variant 2: The Column player announces first, and the Row player gets to pick a strategy so as to maximize their payoff. That is,
min_{μ∈Δ(X×Y)} max_{Π̃∈Δ(P_c)} U(Π̃, μ) ≥ 1 − ε.
By the Minimax Theorem, this implies
max_{Π̃∈Δ(P_c)} min_{μ∈Δ(X×Y)} U(Π̃, μ) ≥ 1 − ε.
Interpreting this as the setting of game Variant 1, there exists a mixed strategy for the Row player such that for every (mixed, and in particular, pure) strategy of the Column player, Row can guarantee payoff ≥ 1 − ε. This mixed strategy exactly corresponds to a protocol with success probability at least 1 − ε on every input (x, y).
3 “Proof” of Theorem 5
One way to prove the Minimax Theorem is to appeal to strong duality of linear programming. This is only a “proof” in the sense that the latter is already a deep result which we’ll just take on faith, at least for the time being. But strong duality is itself important for understanding a number of concrete lower bound techniques, so we may as well start using it now.
Recall that our expected payoffs in the two variants of the game are

max_{p∈Δ(R)} min_{q∈Δ(C)} U(p, q)    if Row goes first,
min_{q∈Δ(C)} max_{p∈Δ(R)} U(p, q)    if Column goes first.
However, if Row goes first, observe that Column can always choose a pure strategy c ∈ C so as to minimize U(p, c). (And similarly if Column goes first.) So we can actually write these payoffs as

max_{p∈Δ(R)} min_{c∈C} Σ_{r∈R} U(r, c) p_r    if Row goes first,
min_{q∈Δ(C)} max_{r∈R} Σ_{c∈C} U(r, c) q_c    if Column goes first.
We can encode the first quantity as the solution to the following linear program:
max  t
s.t. t − Σ_{r∈R} U(r, c) p_r ≤ 0    ∀c ∈ C
     Σ_{r∈R} p_r = 1
     p_r ≥ 0                        ∀r ∈ R
The dual of this LP is given by:
min  s
s.t. s − Σ_{c∈C} U(r, c) q_c ≥ 0    ∀r ∈ R
     Σ_{c∈C} q_c = 1
     q_c ≥ 0                        ∀c ∈ C

By strong LP duality, the value of the dual LP is equal to the value of the primal LP. But the dual LP can be seen to exactly capture the second quantity, proving the Minimax Theorem.
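For the Rock-Paper-Scissors matrix of Example 6, we can check the duality statement directly: (t = 0, p = uniform) is feasible for the primal, (s = 0, q = uniform) is feasible for the dual, and the two objective values coincide, which is exactly what strong duality promises. (A check of this one instance, not a proof.)

```python
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]          # Rock-Paper-Scissors payoff matrix

p = q = [1 / 3] * 3       # candidate primal and dual solutions
t = s = 0.0

# Primal feasibility: t <= sum_r U[r][c] * p_r for every column c.
primal_ok = all(t <= sum(U[r][c] * p[r] for r in range(3)) + 1e-9
                for c in range(3))
# Dual feasibility: s >= sum_c U[r][c] * q_c for every row r.
dual_ok = all(s >= sum(U[r][c] * q[c] for c in range(3)) - 1e-9
              for r in range(3))

assert primal_ok and dual_ok and t == s   # matching objective values: value 0
```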