
Fast Computation of Wasserstein Barycenters
Marco Cuturi (MCUTURI@I.KYOTO-U.AC.JP)
Graduate School of Informatics, Kyoto University
Arnaud Doucet (DOUCET@STAT.OXFORD.AC.UK)
Department of Statistics, University of Oxford
Abstract
We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed at a considerably lower computational cost using matrix scaling algorithms. We use these algorithms to visualize a large family of images and to solve a constrained clustering problem.
1. Introduction
Comparing, summarizing and reducing the dimensionality of empirical probability measures defined on a space Ω are fundamental tasks in statistics and machine learning. Such tasks are usually carried out using pairwise comparisons of measures. Classic information divergences (Amari and Nagaoka, 2001) are widely used to carry out such comparisons.
Unless Ω is finite, these divergences cannot be directly applied to empirical measures, because they are ill-defined for measures that do not have continuous densities. They also fail to incorporate prior knowledge on the geometry of Ω, which might be available if, for instance, Ω is also a Hilbert space. Both of these issues are usually solved using Parzen's approach (1962) to smooth empirical measures with smoothing kernels before computing divergences: the Euclidean (Gretton et al., 2007) and χ² distances (Harchaoui et al., 2008), the Kullback-Leibler and Pearson divergences (Kanamori et al., 2012a;b) can all be computed fairly efficiently by considering matrices of kernel evaluations.

Figure 1. (Top) 30 artificial images of two nested random ellipses. (a)-(e) Mean measures using (a) the Euclidean distance, (b) the Euclidean distance after re-centering images, (c) the Jeffreys centroid (Nielsen, 2013), (d) the RKHS distance (Gaussian kernel, σ = 0.002), (e) the 2-Wasserstein distance.
The choice of a divergence implicitly defines the mean element, or barycenter, of a set of measures, as the particular measure that minimizes the sum of all its divergences to that set of target measures (Veldhuis, 2002; Banerjee et al., 2005; Teboulle, 2007; Nielsen, 2013). The goal of this paper is to compute efficiently barycenters (possibly in a constrained subset of all probability measures on Ω) defined by the optimal transport distance between measures (Villani, 2009, §6). We propose to minimize directly the sum of optimal transport distances from one measure (the variable) to a set of fixed measures by gradient descent. These gradients can be computed for a moderate cost by solving smoothed optimal transport problems, as proposed by Cuturi (2013).
Wasserstein distances have many favorable properties, documented both in theory (Villani, 2009) and in practice (Rubner et al., 1997; Pele and Werman, 2009). We argue that their versatility extends to the barycenters they define. We illustrate this intuition in Figure 1, where we consider 30 images of nested ellipses on a 100 × 100 grid. Each image is a discrete measure on [0, 1]² with normalized intensities. Computing the Euclidean, Gaussian RKHS mean-map or Jeffreys centroid of these images results in mean measures that hardly make any sense, whereas the 2-Wasserstein mean on that grid (defined in §3.1), produced by Algorithm 1, captures the structure of these images perfectly. Note that these results were recovered without any prior knowledge on these images other than that of defining a distance in [0, 1]², here the Euclidean distance. Note also that the Gaussian kernel smoothing approach uses the same distance, in addition to a bandwidth parameter σ which needs to be tuned in practice.
This paper is organized as follows: we provide background on optimal transport in §2, followed by the definition of Wasserstein barycenters with motivating examples in §3. Novel contributions are presented from §4 onwards: we present two subgradient methods to compute Wasserstein barycenters, one which applies when the support of the mean measure is known in advance and another when that support can be freely chosen in Ω. These algorithms are very costly even for measures of small support or histograms of small size. We show in §5 that the key ingredients of these approaches, namely the computation of primal and dual optimal transport solutions, can be bypassed by solving smoothed optimal transport problems. We conclude with two applications of our algorithms in §6.
2. Background on Optimal Transport
Let Ω be an arbitrary space, D a metric on that space and P(Ω) the set of Borel probability measures on Ω. For any point x ∈ Ω, δ_x is the Dirac unit mass on x.
Definition 1 (Wasserstein Distances). For p ∈ [1, ∞) and probability measures μ, ν in P(Ω), their p-Wasserstein distance (Villani, 2009, §6) is

W_p(μ, ν) := ( inf_{π ∈ Π(μ,ν)} ∫_{Ω²} D(x, y)^p dπ(x, y) )^{1/p},

where Π(μ, ν) is the set of all probability measures on Ω² that have marginals μ and ν.
2.1. Restriction to Empirical Measures
We will only consider empirical measures throughout this paper, that is, measures of the form μ = ∑_{i=1}^n a_i δ_{x_i}, where n is an integer, X = (x_1, …, x_n) ∈ Ω^n and (a_1, …, a_n) lives in the probability simplex Σ_n,

Σ_n := { u ∈ ℝ^n | ∀ i ≤ n, u_i ≥ 0, ∑_{i=1}^n u_i = 1 }.
Let us introduce additional notations:
Measures on a Set X with Constrained Weights. Let Θ be a non-empty closed subset of Σ_n. We write

P(X, Θ) := { μ = ∑_{i=1}^n a_i δ_{x_i} | a ∈ Θ }.

Measures supported on up to k points. Given an integer k and a subset Θ of Σ_k, we consider the set P_k(Ω, Θ) of measures of Ω that have discrete support of size up to k and weights in Θ,

P_k(Ω, Θ) := ∪_{X ∈ Ω^k} P(X, Θ).
When no constraints on the weights are considered, namely when the weights are free to be chosen anywhere on the probability simplex, we use the shorter notations P(X) := P(X, Σ_n) and P_k(Ω) := P_k(Ω, Σ_k).
2.2. Wasserstein & Discrete Optimal Transport
Consider two families X = (x_1, …, x_n) and Y = (y_1, …, y_m) of points in Ω. When μ = ∑_{i=1}^n a_i δ_{x_i} and ν = ∑_{i=1}^m b_i δ_{y_i}, the Wasserstein distance W_p(μ, ν) between μ and ν is the pth root of the optimum of a network flow problem known as the transportation problem (Bertsimas and Tsitsiklis, 1997, §7.2). This problem builds upon two elements: the matrix M_XY of pairwise distances between elements of X and Y raised to the power p, which acts as a cost parameter,

M_XY := [D(x_i, y_j)^p]_{ij} ∈ ℝ^{n×m},    (1)
and the transportation polytope U(a, b) of a ∈ Σ_n and b ∈ Σ_m, which acts as a feasible set, defined as the set of n × m nonnegative matrices whose row and column marginals are equal to a and b respectively. Writing 1_n for the n-dimensional vector of ones,

U(a, b) := { T ∈ ℝ₊^{n×m} | T 1_m = a, Tᵀ 1_n = b }.    (2)

Let ⟨A, B⟩ := tr(AᵀB) be the Frobenius dot-product of matrices. Combining Eq. (1) and (2), we have that W_p^p(μ, ν), the distance W_p(μ, ν) raised to the power p, can be written as the optimum of a parametric linear program p on n × m variables, parameterized by the marginals a, b and a (cost) matrix M_XY:

W_p^p(μ, ν) = p(a, b, M_XY) := min_{T ∈ U(a,b)} ⟨T, M_XY⟩.    (3)
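Eq. (3) is a plain linear program, so small instances can be checked numerically. Below is a minimal sketch assuming NumPy and SciPy are available; the helper name transport_lp and the toy data are ours, not the paper's.

    # Solve the transportation LP of Eq. (3): min <T, M> over U(a, b).
    import numpy as np
    from scipy.optimize import linprog

    def transport_lp(a, b, M):
        n, m = M.shape
        # Equality constraints encode the two marginal conditions of Eq. (2).
        A_eq = np.zeros((n + m, n * m))
        for i in range(n):
            A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums: T 1_m = a
        for j in range(m):
            A_eq[n + j, j::m] = 1.0            # column sums: T' 1_n = b
        res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                      bounds=(0, None), method="highs")
        return res.fun

    a = np.array([0.5, 0.5])
    b = np.array([0.25, 0.75])
    M = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    print(transport_lp(a, b, M))  # 0.25: a quarter of the mass crosses over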
3. Wasserstein Barycenters
We present in this section the Wasserstein barycenter problem, a variational problem involving all Wasserstein distances from one to many measures, and show how it encompasses known problems in clustering and approximation.
3.1. Definition and Special Cases
Definition 2 (Agueh and Carlier, 2011). A Wasserstein barycenter of N measures {ν_1, …, ν_N} in P ⊂ P(Ω) is a minimizer of f over P, where

f(μ) := (1/N) ∑_{i=1}^N W_p^p(μ, ν_i).    (4)

Agueh and Carlier consider more generally a non-negative weight λ_i in front of each distance W_p^p(μ, ν_i). The algorithms we propose extend trivially to that case, but we use uniform weights in this work to keep notations simpler.

We highlight a few special cases where minimizing f over a set P is either trivial, relevant to data analysis and/or has been considered in the literature with different tools or under a different name. In what follows, X ∈ Ω^n and Y ∈ Ω^m are arbitrary finite subsets of Ω.

• N = 1, P = P(X). When only one measure ν, supported on Y ∈ Ω^m, is considered, its closest element μ in P(X), if no constraints on the weights a are given, can be computed by defining a weight vector a on the elements of X that results from assigning all of the mass b_i to the closest neighbor of y_i in X in the metric D.

• Centroids of Histograms: N > 1, Ω finite, P = P(Ω). When Ω is a set of size d and a matrix M ∈ ℝ₊^{d×d} describes the pairwise distances between these d points (usually called in that case bins or features), the 1-Wasserstein distance is known as the Earth Mover's Distance (EMD) (Rubner et al., 1997). In that context, Wasserstein barycenters have also been called EMD prototypes by Zen and Ricci (2011).

• Euclidean Ω: N = 1, D(x, y) = ∥x − y∥_2, p = 2, P = P_k(Ω). Minimizing f on P_k(Ω) when (Ω, D) is a Euclidean metric space and p = 2 is equivalent to the k-means problem (Pollard, 1982; Canas and Rosasco, 2012).

• Constrained k-Means: N = 1, P = P_k(Ω, {1_k/k}). Consider a measure ν with support Y ∈ Ω^m and weights b ∈ Σ_m. The problem of approximating this measure by a uniform measure with k atoms, that is a measure in P_k(Ω, {1_k/k}), in the 2-Wasserstein sense was to our knowledge first considered by Ng (2000), who proposed a variant of Lloyd's algorithm (1982) for that purpose. More recently, Reich (2013) remarked that such an approximation can be used in the resampling step of particle filters and proposed in that context two ensemble methods inspired by optimal transport, one of which reduces to a single iteration of Ng's algorithm. Such approximations can also be obtained with kernel-based approaches, by minimizing an information divergence between the (smoothed) target measure ν and its (smoothed) uniform approximation, as proposed recently by Chen et al. (2010) and Sugiyama et al. (2011).

3.2. Recent Work

Agueh and Carlier (2011) consider conditions on the ν_i's for a Wasserstein barycenter in P(Ω) to be unique, using the multi-marginal transportation problem. They provide solutions in the cases where either (i) Ω = ℝ; (ii) N = 2, using McCann's interpolant (1997); (iii) all the measures ν_i are Gaussians in Ω = ℝ^d, in which case the barycenter is a Gaussian with the mean of all means and a variance matrix which is the unique positive definite root of a matrix equation (Agueh and Carlier, 2011, Eq. 6.2).

Rabin et al. (2012) were to our knowledge the first to consider practical approaches to compute Wasserstein barycenters between point clouds in ℝ^d. To do so, they propose to approximate the Wasserstein distance between two point clouds by their sliced Wasserstein distance, the expectation of the Wasserstein distance between the projections of these point clouds on lines sampled randomly. Because the optimal transport between two point clouds on the real line can be solved with a simple sort, the sliced Wasserstein barycenter can be computed very efficiently using gradient descent. Although their approach seems very effective in lower dimensions, it may not work for d ≥ 4 and does not generalize to non-Euclidean metric spaces.

4. New Computational Approaches
We propose in this section new approaches to compute Wasserstein barycenters when (i) each of the N measures ν_i is an empirical measure, described by a list of atoms Y_i ∈ Ω^{m_i} of size m_i ≥ 1, and a probability vector b_i in the simplex Σ_{m_i}; (ii) the search for a barycenter is not considered on the whole of P(Ω) but restricted to either P(X, Θ) (the set of measures supported on a predefined finite set X of size n with weights in a subset Θ of Σ_n) or P_k(Ω, Θ) (the set of measures supported on up to k atoms with weights in a subset Θ of Σ_k).

Looking for a barycenter μ with atoms X and weights a is equivalent to minimizing f (see Eq. 3 for a definition of p),

f(a, X) := (1/N) ∑_{i=1}^N p(a, b_i, M_{XY_i}),    (5)

over relevant feasible sets for a and X. When X is fixed, we show in §4.1 that f is convex w.r.t. a regardless of the properties of Ω. A subgradient for f w.r.t. a can be recovered through the dual optimal solutions of all problems p(a, b_i, M_{XY_i}), and f can be minimized using a projected subgradient method outlined in §4.2. If X is free, constrained to be of cardinality k, and Ω and its metric D are both Euclidean, we show in §4.4 that f is not convex w.r.t. X, but we can provide subgradients for f using the primal optimal solutions of all problems p(a, b_i, M_{XY_i}). This in turn suggests an algorithm to reach a local minimum for f w.r.t. a and X in P_k(Ω, Θ) by combining both approaches.
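To make Eq. (5) concrete, the short sketch below evaluates f(a, X) by averaging N transport costs; it reuses the hypothetical transport_lp helper from the sketch in §2.2 and is not part of the paper's algorithms.

    # Evaluate the barycenter objective of Eq. (5) for fixed support X:
    # costs_and_weights holds one pair (M_i, b_i) per input measure nu_i.
    def barycenter_objective(a, costs_and_weights):
        values = [transport_lp(a, b, M) for M, b in costs_and_weights]
        return sum(values) / len(values)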
4.1. Differentiability of p(a, b, M_XY) w.r.t. a

Dual transportation problem. Given a matrix M ∈ ℝ^{n×m}, the optimum p(a, b, M) admits the following dual Linear Program (LP) form (Bertsimas and Tsitsiklis, 1997, §7.6, §7.8), known as the dual optimal transport problem:

d(a, b, M) := max_{(α,β) ∈ C_M} αᵀa + βᵀb,    (6)

where the polyhedron C_M of dual variables is

C_M := { (α, β) ∈ ℝ^{n+m} | α_i + β_j ≤ m_ij }.

By LP duality, d(a, b, M) = p(a, b, M). The dual optimal solutions, which can be easily recovered from the primal optimal solution (Bertsimas and Tsitsiklis, 1997, Eq. 7.10), define a subgradient for p as a function of a:

Proposition 1. Given b ∈ Σ_m and M ∈ ℝ^{n×m}, the map a ↦ p(a, b, M) is a polyhedral convex function. Any optimal dual vector α⋆ of d(a, b, M) is a subgradient of p(a, b, M) with respect to a.

Proof. These results follow from sensitivity analysis in LPs (Bertsimas and Tsitsiklis, 1997, §5.2): d is bounded and is also the maximum of a finite set of linear functions, each indexed by the set of extreme points of C_M, evaluated at a, and is therefore polyhedral convex. When the dual optimal vector is unique, α⋆ is a gradient of p at a, and a subgradient otherwise.

Because for any real value t the pair (α + t1_n, β − t1_m) is feasible if the pair (α, β) is feasible, and because their objectives are identical, any dual optimum (α, β) is determined up to an additive constant. To remove this degree of freedom, which arises from the fact that one among all n + m row/column sum constraints of U(a, b) is redundant, we can either remove a dual variable or normalize any dual optimum α⋆ so that it sums to zero, to enforce that it belongs to the tangent space of Σ_n. We follow the latter strategy in the rest of the paper.
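Since any dual optimum is determined only up to this additive constant, the sum-to-zero normalization is a one-line operation. A minimal sketch, with names of our choosing:

    import numpy as np

    def normalize_dual(alpha, beta):
        # Shift along the invariance (alpha + t 1_n, beta - t 1_m): feasibility
        # and the objective alpha'a + beta'b are unchanged since a, b sum to 1.
        t = alpha.mean()            # choosing t = mean(alpha) zeroes the sum
        return alpha - t, beta + t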
4.2. Fixed Support: Minimizing f over P(X)

Let X ⊂ Ω^n be fixed and let Θ be a closed convex subset of Σ_n. The aim of this section is to compute weights a ∈ Θ such that f(a, X) is minimal. Let α⋆_i be the optimal dual variable of d(a, b_i; M_{XY_i}), normalized to sum to 0. f being a sum of terms p(a, b_i, M_{XY_i}), we have that:

Corollary 1. The function a ↦ f(a, X) is polyhedral convex, with subgradient

α := (1/N) ∑_{i=1}^N α⋆_i.

Assuming Θ is closed and convex, we can consider a naive projected subgradient minimization of f. Alternatively, if there exists a Bregman divergence B(a, b) = ω(b) − ω(a) − ⟨∇ω(a), b − a⟩ for a, b ∈ Θ defined by a prox-function ω, we can define the proximal mapping P_a(b) := argmin_{c ∈ Θ} (⟨b, c − a⟩ + B(a, c)) and consider accelerated gradient approaches (Nesterov, 2005). We summarize this idea in Algorithm 1.

Algorithm 1 Wasserstein Barycenter in P(X, Θ)
Inputs: X ∈ Ω^n, Θ ⊂ Σ_n. For i ≤ N: Y_i ∈ Ω^{m_i}, b_i ∈ Σ_{m_i}. p ∈ [1, ∞), t_0 > 0.
Form all n × m_i matrices M_i = M_{XY_i}, see Eq. (1).
Set â = ã = argmin_Θ ω, t = 1.
while not converged do
  β = (t + 1)/2, a ← (1 − β⁻¹) â + β⁻¹ ã.
  Form subgradient α ← N⁻¹ ∑_{i=1}^N α⋆_i using all dual optima α⋆_i of d(a, b_i, M_i).
  ã ← P_a(t_0 β α).
  â ← (1 − β⁻¹) â + β⁻¹ ã, t ← t + 1.
end while

Notice that when Θ = Σ_n and B is the Kullback-Leibler divergence (Beck and Teboulle, 2003), we can initialize ã with 1_n/n and use the multiplicative update to realize the proximal step: ã ← ã ◦ e^{−t_0 β α}; ã ← ã / ãᵀ1_n, where ◦ is Schur's (elementwise) product. Alternative sets Θ for which this projection can be easily carried out include, for instance, all (convex) level sets of the entropy function H, namely Θ = { a ∈ Σ_n | H(a) ≥ τ } where 0 ≤ τ ≤ log n.
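For the special case Θ = Σ_n with the Kullback-Leibler divergence, the loop of Algorithm 1 collapses to the multiplicative updates just described. The sketch below is a hedged transcription under that assumption; subgradient stands for any oracle returning the averaged, sum-to-zero dual optima α (an LP solver here, or the smoothed version of §5), and the function name and iteration cap are ours.

    import numpy as np

    def barycenter_weights(subgradient, n, t0=1.0, iters=100):
        # Accelerated (Nesterov) scheme of Algorithm 1 with the entropic
        # proximal step realized as a multiplicative update on the simplex.
        a_hat = np.full(n, 1.0 / n)
        a_til = a_hat.copy()
        for t in range(1, iters + 1):
            beta = (t + 1) / 2.0
            a = (1 - 1 / beta) * a_hat + (1 / beta) * a_til
            alpha = subgradient(a)                       # mean of dual optima
            a_til = a_til * np.exp(-t0 * beta * alpha)   # a_til ∘ e^{-t0 β α}
            a_til /= a_til.sum()                         # renormalize to Σ_n
            a_hat = (1 - 1 / beta) * a_hat + (1 / beta) * a_til
        return a_hat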

4.3. Differentiability of p(a, b, M_XY) w.r.t. X

We consider now the case where Ω = ℝ^d with d ≥ 1, D is the Euclidean distance and p = 2. When Ω = ℝ^d, a family of n points X and a family of m points Y can be represented respectively as a matrix in ℝ^{d×n} and another in ℝ^{d×m}. The pairwise squared-Euclidean distances between points in these sets can be recovered by writing x := diag(XᵀX) and y := diag(YᵀY), and observing that

M_XY = x 1ᵀ_m + 1_n yᵀ − 2 XᵀY ∈ ℝ^{n×m}.

Transport Cost as a Function of X. Due to the marginal constraints that apply if a matrix T is in the polytope U(a, b), we have

⟨T, M_XY⟩ = ⟨T, x 1ᵀ_m + 1_n yᵀ − 2 XᵀY⟩ = tr(Tᵀ x 1ᵀ_m) + tr(Tᵀ 1_n yᵀ) − 2⟨T, XᵀY⟩ = xᵀa + yᵀb − 2⟨T, XᵀY⟩.

Discarding constant terms in y and b, we have that minimizing p(a, b, M_XY) with respect to the locations X is equivalent to solving

min_{X ∈ ℝ^{d×n}} xᵀa + p(a, b, −2XᵀY).    (7)

As a function of X, this objective is the sum of a convex quadratic function of X and a piecewise linear concave function, since

p(a, b, −2XᵀY) = min_{T ∈ U(a,b)} ⟨X, −2 Y Tᵀ⟩

is the minimum of linear functions indexed by the vertices of the polytope U(a, b). As a consequence, p(a, b, M_XY) is not convex with respect to X.
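The factorization of M_XY above is what keeps computations in X cheap: forming the cost matrix requires only one matrix product. A small sketch with a helper name of our choosing:

    import numpy as np

    def squared_euclidean_cost(X, Y):
        # X: (d, n) and Y: (d, m) column-point matrices, as in the paper.
        x = np.sum(X * X, axis=0)     # diag(X'X), shape (n,)
        y = np.sum(Y * Y, axis=0)     # diag(Y'Y), shape (m,)
        return x[:, None] + y[None, :] - 2.0 * X.T @ Y   # x 1_m' + 1_n y' - 2 X'Y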
Quadratic Approximation. Suppose that T⋆ is optimal for problem p(a, b, M_XY). Updating Eq. (7),

xᵀa − 2⟨T⋆, XᵀY⟩ = ∥X diag(a^{1/2}) − Y T⋆ᵀ diag(a^{−1/2})∥² − ∥Y T⋆ᵀ diag(a^{−1/2})∥².

Minimizing a local quadratic approximation of p at X thus yields the Newton update

X ← Y T⋆ᵀ diag(a⁻¹).    (8)

A simple interpretation of this update is as follows: the matrix T⋆ᵀ diag(a⁻¹) has n column-vectors in the simplex Σ_m. The suggested update for X is to replace it by n barycenters of points enumerated in Y, with weights defined by the optimal transport T⋆. Note that, because the minimization problem we consider in X is not convex to start with, one could be fairly creative when it comes to choosing D and p among other distances and exponents. This substitution would only involve more complicated gradients of M_XY w.r.t. X that would appear in Eq. (7).

4.4. Free Support: Minimizing f over P_k(ℝ^d, Θ)

We now consider, as a natural extension of §4.2 when Ω = ℝ^d, the problem of minimizing f over a probability measure μ that is (i) supported by at most k atoms described in X, a matrix of size d × k, (ii) with weights a ∈ Θ ⊂ Σ_k.

Alternating Optimization. To obtain an approximate minimizer of f(a, X), we propose in Algorithm 2 to update alternately the locations X (with the Newton step defined in Eq. 8) and the weights a (with Algorithm 1).

Algorithm 2 2-Wasserstein Barycenter in P_k(ℝ^d, Θ)
Input: Y_i ∈ ℝ^{d×m_i}, b_i ∈ Σ_{m_i} for i ≤ N.
Initialize X ∈ ℝ^{d×k} and a ∈ Θ.
while X and a have not converged do
  a ← a⋆ using Algorithm 1.
  for i ∈ (1, …, N) do
    T⋆_i ← optimal solution of p(a, b_i; M_{XY_i})
  end for
  X ← (1 − θ)X + θ ((1/N) ∑_{i=1}^N Y_i T⋆_iᵀ) diag(a⁻¹), setting θ ∈ [0, 1] with line-search or a preset value.
end while

Algorithm 2 and Lloyd/Ng Algorithms. As mentioned in §3.1, minimizing f defined in Eq. (5) over P_k(ℝ^d), with N = 1, p = 2 and no constraints on the weights (Θ = Σ_k), is equivalent to solving the k-means problem applied to the set of points enumerated in ν_1. In that particular case, Algorithm 2 is also equivalent to Lloyd's algorithm. Indeed, the assignment of the weight of each point to its closest centroid in Lloyd's algorithm is equivalent to the computation of a⋆ in ours, whereas the re-centering step is equivalent to our update for X using the optimal transport, which is in that case the trivial transport that assigns the weight (divided by N) of each atom in Y_i to its closest neighbor in X. When the weight vector a is constrained to be uniform (Θ = {1_k/k}), Ng (2000) proposed a heuristic to obtain uniform k-means that is also equivalent to Algorithm 2, and which also relies on the repeated computation of optimal transports. For more general sets Θ, Algorithm 1 ensures that the weights a remain in Θ at each iteration of Algorithm 2, which cannot be guaranteed by either Lloyd's or Ng's approach.
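Once the optimal plans are available, the X update of Algorithm 2 is a handful of matrix products. A hedged sketch, where solve_transport is an abstract oracle returning an optimal plan in U(a, b) (a network flow solver, or the smoothed plans of §5); naming and the damping default are ours.

    import numpy as np

    def update_locations(X, a, measures, solve_transport, theta=1.0):
        # measures: list of (Y_i, b_i) pairs with Y_i of shape (d, m_i);
        # X has shape (d, k) and a lies in the simplex Σ_k.
        G = np.zeros_like(X, dtype=float)
        for Y, b in measures:
            T = solve_transport(X, a, Y, b)    # optimal plan, shape (k, m_i)
            G += Y @ T.T                       # accumulate Y_i T_i*'
        G /= len(measures) * a                 # apply 1/N and diag(a^{-1})
        return (1 - theta) * X + theta * G     # damped Newton step of Eq. (8)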

Algorithm 2 and Reich's (2013) Transform. Reich (2013) has recently suggested to approximate a weighted measure ν by a uniform measure supported on as many atoms. This approximation is motivated by optimal transport theory, notably asymptotic results by McCann (1995), but does not attempt to minimize, as we do in Algorithm 2, any Wasserstein distance between that approximation and the original measure. This approach results in one application of the Newton update defined in Eq. (8), where X is first initialized to Y and a = 1_m/m to compute the optimal transport T⋆.
Summary. We have proposed two original algorithms to compute Wasserstein barycenters of probability measures: one which applies when the support of the barycenter is fixed and its weights are constrained to lie in a convex subset Θ of the simplex, another which can be used when the support can be chosen freely. These algorithms are relatively simple, yet, to the best of our knowledge, novel. We suspect these approaches were not considered before because of their prohibitive computational cost: Algorithm 1 computes at each iteration the dual optima of N transportation problems to form a subgradient, each with n + m_i variables and n × m_i inequality constraints. Algorithm 2 incurs an even higher cost, since it involves running Algorithm 1 at each iteration, in addition to solving N primal optimal transport problems to form a subgradient to update X. Since both objectives rely on subgradient descent schemes, they are also likely to suffer from very slow convergence. We propose to solve these issues by following Cuturi's approach (2013) to smooth the objective f and obtain strictly convex objectives whose gradients can be computed more efficiently.
5. Smoothed Dual and Primal Problems
To circumvent the major computational roadblock posed by the repeated computation of primal and dual optimal transports, we extend Cuturi's approach (2013) to obtain smooth and strictly convex approximations of both primal and dual problems p and d. The matrix scaling approach advocated by Cuturi was motivated by the fact that it provided a fast approximation p_λ to p. We show here that the same approach can be used to smooth the objective f and recover, for a cheap computational price, its gradients w.r.t. a and X.
5.1. Regularized Primal and Smoothed Dual
An n × m transport T, which is by definition in the nm-simplex, has entropy h(T) := −∑_{i,j=1}^{n,m} t_ij log(t_ij). Cuturi (2013) has recently proposed to consider, for λ > 0, a regularized primal transport problem p_λ as

p_λ(a, b; M) := min_{T ∈ U(a,b)} ⟨T, M⟩ − (1/λ) h(T).

We introduce in this work its dual problem, which is a smoothed version of the original dual transportation problem, where the positivity constraints of each term m_ij − α_i − β_j have been replaced by penalties (1/λ) e^{−λ(m_ij − α_i − β_j)}:

d_λ(a, b; M) := max_{(α,β) ∈ ℝ^{n+m}} αᵀa + βᵀb − ∑_{i≤n, j≤m} e^{−λ(m_ij − α_i − β_j)} / λ.

These two problems are related below in the sense that their respective optimal solutions are linked by a unique positive vector u ∈ ℝⁿ₊:

Proposition 2. Let K be the elementwise exponential of −λ M_XY, K := e^{−λ M_XY}. Then there exists a pair of vectors (u, v) ∈ ℝⁿ₊ × ℝᵐ₊ such that the optimal solutions of p_λ and d_λ are respectively given by

T⋆_λ = diag(u) K diag(v),    α⋆_λ = −log(u)/λ + (log(u)ᵀ1_n)/(λn) 1_n.

Proof. The result follows from the Lagrange method of multipliers for the primal, as shown by Cuturi (2013, Lemma 2), and a direct application of first-order conditions for the dual, which is an unconstrained convex problem. The term (log(u)ᵀ1_n)/(λn) 1_n in the definition of α⋆_λ is used to normalize α⋆_λ so that it sums to zero, as discussed at the end of §4.1.

5.2. Matrix Scaling Computation of (u, v)
The positive vectors (u, v) mentioned in Proposition 2 can be computed through Sinkhorn's matrix scaling algorithm applied to K, as outlined in Algorithm 3:

Lemma 1 (Sinkhorn, 1967). For any positive matrix A in ℝ₊^{n×m} and positive probability vectors a ∈ Σ_n and b ∈ Σ_m, there exist positive vectors u ∈ ℝⁿ₊ and v ∈ ℝᵐ₊, unique up to scalar multiplication, such that diag(u) A diag(v) ∈ U(a, b). Such a pair (u, v) can be recovered as a fixed point of the Sinkhorn map

(u, v) ↦ (a ./ (Av), b ./ (Aᵀu)).

The convergence of the algorithm is linear when using Hilbert's projective metric between the scaling factors (Franklin and Lorenz, 1989, §3). Although we use this algorithm in our experiments because of its simplicity, other algorithms exist (Knight and Ruiz, 2012) which are known to be more reliable numerically when λ is large.

Summary: Given a smoothing parameter λ > 0, using Sinkhorn's algorithm on the matrix K, defined as the elementwise exponential of −λM (the pairwise Gaussian kernel matrix between the supports X and Y when p = 2, using bandwidth σ = 1/√(2λ)), we can recover smoothed optima α⋆_λ and T⋆_λ for both the smoothed primal p_λ and dual d_λ transport problems. To take advantage of this, we simply propose to substitute the smoothed optima α⋆_λ and T⋆_λ for the original optima α⋆ and T⋆ that appear in Algorithms 1 and 2.
Algorithm 3 Smoothed Primal T⋆_λ and Dual α⋆_λ Optima
Input: M, λ, a, b
K = exp(−λM)
K̃ = diag(a⁻¹) K   % use bsxfun(@rdivide, K, a)
Set u = ones(n, 1)/n
while u changes do
  u = 1./(K̃ (b./(Kᵀu)))
end while
v = b./(Kᵀu)
α⋆_λ = −(1/λ) log(u) + (log(u)ᵀ1_n)/(λn) 1_n
T⋆_λ = diag(u) K diag(v)   % use bsxfun(@times, v, (bsxfun(@times, K, u))′)
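Algorithm 3 is written in MATLAB-style pseudocode; the following is a hedged Python transcription, with the convergence tolerance and iteration cap being our additions.

    import numpy as np

    def smoothed_optima(M, lam, a, b, tol=1e-9, max_iter=10000):
        # Sinkhorn fixed-point iterations yielding the smoothed primal and
        # dual optima of Proposition 2.
        K = np.exp(-lam * M)               # elementwise exponential of -lam*M
        K_tilde = K / a[:, None]           # diag(a^{-1}) K
        n = len(a)
        u = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            u_new = 1.0 / (K_tilde @ (b / (K.T @ u)))
            if np.max(np.abs(u_new - u)) < tol:
                u = u_new
                break
            u = u_new
        v = b / (K.T @ u)
        T_star = u[:, None] * K * v[None, :]                 # diag(u) K diag(v)
        alpha_star = -np.log(u) / lam + np.log(u).sum() / (lam * n)
        return T_star, alpha_star          # alpha_star sums to zero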
6. Applications
We present two applications, one of Algorithm 1 and one of Algorithm 2, that both rely on the smooth approximations presented in §5. The settings we consider involve computing respectively tens of thousands or tens of high-dimensional optimal transport problems, of size 2,500 × 2,500 for the first application and 57,647 × 48 for the second, which cannot be realistically carried out using network flow solvers: the resolution of a single transport problem of these dimensions could take from several minutes to several hours. We also take advantage in the first application of the fact that Algorithm 3 can be run efficiently on GPGPUs using vectorized code (Cuturi, 2013, Alg. 1).
6.1. Visualization of Perturbed Images
We use 50,000 images of the MNIST database, with approximately 5,000 images for each digit from 0 to 9. Each image (originally 20 × 20 pixels) is scaled randomly, uniformly between half-size and double-size, and translated randomly within a 50 × 50 grid, with a bias towards corners. We display intermediate barycenter solutions for each of these 10 datasets of images for t = 1, 10, 60 gradient iterations. λ is set to 60/median(M), where M is the squared-Euclidean distance matrix between all 2,500 pixels in the grid. Using a Quadro K5000 GPU with close to 1,500 cores, the computation of a single barycenter takes about 2 hours to reach 100 iterations. Because we use warm starts to initialize u in Algorithm 3 at each iteration of Algorithm 1, the first iterations are typically more computationally intensive than those carried out near the end.

Figure 2. (Top) For each digit, 15 out of the ≈ 5,000 scaled and translated images considered for each barycenter. (Bottom) Barycenters after t = 1, 10, 60 gradient steps. For t = 60, images are cropped to show the 30 × 30 central pixels.
6.2. Clustering with Uniform Centroids

In practice, the k-means cost function applied to a given empirical measure could be minimized with a set of centroids X and a weight vector a such that the entropy of a is very small. This can occur when most of the original points in the dataset are attributed to a very small subset of the k centroids, and could be undesirable in applications of k-means where a more regular attribution is sought. For instance, in sensor deployment, when each centroid (sensor) is limited in the number of data points (users) it can serve, we would like to ensure that the attributions agree with those limits.
Whereas the original k-means cannot take into account such limits, we can enforce them using Algorithm 2. We illustrate the difference between looking for optimal centroids with "free" assignments (Θ = Σ_k) and looking for optimal "uniform" centroids with constrained assignments (Θ = {1_k/k}), using US census data for income and population repartitions across 57,647 spatial locations in the 48 contiguous states. These weighted points can be interpreted as two empirical measures on ℝ² with weights directly proportional to these respective quantities. We initialize both "free" and "uniform" clusterings with the actual 48 state capitals. Results displayed in Figure 3 show that by forcing our approximation to be uniform, we recover centroids that induce a more balanced clustering. Indeed, each cell of the Voronoi diagram built with these centroids is now constrained to hold the same aggregate wealth or population. These centroids could form the new state capitals of equally rich or equally populated states. On an algorithmic note, we notice in Figure 4 that Algorithm 2 converges to its (local) optimum at a speed which is directly comparable to that of k-means in terms of iterations, with a relatively modest computational overhead. Unsurprisingly, the Wasserstein distance between the clusters and the original measure is higher when adding uniform constraints on the weights.

Figure 3. Comparison of two Wasserstein barycenters on spatial repartitions of income and population in the 48 contiguous states, using as many (k = 48) centroids. The size of each of the 57,647 blue crosses is proportional to the local average of the relevant variable (income above and population below) at that location, normalized to sum to 1. Each downward triangle is a centroid of the k-means clustering (equivalent to a Wasserstein barycenter with Θ = Σ_k), whose size is proportional to the portion of mass captured by that centroid. Red dots indicate centroids obtained with a uniform constraint on the weights, Θ = {1_k/k}. Since such centroids are constrained to carry a fixed portion of the total weight, one can observe that they provide a more balanced clustering than the k-means solution.

Figure 4. Wasserstein distance of the uniform Wasserstein barycenter (weights constrained to be in Θ = {1_k/k}) and its unconstrained equivalent (k-means) to the income empirical measure. Note that, because of the constraints on weights, the Wasserstein distance of the uniform Wasserstein barycenter is necessarily larger. On a single CPU core, these computations require 12.5 seconds for the constrained case, using Sinkhorn's approximation, and 1.55 seconds for the regular k-means algorithm. Using a regular transportation solver, computing the optimal transport from the 57,647 points to the 48 centroids would require about 1 hour for a single iteration.
Conclusion. We have proposed in this paper two original algorithms to compute Wasserstein barycenters of empirical measures. Using these algorithms in practice for measures of large support is a daunting task for two reasons: they are inherently slow because they rely on the subgradient method; the computation of these subgradients involves solving optimal and dual optimal transport problems. Both issues can be substantially alleviated by smoothing the primal optimal transport problem with an entropic penalty and considering its dual. Both smoothed problems admit gradients which can be computed efficiently using only matrix-vector products. Our aim in proposing such algorithms is to demonstrate that Wasserstein barycenters can be used for visualization, constrained clustering, and hopefully as a core component within more complex data analysis techniques in future applications. We also believe that our smoothing approach can be directly applied to more complex variational problems that involve multiple Wasserstein distances, such as Wasserstein propagation (Solomon et al., 2014).
Acknowledgements. We thank the reviewers for their comments and Gabriel Peyré for fruitful discussions. MC was supported by grant 26700002 from JSPS. AD was partially supported by EPSRC.
References
M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904-924, 2011.
S.-I. Amari and H. Nagaoka. Methods of Information Geometry. AMS vol. 191, 2001.
A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. The Journal of Machine Learning Research, 6:1705-1749, 2005.
A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
G. Canas and L. Rosasco. Learning probability measures with respect to optimal transport metrics. In Advances in Neural Information Processing Systems 25, pages 2501-2509, 2012.
Y. Chen, M. Welling, and A. J. Smola. Supersamples from kernel-herding. In Uncertainty in Artificial Intelligence (UAI), 2010.
M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26, pages 2292-2300, 2013.
J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra and its Applications, 114:717-735, 1989.
A. Gretton, K. M. Borgwardt, M. Rasch, B. Schoelkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems 19, pages 513-520. MIT Press, 2007.
Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In Advances in Neural Information Processing Systems 20, pages 609-616. MIT Press, 2008.
T. Kanamori, T. Suzuki, and M. Sugiyama. f-divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Transactions on Information Theory, 58(2):708-720, 2012a.
T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, 86(3):335-367, 2012b.
P. A. Knight and D. Ruiz. A fast algorithm for matrix balancing. IMA Journal of Numerical Analysis, 2012.
S. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
R. J. McCann. Existence and uniqueness of monotone measure-preserving maps. Duke Mathematical Journal, 80(2):309-324, 1995.
R. J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153-179, 1997.
Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, May 2005.
M. K. Ng. A note on constrained k-means algorithms. Pattern Recognition, 33(3):515-519, 2000.
F. Nielsen. Jeffreys centroids: A closed-form expression for positive histograms and a guaranteed tight approximation for frequency histograms. IEEE Signal Processing Letters (SPL), 2013.
E. Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065-1076, 1962.
O. Pele and M. Werman. Fast and robust earth mover's distances. In ICCV'09, 2009.
D. Pollard. Quantization and the method of k-means. IEEE Transactions on Information Theory, 28(2):199-205, 1982.
J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision, volume 6667 of Lecture Notes in Computer Science, pages 435-446. Springer, 2012.
S. Reich. A non-parametric ensemble transform method for Bayesian inference. SIAM Journal on Scientific Computing, 35(4):A2013-A2024, 2013.
Y. Rubner, L. Guibas, and C. Tomasi. The earth mover's distance, multi-dimensional scaling, and color-based image retrieval. In Proceedings of the ARPA Image Understanding Workshop, pages 661-668, 1997.
R. Sinkhorn. Diagonal equivalence to matrices with prescribed row and column sums. The American Mathematical Monthly, 74(4):402-405, 1967.
J. Solomon, R. Rustamov, L. Guibas, and A. Butscher. Wasserstein propagation for semi-supervised learning. In Proceedings of the 31st International Conference on Machine Learning, pages 306-314, 2014.
M. Sugiyama, M. Yamada, M. Kimura, and H. Hachiya. On information-maximization clustering: Tuning parameter selection and analytic solution. In Proceedings of the 28th International Conference on Machine Learning, pages 65-72, 2011.
M. Teboulle. A unified continuous optimization framework for center-based clustering methods. The Journal of Machine Learning Research, 8:65-102, 2007.
R. Veldhuis. The centroid of the symmetrical Kullback-Leibler distance. IEEE Signal Processing Letters, 9(3):96-99, 2002.
C. Villani. Optimal Transport: Old and New, volume 338. Springer Verlag, 2009.
G. Zen and E. Ricci. Earth mover's prototypes: A convex learning approach for discovering activity patterns in dynamic scenes. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 3225-3232. IEEE, 2011.