Fast Computation of Wasserstein Barycenters
Marco Cuturi
Graduate School of Informatics, Kyoto University
Arnaud Doucet
Department of Statistics, University of Oxford
Abstract
We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed at a considerably lower computational cost using matrix scaling algorithms. We use these algorithms to visualize a large family of images and to solve a constrained clustering problem.
1. Introduction
Comparing, summarizing and reducing the dimensionality of empirical probability measures defined on a space $\Omega$ are fundamental tasks in statistics and machine learning. Such tasks are usually carried out using pairwise comparisons of measures. Classic information divergences (Amari and Nagaoka, 2001) are widely used to carry out such comparisons. Unless $\Omega$ is finite, these divergences cannot be directly applied to empirical measures, because they are ill-defined for measures that do not have continuous densities. They also fail to incorporate prior knowledge on the geometry
Figure 1. (Top) 30 artificial images of two nested random ellipses. Mean measures using (a) the Euclidean distance, (b) the Euclidean distance after re-centering images, (c) the Jeffrey centroid (Nielsen, 2013), (d) the RKHS distance (Gaussian kernel, $\sigma = 0.002$), (e) the 2-Wasserstein distance.
of $\Omega$, which might be available if, for instance, $\Omega$ is also a Hilbert space. Both of these issues are usually solved using Parzen's approach (1962) to smooth empirical measures with smoothing kernels before computing divergences: the Euclidean (Gretton et al., 2007) and $\chi^2$ distances (Harchaoui et al., 2008), the Kullback-Leibler and Pearson divergences (Kanamori et al., 2012a;b) can all be computed fairly efficiently by considering matrices of kernel evaluations.
The choice of a divergence defines implicitly the mean element, or barycenter, of a set of measures, as the particular measure that minimizes the sum of all its divergences to that set of target measures (Veldhuis, 2002; Banerjee et al., 2005; Teboulle, 2007; Nielsen, 2013). The goal of this paper is to compute efficiently barycenters (possibly in a constrained subset of all probability measures on $\Omega$) defined by the optimal transport distance between measures (Villani, 2009, §6). We propose to minimize directly the sum of optimal transport distances from one measure (the variable) to a set of fixed measures by gradient descent. These gradients can be computed for a moderate cost by solving smoothed optimal transport problems as proposed by Cuturi (2013).
Wasserstein distances have many favorable properties, documented both in theory (Villani, 2009) and practice (Rubner et al., 1997; Pele and Werman, 2009). We argue that their versatility extends to the barycenters they define. We illustrate this intuition in Figure 1, where we consider 30 images of nested ellipses on a $100 \times 100$ grid. Each image is a discrete measure on $[0,1]^2$ with normalized intensities. Computing the Euclidean, Gaussian RKHS mean-maps or Jeffrey centroid of these images results in mean measures that hardly make any sense, whereas the 2-Wasserstein mean on that grid (defined in §3.1) produced by Algorithm 1 captures perfectly the structure of these images. Note that these results were recovered without any prior knowledge on these images other than that of defining a distance in $[0,1]^2$, here the Euclidean distance. Note also that the Gaussian kernel smoothing approach uses the same distance, in addition to a bandwidth parameter $\sigma$ which needs to be tuned in practice.
This paper is organized as follows: we provide background on optimal transport in §2, followed by the definition of Wasserstein barycenters with motivating examples in §3. Novel contributions are presented from §4: we present two subgradient methods to compute Wasserstein barycenters, one which applies when the support of the mean measure is known in advance and another when that support can be freely chosen in $\Omega$. These algorithms are very costly even for measures of small support or histograms of small size. We show in §5 that the key ingredients of these approaches—the computation of primal and dual optimal transport solutions—can be bypassed by solving smoothed optimal transport problems. We conclude with two applications of our algorithms in §6.
2. Background on Optimal Transport
Let $\Omega$ be an arbitrary space, $D$ a metric on that space and $P(\Omega)$ the set of Borel probability measures on $\Omega$. For any point $x \in \Omega$, $\delta_x$ is the Dirac unit mass on $x$.
Definition 1 (Wasserstein Distances). For $p \in [1,\infty)$ and probability measures $\mu, \nu$ in $P(\Omega)$, their $p$-Wasserstein distance (Villani, 2009, §6) is
\[ W_p(\mu,\nu) \overset{\text{def}}{=} \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{\Omega^2} D(x,y)^p \, d\pi(x,y) \right)^{1/p}, \]
where $\Pi(\mu,\nu)$ is the set of all probability measures on $\Omega^2$ that have marginals $\mu$ and $\nu$.
2.1. Restriction to Empirical Measures
We will only consider empirical measures throughout this paper, that is measures of the form $\mu = \sum_{i=1}^n a_i \delta_{x_i}$ where $n$ is an integer, $X = (x_1, \dots, x_n) \in \Omega^n$ and $(a_1, \dots, a_n)$ lives in the probability simplex $\Sigma_n$,
\[ \Sigma_n \overset{\text{def}}{=} \{u \in \mathbb{R}^n \mid \forall i \le n,\ u_i \ge 0,\ \textstyle\sum_{i=1}^n u_i = 1\}. \]
Let us introduce additional notation:
Measures on a Set X with Constrained Weights. Let $\Theta$ be a non-empty closed subset of $\Sigma_n$. We write
\[ P(X,\Theta) \overset{\text{def}}{=} \{\mu = \textstyle\sum_{i=1}^n a_i \delta_{x_i},\ a \in \Theta\}. \]
Measures supported on up to k points. Given an integer $k$ and a subset $\Theta$ of $\Sigma_k$, we consider the set $P_k(\Omega,\Theta)$ of measures of $\Omega$ that have discrete support of size up to $k$ and weights in $\Theta$,
\[ P_k(\Omega,\Theta) \overset{\text{def}}{=} \bigcup_{X \in \Omega^k} P(X,\Theta). \]
When no constraints on the weights are considered, namely when the weights are free to be chosen anywhere on the probability simplex, we use the shorter notations $P(X) \overset{\text{def}}{=} P(X,\Sigma_n)$ and $P_k(\Omega) \overset{\text{def}}{=} P_k(\Omega,\Sigma_k)$.
2.2. Wasserstein & Discrete Optimal Transport
Consider two families $X = (x_1, \dots, x_n)$ and $Y = (y_1, \dots, y_m)$ of points in $\Omega$. When $\mu = \sum_{i=1}^n a_i \delta_{x_i}$ and $\nu = \sum_{i=1}^m b_i \delta_{y_i}$, the Wasserstein distance $W_p(\mu,\nu)$ between $\mu$ and $\nu$ is the $p$th root of the optimum of a network flow problem known as the transportation problem (Bertsimas and Tsitsiklis, 1997, §7.2). This problem builds upon two elements: the matrix $M_{XY}$ of pairwise distances between elements of $X$ and $Y$ raised to the power $p$, which
acts as a cost parameter,
\[ M_{XY} \overset{\text{def}}{=} [D(x_i,y_j)^p]_{ij} \in \mathbb{R}^{n \times m}, \quad (1) \]
and the transportation polytope $U(a,b)$ of $a \in \Sigma_n$ and $b \in \Sigma_m$, which acts as a feasible set, defined as the set of $n \times m$ nonnegative matrices such that their row and column marginals are equal to $a$ and $b$ respectively. Writing $\mathbf{1}_n$ for the $n$-dimensional vector of ones,
\[ U(a,b) \overset{\text{def}}{=} \{T \in \mathbb{R}_+^{n \times m} \mid T\mathbf{1}_m = a,\ T^T\mathbf{1}_n = b\}. \quad (2) \]
Let $\langle A,B\rangle \overset{\text{def}}{=} \mathrm{tr}(A^TB)$ be the Frobenius dot-product of matrices. Combining Eq. (1) and (2), we have that $W_p^p(\mu,\nu)$—the distance $W_p(\mu,\nu)$ raised to the power $p$—can be written as the optimum of a parametric linear program $\mathbf{p}$ on $n \times m$ variables, parameterized by the marginals $a, b$ and a (cost) matrix $M_{XY}$:
\[ W_p^p(\mu,\nu) = \mathbf{p}(a,b,M_{XY}) \overset{\text{def}}{=} \min_{T \in U(a,b)} \langle T, M_{XY}\rangle. \quad (3) \]
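For small instances, the linear program of Eq. (3) can be solved directly with a generic LP solver by writing the marginal constraints of $U(a,b)$ as equalities on the vectorized plan. The NumPy/SciPy sketch below illustrates this (the helper name transport_cost and the toy example are ours, not part of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def transport_cost(a, b, M):
    """Solve p(a, b, M) = min_{T in U(a,b)} <T, M> as a linear program.

    a: (n,), b: (m,) probability vectors; M: (n, m) cost matrix.
    Returns the optimal cost and the optimal transport plan T (n x m)."""
    n, m = M.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # row sums: T 1_m = a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # column sums: T^T 1_n = b
    b_eq = np.concatenate([a, b])
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, m)

# Example: two empirical measures on the real line, p = 2.
a = np.array([0.5, 0.5])                      # atoms of mu at 0 and 1
b = np.array([1/3, 1/3, 1/3])                 # atoms of nu at 0, 0.5 and 1
M = np.array([[0.0, 0.25, 1.0],
              [1.0, 0.25, 0.0]])              # squared Euclidean costs
cost, T = transport_cost(a, b, M)             # cost = W_2^2(mu, nu)
```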
3. Wasserstein Barycenters
We present in this section the Wasserstein barycenter prob-
lem, a variational problem involving all Wasserstein dis-
tances from one to many measures, and show how it encom-
passes known problems in clustering and approximation.
3.1. Definition and Special Cases
Definition 2 (Agueh and Carlier, 2011). A Wasserstein barycenter of $N$ measures $\{\nu_1, \dots, \nu_N\}$ in $\mathbb{P} \subset P(\Omega)$ is a minimizer of $f$ over $\mathbb{P}$, where
\[ f(\mu) \overset{\text{def}}{=} \frac{1}{N}\sum_{i=1}^N W_p^p(\mu,\nu_i). \quad (4) \]
Agueh and Carlier consider more generally a non-negative weight $\lambda_i$ in front of each distance $W_p^p(\mu,\nu_i)$. The algorithms we propose extend trivially to that case but we use uniform weights in this work to keep notation simple.
We highlight a few special cases where minimizing $f$ over a set $\mathbb{P}$ is either trivial, relevant to data analysis and/or has been considered in the literature with different tools or under a different name. In what follows, $X \in \Omega^n$ and $Y \in \Omega^m$ are arbitrary finite subsets of $\Omega$.
• $N = 1$, $\mathbb{P} = P(X)$. When only one measure $\nu$, supported on $Y \in \Omega^m$, is considered, its closest element $\mu$ in $P(X)$—if no constraints on weights $a$ are given—can be computed by defining a weight vector $a$ on the elements of $X$ that results from assigning all of the mass $b_i$ to the closest neighbor in metric $D$ of $y_i$ in $X$.
• Centroids of Histograms: $N > 1$, $\Omega$ finite, $\mathbb{P} = P(\Omega)$. When $\Omega$ is a set of size $d$ and a matrix $M \in \mathbb{R}_+^{d \times d}$ describes the pairwise distances between these $d$ points (usually called in that case bins or features), the 1-Wasserstein distance is known as the Earth Mover's Distance (EMD) (Rubner et al., 1997). In that context, Wasserstein barycenters have also been called EMD prototypes by Zen and Ricci (2011).
• Euclidean $\Omega$: $N = 1$, $D(x,y) = \|x - y\|_2$, $p = 2$, $\mathbb{P} = P_k(\Omega)$. Minimizing $f$ on $P_k(\Omega)$ when $(\Omega, D)$ is a Euclidean metric space and $p = 2$ is equivalent to the $k$-means problem (Pollard, 1982; Canas and Rosasco, 2012).
• Constrained $k$-Means: $N = 1$, $\mathbb{P} = P_k(\Omega, \{\mathbf{1}_k/k\})$. Consider a measure $\nu$ with support $Y \in \Omega^m$ and weights $b \in \Sigma_m$. The problem of approximating this measure by a uniform measure with $k$ atoms—a measure in $P_k(\Omega, \{\mathbf{1}_k/k\})$—in the 2-Wasserstein sense was to our knowledge first considered by Ng (2000), who proposed a variant of Lloyd's algorithm (1982) for that purpose. More recently, Reich (2013) remarked that such an approximation can be used in the resampling step of particle filters and proposed in that context two ensemble methods inspired by optimal transport, one of which reduces to a single iteration of Ng's algorithm. Such approximations can also be obtained with kernel-based approaches, by minimizing an information divergence between the (smoothed) target measure $\nu$ and its (smoothed) uniform approximation, as proposed recently by Chen et al. (2010) and Sugiyama et al. (2011).
3.2. Recent Work
Agueh and Carlier (2011) consider conditions on the $\nu_i$'s for a Wasserstein barycenter in $P(\Omega)$ to be unique using the multi-marginal transportation problem. They provide solutions in the cases where either (i) $\Omega = \mathbb{R}$; (ii) $N = 2$, using McCann's interpolant (1997); (iii) all the measures $\nu_i$ are Gaussians in $\Omega = \mathbb{R}^d$, in which case the barycenter is a Gaussian with the mean of all means and a variance matrix which is the unique positive definite root of a matrix equation (Agueh and Carlier, 2011, Eq. 6.2).
Rabin et al. (2012) were to our knowledge the first to consider practical approaches to compute Wasserstein barycenters between point clouds in $\mathbb{R}^d$. To do so, Rabin et al. (2012) propose to approximate the Wasserstein distance between two point clouds by their sliced Wasserstein distance, the expectation of the Wasserstein distance between the projections of these point clouds on lines sampled randomly. Because the optimal transport between two point clouds on the real line can be solved with a simple sort, the sliced Wasserstein barycenter can be computed very efficiently, using gradient descent. Although their approach seems very effective in lower dimensions, it may not work for $d \ge 4$ and does not generalize to non-Euclidean metric spaces.
4. New Computational Approaches
We propose in this section new approaches to compute Wasserstein barycenters when (i) each of the $N$ measures $\nu_i$ is an empirical measure, described by a list of atoms $Y_i \in \Omega^{m_i}$ of size $m_i \ge 1$, and a probability vector $b_i$ in the simplex $\Sigma_{m_i}$; (ii) the search for a barycenter is not considered on the whole of $P(\Omega)$ but restricted to either $P(X,\Theta)$ (the set of measures supported on a predefined finite set $X$ of size $n$ with weights in a subset $\Theta$ of $\Sigma_n$) or $P_k(\Omega,\Theta)$ (the set of measures supported on up to $k$ atoms with weights in a subset $\Theta$ of $\Sigma_k$).
Looking for a barycenter $\mu$ with atoms $X$ and weights $a$ is equivalent to minimizing $f$ (see Eq. 3 for a definition of $\mathbf{p}$),
\[ f(a,X) \overset{\text{def}}{=} \frac{1}{N}\sum_{i=1}^N \mathbf{p}(a, b_i, M_{XY_i}), \quad (5) \]
over relevant feasible sets for $a$ and $X$. When $X$ is fixed, we show in §4.1 that $f$ is convex w.r.t. $a$ regardless of the properties of $\Omega$. A subgradient for $f$ w.r.t. $a$ can be recovered through the dual optimal solutions of all problems $\mathbf{p}(a, b_i, M_{XY_i})$, and $f$ can be minimized using a projected subgradient method outlined in §4.2. If $X$ is free, constrained to be of cardinality $k$, and $\Omega$ and its metric $D$ are both Euclidean, we show in §4.3 that $f$ is not convex w.r.t. $X$ but we can provide subgradients for $f$ using the primal optimal solutions of all problems $\mathbf{p}(a, b_i, M_{XY_i})$. This in turn suggests an algorithm to reach a local minimum for $f$ w.r.t. $a$ and $X$ in $P_k(\Omega,\Theta)$ by combining both approaches.
4.1. Differentiability of $\mathbf{p}(a, b, M_{XY})$ w.r.t. $a$
Dual transportation problem. Given a matrix $M \in \mathbb{R}^{n \times m}$, the optimum $\mathbf{p}(a,b,M)$ admits the following dual Linear Program (LP) form (Bertsimas and Tsitsiklis, 1997, §7.6, §7.8), known as the dual optimal transport problem:
\[ \mathbf{d}(a,b,M) = \max_{(\alpha,\beta) \in C_M} \alpha^T a + \beta^T b, \quad (6) \]
where the polyhedron $C_M$ of dual variables is
\[ C_M = \{(\alpha,\beta) \in \mathbb{R}^{n+m} \mid \alpha_i + \beta_j \le m_{ij}\}. \]
By LP duality, $\mathbf{d}(a,b,M) = \mathbf{p}(a,b,M)$. The dual optimal solutions—which can be easily recovered from the primal optimal solution (Bertsimas and Tsitsiklis, 1997, Eq. 7.10)—define a subgradient for $\mathbf{p}$ as a function of $a$:
Proposition 1. Given $b \in \Sigma_m$ and $M \in \mathbb{R}^{n \times m}$, the map $a \mapsto \mathbf{p}(a,b,M)$ is a polyhedral convex function. Any optimal dual vector $\alpha^\star$ of $\mathbf{d}(a,b,M)$ is a subgradient of $\mathbf{p}(a,b,M)$ with respect to $a$.
Proof. These results follow from sensitivity analysis in LPs (Bertsimas and Tsitsiklis, 1997, §5.2). $\mathbf{d}$ is bounded and is also the maximum of a finite set of linear functions, each indexed by the set of extreme points of $C_M$, evaluated at $a$, and is therefore polyhedral convex. When the dual optimal vector is unique, $\alpha^\star$ is a gradient of $\mathbf{p}$ at $a$, and a subgradient otherwise.
Because for any real value $t$ the pair $(\alpha + t\mathbf{1}_n, \beta - t\mathbf{1}_m)$ is feasible if the pair $(\alpha,\beta)$ is feasible, and because their objectives are identical, any dual optimum $(\alpha,\beta)$ is determined up to an additive constant. To remove this degree of freedom—which arises from the fact that one among all $n+m$ row/column sum constraints of $U(a,b)$ is redundant—we can either remove a dual variable or normalize any dual optimum $\alpha^\star$ so that it sums to zero, to enforce that it belongs to the tangent space of $\Sigma_n$. We follow the latter strategy in the rest of the paper.
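To make this normalization concrete, the following sketch (ours) solves the dual LP of Eq. (6) for a small instance with SciPy and shifts $\alpha$ by its mean, which leaves both feasibility and the objective unchanged:

```python
import numpy as np
from scipy.optimize import linprog

def dual_transport(a, b, M):
    """Solve d(a,b,M) = max alpha^T a + beta^T b s.t. alpha_i + beta_j <= m_ij,
    returning a dual optimum with alpha normalized to sum to zero."""
    n, m = M.shape
    c = -np.concatenate([a, b])               # linprog minimizes, so negate
    A_ub = np.zeros((n * m, n + m))           # one inequality per (i, j)
    for i in range(n):
        for j in range(m):
            A_ub[i * m + j, i] = 1.0
            A_ub[i * m + j, n + j] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=M.ravel(),
                  bounds=(None, None), method="highs")
    alpha, beta = res.x[:n], res.x[n:]
    shift = alpha.mean()                      # remove the additive degree of freedom
    return alpha - shift, beta + shift, -res.fun
```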
4.2. Fixed Support: Minimizing f over P (X)
Let $X \in \Omega^n$ be fixed and let $\Theta$ be a closed convex subset of $\Sigma_n$. The aim of this section is to compute weights $a \in \Theta$ such that $f(a,X)$ is minimal. Let $\alpha_i^\star$ be the optimal dual variable of $\mathbf{d}(a, b_i; M_{XY_i})$ normalized to sum to 0. $f$ being a sum of terms $\mathbf{p}(a, b_i, M_{XY_i})$, we have that:
Corollary 1. The function $a \mapsto f(a,X)$ is polyhedral convex, with subgradient
\[ \boldsymbol{\alpha} \overset{\text{def}}{=} \frac{1}{N}\sum_{i=1}^N \alpha_i^\star. \]
Assuming $\Theta$ is closed and convex, we can consider a naive projected subgradient minimization of $f$. Alternatively, if there exists a Bregman divergence $B(a,b) = \omega(b) - \omega(a) - \langle \nabla\omega(a), b - a\rangle$ for $a, b \in \Theta$ defined by a prox-function $\omega$, we can define the proximal mapping $P_a(b) = \operatorname{argmin}_{c \in \Theta} \left(\langle b, c - a\rangle + B(a,c)\right)$ and consider accelerated gradient approaches (Nesterov, 2005). We summarize this idea in Algorithm 1.
Algorithm 1 Wasserstein Barycenter in $P(X,\Theta)$
Inputs: $X \in \Omega^n$, $\Theta \subset \Sigma_n$; for $i \le N$: $Y_i \in \Omega^{m_i}$, $b_i \in \Sigma_{m_i}$; $p \in [1,\infty)$, $t_0 > 0$.
Form all $n \times m_i$ matrices $M_i = M_{XY_i}$, see Eq. (1).
Set $\hat a = \tilde a = \operatorname{argmin}_\Theta \omega$, $t = 1$.
while not converged do
  $\beta = (t+1)/2$, $a \leftarrow (1 - \beta^{-1})\hat a + \beta^{-1}\tilde a$.
  Form subgradient $\boldsymbol{\alpha} \leftarrow N^{-1}\sum_{i=1}^N \alpha_i^\star$ using all dual optima $\alpha_i^\star$ of $\mathbf{d}(a, b_i, M_i)$.
  $\tilde a \leftarrow P_a(t_0 \beta \boldsymbol{\alpha})$.
  $\hat a \leftarrow (1 - \beta^{-1})\hat a + \beta^{-1}\tilde a$, $t \leftarrow t+1$.
end while
Notice that when $\Theta = \Sigma_n$ and $B$ is the Kullback-Leibler divergence (Beck and Teboulle, 2003), we can initialize $\tilde a$ with $\mathbf{1}_n/n$ and use the multiplicative update to realize the proximal update: $\tilde a \leftarrow \tilde a \circ e^{-t_0 \beta \boldsymbol{\alpha}}$; $\tilde a \leftarrow \tilde a / \tilde a^T \mathbf{1}_n$, where $\circ$ is Schur's (elementwise) product. Alternative sets $\Theta$ for which this projection can be easily carried out include, for instance, all (convex) level sets of the entropy function $H$, namely $\Theta = \{a \in \Sigma_n \mid H(a) \ge \tau\}$ where $0 \le \tau \le \log n$.
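Assuming $\Theta = \Sigma_n$ and that a routine returning the averaged dual optima of the $N$ problems is available (the callable average_dual_optimum below is a placeholder of ours), the accelerated scheme of Algorithm 1 with this multiplicative KL update can be sketched as follows:

```python
import numpy as np

def kl_proximal_step(a_tilde, subgrad, step):
    """Entropic (KL) mirror step on the simplex:
    a_tilde <- a_tilde * exp(-step * subgrad), then renormalize."""
    a_tilde = a_tilde * np.exp(-step * subgrad)
    return a_tilde / a_tilde.sum()

def barycenter_weights(Ms, bs, average_dual_optimum, t0=1.0, iters=100):
    """Accelerated mirror descent of Algorithm 1 restricted to Theta = Sigma_n."""
    n = Ms[0].shape[0]
    a_hat = a_tilde = np.full(n, 1.0 / n)     # argmin of the entropy prox-function
    for t in range(1, iters + 1):
        beta = (t + 1) / 2.0
        a = (1 - 1 / beta) * a_hat + (1 / beta) * a_tilde
        g = average_dual_optimum(a, Ms, bs)   # (1/N) sum_i alpha_i*, each summing to 0
        a_tilde = kl_proximal_step(a_tilde, g, t0 * beta)
        a_hat = (1 - 1 / beta) * a_hat + (1 / beta) * a_tilde
    return a_hat
```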
4.3. Differentiability of $\mathbf{p}(a, b, M_{XY})$ w.r.t. $X$
We consider now the case where $\Omega = \mathbb{R}^d$ with $d \ge 1$, $D$ is the Euclidean distance and $p = 2$. When $\Omega = \mathbb{R}^d$, a family of $n$ points $X$ and a family of $m$ points $Y$ can be represented respectively as a matrix in $\mathbb{R}^{d \times n}$ and another in $\mathbb{R}^{d \times m}$. The pairwise squared-Euclidean distances between points in these sets can be recovered by writing $\mathbf{x} \overset{\text{def}}{=} \operatorname{diag}(X^TX)$ and $\mathbf{y} \overset{\text{def}}{=} \operatorname{diag}(Y^TY)$, and observing that
\[ M_{XY} = \mathbf{x}\mathbf{1}_m^T + \mathbf{1}_n\mathbf{y}^T - 2X^TY \in \mathbb{R}^{n \times m}. \]
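This identity reduces the construction of $M_{XY}$ to a single matrix product; a short NumPy sketch (ours), with a sanity check against the direct computation:

```python
import numpy as np

def squared_euclidean_cost(X, Y):
    """M_XY = x 1_m^T + 1_n y^T - 2 X^T Y for X (d x n) and Y (d x m)."""
    x = np.sum(X * X, axis=0)                  # diag(X^T X), shape (n,)
    y = np.sum(Y * Y, axis=0)                  # diag(Y^T Y), shape (m,)
    return x[:, None] + y[None, :] - 2.0 * X.T @ Y

# Sanity check against a direct computation of pairwise squared distances.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((3, 5)), rng.standard_normal((3, 4))
direct = ((X.T[:, None, :] - Y.T[None, :, :]) ** 2).sum(-1)
assert np.allclose(squared_euclidean_cost(X, Y), direct)
```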
Transport Cost as a function of $X$. Due to the margin constraints that apply if a matrix $T$ is in the polytope $U(a,b)$, we have:
\[ \langle T, M_{XY}\rangle = \langle T, \mathbf{x}\mathbf{1}_m^T + \mathbf{1}_n\mathbf{y}^T - 2X^TY\rangle = \operatorname{tr}(T^T\mathbf{x}\mathbf{1}_m^T) + \operatorname{tr}(T^T\mathbf{1}_n\mathbf{y}^T) - 2\langle T, X^TY\rangle = \mathbf{x}^Ta + \mathbf{y}^Tb - 2\langle T, X^TY\rangle. \]
Discarding constant terms in $\mathbf{y}$ and $b$, we have that minimizing $\mathbf{p}(a,b,M_{XY})$ with respect to locations $X$ is equivalent to solving
\[ \min_{X \in \mathbb{R}^{d \times k}} \mathbf{x}^Ta + \mathbf{p}(a, b, -2X^TY). \quad (7) \]
As a function of $X$, that objective is the sum of a convex quadratic function of $X$ with a piecewise linear concave function, since
\[ \mathbf{p}(a, b, -2X^TY) = \min_{T \in U(a,b)} \langle X, -2YT^T\rangle \]
is the minimum of linear functions indexed by the vertices of the polytope $U(a,b)$. As a consequence, $\mathbf{p}(a,b,M_{XY})$ is not convex with respect to $X$.
Quadratic Approximation. Suppose that $T^\star$ is optimal for problem $\mathbf{p}(a,b,M_{XY})$. Updating Eq. (7),
\[ \mathbf{x}^Ta - 2\langle T^\star, X^TY\rangle = \|X\operatorname{diag}(a^{1/2}) - YT^{\star T}\operatorname{diag}(a^{-1/2})\|^2 - \|YT^{\star T}\operatorname{diag}(a^{-1/2})\|^2. \]
Minimizing a local quadratic approximation of $\mathbf{p}$ at $X$ thus yields the Newton update
\[ X \leftarrow YT^{\star T}\operatorname{diag}(a^{-1}). \quad (8) \]
A simple interpretation of this update is as follows: the matrix $T^{\star T}\operatorname{diag}(a^{-1})$ has $n$ column-vectors in the simplex $\Sigma_m$. The suggested update for $X$ is to replace it by $n$ barycenters of points enumerated in $Y$ with weights defined by the optimal transport $T^\star$. Note that, because the minimization problem we consider in $X$ is not convex to start with, one could be fairly creative when it comes to choosing $D$ and $p$ among other distances and exponents. This substitution would only involve more complicated gradients of $M_{XY}$ w.r.t. $X$ that would appear in Eq. (7).
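In code, the Newton update of Eq. (8) is a single matrix product followed by a column rescaling; a minimal sketch (ours), assuming the optimal plan $T^\star$ has already been computed:

```python
import numpy as np

def newton_update(Y, T_star, a):
    """Eq. (8): X <- Y T*^T diag(a^{-1}).

    Y: (d, m) atoms of the target measure; T_star: (n, m) optimal plan whose
    row sums equal a; a: (n,) barycenter weights. Each new column of X is a
    barycenter of the columns of Y, weighted by the corresponding row of T*."""
    return (Y @ T_star.T) / a[None, :]
```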
4.4. Free Support: Minimizing $f$ over $P_k(\mathbb{R}^d,\Theta)$
We now consider, as a natural extension of §4.2 when $\Omega = \mathbb{R}^d$, the problem of minimizing $f$ over a probability measure $\mu$ that is (i) supported by at most $k$ atoms described in $X$, a matrix of size $d \times k$, (ii) with weights in $a \in \Theta \subset \Sigma_k$.
Alternating Optimization. To obtain an approximate minimizer of $f(a,X)$, we propose in Algorithm 2 to update alternately the locations $X$ (with the Newton step defined in Eq. 8) and the weights $a$ (with Algorithm 1).
Algorithm 2 2-Wasserstein Barycenter in $P_k(\mathbb{R}^d,\Theta)$
Input: $Y_i \in \mathbb{R}^{d \times m_i}$, $b_i \in \Sigma_{m_i}$ for $i \le N$.
Initialize $X \in \mathbb{R}^{d \times k}$ and $a \in \Theta$.
while $X$ and $a$ have not converged do
  $a \leftarrow a^\star$ using Algorithm 1.
  for $i \in (1, \dots, N)$ do
    $T_i^\star \leftarrow$ optimal solution of $\mathbf{p}(a, b_i; M_{XY_i})$
  end for
  $X \leftarrow (1-\theta)X + \theta\left(\frac{1}{N}\sum_{i=1}^N Y_iT_i^{\star T}\right)\operatorname{diag}(a^{-1})$, setting $\theta \in [0,1]$ with line-search or a preset value.
end while
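A compact sketch of this alternating scheme is given below; barycenter_weights stands for Algorithm 1 and optimal_plan for any solver of $\mathbf{p}(a, b_i, M_{XY_i})$ (exact, or the smoothed solver of §5). Both callables are assumptions of ours, to be provided by the caller:

```python
import numpy as np

def free_support_barycenter(Ys, bs, X0, a0, barycenter_weights, optimal_plan,
                            theta=1.0, n_iter=50):
    """Alternating scheme of Algorithm 2: update weights with Algorithm 1,
    then update locations with the Newton step of Eq. (8)."""
    X, a = X0.copy(), a0.copy()
    N = len(Ys)
    for _ in range(n_iter):
        a = barycenter_weights(X, Ys, bs)        # weights stay in Theta
        X_new = np.zeros_like(X)
        for Y_i, b_i in zip(Ys, bs):
            # Cost matrix M_{XY_i} for squared Euclidean distances, as in Sec. 4.3.
            M_i = ((X.T[:, None, :] - Y_i.T[None, :, :]) ** 2).sum(-1)
            T_i = optimal_plan(a, b_i, M_i)      # primal optimum (exact or smoothed)
            X_new += Y_i @ T_i.T
        X_new = (X_new / N) / a[None, :]         # (1/N sum_i Y_i T_i*^T) diag(a^{-1})
        X = (1 - theta) * X + theta * X_new      # theta in [0, 1]
    return X, a
```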
Algorithm 2 and Lloyd/Ng Algorithms. As mentioned in §3.1, minimizing $f$ defined in Eq. (5) over $P_k(\mathbb{R}^d)$, with $N = 1$, $p = 2$ and no constraints on the weights ($\Theta = \Sigma_k$), is equivalent to solving the $k$-means problem applied to the set of points enumerated in $\nu_1$. In that particular case, Algorithm 2 is also equivalent to Lloyd's algorithm. Indeed, the assignment of the weight of each point to its closest centroid in Lloyd's algorithm (the maximization step) is equivalent to the computation of $a^\star$ in ours, whereas the re-centering step (the expectation step) is equivalent to our update for $X$ using the optimal transport, which is in that case the trivial transport that assigns the weight (divided by $N$) of each atom in $Y_i$ to its closest neighbor in $X$. When the weight vector $a$ is constrained to be uniform ($\Theta = \{\mathbf{1}_k/k\}$), Ng (2000) proposed a heuristic to obtain uniform $k$-means that is also equivalent to Algorithm 2, and which also relies on the repeated computation of optimal transports. For more general sets $\Theta$, Algorithm 1 ensures that the weights $a$ remain in $\Theta$ at each iteration of Algorithm 2, which cannot be guaranteed by either Lloyd's or Ng's approach.
Algorithm 2 and Reich's (2013) Transform. Reich (2013) has recently suggested to approximate a weighted measure $\nu$ by a uniform measure supported on as many atoms. This approximation is motivated by optimal transport theory, notably asymptotic results by McCann (1995), but does not attempt to minimize, as we do in Algorithm 2, any Wasserstein distance between that approximation and the original measure. This approach results in one application of the Newton update defined in Eq. (8), when $X$ is first initialized to $Y$ and $a = \mathbf{1}_m/m$ to compute the optimal transport $T^\star$.
Summary. We have proposed two original algorithms to compute Wasserstein barycenters of probability measures: one which applies when the support of the barycenter is fixed and its weights are constrained to lie in a convex subset $\Theta$ of the simplex, another which can be used when the support can be chosen freely. These algorithms are relatively simple, yet—to the best of our knowledge—novel. We suspect these approaches were not considered before because of their prohibitive computational cost: Algorithm 1 computes at each iteration the dual optima of $N$ transportation problems to form a subgradient, each with $n + m_i$ variables and $n \times m_i$ inequality constraints. Algorithm 2 incurs an even higher cost, since it involves running Algorithm 1 at each iteration, in addition to solving $N$ primal optimal transport problems to form a subgradient to update $X$. Since both objectives rely on subgradient descent schemes, they are also likely to suffer from very slow convergence. We propose to solve these issues by following Cuturi's approach (2013) to smooth the objective $f$ and obtain strictly convex objectives whose gradients can be computed more efficiently.
5. Smoothed Dual and Primal Problems
To circumvent the major computational roadblock posed by the repeated computation of primal and dual optimal transports, we extend Cuturi's approach (2013) to obtain smooth and strictly convex approximations of both primal and dual problems $\mathbf{p}$ and $\mathbf{d}$. The matrix scaling approach advocated by Cuturi was motivated by the fact that it provided a fast approximation $\mathbf{p}_\lambda$ to $\mathbf{p}$. We show here that the same approach can be used to smooth the objective $f$ and recover for a cheap computational price its gradients w.r.t. $a$ and $X$.
5.1. Regularized Primal and Smoothed Dual
An $n \times m$ transport $T$, which is by definition in the $nm$-simplex, has entropy $h(T) \overset{\text{def}}{=} -\sum_{i,j=1}^{n,m} t_{ij}\log(t_{ij})$. Cuturi (2013) has recently proposed to consider, for $\lambda > 0$, a regularized primal transport problem $\mathbf{p}_\lambda$ as
\[ \mathbf{p}_\lambda(a,b;M) = \min_{T \in U(a,b)} \langle T, M\rangle - \frac{1}{\lambda}h(T). \]
We introduce in this work its dual problem, which is a smoothed version of the original dual transportation problem, where the positivity constraints of each term $m_{ij} - \alpha_i - \beta_j$ have been replaced by penalties $\frac{1}{\lambda}e^{-\lambda(m_{ij} - \alpha_i - \beta_j)}$:
\[ \mathbf{d}_\lambda(a,b;M) = \max_{(\alpha,\beta) \in \mathbb{R}^{n+m}} \alpha^Ta + \beta^Tb - \sum_{i \le n,\, j \le m} \frac{e^{-\lambda(m_{ij} - \alpha_i - \beta_j)}}{\lambda}. \]
These two problems are related below in the sense that their respective optimal solutions are linked by a unique positive vector $u \in \mathbb{R}_+^n$:
Proposition 2. Let $K$ be the elementwise exponential of $-\lambda M_{XY}$, $K \overset{\text{def}}{=} e^{-\lambda M_{XY}}$. Then there exists a pair of vectors $(u,v) \in \mathbb{R}_+^n \times \mathbb{R}_+^m$ such that the optimal solutions of $\mathbf{p}_\lambda$ and $\mathbf{d}_\lambda$ are respectively given by
\[ T_\lambda^\star = \operatorname{diag}(u)K\operatorname{diag}(v), \qquad \alpha_\lambda^\star = -\frac{\log(u)}{\lambda} + \frac{\log(u)^T\mathbf{1}_n}{\lambda n}\mathbf{1}_n. \]
Proof. The result follows from the Lagrange method of multipliers for the primal as shown by Cuturi (2013, Lemma 2), and a direct application of first-order conditions for the dual, which is an unconstrained convex problem. The term $\frac{\log(u)^T\mathbf{1}_n}{\lambda n}\mathbf{1}_n$ in the definition of $\alpha_\lambda^\star$ is used to normalize $\alpha_\lambda^\star$ so that it sums to zero, as discussed at the end of §4.1.
5.2. Matrix Scaling Computation of (u, v)
The positive vectors $(u,v)$ mentioned in Proposition 2 can be computed through Sinkhorn's matrix scaling algorithm applied to $K$, as outlined in Algorithm 3:
Lemma 1 (Sinkhorn, 1967). For any positive matrix $A$ in $\mathbb{R}_+^{n \times m}$ and positive probability vectors $a \in \Sigma_n$ and $b \in \Sigma_m$, there exist positive vectors $u \in \mathbb{R}_+^n$ and $v \in \mathbb{R}_+^m$, unique up to scalar multiplication, such that $\operatorname{diag}(u)A\operatorname{diag}(v) \in U(a,b)$. Such a pair $(u,v)$ can be recovered as a fixed point of the Sinkhorn map
\[ (u,v) \mapsto \left(a\,./\,(Av),\ b\,./\,(A^Tu)\right). \]
The convergence of the algorithm is linear when using Hilbert's projective metric between the scaling factors (Franklin and Lorenz, 1989, §3). Although we use this algorithm in our experiments because of its simplicity, other algorithms exist (Knight and Ruiz, 2012) which are known to be more reliable numerically when $\lambda$ is large.
Summary: Given a smoothing parameter $\lambda > 0$, using Sinkhorn's algorithm on the matrix $K$, defined as the elementwise exponential of $-\lambda M$ (the pairwise Gaussian kernel matrix between the supports $X$ and $Y$ when $p = 2$, using bandwidth $\sigma = 1/\sqrt{2\lambda}$), we can recover smoothed optima $\alpha_\lambda^\star$ and $T_\lambda^\star$ for both the smoothed primal $\mathbf{p}_\lambda$ and dual $\mathbf{d}_\lambda$ transport problems. To take advantage of this, we simply propose to substitute the smoothed optima $\alpha_\lambda^\star$ and $T_\lambda^\star$ for the original optima $\alpha^\star$ and $T^\star$ that appear in Algorithms 1 and 2.
Figure 2. (Top) For each digit, 15 out of the ≈5,000 scaled and translated images considered for each barycenter. (Bottom) Barycenters after t = 1, 10, 60 gradient steps. For t = 60, images are cropped to show the 30×30 central pixels.
Algorithm 3 Smoothed Primal $T_\lambda^\star$ and Dual $\alpha_\lambda^\star$ Optima
Input: $M, \lambda, a, b$
$K = \exp(-\lambda M)$;
$\widetilde K = \operatorname{diag}(a^{-1})K$   % use bsxfun(@rdivide,K,a)
Set $u =$ ones(n,1)/n;
while $u$ changes do
  $u = 1./(\widetilde K(b./(K^Tu)))$.
end while
$v = b./(K^Tu)$.
$\alpha_\lambda^\star = -\frac{1}{\lambda}\log(u) + \frac{\log(u)^T\mathbf{1}_n}{\lambda n}\mathbf{1}_n$.
$T_\lambda^\star = \operatorname{diag}(u)K\operatorname{diag}(v)$.   % use bsxfun(@times, v, (bsxfun(@times,K, u))');
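A NumPy transcription of Algorithm 3 (a sketch of ours rather than the GPU code used in §6) that returns both smoothed optima:

```python
import numpy as np

def smoothed_optima(M, lam, a, b, tol=1e-9, max_iter=10000):
    """Sinkhorn iterations returning the smoothed primal T*_lam and dual alpha*_lam."""
    n = M.shape[0]
    K = np.exp(-lam * M)                   # elementwise exponential of -lam * M
    K_tilde = K / a[:, None]               # diag(a^{-1}) K
    u = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        u_new = 1.0 / (K_tilde @ (b / (K.T @ u)))
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = b / (K.T @ u)
    T = u[:, None] * K * v[None, :]        # diag(u) K diag(v)
    alpha = -np.log(u) / lam + np.log(u).sum() / (lam * n)   # sums to zero
    return T, alpha
```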
6. Applications
We present two applications, one of Algorithm 1 and one of Algorithm 2, that both rely on the smooth approximations presented in §5. The settings we consider involve computing respectively tens of thousands or tens of high-dimensional optimal transport problems—2,500×2,500 for the first application, 57,647×48 for the second—which cannot be realistically carried out using network flow solvers. Using network flow solvers, the resolution of a single transport problem of these dimensions could take between several minutes and several hours. We also take advantage in the first application of the fact that Algorithm 3 can be run efficiently on GPGPUs using vectorized code (Cuturi, 2013, Alg. 1).
6.1. Visualization of Perturbed Images
We use 50,000 images of the MNIST database, with approximately 5,000 images for each digit from 0 to 9. Each image (originally 20×20 pixels) is scaled randomly, uniformly between half-size and double-size, and translated randomly within a 50×50 grid, with a bias towards corners. We display intermediate barycenter solutions for each of these 10 datasets of images for t = 1, 10, 60 gradient iterations. $\lambda$ is set to $60/\mathrm{median}(M)$, where $M$ is the squared-Euclidean distance matrix between all 2,500 pixels in the grid. Using a Quadro K5000 GPU with close to 1,500 cores, the computation of a single barycenter takes about 2 hours to reach 100 iterations. Because we use warm starts to initialize $u$ in Algorithm 3 at each iteration of Algorithm 1, the first iterations are typically more computationally intensive than those carried out near the end.
6.2. Clustering with Uniform Centroids
In practice, the $k$-means cost function applied to a given empirical measure could be minimized with a set of centroids $X$ and weight vector $a$ such that the entropy of $a$ is very small. This can occur when most of the original points in the dataset are attributed to a very small subset of the $k$ centroids, and could be undesirable in applications of $k$-means where a more regular attribution is sought. For instance, in sensor deployment, when each centroid (sensor) is limited in the number of data points (users) it can serve, we would like to ensure that the attributions agree with those limits. Whereas the original $k$-means cannot take into account such limits, we can enforce them using Algorithm 2. We illustrate the difference between looking for optimal centroids with "free" assignments ($\Theta = \Sigma_k$), and looking for optimal "uniform" centroids with constrained assignments ($\Theta = \{\mathbf{1}_k/k\}$), using US census data for income and population repartitions across 57,647 spatial locations in the 48 contiguous states. These weighted points can be interpreted as two empirical measures on $\mathbb{R}^2$ with weights directly proportional to these respective quantities. We initialize both "free" and "uniform" clustering with the actual 48 state capitals. Results displayed in Figure 3 show that by forcing our approximation to be uniform, we recover centroids that induce a
Figure 3. Comparison of two Wasserstein barycenters on spatial repartitions of income and population in the 48 contiguous states, using as many (k = 48) centroids. The size of each of the 57,647 blue crosses is proportional to the local average of the relevant variable (income above and population below) at that location, normalized to sum to 1. Each downward triangle is a centroid of the k-means clustering (equivalent to a Wasserstein barycenter with $\Theta = \Sigma_k$) whose size is proportional to the portion of mass captured by that centroid. Red dots indicate centroids obtained with a uniform constraint on the weights, $\Theta = \{\mathbf{1}_k/k\}$. Since such centroids are constrained to carry a fixed portion of the total weight, one can observe that they provide a more balanced clustering than the k-means solution.
more balanced clustering. Indeed, each cell of the Voronoi
diagram built with these centroids is now constrained to hold
the same aggregate wealth or population. These centroids
could form the new state capitals of equally rich or equally
populated states. On an algorithmic note, we notice in Fig-
ure 4 that Algorithm 2 converges to its (local) optimum at a
speed which is directly comparable to that of the k-means in
terms of iterations, with a relatively modest computational
overhead. Unsurprisingly, the Wasserstein distance between
the clusters and the original measure is higher when adding
uniform constraints on the weights.
[Figure 4 plot: Wasserstein distance to the income measure (vertical axis) versus iterations (horizontal axis), for the uniform Wasserstein barycenter and for k-means (the free Wasserstein barycenter).]
Figure 4. Wasserstein distance of the uniform Wasserstein barycenter (weights constrained to be in $\Theta = \{\mathbf{1}_k/k\}$) and its unconstrained equivalent (k-means) to the income empirical measure. Note that, because of the constraints on weights, the Wasserstein distance of the uniform Wasserstein barycenter is necessarily larger. On a single CPU core, these computations require 12.5 seconds for the constrained case, using Sinkhorn's approximation, and 1.55 seconds for the regular k-means algorithm. Using a regular transportation solver, computing the optimal transport from the 57,647 points to the 48 centroids would require about 1 hour for a single iteration.
Conclusion We have proposed in this paper two original
algorithms to compute Wasserstein barycenters of empiri-
cal measures. Using these algorithms in practice for mea-
sures of large support is a daunting task for two reasons:
they are inherently slow because they rely on the subgradi-
ent method; the computation of these subgradients involves
solving primal and dual optimal transport problems. Both
issues can be substantially alleviated by smoothing the pri-
mal optimal transport problem with an entropic penalty and
considering its dual. Both smoothed problems admit gra-
dients which can be computed efficiently using only matrix
vector products. Our aim in proposing such algorithms is to
demonstrate that Wasserstein barycenters can be used for vi-
sualization, constrained clustering, and hopefully as a core
component within more complex data analysis techniques in
future applications. We also believe that our smoothing ap-
proach can be directly applied to more complex variational
problems that involve multiple Wasserstein distances, such
as Wasserstein propagation (Solomon et al., 2014).
Acknowledgements We thank reviewers for their com-
ments and Gabriel Peyré for fruitful discussions. MC was
supported by grant 26700002 from JSPS. AD was partially
supported by EPSRC.
Fast Computation of Wasserstein Barycenters
References
M. Agueh and G. Carlier. Barycenters in the Wasserstein space.
SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.
S.-I. Amari and H. Nagaoka. Methods of Information Geometry.
AMS vol. 191, 2001.
A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering
with Bregman divergences. The Journal of Machine Learning
Research, 6:1705–1749, 2005.
A. Beck and M. Teboulle. Mirror descent and nonlinear projected
subgradient methods for convex optimization. Operations Re-
search Letters, 31(3):167–175, 2003.
D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization.
Athena Scientific, 1997.
G. Canas and L. Rosasco. Learning probability measures with re-
spect to optimal transport metrics. In Adv. in Neural Infor. Proc.
Systems 25, pages 2501–2509. 2012.
Y. Chen, M. Welling, and A. J. Smola. Supersamples from kernel-
herding. In Uncertainty in Artificial Intelligence (UAI), 2010.
M. Cuturi. Sinkhorn distances: Lightspeed computation of opti-
mal transport. In Advances in Neural Information Processing
Systems 26, pages 2292–2300, 2013.
J. Franklin and J. Lorenz. On the scaling of multidimensional ma-
trices. Linear Algebra and its applications, 114:717–735, 1989.
A. Gretton, K. M. Borgwardt, M. Rasch, B. Schoelkopf, and A. J.
Smola. A kernel method for the two-sample-problem. In Ad-
vances in Neural Information Processing Systems 19, pages
513–520. MIT Press, 2007.
Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity
with kernel Fisher discriminant analysis. In Advances in Neural
Information Processing Systems 20, pages 609–616. MIT Press,
2008.
T. Kanamori, T. Suzuki, and M. Sugiyama. f-divergence esti-
mation and two-sample homogeneity test under semiparametric
density-ratio models. IEEE Trans. on Information Theory, 58
(2):708–720, 2012a.
T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis
of kernel-based least-squares density-ratio estimation. Machine
Learning, 86(3):335–367, 2012b.
P. A. Knight and D. Ruiz. A fast algorithm for matrix balancing.
IMA Journal of Numerical Analysis, 2012.
S. Lloyd. Least squares quantization in PCM. IEEE Trans. on In-
formation Theory, 28(2):129–137, 1982.
R. J. McCann. Existence and uniqueness of monotone measure-
preserving maps. Duke Mathematical Journal, 80(2):309–324,
1995.
R. J. McCann. A convexity principle for interacting gases. Ad-
vances in Mathematics, 128(1):153–179, 1997.
Y. Nesterov. Smooth minimization of non-smooth functions. Math.
Program., 103(1):127–152, May 2005.
M. K. Ng. A note on constrained k-means algorithms. Pattern
Recognition, 33(3):515–519, 2000.
F. Nielsen. Jeffreys centroids: A closed-form expression for pos-
itive histograms and a guaranteed tight approximation for fre-
quency histograms. IEEE Signal Processing Letters (SPL),
2013.
E. Parzen. On estimation of a probability density function and
mode. The Annals of Mathematical Statistics, 33(3):1065–1076,
1962.
O. Pele and M. Werman. Fast and robust earth mover’s distances.
In ICCV’09, 2009.
D. Pollard. Quantization and the method of k-means. IEEE Trans.
on Information Theory, 28(2):199–205, 1982.
J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycen-
ter and its application to texture mixing. In Scale Space and
Variational Methods in Computer Vision, volume 6667 of Lec-
ture Notes in Computer Science, pages 435–446. Springer, 2012.
S. Reich. A non-parametric ensemble transform method for
Bayesian inference. SIAM Journal on Scientific Computing, 35
(4):A2013–A2024, 2013.
Y. Rubner, L. Guibas, and C. Tomasi. The earth mover's dis-
tance, multi-dimensional scaling, and color-based image re-
trieval. In Proceedings of the ARPA Image Understanding Work-
shop, pages 661–668, 1997.
R. Sinkhorn. Diagonal equivalence to matrices with prescribed row
and column sums. The American Mathematical Monthly, 74(4):
402–405, 1967.
J. Solomon, R. Rustamov, L. Guibas, and A. Butscher. Wasser-
stein propagation for semi-supervised learning. In Proceed-
ings of The 31st International Conference on Machine Learning,
pages 306–314, 2014.
M. Sugiyama, M. Yamada, M. Kimura, and H. Hachiya. On
information-maximization clustering: Tuning parameter selec-
tion and analytic solution. In Proceedings of the 28th Interna-
tional Conference on Machine Learning, pages 65–72, 2011.
M. Teboulle. A unified continuous optimization framework for
center-based clustering methods. The Journal of Machine
Learning Research, 8:65–102, 2007.
R. Veldhuis. The centroid of the symmetrical Kullback-Leibler dis-
tance. Signal Processing Letters, IEEE, 9(3):96–99, 2002.
C. Villani. Optimal transport: old and new, volume 338. Springer
Verlag, 2009.
G. Zen and E. Ricci. Earth mover’s prototypes: A convex learning
approach for discovering activity patterns in dynamic scenes. In
Computer Vision and Pattern Recognition (CVPR), 2011 IEEE
Conference on, pages 3225–3232. IEEE, 2011.