Journal article: Transactions on Machine Learning Research, 2022

Efficient Gradient Flows in Sliced-Wasserstein Space

Abstract

Minimizing functionals in the space of probability distributions can be done with Wasserstein gradient flows. To solve them numerically, a possible approach is to rely on the Jordan-Kinderlehrer-Otto (JKO) scheme, which is analogous to the proximal scheme in Euclidean spaces. However, it requires solving a nested optimization problem at each iteration and is known for its computational challenges, especially in high dimension. To alleviate these, very recent works propose to approximate the JKO scheme by leveraging Brenier's theorem and using the gradients of Input Convex Neural Networks to parameterize the density (JKO-ICNN). However, this method comes with a high computational cost and stability issues. Instead, this work proposes to use gradient flows in the space of probability measures endowed with the sliced-Wasserstein (SW) distance. We argue that this method is more flexible than JKO-ICNN, since SW enjoys a closed-form differentiable approximation. Thus, the density at each step can be parameterized by any generative model, which alleviates the computational burden and makes the approach tractable in higher dimensions.
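For intuition only (this is not the authors' implementation), the sketch below illustrates the two ideas contrasted in the abstract. The JKO scheme discretizes a Wasserstein gradient flow of a functional F through the proximal step ρ_{k+1} ∈ argmin_ρ F(ρ) + W₂²(ρ, ρ_k) / (2τ), a nested optimization over measures at every iteration. The sliced alternative exploits the fact that SW₂ admits a simple Monte Carlo estimate: project both measures onto random directions, where 1D optimal transport reduces to sorting, and average. The code estimates SW₂² between two point clouds this way and runs explicit gradient steps on particle positions with PyTorch autograd; the function name, sample sizes, number of projections, and step size are illustrative assumptions, not values from the paper.

```python
import torch

def sliced_wasserstein_sq(x, y, n_projections=128):
    """Monte Carlo estimate of the squared sliced-Wasserstein-2 distance
    between two empirical measures given as equal-size (n, d) point clouds."""
    d = x.shape[1]
    theta = torch.randn(n_projections, d)
    theta = theta / theta.norm(dim=1, keepdim=True)  # random unit directions
    xp, yp = x @ theta.T, y @ theta.T                # (n, n_projections) 1D projections
    # In 1D, optimal transport between equal-size empirical measures is sorting.
    xs, _ = torch.sort(xp, dim=0)
    ys, _ = torch.sort(yp, dim=0)
    return ((xs - ys) ** 2).mean()

# Gradient flow of F(mu) = SW2(mu, nu)^2 via explicit Euler steps on particles.
torch.manual_seed(0)
target = torch.randn(500, 2) + torch.tensor([4.0, 0.0])  # target measure nu (particles)
x = torch.randn(500, 2).requires_grad_(True)             # evolving measure mu_0 (particles)
tau = 1.0                                                # illustrative step size
for _ in range(200):
    loss = sliced_wasserstein_sq(x, target)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x -= tau * grad                                  # move particles down the gradient
```

Each step costs one sort per projection, O(K n log n) overall for K projections, rather than an inner optimization over measures, which reflects the computational advantage the abstract claims for the sliced approach.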
Dates and versions

hal-04250914, version 1 (27-05-2024)

Cite

Clément Bonet, Nicolas Courty, François Septier, Lucas Drumetz. Efficient Gradient Flows in Sliced-Wasserstein Space. Transactions on Machine Learning Research, 2022. ⟨hal-04250914⟩