Preprint, Working Paper. Year: 2023

Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses

Abstract

Optimal Transport has sparked considerable interest in recent years, in particular thanks to the Wasserstein distance, which provides a geometrically sensible and intuitive way of comparing probability measures. For computational reasons, the Sliced Wasserstein (SW) distance was introduced as an alternative to the Wasserstein distance, and has been used for training generative Neural Networks (NNs). While convergence of Stochastic Gradient Descent (SGD) has been observed in practice in such a setting, there is to our knowledge no theoretical guarantee for this observation. Leveraging recent works on the convergence of SGD on non-smooth and non-convex functions by Bianchi et al. (2022), we aim to bridge that knowledge gap and provide a realistic context under which fixed-step SGD trajectories for the SW loss on NN parameters converge. More precisely, we show that the trajectories approach the set of solutions of the (sub)gradient flow equation as the step size decreases. Under stricter assumptions, we show a much stronger convergence result for noised and projected SGD schemes, namely that the long-run limits of the trajectories approach a set of generalised critical points of the loss function.
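As a rough illustration of this setting, the snippet below is a minimal sketch, not the paper's construction: it estimates the squared Sliced Wasserstein distance by Monte Carlo over random projection directions, using the fact that one-dimensional optimal transport between uniform empirical measures of equal size reduces to sorting, and runs fixed-step SGD on the parameters of a small generator network. The architecture, learning rate, sample sizes and number of projections are arbitrary illustrative choices.

```python
import torch

def sliced_wasserstein_sq(x, y, n_proj=50):
    """Monte Carlo estimate of SW_2^2 between two equal-size uniform point clouds."""
    d = x.shape[1]
    theta = torch.randn(n_proj, d)
    theta = theta / theta.norm(dim=1, keepdim=True)  # random directions on the unit sphere
    px, _ = torch.sort(x @ theta.T, dim=0)           # project, then sort: 1D optimal transport
    py, _ = torch.sort(y @ theta.T, dim=0)
    return ((px - py) ** 2).mean()                   # average over samples and projections

# Illustrative generator and target distribution (not taken from the paper).
gen = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
target = 0.5 * torch.randn(256, 2) + torch.tensor([2.0, -1.0])
opt = torch.optim.SGD(gen.parameters(), lr=1e-2)     # fixed-step SGD on the NN parameters

for _ in range(1000):
    z = torch.randn(256, 2)                          # latent noise
    loss = sliced_wasserstein_sq(gen(z), target)     # stochastic: fresh samples and projections each step
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The randomness here comes both from resampling the latent noise and from redrawing the projection directions at each step, which mirrors the kind of stochastic estimator of the SW loss discussed in the abstract.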
Main file: main.pdf (541.93 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04232792, version 1 (09-10-2023)
hal-04232792, version 2 (18-03-2024)

Identifiers

  • HAL Id: hal-04232792, version 2

Cite

Eloi Tanguy. Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses. 2024. ⟨hal-04232792v2⟩
