Convergence of Langevin-Simulated Annealing algorithms with multiplicative noise
Preprint, Working Paper. Year: 2021

Convergence of Langevin-Simulated Annealing algorithms with multiplicative noise

Abstract

We study the convergence of Langevin-Simulated Annealing type algorithms with multiplicative noise, i.e. for $V : \mathbb{R}^d \to \mathbb{R}$ a potential function to minimize, we consider the stochastic differential equation $dY_t = - \sigma\sigma^\top \nabla V(Y_t)\, dt + a(t)\sigma(Y_t)\, dW_t + a(t)^2\Upsilon(Y_t)\, dt$, where $(W_t)$ is a Brownian motion, $\sigma : \mathbb{R}^d \to \mathcal{M}_d(\mathbb{R})$ is an adaptive (multiplicative) noise coefficient, $a : \mathbb{R}^+ \to \mathbb{R}^+$ is a function decreasing to $0$, and $\Upsilon$ is a correction term. This setting can be applied to optimization problems arising in Machine Learning. The case where $\sigma$ is a constant matrix has been extensively studied; however, little attention has been paid to the general case. We prove the convergence, for the $L^1$-Wasserstein distance, of $Y_t$ and of the associated Euler scheme $\bar{Y}_t$ to a measure $\nu^\star$ supported on $\operatorname{argmin}(V)$, and give rates of convergence to the instantaneous Gibbs measure $\nu_{a(t)}$ with density $\propto \exp(-2V(x)/a(t)^2)$. To do so, we first consider the case where $a$ is a piecewise constant function and recover the classical schedule $a(t) = A\log^{-1/2}(t)$. We then prove the convergence in the general case by bounding the Wasserstein distance to the piecewise constant case using ergodicity properties.
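The abstract fully specifies the dynamics, so a short numerical sketch may help fix ideas. The Python/NumPy snippet below is not from the paper: it is a minimal, illustrative Euler discretization of the SDE above with the schedule $a(t) = A\log^{-1/2}(t)$. The function name euler_langevin_sa and all parameter names are hypothetical, and the correction term $\Upsilon$ is left as a user-supplied callback since its exact form is fixed in the paper itself.

import numpy as np

def euler_langevin_sa(grad_V, sigma, upsilon, y0, n_steps, dt, A=1.0, t0=2.0, rng=None):
    """Euler scheme for the simulated-annealing Langevin SDE
        dY_t = -(sigma sigma^T grad V)(Y_t) dt
               + a(t) sigma(Y_t) dW_t + a(t)^2 Upsilon(Y_t) dt
    with the classical schedule a(t) = A * log(t)^(-1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.array(y0, dtype=float)
    d = y.size
    t = t0                                  # start with log(t) > 0
    for _ in range(n_steps):
        a = A / np.sqrt(np.log(t))          # noise level decreasing to 0
        S = sigma(y)                        # d x d multiplicative noise matrix
        drift = -S @ S.T @ grad_V(y) + a**2 * upsilon(y)
        dW = np.sqrt(dt) * rng.standard_normal(d)
        y = y + dt * drift + a * (S @ dW)
        t += dt
    return y

# Toy run: V(x) = ||x||^2 / 2 with constant sigma = I_d, hence Upsilon = 0.
y = euler_langevin_sa(grad_V=lambda x: x,
                      sigma=lambda x: np.eye(x.size),
                      upsilon=lambda x: np.zeros_like(x),
                      y0=[3.0, -2.0], n_steps=100_000, dt=1e-3)
print(y)  # lands near the minimiser (0, 0), up to residual noise of order a(T)

With constant $\sigma$ the correction $\Upsilon$ vanishes and the scheme reduces to the classical constant-diffusion case mentioned in the abstract; a genuinely state-dependent $\sigma$ requires the nonzero correction term analysed in the paper.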

Dates and versions

hal-03891234, version 1 (09-12-2022)

Identifiers

Cite

Pierre Bras, Gilles Pagès. Convergence of Langevin-Simulated Annealing algorithms with multiplicative noise. 2021. ⟨hal-03891234⟩