Taking a Big Step: Large Learning Rates in Denoising Score Matching Prevent Memorization
Abstract
Denoising score matching plays a pivotal role in the performance of diffusion-based generative models. However, the empirical optimal score, the exact minimizer of the denoising score matching objective, leads to memorization, where generated samples replicate the training data. Yet, in practice, only a moderate degree of memorization is observed, even without explicit regularization. In this paper, we investigate this phenomenon by uncovering an implicit regularization mechanism driven by large learning rates. Specifically, we show that in the small-noise regime, the empirical optimal score exhibits high irregularity. We then prove that, when trained by stochastic gradient descent with a large enough learning rate, neural networks cannot stably converge to a local minimum with arbitrarily small excess risk. Consequently, the learned score cannot be arbitrarily close to the empirical optimal score, thereby mitigating memorization. To make the analysis tractable, we consider one-dimensional data and two-layer neural networks. Experiments validate the crucial role of the learning rate in preventing memorization, even beyond the one-dimensional setting.
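For concreteness, here is one standard way to write the denoising score matching objective and its exact minimizer on a finite training set; the notation (noise level \sigma_t, training points x_1,\dots,x_n) is ours and is only a sketch of the setting the abstract describes, not taken from the paper itself:

\[
\mathcal{L}_t(s) \;=\; \mathbb{E}_{x_0 \sim \hat p_n,\; \varepsilon \sim \mathcal{N}(0, I)}\!\left[\left\| s(x_0 + \sigma_t \varepsilon) + \frac{\varepsilon}{\sigma_t} \right\|^2\right],
\qquad \hat p_n = \frac{1}{n}\sum_{i=1}^n \delta_{x_i},
\]

whose unconstrained minimizer, the empirical optimal score, is the score of the Gaussian-smoothed empirical measure:

\[
s_t^\ast(x) \;=\; \nabla_x \log\!\left(\frac{1}{n}\sum_{i=1}^n \mathcal{N}\!\left(x;\, x_i,\, \sigma_t^2 I\right)\right)
\;=\; \frac{1}{\sigma_t^2}\left(\sum_{i=1}^n w_i(x)\, x_i - x\right),
\qquad
w_i(x) = \frac{\exp\!\left(-\|x - x_i\|^2 / 2\sigma_t^2\right)}{\sum_{j=1}^n \exp\!\left(-\|x - x_j\|^2 / 2\sigma_t^2\right)}.
\]

A reverse diffusion driven exactly by s_t^\ast pulls samples back onto the training points x_i, which is the memorization described above; as \sigma_t \to 0, the softmax weights w_i(x) switch almost discontinuously between training points, consistent with the abstract's claim that the empirical optimal score is highly irregular in the small-noise regime.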
Main file: main.pdf
Additional files: plot_function_t_0.05.pdf, plot_function_t_0.2.pdf, plot_loss.pdf, plot_loss_10d.pdf, plot_mmd.pdf, plot_mmd_high_dim_proportion_0.5.pdf, plot_samples_0.05.pdf, plot_samples_10.pdf, plot_samples_2.0.pdf, plot_samples_400.pdf
Origin: Files produced by the author(s)