Preprint / working paper. Year: 2016

Practical Riemannian Neural Networks

Abstract

We provide the first experimental results on non-synthetic datasets for the quasi-diagonal Riemannian gradient descents for neural networks introduced in [Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a previously unpublished electroencephalogram dataset. The quasi-diagonal Riemannian algorithms consistently beat simple stochastic gradient descent by a varying margin. The computational overhead with respect to simple backpropagation is around a factor of 2. Perhaps more interestingly, these methods also reach their final performance quickly, thus requiring fewer training epochs and a smaller total computation time. We also present an implementation guide to these Riemannian gradient descents for neural networks, showing how the quasi-diagonal versions can be implemented with minimal effort on top of existing routines that compute gradients.
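As an illustration of the kind of update the abstract refers to, the sketch below shows a quasi-diagonal parameter update for a single unit, in which only the diagonal of the per-unit metric block plus its bias row/column are kept, so each weight direction is obtained by solving a 2x2 system with the bias. This is a minimal sketch written for this summary, not code from the paper: the helper name, its arguments, and the way the metric entries (`m_bb`, `m_bw`, `m_ww`) are obtained are assumptions; the paper and [Ollivier, 2015] give the precise estimators built on top of standard backpropagated quantities.

```python
import numpy as np

def quasi_diagonal_update(g_b, g_w, m_bb, m_bw, m_ww, lr=0.01, eps=1e-8):
    """Hypothetical helper: quasi-diagonal Riemannian update for one unit.

    g_b  : scalar, gradient w.r.t. the unit's bias
    g_w  : (n,) array, gradients w.r.t. the unit's incoming weights
    m_bb : scalar, metric entry (bias, bias)
    m_bw : (n,) array, metric entries (bias, weight_j)
    m_ww : (n,) array, diagonal metric entries (weight_j, weight_j)

    For each weight j, the 2x2 system formed by the bias and that weight is
    solved exactly; all weight-weight cross terms are dropped, which is the
    quasi-diagonal approximation of the metric.
    """
    det = m_bb * m_ww - m_bw ** 2 + eps                 # one 2x2 determinant per weight
    dw = (m_bb * g_w - m_bw * g_b) / det                # preconditioned weight directions
    db = (g_b - np.dot(m_bw, dw)) / (m_bb + eps)        # bias direction, corrected by the weights
    return -lr * db, -lr * dw

# Toy usage with random numbers standing in for backpropagated quantities.
rng = np.random.default_rng(0)
n = 5
g_b, g_w = rng.normal(), rng.normal(size=n)
m_bb = 1.0 + rng.random()                               # bias-bias metric entry
m_bw = 0.1 * rng.normal(size=n)                         # bias-weight metric entries
m_ww = 1.0 + rng.random(size=n)                         # diagonal weight-weight entries
delta_b, delta_w = quasi_diagonal_update(g_b, g_w, m_bb, m_bw, m_ww)
print(delta_b, delta_w)
```

Because the update only needs per-weight gradients and a handful of extra accumulated statistics per unit, it can be layered on top of an existing backpropagation routine, which is the point made in the abstract about the modest implementation effort.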

Dates and versions

hal-01695102, version 1 (29-01-2018)


Cite

Gaétan Marceau-Caron, Yann Ollivier. Practical Riemannian Neural Networks. 2016. ⟨hal-01695102⟩