Journal article in Frontiers in Neuroscience, 2021

Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias

Axel Laborieux
Maxence Ernoult
Benjamin Scellier
Yoshua Bengio
Julie Grollier
Damien Querlioz

Abstract

Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach is a promising route toward learning-capable neuromorphic systems and comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases: the network first evolves freely and is then "nudged" toward a target; the weights are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach to training recurrent neural networks, when nudging is performed with infinitely small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows training deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches the error achieved by BPTT and is a major improvement over standard Equilibrium Propagation, which gives 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems.
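To make the symmetric-nudging idea concrete, here is a minimal NumPy sketch (not taken from the paper; the toy linear Hopfield-style energy, network sizes, and all names are hypothetical). With s^β denoting the equilibrium state of the energy nudged toward the target with strength β, it compares the standard one-sided estimate, (1/β)(∂E/∂θ|_{s^β} − ∂E/∂θ|_{s^0}), whose bias is O(β), against the symmetric estimate, (1/2β)(∂E/∂θ|_{s^β} − ∂E/∂θ|_{s^{−β}}), whose bias is O(β²).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and parameters (hypothetical, for illustration only)
nx, nh, no = 5, 8, 3
W1 = 0.1 * rng.standard_normal((nh, nx))   # input-to-hidden weights
W2 = 0.1 * rng.standard_normal((no, nh))   # hidden-to-output weights
x = rng.standard_normal(nx)                # input
y = rng.standard_normal(no)                # target

def relax(beta, n_steps=500):
    """Settle to the equilibrium of the nudged energy
    F(h, o) = (|h|^2 + |o|^2)/2 - h.(W1 x) - o.(W2 h) + beta * |o - y|^2 / 2
    by iterating the stationarity conditions dF/dh = 0 and dF/do = 0."""
    h, o = np.zeros(nh), np.zeros(no)
    for _ in range(n_steps):
        h = W1 @ x + W2.T @ o
        o = (W2 @ h + beta * y) / (1.0 + beta)
    return h, o

def ep_grad_W2(beta, symmetric=True):
    """EP estimate of the loss gradient dL/dW2, using dE/dW2 = -outer(o, h),
    which depends only on the states of the neurons that W2 connects."""
    h_p, o_p = relax(+beta)
    if symmetric:
        # Symmetric nudging: one positively and one negatively nudged phase
        h_m, o_m = relax(-beta)
        return -(np.outer(o_p, h_p) - np.outer(o_m, h_m)) / (2.0 * beta)
    # Standard one-sided estimate: free phase plus one positive phase
    h_0, o_0 = relax(0.0)
    return -(np.outer(o_p, h_p) - np.outer(o_0, h_0)) / beta

# Reference gradient: symmetric estimate at a tiny beta (bias is O(beta^2))
g_ref = ep_grad_W2(1e-5)
for beta in (0.5, 0.1):
    g_one = ep_grad_W2(beta, symmetric=False)
    g_sym = ep_grad_W2(beta, symmetric=True)
    rel = lambda g: np.linalg.norm(g - g_ref) / np.linalg.norm(g_ref)
    print(f"beta={beta}: one-sided error {rel(g_one):.4f}, "
          f"symmetric error {rel(g_sym):.4f}")
```

Run as-is, the symmetric estimate should track the small-β reference far more closely at large β than the one-sided estimate does, mirroring the bias cancellation that the abstract credits for scaling to CIFAR-10.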
Main file
fnins-15-633674 (1).pdf (1.34 MB)
Origin: Publication funded by an institution

Dates and versions

hal-03372043, version 1 (09-10-2021)

License

Attribution (CC BY)

Identifiers

HAL Id: hal-03372043
DOI: 10.3389/fnins.2021.633674

Cite

Axel Laborieux, Maxence Ernoult, Benjamin Scellier, Yoshua Bengio, Julie Grollier, Damien Querlioz. Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias. Frontiers in Neuroscience, 2021, 15, ⟨10.3389/fnins.2021.633674⟩. ⟨hal-03372043⟩