Conference paper, 2024

Training and Generalization Errors for Underparameterized Neural Networks

Abstract

The theory of the Neural Tangent Kernel explains why the training error of overparameterized networks converges linearly to 0. In this work, we focus on the case of small (or underparameterized) networks. An advantage of small networks is that they are faster to train while retaining sufficient precision to perform useful tasks in many applications. Our main theoretical contribution is to prove that the training error of small networks converges linearly to a non-null constant, of which we give a precise estimate. We verify this result on a 10-neuron network that simulates a Model Predictive Controller. We also observe that an upper bound on the generalization error follows a double-peak curve as the number of training samples increases.
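To make the plateau described above concrete, here is a minimal sketch, not the authors' experiment: a one-hidden-layer network with 10 tanh neurons is trained by plain gradient descent on a synthetic regression task standing in for the Model Predictive Controller of the paper. The target function, learning rate, and sample size are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (assumed task, standing in for the MPC law)
n_train = 200
X = rng.uniform(-1.0, 1.0, size=(n_train, 1))
y = np.sin(3.0 * X) + 0.5 * X**2            # arbitrary smooth target

# One hidden layer of m = 10 tanh neurons (underparameterized regime)
m = 10
W1 = rng.normal(0.0, 1.0, size=(1, m))
b1 = np.zeros(m)
w2 = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, 1))

lr = 0.05
for step in range(20001):
    H = np.tanh(X @ W1 + b1)                # hidden activations, shape (n, m)
    pred = H @ w2                           # network output, shape (n, 1)
    err = pred - y
    loss = 0.5 * np.mean(err**2)            # training error

    # Backpropagation for the mean-squared error
    g_pred = err / n_train                  # dLoss/dpred
    g_w2 = H.T @ g_pred
    g_H = g_pred @ w2.T
    g_pre = g_H * (1.0 - H**2)              # tanh derivative
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Plain gradient descent step
    w2 -= lr * g_w2
    W1 -= lr * g_W1
    b1 -= lr * g_b1

    if step % 4000 == 0:
        print(f"step {step:6d}  training error {loss:.6f}")

With so few neurons, the printed error drops quickly and then flattens at a strictly positive value: the kind of non-null limit whose level the paper estimates. An overparameterized network on the same data would instead drive the training error toward 0, as the Neural Tangent Kernel analysis predicts.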
Main file: ACC_2024_Daniel_MARTIN.pdf (557.88 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04423901, version 1 (29-01-2024)
hal-04423901, version 2 (13-03-2024)

Identifiers

  • HAL Id: hal-04423901, version 2

Cite

Daniel Martin Xavier, Ludovic Chamoin, Laurent Fribourg. Training and Generalization Errors for Underparameterized Neural Networks. 2024 American Control Conference, Jul 2024, Toronto, Canada. ⟨hal-04423901v2⟩
436 views
167 downloads
