Attention layers provably solve single-location regression
Laboratoire de Probabilités, Statistique et Modélisation
Preprint / working paper, 2024


Abstract

Attention-based models, such as Transformers, excel across various tasks but lack a comprehensive theoretical understanding, especially regarding token-wise sparsity and internal linear representations. To address this gap, we introduce the single-location regression task, where only one token in a sequence determines the output, and its position is a latent random variable, retrievable via a linear projection of the input. To solve this task, we propose a dedicated predictor, which turns out to be a simplified version of a non-linear self-attention layer. We study its theoretical properties by showing its asymptotic Bayes optimality and analyzing its training dynamics. In particular, despite the non-convex nature of the problem, the predictor effectively learns the underlying structure. This work highlights the capacity of attention mechanisms to handle sparse token information and internal linear structures.
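As a hedged illustration of the setup described in the abstract, the NumPy sketch below generates a toy single-location regression dataset and fits a simplified attention-style predictor with plain gradient descent. All specifics here are assumptions made for illustration and not the paper's exact formulation: the Gaussian generative model, the dimensions, the tanh gating, and the names u_star, w_star, k, v are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-location regression data (illustrative assumptions, not the
# paper's exact generative model): each sequence has L tokens in R^d, only
# the token at a random latent position j0 carries the signal, and the
# label is a linear function of that token.
L, d, n = 10, 32, 4000
w_star = rng.standard_normal(d) / np.sqrt(d)   # regression direction (assumed)
u_star = rng.standard_normal(d)                # direction marking the relevant token (assumed)

def sample_batch(n):
    X = rng.standard_normal((n, L, d))
    j0 = rng.integers(0, L, size=n)            # latent position of the informative token
    X[np.arange(n), j0] += u_star              # shift making the position linearly retrievable
    y = X[np.arange(n), j0] @ w_star           # output depends only on that token
    return X, y

def predict(X, k, v):
    # Simplified attention-style predictor: score each token with a learned
    # key vector k, gate the scores with a nonlinearity (tanh, an assumption),
    # and read out the weighted tokens with a learned value vector v.
    weights = np.tanh(X @ k)                   # (n, L) token weights
    return np.einsum('nl,nld,d->n', weights, X, v)

X, y = sample_batch(n)
k = 0.01 * rng.standard_normal(d)
v = 0.01 * rng.standard_normal(d)
lr = 0.1
for step in range(2000):                       # plain gradient descent on the squared loss
    weights = np.tanh(X @ k)
    pred = np.einsum('nl,nld,d->n', weights, X, v)
    err = pred - y
    xv = np.einsum('nld,d->nl', X, v)          # per-token value readout
    grad_v = np.einsum('n,nl,nld->d', err, weights, X) / n
    grad_k = np.einsum('n,nl,nl,nld->d', err, 1 - weights**2, xv, X) / n
    v -= lr * grad_v
    k -= lr * grad_k

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print('train MSE:', np.mean((predict(X, k, v) - y) ** 2))
print('alignment of k with u*:', cos(k, u_star), '| of v with w*:', cos(v, w_star))
```

In the spirit of the abstract, the key vector k is meant to recover the direction that marks the informative token, while v is meant to recover the regression direction; the printed alignments give a rough check of whether this toy version of the predictor learns that structure.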
Dates and versions

hal-04720799, version 1 (04-10-2024)


Cite

Pierre Marion, Raphaël Berthier, Gérard Biau, Claire Boyer. Attention layers provably solve single-location regression. 2024. ⟨hal-04720799⟩