Preprint / Working Paper, Year: 2026

Homogenized Transformers

Abstract

We study a random model of deep multi-head self-attention in which the weights are resampled independently across layers and heads, as at initialization of training. Viewing depth as a time variable, the residual stream defines a discrete-time interacting particle system on the unit sphere. We prove that, under suitable joint scalings of the depth, the residual step size, and the number of heads, this dynamics admits a nontrivial homogenized limit. Depending on the scaling, the limit is either deterministic or stochastic with common noise; in the mean-field regime, the latter leads to a stochastic nonlinear Fokker--Planck equation for the conditional law of a representative token. In the Gaussian setting, the limiting drift vanishes, making the homogenized dynamics explicit enough to study representation collapse. This yields quantitative trade-offs between dimension, context length, and temperature, and identifies regimes in which clustering can be mitigated.
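As a rough, self-contained illustration of the model described in the abstract, the sketch below simulates tokens on the unit sphere evolving under residual multi-head self-attention whose weights are resampled independently at every layer. All parameter names (tau, beta, H, depth) and the choice of i.i.d. Gaussian query/key/value matrices are illustrative assumptions, not the paper's exact construction.

```python
# Minimal simulation sketch (assumptions: Gaussian weights, 1/sqrt(d) scaling,
# softmax attention, projection back onto the sphere after each residual step).
import numpy as np

rng = np.random.default_rng(0)

d = 16        # token dimension
n = 32        # context length (number of tokens)
H = 8         # number of attention heads
tau = 0.05    # residual step size
beta = 1.0    # inverse temperature in the softmax
depth = 200   # number of layers (discrete time steps)

# Initial tokens: i.i.d. uniform on the unit sphere S^{d-1}.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

def attention_step(X):
    """One residual layer with freshly resampled Gaussian head weights,
    averaged over heads, followed by projection back onto the sphere."""
    update = np.zeros_like(X)
    for _ in range(H):
        Q = rng.standard_normal((d, d)) / np.sqrt(d)
        K = rng.standard_normal((d, d)) / np.sqrt(d)
        V = rng.standard_normal((d, d)) / np.sqrt(d)
        logits = beta * (X @ Q.T) @ (X @ K.T).T      # (n, n) attention scores
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        A = np.exp(logits)
        A /= A.sum(axis=1, keepdims=True)            # row-stochastic softmax
        update += (A @ X) @ V.T / H
    X = X + tau * update
    return X / np.linalg.norm(X, axis=1, keepdims=True)

for _ in range(depth):
    X = attention_step(X)

# A simple clustering diagnostic: mean pairwise cosine similarity of the
# tokens; values near 1 would indicate representation collapse.
G = X @ X.T
print("mean pairwise cosine similarity:", (G.sum() - n) / (n * (n - 1)))
```

Varying d, n, and beta in such a simulation gives a hands-on view of the dimension/context-length/temperature trade-offs for clustering that the paper quantifies; the homogenized limits themselves concern the joint scaling of depth, step size tau, and head count H.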

Main file
2604.01978v1.pdf (1.47 MB)

Dates and versions

hal-05578815, version 1 (03-04-2026)


Cite

Hugo Koubbi, Borjan Geshkovski, Philippe Rigollet. Homogenized Transformers. 2026. ⟨hal-05578815⟩