Conference paper, Year: 2021

Unsupervised Methods for the Study of Transformer Embeddings

Mira Ait Saada
  • Role: Author
François Role
  • Role: Author
  • PersonId: 1360641
Mohamed Nadif
  • Role: Author
  • PersonId: 1134506

Abstract

Over the last decade, neural word embeddings have become a cornerstone of many important text mining applications such as text classification, sentiment analysis, named entity recognition, and question answering systems. In particular, Transformer-based contextual word embeddings have attracted much attention, with several works trying to understand how such models work through supervised probing tasks, usually focusing on BERT. In this paper, we propose a fully unsupervised approach to analyzing Transformer-based embedding models in their bare state, with no fine-tuning. More precisely, we focus on characterizing and identifying groups of Transformer layers across 6 different Transformer models.
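The abstract does not spell out the analysis pipeline, but a minimal sketch of the general idea it describes (extracting per-layer hidden states from a pretrained model with no fine-tuning, then grouping the layers with an unsupervised clustering step) might look as follows. The model name bert-base-uncased, the toy sentences, the mean pooling, and the choice of agglomerative clustering with 3 groups are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch only: extract per-layer sentence representations from a
# pretrained Transformer (no fine-tuning) and group the layers without supervision.
# Model choice, pooling scheme, and cluster count are assumptions for illustration.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import AgglomerativeClustering

sentences = [
    "The cat sat on the mat.",
    "Transformer layers produce contextual embeddings.",
    "Unsupervised methods need no labelled probing data.",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden_states = model(**enc).hidden_states  # embedding layer + 12 encoder layers

# Mean-pool each layer over the non-padding tokens: one vector per (layer, sentence).
mask = enc["attention_mask"].unsqueeze(-1).float()
layer_reps = [((h * mask).sum(1) / mask.sum(1)).numpy() for h in hidden_states]
X = np.stack(layer_reps).reshape(len(layer_reps), -1)  # (n_layers, n_sentences * dim)

# Group the layers with hierarchical clustering; 3 groups is an arbitrary choice.
groups = AgglomerativeClustering(n_clusters=3).fit_predict(X)
for layer, group in enumerate(groups):
    print(f"layer {layer:2d} -> group {group}")
```

In practice one would of course run this over a much larger corpus and over the several Transformer models studied in the paper, possibly comparing layers through a similarity measure rather than raw pooled vectors.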
Main file: IDA_2021_ait-saada.pdf (639.38 KB)

Dates and versions

hal-04492973, version 1 (08-04-2024)

Identifiers

HAL Id: hal-04492973
DOI: 10.1007/978-3-030-74251-5_23

Cite

Mira Ait Saada, François Role, Mohamed Nadif. Unsupervised Methods for the Study of Transformer Embeddings. Advances in Intelligent Data Analysis XIX (IDA 2021), Lecture Notes in Computer Science, vol. 12695, Springer, Cham, Porto, Portugal, 2021, pp. 287-300. ⟨10.1007/978-3-030-74251-5_23⟩. ⟨hal-04492973⟩