Conference paper · Year: 2021

How to Leverage a Multi-layered Transformer Language Model for Text Clustering: an Ensemble Approach

Abstract

Pre-trained Transformer-based word embeddings are now widely used in text mining, where they are known to significantly improve supervised tasks such as text classification, named entity recognition, and question answering. Since Transformer models create several different embeddings for the same input, one at each layer of their architecture, several studies have tried to identify which of these embeddings contribute most to the success of the above-mentioned tasks. In contrast, no such performance analysis has yet been carried out in the unsupervised setting. In this paper we evaluate the effectiveness of Transformer models on the important task of text clustering. In particular, we present a clustering ensemble approach that harnesses all the network's layers. Numerical experiments carried out on real datasets with different Transformer models show the effectiveness of the proposed method compared to several baselines.
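As a rough illustration of the general idea (a minimal sketch, not the authors' actual method), the Python snippet below embeds each text with every layer of a pre-trained Transformer, clusters each layer's embeddings separately with k-means, and merges the resulting partitions through a simple co-association consensus. The model name (bert-base-uncased), mean pooling over tokens, the number of clusters, and the k-means/co-association consensus step are all illustrative assumptions.

```python
# Sketch: per-layer Transformer embeddings + clustering ensemble.
# Assumptions (not from the paper): bert-base-uncased, mean pooling,
# k-means per layer, co-association consensus over the partitions.
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

texts = ["a document about sports", "a text on football",
         "an article on politics", "a piece about elections"]
n_clusters = 2  # assumed number of clusters

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)
model.eval()

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True,
                    return_tensors="pt")
    out = model(**enc)

# out.hidden_states holds one tensor per layer (plus the input embeddings).
# Mean-pool token vectors, masking padding, to get one vector per text.
mask = enc["attention_mask"].unsqueeze(-1).float()
layer_embeddings = [((h * mask).sum(1) / mask.sum(1)).numpy()
                    for h in out.hidden_states]

# Cluster each layer's representation independently.
partitions = [KMeans(n_clusters=n_clusters, n_init=10,
                     random_state=0).fit_predict(X)
              for X in layer_embeddings]

# Co-association consensus: how often do two texts share a cluster
# across layers? Then cluster that similarity matrix once more.
n = len(texts)
coassoc = np.zeros((n, n))
for labels in partitions:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(partitions)

final_labels = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(coassoc)
print(final_labels)
```

The co-association step is one common way to combine base partitions of an ensemble; the paper's own combination scheme may differ, so treat this only as a demonstration of how per-layer embeddings can feed a clustering ensemble.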
Main file: aitsaada_etal_cikm2021.pdf (634.75 Ko)
Origin: Files produced by the author(s)

Dates and versions

hal-03963423, version 1 (06-02-2023)

Identifiers

HAL Id: hal-03963423
DOI: 10.1145/3459637.3482121

Cite

Mira Ait-Saada, François Role, Mohamed Nadif. How to Leverage a Multi-layered Transformer Language Model for Text Clustering: an Ensemble Approach. CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Nov 2021, Queensland, Australia. pp. 2837-2841. ⟨10.1145/3459637.3482121⟩. ⟨hal-03963423⟩