Conference Poster, Year: 2021

Learning to Parse Sentences with Cross-Situational Learning using Different Word Embeddings Towards Robot Grounding

Subba Reddy Oota
Frédéric Alexandre
Xavier Hinaut

Abstract

How pre-trained transformer-based language models perform grounded language acquisition through cross-situational learning (CSL) remains unclear. In particular, understanding how meaning concepts are captured from complex sentences, along with learning language-based interactions, could benefit the field of human-robot interaction and help explain how children learn and ground language. In this work, we study cross-situational learning to understand the mechanisms that enable children to rapidly learn word-to-meaning mappings, using two sequence-based models: (i) Echo State Networks (i.e., Reservoir Computing) and (ii) Long Short-Term Memory networks (LSTM). We consider three different input representations: (i) One-Hot encoding, (ii) BERT fine-tuned on the Juven+GoLD corpus, and (iii) Google BERT. We investigate which of these three representations better predicts the visual stimuli as a function of the sentences describing the scenes, for both models. We test our approach on two datasets, Juven and GoLD, and show how these models generalize after only a few hundred partially described scenes via cross-situational learning. We find that both One-Hot encoding and fine-tuned BERT representations significantly improve the predictions for both models. Moreover, we argue that these models are able to learn complex relations between the contexts in which a word appears and the corresponding meaning concepts, handling polysemous and synonymous words. This aspect could be incorporated into a human-robot interaction study that examines grounding language to objects in a physical world, and it poses a challenge for researchers to better investigate the use of transformer models in robotics and HCI.
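The exact model configurations are not given on this page; the sketch below is only an illustration, assuming the `transformers` and `reservoirpy` libraries, of the general setup described in the abstract: a sentence is encoded as a sequence of pre-trained BERT embeddings and fed to an Echo State Network whose trainable readout predicts a fixed-size vector of scene meaning concepts. The concept vocabulary, the helper functions `encode_sentence` and `concept_vector`, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumption): pre-trained BERT embeddings + Echo State Network
# readout for cross-situational sentence-to-concepts mapping.
import numpy as np
import torch
from transformers import BertTokenizer, BertModel
from reservoirpy.nodes import Reservoir, Ridge

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

CONCEPTS = ["cup", "ball", "left", "right", "red", "blue"]  # illustrative vocabulary

def encode_sentence(sentence: str) -> np.ndarray:
    """Return an (n_tokens, 768) array of contextual BERT embeddings."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, n_tokens, 768)
    return hidden.squeeze(0).numpy()

def concept_vector(concepts):
    """Multi-hot target vector over the concept vocabulary, shape (1, n_concepts)."""
    return np.array([[1.0 if c in concepts else 0.0 for c in CONCEPTS]])

# Toy cross-situational pairs: each sentence is paired with the concepts
# that (partially) describe the corresponding scene.
pairs = [
    ("the red cup is on the left", ["cup", "red", "left"]),
    ("a blue ball lies on the right", ["ball", "blue", "right"]),
]

X = [encode_sentence(s) for s, _ in pairs]
# Repeat the target at every timestep so the readout is trained on full sequences.
Y = [np.repeat(concept_vector(c), x.shape[0], axis=0) for (_, c), x in zip(pairs, X)]

# Echo State Network: fixed random reservoir + trainable ridge-regression readout.
esn = Reservoir(units=500, sr=0.9, lr=0.5) >> Ridge(ridge=1e-6)
esn = esn.fit(X, Y)

# At test time, the output at the last timestep is read as the model's
# estimate of which concepts the sentence refers to.
pred = esn.run(encode_sentence("the blue ball is on the left"))
print(dict(zip(CONCEPTS, np.round(pred[-1], 2))))
```

Swapping the BERT encoder for a One-Hot vocabulary encoding, or the reservoir for an LSTM, would give the other model/representation combinations compared in the poster.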
Main file: Otta2021_SpLU_10_Poster.pdf (644.41 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03533730, version 1 (19-01-2022)

Identifiers

  • HAL Id: hal-03533730, version 1

Cite

Subba Reddy Oota, Frédéric Alexandre, Xavier Hinaut. Learning to Parse Sentences with Cross-Situational Learning using Different Word Embeddings Towards Robot Grounding. Spatial Language Understanding and Grounded Communication for Robotics Workshop, ACL-IJCNLP 2021, Aug 2021, Bangkok, Thailand. ⟨hal-03533730⟩