Conference paper, 2022

How comparable are languages across linguistic corpora? Some methodological thoughts

Matthew Stave
François Delafontaine
François Pellegrino
Christophe Coupé

Abstract

As the quality and availability of corpora of lesser-documented languages grow, linguists are faced with a wide variety of new avenues for typological research, ranging from fine phonetic analysis to high-level discourse topics (Easterday et al., 2021; Levshina, 2021; Schnell et al., 2021; Schnell & Schiborr, 2022, inter alia). As the scope of cross-linguistic comparison increases, corpus-based quantitative methods become increasingly useful tools. These new quantitative approaches, however, bring new (and familiar) potential pitfalls, which bear documenting. We discuss a broad selection of such issues in the context of an analysis of information rate, using both morphological glosses and entropy-based measures.

The corpora in question come from the DoReCo dataset (Paschen et al., 2020), a collection of 50 spoken corpora, created and shared by language experts and phonemically time-aligned using the MAUS software (Kisler et al., 2017). A subset of 37 of the DoReCo corpora also contains morphological annotation. The dataset provides a rich source for studying how information is distributed in natural speech, both on the temporal dimension (quantification of the information rate) and on the grammatical dimension (average information encoded per morpheme or word) (see Cohen Priva, 2017; Coupé et al., 2019; Levshina & Moran, 2021; Meister et al., 2021 for somewhat similar approaches; a toy illustration of these two measures is sketched below). In this framework, each corpus delineates a language-specific sample: since those samples are not parallel, we lose in comparability what we gain in naturalness. The cornerstone is consequently to distinguish the variation induced by the languages themselves from the variation inherent in differences in sampling, coding, and processing.

Issues in cross-linguistic morphological comparison can be divided into those that originate in linguistic structure and those that originate in human decisions. Language-based issues include familiar typological problems of how (and whether) grammatical categories extend across languages (leading to statistical issues of representation and missingness), as well as issues of quantifying information across different categories, such as segmental and supra-segmental, or lexical and grammatical units. Human-based issues can occur within a corpus, ranging from simple technical issues like variant spellings of glosses to less identifiable ones like different texts being coded at different stages of linguistic analysis. Even more commonly, they occur across languages, where they can involve straightforward issues of statistical comparison, such as differing ages, genders, or numbers of speakers, different genres of texts, or different glossing conventions. They can also involve more subtle differences, such as different theoretical approaches (leading to, e.g., different notions of wordhood), different research foci (leading to, e.g., more fine-grained parsing of certain linguistic elements), or differences in the availability of comparative data (leading to, e.g., fewer or less precise morphological segmentations). Another difficulty relates to how information can be cross-linguistically defined and quantified across non-parallel corpora.
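To make the two measures concrete, the following is a minimal sketch in Python, assuming a corpus already reduced to a flat sequence of time-aligned glossed units (words or morphemes) with a known total speech duration. The abstract does not specify the authors' actual estimator, and the names (units, total_duration_s, the toy sample) are hypothetical.

    import math
    from collections import Counter

    def unigram_entropy(units):
        # Shannon entropy in bits per unit, estimated from relative frequencies.
        counts = Counter(units)
        n = sum(counts.values())
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def information_rate(units, total_duration_s):
        # Bits per second: average information per unit times units per second.
        return unigram_entropy(units) * (len(units) / total_duration_s)

    # Hypothetical toy sample: glossed morphemes plus total speech time in seconds.
    sample = ["1SG", "see", "PST", "dog", "DEF", "1SG", "go", "PST"]
    print(unigram_entropy(sample))        # grammatical dimension: bits per morpheme
    print(information_rate(sample, 2.5))  # temporal dimension: bits per second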
Modern Transformer-based computational language models, such as GPT-2 or BERT (Devlin et al., 2018; Radford et al., 2019), have been convincingly used to approximate linguistic information through the Shannonian notion of surprisal (Shannon, 1948; Wilcox et al., 2020) in English, but their relevance and consistency for cross-language comparison remain largely terra incognita (but see Bjerva et al., 2019). We will shed some light on their strengths and limitations with illustrations based on multiple translations of a subset of the DoReCo dataset (a sketch of such a surprisal computation follows the reference list below).

References

Bjerva, J., Östling, R., Veiga, M. H., Tiedemann, J., & Augenstein, I. (2019). What do language representations really represent? Computational Linguistics, 45(2), 381–389. https://doi.org/10.1162/coli_a_00351
Cohen Priva, U. (2017). Not so fast: Fast speech correlates with lower lexical and structural information. Cognition, 160, 27–34.
Coupé, C., Oh, Y. M., Dediu, D., & Pellegrino, F. (2019). Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche. Science Advances, 5(9), eaaw2594.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.
Easterday, S., Stave, M., Allassonnière-Tang, M., & Seifart, F. (2021). Syllable complexity and morphological synthesis: A well-motivated positive complexity correlation across subdomains. Frontiers in Psychology, 12, 583.
Kisler, T., Reichel, U. D., & Schiel, F. (2017). Multilingual processing of speech via web services. Computer Speech & Language, 45, 326–347.
Levshina, N. (2021). Corpus-based typology: Applications, challenges and some solutions. Linguistic Typology. https://doi.org/10.1515/lingty-2020-0118
Levshina, N., & Moran, S. (2021). Efficiency in human languages: Corpus evidence for universal principles. Linguistics Vanguard, 7(s3).
Meister, C., Pimentel, T., Haller, P., Jäger, L., Cotterell, R., & Levy, R. (2021). Revisiting the uniform information density hypothesis. arXiv:2109.11635.
Paschen, L., Delafontaine, F., Draxler, C., Fuchs, S., Stave, M., & Seifart, F. (2020). Building a time-aligned cross-linguistic reference corpus from language documentation data (DoReCo). Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), 2657–2666.
Pimentel, T., Meister, C., Salesky, E., Teufel, S., Blasi, D., & Cotterell, R. (2021). A surprisal-duration trade-off across and within the world's languages. arXiv:2109.15000.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
Schnell, S., & Schiborr, N. N. (2022). Crosslinguistic corpus studies in linguistic typology. Annual Review of Linguistics, 8, 171–191.
Schnell, S., Haig, G., & Seifart, F. (2021). The role of language documentation in corpus-based typology. In G. Haig, S. Schnell & F. Seifart (Eds.), Doing corpus-based typology with spoken language corpora: State of the art (Language Documentation & Conservation Special Publication 25), 1–28. Honolulu: University of Hawai'i Press. http://hdl.handle.net/10125/74656
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.
Wilcox, E. G., Gauthier, J., Hu, J., Qian, P., & Levy, R. (2020). On the predictive power of neural language models for human real-time comprehension behavior. arXiv:2006.01912.
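Returning to the surprisal computation referenced above: the following is a minimal sketch, assuming the Hugging Face transformers library and an off-the-shelf causal checkpoint ("gpt2" and the example sentence are placeholders, since the abstract names no specific toolkit or model). It extracts per-token surprisal, the quantity used here to approximate linguistic information.

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumption: "gpt2" stands in for whatever causal LM is chosen per language.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(text):
        # Per-token surprisal -log2 p(token | preceding context), in bits.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # Position i predicts token i+1: align logits with shifted targets.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        bits = -log_probs[torch.arange(len(targets)), targets] / math.log(2)
        return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()),
                        bits.tolist()))

    print(token_surprisals("Languages distribute information differently."))

Summing these surprisals over a time-aligned span and dividing by its duration yields an information rate directly comparable to the entropy-based measure sketched earlier.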

Domains

Linguistics
No file deposited

Dates and versions

hal-03900052, version 1 (19-01-2023)

Identifiers

  • HAL Id: hal-03900052, version 1

Cite

Matthew Stave, François Delafontaine, François Pellegrino, Christophe Coupé. How comparable are languages across linguistic corpora? Some methodological thoughts. ALT 2022 - 14th Conference of the Association for Linguistic Typology, Dec 2022, Austin, United States. ⟨hal-03900052⟩
57 views
5 downloads
