A study of analogical density in various corpora at various granularity

Rashel Fam*, Yves Lepage

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we examine the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than on the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. The corpora are tokenised using several tokenisation schemes: character, sub-word and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding (BPE). The results show that the corpus with a higher Type-Token Ratio tends to have a higher analogical density. We also observe that masking tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this no longer holds when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density.
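The two quantities at the heart of the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses a well-known necessary condition for a formal (form-level) proportional analogy A : B :: C : D, namely that every character occurs as often in A and D together as in B and C together, and counts candidate analogies among a tiny toy set of sentences. The sentence data and the pair-of-pairs enumeration are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def may_be_analogy(a, b, c, d):
    """Character-count necessary condition for the formal
    proportional analogy A : B :: C : D."""
    return Counter(a) + Counter(d) == Counter(b) + Counter(c)

def type_token_ratio(tokens):
    """Number of distinct tokens divided by total number of tokens."""
    return len(set(tokens)) / len(tokens)

# Toy corpus (illustrative only).
sentences = ["I walk", "I walked", "you walk", "you walked"]

# Count candidate analogies over unordered pairs of sentence pairs.
candidates = 0
for (a, b), (c, d) in combinations(combinations(sentences, 2), 2):
    if may_be_analogy(a, b, c, d):
        candidates += 1

# "I walk" : "I walked" :: "you walk" : "you walked" passes the test,
# and so does the orthogonal arrangement of the same four sentences.
print(candidates)  # → 2
```

Dividing such a count by the number of quadruples examined gives a density in the spirit of the paper; the actual study works at character, sub-word and word granularities rather than on raw strings only.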

Original language: English
Article number: 314
Journal: Information (Switzerland)
Volume: 12
Issue number: 8
DOIs
Publication status: Published - Aug 2021

Keywords

  • Automatic acquisition
  • Language productivity
  • Proportional analogy

ASJC Scopus subject areas

  • Information Systems
