SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders

Hiroaki Tsuyuki, Tetsuji Ogawa, Tetsunori Kobayashi, Yoshihiko Hayashi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A sentence encoder that can be readily employed in many applications, or effectively fine-tuned to a specific task or domain, is in high demand. Such an encoding technique would reach a broader range of applications if it could handle almost arbitrary word-sequences. This paper proposes a training regime for encoders that can effectively deal with word-sequences of various kinds: complete sentences as well as incomplete sentences and phrases. The proposed regime is distinguished from existing methods in that it first extracts word-sequences of arbitrary length from an unlabeled corpus of ordered or unordered sentences; an encoding model is then trained to predict the adjacency between these word-sequences. Here, an unordered sentence denotes an individual sentence without neighboring contextual sentences. In some NLP tasks, such as sentence classification, the semantic content of an isolated sentence must be properly encoded. Further, by employing largely unconstrained word-sequences extracted from a large corpus, rather than relying heavily on complete sentences, the training is expected to cover linguistic expressions of various kinds. This property enhances the applicability of the resulting word-sequence/sentence encoders. Experimental results on supervised evaluation tasks show that the trained encoder performs comparably to existing encoders, while outperforming them on unsupervised evaluation tasks involving incomplete sentences and phrases.
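The core of the regime described above is to turn an unlabeled corpus into adjacency-labeled pairs of word-sequences. The following is a minimal sketch of that data-preparation step, not the paper's actual implementation: the function names (`contiguous_sequences`, `adjacency_examples`) and the negative-sampling scheme (drawing a distractor sequence from a random sentence) are illustrative assumptions, since the abstract does not specify the exact sampling procedure.

```python
import random


def contiguous_sequences(sentence, min_len=2, max_len=5):
    """Extract contiguous word-sequences of bounded length from one
    sentence; these include incomplete sentences and phrases.
    Hypothetical helper, not from the paper."""
    words = sentence.split()
    return [tuple(words[i:i + n])
            for i in range(len(words))
            for n in range(min_len, max_len + 1)
            if i + n <= len(words)]


def adjacency_examples(sentences, rng):
    """Build (seq_a, seq_b, label) training examples: label 1 if the two
    word-sequences are adjacent within a sentence, 0 for a distractor
    sequence drawn from a randomly chosen sentence (assumed negative
    sampling; the paper's exact scheme may differ)."""
    examples = []
    for sent in sentences:
        words = sent.split()
        for split in range(1, len(words)):
            left = tuple(words[:split])
            right = tuple(words[split:])
            examples.append((left, right, 1))        # adjacent pair
            other = rng.choice(sentences)            # random sentence
            distractor = rng.choice(contiguous_sequences(other))
            examples.append((left, distractor, 0))   # non-adjacent pair
    return examples


if __name__ == "__main__":
    rng = random.Random(0)
    corpus = ["the cat sat on the mat",
              "dogs chase cats in the park"]
    for a, b, label in adjacency_examples(corpus, rng)[:4]:
        print(label, a, b)
```

An encoder would then be trained on these pairs with a binary adjacency-prediction objective; because the extracted sequences are largely unconstrained, the encoder sees phrases and sentence fragments during training, not only complete sentences.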

Original language: English
Title of host publication: Computational Linguistics - 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, Revised Selected Papers
Editors: Le-Minh Nguyen, Satoshi Tojo, Xuan-Hieu Phan, Kôiti Hasida
Publisher: Springer
Pages: 43-55
Number of pages: 13
ISBN (Print): 9789811561672
DOI: 10.1007/978-981-15-6168-9_4
Publication status: Published - 2020
Event: 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019 - Hanoi, Viet Nam
Duration: 2019 Oct 11 - 2019 Oct 13

Publication series

Name: Communications in Computer and Information Science
Volume: 1215 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019
Country: Viet Nam
City: Hanoi
Period: 2019/10/11 - 2019/10/13

Keywords

  • Semantic tasks
  • Sentence encoding
  • Unsupervised representation learning

ASJC Scopus subject areas

  • Computer Science (all)
  • Mathematics (all)


Cite this

    Tsuyuki, H., Ogawa, T., Kobayashi, T., & Hayashi, Y. (2020). SemSeq: A Regime for Training Widely-Applicable Word-Sequence Encoders. In L-M. Nguyen, S. Tojo, X-H. Phan, & K. Hasida (Eds.), Computational Linguistics - 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, Revised Selected Papers (pp. 43-55). (Communications in Computer and Information Science; Vol. 1215 CCIS). Springer. https://doi.org/10.1007/978-981-15-6168-9_4