LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?

Marc Pàmies, Emily Öhman, Kaisla Kajava, Jörg Tiedemann

Research output: Conference contribution

1 citation (Scopus)

Abstract

This paper presents the different models submitted by the LT@Helsinki team for the SemEval 2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used the so-called Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID and SOLID datasets. The results show that offensive tweet classification is one of several language-based tasks where BERT can achieve state-of-the-art results.

Original language: English
Host publication title: 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings
Editors: Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Publisher: International Committee for Computational Linguistics
Pages: 1569-1575
Number of pages: 7
ISBN (electronic): 9781952148316
Publication status: Published - 2020
Externally published: Yes
Event: 14th International Workshops on Semantic Evaluation, SemEval 2020 - Barcelona, Spain
Duration: 12 Dec 2020 - 13 Dec 2020

Publication series

Name: 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings

Conference

Conference: 14th International Workshops on Semantic Evaluation, SemEval 2020
Country/Territory: Spain
City: Barcelona
Period: 12/12/20 - 13/12/20

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Computer Science Applications
