TY - GEN
T1 - Leveraging Pre-trained Language Model for Speech Sentiment Analysis
AU - Shon, Suwon
AU - Brusco, Pablo
AU - Pan, Jing
AU - Han, Kyu J.
AU - Watanabe, Shinji
N1 - Publisher Copyright:
Copyright ©2021 ISCA.
PY - 2021
Y1 - 2021
N2 - In this paper, we explore the use of pre-trained language models to learn sentiment information of written texts for speech sentiment analysis. First, we investigate how useful a pre-trained language model would be in a 2-step pipeline approach employing Automatic Speech Recognition (ASR) and transcripts-based sentiment analysis separately. Second, we propose a pseudo label-based semi-supervised training strategy using a language model on an end-to-end speech sentiment approach to take advantage of a large but unlabeled speech dataset for training. Although spoken and written texts have different linguistic characteristics, they can complement each other in understanding sentiment. Therefore, the proposed system can not only model acoustic characteristics to bear sentiment-specific information in speech signals, but also learn latent information to carry sentiments in the text representation. In our experiments, we demonstrate that the proposed approaches improve F1 scores consistently compared to systems without a language model. Moreover, we also show that the proposed framework can reduce human supervision by 65% by leveraging a large amount of data without human sentiment annotation, and boost performance in a low-resource condition where human sentiment annotations are not sufficiently available.
AB - In this paper, we explore the use of pre-trained language models to learn sentiment information of written texts for speech sentiment analysis. First, we investigate how useful a pre-trained language model would be in a 2-step pipeline approach employing Automatic Speech Recognition (ASR) and transcripts-based sentiment analysis separately. Second, we propose a pseudo label-based semi-supervised training strategy using a language model on an end-to-end speech sentiment approach to take advantage of a large but unlabeled speech dataset for training. Although spoken and written texts have different linguistic characteristics, they can complement each other in understanding sentiment. Therefore, the proposed system can not only model acoustic characteristics to bear sentiment-specific information in speech signals, but also learn latent information to carry sentiments in the text representation. In our experiments, we demonstrate that the proposed approaches improve F1 scores consistently compared to systems without a language model. Moreover, we also show that the proposed framework can reduce human supervision by 65% by leveraging a large amount of data without human sentiment annotation, and boost performance in a low-resource condition where human sentiment annotations are not sufficiently available.
KW - End-to-end speech recognition
KW - Pre-trained language model
KW - Speech sentiment analysis
UR - http://www.scopus.com/inward/record.url?scp=85119298780&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119298780&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2021-1723
DO - 10.21437/Interspeech.2021-1723
M3 - Conference contribution
AN - SCOPUS:85119298780
T3 - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SP - 566
EP - 570
BT - 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021
PB - International Speech Communication Association
T2 - 22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021
Y2 - 30 August 2021 through 3 September 2021
ER -