TY - CONF
T1 - Transformer ASR with Contextual Block Processing
AU - Tsunoo, Emiru
AU - Kashiwagi, Yosuke
AU - Kumakura, Toshiyuki
AU - Watanabe, Shinji
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - The Transformer self-attention network has recently shown promising performance as an alternative to recurrent neural networks (RNNs) in end-to-end (E2E) automatic speech recognition (ASR) systems. However, the Transformer has a drawback in that the entire input sequence is required to compute self-attention. In this paper, we propose a new block processing method for the Transformer encoder by introducing a context-aware inheritance mechanism. An additional context embedding vector handed over from the previously processed block helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes. We introduce a novel mask technique to implement the context inheritance and to train the model efficiently. Evaluations on the Wall Street Journal (WSJ), Librispeech, VoxForge Italian, and AISHELL-1 Mandarin speech recognition datasets show that our proposed contextual block processing method consistently outperforms naive block processing. Furthermore, the attention weight tendency of each layer is analyzed to clarify how the added contextual inheritance mechanism models the global information.
AB - The Transformer self-attention network has recently shown promising performance as an alternative to recurrent neural networks (RNNs) in end-to-end (E2E) automatic speech recognition (ASR) systems. However, the Transformer has a drawback in that the entire input sequence is required to compute self-attention. In this paper, we propose a new block processing method for the Transformer encoder by introducing a context-aware inheritance mechanism. An additional context embedding vector handed over from the previously processed block helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes. We introduce a novel mask technique to implement the context inheritance and to train the model efficiently. Evaluations on the Wall Street Journal (WSJ), Librispeech, VoxForge Italian, and AISHELL-1 Mandarin speech recognition datasets show that our proposed contextual block processing method consistently outperforms naive block processing. Furthermore, the attention weight tendency of each layer is analyzed to clarify how the added contextual inheritance mechanism models the global information.
KW - Block Processing
KW - End-to-End
KW - Self-Attention Network
KW - Speech Recognition
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85081565959&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081565959&partnerID=8YFLogxK
U2 - 10.1109/ASRU46091.2019.9003749
DO - 10.1109/ASRU46091.2019.9003749
M3 - Conference contribution
AN - SCOPUS:85081565959
T3 - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
SP - 427
EP - 433
BT - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019
Y2 - 15 December 2019 through 18 December 2019
ER -