Automatically solving math word problems is a critical task in natural language processing. Recent models have hit a performance bottleneck and require more high-quality training data. We propose a novel data augmentation method that reverses the mathematical logic of math word problems to produce new high-quality problems and to introduce new knowledge points that benefit learning mathematical reasoning logic. We apply the augmented data to two SOTA math word problem solving models and compare our results with a strong data augmentation baseline. Experimental results show the effectiveness of our approach (we release our code and data at https://github.com/yiyunya/RODA).
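The core idea of reversing a problem's mathematical logic can be illustrated with a toy sketch. This is only an assumption about the general technique, not the paper's actual procedure (which operates on full problem texts and their equations): given a forward equation `x = a op b`, we treat a known quantity as the new unknown and invert the operation, yielding a new question whose answer is the original known value.

```python
def reverse_equation(a: float, op: str, b: float):
    """Toy reverse-operation augmentation (illustrative assumption,
    not the RODA implementation).

    Given a forward equation x = a op b, return the answer x and a
    reversed equation that asks for `a` instead: a = x inv_op b.
    """
    forward = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}
    inverse = {"+": "-", "-": "+", "*": "/", "/": "*"}
    x = forward[op]
    reversed_eq = f"a = {x} {inverse[op]} {b}"
    return x, reversed_eq

# Forward problem:  "Tom has 3 apples and buys 2 more; how many now?"  -> x = 3 + 2
# Reversed problem: "Tom has 5 apples after buying 2; how many at first?" -> a = 5 - 2
x, eq = reverse_equation(3, "+", 2)
```

In a real augmentation pipeline the problem text would also be rewritten to match the reversed equation, so that the new (text, equation) pair is a valid training example.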
Journal: IEEE/ACM Transactions on Audio Speech and Language Processing
Publication status: Published - 2022
ASJC Scopus subject areas
- Computer Science (miscellaneous)