Similarity is not entailment - Jointly learning similarity transformations for textual entailment

Ken Ichi Yokote, Danushka Bollegala, Mitsuru Ishizuka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Predicting entailment between two given texts is an important task on which the performance of numerous NLP tasks, such as question answering, text summarization, and information extraction, depends. The degree to which two texts are similar has been used extensively as a key feature in much previous work on predicting entailment. However, using similarity scores directly, without proper transformations, results in suboptimal performance. Given a set of lexical similarity measures, we propose a method that jointly learns both (a) a set of non-linear transformation functions for those similarity measures and (b) the optimal non-linear combination of those transformation functions to predict textual entailment. Our method consistently outperforms numerous baselines, reporting a micro-averaged F-score of 46.48 on the RTE-7 benchmark dataset. The proposed method is ranked 2nd among the 33 entailment systems that participated in RTE-7, demonstrating its competitiveness over numerous other entailment approaches. Although our method is statistically comparable to the current state of the art, it requires fewer external knowledge resources.
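The core idea in the abstract — jointly learning per-measure non-linear transformations and a non-linear combination of them — can be sketched as a tiny end-to-end model. The following is a minimal illustration, not the authors' implementation: it assumes sigmoid transforms and a logistic combiner trained by gradient descent on synthetic similarity scores, where the paper's actual transformation family and training procedure may differ.

```python
# Sketch: jointly learn (a) a sigmoid transform per similarity measure and
# (b) a logistic combination of the transformed scores, trained end-to-end.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 "text pairs", each with 3 lexical similarity scores in [0, 1].
# The entailment label depends non-linearly on the raw scores, so a plain
# linear combination of untransformed scores fits it poorly.
X = rng.random((200, 3))
y = ((X[:, 0] > 0.7) | ((X[:, 1] > 0.5) & (X[:, 2] > 0.5))).astype(float)

# Per-measure transform f_i(x) = sigmoid(a_i * x + b_i);
# prediction p = sigmoid(w . f(x) + c). All parameters learned jointly.
a = np.ones(3); b = np.zeros(3); w = rng.normal(size=3); c = 0.0
lr = 0.5

for _ in range(2000):
    h = sigmoid(a * X + b)      # transformed similarity scores, shape (200, 3)
    p = sigmoid(h @ w + c)      # predicted entailment probability, shape (200,)
    g = (p - y) / len(y)        # gradient of mean logistic loss w.r.t. logits
    # Backpropagate jointly through the combiner and the transforms.
    w -= lr * (h.T @ g)
    c -= lr * g.sum()
    gh = np.outer(g, w) * h * (1 - h)   # gradient w.r.t. transform pre-activations
    a -= lr * (gh * X).sum(axis=0)
    b -= lr * gh.sum(axis=0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the transforms and the combiner share one loss, a steep per-measure sigmoid (a thresholding transform) and its weight in the combination are fitted together rather than tuned separately, which is the distinction the title draws between raw similarity and entailment.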

Original language: English
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Pages: 1720-1726
Number of pages: 7
Volume: 2
Publication status: Published - 2012
Externally published: Yes
Event: 26th AAAI Conference on Artificial Intelligence and the 24th Innovative Applications of Artificial Intelligence Conference, AAAI-12 / IAAI-12 - Toronto, ON
Duration: 2012 Jul 22 to 2012 Jul 26


ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Yokote, K. I., Bollegala, D., & Ishizuka, M. (2012). Similarity is not entailment - Jointly learning similarity transformations for textual entailment. In Proceedings of the National Conference on Artificial Intelligence (Vol. 2, pp. 1720-1726).

@inproceedings{7b4c621a47d842cdb41c3668b04e66f5,
title = "Similarity is not entailment - Jointly learning similarity transformations for textual entailment",
abstract = "Predicting entailment between two given texts is an important task on which the performance of numerous NLP tasks, such as question answering, text summarization, and information extraction, depends. The degree to which two texts are similar has been used extensively as a key feature in much previous work on predicting entailment. However, using similarity scores directly, without proper transformations, results in suboptimal performance. Given a set of lexical similarity measures, we propose a method that jointly learns both (a) a set of non-linear transformation functions for those similarity measures and (b) the optimal non-linear combination of those transformation functions to predict textual entailment. Our method consistently outperforms numerous baselines, reporting a micro-averaged F-score of 46.48 on the RTE-7 benchmark dataset. The proposed method is ranked 2nd among the 33 entailment systems that participated in RTE-7, demonstrating its competitiveness over numerous other entailment approaches. Although our method is statistically comparable to the current state of the art, it requires fewer external knowledge resources.",
author = "Yokote, {Ken Ichi} and Danushka Bollegala and Mitsuru Ishizuka",
year = "2012",
language = "English",
isbn = "9781577355687",
volume = "2",
pages = "1720--1726",
booktitle = "Proceedings of the National Conference on Artificial Intelligence",

}
