TY - CONF

T1 - On uncertain logic based upon information theory

AU - Matsushima, Toshiyasu

AU - Suzuki, Joe

AU - Inazumi, Hiroshige

AU - Hirasawa, Shigeichi

PY - 1988/12/1

Y1 - 1988/12/1

N2 - Summary form only given, as follows. The authors propose a semantic generalized predicate logic, based on probability theory and information theory, that provides theoretical methods for processing uncertain knowledge in artificial intelligence (AI) applications. The basic concept of the proposed logic is that the interpretation of a well-formed formula (wff) containing uncertainty is represented by a conditional probability. Under this conditional-probability interpretation model, many problems that cannot be handled by conventional AI methods can be explained in terms of information theory. From the definition, the self-information of a wff, the mutual information between a pair of predicates, and the information gain obtained by reasoning can be derived. Next, reasoning rules are evaluated using the information gain, which expresses the difference between the prior information and the posterior information of the consequent wff. Finally, the authors give a new calculation method for reasoning that yields the most unbiased probability estimate given the available evidence, and prove that the proposed method is optimal under the principle of maximum entropy, subject to the given marginal probability conditions.

AB - Summary form only given, as follows. The authors propose a semantic generalized predicate logic, based on probability theory and information theory, that provides theoretical methods for processing uncertain knowledge in artificial intelligence (AI) applications. The basic concept of the proposed logic is that the interpretation of a well-formed formula (wff) containing uncertainty is represented by a conditional probability. Under this conditional-probability interpretation model, many problems that cannot be handled by conventional AI methods can be explained in terms of information theory. From the definition, the self-information of a wff, the mutual information between a pair of predicates, and the information gain obtained by reasoning can be derived. Next, reasoning rules are evaluated using the information gain, which expresses the difference between the prior information and the posterior information of the consequent wff. Finally, the authors give a new calculation method for reasoning that yields the most unbiased probability estimate given the available evidence, and prove that the proposed method is optimal under the principle of maximum entropy, subject to the given marginal probability conditions.

UR - http://www.scopus.com/inward/record.url?scp=0024125474&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024125474&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:0024125474

SP - 133

EP - 134

ER -