On polynomial-time learnability in the limit of strictly deterministic automata

Research output: Contribution to journal › Article

37 Citations (Scopus)

Abstract

This paper deals with the polynomial-time learnability of language classes in the limit from positive data, and studies the learning problem for a subclass of deterministic finite automata (DFAs), called strictly deterministic automata (SDAs), in this framework. We first discuss a difficulty with Pitt's definition in the setting of learning in the limit from positive data, by showing that no class of languages with an infinite descending chain property is polynomial-time learnable in the limit from positive data. We then propose new definitions of polynomial-time learnability in the limit from positive data and show that, under these definitions, the class of SDAs is iteratively and consistently polynomial-time learnable in the limit from positive data. In particular, we present a learning algorithm that learns any SDA M in the limit from positive data and satisfies the following properties: (i) the time to update a conjecture is at most O(ℓm); (ii) the number of implicit prediction errors is at most O(ℓn), where ℓ is the maximum length of the positive data provided, m is the alphabet size of M, and n is the size of M; (iii) each conjecture is computed from only the previous conjecture and the current example; and (iv) at every stage the conjecture is consistent with the sample set seen so far. This is in marked contrast to the fact that the class of all DFAs is neither learnable in the limit from positive data nor polynomial-time learnable in the limit.
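Properties (iii) and (iv) describe an iterative, consistent learning protocol: each conjecture is rebuilt from only the previous conjecture and the current example, and every conjecture must accept all positive data seen so far. The sketch below is not the paper's SDA algorithm; it is a minimal Python illustration of that protocol on a deliberately simple class, the languages of the form S* for a subset S of the alphabet, where an "implicit prediction error" is counted whenever the current conjecture fails to accept the incoming example. All names in the sketch are illustrative assumptions.

    from typing import FrozenSet, Iterable, Iterator, Set, Tuple

    def iterative_learner(positive_data: Iterable[str]) -> Iterator[Tuple[FrozenSet[str], int]]:
        """Learn a language of the form S* (S a subset of the alphabet)
        in the limit from positive data.  Illustration only; not the
        SDA algorithm of the paper."""
        conjecture: Set[str] = set()   # hypothesis: the target language is conjecture*
        implicit_errors = 0
        for w in positive_data:
            # Implicit prediction error: the current conjecture does not
            # accept the new positive example.
            if not set(w) <= conjecture:
                implicit_errors += 1
            # Property (iii): the new conjecture depends only on the
            # previous conjecture and the current example.
            conjecture |= set(w)
            # Property (iv): conjecture* now contains every example seen,
            # so the conjecture is consistent with the sample so far.
            yield frozenset(conjecture), implicit_errors

    if __name__ == "__main__":
        # Positive presentation of the target language {a, b}*.
        for hypothesis, errors in iterative_learner(["ab", "a", "ba", "b", "abba"]):
            print(sorted(hypothesis), "implicit prediction errors so far:", errors)

In this toy setting each error enlarges the conjecture by at least one symbol, so the number of implicit prediction errors is bounded by the alphabet size, which mirrors the spirit of the paper's O(ℓn) bound for SDAs; the actual construction and analysis for SDAs are given in the paper.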

Original language: English
Pages (from-to): 153-179
Number of pages: 27
Journal: Machine Learning
Volume: 19
Issue number: 2
DOIs: 10.1007/BF01007463
Publication status: Published - May 1995
Externally published: Yes

Keywords

  • iterative learning
  • polynomial-time learnability in the limit
  • strictly deterministic automata

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Artificial Intelligence

Cite this

On polynomial-time learnability in the limit of strictly deterministic automata. / Yokomori, Takashi.

In: Machine Learning, Vol. 19, No. 2, 05.1995, p. 153-179.

@article{049505e2954b449f9e328aecec434872,
  title     = "On polynomial-time learnability in the limit of strictly deterministic automata",
  author    = "Takashi Yokomori",
  journal   = "Machine Learning",
  issn      = "0885-6125",
  publisher = "Springer Netherlands",
  volume    = "19",
  number    = "2",
  pages     = "153--179",
  year      = "1995",
  month     = may,
  doi       = "10.1007/BF01007463",
  language  = "English",
  keywords  = "iterative learning, polynomial-time learnability in the limit, strictly deterministic automata",
}
