Optimization of overtraining and overgeneration

Goutam Chakraborty, Norio Shiratori, Shoichi Noguchi

    Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

    1 Citation (Scopus)

    Abstract

    The task of any supervised classifier is to assign optimum boundaries in the input space for the different class memberships. This is done using information from the available set of known samples. The mapping from sample position in the input space to sample class is then used to classify unknown samples. The available set of known samples is generally finite, and a boundary defined exactly by that finite sample set is usually not the best boundary for classifying new, unknown samples: we end up with an overfitted boundary, i.e., an overtrained classifier, resulting in poor classification of new samples. We therefore need to smooth the boundary in order to generalize to the unknown samples. But to what extent? If we smooth the boundary too much, we will not be exploiting all the class information contained in the known sample set, and the classification result will again be poor. Depending on the number of known samples and the dimension of the actual solution (which, of course, is not known in any practical problem), there is a certain amount of smoothness that is optimum for generalization. In this paper we focus on this problem and introduce some practical ways to arrive at optimum smoothness for a single-hidden-layer neural network classifier using radial basis functions.
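
    To make the trade-off concrete, below is a minimal sketch in Python, not the authors' procedure (which the abstract does not spell out), of a single-hidden-layer radial-basis-function classifier. The Gaussian width sigma plays the role of the smoothness knob: tiny widths reproduce the training samples exactly (overtraining), large widths wash out class information (over-smoothing). The toy data, the choice of training samples as centers, and the ridge-regularized least-squares fit are all assumptions made for illustration.

    # Illustrative sketch only, not the paper's algorithm: an RBF-network
    # classifier whose kernel width sigma controls decision-boundary smoothness.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        # Two overlapping Gaussian classes in 2-D; labels are -1 / +1.
        x0 = rng.normal([-1.0, 0.0], 1.0, size=(n, 2))
        x1 = rng.normal([+1.0, 0.0], 1.0, size=(n, 2))
        return np.vstack([x0, x1]), np.hstack([-np.ones(n), np.ones(n)])

    def rbf_design(X, centers, sigma):
        # Gaussian activations phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2)).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def fit_output_weights(X, y, sigma, ridge=1e-6):
        # Centers at the training samples; output weights by ridge-regularized
        # least squares -- a standard way to train such a network, assumed here.
        Phi = rbf_design(X, X, sigma)
        return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(X)), Phi.T @ y)

    def accuracy(X, y, centers, w, sigma):
        return float(np.mean(np.sign(rbf_design(X, centers, sigma) @ w) == y))

    Xtr, ytr = make_data(50)    # the small, finite "known sample set"
    Xte, yte = make_data(500)   # stand-in for the unknown samples

    # Sweep the width: small sigma overfits, large sigma over-smooths,
    # and some intermediate value generalizes best.
    for sigma in (0.05, 0.2, 0.5, 1.0, 2.0, 5.0):
        w = fit_output_weights(Xtr, ytr, sigma)
        print(f"sigma={sigma:4.2f}  train={accuracy(Xtr, ytr, Xtr, w, sigma):.2f}"
              f"  held-out={accuracy(Xte, yte, Xtr, w, sigma):.2f}")

    Running the sweep typically shows training accuracy near 1.0 at the smallest widths while held-out accuracy peaks at some intermediate sigma, which is precisely the optimum-smoothness point the paper sets out to locate by practical means.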

    Original language: English
    Title of host publication: Proceedings of the International Joint Conference on Neural Networks
    Place of publication: Piscataway, NJ, United States
    Publisher: Publ by IEEE
    Pages: 2257-2262
    Number of pages: 6
    Volume: 3
    ISBN (Print): 0780314212, 9780780314214
    Publication status: Published - 1993
    Event: Proceedings of 1993 International Joint Conference on Neural Networks. Part 1 (of 3) - Nagoya, Japan
    Duration: 1993 Oct 25 to 1993 Oct 29


    Fingerprint

    Classifiers
    Neural networks

    ASJC Scopus subject areas

    • Engineering (all)

    Cite this

    Chakraborty, G., Shiratori, N., & Noguchi, S. (1993). Optimization of overtraining and overgeneration. In Proceedings of the International Joint Conference on Neural Networks (Vol. 3, pp. 2257-2262). Piscataway, NJ, United States: Publ by IEEE.
