Multiple descent cost competition: Restorable self-organization and multimedia information processing

Yasuo Matsuyama

    Research output: Contribution to journal › Article

    3 Citations (Scopus)

    Abstract

    Multiple descent cost competition is a composition of learning phases for minimizing a given measure of total performance, i.e., cost. If these phases are mutually heterogeneous, the total learning algorithm shows a variety of extraordinary abilities, especially with regard to multimedia information processing. In the first phase of descent cost learning, elements of the source data are grouped; simultaneously, a weight vector for minimal learning (i.e., a winner) is found. Then, the winner and its partners are updated for further cost reduction. Two classes of self-organizing feature maps are thereby generated. One is called a grouping feature map, which partitions the source data; the other is an ordinary weight vector feature map. The grouping feature map, together with the winners, retains most of the source data information and can therefore support a high-quality approximation of the original data, an ability that traditional weight vector feature maps lack. Another important capability of the grouping feature map is that it can change its shape: the grouping pattern can accept external directions and metamorphose accordingly. In the text, the total algorithm of multiple descent cost competition is explained first, with image processing concepts introduced to assist the description. A still image is then data-compressed (DC). Next, the restored image is morphed using the grouping feature map, following directions given by an external intelligence. Finally, frames are interpolated to complete animation coding (AC). Thus, multiple descent cost competition bridges "DC to AC." Examples of multimedia processing on virtual digital movies are given.
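
    The two-phase procedure the abstract describes (group the data elements and find winners, then update each winner and its partners for further cost reduction) can be made concrete with a small sketch. The Python code below is a hypothetical, minimal illustration under simplifying assumptions: Euclidean cost, a one-dimensional unit lattice defining each winner's partners, and linear blending for frame interpolation. All function and parameter names are illustrative and are not taken from the paper.

    import numpy as np

    def multiple_descent_competition(data, n_units=16, n_epochs=20,
                                     lr=0.1, neighbor_radius=1):
        """Illustrative two-phase descent cost competition (a sketch).

        Phase 1 groups each source element with its minimum-cost weight
        vector (the winner), yielding a grouping feature map, i.e., a
        partition of the data.  Phase 2 updates the winner and its
        neighboring units (its partners) to reduce the total cost
        further, yielding an ordinary weight vector feature map.
        """
        rng = np.random.default_rng(0)
        weights = data[rng.choice(len(data), n_units, replace=False)].copy()

        for _ in range(n_epochs):
            # Phase 1: grouping -- assign every element to its winner.
            dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
            grouping = dists.argmin(axis=1)          # the grouping feature map

            # Phase 2: descent update of each winner and its partners
            # on a 1-D unit lattice (a simplifying assumption here).
            for k in range(n_units):
                members = data[grouping == k]
                if len(members) == 0:
                    continue
                target = members.mean(axis=0)
                for j in range(max(0, k - neighbor_radius),
                               min(n_units, k + neighbor_radius + 1)):
                    weights[j] += lr * (target - weights[j])

        return weights, grouping

    def interpolate_frames(weights_src, weights_dst, grouping, n_frames=8):
        """Hypothetical frame interpolation: blend winner vectors
        linearly between a source map and an externally directed
        (morphed) target map, restoring one frame per blend."""
        frames = []
        for t in np.linspace(0.0, 1.0, n_frames):
            blended = (1 - t) * weights_src + t * weights_dst
            frames.append(blended[grouping])  # restore elements from winners
        return frames

    if __name__ == "__main__":
        data = np.random.default_rng(1).random((200, 3))
        weights, grouping = multiple_descent_competition(data)
        restored = weights[grouping]
        print("mean restoration error:",
              np.linalg.norm(data - restored, axis=1).mean())

    Restoring each element from its winner is what makes the organization "restorable" in this reading: the grouping map plus the winner set approximates the source, and blending winner sets between a compressed still and a morphed target produces in-between frames, a miniature of the "DC to AC" bridge. How the paper actually performs the morphing and interpolation is not specified in the abstract, so the two functions above should be read only as a plausible reconstruction.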

    Original language: English
    Pages (from-to): 106-122
    Number of pages: 17
    Journal: IEEE Transactions on Neural Networks
    Volume: 9
    Issue number: 1
    DOIs: 10.1109/72.655033
    Publication status: Published - 1998


    Keywords

    • Competitive learning
    • Coordination with external intelligence
    • Data compression
    • Grouping feature map
    • Image processing
    • Multiple descent cost
    • Self-organization
    • Standard pattern set
    • Vector quantization
    • Virtual movie generation

    ASJC Scopus subject areas

    • Control and Systems Engineering
    • Theoretical Computer Science
    • Electrical and Electronic Engineering
    • Artificial Intelligence
    • Computational Theory and Mathematics
    • Hardware and Architecture

    Cite this

    Matsuyama, Yasuo. Multiple descent cost competition: Restorable self-organization and multimedia information processing. In: IEEE Transactions on Neural Networks, Vol. 9, No. 1, 1998, p. 106-122.

    @article{8f22d099562d45c7a1eefbc96b19ed4a,
    title = "Multiple descent cost competition: Restorable self-organization and multimedia information processing",
    author = "Yasuo Matsuyama",
    journal = "IEEE Transactions on Neural Networks",
    volume = "9",
    number = "1",
    pages = "106--122",
    year = "1998",
    doi = "10.1109/72.655033",
    issn = "1045-9227",
    publisher = "IEEE",
    language = "English",
    }
