Learning of labeling room space for mobile robots based on visual motor experience

Tatsuro Yamada, Saki Ito, Hiroaki Arie, Tetsuya Ogata

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    1 Citation (Scopus)

    Abstract

    A model was developed to allow a mobile robot to label the areas of a typical domestic room using raw sequential visual and motor data; no explicit location information was provided, and no maps were constructed. The model comprised a deep autoencoder and a recurrent neural network. The model was demonstrated to (1) learn to correctly label areas of different shapes and sizes, (2) adapt to changes in room shape and to rearrangement of items in the room, and (3) attribute different labels to the same area when it was approached from different angles. Analysis of the model's internal representations showed that a topological structure corresponding to the room structure self-organized as the trajectory of the network's internal activations.
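
    The architecture described in the abstract (a deep autoencoder compressing raw vision, feeding a recurrent network that integrates motor commands and emits area labels) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, weight initializations, and the single-layer stand-ins for the "deep" autoencoder and the recurrent network are assumptions, and the weights are untrained.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions (not from the paper)
    IMG, MOTOR, FEAT, HID, LABELS = 64, 2, 8, 16, 4

    # "Deep" autoencoder stood in for by one linear encoder/decoder pair;
    # the decoder would only matter during autoencoder pre-training.
    W_enc = rng.normal(0, 0.1, (FEAT, IMG))
    W_dec = rng.normal(0, 0.1, (IMG, FEAT))

    # Elman-style recurrent layer over [visual feature ; motor command]
    W_in = rng.normal(0, 0.1, (HID, FEAT + MOTOR))
    W_rec = rng.normal(0, 0.1, (HID, HID))
    W_out = rng.normal(0, 0.1, (LABELS, HID))

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def label_sequence(images, motors):
        """Return one label distribution per time step, plus the hidden-state
        trajectory -- the quantity whose analysis revealed a self-organized
        topology corresponding to the room structure."""
        h = np.zeros(HID)
        probs, trajectory = [], []
        for img, mot in zip(images, motors):
            feat = np.tanh(W_enc @ img)  # autoencoder bottleneck feature
            h = np.tanh(W_in @ np.concatenate([feat, mot]) + W_rec @ h)
            probs.append(softmax(W_out @ h))
            trajectory.append(h.copy())
        return np.array(probs), np.array(trajectory)

    images = rng.normal(size=(10, IMG))    # raw visual stream (random demo data)
    motors = rng.normal(size=(10, MOTOR))  # motor commands (random demo data)
    probs, traj = label_sequence(images, motors)
    ```

    Because labeling is conditioned on the recurrent state rather than on a map, the same location can yield different labels depending on the approach trajectory, matching finding (3) of the abstract.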

    Original language: English
    Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2017 - 26th International Conference on Artificial Neural Networks, Proceedings
    Publisher: Springer-Verlag
    Pages: 35-42
    Number of pages: 8
    ISBN (Print): 9783319685991
    DOI: 10.1007/978-3-319-68600-4_5
    Publication status: Published - 2017 Jan 1
    Event: 26th International Conference on Artificial Neural Networks, ICANN 2017 - Alghero, Italy
    Duration: 2017 Sep 11 – 2017 Sep 14

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 10613 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Keywords

    • Deep autoencoder
    • Indoor scene labeling
    • Mobile robots
    • Recurrent neural network
    • Symbol grounding

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • Computer Science(all)

    Cite this

    Yamada, T., Ito, S., Arie, H., & Ogata, T. (2017). Learning of labeling room space for mobile robots based on visual motor experience. In Artificial Neural Networks and Machine Learning – ICANN 2017 - 26th International Conference on Artificial Neural Networks, Proceedings (pp. 35-42). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10613 LNCS). Springer-Verlag. https://doi.org/10.1007/978-3-319-68600-4_5
