Reorganization of agent networks with reinforcement learning based on communication delay

Kazuki Urakawa, Toshiharu Sugawara

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    3 Citations (Scopus)

    Abstract

    We propose a team formation method for task allocation in agent networks that combines reinforcement learning based on communication delay with reorganization of the agent network. In distributed environments such as Internet applications, including grid computing and service-oriented computing, a task is usually achieved by performing a number of subtasks. These subtasks are constructed on demand in a bottom-up manner and must be executed by appropriate agents that have the capabilities and computational resources each subtask requires. The efficient and effective allocation of tasks to appropriate agents is therefore a key issue in this kind of system. In our model, this allocation problem is formulated as team formation among agents in a task-oriented domain. From this perspective, a number of studies have been conducted that incorporate learning and reorganization. This paper extends the conventional method in two ways. First, our method uses only locally available information for learning, making it applicable to real systems. Second, we introduce the elimination of links, as well as the generation of links, in the agent network to improve learning efficiency. We experimentally show that this extension considerably improves the efficiency of team formation compared with the conventional method, and that it makes the agent network adaptive to environmental changes.
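    The abstract does not specify the algorithm's details. As a rough illustration of the idea it describes — link values learned from locally observable reward penalized by communication delay, with low-valued links eliminated and new links generated — here is a minimal sketch. All class names, constants, and the update rule are illustrative assumptions, not the authors' actual method.

    ```python
    import random

    ALPHA = 0.1            # learning rate (assumed)
    PRUNE_THRESHOLD = 0.2  # links valued below this are eliminated (assumed)

    class Agent:
        def __init__(self, name, delay_to):
            self.name = name
            self.delay_to = delay_to                      # neighbor -> communication delay
            self.link_value = {n: 0.5 for n in delay_to}  # optimistic initial link values
            self.pruned = set()                           # eliminated neighbors, not re-linked

        def update(self, neighbor, task_reward):
            # Local learning step: the reward earned through a link is
            # discounted by that link's communication delay, so slow
            # links accumulate systematically lower values.
            delayed_reward = task_reward / (1.0 + self.delay_to[neighbor])
            q = self.link_value[neighbor]
            self.link_value[neighbor] = q + ALPHA * (delayed_reward - q)

        def reorganize(self, candidate_pool):
            # Link elimination: drop links whose learned value stayed low.
            for n in [n for n, v in self.link_value.items() if v < PRUNE_THRESHOLD]:
                del self.link_value[n]
                del self.delay_to[n]
                self.pruned.add(n)
            # Link generation: connect to one previously unknown candidate.
            unknown = [c for c in candidate_pool
                       if c not in self.link_value and c not in self.pruned and c != self.name]
            if unknown:
                new = random.choice(unknown)
                self.delay_to[new] = random.randint(1, 5)
                self.link_value[new] = 0.5

    # A fast neighbor (delay 1) keeps its value; a slow one (delay 9)
    # decays below the threshold and is replaced during reorganization.
    a = Agent("a0", {"a1": 1, "a2": 9})
    for _ in range(50):
        a.update("a1", 1.0)
        a.update("a2", 1.0)
    a.reorganize(["a1", "a2", "a3", "a4"])
    ```

    Under these assumed constants, the link to the high-delay neighbor falls below the pruning threshold and is swapped for a fresh candidate link, while the low-delay link survives — the qualitative behavior the abstract attributes to combining link elimination with link generation.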

    Original language: English
    Title of host publication: Proceedings - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012
    Pages: 324-331
    Number of pages: 8
    Volume: 2
    DOI: 10.1109/WI-IAT.2012.105
    ISBN: 9780769548807
    Publication status: Published - 2012
    Event: 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012 - Macau
    Duration: 2012 Dec 4 - 2012 Dec 7



    Keywords

    • Distributed cooperative
    • Multi-agent reinforcement learning
    • Reorganization
    • Team formation

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Software

    Cite this

    Urakawa, K., & Sugawara, T. (2012). Reorganization of agent networks with reinforcement learning based on communication delay. In Proceedings - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012 (Vol. 2, pp. 324-331). [6511589] https://doi.org/10.1109/WI-IAT.2012.105


    TY - GEN

    T1 - Reorganization of agent networks with reinforcement learning based on communication delay

    AU - Urakawa, Kazuki

    AU - Sugawara, Toshiharu

    PY - 2012

    Y1 - 2012

    AB - We propose the team formation method for task allocations in agent networks by reinforcement learning based on communication delay and by reorganization of agent networks. A task in a distributed environment like an Internet application, such as grid computing and service-oriented computing, is usually achieved by doing a number of subtasks. These subtasks are constructed on demand in a bottom-up manner and must be done with appropriate agents that have capabilities and computational resources required in each subtask. Therefore, the efficient and effective allocation of tasks to appropriate agents is a key issue in this kind of system. In our model, this allocation problem is formulated as the team formation of agents in the task-oriented domain. From this perspective, a number of studies were conducted in which learning and reorganization were incorporated. The aim of this paper is to extend the conventional method from two viewpoints. First, our proposed method uses only information available locally for learning, so as to make this method applicable to real systems. Second, we introduce the elimination of links as well as the generation of links in the agent network to improve learning efficiency. We experimentally show that this extension can considerably improve the efficiency of team formation compared with the conventional method. We also show that it can make the agent network adaptive to environmental changes.

    KW - Distributed cooperative

    KW - Multi-agent reinforcement learning

    KW - Reorganization

    KW - Team formation

    UR - http://www.scopus.com/inward/record.url?scp=84878465215&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=84878465215&partnerID=8YFLogxK

    U2 - 10.1109/WI-IAT.2012.105

    DO - 10.1109/WI-IAT.2012.105

    M3 - Conference contribution

    AN - SCOPUS:84878465215

    SN - 9780769548807

    VL - 2

    SP - 324

    EP - 331

    BT - Proceedings - 2012 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2012

    ER -