Task migration for mobile edge computing using deep reinforcement learning

Cheng Zhang, Zixuan Zheng

    Research output: Contribution to journal › Article

    7 Citations (Scopus)

    Abstract

    Mobile edge computing (MEC) is a network architecture that places computing capability and storage resources at the edge of the network in a distributed manner, rather than in a centralized cloud. Users' computation tasks can be offloaded to nearby MEC servers to achieve a high quality of computation experience. Because the users of many applications, such as autonomous driving, are highly mobile, the MEC server that originally received the offloaded tasks may end up far from its users. A key challenge in MEC is therefore deciding where and when tasks should be migrated in response to user mobility. Existing works formulate this problem as a sequential decision-making model and solve it as a Markov decision process (MDP), under the assumption that users' mobility patterns are known in advance. In practice, however, such patterns are difficult to obtain beforehand. In this paper, we propose a deep Q-network (DQN) based technique for task migration in MEC systems, which learns the optimal task migration policy from past experience without requiring prior knowledge of users' mobility patterns. The proposed task migration algorithm is validated through extensive simulations of an MEC system.
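The migration decision the abstract describes can be sketched in miniature. The snippet below is a tabular Q-learning stand-in for the paper's DQN (same Bellman update target, but a lookup table instead of a neural network), on a toy environment whose state is (user site, serving server site) and whose actions are "keep" or "migrate". All parameters — the number of sites, the migration penalty, the random-walk mobility model — are hypothetical illustrations, not taken from the paper.

```python
import random

# Toy task-migration environment (hypothetical parameters, not from the paper):
# a user random-walks over N_SITES edge sites laid out on a line; the state is
# (user_site, server_site). Action 0 keeps the task on its current server;
# action 1 migrates it to the user's site.
N_SITES = 5
MIGRATION_COST = 2.0

def step(user, server, action):
    """Apply an action, move the user one step, and return (user, server, reward)."""
    if action == 1:                      # migrate: pay a one-off migration cost
        cost = MIGRATION_COST
        server = user
    else:                                # keep: pay distance-based communication cost
        cost = abs(user - server)
    user = max(0, min(N_SITES - 1, user + random.choice((-1, 0, 1))))
    return user, server, -cost           # reward = negative cost

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning: the update below regresses Q(s, a) toward the same
    # target r + gamma * max_a' Q(s', a') that a DQN would fit with a network.
    q = {(u, s): [0.0, 0.0] for u in range(N_SITES) for s in range(N_SITES)}
    for _ in range(episodes):
        user, server = random.randrange(N_SITES), random.randrange(N_SITES)
        for _ in range(30):
            a = random.randrange(2) if random.random() < eps else \
                max((0, 1), key=lambda x: q[(user, server)][x])
            nu, ns, r = step(user, server, a)
            target = r + gamma * max(q[(nu, ns)])
            q[(user, server)][a] += alpha * (target - q[(user, server)][a])
            user, server = nu, ns
    return q

random.seed(0)
q = train()
# When the user has drifted far from its server, migrating should score
# higher than staying, since the learned policy trades the one-off migration
# cost against repeated communication cost.
print(q[(4, 0)][1] > q[(4, 0)][0])
```

The paper's DQN replaces the Q-table with a function approximator so the same idea scales to large or continuous state spaces, and learns from replayed experience rather than requiring an explicit mobility model.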

    Original language: English
    Pages (from-to): 111-118
    Number of pages: 8
    Journal: Future Generation Computer Systems
    Volume: 96
    DOI: 10.1016/j.future.2019.01.059
    Publication status: Published - 2019 Jul 1

    Fingerprint

    Reinforcement learning
    Servers
    Cloud computing
    Network architecture
    Decision making

    Keywords

    • Deep reinforcement learning
    • Mobile edge computing
    • Service migration

    ASJC Scopus subject areas

    • Software
    • Hardware and Architecture
    • Computer Networks and Communications

    Cite this

    Task migration for mobile edge computing using deep reinforcement learning. / Zhang, Cheng; Zheng, Zixuan.

    In: Future Generation Computer Systems, Vol. 96, 01.07.2019, p. 111-118.

    Research output: Contribution to journal › Article

    @article{2c865cb593bb4187b385c66a91b1bb26,
    title = "Task migration for mobile edge computing using deep reinforcement learning",
    abstract = "Mobile edge computing (MEC) is a network architecture that places computing capability and storage resources at the edge of the network in a distributed manner, rather than in a centralized cloud. Users' computation tasks can be offloaded to nearby MEC servers to achieve a high quality of computation experience. Because the users of many applications, such as autonomous driving, are highly mobile, the MEC server that originally received the offloaded tasks may end up far from its users. A key challenge in MEC is therefore deciding where and when tasks should be migrated in response to user mobility. Existing works formulate this problem as a sequential decision-making model and solve it as a Markov decision process (MDP), under the assumption that users' mobility patterns are known in advance. In practice, however, such patterns are difficult to obtain beforehand. In this paper, we propose a deep Q-network (DQN) based technique for task migration in MEC systems, which learns the optimal task migration policy from past experience without requiring prior knowledge of users' mobility patterns. The proposed task migration algorithm is validated through extensive simulations of an MEC system.",
    keywords = "Deep reinforcement learning, Mobile edge computing, Service migration",
    author = "Cheng Zhang and Zixuan Zheng",
    year = "2019",
    month = "7",
    day = "1",
    doi = "10.1016/j.future.2019.01.059",
    language = "English",
    volume = "96",
    pages = "111--118",
    journal = "Future Generation Computer Systems",
    issn = "0167-739X",
    publisher = "Elsevier",

    }

    TY - JOUR

    T1 - Task migration for mobile edge computing using deep reinforcement learning

    AU - Zhang, Cheng

    AU - Zheng, Zixuan

    PY - 2019/7/1

    Y1 - 2019/7/1

    N2 - Mobile edge computing (MEC) is a network architecture that places computing capability and storage resources at the edge of the network in a distributed manner, rather than in a centralized cloud. Users' computation tasks can be offloaded to nearby MEC servers to achieve a high quality of computation experience. Because the users of many applications, such as autonomous driving, are highly mobile, the MEC server that originally received the offloaded tasks may end up far from its users. A key challenge in MEC is therefore deciding where and when tasks should be migrated in response to user mobility. Existing works formulate this problem as a sequential decision-making model and solve it as a Markov decision process (MDP), under the assumption that users' mobility patterns are known in advance. In practice, however, such patterns are difficult to obtain beforehand. In this paper, we propose a deep Q-network (DQN) based technique for task migration in MEC systems, which learns the optimal task migration policy from past experience without requiring prior knowledge of users' mobility patterns. The proposed task migration algorithm is validated through extensive simulations of an MEC system.

    AB - Mobile edge computing (MEC) is a network architecture that places computing capability and storage resources at the edge of the network in a distributed manner, rather than in a centralized cloud. Users' computation tasks can be offloaded to nearby MEC servers to achieve a high quality of computation experience. Because the users of many applications, such as autonomous driving, are highly mobile, the MEC server that originally received the offloaded tasks may end up far from its users. A key challenge in MEC is therefore deciding where and when tasks should be migrated in response to user mobility. Existing works formulate this problem as a sequential decision-making model and solve it as a Markov decision process (MDP), under the assumption that users' mobility patterns are known in advance. In practice, however, such patterns are difficult to obtain beforehand. In this paper, we propose a deep Q-network (DQN) based technique for task migration in MEC systems, which learns the optimal task migration policy from past experience without requiring prior knowledge of users' mobility patterns. The proposed task migration algorithm is validated through extensive simulations of an MEC system.

    KW - Deep reinforcement learning

    KW - Mobile edge computing

    KW - Service migration

    UR - http://www.scopus.com/inward/record.url?scp=85061362089&partnerID=8YFLogxK

    UR - http://www.scopus.com/inward/citedby.url?scp=85061362089&partnerID=8YFLogxK

    U2 - 10.1016/j.future.2019.01.059

    DO - 10.1016/j.future.2019.01.059

    M3 - Article

    AN - SCOPUS:85061362089

    VL - 96

    SP - 111

    EP - 118

    JO - Future Generation Computer Systems

    JF - Future Generation Computer Systems

    SN - 0167-739X

    ER -