Edge Enabled Two-Stage Scheduling Based on Deep Reinforcement Learning for Internet of Everything

Xiaokang Zhou, Wei Liang, Ke Yan, Weimin Li, Kevin I-Kai Wang, Jianhua Ma, Qun Jin

Research output: Article › peer-review

8 Citations (Scopus)

Abstract

The Internet of Everything (IoE) plays an increasingly indispensable role in modern intelligent applications. These smart applications are known for their real-time requirements under limited network and computing resources, where transferring and computing the tremendous amount of raw data in a cloud center becomes a highly resource-consuming task. The edge-cloud computing infrastructure allows a large amount of data to be processed on nearby edge nodes, so that only the extracted and encrypted key features are transmitted to the data center. This offers the potential to achieve edge-cloud based big data intelligence for IoE in a typical two-stage data processing scheme while satisfying the data security constraint. In this study, a deep reinforcement learning enhanced scheduling method is proposed to address the NP-hard challenge of two-stage scheduling; it allocates computing resources within an edge-cloud infrastructure so that computing tasks are completed at minimum cost. The proposed reinforcement learning algorithm, which incorporates Johnson's rule, is designed to achieve an optimal schedule in IoE. The performance of our method is evaluated and compared with several existing scheduling techniques, and the experimental results demonstrate the ability of the proposed algorithm to achieve a more efficient schedule, with a 1.1-approximation to the targeted optimum, for IoE applications.
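
The abstract cites Johnson's rule for two-stage (two-machine flow shop) scheduling. How the paper couples it with deep reinforcement learning is not detailed here; the sketch below is only the classic Johnson's rule, with an illustrative job layout of (id, edge-stage time, cloud-stage time) tuples assumed for demonstration.

```python
def johnsons_rule(jobs):
    """Order jobs for a two-stage flow shop (stage 1 then stage 2) to
    minimize makespan, per classic Johnson's rule.

    jobs: list of (job_id, stage1_time, stage2_time) tuples -- an
    illustrative layout; the paper's edge/cloud task model may differ.
    """
    # Set A: jobs whose stage-1 time does not exceed their stage-2 time;
    # scheduled first, in increasing order of stage-1 time.
    set_a = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    # Set B: remaining jobs; scheduled last, in decreasing order of stage-2 time.
    set_b = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return set_a + set_b


def makespan(sequence):
    """Completion time of the last job on stage 2 for a given job order."""
    end1 = end2 = 0
    for _, t1, t2 in sequence:
        end1 += t1                    # stage 1 processes jobs back to back
        end2 = max(end2, end1) + t2   # stage 2 must wait for stage-1 output
    return end2


if __name__ == "__main__":
    # Hypothetical tasks: (id, edge processing time, cloud processing time)
    tasks = [("t1", 3, 6), ("t2", 5, 2), ("t3", 1, 2), ("t4", 6, 6)]
    order = johnsons_rule(tasks)
    print([j[0] for j in order], makespan(order))
```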

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Internet of Things Journal
DOI
Publication status: Accepted/In press - 2022

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications
