Abstract
The Internet of Everything (IoE) is playing an increasingly indispensable role in modern intelligent applications. These smart applications are characterized by real-time requirements under limited network and computing resources, which makes transmitting and computing the tremendous amount of raw data in a cloud center a highly resource-consuming task. An edge-cloud computing infrastructure allows large amounts of data to be processed on nearby edge nodes, so that only the extracted and encrypted key features are transmitted to the data center. This offers the potential to achieve edge-cloud based big data intelligence for IoE in a typical two-stage data processing scheme while satisfying data security constraints. In this study, a deep reinforcement learning enhanced scheduling method is proposed to address the NP-hard challenge of two-stage scheduling; it allocates computing resources within an edge-cloud infrastructure so that computing tasks are completed at minimum cost. The proposed reinforcement learning algorithm, which incorporates Johnson's rule, is designed to achieve an optimal schedule in IoE. The performance of our method is evaluated and compared with several existing scheduling techniques, and the experimental results demonstrate the ability of the proposed algorithm to achieve a more efficient schedule, with a 1.1-approximation to the optimum, for the targeted IoE applications.
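The abstract states that the proposed scheduler incorporates Johnson's rule for the two-stage (edge, then cloud) processing pipeline, but the paper's exact formulation is not reproduced here. The following is only a minimal sketch of classical Johnson's rule for a two-machine flow shop, assuming each task is described by hypothetical edge and cloud processing times; the function name `johnsons_rule` and the sample data are illustrative and not taken from the paper.

```python
from typing import List, Tuple


def johnsons_rule(jobs: List[Tuple[float, float]]) -> List[int]:
    """Order jobs for a two-stage flow shop to minimize makespan.

    Each job is a (stage1_time, stage2_time) pair, e.g. (edge_time,
    cloud_time); the returned list contains job indices in processing order.
    """
    front, back = [], []
    # Visit jobs in order of their shortest stage time (stable sort keeps
    # the original order for ties).
    for idx, (t1, t2) in sorted(enumerate(jobs), key=lambda j: min(j[1])):
        if t1 <= t2:
            front.append(idx)   # short first-stage jobs go early
        else:
            back.append(idx)    # short second-stage jobs go late
    # Early jobs in ascending order of stage-1 time, late jobs in
    # descending order of stage-2 time.
    return front + list(reversed(back))


if __name__ == "__main__":
    # Hypothetical (edge_time, cloud_time) pairs for five tasks.
    tasks = [(3.0, 2.0), (5.0, 1.0), (1.0, 4.0), (6.0, 6.0), (2.0, 3.0)]
    print(johnsons_rule(tasks))  # -> [2, 4, 3, 0, 1]
```

Scheduling jobs with short first-stage times early keeps the second stage fed, while pushing jobs with short second-stage times to the end avoids idle time at the first stage; this is the property that makes the two-machine case solvable optimally and motivates using the rule as a building block inside a learned scheduler.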
| Original language | English |
|---|---|
| Pages (from-to) | 1 |
| Number of pages | 1 |
| Journal | IEEE Internet of Things Journal |
| DOI | |
| Publication status | Accepted/In press - 2022 |
ASJC Scopus subject areas
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications