Residential College | false
Status | Published
Title | Deep Reinforcement Learning-Based Dynamic Resource Management for Mobile Edge Computing in Industrial Internet of Things
Authors | Chen Ying1; Liu Zhiyong1; Zhang Yongchao1; Wu Yuan2; Chen Xin1; Zhao Lian3
Issue Date | 2021-07
Source Publication | IEEE Transactions on Industrial Informatics |
ISSN | 1551-3203 |
Volume | 17
Issue | 7
Pages | 4925-4934
Abstract | Nowadays, driven by the rapid development of smart mobile devices and 5G network technologies, the application scenarios of Internet of Things (IoT) technology are becoming increasingly widespread. The integration of IoT and industrial manufacturing systems forms the industrial IoT (IIoT). Because resources such as computation units and battery capacity are limited in IIoT equipment (IIEs), computation-intensive tasks need to be executed on the mobile edge computing (MEC) server. However, the dynamics and continuity of task generation pose a severe challenge to the management of the limited resources in IIoT. In this article, we investigate the dynamic resource management problem of joint power control and computing resource allocation for MEC in IIoT. To minimize the long-term average delay of the tasks, the original problem is transformed into a Markov decision process (MDP). Considering the dynamics and continuity of task generation, we propose a deep reinforcement learning-based dynamic resource management (DDRM) algorithm to solve the formulated MDP problem. Our DDRM algorithm exploits the deep deterministic policy gradient and can handle high-dimensional continuous action and state spaces. Extensive simulation results demonstrate that DDRM can effectively reduce the long-term average delay of the tasks.
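This record does not include the authors' implementation. As an illustration only, the sketch below shows a generic deep deterministic policy gradient (DDPG) update of the kind the abstract describes, with a two-dimensional continuous action standing in for the joint power and computing-resource decision. All names, network sizes, hyperparameters, and the reward convention (negative task delay, so maximizing return minimizes long-term average delay) are assumptions, not the authors' DDRM code.

```python
# Illustrative DDPG-style update for joint power control and compute
# allocation. Hypothetical state/action sizes; NOT the paper's DDRM code.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 8, 2   # assumed: queue/channel state; [tx power, CPU share]
GAMMA, TAU = 0.99, 0.005       # assumed discount factor and soft-update rate

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # actions in (0, 1)
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()          # slowly-updated target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next):
    """One gradient step on a sampled batch (replay buffer omitted)."""
    with torch.no_grad():
        # reward r is assumed to be the negative task delay
        y = r + GAMMA * critic_t(s_next, actor_t(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)   # fit Q to the bootstrapped target
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()    # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1.0 - TAU).add_(TAU * p.data)  # soft target update

# toy batch to exercise the update (exploration noise and environment omitted)
s = torch.randn(32, STATE_DIM); a = torch.rand(32, ACTION_DIM)
r = -torch.rand(32, 1)                          # assumed reward: negative delay
ddpg_step(s, a, r, torch.randn(32, STATE_DIM))
```

The sigmoid output layer is one common way to keep both action components in a bounded continuous range, matching the high-dimensional continuous action space the abstract highlights as DDPG's advantage over value-based methods.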
Keyword | Deep Reinforcement Learning (DRL); Dynamic Resource Management; Industrial Internet of Things (IIoT); Mobile Edge Computing (MEC)
DOI | 10.1109/TII.2020.3028963 |
Indexed By | SCIE |
Language | English
WOS Research Area | Automation & Control Systems ; Computer Science ; Engineering |
WOS Subject | Automation & Control Systems ; Computer Science, Interdisciplinary Applications ; Engineering, Industrial |
WOS ID | WOS:000638402700049 |
Publisher | IEEE - Institute of Electrical and Electronics Engineers Inc., 445 Hoes Lane, Piscataway, NJ 08855-4141
Scopus ID | 2-s2.0-85104173435 |
Document Type | Journal article |
Collection | The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Corresponding Author | Chen Ying |
Affiliation | 1. School of Computer, Beijing Information Science and Technology University, Beijing 100101, China
2. State Key Laboratory of Internet of Things for Smart City, University of Macau, Macao
3. Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada
Recommended Citation GB/T 7714 | Chen Ying, Liu Zhiyong, Zhang Yongchao, et al. Deep Reinforcement Learning-Based Dynamic Resource Management for Mobile Edge Computing in Industrial Internet of Things[J]. IEEE Transactions on Industrial Informatics, 2021, 17(7): 4925-4934.
APA | Chen Ying, Liu Zhiyong, Zhang Yongchao, Wu Yuan, Chen Xin, & Zhao Lian (2021). Deep Reinforcement Learning-Based Dynamic Resource Management for Mobile Edge Computing in Industrial Internet of Things. IEEE Transactions on Industrial Informatics, 17(7), 4925-4934.
MLA | Chen Ying, et al. "Deep Reinforcement Learning-Based Dynamic Resource Management for Mobile Edge Computing in Industrial Internet of Things." IEEE Transactions on Industrial Informatics 17.7 (2021): 4925-4934.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.