Residential College | false |
Status | Published |
Title | Deep Reinforcement Learning-Based Optimization for IRS-Assisted Cognitive Radio Systems |
Author | Canwei Zhong1; Miao Cui1; Guangchi Zhang1; Qingqing Wu2; Xinrong Guan3; Xiaoli Chu4; H. Vincent Poor5 |
Date Issued | 2022-06-16 |
Source Publication | IEEE Transactions on Communications |
ISSN | 0090-6778 |
Volume | 70 |
Issue | 6 |
Pages | 3849-3864 |
Abstract | In this paper, we consider an intelligent reflecting surface (IRS)-assisted cognitive radio system and maximize the secondary user (SU) rate by jointly optimizing the transmit power of the secondary transmitter (ST) and the IRS's reflect beamforming, subject to constraints on the minimum required signal-to-interference-plus-noise ratio at the primary receiver, the ST's maximum transmit power, and the unit modulus of the IRS reflect beamforming vector. This joint optimization problem can be solved suboptimally by non-convex optimization techniques, which, however, usually require complicated mathematical transformations and are computationally intensive. To address this challenge, we propose an algorithm based on the deep deterministic policy gradient (DDPG) method. To achieve higher learning efficiency and lower reward variance, we propose another algorithm based on the soft actor-critic (SAC) method. In both proposed algorithms, a reward impact adjustment approach is used to improve learning efficiency and stability. Simulation results show that the two proposed algorithms achieve SU rates comparable to those of an existing non-convex optimization-based benchmark algorithm with much shorter running time, and that the proposed SAC-based algorithm learns faster and achieves a higher average reward with lower variance than the proposed DDPG-based algorithm. |
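The optimization problem in the abstract can be sketched numerically. Everything below is an illustrative assumption, not taken from the paper: single-antenna terminals, random complex channels, and the parameter values (number of IRS elements N, power budget P_MAX, PU SINR threshold GAMMA_MIN, noise power SIGMA2) are all made up. The random search over unit-modulus phase vectors merely probes the feasible set to show the constraint structure; it is NOT the paper's DDPG/SAC method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16            # number of IRS reflecting elements (assumption)
P_MAX = 1.0       # ST maximum transmit power (assumption)
P_PT = 10.0       # primary transmitter power (assumption)
GAMMA_MIN = 2.0   # minimum required PU SINR (assumption)
SIGMA2 = 1e-2     # noise power (assumption)

def cplx(*shape):
    """Placeholder Rayleigh-style complex channel draws."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_d = cplx()      # ST -> SU direct channel
h_r = cplx(N)     # ST -> IRS channel
g_s = cplx(N)     # IRS -> SU channel
q_d = cplx()      # ST -> PU direct (interference) channel
q_r = cplx(N)     # IRS -> PU channel
f_p = cplx()      # PT -> PU primary-link channel

def effective(theta, direct, incident, reflect):
    # cascaded channel: direct path + IRS-reflected path sum_n reflect_n * theta_n * incident_n
    return direct + (reflect * theta) @ incident

def su_rate(p, theta):
    h = effective(theta, h_d, h_r, g_s)
    return np.log2(1.0 + p * abs(h) ** 2 / SIGMA2)

def pu_sinr(p, theta):
    q = effective(theta, q_d, h_r, q_r)
    return P_PT * abs(f_p) ** 2 / (p * abs(q) ** 2 + SIGMA2)

# Random search over feasible (p, theta): theta entries must have unit modulus,
# p must respect the power budget, and the PU SINR constraint must hold.
best = (0.0, None)
for _ in range(2000):
    theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # unit-modulus reflect beamforming
    p = rng.uniform(0, P_MAX)
    if pu_sinr(p, theta) >= GAMMA_MIN:
        r = su_rate(p, theta)
        if r > best[0]:
            best = (r, (p, theta))

print(f"best feasible SU rate found: {best[0]:.2f} bit/s/Hz")
```

The paper replaces this blind search with DRL agents whose action is the pair (p, theta) and whose reward is tied to the SU rate under the constraints, which is where the DDPG/SAC machinery comes in.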
Keyword | Array Signal Processing ; Cognitive Radio ; Deep Reinforcement Learning ; Intelligent Reflecting Surface ; Interference ; MISO Communication ; Optimization ; Quality of Service ; Reflect Beamforming ; Signal-to-Noise Ratio ; Simulation ; Transmit Power Control |
DOI | 10.1109/TCOMM.2022.3171837 |
Indexed By | SCIE |
Language | English |
WOS Research Area | Engineering ; Telecommunications |
WOS Subject | Engineering, Electrical & Electronic ; Telecommunications |
WOS ID | WOS:000811589400022 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 |
Scopus ID | 2-s2.0-85129685697 |
Document Type | Journal article |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) |
Corresponding Author | Guangchi Zhang; Qingqing Wu |
Affiliation | 1.School of Information Engineering, Guangdong University of Technology, Guangzhou, 510006, China; also with the State Key Laboratory of Integrated Services Networks, Xi’an, China 2.State Key Laboratory of Internet of Things for Smart City, University of Macau, Macau, 999078, China 3.College of Communications Engineering, Army Engineering University of PLA, Nanjing, 210007, China 4.Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield, UK 5.Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA |
Corresponding Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Canwei Zhong, Miao Cui, Guangchi Zhang, et al. Deep Reinforcement Learning-Based Optimization for IRS-Assisted Cognitive Radio Systems[J]. IEEE Transactions on Communications, 2022, 70(6): 3849-3864. |
APA | Canwei Zhong, Miao Cui, Guangchi Zhang, Qingqing Wu, Xinrong Guan, Xiaoli Chu, & H. Vincent Poor (2022). Deep Reinforcement Learning-Based Optimization for IRS-Assisted Cognitive Radio Systems. IEEE Transactions on Communications, 70(6), 3849-3864. |
MLA | Canwei Zhong, et al. "Deep Reinforcement Learning-Based Optimization for IRS-Assisted Cognitive Radio Systems." IEEE Transactions on Communications 70.6 (2022): 3849-3864. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.