Residential College | false |
Status | Published |
Title | Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs |
Author | Junyang Chen1; Zhiguo Gong2; Wei Wang3; Cong Wang4; Zhenghua Xu5; Jianming Lv6; Xueliang Li1; Kaishun Wu1; Weiwen Liu7 |
Date Issued | 2022-12-01 |
Source Publication | IEEE Transactions on Neural Networks and Learning Systems |
ISSN | 2162-237X |
Volume | 33 |
Issue | 12 |
Pages | 7079-7090 |
Abstract | Network representation learning (NRL) has far-reaching effects on data mining research, showing its importance in many real-world applications. NRL, also known as network embedding, aims at preserving graph structures in a low-dimensional space. These learned representations can be used for subsequent machine learning tasks, such as vertex classification, link prediction, and data visualization. Recently, graph convolutional network (GCN)-based models, e.g., GraphSAGE, have drawn a lot of attention for their success in inductive NRL. When conducting unsupervised learning on large-scale graphs, some of these models employ negative sampling (NS) for optimization, which encourages a target vertex to be close to its neighbors while being far from its negative samples. However, NS draws negative vertices either at random or according to vertex degrees, so the generated samples can be either highly relevant or completely unrelated to the target vertex. Moreover, as training proceeds, the gradient of the NS objective, computed from the inner product of an unrelated negative sample and the target vertex, may approach zero, which leads to learning inferior representations. To address these problems, we propose an adversarial training method tailored for unsupervised inductive NRL on large networks. To efficiently keep track of high-quality negative samples, we design a caching scheme with sampling and updating strategies that explores vertex proximity widely while keeping training costs in check. In addition, the proposed method adapts to various existing GCN-based models without significantly complicating their optimization process. Extensive experiments show that our proposed method achieves better performance than state-of-the-art models. |
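The vanishing-gradient issue the abstract attributes to NS can be sketched numerically. The example below is a minimal illustration with made-up embedding vectors, not the paper's actual model: for a negative sample, the NS loss term is -log σ(-u·v), whose gradient magnitude with respect to the inner product is σ(u·v), which shrinks toward zero once an unrelated negative is already far from the target.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical embeddings (values are illustrative only).
target = np.array([1.0, 0.5, -0.3])
related_neg = np.array([0.9, 0.4, -0.2])     # a highly relevant negative sample
unrelated_neg = np.array([-2.0, -1.5, 1.0])  # a completely unrelated negative sample

def ns_grad_magnitude(u, v_neg):
    # NS loss term for one negative sample: -log sigmoid(-u . v_neg).
    # Its gradient w.r.t. the inner product is sigmoid(u . v_neg),
    # which vanishes once u . v_neg is strongly negative.
    return sigmoid(np.dot(u, v_neg))

print(ns_grad_magnitude(target, related_neg))    # sizeable gradient signal
print(ns_grad_magnitude(target, unrelated_neg))  # near zero: little learning signal
```

Unrelated negatives thus stop contributing to learning, which is the motivation the abstract gives for caching high-quality negative samples instead.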
Keyword | Adversarial Learning; Graph Neural Network; Inductive Learning; Negative Sampling (NS); Network Embedding |
DOI | 10.1109/TNNLS.2021.3084195 |
URL | View the original |
Indexed By | SCIE |
Language | English |
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Artificial Intelligence ; Computer Science, Hardware & Architecture ; Computer Science, Theory & Methods ; Engineering, Electrical & Electronic |
WOS ID | WOS:000732137700001 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 445 HOES LANE, PISCATAWAY, NJ 08855-4141 |
Scopus ID | 2-s2.0-85143180761 |
Document Type | Journal article |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU); Faculty of Science and Technology; DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Zhiguo Gong |
Affiliation | 1. Shenzhen University, College of Computer Science and Software Engineering, Shenzhen, 518060, China
2. University of Macau, State Key Laboratory of Internet of Things for Smart City, Department of Computer Information Science, Macao
3. Sun Yat-sen University, School of Intelligent Systems Engineering, Guangzhou, 510275, China
4. The Hong Kong Polytechnic University, Department of Computing, Hong Kong
5. Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Tianjin, 300401, China
6. South China University of Technology, School of Computer Science and Engineering, Guangzhou, 510006, China
7. The Chinese University of Hong Kong, Department of Computer Science and Engineering, Hong Kong |
Corresponding Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Junyang Chen, Zhiguo Gong, Wei Wang, et al. Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7079-7090. |
APA | Chen, J., Gong, Z., Wang, W., Wang, C., Xu, Z., Lv, J., Li, X., Wu, K., & Liu, W. (2022). Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs. IEEE Transactions on Neural Networks and Learning Systems, 33(12), 7079-7090. |
MLA | Chen, Junyang, et al. "Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs." IEEE Transactions on Neural Networks and Learning Systems 33.12 (2022): 7079-7090. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.