Status: Published
Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs
Junyang Chen1; Zhiguo Gong2; Wei Wang3; Cong Wang4; Zhenghua Xu5; Jianming Lv6; Xueliang Li1; Kaishun Wu1; Weiwen Liu7
2022-12-01
Source Publication: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
Volume: 33; Issue: 12; Pages: 7079-7090
Abstract

Network representation learning (NRL) has far-reaching effects on data mining research, showing its importance in many real-world applications. NRL, also known as network embedding, aims at preserving graph structures in a low-dimensional space. These learned representations can be used for subsequent machine learning tasks, such as vertex classification, link prediction, and data visualization. Recently, graph convolutional network (GCN)-based models, e.g., GraphSAGE, have drawn a lot of attention for their success in inductive NRL. When conducting unsupervised learning on large-scale graphs, some of these models employ negative sampling (NS) for optimization, which encourages a target vertex to be close to its neighbors while being far from its negative samples. However, NS draws negative vertices at random or based on the degrees of vertices, so the generated samples may be either highly relevant or completely unrelated to the target vertex. Moreover, as training proceeds, the gradient of the NS objective, calculated from the inner product of an unrelated negative sample and the target vertex, may vanish, leading to inferior representations. To address these problems, we propose an adversarial training method tailored for unsupervised inductive NRL on large networks. To efficiently keep track of high-quality negative samples, we design a caching scheme with sampling and updating strategies that explores vertex proximity widely while keeping training costs low. Moreover, the proposed method adapts to various existing GCN-based models without significantly complicating their optimization. Extensive experiments show that our proposed method achieves better performance than state-of-the-art models.
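The vanishing-gradient problem the abstract attributes to negative sampling can be sketched numerically. In the standard NS objective, each negative sample n contributes a loss term -log σ(-u·n) for target embedding u, whose gradient with respect to u is σ(u·n)·n; when n is completely unrelated to u (u·n is strongly negative), σ(u·n) ≈ 0 and the sample stops contributing to learning. The embeddings and constants below are illustrative only, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_negative_gradient(u, n):
    # Gradient of the NS loss term -log sigmoid(-u.n) w.r.t. u.
    return sigmoid(np.dot(u, n)) * n

rng = np.random.default_rng(0)
u = rng.normal(size=8)  # toy target-vertex embedding

# A "hard" negative still somewhat similar to u keeps the gradient alive...
hard = u + 0.5 * rng.normal(size=8)
# ...while a completely unrelated negative, far from u in embedding space,
# makes sigmoid(u.n) ~ 0, so its gradient contribution vanishes.
easy = -5.0 * u

g_hard = np.linalg.norm(ns_negative_gradient(u, hard))
g_easy = np.linalg.norm(ns_negative_gradient(u, easy))
print(g_hard > 100 * g_easy)  # prints True: the easy negative barely updates u
```

This is why the paper's caching scheme tracks high-quality (hard) negatives rather than drawing them purely at random or by degree.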

Keywords: Adversarial Learning; Graph Neural Network; Inductive Learning; Negative Sampling (NS); Network Embedding
DOI: 10.1109/TNNLS.2021.3084195
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000732137700001
Publisher: IEEE-Inst Electrical Electronics Engineers Inc, 445 Hoes Lane, Piscataway, NJ 08855-4141
Scopus ID: 2-s2.0-85143180761
Document Type: Journal article
Collection: The State Key Laboratory of Internet of Things for Smart City (University of Macau)
Faculty of Science and Technology
Department of Computer and Information Science
Corresponding Author: Zhiguo Gong
Affiliation:
1. Shenzhen University, College of Computer Science and Software Engineering, Shenzhen, 518060, China
2. University of Macau, State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, Macao
3. Sun Yat-sen University, School of Intelligent Systems Engineering, Guangzhou, 510275, China
4. The Hong Kong Polytechnic University, Department of Computing, Hong Kong
5. Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Tianjin, 300401, China
6. South China University of Technology, School of Computer Science and Engineering, Guangzhou, 510006, China
7. The Chinese University of Hong Kong, Department of Computer Science and Engineering, Hong Kong
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714: Junyang Chen, Zhiguo Gong, Wei Wang, et al. Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7079-7090.
APA: Chen, J., Gong, Z., Wang, W., Wang, C., Xu, Z., Lv, J., Li, X., Wu, K., & Liu, W. (2022). Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs. IEEE Transactions on Neural Networks and Learning Systems, 33(12), 7079-7090.
MLA: Chen, Junyang, et al. "Adversarial Caching Training: Unsupervised Inductive Network Representation Learning on Large-Scale Graphs." IEEE Transactions on Neural Networks and Learning Systems 33.12 (2022): 7079-7090.
Files in This Item:
There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.