Residential College: false
Status: Published
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Xia, Yaqi1; Zhang, Zheng1; Yang, Donglin2; Hu, Chuang1; Zhou, Xiaobo3; Chen, Hongyang4; Sang, Qianlong1; Cheng, Dazhao1
Publication Date: 2024-11
Source Publication: IEEE Transactions on Parallel and Distributed Systems
ISSN: 1045-9219
Volume: 35, Issue: 11, Pages: 1904-1919
Abstract

Temporal Graph Neural Networks (TGNNs), an extension of Graph Neural Networks, have recently demonstrated remarkable effectiveness in handling dynamic graph data. Distributed TGNN training must efficiently handle temporal dependencies, which often incur excessive cross-device communication carrying significant redundant data. Existing systems cannot remove this redundancy in data reuse and transfer, and they suffer severe communication overhead in distributed settings. This work introduces Sven, an algorithm-system co-designed library that accelerates TGNN training on multi-GPU platforms. Exploiting the dependency patterns of TGNN models, we develop a redundancy-free graph organization that mitigates redundant data transfer. We further investigate communication imbalance among devices and formulate the graph partitioning problem as minimizing the maximum communication balance cost, which we prove to be NP-hard. We propose an approximation algorithm, Re-FlexBiCut, to tackle this problem. Finally, we combine prefetching, adaptive micro-batch pipelining, and asynchronous pipelining into a hierarchical pipelining mechanism that mitigates communication overhead. Sven is the first comprehensive optimization solution for scaling memory-based TGNN training. In extensive experiments on a 64-GPU cluster, Sven achieves speedups of 1.9x to 3.5x over state-of-the-art approaches, up to 5.26x higher communication efficiency, and up to 59.2% lower communication imbalance.
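The min-max objective in the abstract (place work so that the most heavily loaded device carries as little as possible) can be illustrated with a simple greedy heuristic. This is a hypothetical sketch using the classic longest-processing-time rule, not the paper's Re-FlexBiCut algorithm; the loads, device count, and function name are illustrative assumptions:

```python
# Hypothetical sketch of a min-max balanced assignment.
# NOT the paper's Re-FlexBiCut; it only illustrates the objective of
# minimizing the maximum per-device communication load.

def greedy_minmax_partition(loads, num_devices):
    """Assign each load to a device, always placing the next-largest
    load on the currently least-loaded device, so the per-device
    maximum (the communication bottleneck) stays small."""
    totals = [0] * num_devices          # running load per device
    assignment = {}                     # load index -> device id
    # Sort descending: placing big loads first is the classic LPT heuristic.
    for idx in sorted(range(len(loads)), key=lambda i: -loads[i]):
        dev = min(range(num_devices), key=lambda d: totals[d])
        totals[dev] += loads[idx]
        assignment[idx] = dev
    return assignment, max(totals)

# Toy example: six communication loads over three devices.
loads = [7, 5, 4, 3, 2, 2]
assignment, bottleneck = greedy_minmax_partition(loads, 3)
# bottleneck is 9 here; an exact solver could check whether the
# lower bound ceil(sum/3) = 8 is attainable.
```

Exact minimization of the bottleneck is NP-hard (it generalizes multiprocessor scheduling), which motivates the paper's use of an approximation algorithm.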

Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free
DOI: 10.1109/TPDS.2024.3432855
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:001311204500002
Publisher: IEEE Computer Society, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314
Scopus ID: 2-s2.0-85199570814
Document Type: Journal article
Collection: Faculty of Science and Technology; The State Key Laboratory of Internet of Things for Smart City (University of Macau); Department of Computer and Information Science
Corresponding Authors: Sang, Qianlong; Cheng, Dazhao
Affiliations:
1. School of Computer Science, Wuhan University, Hubei 430072, China
2. NVIDIA Corp., Santa Clara, CA 95051, USA
3. IOTSC & Department of Computer and Information Sciences, University of Macau, Macau 999078, China
4. Research Center for Graph Computing, Zhejiang Lab, Hangzhou 311100, China
Recommended Citation
GB/T 7714: Xia, Yaqi, Zhang, Zheng, Yang, Donglin, et al. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11): 1904-1919.
APA: Xia, Y., Zhang, Z., Yang, D., Hu, C., Zhou, X., Chen, H., Sang, Q., & Cheng, D. (2024). Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism. IEEE Transactions on Parallel and Distributed Systems, 35(11), 1904-1919.
MLA: Xia, Yaqi, et al. "Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism." IEEE Transactions on Parallel and Distributed Systems, vol. 35, no. 11, 2024, pp. 1904-1919.
Files in This Item: There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.