
Browse/Search Results: 1-2 of 2

MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism Journal article
Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6): 843-856.
Authors:  Zhang, Zheng;  Xia, Yaqi;  Wang, Hulin;  Yang, Donglin;  Hu, Chuang; et al.
TC[WOS]:0  TC[Scopus]:1  IF:5.6/4.5 | Submit date: 2024/05/16
Distributed Training  Memory Redundancy  Mixture Of Experts  Performance Model  Pipeline Parallelism  
MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism Conference paper
Zhang, Zheng; Yang, Donglin; Xia, Yaqi; Ding, Liang; Tao, Dacheng; Zhou, Xiaobo; Cheng, Dazhao. MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism[C]. USA: Institute of Electrical and Electronics Engineers Inc., 2023: 167-177.
Authors:  Zhang, Zheng;  Yang, Donglin;  Xia, Yaqi;  Ding, Liang;  Tao, Dacheng; et al.
TC[WOS]:1  TC[Scopus]:1 | Submit date: 2023/08/08
Mixture Of Experts  Pipeline Parallelism  Distributed Training  Memory Efficiency