Faculties & Institutes: Faculty of Scien... [4]; THE STATE KEY LA... [3]
Authors: XIAOBO ZHOU [3]; CHENGZHONG XU [2]
Document Type: Journal article [3]; Conference paper [2]
Date Issued: 2024 [4]; 2023 [1]
Language: English [5]
Source Publication: IEEE Transaction... [2]; Proceedings of t... [2]; Proceedings - 20... [1]
Indexed By: SCIE [2]; CPCI-S [1]
Browse/Search Results: 1-5 of 5
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Journal article
Xia, Yaqi; Zhang, Zheng; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; Chen, Hongyang; Sang, Qianlong; Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
TC[WOS]: 0 | TC[Scopus]: 4 | IF: 5.6 / 4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture Of Experts; Performance Model; Pipeline Parallelism
Planck: Optimizing LLM Inference Performance in Pipeline Parallelism with Fine-Grained SLO Constraint
Journal article
Lin, Yanying; Peng, Shijie; Wu, Shuaipeng; Li, Yanbo; Lu, Chengzhi; Xu, Chengzhong; Ye, Kejiang. Planck: Optimizing LLM Inference Performance in Pipeline Parallelism with Fine-Grained SLO Constraint[J]. Proceedings of the IEEE International Conference on Web Services, ICWS, 2024, 1306-1313.
TC[WOS]: 0 | TC[Scopus]: 1 | Submit date: 2024/12/26
Keywords: LLM Serving; Pipeline Bubble; Pipeline Parallelism; SLO Constraint
Planck: Optimizing LLM Inference Performance in Pipeline Parallelism with Fine-Grained SLO Constraint
Conference paper
Lin, Yanying; Peng, Shijie; Wu, Shuaipeng; Li, Yanbo; Lu, Chengzhi; Xu, Chengzhong; Ye, Kejiang. Planck: Optimizing LLM Inference Performance in Pipeline Parallelism with Fine-Grained SLO Constraint[C]. Institute of Electrical and Electronics Engineers Inc., 2024, 1306-1313.
TC[WOS]: 0 | TC[Scopus]: 1 | Submit date: 2024/12/05
Keywords: LLM Serving; Pipeline Bubble; Pipeline Parallelism; SLO Constraint
MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism
Conference paper
Zhang, Zheng; Yang, Donglin; Xia, Yaqi; Ding, Liang; Tao, Dacheng; Zhou, Xiaobo; Cheng, Dazhao. MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism[C]. USA: Institute of Electrical and Electronics Engineers Inc., 2023, 167-177.
TC[WOS]: 2 | TC[Scopus]: 6 | Submit date: 2023/08/08
Keywords: Mixture Of Experts; Pipeline Parallelism; Distributed Training; Memory Efficiency