Browse/Search Results: 1-7 of 7
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Journal article
Xia, Yaqi, Zhang, Zheng, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Chen, Hongyang, Sang, Qianlong, Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free
Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences
Journal article
Wang, Hulin, Yang, Donglin, Xia, Yaqi, Zhang, Zheng, Wang, Qigang, Fan, Jianping, Zhou, Xiaobo, Cheng, Dazhao. Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences[J]. IEEE Transactions on Computers, 2024, 73(7), 1852-1865.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.6 / 3.2 | Submit date: 2024/05/16
Keywords: Sparse Transformer; Inference Acceleration; GPU; Deep Learning; Memory Optimization; Resource Management
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng, Xia, Yaqi, Wang, Hulin, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture of Experts; Performance Model; Pipeline Parallelism
Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism
Conference paper
Xia, Yaqi, Zhang, Zheng, Wang, Hulin, Yang, Donglin, Zhou, Xiaobo, Cheng, Dazhao. Redundancy-Free High-Performance Dynamic GNN Training with Hierarchical Pipeline Parallelism[C], 2023, 17-13.
TC[WOS]: 4 | TC[Scopus]: 5 | Submit date: 2023/08/08
MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism
Conference paper
Zhang, Zheng, Yang, Donglin, Xia, Yaqi, Ding, Liang, Tao, Dacheng, Zhou, Xiaobo, Cheng, Dazhao. MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism[C], USA: Institute of Electrical and Electronics Engineers Inc., 2023, 167-177.
TC[WOS]: 1 | TC[Scopus]: 1 | Submit date: 2023/08/08
Keywords: Mixture of Experts; Pipeline Parallelism; Distributed Training; Memory Efficiency
Development of nanoscale drug delivery systems of dihydroartemisinin for cancer therapy: A review
Journal article
Wong, Ka Hong, Yang, Donglin, Chen, Shanshan, He, Chengwei, Chen, Meiwan. Development of nanoscale drug delivery systems of dihydroartemisinin for cancer therapy: A review[J]. Asian Journal of Pharmaceutical Sciences, 2022, 17(4), 475-490.
TC[WOS]: 30 | TC[Scopus]: 33 | IF: 10.7 / 9.0 | Submit date: 2023/01/30
Keywords: Chemodynamic Therapy; Dihydroartemisinin; Ferroptosis; Nano-drug Delivery; Photodynamic Therapy; Photothermal Therapy
ROS responsive polyethylenimine-based fluorinated polymers for enhanced transfection efficiency and lower cytotoxicity
Journal article
Hua, Peng, Yang, Donglin, Chen, Ruie, Qiu, Peiqi, Chen, Meiwan. ROS responsive polyethylenimine-based fluorinated polymers for enhanced transfection efficiency and lower cytotoxicity[J]. Bosnian Journal of Basic Medical Sciences, 2022, 22(4), 593-607.
TC[WOS]: 5 | TC[Scopus]: 6 | IF: 3.1 / 3.3 | Submit date: 2022/08/08
Keywords: Fluorination; Polycation; ROS-responsive; Serum-resistant; Transfection