Faculties & Institutes: Faculty of Scie... [12]; THE STATE KEY LA... [7]
Authors: CHENGZHONG XU [8]; XIAOBO ZHOU [4]; XU HUANLE [2]; LI LI [1]
Document Type: Journal article [22]
Date Issued: 2024 [5]; 2023 [2]; 2022 [2]; 2021 [2]; 2020 [1]; 2019 [1]
Language: English [22]
Source Publication: IEEE Transactio... [17]; IEEE TRANSACTION... [5]
Indexed By: SCIE [14]
Browse/Search Results: 1-10 of 22
DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training
Journal article
Zhou, Haoran, Rang, Wei, Chen, Hongyang, Zhou, Xiaobo, Cheng, Dazhao. DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1920-1935.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Deep Neural Network Training; Heterogeneous Memory; Memory Management; Performance Optimization
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Journal article
Xia, Yaqi, Zhang, Zheng, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Chen, Hongyang, Sang, Qianlong, Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free
InSS: An Intelligent Scheduling Orchestrator for Multi-GPU Inference with Spatio-Temporal Sharing
Journal article
Han, Ziyi, Zhou, Ruiting, Xu, Chengzhong, Zeng, Yifan, Zhang, Renli. InSS: An Intelligent Scheduling Orchestrator for Multi-GPU Inference with Spatio-Temporal Sharing[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(10), 1735-1748.
TC[WOS]: 0 | TC[Scopus]: 2 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: DNN Inference; GPU Resource Management; Online Scheduling
Joint Participant and Learning Topology Selection for Federated Learning in Edge Clouds
Journal article
Wei, Xinliang, Ye, Kejiang, Shi, Xinghua, Xu, Cheng Zhong, Wang, Yu. Joint Participant and Learning Topology Selection for Federated Learning in Edge Clouds[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(8), 1456-1468.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/07/04
Keywords: Edge Computing; Federated Learning; Learning Topology; Participant Selection
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng, Xia, Yaqi, Wang, Hulin, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture of Experts; Performance Model; Pipeline Parallelism
AutoRS: Environment-Dependent Real-Time Scheduling for End-to-End Autonomous Driving
Journal article
Ma, Jialiang, Li, Li, Xu, Chengzhong. AutoRS: Environment-Dependent Real-Time Scheduling for End-to-End Autonomous Driving[J]. IEEE Transactions on Parallel and Distributed Systems, 2023, 34(12), 3238-3252.
TC[WOS]: 0 | TC[Scopus]: 3 | IF: 5.6 / 4.5 | Submit date: 2024/01/02
Keywords: Autonomous Driving; Real-time Scheduling
Cloud Configuration Optimization for Recurring Batch-Processing Applications
Journal article
Liu, Yang, Xu, Huanle, Lau, Wing Cheong. Cloud Configuration Optimization for Recurring Batch-Processing Applications[J]. IEEE Transactions on Parallel and Distributed Systems, 2023, 34(5), 1495-1507.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2023/06/05
Keywords: Big Data Analytics; Cloud Configuration; Gaussian-Process UCB; Kubernetes
An In-depth Study of Microservice Call Graph and Runtime Performance
Journal article
Luo, Shutian, Xu, Huanle, Lu, Chengzhi, Ye, Kejiang, Xu, Guoyao, Zhang, Liping, He, Jian, Xu, Cheng Zhong. An In-depth Study of Microservice Call Graph and Runtime Performance[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12), 3901-3914.
TC[WOS]: 28 | TC[Scopus]: 34 | IF: 5.6 / 4.5 | Submit date: 2022/08/05
Keywords: Trace Analysis; Microservice; Performance Characterization
The State of the Art of Metadata Managements in Large-Scale Distributed File Systems: Scalability, Performance and Availability
Journal article
Hao Dai, Yang Wang, Kenneth B. Kent, Lingfang Zeng, Chengzhong Xu. The State of the Art of Metadata Managements in Large-Scale Distributed File Systems: Scalability, Performance and Availability[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12), 3850-3869.
TC[WOS]: 7 | TC[Scopus]: 7 | IF: 5.6 / 4.5 | Submit date: 2022/05/17
Keywords: High-availability; High-performance; High-scalability; Large-scale Distributed File System; Metadata Management
Overlapping Communication with Computation in Parameter Server for Scalable DL Training
Journal article
Wang, Shaoqi, Pi, Aidi, Zhou, Xiaobo, Wang, Jun, Xu, Cheng Zhong. Overlapping Communication with Computation in Parameter Server for Scalable DL Training[J]. IEEE Transactions on Parallel and Distributed Systems, 2021, 32(9), 2144-2159.
TC[WOS]: 19 | TC[Scopus]: 20 | IF: 5.6 / 4.5 | Submit date: 2021/05/31
Keywords: Backward Computation; Forward Computation; Gradient Communication; Parameter Communication; Parameter Server