Faculties & Institutes
THE STATE KEY L... [24]
Faculty of Scie... [17]
Faculty of Healt... [1]
Authors
XIAOBO ZHOU [19]
CHENGZHONG XU [5]
WU QINGQING [5]
ZHENG WENHUA [1]
CHEN CHUN LUNG P... [1]
WANG YE [1]
More...
Document Type
Journal article [18]
Conference paper [7]
Project [2]
Date Issued
2024 [10]
2023 [7]
2022 [5]
2021 [4]
2020 [1]
Language
English [24]
Chinese [1]
Source Publication
IEEE Transaction... [3]
IEEE Transaction... [2]
IEEE Transaction... [2]
2021 IEEE Global... [1]
Briefings in Bio... [1]
Dianzi Yu Xinxi ... [1]
More...
Indexed By
SCIE [16]
CPCI-S [3]
ESCI [1]
Funding Organization
UM [2]
Funding Project
Browse/Search Results: 1-10 of 27
Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism
Journal article
Xia, Yaqi, Zhang, Zheng, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Chen, Hongyang, Sang, Qianlong, Cheng, Dazhao. Redundancy-free and load-balanced TGNN training with hierarchical pipeline parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1904-1919.
Authors: Xia, Yaqi; Zhang, Zheng; Yang, Donglin; Hu, Chuang; Zhou, Xiaobo; et al.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Communication Balance; Distributed Training; Dynamic GNN; Pipeline Parallelism; Redundancy-free
DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training
Journal article
Zhou, Haoran, Rang, Wei, Chen, Hongyang, Zhou, Xiaobo, Cheng, Dazhao. DeepTM: Efficient Tensor Management in Heterogeneous Memory for DNN Training[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(11), 1920-1935.
Authors: Zhou, Haoran; Rang, Wei; Chen, Hongyang; Zhou, Xiaobo; Cheng, Dazhao
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.6 / 4.5 | Submit date: 2024/08/05
Keywords: Deep Neural Network Training; Heterogeneous Memory; Memory Management; Performance Optimization
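The DeepTM record above only names its topic through the title and keywords. As a generic, hypothetical illustration of tensor placement in heterogeneous memory (not the paper's method), the sketch below greedily pins the most frequently accessed tensors into a limited fast tier and spills the rest to a slow tier; all class, field, and capacity names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Tensor:
    name: str
    size_mb: float
    access_count: int = 0  # how often the tensor is touched per iteration

@dataclass
class TieredMemory:
    """Toy greedy placement across a fast tier (e.g. DRAM) and a slow tier (e.g. NVM)."""
    fast_capacity_mb: float
    fast: list = field(default_factory=list)
    slow: list = field(default_factory=list)

    def place(self, tensors):
        used = 0.0
        # Hot tensors (most frequently accessed) fill the fast tier first.
        for t in sorted(tensors, key=lambda t: t.access_count, reverse=True):
            if used + t.size_mb <= self.fast_capacity_mb:
                self.fast.append(t)
                used += t.size_mb
            else:
                self.slow.append(t)

if __name__ == "__main__":
    mem = TieredMemory(fast_capacity_mb=512)
    workload = [Tensor("weights", 300, access_count=100),
                Tensor("activations", 400, access_count=50),
                Tensor("optimizer_state", 200, access_count=10)]
    mem.place(workload)
    print([t.name for t in mem.fast], [t.name for t in mem.slow])
```

A real system would also migrate tensors between tiers as access patterns change during training; this sketch only shows an initial placement decision.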
AntiFormer: graph enhanced large language model for binding affinity prediction
Journal article
Wang, Qing, Feng, Yuzhou, Wang, Yanfei, Li, Bo, Wen, Jianguo, Zhou, Xiaobo, Song, Qianqian. AntiFormer: graph enhanced large language model for binding affinity prediction[J]. Briefings in Bioinformatics, 2024, 25(5), bbae403.
Authors: Wang, Qing; Feng, Yuzhou; Wang, Yanfei; Li, Bo; Wen, Jianguo; et al.
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 6.8 / 7.9 | Submit date: 2024/09/03
Keywords: Antibody Binding Affinity; Antibody Maturation; Large Language Model; Single-cell BCR
Heterogeneity-aware Proactive Elastic Resource Allocation for Serverless Applications
Journal article
Feng, Binbin, Ding, Zhijun, Zhou, Xiaobo, Jiang, Changjun. Heterogeneity-aware Proactive Elastic Resource Allocation for Serverless Applications[J]. IEEE Transactions on Services Computing, 2024, 17(5), 2473-2487.
Authors: Feng, Binbin; Ding, Zhijun; Zhou, Xiaobo; Jiang, Changjun
TC[WOS]: 1 | TC[Scopus]: 2 | IF: 5.5 / 5.9 | Submit date: 2024/05/16
Keywords: Instance Allocation; NUMA; Resource Estimation; Server Scaling; Serverless; Workflow; Workload Prediction
Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences
Journal article
Wang, Hulin, Yang, Donglin, Xia, Yaqi, Zhang, Zheng, Wang, Qigang, Fan, Jianping, Zhou, Xiaobo, Cheng, Dazhao. Raptor-T: A Fused and Memory-Efficient Sparse Transformer for Long and Variable-Length Sequences[J]. IEEE Transactions on Computers, 2024, 73(7), 1852-1865.
Authors: Wang, Hulin; Yang, Donglin; Xia, Yaqi; Zhang, Zheng; Wang, Qigang; et al.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.6 / 3.2 | Submit date: 2024/05/16
Keywords: Sparse Transformer; Inference Acceleration; GPU; Deep Learning; Memory Optimization; Resource Management
Expeditious High-Concurrency MicroVM SnapStart in Persistent Memory with an Augmented Hypervisor
Conference paper
Xingguo Pang, Yanze Zhang, Liu Liu, Dazhao Cheng, Cheng-Zhong Xu, Xiaobo Zhou. Expeditious High-Concurrency MicroVM SnapStart in Persistent Memory with an Augmented Hypervisor[C]: USENIX Association, 2024, 985-998.
Authors: Xingguo Pang; Yanze Zhang; Liu Liu; Dazhao Cheng; Cheng-Zhong Xu; et al.
Adobe PDF | TC[WOS]: 0 | TC[Scopus]: 0 | Submit date: 2024/08/10
A unified hybrid memory system for scalable deep learning and big data applications
Journal article
Rang, Wei, Liang, Huanghuang, Wang, Ye, Zhou, Xiaobo, Cheng, Dazhao. A unified hybrid memory system for scalable deep learning and big data applications[J]. Journal of Parallel and Distributed Computing, 2024, 186, 104820.
Authors: Rang, Wei; Liang, Huanghuang; Wang, Ye; Zhou, Xiaobo; Cheng, Dazhao
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.4 / 3.4 | Submit date: 2024/05/02
Keywords: Data Placement and Migration; DNN Applications; Hybrid Memory System; NVM; Unified Memory Management
MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism
Journal article
Zhang, Zheng, Xia, Yaqi, Wang, Hulin, Yang, Donglin, Hu, Chuang, Zhou, Xiaobo, Cheng, Dazhao. MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(6), 843-856.
Authors: Zhang, Zheng; Xia, Yaqi; Wang, Hulin; Yang, Donglin; Hu, Chuang; et al.
TC[WOS]: 0 | TC[Scopus]: 1 | IF: 5.6 / 4.5 | Submit date: 2024/05/16
Keywords: Distributed Training; Memory Redundancy; Mixture of Experts; Performance Model; Pipeline Parallelism
Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets
Journal article
Liu, Liu, Ding, Zhijun, Cheng, Dazhao, Zhou, Xiaobo. Locality-aware and Fault-tolerant Batching for Machine Learning on Distributed Datasets[J]. IEEE Transactions on Cloud Computing, 2024, 12(2), 370-387.
Authors: Liu, Liu; Ding, Zhijun; Cheng, Dazhao; Zhou, Xiaobo
TC[WOS]: 0 | TC[Scopus]: 0 | IF: 5.3 / 4.6 | Submit date: 2024/05/16
Keywords: Adaptation Models; Byzantine Gradient; Computational Modeling; Data Models; Distributed Databases; Distributed Dataset; Graphics Processing Units; Load Management; Machine Learning Training; Straggler; Training
Incendio: Priority-based Scheduling for Alleviating Cold Start in Serverless Computing
Journal article
Cai, Xinquan, Sang, Qianlong, Hu, Chuang, Gong, Yili, Suo, Kun, Zhou, Xiaobo, Cheng, Dazhao. Incendio: Priority-based Scheduling for Alleviating Cold Start in Serverless Computing[J]. IEEE Transactions on Computers, 2024, 73(7), 1780-1794.
Authors: Cai, Xinquan; Sang, Qianlong; Hu, Chuang; Gong, Yili; Suo, Kun; et al.
TC[WOS]: 1 | TC[Scopus]: 1 | IF: 3.6 / 3.2 | Submit date: 2024/05/16
Keywords: Serverless Computing; Cold Start; Priority; Prediction; Scheduling; In-memory Computing; Distributed Systems
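The Incendio record above likewise only names its topic through keywords. As a generic, hypothetical illustration of priority-based scheduling against serverless cold starts (not the paper's algorithm), the sketch below keeps a bounded pool of warm functions and evicts the entry with the lowest recency-weighted invocation frequency; all names and the priority formula are assumptions made for the example.

```python
import time

class WarmPoolScheduler:
    """Toy warm-pool scheduler: frequently and recently invoked functions stay warm;
    when the pool is full, the lowest-priority container is evicted."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.warm = {}  # function name -> (invocation count, last invocation time)

    def _priority(self, count: int, last_invoked: float) -> float:
        age = time.time() - last_invoked
        return count / (1.0 + age)  # frequent and recent -> higher priority

    def invoke(self, fn_name: str) -> str:
        if fn_name in self.warm:
            count, _ = self.warm[fn_name]
            self.warm[fn_name] = (count + 1, time.time())
            return "warm start"
        # Cold start: admit the function, evicting the lowest-priority entry if needed.
        if len(self.warm) >= self.capacity:
            victim = min(self.warm, key=lambda f: self._priority(*self.warm[f]))
            del self.warm[victim]
        self.warm[fn_name] = (1, time.time())
        return "cold start"

if __name__ == "__main__":
    sched = WarmPoolScheduler(capacity=2)
    print(sched.invoke("resize-image"))  # cold start
    print(sched.invoke("resize-image"))  # warm start
    print(sched.invoke("transcode"))     # cold start
    print(sched.invoke("thumbnail"))     # cold start, evicts the lowest-priority entry
```

A production scheduler would also predict upcoming invocations and account for per-function memory cost; the sketch only illustrates the eviction side of the decision.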