Browse/Search Results: 1-8 of 8

Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks Conference paper
Zhan, Shichen, Wu, Yebo, Tian, Chunlin, Zhao, Yan, Li, Li. Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks[C]:Institute of Electrical and Electronics Engineers Inc., 2024.
Authors:  Zhan, Shichen;  Wu, Yebo;  Tian, Chunlin;  Zhao, Yan;  Li, Li
TC[WOS]:0 TC[Scopus]:1 | Submit date:2024/11/05
Federated Learning  Pre-training  Resource-efficient  Training  Performance Evaluation  Energy Consumption  Accuracy  Memory Management  Quality Of Service  
Boosting Image Restoration via Priors from Pre-Trained Models Conference paper
Xu, Xiaogang, Kong, Shu, Hu, Tao, Liu, Zhe, Bao, Hujun. Boosting Image Restoration via Priors from Pre-Trained Models[C]:IEEE Computer Society, 2024, 2900-2909.
Authors:  Xu, Xiaogang;  Kong, Shu;  Hu, Tao;  Liu, Zhe;  Bao, Hujun
TC[Scopus]:2 | Submit date:2024/11/05
Computer Vision  Shape  Computational Modeling  Noise Reduction  Training Data  Boosting  Data Models  Pre-trained Models  Image Restoration  Spatial-varying Enhancement  Channel-spatial Attention  
LightVLP: A Lightweight Vision-Language Pre-training via Gated Interactive Masked AutoEncoders Conference paper
Sun, Xingwu, Yang, Zhen, Xie, Ruobing, Lian, Fengzong, Kang, Zhanhui, Xu, Chengzhong. LightVLP: A Lightweight Vision-Language Pre-training via Gated Interactive Masked AutoEncoders[C]:European Language Resources Association (ELRA), 2024, 10499-10510.
Authors:  Sun, Xingwu;  Yang, Zhen;  Xie, Ruobing;  Lian, Fengzong;  Kang, Zhanhui; et al.
TC[Scopus]:0 | Submit date:2024/07/04
Lightweight V&L Pre-training  Mask Autoencoder  Vision-language Pre-training
When Pre-Training Model Meets Smart Meter Data Applications: A Preliminary Trial of General Way Conference paper
Wang, Zhenyi, Zhang, Hongcai, Zhou, Baorong, Zhao, Wenmeng, Mao, Tian. When Pre-Training Model Meets Smart Meter Data Applications: A Preliminary Trial of General Way[C]:IEEE Computer Society, 2024, 203130.
Authors:  Wang, Zhenyi;  Zhang, Hongcai;  Zhou, Baorong;  Zhao, Wenmeng;  Mao, Tian
TC[Scopus]:0 | Submit date:2024/11/05
Deep Learning  Load Forecasting  Load Profiling  Pre-training Model  Smart Meter Data  Transformer  
ProposalContrast: Unsupervised Pre-training for LiDAR-Based 3D Object Detection Conference paper
Yin, Junbo, Zhou, Dingfu, Zhang, Liangjun, Fang, Jin, Xu, Cheng-Zhong, Shen, Jianbing, Wang, Wenguan. ProposalContrast: Unsupervised Pre-training for LiDAR-Based 3D Object Detection[C]:Springer-Verlag, Berlin, 2022, 17-33.
Authors:  Yin, Junbo;  Zhou, Dingfu;  Zhang, Liangjun;  Fang, Jin;  Xu, Cheng Zhong; et al.
TC[WOS]:34 TC[Scopus]:51 | Submit date:2023/01/30
3d Object Detection  Unsupervised Point Cloud Pre-training  
ProposalContrast: Unsupervised Pre-training for LiDAR-Based 3D Object Detection Conference paper
Yin, Junbo, Zhang, Liangjun, Fang, Jin, Zhou, Dingfu, Xu, Cheng-Zhong, Shen, Jianbing, Wang, Wenguan. ProposalContrast: Unsupervised Pre-training for LiDAR-Based 3D Object Detection[C], 2022.
Authors:  Yin, Junbo;  Zhang, Liangjun;  Fang, Jin;  Zhou, Dingfu;  Xu, Cheng-Zhong; et al.
TC[WOS]:34 TC[Scopus]:51 | Submit date:2023/08/08
3d Object Detection  Unsupervised Point Cloud Pre-training  
A Simple yet Effective Layered Loss for Pre-training of Network Embedding Journal article
Chen, Junyang, Li, Xueliang, Li, Yuanman, Li, Paul, Wang, Mengzhu, Zhang, Xiang, Gong, Zhiguo, Wu, Kaishun, Leung, Victor C. M. A Simple yet Effective Layered Loss for Pre-training of Network Embedding[J]. IEEE Transactions on Network Science and Engineering, 2022, 9(3), 1827-1837.
Authors:  Chen, Junyang;  Li, Xueliang;  Li, Yuanman;  Li, Paul;  Wang, Mengzhu; et al.
TC[WOS]:6 TC[Scopus]:3  IF:6.7/6.0 | Submit date:2022/05/17
Graph Neural Networks  Layered Loss  Network Embedding  Pre-training Of Unlabeled Nodes  
RPT: Toward Transferable Model on Heterogeneous Researcher Data via Pre-Training Journal article
Qiao, Ziyue, Fu, Yanjie, Wang, Pengyang, Xiao, Meng, Ning, Zhiyuan, Zhang, Denghui, Du, Yi, Zhou, Yuanchun. RPT: Toward Transferable Model on Heterogeneous Researcher Data via Pre-Training[J]. IEEE Transactions on Big Data, 2022, 9(1), 186-199.
Authors:  Qiao, Ziyue;  Fu, Yanjie;  Wang, Pengyang;  Xiao, Meng;  Ning, Zhiyuan; et al.
TC[WOS]:7 TC[Scopus]:9  IF:7.5/5.8 | Submit date:2022/05/17
Pre-training  Contrastive Learning  Transformer  Graph Representation Learning