Residential College | false
Status | Published
Title | Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks
Author | Zhan, Shichen1; Wu, Yebo1; Tian, Chunlin1; Zhao, Yan2; Li, Li1
Date Issued | 2024-09
Conference Name | 32nd IEEE/ACM International Symposium on Quality of Service, IWQoS 2024 |
Source Publication | 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS) |
Conference Date | 19-21 June 2024 |
Conference Place | Guangzhou, China |
Country | China |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Abstract | Federated learning (FL) coordinates multiple devices to collaboratively train a shared model while preserving data privacy. However, the large memory footprint and high energy consumption of the training process exclude low-end devices from contributing to the global model with their own data, which severely deteriorates model performance in real-world scenarios. In this paper, we propose FedStitch, a hierarchical coordination framework for heterogeneous federated learning with pre-trained blocks. Unlike traditional approaches that train the global model from scratch, FedStitch composes the global model for a new task by stitching pre-trained blocks. Specifically, each participating client selects the block that best suits its local data from a candidate pool composed of blocks from pre-trained models. The server then aggregates these selections and picks the optimal block for stitching. This process iterates until a new stitched network is generated. Beyond this new training paradigm, FedStitch consists of three core components: 1) an RL-weighted aggregator and 2) a search space optimizer deployed on the server side, and 3) a local energy optimizer deployed on each participating client. The RL-weighted aggregator helps select the right block in non-IID scenarios, while the search space optimizer continuously reduces the size of the candidate block pool during stitching. Meanwhile, the local energy optimizer minimizes the energy consumption of each client while guaranteeing overall training progress. The results demonstrate that, compared to existing approaches, FedStitch improves model accuracy by up to 20.93%. At the same time, it achieves up to 8.12× speedup, reduces the memory footprint by up to 79.5%, and saves up to 89.41% of energy during the learning procedure.
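The abstract outlines an iterative select-and-stitch loop: clients score candidate blocks on local data, the server aggregates the selections, and the chosen block is appended until the network is complete. The following is a minimal, illustrative Python sketch of that loop, not the authors' implementation: block names and random proxy scores stand in for real pre-trained blocks and local-data evaluation, a simple vote count stands in for the RL-weighted aggregator, and all names (CANDIDATE_POOL, client_score, run_stitching) are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical stand-ins: in the paper, blocks come from pre-trained models
# and clients evaluate them on local data; here a block is just a name and
# the score is a seeded random proxy so the sketch runs end to end.
CANDIDATE_POOL = [f"block_{i}" for i in range(8)]
NUM_CLIENTS = 5
TARGET_DEPTH = 4

def client_score(block, client_id, partial_model):
    """Proxy for how well `block` extends the partially stitched model
    on this client's local data (assumption: any scalar fitness works)."""
    random.seed(hash((block, client_id, len(partial_model))))
    return random.random()

def run_stitching():
    stitched, pool = [], list(CANDIDATE_POOL)
    while len(stitched) < TARGET_DEPTH and pool:
        # Each client votes for the candidate block that best fits its data.
        votes = defaultdict(float)
        for cid in range(NUM_CLIENTS):
            best = max(pool, key=lambda b: client_score(b, cid, stitched))
            votes[best] += 1.0  # FedStitch would weight this vote via RL
        chosen = max(votes, key=votes.get)
        stitched.append(chosen)
        pool.remove(chosen)  # stand-in for the search-space optimizer shrinking the pool
    return stitched

print(run_stitching())
```

In the paper the vote weighting is learned (the RL-weighted aggregator, to counter non-IID bias) and the pool is pruned more aggressively by the search space optimizer; this sketch only fixes the control flow of the coordination round.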
Keyword | Federated Learning; Pre-training; Resource-efficient Training; Performance Evaluation; Energy Consumption; Accuracy; Memory Management; Quality of Service
DOI | 10.1109/IWQoS61813.2024.10682959 |
Language | 英語English |
Scopus ID | 2-s2.0-85206349116 |
Document Type | Conference paper |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) |
Corresponding Author | Li, Li |
Affiliation | 1.University of Macau, State Key Laboratory of Internet of Things for Smart City, Macao 2.Bytedance Inc., China |
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Zhan, Shichen, Wu, Yebo, Tian, Chunlin, et al. Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks[C]//2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS). Institute of Electrical and Electronics Engineers Inc., 2024.
APA | Zhan, Shichen, Wu, Yebo, Tian, Chunlin, Zhao, Yan, & Li, Li. (2024). Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained Blocks. 2024 IEEE/ACM 32nd International Symposium on Quality of Service (IWQoS).
Files in This Item: | There are no files associated with this item. |