Mix2FLD: Downlink Federated Learning after Uplink Federated Distillation with Two-Way Mixup

Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong Lyun Kim

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
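The two-way mixup idea above rests on the fact that linear mixup is algebraically invertible when the mixing coefficients are known. The following is a minimal NumPy sketch of that algebra only: the function names, the fixed λ, and the paired-upload arrangement are illustrative assumptions, and the paper's inverse mixup operates across different devices (producing new mixed samples rather than exactly recovering raw ones, which is what preserves privacy).

```python
import numpy as np

def mixup(x1, x2, lam):
    """Device side: upload lam*x1 + (1-lam)*x2 instead of the raw samples."""
    return lam * x1 + (1.0 - lam) * x2

def inverse_mixup(m_a, m_b, lam):
    """Server side: given two mixed samples built from the same pair in
    swapped order, solve the 2x2 linear system to separate the components.
    Requires lam != 0.5, otherwise the system is singular."""
    det = lam ** 2 - (1.0 - lam) ** 2
    x1 = (lam * m_a - (1.0 - lam) * m_b) / det
    x2 = (lam * m_b - (1.0 - lam) * m_a) / det
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=4), rng.normal(size=4)
lam = 0.8
m_a = mixup(x1, x2, lam)  # hypothetical upload from device A
m_b = mixup(x2, x1, lam)  # hypothetical upload from device B (roles swapped)
r1, r2 = inverse_mixup(m_a, m_b, lam)
print(np.allclose(r1, x1), np.allclose(r2, x2))  # True True
```

The key design point this illustrates is that mixup hides individual samples from any single observer of one upload, while a server holding matched mixed pairs can still re-combine them linearly to synthesize useful training data for the output-to-parameter conversion step.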

Original language: English
Article number: 9121290
Pages (from-to): 2211-2215
Number of pages: 5
Journal: IEEE Communications Letters
Volume: 24
Issue number: 10
DOIs
Publication status: Published - 2020 Oct

Bibliographical note

Funding Information:
Manuscript received May 14, 2020; accepted June 3, 2020. Date of publication June 19, 2020; date of current version October 9, 2020. This work was partly supported by Institute of Information and Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00170, Virtual Presence in Moving Objects through 5G), Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017R1A2A2A05069810), the Academy of Finland Project MISSION, SMARTER, and the 2019 EU-CHISTERA Projects LeadingEdge and CONNECT. The associate editor coordinating the review of this letter and approving it for publication was L. Lampe. (Corresponding author: Seong-Lyun Kim.) Seungeun Oh, Eunjeong Jeong, and Seong-Lyun Kim are with the School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, South Korea (e-mail: seoh@ramo.yonsei.ac.kr; ejjeong@ramo.yonsei.ac.kr; slkim@ramo.yonsei.ac.kr).

Publisher Copyright:
© 1997-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Modelling and Simulation
  • Computer Science Applications
  • Electrical and Electronic Engineering

