ACM TOMPECS 2024: Special Issue on Performance Evaluation of Federated Learning Systems
Link: https://dl.acm.org/pb-assets/static_journal_pages/tompecs/pdf/ACM-CFP-TOMPECS-FL.pdf
Call For Papers | |||||||||||||||
Federated learning has recently emerged as a popular privacy-preserving approach for training machine learning models on data that is scattered across multiple heterogeneous devices/clients. In federated learning, clients iteratively compute updates to the machine learning models on their local datasets. These updates are periodically aggregated across clients, typically but not always with the help of a central parameter server. In many real-world applications of federated learning, such as connected-and-autonomous vehicles (CAVs), the underlying distributed/decentralized systems on which federated learning algorithms execute exhibit a high degree of heterogeneity, including but not limited to data distributions, computation speeds, and external local environments. Moreover, the clients in federated learning systems are often resource-constrained edge or end devices and may compete for common resources such as communication bandwidth. Many federated learning algorithms have been proposed and analyzed experimentally and theoretically, yet these analyses cover only a limited range of heterogeneity. In addition, running federated learning in resource-constrained settings often presents complex and not well-understood tradeoffs among various performance metrics, including final accuracy, convergence rate, and resource consumption.
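As an illustration of the training loop sketched above, the following is a minimal Python/NumPy sketch of one federated-averaging round: each client takes a few local SGD steps on its own data, and a central server averages the returned weights, weighted by local dataset size. The toy linear-regression task, client sizes, and hyperparameters are assumptions for illustration only, not a prescribed implementation.

# Minimal sketch of federated averaging on a toy linear-regression task.
# All data, names, and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.01, local_steps=5):
    # A client runs a few SGD steps on its local least-squares objective.
    w = w.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(w_global, client_data):
    # One round: broadcast the global model, collect local updates,
    # and average them weighted by local dataset size.
    updates, sizes = [], []
    for X, y in client_data:                     # in practice, clients run in parallel
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)
    return sum(wk * uk for wk, uk in zip(weights, updates))

# Toy heterogeneous clients: different dataset sizes and feature distributions.
d = 3
w_true = np.array([1.0, -2.0, 0.5])
client_data = []
for n, shift in [(200, 0.0), (50, 1.0), (120, -0.5)]:
    X = rng.normal(shift, 1.0, size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    client_data.append((X, y))

w = np.zeros(d)
for t in range(50):
    w = federated_round(w, client_data)
print("estimated weights:", w)

Heterogeneity of the kind the call describes shows up directly in this sketch: the clients' datasets differ in size and distribution, so their local updates pull the global model in different directions, which is precisely what many of the solicited analyses aim to characterize.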
Topics: This special issue will focus on the performance evaluation of federated learning systems. We solicit papers that include theoretical models or numerical analysis of federated learning performance, as well as system-oriented papers that evaluate implementations of federated learning systems. Specific topics of interest include, but are not limited to:
• Novel techniques for analyzing the convergence of federated learning algorithms
• Performance analysis of emerging federated learning paradigms, e.g., personalized models, asynchronous learning, cache-enhanced learning
• Analysis of performance tradeoffs in federated learning systems
• Active client selection in federated learning
• Fairness metrics for federated learning systems
• Novel federated learning algorithms that aim to address system heterogeneity or other practical implementation challenges, e.g., dynamic client availability
• Benchmark platforms that enable evaluation of multiple federated learning algorithms
• New federated learning algorithms or analysis frameworks motivated by specific applications, e.g., large language models or recommendation systems
• Experimental results from large-scale federated learning deployments

Important Dates:
• Submission deadline: April 22, 2024
• First-round review decisions: June 30, 2024
• Deadline for revision submissions: August 31, 2024
• Notification of final decisions: October 15, 2024
• Tentative publication: December 1, 2024

Submission Information: Submissions should follow the standard ACM TOMPECS formatting requirements (https://dl.acm.org/journal/tompecs/author-guidelines#submission). We will use Manuscript Central (https://mc.manuscriptcentral.com/tompecs) to handle submissions. We look forward to your contributions.

Guest Editors:
Dr. Carlee Joe-Wong (cjoewong@andrew.cmu.edu)
Dr. Lili Su (l.su@northeastern.edu)