
NFFL 2021 : New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership (NeurIPS 2021 Workshop)


Link: https://neurips2021workshopfl.github.io/NFFL-2021/
 
When Dec 13, 2021 - Dec 13, 2021
Where Virtual
Submission Deadline Oct 1, 2021
Notification Due Oct 22, 2021
Final Version Due Oct 29, 2021
Categories    federated learning, robustness, privacy, fairness
 

Call For Papers

Federated Learning (FL) has recently emerged as the de facto framework for distributed machine learning (ML) that preserves the privacy of data, especially amid the proliferation of mobile and edge devices with their increasing capacity for storage and computation. To fully utilize the vast amount of geographically distributed, diverse and privately owned data stored across these devices, FL provides a platform on which local devices build their own local models whose training processes are synchronized by sharing differential parameter updates. This is done without exposing their private training data, which mitigates the risk of privacy violations in light of recent policies such as the General Data Protection Regulation (GDPR). This potential has since drawn intense attention from the ML community, resulting in a vast and growing body of theoretical and empirical literature that pushes FL closer to being the new standard of ML as a democratized data analytics service.
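To make the update-sharing pattern above concrete, here is a minimal FedAvg-style sketch in plain NumPy. It is purely illustrative and not part of the workshop material: the clients, the linear least-squares model and all hyperparameters below are hypothetical, and only parameter deltas (never raw data) are communicated to the server.

import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # Each client refines the global model on its private data and
    # returns only the resulting parameter delta, never (X, y) itself.
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w - w_global

# Three simulated clients, each holding a private synthetic dataset.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

w = np.zeros(3)  # shared global model maintained by the server
for round_ in range(10):
    deltas = [local_update(w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Server averages the deltas, weighted by local dataset size (FedAvg).
    w += np.average(deltas, axis=0, weights=sizes)

In practice, the shared updates are typically protected further, e.g. via secure aggregation or differential privacy, which connects directly to the workshop topics listed below.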

Interestingly, as FL moves closer to real-world deployment, it also surfaces a growing set of challenges around trustworthiness, fairness, auditability, scalability, robustness, security, privacy preservation, decentralizability, data ownership and personalizability, all of which are becoming increasingly important in many interrelated aspects of our digitized society. These challenges are particularly pressing in economic landscapes without big tech corporations and their big data, which are instead driven by government agencies and institutions whose valuable data is locked up, or by small-to-medium enterprises and start-ups with limited data and little funding. With this in mind, the workshop envisions the establishment of an AI ecosystem that facilitates data and model sharing between data curators and parties interested in the data and models, while protecting personal data ownership.

This raises the following questions:

1. Data curators may own different types of ML models. Given their interest in protecting their intellectual property (IP), there is no reason to believe that they would be willing to share information on their model architectures or parameters. If we are to facilitate meaningful collaboration in such cases, how then do data curators aggregate and distill latent knowledge from their heterogeneous, black-box models and bring the distilled model(s) home for future use? (An illustrative sketch of this setting appears after this list.)

2. How do we incentivize data curators to come together and share their data for model building? How does a participant know that the other data curator(s) are contributing valuable, authentic and safe data to the collaboration (and vice versa)? This calls for data auditability and for fairness in data sharing based on the participants' respective contributions. Furthermore, as far as personal data ownership goes, how do we guarantee the right to be forgotten with respect to a participant's data footprint?
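
As an illustration of the black-box aggregation question raised in (1), the following hedged sketch distills an ensemble of heterogeneous teachers, each visible only through a prediction interface, into a single student model using a shared unlabeled transfer set. Every teacher, dataset and hyperparameter here is synthetic and hypothetical; this is one possible approach, not a prescribed solution.

import numpy as np

def make_teacher(seed, dim):
    # A "black-box" curator model: only its prediction function is exposed;
    # its architecture and parameters stay private (here, a random logistic model).
    w = np.random.default_rng(seed).normal(size=dim)
    return lambda X: 1.0 / (1.0 + np.exp(-X @ w))

teachers = [make_teacher(s, dim=3) for s in (1, 2, 3)]

# Shared unlabeled transfer set; no curator's private training data is exchanged.
X_transfer = np.random.default_rng(0).normal(size=(200, 3))
soft_targets = np.mean([t(X_transfer) for t in teachers], axis=0)

# Distill the averaged soft predictions into a student logistic model by
# gradient descent on the squared error (kept deliberately simple).
w_student = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X_transfer @ w_student))
    grad = X_transfer.T @ ((p - soft_targets) * p * (1 - p)) / len(p)
    w_student -= 0.5 * grad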

We believe addressing these challenges will mark another key milestone in shaping FL as a democratized ML service supported by a trustworthy AI ecosystem built on the aforementioned concepts.

We invite researchers to submit work in (but not limited to) the following areas:

- Personalized Federated Learning and/or Meta Learning.
- Differential Privacy in Federated Learning.
- Fairness in Federated Learning.
- Optimization for Large-Scale Federated Learning Systems.
- Certifiable Robustness for Federated Learning.
- Trustworthiness, Auditability and Verification in Federated Learning.
- Model Aggregation and Protecting Personal Data Ownership.

Submissions will be double blind: reviewers will not see author names while reviewing, and authors will not know the identities of their reviewers. We use CMT to host paper submissions and to disseminate reviews and recommendations. The program will include keynote presentations from invited speakers, oral presentations and posters.

Authors can revise their papers as many times as needed up to the submission deadline; changes will not be allowed once the deadline has passed. Papers must be submitted via CMT, and the submission site will open in August. Each submission is expected to conform strictly to the NeurIPS 2021 format, with an upper limit of 7 pages for the main text and unlimited additional pages for references. Authors may also include as many pages of appendices as they wish, but reviewers are not required to read them. Authors have the right to withdraw papers from consideration at any time.

Submission Site: https://cmt3.research.microsoft.com/NFFL2021/

Organizing Committee:
- Nghia Hoang, Senior Research Scientist, AWS AI Labs, Amazon
- Lam M. Nguyen, Research Staff Member, IBM Research, Thomas J. Watson Research Center
- Pin-Yu Chen, Research Staff Member, IBM Research, Thomas J. Watson Research Center
- Tsui-Wei (Lily) Weng, Assistant Professor, UC San Diego / MIT-IBM Watson AI Lab
- Sara Magliacane, Assistant Professor, University of Amsterdam
- Kian Hsiang (Bryan) Low, Associate Professor, National University of Singapore
- Anoop Deoras, Applied Research Manager, AWS AI Labs, Amazon
