SAFE-ML 2025: International Workshop on Secure, Accountable, and Verifiable Machine Learning
Link: https://conf.researchr.org/home/icst-2025/safe-ml-2025#Call-for-Papers

Call For Papers
Machine Learning (ML) models are becoming deeply integrated into our daily lives, with their use expected to expand even further in the coming years. However, as these models grow in importance, potential vulnerabilities — such as biased decision-making and privacy breaches — could result in serious unintended consequences.
The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025) aims to bring together experts from industry and academia, with software testing and ML backgrounds, to discuss and address these challenges. The focus will be on innovative methods and tools to ensure the correctness, robustness, security, and fairness of ML models and decentralized learning schemes.

Topics of the workshop cover, but are not limited to:
- Privacy preservation of ML models;
- Adversarial robustness of ML models;
- Security of ML models against poisoning attacks;
- Ensuring fairness and mitigating bias in ML models;
- Unlearning algorithms in ML;
- Unlearning algorithms in decentralized learning schemes, such as Federated Learning (FL), gossip learning, and split learning;
- Secure aggregation in FL;
- Robustness of FL models against malicious clients or model inversion attacks;
- Fault tolerance and resilience to client dropouts in FL;
- Secure model updates in FL;
- Proof of client participation in FL;
- Explainability and interpretability of ML algorithms;
- ML accountability.

Submission Format:
Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines. Submissions may fall into the following categories:
- Full Papers (up to 8 pages): Comprehensive presentations of mature research findings or industrial applications;
- Short Papers (up to 4 pages): Explorations of emerging ideas or preliminary research results;
- Position Papers (up to 2 pages): Statements outlining positions or open challenges that stimulate discussion and debate.

Submission site: https://easychair.org/my/conference?conf=icst2025. Please be sure to select "The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning" as the track for your submission.

Workshop Format:
This workshop is held as part of ICST 2025 and will be an in-person event in Naples, Italy. For details, see the main ICST website. Accepted paper presentations will have the following duration, depending on the paper type:
- Full Papers: 22 minutes (including Q&A);
- Short Papers: 15 minutes (including Q&A);
- Position Papers: 7 minutes (including Q&A);
- Panel Discussion.
SAFE-ML requires all presentations to be in-person.

Review Process:
The review process will follow a single-blind format, meaning authors are not required to anonymize their submissions.

Important Dates:
- Paper Submission: 3rd January 2025 (AoE)
- Decision Notification: 6th February 2025
- Camera-ready: 8th March 2025

Contacts:
Any doubts or queries can be addressed to the General Co-Chairs using the following e-mails:
- Carlo Mazzocca (cmazzocca@unisa.it)
- Alessio Mora (alessio.mora@unibo.it)