STAI 2026: Workshop on Secure and Trustworthy AI, co-located with ECML-PKDD 2026
Link: https://stai-workshop.org/

Call For Papers

================================================================================
STAI 2026
The 1st Workshop on Secure and Trustworthy AI
September 7, 2026 - Naples, Italy
https://stai-workshop.org/

co-located with

ECML-PKDD 2026
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery
September 7-11, 2026 - Naples, Italy
https://ecmlpkdd.org/2026/
================================================================================

OVERVIEW
--------

We invite submissions to the 1st STAI Workshop on Secure and Trustworthy AI, to be held in conjunction with ECML-PKDD 2026, the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery.

The increasing adoption of Artificial Intelligence (AI) technologies in critical infrastructure and decision-making processes has made AI-driven components a foundational part of complex software, cyber-physical, and socio-technical systems (e.g., malware detection, fraud detection, autonomous driving, and biometric systems). In these settings, AI outputs directly influence automated and human-in-the-loop decisions, making failures consequential beyond technical performance and raising fundamental concerns regarding the security, robustness, transparency, and trustworthiness of AI systems.

While machine learning has demonstrated remarkable performance across a wide range of applications, a growing body of research has shown that AI systems are inherently vulnerable. Adversarial manipulation can compromise not only model predictions but also other critical properties of AI systems, exposing organizations and individuals to significant risks. Vulnerabilities such as adversarial examples, neural backdoors, bias, privacy leakage, and lack of transparency can undermine safety, reliability, and public trust, particularly in security- and safety-critical environments.

This workshop addresses the challenge of holistically securing AI systems beyond an accuracy-centric perspective. It focuses on vulnerabilities and defense strategies across the full AI lifecycle, including adversarial learning, security-critical AI applications, and the role of auxiliary components that support model deployment, interpretation, and human oversight. In particular, the workshop emphasizes the security implications of mechanisms such as explainability, uncertainty estimation, and system-level constraints, considering them as integral parts of AI systems rather than isolated add-ons.

STAI welcomes both research papers reporting results from mature or recently published work and more speculative papers describing new ideas or preliminary exploratory work. Papers reporting industry experiences and case studies are also encouraged.

Submissions are accepted in two formats:

- Regular research papers of 12 to 16 pages, including references. To be included in the proceedings, research papers must be original, not previously published, and not concurrently submitted elsewhere.

- Short research statements of at most 6 pages, including references. Research statements aim to foster discussion and collaboration. They may review previously published research or outline emerging ideas. Papers based on recently published work will not be considered for publication in the proceedings.
TOPICS OF INTEREST
------------------

Topics of interest include, but are not limited to:

- Trustworthy and secure-by-design training and AI pipelines
- Adversarial machine learning
- Evasion, poisoning, backdoor, jailbreak, physical-world, and supply-chain attacks
- Prompt injection and security risks in foundation and generative models
- Model extraction, inversion, membership inference, and other privacy attacks
- Attacks on explanations, uncertainty estimation, confidence calibration, and sustainability constraints
- Robustness and defenses against adversarial and system-level attacks
- System-level robustness assessment beyond predictive accuracy
- Trustworthiness evaluation metrics and holistic AI security benchmarks
- Explainability of machine learning and deep learning models
- Explainable AI for the explanation of AI-based security systems
- Explainable AI to improve the accuracy of AI models
- Explainable AI to improve the robustness of AI models against malicious attacks
- Attacks on explainability methods and explanation manipulation
- Privacy and information leakage through interpretability mechanisms
- Privacy-preserving learning and differential privacy under adversarial settings
- Human-in-the-loop security and adversarial decision manipulation
- Security and trustworthiness of agentic and autonomous AI systems
- Applications of AI to improve security in safety-critical domains (e.g., cybersecurity, fraud detection, biometrics, autonomous systems)
- Artificial Intelligence for cyber threat detection (e.g., malware detection, intrusion detection, spam detection)
- Data-centric security, including poisoning detection, secure data curation, and lifecycle protection

SUBMISSION GUIDELINES
---------------------

All papers must be written in English and formatted according to the Springer LNCS style (available at https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines?countryChanged=true).

Contributions should be submitted electronically in PDF format via the workshop submission site at https://cmt3.research.microsoft.com/ECMLPKDDWT2026/Track/34/Submission/Create.

PROCEEDINGS
-----------

Accepted papers will be included in the ECML-PKDD 2026 workshop post-proceedings, which are expected to be published as a Springer CCIS volume jointly with the other ECML-PKDD 2026 workshops, as in previous years.

IMPORTANT DATES (11:59pm AoE)
-----------------------------

Paper submission deadline: June 5, 2026
Acceptance notification: June 27, 2026
Camera-ready deadline: July 10, 2026
Workshop date: September 7, 2026 (afternoon)

WORKSHOP CHAIRS
---------------

Giuseppina Andresini, University of Bari, Italy
Antonio Emanuele Cinà, University of Genoa, Italy
Christian Wressnegger, Karlsruhe Institute of Technology (KIT), KASTEL Security Research Labs, Germany