STRAI 2025: International Workshop on Secure, Trustworthy, and Robust AI
Link: https://2025.ares-conference.eu/program/strai/
Call For Papers
International Workshop on Secure, Trustworthy, and Robust AI
The increasing use of AI systems raises concerns about security, privacy, and trust. The security risks facing AI systems, the privacy implications of their use, and the requirements for trust establishment and management (both human-to-machine and machine-to-machine) are highly critical topics that demand concrete methodologies and solutions. The wide adoption of AI systems makes advances in AI security, trustworthiness, and robustness urgent. AI systems should be resilient to risks arising from their inherent limitations and protected against malicious actions that could compromise security, leading to harmful or undesirable outcomes.

The International Workshop on Secure, Trustworthy, and Robust AI (STRAI 2025) seeks to comprehensively explore the core principles of AI trustworthiness under an overarching umbrella, from multiple perspectives:

- Policy and Governance – addressing regulatory, ethical, and governance frameworks for accountable AI systems
- Ethics – investigating responsible AI practices, fairness, and societal impacts
- Human-AI Collaboration – enhancing human trust, oversight, and interaction in AI systems
- Technology and Techniques – developing robust, secure, and resilient AI systems
- Security and Privacy – protecting AI systems against adversarial attacks, data poisoning, and privacy breaches
- Resilience and Robustness – ensuring AI systems are reliable, safe, and adaptable to evolving threats

The workshop promotes interdisciplinary collaboration, examines real-world applications, and engages with policy and regulatory discussions. It invites contributions that provide innovative solutions and insights into the complex and evolving challenges of AI trustworthiness.
Topics of Interest

Topics of interest include, but are not limited to:

- Transparency, Explainability, and Interpretability of AI models
- Security and Privacy Issues in AI systems
- Trustworthy AI Systems and their evaluation metrics
- Evaluating Explainability and Trust in AI-enabled decision-making
- Misinformation Detection and defense mechanisms
- Data Poisoning and Adversarial Examples: attacks and countermeasures
- Audit Techniques for data and AI models to ensure accountability
- Fairness and Exclusion Studies (benchmarks and datasets)
- Evaluation Methods to ensure fair outcomes, especially for underrepresented groups
- Social Good and Participatory AI, and applications of the above principles to critical domains
- Robustness, Safety, and Security of AI systems
- Availability and Reliability of AI systems in critical applications
- AI, Surveillance, Privacy, Security, and Reliability – balancing innovation and ethical considerations
- Governance, Regulation, Control, Safety, and Security of AI systems
- Trustworthiness and Decision Making in AI-enabled systems
- Accountability, Responsibility, and Trustworthiness of AI systems
- AI Applications to Promote Security and Robustness
- Human Oversight and Control in AI systems
- Human Trust and Understanding of AI – building and maintaining trust in AI systems
- Human-Centered AI, Human-AI Interaction, and Human-AI Teaming – enhancing collaboration and co-adaptation
- Human-Machine Trust and Risk – understanding risk perception and trust calibration
- Performance Benchmarks for Trust in AI systems
- Public Perception and societal acceptance of trustworthy AI systems
- Case Studies and Simulation in the Energy, Manufacturing, Transportation, Health, Space, and other Industrial and Critical Infrastructure Sectors – demonstrating real-world applications and implications of trustworthy AI

Submission Guidelines

The submission guidelines for the workshop are the same as for the ARES conference. They can be found in the lower half of https://2025.ares-conference.eu/call-for-papers/