
AI Safety 2024: Special Issue of the Journal Frontiers in Robotics and AI on AI Safety: Safety Critical Systems


Link: https://www.frontiersin.org/research-topics/57900/ai-safety-safety-critical-systems
 
When: N/A
Where: N/A
Abstract Registration Due: Nov 13, 2023
Submission Deadline: Mar 12, 2024
Notification Due: Apr 28, 2024
Final Version Due: Jun 20, 2024
Categories: artificial intelligence, deep learning, safety, software, safe ai
 

Call For Papers

A robotic system is autonomous when it can operate in a real-world environment for an extended time without being controlled by humans. As artificial intelligence (AI) continues to advance, it is increasingly applied in safety-critical autonomous systems to perform complex tasks where failures can have catastrophic consequences. Examples of such safety-critical autonomous systems include self-driving cars, surgical robots, and unmanned aerial vehicles operating in urban environments.

Therefore, as AI technology becomes more pervasive, it is crucial to address the challenges associated with deploying AI in safety-critical systems. These systems must adhere to stringent safety requirements to ensure the well-being of individuals and the environment.

Despite the great success of AI, the use of deep learning models presents new dependability challenges, such as the lack of well-defined specifications, the black-box nature of the models, the high dimensionality of data, and the over-confidence of neural networks on out-of-distribution data.
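To make the last point concrete, the minimal sketch below uses plain NumPy with a hypothetical toy linear classifier whose random weights stand in for a trained model (the model, its dimensions, and the noise input are illustrative assumptions, not part of this call). It shows how a softmax output can assign near-certain probability to one class even when the input is pure noise, i.e. far outside any training distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(logits):
        # Numerically stable softmax.
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    # Hypothetical toy linear classifier: random weights stand in for a
    # trained model's parameters (assumption made purely for illustration).
    n_classes, n_features = 10, 784
    W = rng.normal(scale=2.0, size=(n_classes, n_features))
    b = rng.normal(size=n_classes)

    # An out-of-distribution input: pure noise rather than a real sample.
    x_ood = rng.normal(size=n_features)

    probs = softmax(W @ x_ood + b)
    print(f"predicted class: {probs.argmax()}, softmax 'confidence': {probs.max():.2%}")
    # The printed confidence is typically close to 100% even though the input
    # is meaningless to the model, illustrating the over-confidence problem
    # described above.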

Therefore, to cope with such issues, a new topic has emerged: AI Safety. AI Safety is a multidisciplinary domain that lies at the intersection of AI, Software Engineering, Safety Engineering, and Ethics. It is an essential and challenging topic that aims at improving safety and providing certifiably safe safety-critical autonomous systems powered by AI solutions. It involves mitigating risks associated with AI failures, ensuring the robustness and resilience of AI algorithms, enabling human-AI collaboration, and addressing ethical concerns in critical domains.

This Research Topic aims to gather cutting-edge research, insights, and methodologies in the field of AI safety, focusing specifically on safety-critical systems. We invite original contributions in the form of research articles, survey papers, case studies and reviews that explore various aspects of AI safety for safety-critical systems.

The topics of interest include, but are not limited to:
• Risk assessment and management for AI in safety-critical systems
• Verification and validation techniques for AI-driven systems
• Explainability (interpretability) of AI models in safety-critical domains
• Robustness and resilience of AI algorithms and systems
• Human-AI interaction and collaboration in safety-critical settings
• Ethical considerations and responsible AI practices for safety-critical systems
• Regulatory frameworks and standards for AI safety in critical domains
• Case studies and practical applications of AI safety in real-world scenarios

Related Resources

AIxRobotics 2025   International Conference on Artificial Intelligence x Robotics
Ei/Scopus-SGGEA 2025   2025 2nd Asia Conference on Smart Grid, Green Energy and Applications (SGGEA 2025)
DS 2025   28th International Conference on Discovery Science
ICDM 2025   The 25th IEEE International Conference on Data Mining
PJA 78 (1) 2027   AI, Art, and Ethics - The Polish Journal of Aesthetics
BINLP 2025   5th International Conference on Big Data, IOT & NLP
FPC 2025   Foresight Practitioner Conference 2025
IJRAP 2025   International Journal of Recent advances in Physics
CONVERSATIONS 2025   International Symposium on Chatbots and Human-centred AI
ICTAI 2025   IEEE 37th International Conference on Tools with Artificial Intelligence