posted by organizer: koo_ec

AI Safety 2024 : Special Issue for the Journal Frontiers in Robotics and AI on AI Safety: Safety Critical Systems


Link: https://www.frontiersin.org/research-topics/57900/ai-safety-safety-critical-systems
 
When N/A
Where N/A
Abstract Registration Due Nov 13, 2023
Submission Deadline Mar 12, 2024
Notification Due Apr 28, 2024
Final Version Due Jun 20, 2024
Categories: artificial intelligence, deep learning, safety, software, safe AI
 

Call For Papers

A robotic system is autonomous when it can operate in a real-world environment for an extended time without being controlled by humans. As artificial intelligence (AI) continues to advance, it is increasingly applied in safety-critical autonomous systems to perform complex tasks where failures can have catastrophic consequences. Examples of such safety-critical autonomous systems include self-driving cars, surgical robots, and unmanned aerial vehicles operating in urban environments.

Therefore, as AI technology becomes more pervasive, it is crucial to address the challenges associated with deploying AI in safety-critical systems. These systems must adhere to stringent safety requirements to ensure the well-being of individuals and the environment.

Despite the great success of AI, the use of Deep Learning models presents new dependability challenges, such as the lack of well-defined specifications, the black-box nature of the models, the high dimensionality of data, and the over-confidence of neural networks on out-of-distribution data.

To cope with these issues, a new field has emerged: AI Safety. AI Safety is a multidisciplinary domain at the intersection of AI, Software Engineering, Safety Engineering, and Ethics. It is an essential and challenging topic that aims to improve the safety of, and to provide certifiably safe, safety-critical autonomous systems powered by AI. It involves mitigating the risks associated with AI failures, ensuring the robustness and resilience of AI algorithms, enabling human-AI collaboration, and addressing ethical concerns in critical domains.

This Research Topic aims to gather cutting-edge research, insights, and methodologies in the field of AI safety, focusing specifically on safety-critical systems. We invite original contributions in the form of research articles, survey papers, case studies, and reviews that explore various aspects of AI safety for safety-critical systems.

The topics of interest include, but are not limited to:
• Risk assessment and management for AI in safety-critical systems
• Verification and validation techniques for AI-driven systems
• Explainability (interpretability) of AI models in safety-critical domains
• Robustness and resilience of AI algorithms and systems
• Human-AI interaction and collaboration in safety-critical settings
• Ethical considerations and responsible AI practices for safety-critical systems
• Regulatory frameworks and standards for AI safety in critical domains
• Case studies and practical applications of AI safety in real-world scenarios
