AISafety 2022 : IJCAI-ECAI-22 Workshop on Artificial Intelligence Safety
Link: https://www.aisafetyw.org/

Call For Papers
In the last decade, there has been growing concern about the risks of Artificial Intelligence (AI). Safety is becoming increasingly relevant as humans are progressively removed from the decision and control loops of intelligent, learning-enabled machines. In particular, the technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which AI algorithms, especially Machine Learning (ML) algorithms, interact with the physical world at increasing levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research focused on more capable future AI systems, up to and including Artificial General Intelligence (AGI).
This workshop seeks to explore new ideas on AI safety, with a particular focus on addressing the following questions:
* How can we engineer trustable AI software architectures?
* Do we need to specify and use bounded morality in system engineering to make AI-based systems more ethically aligned?
* What is the status of existing approaches for ensuring AI and ML safety, and what are the gaps?
* What safety engineering considerations are required to develop safe human-machine interaction in automated decision-making systems?
* What AI safety considerations and experiences are relevant from industry?
* How can we characterise or evaluate AI systems according to their potential risks and vulnerabilities?
* How can we develop solid technical visions and paradigm-shift articles about AI safety?
* How do metrics of capability and generality affect the level of risk of a system, and how can trade-offs with performance be found?
* How do AI system features, for example ethics, explainability, transparency and accountability, relate to, or contribute to, its safety?
* How should AI safety be evaluated?

TOPICS
---------
We invite theoretical, experimental and position papers covering any aspect of AI safety, including but not limited to:
* Safety in AI-based system architectures
* Continuous V&V and predictability of AI safety properties
* Runtime monitoring and (self-)adaptation of AI safety
* Accountability, responsibility and liability of AI-based systems
* Explainable AI and interpretable AI
* Avoiding negative side effects in AI-based systems
* Role and effectiveness of oversight: corrigibility and interruptibility
* Loss of values and the catastrophic forgetting problem
* Confidence, self-esteem and the distributional shift problem
* Safety of AGI systems and the role of generality
* Reward hacking and training corruption
* Self-explanation, self-criticism and the transparency problem
* Human-machine interaction safety
* Regulating AI-based systems: safety standards and certification
* Human-in-the-loop and the scalable oversight problem
* Evaluation platforms for AI safety
* AI safety education and awareness
* Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics and critical infrastructures, among others

IMPORTANT DATES
---------
* Paper submission: May 13, 2022
* Notification of acceptance: June 03, 2022
* Camera-ready submission: June 17, 2022

SUBMISSION AND SELECTION
---------
You are invited to submit:
* Full Technical Papers (6-8 pages)
* Proposals for Technical Talks (up to a one-page abstract, including a short bio of the main speaker)
* Position Papers (4-6 pages)

Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=aisafety2022

Please format your paper according to the IJCAI-ECAI Formatting Instructions. Formatting guidelines, LaTeX styles and the Word template are available at https://www.ijcai.org/authors_kit

Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process; however, anonymized submissions are also accepted. We are happy to receive papers that were not accepted for IJCAI-ECAI, and we welcome the review comments if the authors wish to include them as additional material.

The workshop proceedings will be published on CEUR-WS (http://ceur-ws.org/). A journal special issue is also planned after the workshop.