RAIE 2024: 2nd International Workshop on Responsible AI Engineering
Link: https://conf.researchr.org/home/icse-2024/raie-2024

Call For Papers
The recent release of ChatGPT, Bard, and other large language model (LLM)-based chatbots has drawn huge global attention. The black-box nature of these systems and the rapid advancement of AI have sparked significant concerns about responsible AI. It is crucial to ensure that AI systems are developed and used responsibly throughout their entire lifecycle, and that they are trusted by the humans who are expected to use and rely on them.
A number of AI ethics principles have been published recently to which AI systems should conform, and some consensus around these principles has begun to emerge. A principle-based approach allows technology-neutral, future-proof, and context-specific interpretation and operationalisation. However, high-level AI ethics principles are far from sufficient to ensure trustworthy and responsible AI systems: there is a significant gap between high-level principles and low-level concrete engineering solutions. Without concrete methods and tools, practitioners are left with little beyond truisms. For example, operationalising the human-centred values principle is a challenging and complex task: how can it be designed for, implemented, and monitored throughout the entire lifecycle of an AI system?

Trustworthy and responsible AI challenges can occur at any stage of the AI system development lifecycle, cutting across the AI components, non-AI components, and data components of a system. New and improved software engineering approaches are required to ensure that AI systems are trustworthy throughout their entire lifecycle and trusted by those who use and rely on them. To enforce responsible AI requirements, the requirements need to be measurable, verifiable, and monitorable. We also need assessment mechanisms and engineering tools that systematically support the implementation of responsible AI requirements across all phases of AI application development, maintenance, and operations.

Achieving responsible AI engineering (i.e., building adequate software engineering tools to support the responsible engineering of AI systems) requires a good understanding of human expectations and of the context in which AI systems are used. Hence, the aim of this workshop is to bring together not only researchers and practitioners in software engineering and AI, but also social scientists and regulatory bodies, to build a community that targets the engineering challenges practitioners face in developing AI systems responsibly.

In this workshop, we are looking for cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help operationalise responsible AI. Topics of interest include, but are not limited to:

* Requirements engineering for responsible AI
* Software architecture and design of responsible AI systems
* Verification and validation of responsible AI systems
* DevOps, MLOps, MLSecOps, and LLMOps for responsible AI systems
* Development processes for responsible AI systems
* Responsible AI governance and assessment tools/techniques
* Reproducibility and traceability of AI systems
* Trust and trustworthiness of AI systems
* Human aspects of responsible AI engineering
* Responsible AI engineering for next-generation foundation-model-based AI systems (e.g., LLM-based)
* Regulatory and policy implications
* Education and training in responsible AI

The workshop will be highly interactive, including invited keynotes/talks and paper presentations covering different topics in the area of responsible AI engineering.