XAI^3 Workshop 2023: Joint workshops on XAI methods, challenges and applications at the 26th European Conference on Artificial Intelligence
Link: https://xai3ecai2023.github.io/

Call for Papers
Welcome to the Joint workshops on XAI methods, challenges and applications (XAI^3), where we aim to discuss opportunities for the new generation of explainable AI (XAI) methods that are reliable, robust, and trustworthy. Explainability of AI models and systems is crucial for humans to trust and use intelligent systems, yet the utility of such systems in high-risk applications such as healthcare and industry has been severely limited. Our workshop will have three tracks: medical, industry, and future challenges, where we will explore the challenges and opportunities in creating useful XAI methods for medical applications, integrating explainability into highly automated industrial processes, and evaluating current and future XAI methods. We welcome contributions from researchers in academia and industry, primarily from a technical and application point of view, but also from an ethical and sociological perspective. Join us in discussing the latest developments in XAI and their practical applications at the 26th European Conference on Artificial Intelligence (ECAI 2023) in Kraków, Poland.

Dates
* Paper submission deadline: June 12, 2023
* Decision notification: August 02, 2023
* Camera-ready due: August 15, 2023
All times are Anywhere on Earth (AoE), UTC-12.

Tracks and Topics of interest

Towards Explainable AI 2.0 (XAI2.0)
Chair: Przemysław Biecek
* Emerging challenges in explainable AI towards XAI 2.0
* Evaluation and limitations of current XAI methods
* Trade-off between model-agnostic and model-specific explainability
* Adversarial attacks and defenses in XAI
* Privacy, leakage of sensitive information, fairness and bias
* Human-centered XAI through visualization, active learning, model improvement and debugging
* XAI beyond classification and regression, e.g. in unsupervised learning, image segmentation, survival analysis

Explainable AI for Medical Applications (XAIM)
Chair: Neo Christopher Chung
* Theory and application of XAI for medical imaging and other medical applications
* Uncertainty estimation of AI models using medical data
* Multimodal learning, e.g., PET/CT, healthcare records, genomics, and other heterogeneous datasets
* Clinical cases, evaluation, and software of XAI for medicine
* Fairness, bias, and transparency in medical AI models
* Human-computer interaction (HCI) and human-in-the-loop (HITL) approaches in medicine
* Inherently interpretable models in supervised, unsupervised and semi-supervised learning for biology and medicine

XAI for Industry 4.0 & 5.0 (XAI4I)
Chair: Sławomir Nowaczyk
* Ethical considerations in industrial deployment of AI
* AI transparency and accountability in smart factories
* Explainable systems fusing various sources of industrial information
* XAI in performance and efficiency of industrial systems
* Prediction of maintenance, product, and process quality
* Data and information fusion in the industrial XAI context
* Applications in manufacturing systems, production processes, energy, power, and transport systems

Submission instructions
Submissions should follow the ECAI 2023 guidelines (https://ecai2023.eu/ECAI2023) and use the anonymized ECAI 2023 format available at https://vtex-soft.github.io/texsupport.iospress-ecai. Papers may be up to 7 pages; the page limit does not include references and supplementary material. Overlength or non-anonymized submissions will be rejected without review.

For more information, visit https://xai3ecai2023.github.io/