iMIMIC 2020: Workshop on Interpretability of Machine Intelligence in Medical Image Computing at MICCAI 2020
Link: https://imimic-workshop.com
Call For Papers
CALL FOR PAPERS: iMIMIC @ MICCAI 2020
Workshop on Interpretability of Machine Intelligence in Medical Image Computing at MICCAI 2020

iMIMIC 2020 workshop: October 4, 2020, Lima, Peru (https://imimic-workshop.com)
MICCAI 2020 conference: October 4-8, 2020, Lima, Peru (https://www.miccai2020.org/)
Submission: (https://cmt3.research.microsoft.com/IMIMIC2020)

OVERVIEW

The annual MICCAI conference attracts world-leading biomedical scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer-assisted intervention. Machine learning (ML) systems achieve remarkable performance at the cost of increased complexity. As a result, they become less interpretable, which may cause distrust. As these systems are pervasively introduced to critical domains, such as medical image computing and computer-assisted intervention (MICCAI), it becomes imperative to develop methodologies to explain their predictions. Such methodologies would help physicians decide whether to follow or trust a prediction, and could facilitate the deployment of such systems from a legal perspective. Ultimately, interpretability is closely related to AI safety in healthcare. However, there is very limited work on the interpretability of ML systems within MICCAI research.

Besides increasing trust and acceptance by physicians, interpretability of ML systems can be helpful during method development, for instance by inspecting whether the model learns aspects coherent with domain knowledge, or by studying failures. It may also help reveal biases in the training data, or identify the most relevant data (e.g., specific MRI sequences in multi-sequence acquisitions). This is critical since the rise of chronic conditions has led to continuous growth in the usage of medical imaging, while reimbursements have been declining. Hence, improved productivity through the development of more efficient acquisition protocols is urgently needed.

The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2020 aims to introduce the challenges and opportunities related to the interpretability of ML systems in the context of MICCAI.

SCOPE

Interpretability can be understood as an explanation of a machine learning system, and can be broadly categorized as global or local: the former explains the model and how it learned, while the latter explains individual predictions. Visualization is often useful for assisting model interpretation. A model's uncertainty may also serve as a proxy for interpreting it, by identifying difficult instances. Still, although some approaches for tackling machine learning interpretability exist, the field lacks a formal and clear definition and taxonomy, as well as general approaches. Additionally, interpretability results often rely on comparing explanations with domain knowledge, so objective, quantitative, and systematic evaluation methodologies are needed. Two small illustrative sketches of methods in scope follow the topic list below.

Covered topics include, but are not limited to:
- Definition of interpretability in the context of medical image analysis.
- Visualization techniques useful for model interpretation in medical image analysis.
- Local explanations for model interpretability in medical image analysis.
- Methods to improve transparency of machine learning models commonly used in medical image analysis.
- Textual explanations of model decisions in medical image analysis.
- Uncertainty quantification in the context of model interpretability.
- Quantification and measurement of interpretability.
- Legal and regulatory aspects of model interpretability in medicine.
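As an illustration of the kind of local explanation in scope, the minimal sketch below computes a vanilla gradient saliency map for a toy image classifier. This is a generic example, not a method from the workshop or any submission: the model, input shape, and class labels are placeholders, and PyTorch is assumed only for concreteness.

import torch
import torch.nn as nn

# Toy CNN standing in for a medical image classifier (hypothetical).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., benign vs. malignant
)
model.eval()

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: |d score / d input| per pixel."""
    image = image.detach().clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().squeeze()

x = torch.randn(1, 1, 64, 64)  # placeholder for a grayscale scan
saliency = gradient_saliency(model, x, target_class=1)
print(saliency.shape)          # torch.Size([64, 64])

Overlaying such a map on the input image is one common way to visualize which regions drove an individual prediction.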
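Similarly, using uncertainty as an interpretability proxy can be sketched with Monte Carlo dropout: keeping dropout active at test time and sampling several forward passes, so that the spread across samples flags difficult instances. Again, a minimal hypothetical sketch; the architecture and sample count are illustrative.

import torch
import torch.nn as nn

# Minimal classifier with dropout (hypothetical stand-in).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

def mc_dropout_predict(model, image, n_samples=20):
    """Average softmax outputs over stochastic forward passes."""
    model.train()  # keeps dropout layers active at test time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=1) for _ in range(n_samples)
        ])
    return probs.mean(0), probs.std(0)

x = torch.randn(1, 1, 64, 64)
mean, std = mc_dropout_predict(model, x)
print(mean, std)  # a high std suggests an instance worth closer inspection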
IMPORTANT DATES

Submission Deadline: July 14, 2020.
Notification of Acceptance: July 31, 2020.
Camera-ready Deadline: August 7, 2020.
Workshop: October 4, 2020.

KEYNOTE SPEAKERS

Himabindu Lakkaraju, Harvard University, USA.
Wojciech Samek, Fraunhofer HHI, Germany.

VENUE

The iMIMIC workshop will be held in the morning of October 4 as a workshop of MICCAI 2020. In light of the ongoing COVID-19 pandemic, the MICCAI 2020 Conference Organizing team and the MICCAI Society Board have decided to hold the MICCAI 2020 annual meeting, planned for October 4-8, 2020 in Lima, Peru, as a fully virtual conference. More information regarding the venue can be found at the conference website (https://www.miccai2020.org/en/CONFERENCE-VENUE.html).

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published elsewhere. Authors should prepare a manuscript of 8 pages, excluding references, formatted according to the Lecture Notes in Computer Science (LNCS) style. All submissions will be reviewed by 3 reviewers; the reviewing process will be single-blinded. Authors will be asked to disclose possible conflicts of interest, such as cooperation within the previous two years, and care will be taken to avoid reviewers from the same institution as the authors. Papers will be selected based on their relevance to medical image analysis, significance of results, technical and experimental merit, and clarity of presentation.

Authors should submit their articles as a single PDF file via the submission website (https://cmt3.research.microsoft.com/IMIMIC2020) no later than July 14, 2020. Notification of acceptance will be sent by July 31, 2020, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by August 7, 2020. We intend to join the MICCAI Satellite Events joint proceedings and publish the accepted papers in LNCS. We are also considering making pre-prints of the accepted papers publicly available. The best paper will receive a 300€ award.

ORGANIZING COMMITTEE

Jaime S. Cardoso, INESC TEC and University of Porto, Portugal.
Pedro H. Abreu, CISUC and University of Coimbra, Portugal.
Ivana Isgum, Amsterdam University Medical Center, The Netherlands.
José P. Amorim, CISUC and University of Coimbra, Portugal (Publicity Chair).
Wilson Silva, INESC TEC and University of Porto, Portugal (Program Chair).
Ricardo Cruz, INESC TEC and University of Porto, Portugal (Sponsor Chair).

PROGRAM COMMITTEE

Ben Glocker, Imperial College London, United Kingdom.
Bjoern Menze, Technical University of Munich, Germany.
Carlos A. Silva, University of Minho, Portugal.
Christoph Molnar, Ludwig Maximilian University of Munich, Germany.
Claes Nøhr Ladefoged, Rigshospitalet, Denmark.
Dwarikanath Mahapatra, Inception Institute of AI, Abu Dhabi, UAE.
George Panoutsos, University of Sheffield, United Kingdom.
Hrvoje Bogunovic, Medical University of Vienna, Austria.
Islem Rekik, Istanbul Technical University, Turkey.
Joana Santos, University of Coimbra, Portugal.
Miriam Santos, University of Coimbra, Portugal.
Nick Pawlowski, Imperial College London, United Kingdom.
Sérgio Pereira, Lunit, South Korea.
Ute Schmid, University of Bamberg, Germany.