
EDL-AI 2020 : Explainable Deep Learning-AI / ICPR'2020 Workshop


Link: https://edl-ai-icpr.labri.fr/
 
When Sep 13, 2020 - Sep 18, 2020
Where Milan
Submission Deadline Jun 15, 2020
Notification Due Jul 15, 2020
Final Version Due Jul 30, 2020
Categories    machine learning   explainable ai   visualization
 

Call For Papers


About

The recent focus of the AI and Pattern Recognition communities on supervised learning approaches, and particularly on Deep Learning / AI, has resulted in a considerable increase in the performance of Pattern Recognition and AI systems, but has also raised questions about the trustworthiness and explainability of their predictions for decision-making. Instead of developing and using Deep Neural Networks as black boxes and adapting known architectures to a variety of problems, the goal of explainable Deep Learning / AI is to propose methods to “understand” and “explain” how these systems produce their decisions. AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings raise many ethical and policy concerns that impede wider adoption of this potentially very beneficial technology. In various Pattern Recognition and AI application domains, such as health, ecology, autonomous vehicles, security, and culture, it is essential to understand how the predictions correlate with the information perception and decision-making of experts. The goal of the workshop is to bring together the research community working on improving the explainability of AI and Pattern Recognition algorithms and systems. The workshop is part of ICPR'2020 and is supported by the XAI-LABRI research project.


Topics

“Sensing” or “salient features” of Neural Networks and AI systems: explanation of which features, for a given configuration, yield predictions in both spatial (images) and temporal (time-series, video) data;
Attention mechanisms in Deep Neural Networks and their explanation;
For temporal data, explanation of which features, and at what times, are most prominent for the prediction, and of the time intervals during which each data source's contribution is important;
How explanation can help make Deep Learning architectures sparser (pruning) and more lightweight;
For multimodal data, how the predictions across data streams are correlated and explain each other;
Automatic generation of explanations / justifications of algorithms' and systems' decisions;
Decision uncertainty and explainability;
Evaluation of the explanations generated by Deep Learning and other AI systems.

Program Committee

Christophe Garcia (LIRIS, France)
Hugues Talbot (EC, France)
Dragutin Petkovic (SFSU,USA)
Alexandre Benoît (LISTIC, France)
Mark T. Keane (UCD, Ireland)
Georges Quenot (LIG, France)
Stefanos Kolias (NTUA, Greece)
Jenny Benois-Pineau (LABRI, France)
Hervé Le Borgne (LIST, France)
Noel O’Connor (DCU, Ireland)
Nicolas Thome (CNAM, France)

Dates

Submission deadline: June 15th, 2020
Author notification: July 15th, 2020
Camera-ready submission: July 30th, 2020
Finalized workshop program: August 15th, 2020
