The aim of this special session is to provide a forum to discuss and disseminate recent and significant research efforts on Interpretable Fuzzy Systems, dealing with current challenges and new trends on this topic (both on theoretical and practical aspects).
Interpretability is widely acknowledged as the main advantage of Fuzzy Systems over other black-box models such as conventional Neural Networks. Intelligent system design is moving towards a more human-centric perspective, in which users understand and rely on the knowledge embodied in such systems. In recent years there have been many papers on interpretability issues in Fuzzy Logic, which proves that this remains a hot research topic.
In earlier research on Fuzzy Systems, the main goal was to achieve models with high interpretability, mainly working with expert knowledge and a few simple linguistic rules. Researchers then realized that expert knowledge alone was not enough to deal with complex problems, and techniques for learning knowledge from data became a hot topic. As a result, from 1990 to 2000 the main effort was devoted to the accuracy of the final model, building complex models with high accuracy but often disregarding their interpretability. Nowadays, a new challenge lies in designing Fuzzy Systems that acquire accurate, robust and interpretable knowledge from data.
The human-centric character of interpretability poses new challenges for Fuzzy System research. How can it be formalized computationally? How can it be evaluated? What are the dimensions of interpretability (structural/semantic, descriptive/explanatory, etc.)? How can interpretable Fuzzy Systems be designed? These and other research topics will be the focus of this special session.