SENSORS SI: Human Centered AI 2022: Human Centered Artificial Intelligence: Putting the Human in the Loop for Implementing Sensors Based Intelligent Environments
Link: https://www.mdpi.com/journal/sensors/special_issues/Artificial_Intelligence_Implementing_Sensors
Call For Papers
The ushering of Artificial Intelligence (AI) into our everyday life has already stimulated rich discussions, with enthusiastic forecasts about how AI technologies will support human activities and improve quality of life on the one hand, and dark scenarios about the potential pitfalls and dangers entailed on the other. In light of the above, there is an urgent need for a human-centered AI approach that does not merely aim to consider human needs and requirements but, more importantly, actively aims to put humans in the loop. Recent research efforts towards better explainability, trustworthiness, and transparency pinpoint this need as a prerequisite for the effective and efficient human utilization of autonomous systems.
In parallel, the unprecedented growth of the data generated by a vast number of sensors, cyber–physical and embedded systems, and IoT devices, which are already interwoven into our everyday life, lays the foundation for the fast-paced development of AI in general and machine learning in particular. However, this rapid evolution in the field of AI, associated with a remarkable depth of novel research contributions focusing on AI functionality, has not been accompanied by similar emphasis and progress on the equally important fundamental aspects and design considerations advocated by the human-centered design process. With the aim of bridging this gap, this Special Issue solicits original and high-quality research articles that consider the current evolution of AI approaches under a human-centric approach to the development of intelligent environments. Exceptional contributions that extend previously published work will also be considered, provided that they contribute at least 60% new results. Authors of such submissions will be required to clearly indicate the new contributions and explain how the work extends the previously published contributions.

Topics may include, but are not limited to, the following:
• Active machine learning
• Adaptive personal AI systems
• Causal learning, causal discovery, causal reasoning, causal explanations, and causal inference
• Cognitive computing
• Decision making and decision support systems
• Emotional intelligence
• Explainable, accountable, transparent, and fair AI
• Explanatory user interfaces and HCI for explainable AI
• Ethical and trustworthy AI
• Federated learning and cooperative intelligent information systems and tools
• Gradient-based interpretability
• Interaction modalities and devices: visual, 2D/3D, augmented reality, simulations, digital twin, conversational interfaces, and multimodal interfaces
• Interactive machine learning
• Interpretability in reinforcement learning
• Human–AI interactions and intelligent user interfaces
• Human–AI teaming
• Natural language generation for explanatory models
• Processes, tools, methods, user involvement, user research, evaluation, AI technology assessment and customization, and standards
• Rendering of reasoning processes
• Self-explanatory agents and decision support systems
• Usability of human–AI interfaces

The deadline for manuscript submission is 31 December 2022. All submissions will be peer-reviewed and judged on originality, significance, technical strength, correctness, quality of presentation, and relevance to the Special Issue topics of interest.

The Special Issue Guest Editors
Professor Constantine Stephanidis
Dr. George Margetis