
Ideal Machine-Human Interface 2024: Special Call for Computers in Human Behavior: The AI Interface: Designing for the Ideal Machine-Human Experience


Link: https://www.sciencedirect.com/journal/computers-in-human-behavior/about/call-for-papers#the-ai-interface-designing-for-the-ideal-machine-human-experience
 
When: Oct 31, 2023 - Jan 1, 2025
Where: USA
Submission Deadline: Dec 31, 2023
Final Version Due: Aug 31, 2024
Categories: artificial intelligence, design, cognitive science, UX
 

Call For Papers

The AI Interface: Designing for the Ideal Machine-Human Experience
The source of a message is an important criterion in determining whether its information is acted upon. With this call, we seek to expand our understanding of the psychology of design into the way we deliberate on and design AI interfaces such as chatbots, robotics, the Internet of Things (IoT), AI assistants, and more. Artificial intelligence (AI) and machine learning (ML) are transformative technologies in which organizations everywhere are investing heavily. AI can replicate tasks that require human intelligence, using probabilistic predictions grounded in existing real-world data to anticipate future outcomes. ML uses large amounts of data to create and validate decision logic, often mimicking biological neural signaling, as in deep learning and natural language processing (NLP). No-code tools now allow business analysts to make ML predictions even without ML experience. In this special issue we evaluate the role of AI in human perception and inference, and we seek to expand how to design for the ideal machine-human experience: the interfaces within which AI is embedded.

Guest editors:

Aparna Sundar

asundar@uw.edu

Karen Machleit

k.machleit@uc.edu

Special issue information:

User perception varies with individual differences, environmental factors, and cultural differences, all of which shape user experience. User inferences are the mental processes that allow individuals to draw conclusions, make judgments, and generate new knowledge from the information they are exposed to. Both perception and inference influence the ultimate user experience. In AI, designing systems that are transparent, intuitive, and aligned with users' needs and expectations is vital. With the explosion of technological investment and innovation in this domain, the need for research is heightened, especially from a design and product development standpoint. User experience research is essential to bridge the interface between AI and users. User experience encompasses user mental models, trust, and transparency: how psychology can influence the design of the machine-human interface, the personalization and adaptability of AI assistants, and more. This call aims to address that gap.

As an example, one of the most critical aspects of interacting with AI is the language AI uses in messaging or in persuasion attempts. We know very little about the psychology of AI experience design. Research on the way marketers communicate with consumers indicates robust effects of message tone (Sundar & Cao, 2018; Sundar & Paik, 2017), repetition of messaging (Sundar, Kardes & Wright, 2015), and language structure and categorization (Schmitt & Zhang, 1998). While scholars have established the anthropomorphic relationship of individuals with AI assistants (Uysal, Alavi & Bezençon, 2022), how individuals react or respond to AI needs more investigation. Toward this end, this call aims to bridge the disciplines of communication, marketing, and judgment and decision making. Research in this area should illuminate how humans perceive communication, especially when claims come from AI assistants or sources that are not perceived to be human.

This call aims to mobilize articles that explore considerations in the evolution of AI and ML in human behavior. Specifically, we seek articles that bring out the complexities of AI-human interaction, possible human perceptions, and lessons, on both the human and machine side, that can improve ML and inference and transform technologies meaningfully. Given the role of AI and ML in the digital evolution of computing, this special issue emphasizes the human response, or psychology, toward transformative digital technologies. Some areas in which AI influences human perception and inference are:

1. Machine-Human Interface: The nature of anthropomorphic communication in the machine-human interface (Schmitt & Zhang, 1998; Uysal, Alavi & Bezençon, 2022)

2. User inferences as they influence user experience: AI uses sensemaking to derive meaning from the language humans use to communicate with it (Cabrera et al., 2023). Articles on how the mental models of users communicating with AI are formed, AI assistant modality, etc.

3. Consumer perception and user behavior: Humans react to the tangible aesthetics of products (Sundar, Cao & Machleit, 2020); how consumers behave when presented with information from AI assistants

4. Personalization and Adaptability: Gao and Liu (2022) note the importance of personalization throughout the customer journey. Research exploring the role of personalization and adaptability in AI assistants

5. Transparency and Explainability: The sources from which AI assistants get their information

6. Clarity of Communication: Style, tone, and language all make a difference in perception (Sundar & Cao, 2018); extensions of this research to AI assistant communication

7. Trust in AI Technologies: Trust is a multi-faceted construct in interpersonal relationships; research is needed on how to build trust in AI, on what trustworthy AI is (Kaur et al., 2022), and on how companies can build it into AI development

To extend the literature and our understanding of how designers and product developers can improve AI assistants and user experiences, this call invites multi-disciplinary investigation into the psychology of design as it influences AI experiences. The psychology of design encompasses many domains, such as visual design, language, auditory considerations, and other perceptual cues that ultimately impact behavior. This call therefore invites researchers to submit original papers that address the following areas:

1. Considerations in AI design: The various forms that AI assistants can take, situational considerations, where best to locate AI, how best it can help the user, and related topics.

2. AI Interactions: These can be multi-modal: visual, haptic, auditory, or other manifestations of AI that improve the lives of users. Ultimately, research that investigates the use of, and reactions to, AI in the IoT and in other forms.

3. Experience and learnability: Research papers that examine the different learning models of AI and the ways in which information that is factually mature, versus grounded in common sense, has implications for users; research investigating the effectiveness of AI maturity; and related topics.

4. Machine and Human interaction: Research extending the literature on anthropomorphism to the AI domain, and related topics.

5. Use of AI in specific domains: The implications of AI assistants are far-reaching and can influence evaluative considerations in domains ranging from the analysis of graphs, dashboards, and e-commerce to large statistical models, writing, and transcription. Research highlighting the nuances and challenges of these domains and how to overcome them.

A multi-disciplinary approach to research, with methodology relevant to the research questions, is welcome. Papers should have strong implications for how current AI design can be improved and how designers and product managers can think through human-computer interaction to create incrementally improved experiences with AI.

Manuscript submission information:

All interested researchers are invited to submit their manuscripts at: https://www.sciencedirect.com/journal/computers-in-human-behavior/about/call-for-papers

The Journal’s submission system is open for receiving submissions to our Special Issue. To ensure that all manuscripts are correctly identified for inclusion into the special issue, it is important to select “VSI: Ideal Machine-Human Interface” when you reach the “Article Type” step in the submission process.

Full manuscripts will undergo double-blind review as per the usual procedures for this journal.

Deadline for manuscript submissions: December 31, 2023

Inquiries related to the special issue, including questions about appropriate topics, may be sent electronically to the Guest Editor, Dr. Aparna Sundar (asundar@uw.edu).

Learn more about the benefits of publishing in a special issue: https://www.elsevier.com/authors/submit-your-paper/special-issues

Important Dates:

Submission Deadline: December 31, 2023
Notification of Acceptance: August 1, 2024 (accommodating review rounds)
Expected Publication Date: end of 2024

References:

Cabrera, Á. A., Tulio Ribeiro, M., Lee, B., DeLine, R., Perer, A., & Drucker, S. M. (2023). What did my AI learn? How data scientists make sense of model behavior. ACM Transactions on Computer-Human Interaction, 30(1), 1-27.

Gao, Y., & Liu, H. (2022). Artificial intelligence-enabled personalization in interactive marketing: a customer journey perspective. Journal of Research in Interactive Marketing, (ahead-of-print), 1-18.

Kaur, D., Uslu, S., Rittichier, K. J., & Durresi, A. (2022). Trustworthy artificial intelligence: a review. ACM Computing Surveys (CSUR), 55(2), 1-38.

Schmitt, B. H., & Zhang, S. (1998). Language structure and categorization: A study of classifiers in consumer cognition, judgment, and choice. Journal of Consumer Research, 25(2), 108-122.

Sundar, A., & Cao, E. S. (2018). Punishing politeness: The role of language in promoting brand trust. Journal of Business Ethics, 164, 39-60.

Sundar, A., Cao, E., & Machleit, K. A. (2020). How product aesthetics cues efficacy beliefs of product performance. Psychology & Marketing, 37(9), 1246-1262.

Sundar, A., Kardes, F. R., & Wright, S. A. (2015). The influence of repetitive health messages and sensitivity to fluency on the truth effect in advertising. Journal of Advertising, 44(4), 375-387.

Sundar, A., & Paik, W. (2017). Punishing politeness: Moderating role of belief in just world on severity. Association for Consumer Research, 45, 903-905.

Uysal, E., Alavi, S., & Bezençon, V. (2022). Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. Journal of the Academy of Marketing Science, 50(6), 1153-1175.

Keywords:

Artificial intelligence interface, UX, design


Interested in becoming a guest editor? Discover the benefits of guest editing a special issue and the valuable contribution that you can make to your field: https://www.elsevier.com/editors/role-of-an-editor/guest-editors

Related Resources

IEEE CACML 2025   2025 4th Asia Conference on Algorithms, Computing and Machine Learning (CACML 2025)
SPIE-Ei/Scopus-DMNLP 2025   2025 2nd International Conference on Data Mining and Natural Language Processing (DMNLP 2025)-EI Compendex&Scopus
IEEE-Ei/Scopus-ITCC 2025   2025 5th International Conference on Information Technology and Cloud Computing (ITCC 2025)-EI Compendex
IEEE-Ei/Scopus-CNIOT 2025   2025 IEEE 6th International Conference on Computing, Networks and Internet of Things (CNIOT 2025) -EI Compendex
Call For Papers Special Issue 2024   Smart Cities, innovating in the Transformation of Urban Environments
AMLDS 2025   IEEE--2025 International Conference on Advanced Machine Learning and Data Science
FAIML 2025   2025 4th International Conference on Frontiers of Artificial Intelligence and Machine Learning
IJFMA Vol. 10 No. 3 - Dossier II 2025   What Future for the Cinema of Small European Countries? - Open Call for Papers IJFMA Vol. 10 No. 3 Dossier II
IJCNN 2025   International Joint Conference on Neural Networks
AI4Energy&Environment 2025   Special Issue on AI-Driven Innovations for Renewable Energy and Environmental Sustainability