HUBEDA 2024: Workshop on Human Behavior Data Acquisition for Human-Robot Interaction
Link: https://tars-home.github.io/hubeda2024
Call For Papers
We invite researchers to contribute papers to the Workshop on Human Behavior Data Acquisition for Human-Robot Interaction (HUBEDA, https://tars-home.github.io/hubeda2024) at the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) 2024 in Pasadena, California, USA.

Authors are invited to submit papers of up to 6 pages addressing challenges in acquiring, analyzing, and using data on social human interactions, and papers of up to 4 pages on early-phase work, late-breaking results, and demonstrations.

Website: https://tars-home.github.io/hubeda2024
Paper submission link: https://easychair.org/my/conference?conf=hubeda2024

Great strides have been made toward robots that engage in more natural human-robot interaction (HRI). Yet closing the gap to safe, seamless, human-aware HRI requires large-scale data on human behavior, especially diverse multi-person interactions spanning large participant counts, and collecting such data remains an open challenge. Data on human behavior spans visual information on human movements, gestures, and interactions; spoken content; textual information; and physiological data such as heart rate, electromyography, and electrodermal activity. Through invited talks, papers, and posters, HUBEDA addresses fundamental questions on the acquisition, analysis, and use of large-scale data on human behavior and interactions to enable data-informed HRI.
Topics of interest include, but are not limited to:
- Datasets on single-person behavior and multi-person interaction for HRI
- Multimodal setups integrating visual, audio, inertial, tactile, and physiological sensing for full-range capture
- Augmented reality / virtual reality (AR/VR) approaches to scale up human subject data collection for HRI
- AI, machine learning (ML), vision, and natural language processing (NLP) approaches to estimate parameters of interest for HRI from datasets on human behavior
- Leveraging large vision and language models to decipher human activity, text, and spoken content
- Integrating biomechanics models with human behavior data
- Robotic implementations driven by human behavior datasets
- Robot learning from human behavior datasets and multi-person interactions
- Applications involving multiple agents, such as collaborative assembly, handover, and repair
- Ethical practices in data collection for single/multi-person behavior
- Addressing challenges in reaching diverse population groups, e.g., children, older adults, and individuals with disabilities, to scale up data collection

The workshop is supported through a generous donation from the De Luca Foundation (https://delucafoundation.org/).