UMUM 2023 : The 2nd Workshop on Ubiquitous and Multi-domain User Modeling (UMUM2023), Held in Conjunction with PerCom 2023
Link: https://www.um-um.net/umum2023/

Call For Papers
AI/ML technology has gradually shifted from a specialized tool to a general-purpose tool for pervasive tasks. For example, BERT, a well-known pre-trained language model, has been successfully adapted to many language-related tasks. Recently, AI/ML has also been widely applied to user modeling for real-world applications such as context-aware recommendation. However, many studies build user models in only one or a few domains and apply them to those same domains. Thus, how to build ubiquitous and multi-domain user models and increase their applicability in real-world contexts remains an open problem. Here, we use "ubiquitous" to mean the capability to adapt to, and therefore exist in, diverse contexts, and "multi-domain" to mean the capability to train jointly on data from multiple domains.
Concretely, we see several technical issues on the way to realizing ubiquitous and multi-domain user modeling:

1. The real-world environment is a mixed one in which many users and many objects interact in various ways. Learning diverse contexts from user-object interaction data obtained from smartphones, IoT and sensor platforms, and so on, is a challenge.
2. Collecting personal data from multiple domains and aggregating them in one place is becoming increasingly difficult due to privacy concerns and huge data volumes. Modeling different user behaviors in each individual domain, then integrating the models to enhance every prediction and, consequently, improve overall modeling performance, is another challenge.
3. The relationships between multiple domains may change over time. Detecting and adapting to such changes so that model performance does not degrade is a further challenge.

We invite contributions to the workshop on topics related to UMUM, including but not limited to:

- AI/ML technologies: Deep Learning; Transfer Learning; Domain Adaptation; Active Learning; Multi-task Learning; Meta Learning; Online Learning; Continual Learning; Self/Un/Semi/Weakly-supervised Learning; Representation Learning; Knowledge Representation and Reasoning; Explainable AI; Federated Learning; Privacy Preserving Data Mining; MLOps; Anomaly Detection; Concept Drift Detection
- Application areas: Context Modeling and Reasoning; Spatio-temporal Modeling; Activity Recognition; Social Sensing and Modeling; Location/Context/Activity-aware Services; Navigation; Personalization; Cross-domain Recommendation

** Paper Submission and Guidelines **
- Format: Maximum of 6 pages including all figures, tables, and references, formatted in accordance with the IEEE Computer Society author guidelines. LaTeX and Microsoft Word templates are available on the IEEE Computer Society website (https://www.ieee.org/conferences/publishing/templates.html).
- Review process: The review process will be double blind. Authors' names and affiliations, and any other identifying information, must be anonymized in submitted papers.
- Submission: Through the EDAS submission system (https://edas.info/N30130).
- Publication: All accepted workshop papers will be included in the IEEE PerCom 2023 proceedings and will appear in the IEEE digital library (Xplore).

** Important Dates **
- Submission deadline: November 14, 2022 (AOE) → November 28, 2022 (AOE) Extended
- Notification: January 05, 2023
- Camera ready deadline: January 30, 2023 (TBA)
- Workshop: March, 2023 (TBA)