NeurIPS 2022: Robustness in Sequence Modeling
Link: https://robustseq2022.github.io

Call For Papers
Hi all,
I would like to share the Call for Papers for our (in-person) workshop on Robustness in Sequence Modeling (RobustSeq) at NeurIPS 2022!

Website: https://robustseq2022.github.io
Email: robustseq2022@gmail.com

Important Dates
Submission Deadline: September 22nd, 2022, Anywhere on Earth (AoE)
Acceptance Notifications: October 22nd, 2022
Workshop: December 2nd, 2022, in person in New Orleans, LA, USA

Abstract
As machine learning models find increasing use in the real world, ensuring their safe and reliable deployment depends on ensuring their robustness to distribution shift. This is especially true for sequential data, which occur naturally in domains such as natural language processing, healthcare, computational biology, and finance. However, building models for sequence data that are robust to distribution shifts presents a unique challenge. Sequential data are often discrete rather than continuous, exhibit distributions that are difficult to characterize, and can display a much wider range of distributional shifts. Although many methods for improving model robustness exist for imaging or tabular data, extending these methods to sequential data is a challenging research direction that often requires fundamentally different techniques.

This workshop aims to provide a forum that outlines the main challenges in this area, facilitates theoretical and methodological explorations for improving model robustness on sequential data, and highlights the importance of robustness in these settings.

We encourage submissions on topics including but not limited to:
- How well do existing robustness methods work on sequential data, and when or why do they succeed or fail?
- Can we directly predict or otherwise characterize the performance of models on sequential data under distribution shifts?
- How can we leverage the sequential nature of data to develop novel and distributionally robust methods?
- What kinds of guarantees can we derive on predictive performance under distribution shifts, and how can we formalize these shifts?

Where appropriate, we encourage authors to include discussions of any ethical considerations relevant to the presented work.

Submission Instructions
We invite extended abstract submissions that are 3-4 pages long (not including references). All accepted papers will be presented in person as posters and lightning talks. There are no formal proceedings for this workshop; authors are encouraged to make their work publicly available through our online listing of presented work.

The reviewing process will be double-blind. Please submit anonymized versions of your paper that contain no identifying information about author identities or affiliations. Submitted papers must be new work that has not yet been published.

Submission format: NeurIPS paper style
Submission link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/RobustSeq

Organizers:
Nathan Ng, University of Toronto / Vector Institute / MIT
Haoran Zhang, MIT
Vinith Suriyakumar, MIT
Chantal Shaib, Northeastern University
Kyunghyun Cho, NYU / Genentech
Yixuan Li, UW-Madison
Alice Oh, KAIST
Marzyeh Ghassemi, MIT