
ACL 2018 : ACL Workshop on Machine Reading for Question Answering


Link: https://mrqa2018.github.io/
 
When: Jul 15, 2018 - Jul 20, 2018
Where: Melbourne
Submission Deadline: Apr 23, 2018
Notification Due: May 15, 2018
Final Version Due: May 28, 2018
 

Call For Papers

Machine Reading for Question Answering (MRQA) has become an important testbed for evaluating how well computer systems understand human language, as well as a crucial technology for industry applications such as search engines and dialog systems. Successful MRQA systems must deal with a wide range of important phenomena, including syntactic attachments, coreference links, and entailment. Recognizing the potential of MRQA as a comprehensive language understanding benchmark, the research community has recently created a multitude of large-scale datasets over text sources such as Wikipedia (WikiReading, SQuAD, WikiHop), news and other articles (CNN/Daily Mail, NewsQA, RACE), fictional stories (MCTest, CBT, NarrativeQA), and general web sources (MS MARCO, TriviaQA, SearchQA). These new datasets have in turn inspired an even wider array of new question answering systems.
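
For concreteness, span-extraction datasets such as SQuAD pair a question and passage with a gold answer span, and systems are typically graded by normalized exact match and token-level F1. Below is a minimal sketch of that scoring, assuming a single gold answer per question (the official SQuAD script instead takes a maximum over several reference answers):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    """True if the prediction matches the gold answer after normalization."""
    return normalize(prediction) == normalize(gold)

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1, giving partial credit for near-miss spans."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A span-extraction record is (context, question, answer span).
record = {
    "context": "ACL 2018 was held in Melbourne in July 2018.",
    "question": "Where was ACL 2018 held?",
    "answer": "Melbourne",
}
print(exact_match("Melbourne", record["answer"]))            # True
print(round(token_f1("in Melbourne", record["answer"]), 2))  # 0.67: partial credit
```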

Despite this rapid progress, there is much to understand about these datasets and systems. While in-domain test accuracy has been improving rapidly on these datasets, systems struggle to generalize gracefully when tested on new domains and datasets. The ideal MRQA system is not only accurate on in-domain data, but is also interpretable, robust to distributional shift, able to abstain from answering when there is no adequate answer, and capable of making logical inferences (e.g., via entailment and multi-sentence reasoning). Meanwhile, the diversity of recent datasets calls for an analysis of the various natural language phenomena (e.g., coreference, paraphrase, entailment, multi-step reasoning) these datasets present.
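
One common way to let a system abstain is to score a "no answer" hypothesis alongside the candidate spans and answer only when the best span beats it by a margin tuned on held-out data (the strategy used in SQuAD 2.0 baselines). A minimal sketch, with hypothetical scores:

```python
from typing import List, Optional, Tuple

def predict_with_abstention(
    candidates: List[Tuple[str, float]],  # (answer span, model score)
    null_score: float,                    # score of the "no answer" hypothesis
    margin: float = 0.0,                  # threshold tuned on held-out data
) -> Optional[str]:
    """Return the best span, or None to abstain when the null option wins."""
    best_span, best_score = max(candidates, key=lambda c: c[1])
    if best_score - null_score > margin:
        return best_span
    return None  # abstain: no adequate answer in the document

# Hypothetical scores, for illustration only.
print(predict_with_abstention([("Melbourne", 3.2), ("July 2018", 1.1)],
                              null_score=0.5))   # -> "Melbourne"
print(predict_with_abstention([("a span", 0.3)],
                              null_score=2.0))   # -> None (abstain)
```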

We seek submissions on the following topics:

Accuracy: How can we make MRQA systems more accurate?
Interpretability: How can systems provide rationales for their predictions? To what extent can cues such as attention over the document be helpful, compared to direct explanations? Can models generate derivations that justify their predictions?
Speed and Scalability: Can models scale to consider multiple lengthy documents, or even the entire web, as an information source? Similarly, can they scale to richer answer spaces, such as sets of spans or entities instead of a single answer span?
Robustness: How can systems generalize to other datasets and settings beyond the training distribution? Can we guarantee good performance on certain types of questions or documents? (A cross-dataset evaluation sketch follows this list.)
Dataset Creation: What are effective methods for building new MRQA datasets?
Dataset Analysis: What challenges do current MRQA datasets pose?
Error Analysis: What types of questions or documents are particularly challenging for existing systems?
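
The robustness question above can be made operational with a leave-one-dataset-out protocol: train on all but one of the datasets listed earlier, evaluate on the held-out one, and report the gap to in-domain accuracy. A minimal sketch, where `load`, `train`, and `evaluate` are hypothetical placeholders for whatever framework is in use:

```python
from typing import Callable, Dict

DATASETS = ["SQuAD", "NewsQA", "TriviaQA", "SearchQA"]

def generalization_gaps(
    load: Callable,      # load(name, split) -> dataset (placeholder)
    train: Callable,     # train(list_of_datasets) -> model (placeholder)
    evaluate: Callable,  # evaluate(model, dataset) -> accuracy (placeholder)
) -> Dict[str, float]:
    """For each dataset, measure in-domain minus out-of-domain accuracy."""
    gaps = {}
    for held_out in DATASETS:
        sources = [d for d in DATASETS if d != held_out]
        # Out-of-domain: train on every other dataset, test on the held-out one.
        model = train([load(d, "train") for d in sources])
        out_of_domain = evaluate(model, load(held_out, "dev"))
        # In-domain baseline: train and test on the held-out dataset itself.
        in_domain = evaluate(train([load(held_out, "train")]),
                             load(held_out, "dev"))
        gaps[held_out] = in_domain - out_of_domain
    return gaps
```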
