
RevOpiD 2018 : Opinion Mining, Summarization and Diversification


Link: https://sites.google.com/view/revopid-2018/home
 
When Jul 9, 2018 - Jul 12, 2018
Where Baltimore, Maryland, USA
Submission Deadline Apr 10, 2018
Notification Due May 8, 2018
Final Version Due May 19, 2018
Categories: information retrieval, opinion mining, summarization, natural language processing
 

Call For Papers

ACM Hypertext-2018 Workshop on Opinion Mining, Summarization and Diversification

Call for Papers and Participation in the Shared Task
Website: https://sites.google.com/view/revopid-2018
Contact email: aksingh.cse@iitbhu.ac.in

Submission Deadline: April 10, 2018

This workshop aims at uncovering diverse perspectives on defining opinions. How can opinions be better summarized on online forums, in web search results, or elsewhere? What relationships can be mapped between exchanges of opinions on the web? We invite submissions on all such relatively unexplored dynamics of opinion mining and modeling.

Through a workshop on Opinion Mining, Summarization and Diversification, we aim to cover the following themes, around which we invite submissions, in the form of original work and progress reports:

* Review Opinion Diversification
* Opinion Modeling techniques
* Text and Sentiment Summarization
* Opinion summarization in ranking
* Exchange of opinions as network graphs
* Joint Topic Sentiment Modeling
* Phrase Embeddings
* Sentiment Normalization on a relative scale
* Paraphrase detection in opinionated text
* Factors affecting likeability of online reviews
* Fake review detection
* Sarcasm detection in online reviews
* Bias propagation on online forums
* Evaluation of opinion diversity
* Evaluation of representativeness and diversity in ranking
* Knowledge Representation methods for opinions


Shared Task

As part of the workshop, we will also host a shared task on Review Opinion Diversification. The shared task aims to identify opinions in online product reviews. By identification of opinions, we do not mean simple string matching against a predefined list: a system is rewarded equally whether it recognizes 'this product is cost-effective', 'this product is inexpensive', or 'this product is worth the money' as the same opinion. We have an annotated dataset of 80+ products with more than 10,000 reviews in total, each review labelled with its constituent opinions in the form of one opinion matrix per product.


Subtask A (Usefulness Ranking)

A supervised task to predict the helpfulness rating of a product review from its text. For example, a review that 3 users rated as helpful and 2 users rated as not helpful has a helpfulness rating of 3/5.
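The target value described above is simply the fraction of helpful votes. A minimal sketch (the function name and the zero-vote convention are assumptions, not the official scoring definition):

```python
def helpfulness_rating(helpful_votes: int, total_votes: int) -> float:
    """Fraction of users who rated the review as helpful,
    e.g. 3 helpful votes out of 5 total gives 3/5 = 0.6."""
    if total_votes == 0:
        # assumption: unrated reviews score 0; the task may treat them differently
        return 0.0
    return helpful_votes / total_votes

print(helpfulness_rating(3, 5))  # → 0.6
```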

Subtask B (Representativeness Ranking)

Subtask B judges a system on its ability to tell whether a given review R1 contains a given opinion O1 or not. While R1 can be easily identified by its Reviewer ID, opinions are not labeled with words. Instead, they are identified by the other reviews that they appear in. Therefore, we ask the participants to provide an opinion matrix as output, which we will evaluate using several verified metrics.
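One plausible way to encode such an opinion matrix is a binary table with one row per review and one column per opinion cluster; the identifiers and layout below are illustrative assumptions, and the actual submission format is defined on the workshop site:

```python
# Hypothetical opinion matrix for one product: rows are reviews
# (keyed by Reviewer ID), columns are opinion indices, and a 1
# means the review expresses that opinion.
opinion_matrix = {
    "R1": [1, 0, 1],  # R1 expresses opinions 0 and 2
    "R2": [1, 1, 0],
    "R3": [0, 1, 0],
}

def contains_opinion(matrix: dict, reviewer_id: str, opinion_idx: int) -> bool:
    """Does the review by `reviewer_id` contain opinion `opinion_idx`?"""
    return bool(matrix[reviewer_id][opinion_idx])

print(contains_opinion(opinion_matrix, "R1", 2))  # → True
```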

Subtask C (Exhaustive Coverage Ranking)

This subtask aims at producing, for each product, the top-k reviews from a set of reviews such that the selected top-k reviews act as a summary of all the opinions expressed in the review set.
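Exhaustive coverage is essentially a budgeted set-cover problem. A common baseline (a sketch, not the official method) greedily picks the review that adds the most not-yet-covered opinions:

```python
def greedy_top_k(reviews: dict, k: int) -> list:
    """Greedily select k reviews maximizing coverage of distinct opinions.

    `reviews` maps a review ID to the set of opinions it expresses
    (here opinions are labels for readability; in the task they would
    be opinion indices from the opinion matrix).
    """
    covered: set = set()
    selected: list = []
    candidates = dict(reviews)
    for _ in range(min(k, len(candidates))):
        # pick the candidate adding the most uncovered opinions
        best = max(candidates, key=lambda r: len(candidates[r] - covered))
        selected.append(best)
        covered |= candidates.pop(best)
    return selected

reviews = {
    "R1": {"cheap", "durable"},
    "R2": {"cheap"},
    "R3": {"heavy", "durable", "loud"},
}
print(greedy_top_k(reviews, 2))  # → ['R3', 'R1']
```

The greedy choice gives the usual (1 - 1/e) approximation guarantee for coverage objectives, which is why it is a standard starting point for extractive summary selection.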

Data and Resources

The training, development, and test data have been extracted and annotated from the Amazon SNAP Review Dataset and will be made available after registration.

Invitation

We invite participation from all researchers and practitioners. As is usual in shared tasks, the organizers rely on the honesty of participants who may have prior knowledge of part of the eventual evaluation data not to use such knowledge unfairly. The only exception (to participation) is the members of the organizing team, who cannot submit a system. The organizing chair will serve as an authority to resolve any disputes concerning ethical issues or completeness of system descriptions.

Timeline

Research Papers

Paper Submission Deadline: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

Shared Task

Registration open: January 26, 2018
Release of Training Data: January 28, 2018
Dryrun: Release of Development Set: February 5, 2018
Dryrun: Submission on Development Set: February 20, 2018
Dryrun: Release of Scores: February 24, 2018
Registration Ends: March 8, 2018
Release of Test Set: March 10, 2018
Submission of Systems: March 17, 2018
System Results: March 25, 2018
System Description Paper Due: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

See https://sites.google.com/view/revopid-2018 for more information.
