
CROWDBENCH 2016 : Workshop on Benchmarks for Ubiquitous Crowdsourcing at IEEE PerCom 2016


Link: http://crowdbench.insight-centre.org/
 
When Mar 14, 2016 - Mar 18, 2016
Where Sydney, Australia
Abstract Registration Due Nov 20, 2015
Submission Deadline Nov 27, 2015
Notification Due Jan 2, 2016
Final Version Due Jan 15, 2016
Categories    crowdsourcing   ubiquitous computing   benchmarks   crowdsensing
 

Call For Papers

=== CALL FOR PAPERS ===
** Abstract registration deadline: November 20, 2015 **

CROWDBENCH 2016 - http://crowdbench.insight-centre.org/
International Workshop on Benchmarks for Ubiquitous Crowdsourcing: Metrics, Methodologies, and Datasets

In conjunction with IEEE PerCom 2016 - https://www.percom.org
International Conference on Pervasive Computing and Communications
14-18 March 2016
Sydney, Australia


WORKSHOP SCOPE:

The primary goal of this workshop is to synthesize existing research in ubiquitous crowdsourcing and crowdsensing, in order to establish guidelines and methodologies for evaluating crowd-based algorithms and systems.
This goal will be achieved by bringing together researchers from the community to discuss and disseminate ideas for comparative analysis and evaluation on shared tasks and datasets.
A variety of views on the evaluation of crowdsourcing has emerged across research communities, but so far there has been little effort to clarify their key differences and commonalities in a single forum.
This workshop aims to provide such a forum, creating the time and engagement required to subject these different views to rigorous discussion.
The workshop is expected to result in a set of short papers that clearly argue positions on these issues.
These papers will serve as a base resource for consolidating research in the field and moving it forward.
Further, we expect the workshop discussions to yield basic specifications for metrics, benchmarks, and evaluation campaigns that can then be considered by the wider community.

We invite submissions of short papers that identify and motivate approaches to comparative analysis and evaluation in crowdsourcing.
We encourage submissions that identify and clearly articulate problems in evaluating crowdsourcing approaches, or algorithms designed to improve the crowdsourcing process.
We welcome early work, and particularly encourage visionary position papers that suggest directions for improving the validity of evaluations and benchmarks.
Topics include but are not limited to:

- Domain or application specific datasets for the evaluation of crowdsourcing/crowdsensing techniques
- Generalized metrics for task aggregation methods in crowdsourcing/crowdsensing
- Generalized metrics for task assignment techniques in crowdsourcing/crowdsensing
- Online evaluation methods for task aggregation and task assignment
- Simulation methodologies for testing crowdsourcing/crowdsensing algorithms
- Agent-based modeling methods that leverage existing simulation tools
- Benchmarking tools for comparing crowdsourcing/crowdsensing platforms or services
- Mobile-based datasets for crowdsourcing/crowdsensing
- Datasets with detailed spatio-temporal information for crowdsourcing/crowdsensing
- Using online collected data for offline evaluation

Each submitted paper should focus on one dimension of evaluation and benchmarks in crowdsourcing/crowdsensing. Multiple submissions per author are encouraged, each articulating a distinct topic for discussion at the workshop. Papers may argue the merits of an approach or problem already published in earlier work by the author (or anyone else). Papers should clearly identify the analytical and practical aspects of evaluation methods and their specificity in terms of crowdsourcing tasks, application domains, and/or types of platforms. During the workshop, papers will be grouped into tracks, each elaborating on a particular critical area that merits further work and study.


IMPORTANT DATES:

Abstract registration: 20 November 2015
Paper submissions: 27 November 2015
Paper notifications: 2 January 2016
Camera-ready submissions: 15 January 2016
Author registration: 15 January 2016
Workshop date: 14-18 March 2016


PAPER SUBMISSION AND PUBLICATION:

Papers must be submitted via EDAS at https://edas.info/N21129.
Accepted papers will be included in the IEEE PerCom Workshop Proceedings and will be indexed in the IEEE Xplore digital library.
All submissions will be reviewed by the Technical Program Committee for relevance, originality, significance, validity and clarity.
We are also exploring opportunities to publish extended versions of workshop papers as journal articles and book chapters.

FORMATTING GUIDELINES:

Submissions are limited to 6 pages and must adhere to the IEEE format (two-column, 10 pt font).
LaTeX and Microsoft Word templates are available on the IEEE Computer Society website at http://www.computer.org/web/cs-cps/authors and on the conference website https://www.percom.org/.
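
For orientation, the following is a minimal LaTeX skeleton that satisfies the two-column, 10 pt requirement, assuming the standard IEEEtran document class; it is an illustrative sketch only, and the official templates linked above are authoritative:

  \documentclass[10pt,conference]{IEEEtran}
  % Minimal sketch; assumes the standard IEEEtran class in conference mode.
  \begin{document}
  \title{Paper Title}
  \author{\IEEEauthorblockN{First Author}
  \IEEEauthorblockA{Affiliation \\ author@example.org}}
  \maketitle
  \begin{abstract}
  One-paragraph abstract.
  \end{abstract}
  \section{Introduction}
  Body text in two-column, 10\,pt format (6 pages maximum).
  \end{document}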

CONFERENCE REGISTRATION:

At least one author of each accepted paper must register for PerCom 2016 and must present the paper at the workshop.
Failure to present the paper at the workshop will result in the withdrawal of the paper from the PerCom Workshop Proceedings.
For detailed venue and registration information, please consult the conference website https://www.percom.org/.


ORGANIZING COMMITTEE:

- Umair ul Hassan, Insight Centre of Data Analytics, Ireland
- Edward Curry, University College Dublin, Ireland
- Daqing Zhang, Institut Mines-Telecom/Telecom SudParis, France

TECHNICAL PROGRAM COMMITTEE:

- Afra Mashhadi, Bell Labs, Ireland
- Alessandro Bozzon, Delft University of Technology, Netherlands
- Amrapali Zaveri, University of Leipzig, Germany
- Bin Guo, Northwestern Polytechnical University, China
- Brian Mac Namee, University College Dublin, Ireland
- David Coyle, University College Dublin, Ireland
- Fan Ye, Stony Brook University, United States
- Gianluca Demartini, University of Sheffield, United Kingdom
- Hien To, University of Southern California, United States
- John Krumm, Microsoft Research, United States
- Lora Aroyo, VU University Amsterdam, Netherlands
- Matt Lease, University of Texas at Austin, United States
- Raghu K. Ganti, IBM T. J. Watson Research Center, United States
- Wang Yasha, Peking University, China


Please address queries related to this call to Umair ul Hassan, Email: umair.ulhassan(at)insight-centre.org
