
HumBL 2019: Augmenting Intelligence with Bias-aware Humans-in-the-Loop @ TheWebConf 2019


Link: https://humlworkshop.github.io/HumBL-WWW2019/
 
When May 13, 2019 - May 14, 2019
Where San Francisco, California, US
Abstract Registration Due Jan 20, 2019
Submission Deadline Feb 8, 2019
Notification Due Feb 28, 2019
Final Version Due Mar 8, 2019
Categories    human computation   human-in-the-loop   bias in crowdsourcing   computer science
 

Call For Papers

------------------- IMPORTANT DATES --------------------
Abstract submission: as soon as possible, before the paper deadline
Paper submission (extended): 8 February 2019
Author notification: 28 February 2019
Final version deadline: 8 March 2019
Workshop date: 13-14 May 2019

------------------- CALL FOR PAPERS ---------------------
Human-in-the-loop is a model of interaction in which a machine process and one or more humans interact iteratively. In this paradigm the user can strongly influence the outcome of the process by providing feedback to the system, and also has the opportunity to gain different perspectives on the underlying domain and to understand the step-by-step machine process that leads to a given outcome. Among the major current concerns in Artificial Intelligence research are the ability to explain and understand results, and the need to avoid bias in the underlying data that might lead to unfair or unethical conclusions. Computers are typically fast and accurate at processing vast amounts of data; people, in turn, are creative and bring their own perspectives and interpretation skills. Bringing humans and machines together creates a natural symbiosis for the accurate interpretation of data at scale.
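For readers unfamiliar with the paradigm, the minimal sketch below (illustrative only, not part of the workshop materials) shows one way such an iterative loop can look in Python: the machine ranks the unlabelled items by its own confidence, a human annotator (here the hypothetical stand-in ask_human) resolves the most uncertain item, and the model is refit on the enlarged set of labels.

import random

def train(labelled):
    """Toy 'model': predict the majority label seen so far, with its frequency as confidence."""
    if not labelled:
        return lambda x: (0, 0.5)          # no data yet: guess label 0 with low confidence
    labels = [y for _, y in labelled]
    majority = max(set(labels), key=labels.count)
    conf = labels.count(majority) / len(labels)
    return lambda x: (majority, conf)

def ask_human(item):
    """Hypothetical stand-in for a crowd worker or expert; a real system would show an annotation interface."""
    return random.choice([0, 1])

unlabelled = list(range(20))    # items still waiting for a label
labelled = []                   # (item, label) pairs collected so far

for _ in range(5):              # the iterative human-machine loop
    model = train(labelled)
    # the machine picks the item it is least confident about ...
    item = min(unlabelled, key=lambda x: model(x)[1])
    # ... and the human resolves it, feeding the answer back into training
    labelled.append((item, ask_human(item)))
    unlabelled.remove(item)

print(labelled)

In a real deployment the toy model, the random "human" and the uncertainty measure would of course be replaced by an actual learner, crowd workers and a calibrated confidence score; the point of the sketch is only the shape of the feedback loop.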
The goal of this workshop is to bring together researchers and practitioners from various areas of AI (e.g., Machine Learning, NLP, Computational Advertising) to explore new pathways for the human-in-the-loop paradigm. We aim both to analyze existing biases in crowdsourcing and to explore methods for managing bias via crowdsourcing. We would like to discuss different types of bias, measures and methods to track it, and methodologies to prevent and mitigate it. The workshop will provide a framework for discussion among scholars, practitioners and other interested parties, including crowd workers, requesters and crowdsourcing platform managers.

------------------ RESEARCH TOPICS ----------------------
The old paradigm of computing, in which machines do something for humans, has changed: more and more, humans and machines work with and for each other, in a partnership. The effectiveness of this paradigm is visible in many areas, including human computation (where humans perform part of the computation in place of the machines), computer-supported cooperative work, social computing and computer-mediated communication, to name a few.

In this workshop we welcome novel work focusing on the partnership between humans and machines. Topics of interest include (but are not limited to):

*Human Factors:
**Human-computer cooperative work
**Mobile crowdsourcing applications
**Human Factors in Crowdsourcing
**Social computing
**Ethics of Crowdsourcing
**Gamification techniques

*Data Collection:
**Data annotation task design
**Data collection for specific domains (e.g. with privacy constraints)
**Data privacy
**Multilinguality aspects

*Machine Learning:
**Dealing with sparse and noisy annotated data
**Crowdsourcing for Active Learning
**Statistics and learning theory

*Applications:
**Healthcare
**NLP technologies
**Translation
**Data quality control
**Sentiment analysis

*Bias in Crowdsourcing:
**Contributor and crowd worker sampling bias during recruitment
**Effect of cultural, gender and ethnic biases
**Effect of worker training and past experiences
**Effect of worker expertise vs interest
**Bias in experts vs bias in crowdsourcing
**Bias in outsourcing vs bias in crowdsourcing
**Sources of bias in crowdsourcing: task selection, experience, devices, reward, etc.
**Taxonomies and categorizations of different biases in crowdsourcing
**Task assignment/recommendation for reducing bias
**Effect of worker engagement on bias
**Responsibility and ethics in crowdsourcing and bias management
**Preventing bias in crowdsourcing
**Creating awareness of cognitive biases among crowdsourcing agents

*Crowdsourcing for Bias Management:
**Identifying new types of cognitive bias in data or content using crowdsourcing
**Measuring bias in data or content using crowdsourcing
**Removing bias in data or content using crowdsourcing
**Presenting bias information to end users to create awareness
**Ethics of data collection for bias management
**Dealing with algorithmic bias using crowdsourcing
**Fake news detection with crowdsourcing
**Diversification of sources by means of crowdsourcing
**Provenance and traceability in crowdsourcing
**Long-term crowd engagement
**Generating benchmarks for bias management through crowdsourcing


------------------------- SUBMISSION ---------------------
Authors can submit four types of papers:
* short papers (up to 6 pages in length), plus unlimited pages for references
* full papers (up to 10 pages in length), plus unlimited pages for references
* position papers (up to 4 pages in length), plus unlimited pages for references
* demo papers (up to 4 pages in length), plus unlimited pages for references
Page limits include diagrams and appendices.

Submit papers through https://easychair.org/conferences/?conf=humblwww2019
All submissions must be written in English.

The workshop proceedings will be published jointly with the conference proceedings. Submissions must therefore follow the formatting instructions in the General Guidelines of The Web Conference and be submitted as PDF in the ACM format described in the ACM guidelines, using the generic “sigconf” template. All non-standard fonts must be embedded in the PDF files.


----------------------- WORKSHOP CHAIRS ------------------
Lora Aroyo, Google, US
Alessandro Checco, University of Sheffield, UK
Gianluca Demartini, University of Queensland, AU
Ujwal Gadiraju, L3S Research Center, Leibniz Universität Hannover, DE
Anna Lisa Gentile, IBM Research Almaden, US
Oana Inel, Vrije Universiteit Amsterdam, NL
Cristina Sarasua, University of Zurich, CH
