KG-BIAS 2020: Bias in Automatic Knowledge Graph Construction: A Workshop
Link: https://kg-bias.github.io/

Call For Papers
**************************************************************
KG-BIAS 2020 – Bias in Automatic Knowledge Graph Construction
A Workshop at AKBC 2020
UC Irvine, USA – Wed June 24, 2020
https://kg-bias.github.io/
kg-bias@googlegroups.com
**************************************************************

### Overview

Knowledge Graphs (KGs) store human knowledge about the world in a structured format, e.g., triples of facts or graphs of entities and relations, to be processed by AI systems. In the past decade, extensive research efforts have gone into constructing and utilizing knowledge graphs for tasks in natural language processing, information retrieval, recommender systems, and more. Once constructed, knowledge graphs are often treated as “gold standard” data sources that safeguard the correctness of other systems. Because the biases inherent in KGs may become magnified and spread through such systems, it is crucial that we acknowledge and address various types of bias in knowledge graph construction.

Such biases may originate in the very design of the KG, in the source data from which it is created (semi-)automatically, and in the algorithms used to sample, aggregate, and process that data. Causes of bias include systematic errors due to selecting non-random items (selection bias), misremembering certain events (recall bias), and interpreting facts in a way that affirms individuals' preconceptions (confirmation bias). Biases typically appear subliminally in expressions, utterances, and text in general, and can carry over into downstream representations such as embeddings and knowledge graphs.

This workshop – to be held for the first time at AKBC 2020 – addresses the questions: “How do such biases originate?”, “How do we identify them?”, and “What is the appropriate way to handle them, if at all?”. This topic is as yet unexplored, and the goal of our workshop is to start a meaningful, long-lasting dialogue spanning researchers across a wide variety of backgrounds and communities.
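To make these notions concrete, here is a minimal illustrative sketch (not part of the workshop material; all entity and relation names are hypothetical). It represents a toy KG as (subject, predicate, object) triples and counts how many facts each entity receives; markedly uneven coverage of comparable entities is one simple, measurable symptom of selection bias in KG construction.

```python
from collections import Counter

# A toy KG as (subject, predicate, object) triples. All entity and
# relation names here are hypothetical and for illustration only.
triples = [
    ("Ada_Lovelace", "occupation", "mathematician"),
    ("Alan_Turing", "occupation", "mathematician"),
    ("Alan_Turing", "field", "computer_science"),
    ("Alan_Turing", "birthplace", "London"),
    ("Grace_Hopper", "occupation", "computer_scientist"),
]

# Count how many facts the KG records per subject entity. Markedly
# uneven coverage of comparable entities is one simple, measurable
# symptom of selection bias introduced during KG construction.
coverage = Counter(subject for subject, _, _ in triples)
for entity, n_facts in coverage.most_common():
    print(f"{entity}: {n_facts} fact(s)")
```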
Topics of interest include, but are not limited to:

* Ethics, bias, and fairness
* Qualitatively and quantitatively defining types of bias
* Implicit or explicit human bias reflected in data people generate
* Algorithmic bias represented in learned models or rules
* Taxonomies and categorizations of different biases
* Empirically observing biases
* Measuring diversity of opinions
* Language, gender, geography, or interest bias
* Implications of existing bias for human end-users
* Benchmarks and datasets for bias in KGs
* Measuring or remediating bias
* De-biased KG completion methods
* Algorithms for making inferences interpretable and explainable
* De-biasing or post-processing algorithms
* Creating user awareness of cognitive biases
* Ethics of data collection for bias management
* Diversification of information sources
* Provenance and traceability

### Submission Instructions

Submission files should not exceed 8 pages, with additional pages allowed for references. Reviews are double-blind; author names and affiliations must be removed. All submissions must be written in English and submitted as PDF files formatted using the sigconf template: https://www.acm.org/publications/proceedings-template.

Submissions should be made electronically through https://easychair.org/conferences/?conf=kgbias2020.

### Workshop format

We accept position papers, short papers, and full papers. We welcome both ongoing and already published work, and we will offer authors the option of having their paper included in the workshop proceedings. More details regarding the actual format and schedule of the workshop will be announced closer to the workshop date.

### Important Dates

* May 04 – KG-BIAS 2020 submission deadline
* May 18 – KG-BIAS 2020 notification
* Jun 22-23 – AKBC Conference
* Jun 24 – KG-BIAS 2020 workshop

### Code of Conduct

Our workshop adheres to all principles and guidelines specified in the ACM Code of Ethics and Professional Conduct.

### Organizing committee

* Edgar Meij, Bloomberg
* Tara Safavi, University of Michigan
* Chenyan Xiong, Microsoft Research AI
* Miriam Redi, Wikimedia Foundation
* Gianluca Demartini, University of Queensland
* Fatma Özcan, IBM Research

### Program Committee

* Guillaume Bouchard (Facebook AI)
* Soumen Chakrabarti (IIT Bombay)
* David Corney (Full Fact)
* Jeff Dalton (University of Glasgow)
* Maarten de Rijke (University of Amsterdam)
* Laura Dietz (University of New Hampshire)
* Djellel Difallah (Wikimedia Foundation)
* Ying Ding (University of Texas at Austin)
* Ujwal Gadiraju (L3S Research Center)
* Faegheh Hasibi (Radboud University)
* Lucie-Aimée Kaffee (University of Southampton and Wikidata)
* Jeff Pan (University of Aberdeen)
* Fabrizio Silvestri (Facebook AI)
* Emine Yilmaz (University College London)

### Contact information

You can find us at https://kg-bias.github.io/ and contact us at kg-bias@googlegroups.com.