
BIAS 2019 : Special Issue on Social and Cultural Biases in Information, Algorithms, and Systems


Link: http://www.emeraldgrouppublishing.com/products/journals/call_for_papers.htm?id=7902
 
 

Call For Papers

Special Issue on “Social and Cultural Biases in Information, Algorithms, and Systems”
Online Information Review (an SSCI-indexed journal published by Emerald; 2016 Impact Factor: 1.534)
http://www.emeraldgrouppublishing.com/products/journals/call_for_papers.htm?id=7902

Computer algorithms and analytics play an increasing role in citizens’ lives, as they underlie the popular information services and “smart” technologies that are rapidly being adopted across sectors of society, from transportation to education to healthcare. Algorithms allow the exploitation of rich and varied data sources in order to support human decision-making and/or take direct action; however, there are increasing concerns surrounding their transparency and accountability. There is growing recognition that, even when designers and engineers have the best of intentions, systems relying on algorithmic processes can inadvertently produce serious consequences in the social world, such as biased outputs that result in discrimination against individuals and/or groups of people. Recent cases in the news and media have highlighted the wider societal effects of data and algorithms, including examples of gender, race, and class biases in popular information access services.

It is important to note the complexity of the problem of social and cultural biases in algorithmic processes. For instance, recent research shows that word embeddings, a class of natural language processing techniques that enable machines to use human language in sensible ways, are quite effective at absorbing the accepted meaning of words (Caliskan et al., 2017). These algorithms also pick up on human biases embedded in our language use, such as gender stereotypes (e.g., associating male names with concepts related to career, and female names with home/family) and racial stereotypes (e.g., associating European-/African-American names with pleasant/unpleasant concepts). These biases are “accurate” in that they are comparable to those discovered when humans take the Implicit Association Test, a widely used measure in social psychology that reveals the subconscious associations between mental representations of concepts in our memory (Greenwald et al., 1998).
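
To make this measurement concrete, below is a minimal sketch of such an association score in Python. It assumes pretrained GloVe vectors fetched through gensim’s downloader; the target names and attribute word lists are illustrative stand-ins, not the exact stimuli used by Caliskan et al. (2017).

    # Minimal sketch of a WEAT-style association score over pretrained
    # GloVe vectors (the corpus ID is a real gensim downloader identifier).
    import numpy as np
    import gensim.downloader as api

    model = api.load("glove-wiki-gigaword-100")  # downloads on first use

    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(word, attrs_a, attrs_b):
        # Mean similarity to attribute set A minus mean similarity to set B.
        w = model[word]
        return (np.mean([cosine(w, model[a]) for a in attrs_a])
                - np.mean([cosine(w, model[b]) for b in attrs_b]))

    career = ["career", "salary", "office", "business", "profession"]
    family = ["home", "family", "children", "parents", "wedding"]

    for name in ["john", "paul", "amy", "lisa"]:  # GloVe vocabulary is lowercased
        print(name, round(association(name, career, family), 4))

A positive score indicates a stronger association with the career terms; in published replications, male names tend to score positive and female names negative, mirroring the Implicit Association Test findings noted above.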

The biases inherent in word embeddings provide a good illustration of the need to promote algorithmic transparency in information systems. Word embeddings are extensively used in services such as Web search engines and machine translation systems (e.g., Google Translate), which rely on the technique to interpret human language in real time. It may be infeasible to eradicate social biases from algorithms while preserving their power to interpret the world, particularly when this interpretation is based on historical and human-produced training data. In fact, another way of viewing such unconscious biases is as sources of ‘knowledge diversity’; what one takes to be the true facts of the world, and how one uses language to describe them, is very much dependent on local context, culture and intentions. An alternative approach would be to systematically trace and represent sources of ‘knowledge diversity’ in data sources and analytic procedures, rather than eliminate them (Giunchiglia et al., 2012). Such approaches would support accountability in algorithmic systems (e.g., a right to explanation of automated decisions, which to date has proven very challenging to implement). In addition, they could facilitate the development of “fairer” algorithmic processes that take into account a particular user’s context and extent of “informedness” (Koene et al., 2017).
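
As a sketch of what tracing, rather than erasing, such diversity might look like, the snippet below computes the same association score over embeddings trained on different corpora (real gensim corpus identifiers; the word lists are again illustrative) and records the source of each result:

    # Sketch: the same association, measured over corpora with different
    # origins, can differ in magnitude or sign -- a trace of 'knowledge
    # diversity' rather than a single ground truth.
    import numpy as np
    import gensim.downloader as api

    def association(model, word, attrs_a, attrs_b):
        cosine = lambda u, v: np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        w = model[word]
        return (np.mean([cosine(w, model[a]) for a in attrs_a])
                - np.mean([cosine(w, model[b]) for b in attrs_b]))

    pleasant = ["love", "peace", "friend"]
    unpleasant = ["hatred", "war", "enemy"]

    for corpus in ["glove-wiki-gigaword-100", "glove-twitter-100"]:
        model = api.load(corpus)
        score = association(model, "police", pleasant, unpleasant)
        print({"corpus": corpus, "word": "police", "score": round(score, 4)})

Keeping the provenance of each score (corpus, time period, community of writers) alongside the score itself is one concrete way to represent, rather than suppress, the diversity described above.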

The *purpose* of the special issue is to bring together researchers from different disciplines who are interested in analysing and tackling bias within their own disciplines, arising from the data, algorithms and methods they use. The theme of the special issue is social and cultural biases in information, algorithms, and systems, which includes, but is not limited to, the following areas:
- Bias in sources of data and information (e.g., datasets, data production, publications, visualisations, annotations, knowledge bases)
- Bias in categorisation and representation schemes (e.g., vocabularies, standards, etc.)
- Bias in algorithms (e.g., information retrieval, recommendation, classification, etc.)
- Bias in the broader context of information and social systems (e.g., social media, search engines, social networks, crowdsourcing, etc.)
- Considerations in evaluation (e.g., to identify and avoid bias, to create unbiased test and training collections, crowdsourcing, etc.)
- Interactions between individuals, technologies and data/information
- Considerations for data governance and policy

As the topic is highly interdisciplinary, we expect this to be reflected in the submissions. We intend to invite authors from multiple disciplines, including data/information science, computer science, the social sciences, and psychology. The resulting special issue may also be of great interest to practitioners (e.g., in government, non-profit organisations, or companies) and educators (e.g., in digital literacy).


*Submission and Publication*

Authors are invited to submit original and unpublished papers. All submissions will be peer-reviewed and judged on correctness, originality, significance, quality of presentation, and relevance to the special issue topics of interest. Submitted papers should not have appeared in or be under consideration for another journal.
Instructions for authors:
http://emeraldgrouppublishing.com/products/journals/author_guidelines.htm?id=oir
Paper submission via https://mc.manuscriptcentral.com/oir
Please select the correct issue to submit to: “Social and Cultural Biases in Information, Algorithms, and Systems”.



*Important Dates*

- Submission Deadline: October 31, 2018
- First Round Notification: December 31, 2018
- Revision Due Date: February 2019
- Final Notification: April 2019
- Final Manuscript Due Date: June 2019
- Publication Date: July 2019


*Guest Editors*

Dr. Jo Bates, Information School, University of Sheffield, UK
Prof. Paul Clough, Information School, University of Sheffield, UK
Prof. Robert Jäschke, Humboldt-Universität zu Berlin, Germany
Prof. Jahna Otterbacher, Open University of Cyprus
Prof. Kristene Unsworth, Stockton University, New Jersey, USA

Related Resources

IEEE-Ei/Scopus-ITCC 2025   2025 5th International Conference on Information Technology and Cloud Computing (ITCC 2025)-EI Compendex
SPIE-Ei/Scopus-DMNLP 2025   2025 2nd International Conference on Data Mining and Natural Language Processing (DMNLP 2025)-EI Compendex&Scopus
IEEE-Ei/Scopus-CNIOT 2025   2025 IEEE 6th International Conference on Computing, Networks and Internet of Things (CNIOT 2025) -EI Compendex
ACM SAC 2025   40th ACM/SIGAPP Symposium On Applied Computing
Social Sustainability 2025   Twenty-first International Conference on Environmental, Cultural, Economic & Social Sustainability
IEEE CACML 2025   2025 4th Asia Conference on Algorithms, Computing and Machine Learning (CACML 2025)
IMCOM 2025   19th International Conference on Ubiquitous Information Management and Communication
SPIE-Ei/Scopus-CMLDS 2025   2025 2nd International Conference on Computing, Machine Learning and Data Science (CMLDS 2025) -EI Compendex & Scopus
Hong Kong-MIST 2025   2025 Asia-Pacific Conference on Marine Intelligent Systems and Technologies (MIST 2025)
IJCSES 2024   International Journal of Computer Science and Engineering Survey