posted by organizer: mirkomarras

SI Algo Bias & Fairness 2020 : Special Issue on Algorithmic Bias and Fairness in Search and Recommendation - Information Processing & Management (I.F. 3.892)


Submission Deadline Nov 15, 2020
Notification Due Jan 15, 2021
Final Version Due Mar 15, 2021
Categories    recommender systems   information retrieval   fairness   bias

Call For Papers

Please accept our apologies if you receive multiple copies of this call.
Please forward it to interested colleagues and students.

*** Call for Papers ***
Special Issue on Algorithmic Bias and Fairness in Search and Recommendation
Information Processing & Management (Elsevier)
Impact Factor: 3.892

*** Important dates ***
- Manuscript submission due: November 15, 2020
- First round decision made: January 15, 2021
- Revised manuscript due: March 15, 2021
- Final decision made: May 15, 2021
- Final paper due: June 15, 2021

*** Aims and Scope ***
Search and recommendation algorithms play a primary role in helping individuals filter the overwhelming number of alternatives our daily life offers. This automated intelligence is deployed on a myriad of platforms across domains, from e-commerce to education, from health to social media, and beyond. Ongoing research is bringing search and recommendation algorithms closer together, with search algorithms being personalized based on users' characteristics and recommendation algorithms being optimized for ranking quality. This convergence makes it possible to identify common challenges and shared priorities, which are essential to tailor these systems to the needs of our society.

Among the aspects receiving special attention in search and recommendation so far, the capability to uncover, characterize, and counteract data and algorithmic biases, while preserving the original level of accuracy, is proving to be prominent and timely. Both classes of algorithms are trained on historical data, which often contains imbalances and inequalities. Such patterns in the training data may be captured and even emphasized in the results these algorithms return to users, leading to biased or even unfair decisions. This can happen when an algorithm systematically discriminates against users, either as individuals or as members of a legally protected class identified by common sensitive attributes.
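As a purely illustrative sketch of how imbalances in historical data can be emphasized by an algorithm, the toy example below trains a naive popularity-based recommender on a skewed interaction log and shows how exposure concentrates on the already-popular item. All data, names, and the recommender itself are hypothetical and not drawn from the call.

```python
from collections import Counter

# Hypothetical interaction log with a long-tail skew: item "A" alone
# accounts for half of all (user, item) interactions.
interactions = [
    ("u1", "A"), ("u2", "A"), ("u3", "A"), ("u4", "A"),
    ("u1", "B"), ("u2", "B"),
    ("u3", "C"),
    ("u4", "D"),
]

def popularity_recommender(interactions, k):
    """Recommend the k globally most-interacted items to every user."""
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

top2 = popularity_recommender(interactions, k=2)

# "A" holds 50% of the interactions in the data, but once every user
# receives the same top-2 list it captures 100% of the first slot:
share_in_data = Counter(i for _, i in interactions)["A"] / len(interactions)
print(top2, share_in_data)  # ['A', 'B'] 0.5
```

The amplification from a 50% data share to full ownership of the top position is exactly the kind of bias-propagation pattern the paragraph above describes.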

Given the increasing adoption of systems empowered with search and recommendation capabilities, it is crucial to ensure that their decisions do not lead to biased or even discriminatory outcomes. Controlling the effects of popularity bias to improve the user's perceived quality of the results, supporting consumers and providers with fair rankings and recommendations, and providing transparent results are examples of challenges that require attention. This special issue intends to bring together original research methods and applications that put people first, inspect social and ethical impacts, and strengthen public trust in search and recommendation technologies. The goal is to foster a community-wide dialogue on new research perspectives in this field.

*** Topics ***
We solicit different types of contributions (research papers, surveys, replicability and reproducibility studies, resource papers, systematic review articles) on algorithmic bias in search and recommendation, focused on, but not limited to, the following areas. If in doubt about suitability, please contact the Guest Editors.

Data Set Collection and Preparation:
- Managing imbalances and inequalities within data sets
- Devising collection pipelines that lead to fair and unbiased data sets
- Collecting data sets useful for studying potentially biased and unfair situations
- Designing procedures for creating synthetic data sets for research on bias and fairness

Countermeasure Design and Development:
- Conducting exploratory analyses that uncover biases
- Designing treatments that mitigate biases (e.g., popularity bias mitigation)
- Devising interpretable search and recommendation models
- Providing treatment procedures whose outcomes are easily interpretable
- Balancing inequalities among different groups of users or stakeholders
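One of the treatment families mentioned above, popularity bias mitigation, is often approached by re-ranking. The sketch below is a minimal, assumed example of such a re-ranker: it penalizes each candidate's relevance score by a fraction of its popularity. The scoring rule, the `lam` trade-off weight, and all values are illustrative assumptions, not methods endorsed by the call.

```python
def rerank(candidates, popularity, lam=0.5):
    """Popularity-penalized re-ranking (illustrative sketch).

    candidates: {item: relevance score}
    popularity: {item: popularity share in [0, 1]}
    Returns items ordered by (relevance - lam * popularity), descending.
    """
    adjusted = {i: s - lam * popularity.get(i, 0.0)
                for i, s in candidates.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

# Hypothetical candidates: "A" is most relevant but also most popular.
candidates = {"A": 0.9, "B": 0.85, "C": 0.8}
popularity = {"A": 0.5, "B": 0.25, "C": 0.125}
print(rerank(candidates, popularity))  # ['C', 'B', 'A']
```

With `lam=0`, the ranking degenerates to pure relevance order; raising `lam` progressively promotes long-tail items, making explicit the accuracy-versus-bias trade-off that mitigation treatments must balance.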

Evaluation Protocol and Metric Formulation:
- Conducting quantitative experimental studies on bias and unfairness
- Defining objective metrics that consider fairness and/or bias
- Formulating bias-aware protocols to evaluate existing algorithms
- Evaluating existing strategies in unexplored domains
- Comparative studies of existing evaluation protocols and strategies
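To make the metric-formulation topic concrete, the sketch below computes one possible group-fairness signal for a single ranking: the average position-discounted exposure each provider group receives. The `1 / log2(pos + 1)` discount is a common convention assumed here for illustration; the function, data, and group labels are hypothetical.

```python
import math

def group_exposure(ranking, groups):
    """Average position-discounted exposure per provider group.

    ranking: ordered list of items (best first)
    groups:  {item: group label}
    Uses the assumed discount 1 / log2(pos + 1) for the item at
    1-indexed position pos.
    """
    totals, counts = {}, {}
    for pos, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] = totals.get(g, 0.0) + 1.0 / math.log2(pos + 1)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

ranking = ["A", "B", "C", "D"]
groups = {"A": "g1", "B": "g1", "C": "g2", "D": "g2"}
exp = group_exposure(ranking, groups)

# The absolute gap between group averages is one candidate unfairness
# signal a bias-aware evaluation protocol might report.
disparity = abs(exp["g1"] - exp["g2"])
print(exp, round(disparity, 3))
```

A protocol built on such a metric could then compare algorithms not only on ranking quality but also on how evenly they distribute exposure across groups.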

Case Study Exploration:
- E-commerce platforms
- Educational environments
- Entertainment websites
- Healthcare systems
- Social media
- News platforms
- Digital libraries
- Job portals
- Dating platforms

*** Paper Submission and Review ***

Submitted papers must conform to the author guidelines available on the IPM journal website. Authors are required to submit their manuscripts online through the IPM submission site, selecting the article type “SI: Algo Bias & Fairness”.

Submissions must represent original material that has not been published elsewhere and is not under review for another refereed publication. If any portion of your submission has previously appeared in, or will appear in, a conference or workshop proceeding, you should state this at the time of submission, make sure the submission references the conference publication, and supply a copy of the conference version(s). Please also provide a brief description of the differences between the submitted manuscript and the preliminary version(s). You must select the appropriate designation for the files during the submission process, to help the guest editors and reviewers differentiate between the files.

Submissions will be evaluated by at least three independent reviewers on the basis of relevance for the special issue, novelty, clarity, originality, significance of contribution, technical quality, and quality of presentation. The editors reserve the right to reject without review any submissions deemed to be outside the scope of the special issue. Authors are welcome to contact the special issue editors with questions about scope before preparing a submission.

*** Guest Editors ***

Ludovico Boratto
Data Science and Big Data Analytics Research Group
Eurecat - Centre Tecnològic de Catalunya, Barcelona, Spain

Stefano Faralli
Unitelma Sapienza University of Rome, Rome, Italy

Mirko Marras
Department of Mathematics and Computer Science
University of Cagliari, Cagliari, Italy

Giovanni Stilo
Department of Information Engineering, Computer Science and Mathematics
University of L’Aquila, L’Aquila, Italy

Related Resources

WSDM 2021   14th ACM Conference on Web Search and Data Mining
Bias 2020   International Workshop on Algorithmic Bias in Search and Recommendation
ECIR 2021   European Conference on Information Retrieval
KG-BIAS 2020   Bias in Automatic Knowledge Graph Construction: A Workshop
IJNGN 2020   International Journal of Next-Generation Networks
UMAP 2020   ACM International Conference on User Modeling, Adaptation and Personalization
ADCO 2020   7th International Conference on Advanced Computing
FATES 2020   2nd Workshop on Fairness, Accountability, Transparency, Ethics and Society on the Web
IJCSEA 2020   International Journal of Computer Science, Engineering and Applications
ICAIF 2020   ACM International Conference on AI in Finance