key: cord-0039603-qaztv02j authors: Boratto, Ludovico; Marras, Mirko; Faralli, Stefano; Stilo, Giovanni title: International Workshop on Algorithmic Bias in Search and Recommendation (Bias 2020) date: 2020-03-24 journal: Advances in Information Retrieval DOI: 10.1007/978-3-030-45442-5_84 sha: bb6df5b60f7fa35635bb4529623a837532f8e497 doc_id: 39603 cord_uid: qaztv02j Both search and recommendation algorithms provide results based on their relevance for the current user. In order to do so, such a relevance is usually computed by models trained on historical data, which is biased in most cases. Hence, the results produced by these algorithms naturally propagate, and frequently reinforce, biases hidden in the data, consequently strengthening inequalities. Being able to measure, characterize, and mitigate these biases while keeping high effectiveness is a topic of central interest for the information retrieval community. In this workshop, we aim to collect novel contributions in this emerging field and to provide a common ground for interested researchers and practitioners. Search and recommendation are getting closer and closer as research areas. Though they require fundamentally different inputs, i.e., the user is asked to provide a query in search, while implicit and explicit feedback is leveraged in recommendation, existing search algorithms are being personalized based on users' profiles and recommender systems are optimizing their output on the ranking quality. Both classes of algorithms aim to learn patterns from historical data that conveys biases in terms of unbalances and inequalities. These hidden biases are unfortunately captured in the learned patterns, and often emphasized in the results these algorithms provide to users [2] . 
When a bias affects a sensitive attribute of a user, such as gender or religion, the inequalities reinforced by search and recommendation algorithms can even lead to severe societal consequences, such as discrimination against users [4]. For this reason, being able to detect, measure, characterize, and mitigate these biases while preserving high effectiveness is a prominent and timely topic for the IR community. Mitigating the effects of popularity bias [1, 5, 6], ensuring results that are fair with respect to users [3, 7], and being able to interpret why a model returns a given recommendation or search result are examples of challenges that matter in real-world applications. This workshop aims to collect new contributions in this emerging field and to provide a common ground for interested researchers and practitioners.

The workshop welcomes contributions on all topics related to algorithmic bias in search and recommendation, focused on (but not limited to):

- Data Set Collection and Preparation:
  • Managing imbalances and inequalities within data sets.
  • Devising collection pipelines that lead to fair and unbiased data sets.
  • Collecting data sets useful for studying potentially biased and unfair situations.
  • Designing procedures for creating synthetic data sets for research on bias and fairness.
- Countermeasure Design and Development:
  • Conducting exploratory analyses that uncover biases.
  • Designing treatments that mitigate biases (e.g., popularity bias mitigation).
  • Devising interpretable search and recommendation models.
  • Providing treatment procedures whose outcomes are easily interpretable.
  • Balancing inequalities among different groups of users or stakeholders.
- Evaluation Protocol and Metric Formulation:
  • Conducting quantitative experimental studies on bias and unfairness.
  • Defining objective metrics that account for fairness and/or bias.
  • Formulating bias-aware protocols to evaluate existing algorithms.
  • Evaluating existing strategies in unexplored domains.
- Case Study Exploration:
  • News channels.
  • E-commerce platforms.
  • Educational environments.
  • Entertainment websites.
  • Healthcare systems.
  • Social networks.

The workshop has the following main objectives:

1. Raise awareness of the algorithmic bias problem within the IR community.
2. Identify the social and human dimensions affected by algorithmic bias in IR.
3. Solicit contributions from researchers who are facing algorithmic bias in IR.
4. Gain insights into existing approaches, recent advances, and open issues.
5. Familiarize the IR community with existing practices from the field.
6. Uncover gaps between academic research and real-world needs in the field.

References

1. Controlling popularity bias in learning-to-rank recommendation.
2. The effect of algorithmic bias on recommender systems for massive open online courses.
3. Balanced neighborhoods for multi-sided fairness in recommendation.
4. Algorithmic bias: from discrimination discovery to fairness-aware data mining.
5. What recommenders recommend: an analysis of recommendation biases and possible countermeasures.
6. Correcting popularity bias by enhancing recommendation neutrality.
7. Fairness in reciprocal recommendations: a speed-dating study.