Feature-blind fairness in collaborative filtering recommender systems
Borges, Rodrigo; Stefanidis, Kostas (2022-02-22)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202204083130
Description
Peer reviewed
Abstract
Recommender systems were originally proposed for suggesting potentially relevant items to users, with the sole objective of providing accurate suggestions. As these recommenders were adopted in several domains, they were identified as generating biased results that could harm the items being recommended. Exposure in generated rankings, for instance in a job candidate selection scenario, should be fairly distributed among candidates regardless of their sensitive attributes (gender, race, nationality, age), in order to promote equal opportunities. It can happen, however, that no such sensitive information is available in the data used for training the recommender; in this case, there is still room for biases that lead to unfair treatment, referred to as feature-blind unfairness. In this work, we adopt Variational Autoencoders (VAE), considered the state-of-the-art technique for Collaborative Filtering (CF) recommendation, and present a framework for addressing fairness when only information about user-item interactions is available. More specifically, we are interested in position and popularity bias. The VAE loss function combines two terms associated with accuracy and quality of representation; we introduce a new term for encouraging fairness, and demonstrate its effect in promoting fair results at the cost of a tolerable decrease in recommendation quality. In our best scenario, position bias is reduced by 42% at the cost of a 26% reduction in recall for the top-100 recommendation results, compared to the same setting without any fairness constraints.
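As an informal sketch of the objective described above, assuming the standard multinomial-likelihood VAE for collaborative filtering as the base model, the modified training loss could take a form like the following; the weight \gamma and the penalty \mathcal{L}_{\mathrm{fair}} are hypothetical names introduced here for illustration, since the abstract does not specify the exact formulation:

\mathcal{L}(x;\theta,\phi) =
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]   % accuracy (reconstruction) term
  \;-\; \beta \,\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)    % quality-of-representation term
  \;-\; \gamma \,\mathcal{L}_{\mathrm{fair}}                        % added fairness term, e.g. a position/popularity-bias penalty

Under this reading, training maximizes the objective (or minimizes its negative), and \gamma controls the trade-off between recall and the reduction in position or popularity bias reported in the abstract.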
Collections
- TUNICRIS-julkaisut [19020]