F2VAE: A Framework for Mitigating User Unfairness in Recommendation Systems
Borges, Rodrigo; Stefanidis, Kostas (2022-04-25)
ACM
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202308037427
Description
Peer reviewed
Abstract
Recommendation algorithms are widely used nowadays, especially in scenarios of information overload (i.e., when users have too many options to choose from), due to their ability to suggest potentially relevant items to users in a personalized fashion. Users, nevertheless, may belong to separate groups according to sensitive attributes, such as age, gender or nationality, and the recommendation process might be biased towards one of these groups. If observed, this bias has to be mitigated actively, or it can propagate and be amplified over time. Here, we consider a relevant difference in recommendation quality among groups as unfair, and we argue that this difference should be kept as low as possible. We propose a framework named F2VAE for mitigating user-oriented unfairness in recommender systems. The framework is based on Variational Autoencoders (VAE) and introduces two extra terms in the VAE's standard loss function, one associated with fair representation and another associated with fair recommendation. The conflicting objectives associated with these terms are discussed in detail in a series of experiments considering the bias associated with users' nationality in a music consumption dataset. We recall recent works proposed for generating fair representations in the context of classification, and we adapt one of these methods to the recommendation task. F2VAE was able to increase precision by approximately 1% while reducing unfairness by 21% when compared to a standard VAE.
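The structure of the objective described above, a standard VAE loss extended with a fair-representation term and a fair-recommendation term, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the weight names (`lambda_repr`, `lambda_rec`) and the gap-based fairness penalty are assumptions made for the example.

```python
def group_gap(metric_by_group):
    """Illustrative unfairness measure: the gap between the best- and
    worst-served user groups on some recommendation-quality metric."""
    values = metric_by_group.values()
    return max(values) - min(values)


def f2vae_loss(recon_loss, kl_div, fair_repr_penalty, fair_rec_penalty,
               lambda_repr=0.5, lambda_rec=0.5):
    """Standard VAE loss (reconstruction + KL divergence) plus two extra
    weighted terms, one for fair representation and one for fair
    recommendation, as described in the abstract."""
    return (recon_loss + kl_div
            + lambda_repr * fair_repr_penalty
            + lambda_rec * fair_rec_penalty)


# With both fairness weights at zero the objective reduces to a plain VAE loss.
plain = f2vae_loss(1.0, 0.5, 3.0, 4.0, lambda_repr=0.0, lambda_rec=0.0)

# A hypothetical per-group precision, e.g. by nationality, yields the penalty.
gap = group_gap({"group_a": 0.8, "group_b": 0.6})
```

Tuning the two weights trades recommendation quality against fairness, which is the conflict between objectives that the paper's experiments explore.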
Collections
- TUNICRIS-julkaisut [19265]