Multi-Attribute Bias Mitigation in Recommender Systems
Ahmed, Uzair (2024)
Master's Programme in Computing Sciences
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
Date of approval
2024-01-17
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202401091244
Abstract
The abundance of data available today opens up many opportunities for its consumption through a wide range of outlets. This calls for a framework that optimally selects relevant items to suggest to users. Recommender systems have long been the tool of choice for this exact use case.
Variational Autoencoder (VAE) based recommender systems have been very successful in matching users with potentially relevant items. VAEs rest on the assumption that similar user profiles exhibit similar preferences and behavior, so items can be suggested to a user by finding patterns in the item relevance of comparable users. User profiles can be grouped on various factors, the most important being their history of item ratings, alongside personal attributes such as age, country, and sex. An optimal VAE output should account for the most relevant items within each user group, but the model can also develop a bias towards those attributes. If not mitigated, such bias propagates as the data grows; for example, the model may learn that users of a certain nationality find a certain item relevant, introducing unfairness into the results.
A VAE-based framework (Stefanidis &amp; Borges, 2022) was developed to tackle this problem and minimize user unfairness and bias in recommendations by actively taking a user's sensitive attribute into account during training. We propose an improvement to this framework that considers multiple sensitive attributes at a time, in order to minimize user unfairness as far as possible and to improve the PREC@1 performance metric. Finally, we compare the results of our improved framework against F2VAE and document our findings across multiple metrics, including precision (PREC), unfairness (UFAIR), normalized discounted cumulative gain (NDCG), and recall.
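The ranking metrics named above can be sketched in a few lines. The snippet below is an illustrative Python sketch with binary relevance; in particular, the group-gap notion of unfairness shown at the end is an assumption made here for illustration, not necessarily the exact UFAIR definition used in the thesis.

```python
import math

def prec_at_k(ranked, relevant, k=1):
    """Fraction of the top-k recommended items that are relevant (PREC@K)."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

def ndcg_at_k(ranked, relevant, k=10):
    """Normalized discounted cumulative gain (NDCG@K) with binary relevance."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

def mean_prec(group, k=1):
    """Average PREC@K over a group of (ranked list, relevant set) pairs."""
    return sum(prec_at_k(ranked, rel, k) for ranked, rel in group) / len(group)

# Toy example: two user groups split on a sensitive attribute.
group_a = [(["x", "y", "z"], {"x", "z"})]
group_b = [(["y", "x", "z"], {"x", "z"})]

# One simple notion of group unfairness: the absolute gap in mean
# PREC@1 between the two groups (an illustrative assumption).
ufair = abs(mean_prec(group_a) - mean_prec(group_b))
```

Recall@K is computed analogously, dividing the number of hits by the size of the relevant set rather than by K.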