Algorithmic Decision-making, Discrimination and Disrespect: An Ethical Inquiry
Sahlgren, Otto (2020)
Master's Programme in Philosophy
Faculty of Social Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of approval
2020-03-30
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202003102642
Abstract
The increasing use of algorithmic decision-making systems has raised significant legal and ethical concerns in several contexts of application, such as hiring, policing and sentencing. A range of literature in AI ethics shows how predictions and decisions generated on the basis of patterns in historic data may lead to discrimination against different demographic groups – in particular, those that are legally protected and/or in positions of vulnerability. Both in the literature and in public discourse, objectionable algorithmic discrimination is commonly identified as involving discriminatory intent, the use of sensitive or inaccurate information in decision-making, or the unintentional reproduction of systemic inequality. Some claim that algorithmic discrimination is inherently objectifying or unfair, and others find issue with the use of statistical evidence in high-stakes decision-making altogether. As this list of claims exemplifies, the discourse exhibits considerable discrepancies regarding two questions: (i) how does discrimination arise in the development and use of algorithmic decision-making systems, and (ii) what makes a given instance of algorithmic discrimination impermissible? Notably, the discussion around biased algorithms seems to have inherited conceptual problems that have long characterized the discussion on discrimination in legal and moral theory.
This study approaches the phenomenon of algorithmic discrimination from the point of view of the ethics of discrimination. Exploring Benjamin Eidelson’s pluralistic, disrespect-based theory of discrimination in particular, this study argues that while some instances may be wrong due to issues with accuracy, unfairness, and algorithmic bias, the wrongness of algorithmic discrimination cannot be exhaustively explained by reference to these issues alone. This study suggests that some prevalent issues with discrimination in algorithmic decision-making can be traced to distinct choices and processes pertaining to the design, development and human-controlled use of algorithmic systems. However, as machine learning algorithms perform statistical discrimination by default, biased design choices and issues with “human-in-the-loop” enactment of algorithmic outputs cannot offer the full picture as to why algorithmic decision-making may have a morally objectionable disparate impact on different demographic groups.
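To make the point that machine learning systems discriminate statistically by default more concrete, consider a minimal sketch in Python. Everything in it is an illustrative assumption rather than material from the thesis: a facially neutral decision rule is "learned" from hypothetical historic hiring data in which a proxy feature correlates with group membership, and the resulting selection rates are compared using the common four-fifths heuristic for disparate impact.

from statistics import mean

# Hypothetical "historic" applicants: (group, proxy_score, hired).
# The proxy score (imagine a zip-code-derived metric) correlates with
# group membership because past hiring disadvantaged group "B".
history = [
    ("A", 0.90, 1), ("A", 0.80, 1), ("A", 0.70, 1), ("A", 0.60, 0),
    ("B", 0.70, 1), ("B", 0.50, 0), ("B", 0.40, 0), ("B", 0.30, 0),
]

# "Learn" a threshold: the mean proxy score of past hires. The rule never
# mentions group membership, yet it encodes the historic pattern.
threshold = mean(score for _, score, hired in history if hired)

def decide(score):
    # Facially neutral rule applied uniformly to all new applicants.
    return score >= threshold

# New applicant pool drawn from the same proxy distribution.
applicants = [
    ("A", 0.85), ("A", 0.80), ("A", 0.78), ("A", 0.55),
    ("B", 0.80), ("B", 0.70), ("B", 0.55), ("B", 0.45),
]

def selection_rate(group):
    scores = [s for g, s in applicants if g == group]
    return sum(decide(s) for s in scores) / len(scores)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
# The "four-fifths" heuristic treats a ratio below 0.8 as prima facie
# evidence of disparate impact; here the ratio falls well below that.
print(f"disparate impact ratio: {rate_b / rate_a:.2f}")

On these assumed numbers the rule selects group A applicants at three times the rate of group B applicants, even though neither the threshold nor the decision function refers to group membership; the disparity is inherited entirely from the biased historic data. No discriminatory intent or biased design choice needs to be posited, which is precisely why such factors cannot offer the full picture.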
Applying Eidelson’s account – albeit with minor modifications – this study explains the wrongness of algorithmic discrimination by reference to the harm it produces, the demeaning social meaning it expresses, and the disrespectful social conduct it sustains and exacerbates by reinforcing stigma. Depending on context, algorithmic discrimination may produce significant individual and societal harms as well as reproduce patterns of behavior that go against the moral requirement that we treat each other both as moral equals and as autonomous individuals. The account also explains why formally similar but idiosyncratic instances of algorithmic discrimination that disadvantage groups not specified by socially salient traits, such as gender, may not be morally objectionable. A possible problem with this account stems from a lack of transparency in algorithmic decision-making: in constrained cases, algorithmic discrimination may be morally neutral if it is conducted in secret. While this conclusion is striking, the account is both more robust than alternative accounts and defensible if one understands transparency as a pre-condition for the satisfaction of multiple other ethical principles, such as trust, accountability, and integrity.
The study contributes to the discussion on discrimination in data mining and algorithmic decision-making by providing insight into both how discrimination may take place in novel technological contexts and how we should evaluate the morality of algorithmic decision-making in terms of dignity, respect, and harm. While room is left for further inquiry, the study serves to clarify the conceptual ground necessary for an adequate moral evaluation of instances of algorithmic discrimination.