"Towards Relational Egalitarianism in the Third Wave of Algorithmic Justice"
| Title: | Towards Relational Egalitarianism in the Third Wave of Algorithmic Justice |
| --- | --- |
| University: | RWTH Aachen University |
| Department: | Computational Social Systems |
| Degree program: | Computational Social Systems |
| Written by: | Laila Wegner |

Full thesis available for download as a PDF
Abstract
Algorithmic decision-making (ADM) is used in various high-stakes applications such as lending, criminal sentencing, and hiring. Examples of unfair bias and discrimination against minoritized groups, such as Amazon's gender-biased hiring algorithm, have therefore raised concerns regarding algorithmic justice.
In response, the research field of fairness-aware machine learning emerged, which can be roughly divided into three waves (Huang et al., 2022; Kind, 2020): The 1st wave was dominated by guidelines and principles meant to ensure the fair development and use of ADM. To operationalize these guidelines, the 2nd wave of research developed several mathematical approaches to identify and mitigate unfair biases. Yet these purely technical approaches were increasingly criticized, leading to the 3rd wave of AI fairness, which emphasizes that algorithms are sociotechnical systems. While the 2nd wave is mainly based on a distributive understanding of justice and focuses on the outcomes of predictions (Kasirzadeh, 2022), the 3rd wave has a broader focus that includes power dynamics and social structures (Kind, 2020).
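To illustrate what such a 2nd-wave metric looks like, the following minimal Python sketch (illustrative only, not taken from the thesis; all names and data are hypothetical) computes the demographic parity difference, a standard distributive criterion that compares positive-prediction rates across two groups defined by a binarized protected attribute:

```python
# Minimal, illustrative sketch of a typical 2nd-wave fairness metric:
# the demographic parity difference, i.e. the gap in positive-prediction
# rates between two demographic groups (not from the thesis).

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    def rate(g):
        members = [p for p, a in zip(y_pred, group) if a == g]
        return sum(members) / len(members) if members else 0.0
    return abs(rate(0) - rate(1))

# Hypothetical binary hiring predictions (1 = invite to interview) for six
# applicants, with a binarized protected attribute -- exactly the kind of
# discretization the 3rd-wave literature criticizes.
y_pred = [1, 0, 1, 1, 0, 0]
group = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # |2/3 - 1/3| = 0.33...
```

Note how such a metric presupposes a discrete, pre-defined group variable; the 3rd-wave critiques summarized below target exactly this presupposition.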
This development from the 2nd to the 3rd wave mirrors the philosophical discourse on justice, which commonly distinguishes distributive from relational accounts. Distributive accounts focus on different currencies of equality (e.g., income, wealth, resources) and how they ought to be distributed. Relational accounts, by contrast, conceptualize equality in terms of the quality of social relations among citizens and the treatment of citizens by social institutions, focusing on power asymmetries, social relations, and structural injustices (Arneson, 2013).
The presented thesis incorporates the relational view into the discussion of algorithmic justice by analyzing the implications of the 3rd wave of AI fairness for ADM in hiring. To this end, the following research question is investigated: Which topics, including critical and constructive approaches, are discussed within the literature on relational algorithmic justice, and what are their implications for automated decision-making in hiring?
Accordingly, the aim of this thesis is twofold: on the one hand, it synthesizes the contents of the 3rd wave of algorithmic justice and its relational implications; on the other hand, it conducts a context-specific evaluation for the case study of hiring. To this end, the thesis follows an interdisciplinary method based on a systematic literature review (SLR) supplemented by normative reasoning, and is structured as follows:
Chapter 2 introduces the background of justice in more detail, covering the philosophical discourses on distributive and relational egalitarianism, and explicates a special commitment to the unconditional appreciation of human diversity as part of relational justice. The chapter concludes with a brief overview of current approaches to algorithmic fairness. Based on this background, Chapter 3 hypothesizes that the 3rd wave of algorithmic justice is influenced by the thematic discourses of relational justice; this observation informs the search query of the SLR. After the methodological details, the results are outlined according to the research question at the end of Chapter 3.

Particularly evident was the critical emphasis on the categorization and measurement of humans, stressing that the selection of protected attributes is subjective (e.g., Kong, 2022) and reduces fluid concepts such as gender identity to discrete categories (Tomasev et al., 2021). This oversimplification can lead to misrepresentation and stigmatization (Andrus & Villeneuve, 2022; Krupiy, 2020). Furthermore, the analyzed literature highlights the interplay between algorithms, power, and capitalism, including a critical analysis of approaches to intersectionality. Here, it stands out that current approaches to intersectional fairness focus on subgroup fairness while failing to engage with systems of oppression (Hoffmann, 2019; Kong, 2022). Additionally, the relational focus reveals several epistemic challenges of algorithmic fairness, highlighting, for example, that the discourse of algorithmic fairness is Western-centric (Birhane, 2021; Gwagwa et al., 2022; Tacheva, 2022) and hard-codes societal norms by treating constructed categories as facts (Green, 2022; Krupiy, 2020; Lu et al., 2022; Zimmermann & Lee-Stronach, 2021).
Striving for a context-based discussion, the identified topics structure the philosophical discussion of the implications for ADM in hiring in Chapter 4. Following a line of reasoning based on the commitment to the unconditional appreciation of human diversity, Section 4.1 reveals that ADM in hiring fails to recognize individual differences, and Section 4.2 illustrates that ADM in hiring constrains freedom of choice. It is concluded that ADM in hiring fails to meet one important condition of relational equality: the appreciation of human diversity. The chapter ends with practical recommendations in Section 4.3 before Chapter 5 concludes.
References
Andrus, M., & Villeneuve, S. (2022). Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1709–1721). ACM. https://doi.org/10.1145/3531146.3533226 Arneson, R. (2013). Egalitarianism. In Edward N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/egalitarianism/ Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns (New York, N.Y.), 2(2). https://doi.org/10.1016/j.patter.2021.100205 Green, B. (2022). Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness. Philosophy & Technology, 35(4). https://doi.org/10.1007/s13347-022-00584-6 Gwagwa, A., Kazim, E., & Hilliard, A. (2022). The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective. Patterns (New York, N.Y.), 3(4), 100462. https://doi.org/10.1016/j.patter.2022.100462 Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912 Huang, L. T.‑L., Chen, H.‑Y., Lin, Y.‑T., Huang, T.‑R., & Hun, T.‑W. (2022). Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Feminist Philosophy Quarterly, 8(3/4). Kasirzadeh, A. (2022). Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. In AIES '22: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery (ACM) (pp. 348–356). https://doi.org/10.1145/3514094.3534188 Kind, C. (2020, August 23). The term ‘ethical AI’ is finally starting to mean something. VentureBeat. https://venturebeat.com/ai/the-term-ethical-ai-is-finally-starting-to-mean-something/ Kong, Y. (2022). Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 485–494). ACM. https://doi.org/10.1145/3531146.3533114 Krupiy, T. (2020). A vulnerability analysis: Theorising the impact of artificial intelligence decision-making processes on individuals, society and human diversity from a social justice perspective. Computer Law & Security Review, 38, 105429. https://doi.org/10.1016/j.clsr.2020.105429 Lu, C., Kay, J., & McKee, K. R. (2022). Subverting machines, fluctuating identities: Re-learning human categorization. In FAccT ’22 (pp. 1005–1015). https://doi.org/10.1145/3531146.3533161 Tacheva, Z. (2022). Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science. Big Data & Society, 9(2), 205395172211129. https://doi.org/10.1177/20539517221112901 Tomasev, N., McKee, K. R., Kay, J., & Mohamed, S. (2021). Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. In M. Fourcade, B. Kuipers, S. Lazar, & D. Mulligan (Eds.), Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 254– 265). ACM. https://doi.org/10.1145/3461702.3462540 Zimmermann, A., & Lee-Stronach, C. (2021). Proceed with Caution.