Assessing the translation of ethical AI guidelines into practices of healthcare product providers: Considerations on data representativeness and inclusiveness
Akmaikina, Aleksandra (2025)
Master's Programme in Public and Global Health
Faculty of Social Sciences
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
Date of approval
2025-02-21
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202502202338
Abstract
The integration of artificial intelligence (AI) into healthcare (HC) practices is continuously expanding, which has raised concerns worldwide about the ethical values and trustworthiness of AI-based HC products. One of these concerns is social and demographic discrimination caused by the quality of data used in HC AI products. Numerous ethical guidelines, implementation frameworks, and integration strategies for trustworthy AI have been developed to mitigate these disparities. However, their uptake in the HC sector is more spontaneous and uncertain than systematic and consistent, and there is a lack of reported empirical evidence on how AI practitioners apply existing guidelines and recommendations in their work.
This study aimed to examine the usability of ethical AI guidelines in HC through the experiences and perceptions of AI practitioners. The thesis explored barriers to the translation of ethical AI principles into HC practices and investigated practitioners' awareness of the risks associated with poor representativeness and inclusiveness of data used by AI systems. The design of this qualitative study is based on a mixed-method approach combining thematic analysis, pattern matching, and ethical analysis. Thematic analysis was applied to data collected from normative documents and from eight interviews with experts, including AI practitioners, HC AI entrepreneurs, AI educators, and academic experts in AI ethics.
Participants highlighted miscommunication and differences in ethical values among AI practitioners, policymakers, and academics. Limitations in the translation of ethical AI guidelines were related to the lack of transparency of AI systems' ethical assessment, insufficient education on AI ethics among practitioners involved in AI product creation, and mistrust towards supervising bodies and policymakers. The go-to-market process was perceived as fragmented and unclear, while the environment and organizational structure were described as unsupportive of AI practitioners concerned with ethics. Overall, the principlistic nature of existing guidelines and the insufficient domain expertise involved in their uptake emerged as recurring patterns.
There is a need for multidisciplinary and cross-sectoral collaboration to enhance the integration of ethical principles into AI design, development, assessment, and governance. In addition, the accessibility of diverse, representative, and inclusive datasets should be improved, alongside the availability of transparent reports on data quality assessment. The practical barriers and concerns identified here warrant further research, potentially leading to interventions and additional case studies.