AI Policy Implementation in Finnish Universities: Challenges and Implications for Research and Development from a Multilevel Governance Perspective
Riega Cayetano, José Luis (2025)
Master's Programme in Research and Innovation in Higher Education
Faculty of Management and Business
This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
Date of approval
2025-09-08
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202509059015
Abstract
Universities operate within a layered AI governance landscape, where international principles and emerging regulations (such as the EU AI Act) define external boundaries, national arrangements translate them, and institutional rules make them workable in daily research. Within this structure, the integration of AI tools, particularly generative models, is reshaping research practice and has intensified the demand for transparency, documentation, and secure infrastructure. Despite growing attention to ethical guidance and data safeguards, the literature remains focused on pedagogy, offering limited insight into AI governance in research. Finnish universities operate within a highly autonomous system supported by strong coordination mechanisms and shared e-infrastructure (e.g., CSC and LUMI). These national resources provide common baselines for research infrastructure and policy implementation, but they still require institutional interpretation and adaptation to local workflows and capacities to meet evolving regulatory and ethical expectations. In this context, this thesis examines how artificial intelligence (AI) policy is implemented in Finnish universities across international, national, and institutional levels, and explores the challenges and implications for research and development (R&D).
To address this, the study adopts a qualitative multiple-case design focused on Tampere University and the University of Helsinki, using semi-structured interviews and document analysis. Guided by a multilevel governance and policy implementation framework, the study finds that implementation often emerged bottom-up, using soft instruments such as checklists and ethical templates, especially in the absence of detailed mandates. Both universities engaged with transparency and disclosure norms in research, but their approaches varied depending on internal capacities and interpretive discretion. A distinction emerged between governance focused on research practices and the oversight of applied or deployment contexts, where formal risk-based structures aligned with the EU AI Act were limited or under development. These patterns illustrate how the theoretical framework helps to explain how universities adapt external policy expectations through internal routines. Among its practical implications, the study highlights that transparency norms alone are insufficient; R&D settings require capacity-building, clear intake processes, and tailored guidance to support responsible AI use in high-autonomy university contexts.
