Realizing Privacy-Preserving Machine Learning Through Hybrid Homomorphic Encryption
Budžys, Mindaugas (2023)
Master's Programme in Information Technology
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of approval
2023-03-30
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202303112886
Abstract
The rising popularity of machine learning (ML) in modern-day data analysis has given scientists, businesses and ordinary users access to powerful tools that provide accurate insights into complex data. However, as research into ML has progressed, it has become apparent that standard ML models leak private information, which can jeopardize the confidentiality of sensitive data. For this reason, researchers have begun applying privacy-preserving techniques, such as differential privacy, homomorphic encryption and secure multi-party computation, to ML in order to create privacy-preserving machine learning (PPML). Adoption of these novel techniques has been limited by their well-documented shortcomings, which affect the efficiency and usability of the resulting ML models. To help advance the field and ease adoption, this thesis covers in detail the existing methods used in PPML and the limitations of state-of-the-art approaches. Additionally, two novel hybrid homomorphic encryption (HHE) protocols are proposed to demonstrate the practical viability of PPML in machine-learning-as-a-service environments. Experiments on these protocols show great promise in terms of computational and communication efficiency when compared to standard homomorphic encryption approaches. This efficiency paves the way towards applying HHE on resource-limited devices and makes PPML available to a wider range of devices. The protocols are analysed against a powerful adversary to demonstrate their security and to show that the privacy leakage of the ML model is reduced.
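To make the core idea of HHE concrete, the sketch below illustrates the transciphering flow that such protocols build on: the client encrypts its data with a cheap symmetric cipher and sends the key material encrypted under homomorphic encryption (HE); the server then removes the symmetric layer homomorphically, obtaining HE ciphertexts on which ML inference can run. The `ToyHE` class and `keystream` function are placeholders invented purely for illustration and are not the thesis's protocols; a real deployment would use an HE-friendly cipher evaluated over a scheme such as BGV/BFV.

```python
# Minimal sketch of the hybrid homomorphic encryption (HHE) data flow,
# assuming a bit-level stream cipher and an HE scheme supporting XOR on
# encrypted bits. ToyHE is a stand-in, NOT a real HE scheme.

import secrets

class ToyHE:
    """Placeholder for a homomorphic encryption scheme.

    Real HE ciphertexts hide the plaintext; here bits are stored in the
    clear only so the protocol flow stays runnable and easy to follow.
    """
    def encrypt(self, bit: int) -> dict:
        return {"ct": bit}             # stand-in for an HE ciphertext

    def xor_plain(self, ct: dict, bit: int) -> dict:
        return {"ct": ct["ct"] ^ bit}  # homomorphic XOR with a public bit

    def decrypt(self, ct: dict) -> int:
        return ct["ct"]

def keystream(key: int, n: int) -> list:
    # Toy keystream derived from the key; a real HHE design uses an
    # HE-friendly cipher whose circuit the server can evaluate.
    return [(key >> (i % 64)) & 1 for i in range(n)]

# --- Client side: only cheap symmetric encryption -------------------
he = ToyHE()
sym_key = secrets.randbits(64)
message = [1, 0, 1, 1, 0, 0, 1, 0]
ks = keystream(sym_key, len(message))
sym_ct = [m ^ k for m, k in zip(message, ks)]
# The client also sends the keystream encrypted under HE (in a real
# protocol: the key, from which the server derives the keystream
# homomorphically by evaluating the cipher's circuit).
enc_keystream = [he.encrypt(k) for k in ks]

# --- Server side: transciphering into the HE domain -----------------
# XORing the public symmetric ciphertext into the encrypted keystream
# yields HE encryptions of the message bits, so the server never sees
# the plaintext; encrypted ML inference would then run on these.
enc_message = [he.xor_plain(ek, c) for ek, c in zip(enc_keystream, sym_ct)]

assert [he.decrypt(ct) for ct in enc_message] == message
```

The efficiency gain described in the abstract comes from this division of labour: the client performs only symmetric encryption and sends compact symmetric ciphertexts instead of large HE ciphertexts, while the expensive homomorphic work is shifted to the server.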