Designing a Benchmark Framework for ML Models in Edge Devices
Haputhantrige, Chalith Tharuka Gunasekara (2024)
Master's Programme in Computing Sciences
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of approval
2024-05-21
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202405135735
Abstract
The deployment of ML models on edge devices has gained significant research interest due to the growing demand for real-time processing and decision-making. However, integrating ML models into resource-limited edge devices raises concerns about their operational capability and real-world performance. Benchmarking these models on edge devices offers a systematic approach to addressing these concerns.
This thesis presents a comprehensive framework for benchmarking ML models on edge devices, inspired by the Liquid AI theme and the need for a standardised benchmarking framework. The framework is designed to evaluate ML models implemented in various ML frameworks across diverse edge devices. It simplifies the benchmarking process while providing a structured methodology for collecting meaningful, component-wise performance metrics in ML applications.
The benchmarking components are divided into two parts: initialization and execution. This division enables a clear comparison of runtime performance, which is crucial for real-world use cases. Additionally, this study proposes an automated benchmarking method and explores the extent to which the benchmarking process can be automated. The thesis also presents benchmark results obtained using the proposed framework, offering insights and key considerations for interpreting results produced with it.
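As a rough illustration of the initialization/execution split described above, a runtime-agnostic timing harness might look like the following sketch. The helper names (`benchmark`, `init_fn`, `run_fn`) are hypothetical and are not the framework's actual API; they simply stand in for the two benchmarking components.

```python
import time
import statistics
from typing import Any, Callable


def benchmark(init_fn: Callable[[], Any],
              run_fn: Callable[[Any], Any],
              warmup: int = 5,
              iterations: int = 100) -> dict:
    """Time the initialization and execution phases separately.

    init_fn: loads the model and prepares the runtime (initialization).
    run_fn:  performs one inference with the prepared model (execution).
    """
    # Initialization: model loading, interpreter/session setup, etc.
    t0 = time.perf_counter()
    model = init_fn()
    init_ms = (time.perf_counter() - t0) * 1e3

    # Warm-up runs are excluded so caches and lazy setup do not skew latency.
    for _ in range(warmup):
        run_fn(model)

    # Execution: per-inference latency over repeated runs.
    latencies = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        run_fn(model)
        latencies.append((time.perf_counter() - t0) * 1e3)

    return {
        "init_ms": init_ms,
        "mean_ms": statistics.mean(latencies),
        "p95_ms": statistics.quantiles(latencies, n=20)[-1],
    }
```

In this sketch, `init_fn` would wrap model loading for a given runtime (for example, constructing and allocating a TFLite interpreter), while `run_fn` wraps a single inference call, so the two components can be compared across devices and ML frameworks.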
The research concludes by discussing the limitations of the proposed methods, providing recommendations for further development, and identifying potential future research directions.