Comparative Study of Web Front-End Frameworks in the Context of Conversational Artificial Intelligence
Arola, Joonas (2024)
Master's Programme in Computer Science
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of acceptance
2024-06-26
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202406197327
Abstract
The exponential growth of web-based platforms has emphasized the significance of front-end quality and performance in delivering optimal user experiences. Amidst this surge, the integration of artificial intelligence (AI) technologies into web applications has become increasingly prevalent. However, there remains a gap in research concerning the optimization of front-end performance and quality specifically for AI interfaces. This thesis addresses this gap by conducting a comparative study of four popular front-end frameworks (Angular, React, Svelte, and Vue) within the context of a small-scale web application featuring a chatbot powered by OpenAI's GPT-3.5 model.
The study aims to assess how these frameworks perform in the presence of conversational AI and to answer key questions regarding their performance and quality in small-scale applications. Using a case study approach, metrics such as load times, request times, memory usage, and maintainability index were evaluated to compare the frameworks. The findings suggest that prompt complexity significantly impacts the processing time of the GPT-3.5 model, influencing the quality of the application. Additionally, recommendations are provided for choosing a front-end framework based on specific performance metrics. However, limitations exist, including the narrow scope of the test application and the exclusion of certain front-end frameworks, warranting further research into scalability and security aspects. This study contributes valuable insights into optimizing front-end performance and quality in conversational AI applications, guiding developers in selecting suitable frameworks for similar projects.
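The thesis itself does not reproduce its measurement code in this record. As a rough illustration of how a "request time" metric for the chatbot might be captured in the kind of test application described, the following is a minimal TypeScript sketch. The endpoint URL, model name, environment variable, and timing approach are assumptions for illustration, not taken from the thesis.

```typescript
// Minimal sketch: timing a single chat request to the GPT-3.5 model,
// roughly as a front-end test application might measure "request time".
// Assumptions (not from the thesis): the standard OpenAI chat completions
// endpoint, an API key in OPENAI_API_KEY, and a Node 18+ runtime with
// global fetch and performance.

async function timeChatRequest(prompt: string): Promise<number> {
  const start = performance.now();

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  await response.json(); // wait for the full body before stopping the clock

  return performance.now() - start; // elapsed milliseconds
}

// Longer, more complex prompts would be expected to yield longer request
// times, consistent with the finding that prompt complexity drives the
// GPT-3.5 model's processing time.
timeChatRequest("Summarise the HTTP request lifecycle in one sentence.")
  .then((ms) => console.log(`Request time: ${ms.toFixed(0)} ms`));
```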