Risks and Countermeasures for Using Large Language Models in Business: Necessity of Establishing Internal Guidelines to Avoid Information Leakage
Matsuda, Fumika (2024)
Bachelor's Programme in Computing and Electrical Engineering
Faculty of Engineering and Natural Sciences
Approval date
2024-12-05
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-2024120210664
Abstract
The widespread use of Large Language Models (LLMs), such as ChatGPT, has driven significant advancements across various industries but has also raised concerns about the risk of information leakage. This study addresses these risks in the workplace and proposes formulating internal guidelines as an effective countermeasure. The feasibility and potential impact of such guidelines are evaluated through a literature review and a qualitative analysis of 145 survey responses. The findings suggest that implementing internal guidelines is a practical and effective strategy for mitigating the risk of information leakage when general-purpose LLMs are used at work. The results contribute to the development of organizational strategies for safe LLM use and point to directions for further research, including refining the guidelines and exploring their broader applicability.
Collections
- Bachelor's theses [8709]