Assisting early-stage software startups with LLMs: Effective prompt engineering and system instruction design
Ahlgren, Thea Lovise; Sunde, Helene Fønstelien; Kemell, Kai Kristian; Nguyen-Duc, Anh (2025-11)
Information and Software Technology
107832
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202507237728
Description
Peer reviewed
Abstract
Context: Early-stage software startups, despite their strong innovative potential, experience high failure rates due to factors such as inexperience, limited resources, and market uncertainty. Generative AI technologies, particularly Large Language Models (LLMs), offer promising support opportunities; however, effective strategies for their integration into startup practices remain underexplored.

Objective: This study investigates how prompt engineering and system instruction design can enhance the utility of LLMs in addressing the specific needs and challenges faced by early-stage software startups.

Methods: A Design Science Research (DSR) methodology was adopted, structured into three iterative cycles. In the first cycle, use cases for LLM adoption within the startup context were identified. The second cycle experimented with various prompt patterns to optimize LLM responses for the defined use cases. The third cycle developed "StartupGPT", an LLM-based assistant tailored for startups, exploring system instruction designs. The solution was evaluated with 25 startup practitioners through a combination of qualitative feedback and quantitative metrics.

Results: The findings show that tailored prompt patterns and system instructions significantly enhance user perceptions of LLM support in real-world startup scenarios. StartupGPT received strong evaluation scores across key dimensions: satisfaction (93.33%), effectiveness (80%), efficiency (80%), and reliability (86.67%). Nonetheless, areas for improvement were identified, particularly in context retention, personalization of suggestions, communication tone, and sourcing external references.

Conclusion: This study empirically validates the applicability of LLMs in early-stage software startups. It offers actionable guidelines for prompt and system instruction design and contributes both theoretical insights and a practical artifact, StartupGPT, that supports startup operations without necessitating costly LLM retraining.
Collections
- TUNICRIS publications [24175]
