Automation of Defensics for protocol implementation fuzzing
Ahola, Nico (2022)
Master's Programme in Information Technology
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Acceptance date
2022-05-04
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202204012951
Abstract
Fuzzing is a security testing method that has existed for decades and has been widely adopted by the industry. Its goal is to expose vulnerabilities by generating inputs that cause unexpected behaviour in a system, e.g. software crashes. Several different types of fuzzing exist, one of which is network protocol fuzzing. A network protocol fuzzer tries to find flaws in protocol implementations. Defensics is one such fuzzer.
A Nokia team had been using Defensics manually via its GUI as part of their product's security testing. Operating the GUI had taken notable time. The GUI also has limitations that are not present when Defensics is used via its CLI or HTTP API. Another challenge with Defensics had been its execution time: in particular, test cases that Defensics marked as skipped or failed seemed to cause extremely slow behaviour.
To address the challenges, Defensics was added to the existing CI process and its suites were configured in a way that speeds up the fuzzing process when many cases are marked as skipped or failed. As part of the work, a Robot test suite and a Python program were created. The CI pipeline that executes Defensics calls the Robot test suite, which in turn calls the program. The program can execute Defensics processes in parallel using the Defensics CLI and multithreading.
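As an illustration of the parallel-execution idea, the following is a minimal Python sketch only, not the program built in this work; the Defensics CLI arguments, suite names and paths shown are hypothetical placeholders and would need to match the actual installation.

    # Sketch: run several Defensics CLI executions in parallel with a thread pool.
    # The command lines below are placeholders (assumed paths, suites and options).
    import subprocess
    from concurrent.futures import ThreadPoolExecutor, as_completed

    # Hypothetical suite invocations; real arguments depend on the Defensics setup.
    SUITE_RUNS = [
        ["java", "-jar", "/opt/defensics/boot.jar", "suite-a.set"],
        ["java", "-jar", "/opt/defensics/boot.jar", "suite-b.set"],
    ]

    def run_suite(cmd):
        """Run one Defensics CLI process and return its command, exit code and output."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        return cmd, result.returncode, result.stdout

    # Threads suffice here because each worker mostly waits on its subprocess.
    with ThreadPoolExecutor(max_workers=len(SUITE_RUNS)) as pool:
        futures = [pool.submit(run_suite, cmd) for cmd in SUITE_RUNS]
        for future in as_completed(futures):
            cmd, code, _ = future.result()
            print(f"{' '.join(cmd)} finished with exit code {code}")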
A comparison was made between the old and new suite configurations. The results show that execution time improved slightly when many skipped cases are encountered. Even with the improvement, fuzzing was concluded to be too slow to be executed in full for every product release candidate. Therefore, two pipelines exist: one that executes a small subset of cases for release candidates and another that runs the full set weekly or on demand.