Robustness Analysis of High-Confidence Image Perception by Deep Neural Networks

Ahonen, Jukka (2020)

 
Open file: AhonenJukka.pdf (1.278 MB)




Teknisten tieteiden kandidaattiohjelma - Degree Programme in Engineering Sciences, BSc (Tech)
Tekniikan ja luonnontieteiden tiedekunta - Faculty of Engineering and Natural Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Acceptance date
2020-03-05
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202002112000
Abstract
Deep neural networks are nowadays the state-of-the-art method for many pattern recognition problems. As their performance grows, their robustness cannot be ignored. In particular, the lack of robustness against slightly perturbed inputs, known as adversarial examples, has been a hot topic in recent years. The main reason is safety: if an adversarial example that efficiently “fools” a neural network can be generated easily, the network cannot be trusted in real-life systems where safety is an issue.
The goal of this thesis is to better understand deep neural networks, their robustness, and the tools for testing that robustness. One such tool, the genetic algorithm, is studied further and implemented in Python to fool an example neural network, VGG16, into misclassifying images. Using VGG16's prediction probabilities as the fitness function, the algorithm applies crossover, mutation and elitism, among other techniques, to find the best adversarial solution. Three types of tests are conducted. First, a false-positive adversarial example is generated from random noise. Second, an example image is evolved into a false-negative adversarial example with the perturbation limited to an L∞ norm of 5. Finally, an example image is evolved into a false-positive adversarial example with the same L∞ limit of 5. When the perturbation is limited to an L∞ norm of 5, the difference between the original image and the adversarial image is unnoticeable to the human eye. The tests show that even a rather simple genetic algorithm can make VGG16 misclassify images with high confidence; deep neural networks without any safety measures appear not to be very robust against adversarial examples.
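The thesis's own implementation is available only in the attached PDF; the following is a minimal sketch of a genetic-algorithm attack in the spirit described above, assuming a pretrained Keras/TensorFlow VGG16. The population size, mutation rate, number of generations and the targeted-class fitness (corresponding to the false-positive case) are illustrative assumptions, not the thesis's actual parameters.

```python
# Sketch of an L-infinity-bounded genetic-algorithm attack on VGG16.
# Assumed, illustrative hyperparameters -- not taken from the thesis.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")           # pretrained classifier to be fooled
EPSILON = 5.0                               # L-infinity limit on the perturbation
POP_SIZE, GENERATIONS, MUTATION_RATE, ELITE = 50, 200, 0.01, 5

def fitness(original, perturbations, target_class):
    """Probability VGG16 assigns to the (wrong) target class for each candidate."""
    adversarial = np.clip(original + perturbations, 0, 255)
    probs = model.predict(preprocess_input(adversarial.copy()), verbose=0)
    return probs[:, target_class]

def evolve(original, target_class):
    """Evolve a perturbation that pushes `original` toward `target_class`."""
    # Population of perturbations, every pixel bounded to [-EPSILON, EPSILON].
    pop = np.random.uniform(-EPSILON, EPSILON, size=(POP_SIZE,) + original.shape[1:])
    for _ in range(GENERATIONS):
        scores = fitness(original, pop, target_class)
        order = np.argsort(scores)[::-1]            # best candidates first
        elite = pop[order[:ELITE]]                  # elitism: carry the best over unchanged
        children = []
        while len(children) < POP_SIZE - ELITE:
            a, b = pop[np.random.choice(order[:POP_SIZE // 2], 2)]
            mask = np.random.rand(*a.shape) < 0.5   # uniform crossover of two parents
            child = np.where(mask, a, b)
            mut = np.random.rand(*child.shape) < MUTATION_RATE
            child[mut] = np.random.uniform(-EPSILON, EPSILON, mut.sum())  # mutation
            children.append(child)
        pop = np.concatenate([elite, np.array(children)], axis=0)
    best = pop[np.argmax(fitness(original, pop, target_class))]
    return np.clip(original + best, 0, 255)         # adversarial image, still within the budget
```

For the false-negative case described in the abstract, the fitness would instead reward a low probability for the image's true class; the rest of the loop is unchanged.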
Collections
  • Kandidaatintutkielmat [3934]