Trepo

Incremental Learning in Deep Neural Networks

Liu, Yuan (2015)

 
Open file
Liu.pdf (4.523 MB)




Master's Degree Programme in Information Technology
Tieto- ja sähkötekniikan tiedekunta - Faculty of Computing and Electrical Engineering
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of approval
2015-08-12
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tty-201507291457
Abstract
Image classification is one of the most active yet challenging problems in computer vision. With the advent of the big-data era, training on large-scale datasets has become a hot research topic. Most work focuses on final performance rather than on the efficiency of the training procedure, yet training on large-scale datasets is known to take a long time. In light of this, we propose a novel incremental learning framework based on deep neural networks that improves both performance and efficiency simultaneously.

Our incremental learning framework proceeds in a coarse-to-fine manner. The idea is to use the parameters of a network trained on low-resolution images to provide better initial values for the network trained on high-resolution images. There are two ways to implement this. The first uses networks with scaled filters: the filters of the deep network are enlarged by upscaling the parameters of the network previously trained on lower-resolution images. The second additionally adds convolutional filters to the network: we not only enlarge the filters by scaling their weights, but also increase their number. The filters carried over are transformed with the same scaling method, while the parameters of the newly added filters are initialised from scratch. Incremental learning thus lets the network retain the information learned from coarse images, use it to initialise the next, more detailed level, and continue learning new information from finer images.
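The filter-upscaling step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation: the function name `grow_filters`, the nearest-neighbour interpolation, and the small Gaussian initialisation of the newly added filters are all assumptions made for the example.

```python
import numpy as np

def grow_filters(weights, new_size, new_count=None, seed=0):
    """Initialise a larger conv layer from a smaller trained one.

    weights:   trained filters of shape (n_filters, channels, h, w)
               from the network trained on lower-resolution images.
    new_size:  spatial size of the enlarged filters.
    new_count: optional larger number of filters; the extra ones are
               randomly initialised (a hypothetical choice here).
    """
    n, c, h, w = weights.shape
    # Nearest-neighbour upscaling of each trained filter to new_size.
    rows = np.round(np.linspace(0, h - 1, new_size)).astype(int)
    cols = np.round(np.linspace(0, w - 1, new_size)).astype(int)
    scaled = weights[:, :, rows][:, :, :, cols]
    if new_count is not None and new_count > n:
        rng = np.random.default_rng(seed)
        # Newly added filters start from small random values.
        extra = rng.normal(0.0, 0.01, (new_count - n, c, new_size, new_size))
        scaled = np.concatenate([scaled, extra.astype(weights.dtype)], axis=0)
    return scaled

# Grow 3x3 filters trained on coarse images into 5x5 filters,
# and add two fresh filters for the finer-resolution network.
low_res = np.random.default_rng(0).normal(size=(4, 3, 3, 3))
high_res = grow_filters(low_res, new_size=5, new_count=6)
print(high_res.shape)  # (6, 3, 5, 5)
```

In a full training pipeline the returned array would be loaded as the initial weights of the higher-resolution network before fine-tuning, rather than starting from random initialisation.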

In conclusion, both solutions improve accuracy and efficiency at the same time. For the networks with scaled filters, the gain in accuracy is modest, but nearly 40% of the training time is saved. For the networks with added filters, accuracy increases from 10.8% to 12.1% on the ImageNet dataset and from 14.1% to 17.0% on the Places205 dataset, while training time is reduced by about 30%. In view of these results, adding new layers to deep neural networks while progressing from coarse to fine resolution is a promising direction.
Collections
  • Theses - higher university degree [40596]
Kalevantie 5
PL 617
33014 Tampereen yliopisto
oa[@]tuni.fi | Privacy | Accessibility statement