Trepo

Super Neurons

Kiranyaz, Serkan; Malik, Junaid; Yamac, Mehmet; Duman, Mert; Adalioglu, Ilke; Guldogan, Esin; Ince, Turker; Gabbouj, Moncef (2023)

Open file: Super_Neurons.pdf (12.19 MB)


IEEE Transactions on Emerging Topics in Computational Intelligence
doi:10.1109/TETCI.2023.3314658
Permanent address of the publication: https://urn.fi/URN:NBN:fi:tuni-2023111710042

Description

Peer reviewed
Abstract
Self-Organized Operational Neural Networks (Self-ONNs) have recently been proposed as new-generation neural network models with nonlinear learning units, i.e., the generative neurons that yield an elegant level of diversity; however, like their predecessors, conventional Convolutional Neural Networks (CNNs), they still share a common drawback: localized (fixed) kernel operations. This severely limits the receptive field and information flow between layers and thus brings the necessity for deep and complex models. It is highly desirable to improve the receptive field size without increasing the kernel dimensions. This requires a significant upgrade over the generative neurons to achieve "non-localized kernel operations" for each connection between consecutive layers. In this article, we present superior (generative) neuron models (or super neurons in short) that allow random or learnable kernel shifts and thus can increase the receptive field size of each connection. The kernel localization process differs between the two super-neuron models: the first assumes randomly localized kernels within a range, while the second learns (optimizes) the kernel locations during training. An extensive set of comparative evaluations against conventional and deformable convolutional neurons, along with the generative neurons, demonstrates that super neurons can empower Self-ONNs to achieve superior learning and generalization capability with a minimal computational complexity burden. A PyTorch implementation of Self-ONNs with super neurons is now publicly shared.
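The core idea of the abstract — sampling each connection's kernel at a location displaced from the output pixel, so the receptive field grows without enlarging the kernel — can be illustrated with a minimal sketch. This is not the authors' PyTorch implementation; `shifted_conv2d` is a hypothetical helper using integer shifts for simplicity, whereas the paper's super neurons use random or learnable (generally fractional) shifts.

```python
import numpy as np

def shifted_conv2d(x, kernel, shift=(0, 0)):
    """Cross-correlate `kernel` over 2-D input `x`, sampling each patch
    at a location displaced by `shift` = (dy, dx) from the output pixel.
    With shift=(0, 0) this reduces to a conventional localized kernel;
    a nonzero shift gives the 'non-localized' kernel of a super neuron
    (integer shifts only, for illustration)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    dy, dx = shift
    # Pad enough so shifted patches near the border stay in bounds.
    pad = max(ph + abs(dy), pw + abs(dx))
    xp = np.pad(x, pad, mode="constant")
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + dy, j + pad + dx  # shifted patch centre
            patch = xp[ci - ph:ci + ph + 1, cj - pw:cj + pw + 1]
            out[i, j] = np.sum(patch * kernel)
    return out
```

In the learnable variant described in the abstract, the shifts would be trainable parameters optimized jointly with the kernel weights (which requires differentiable, interpolated sampling rather than the integer indexing above); in the random variant they are drawn within a fixed range per connection.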
Collections
  • TUNICRIS-julkaisut [23861]
Kalevantie 5
PL 617
33014 Tampereen yliopisto
oa[@]tuni.fi | Privacy | Accessibility statement