Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets
Nanni, Loris; Brahnam, Sheryl; Paci, Michelangelo; Ghidoni, Stefano (2022-08-16)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202209157082
Description
Peer reviewed
Abstract
CNNs and other deep learners are now state-of-the-art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting. In some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building deep ensembles of pre-trained CNNs is one powerful method for overcoming this problem. Ensembles combine the outputs of multiple classifiers to improve performance. This method relies on diversity, which can be introduced at many levels of the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions in a set of CNNs or within different layers of a single CNN. This study examines the performance of both methods using a large set of twenty activation functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best-performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were each randomly replaced with one of the other activation functions. Results demonstrate the superior performance of this approach.
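The random-replacement idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and not the authors' implementation: the candidate list uses standard PyTorch activations as stand-ins for the paper's novel functions (2D Mexican ReLU, TanELU, etc.), and the helper randomize_activations, its replacement probability, and the softmax-averaging fusion rule are all illustrative assumptions.

import random

import torch
import torch.nn as nn
from torchvision import models

# Candidate replacements for ReLU. These standard PyTorch activations are
# stand-ins; the paper's novel functions are not reproduced here.
CANDIDATES = [nn.LeakyReLU, nn.ELU, nn.GELU, nn.SiLU, nn.Tanh]

def randomize_activations(model: nn.Module, prob: float = 0.5) -> nn.Module:
    # Walk the module tree; replace each ReLU with a randomly chosen
    # candidate with probability `prob`, otherwise keep the original.
    for name, child in list(model.named_children()):
        if isinstance(child, nn.ReLU):
            if random.random() < prob:
                setattr(model, name, random.choice(CANDIDATES)())
        else:
            randomize_activations(child, prob)
    return model

# Two pre-trained backbones with independently randomized activations,
# mirroring the VGG16 + ResNet50 pairing named in the abstract.
ensemble = [
    randomize_activations(models.vgg16(weights="IMAGENET1K_V1")),
    randomize_activations(models.resnet50(weights="IMAGENET1K_V1")),
]

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    # Fuse the members by averaging their softmax outputs, one common
    # combination rule; the paper's exact fusion rule may differ.
    with torch.no_grad():
        probs = [torch.softmax(m.eval()(x), dim=1) for m in ensemble]
    return torch.stack(probs).mean(dim=0)

Averaging class probabilities is just one way to combine member outputs; in practice, the randomized models would also be fine-tuned on the target medical data set before fusion.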
Collections
- TUNICRIS-julkaisut [18558]