Temporal Sub-sampling of Audio Feature Sequences for Automated Audio Captioning
Nguyen, Khoa; Drossos, Konstantinos; Virtanen, Tuomas (2020)
URI
https://www.youtube.com/watch?v=oeySQrvo4-4
https://arxiv.org/abs/2007.02676
http://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Nguyen_45.pdf
Nguyen, Khoa
Drossos, Konstantinos
Virtanen, Tuomas
Editor(s)
Ono, Nobutaka
Harada, Noboru
Kawaguchi, Yohei
Mesaros, Annamaria
Imoto, Keisuke
Koizumi, Yuma
Komatsu, Tatsuya
Tokyo Metropolitan University
2020
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202103092459
Description
Peer reviewed
Abstract
Audio captioning is the task of automatically creating a textual description for the contents of a general audio signal. Typical audio captioning methods rely on deep neural networks (DNNs), where the target of the DNN is to map the input audio sequence to an output sequence of words, i.e. the caption. However, the length of the textual description is considerably shorter than the length of the audio signal, for example 10 words versus some thousands of audio feature vectors. This clearly indicates that an output word corresponds to multiple input feature vectors. In this work we present an approach that explicitly takes advantage of this difference in lengths between the sequences, by applying temporal sub-sampling to the audio input sequence. We employ a sequence-to-sequence method whose encoder outputs a fixed-length vector, and we apply temporal sub-sampling between the RNNs of the encoder. We evaluate the benefit of our approach on the freely available Clotho dataset and assess the impact of different temporal sub-sampling factors. Our results show an improvement in all considered metrics.
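To make the idea in the abstract concrete, below is a minimal PyTorch sketch of temporal sub-sampling between stacked encoder RNNs: each recurrent layer's output is decimated along the time axis before feeding the next layer, so the sequence the final layer summarizes into a fixed-length vector is much shorter than the input. The class name SubsamplingEncoder, the GRU cells, the layer sizes, and the sub-sampling factor k are illustrative assumptions for this sketch, not the paper's actual architecture or hyper-parameters.

```python
import torch
import torch.nn as nn


class SubsamplingEncoder(nn.Module):
    """Sketch of an encoder that sub-samples time between stacked RNNs.

    All sizes and the factor ``k`` are hypothetical choices for this
    illustration, not the configuration reported in the paper.
    """

    def __init__(self, feat_dim: int = 64, hidden: int = 256, k: int = 2):
        super().__init__()
        self.k = k  # temporal sub-sampling factor applied between layers
        self.rnn1 = nn.GRU(feat_dim, hidden, batch_first=True)
        self.rnn2 = nn.GRU(hidden, hidden, batch_first=True)
        self.rnn3 = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features), e.g. a log mel-band energy sequence
        h, _ = self.rnn1(x)
        h = h[:, ::self.k, :]   # keep every k-th step: T -> ceil(T / k)
        h, _ = self.rnn2(h)
        h = h[:, ::self.k, :]   # sub-sample again: ceil(T / k) -> ~T / k^2
        _, last = self.rnn3(h)
        # Last hidden state serves as the fixed-length vector for the decoder.
        return last.squeeze(0)  # (batch, hidden)


if __name__ == "__main__":
    enc = SubsamplingEncoder()
    feats = torch.randn(4, 2048, 64)  # ~2k feature vectors per audio clip
    z = enc(feats)
    print(z.shape)  # torch.Size([4, 256])
```

Slicing with a stride (h[:, ::k, :]) is the simplest way to realize the decimation; pooling or strided convolutions over time would be alternative ways to shrink the sequence between layers.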
Collections
- TUNICRIS publications [18558]