Trepo

Multi-Modal Deep Learning for Myocardial Infarction Detection Using ECG Signals and Images

Setu, Jahanggir Hossain; Pasha, Syed Tangim; Halder, Nabarun; Ahmed, Eshtiak; Islam, Ashraful; Amin, M. Ashraful (2025-12-29)

 
Open file: Mutimodal_DL_ECG.pdf (2.755 MB)


This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
doi:10.1145/3714394.3756348
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202601302094

Description

Peer reviewed
Abstract
Heart attacks, formally referred to as myocardial infarction (MI), are a major cause of morbidity and death globally. Timely intervention and better patient outcomes depend on early and precise MI identification. Traditional methods for diagnosing MI rely primarily on clinical examinations, electrocardiograms (ECG), and imaging techniques; however, these methods often face challenges in accuracy, sensitivity, and timely interpretation. This study explores a multi-modal deep learning (DL) model for detecting MI using both ECG signal and image data. The model integrates a convolutional neural network (CNN) for processing ECG images, long short-term memory (LSTM) networks for analyzing ECG signals, and an attention-based feature-fusion mechanism to combine features from both modalities. The model was evaluated in two configurations: training on the PTB-XL dataset with testing on the Mendeley ECG image dataset, and training on the Mendeley ECG image dataset with testing on the PTB-XL dataset. The results show that the hyperparameter-tuned multi-modal model consistently outperforms the baseline, with improvements in F1-score, recall, precision, and accuracy. When trained on PTB-XL and tested on the Mendeley ECG image dataset, the tuned model achieved an accuracy of 0.982; when trained on the Mendeley image dataset and tested on PTB-XL, it reached 0.9638. These findings demonstrate promising avenues for advancing automated cardiovascular diagnostics by combining the strengths of image- and signal-based analysis.
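The architecture outlined in the abstract (a CNN branch for ECG images, an LSTM branch for ECG signals, and attention-based fusion of the two feature vectors) can be sketched roughly as follows in PyTorch. All layer sizes, input shapes, and the softmax-weighted two-way fusion are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class MultiModalMISketch(nn.Module):
    """Illustrative CNN + LSTM + attention-fusion model for MI detection.
    Layer widths and input shapes are assumptions for demonstration."""

    def __init__(self, n_leads=12, hidden=64, n_classes=2):
        super().__init__()
        # CNN branch: processes ECG plot images (1-channel)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden),
        )
        # LSTM branch: processes raw ECG signals of shape (batch, time, leads)
        self.lstm = nn.LSTM(input_size=n_leads, hidden_size=hidden, batch_first=True)
        # Attention scorer: one scalar weight per modality embedding
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, image, signal):
        img_feat = self.cnn(image)                        # (B, hidden)
        _, (h_n, _) = self.lstm(signal)
        sig_feat = h_n[-1]                                # (B, hidden)
        feats = torch.stack([img_feat, sig_feat], dim=1)  # (B, 2, hidden)
        weights = torch.softmax(self.attn(feats), dim=1)  # (B, 2, 1)
        fused = (weights * feats).sum(dim=1)              # (B, hidden)
        return self.head(fused)                           # (B, n_classes) logits

model = MultiModalMISketch()
logits = model(torch.randn(4, 1, 128, 128),   # 4 fake 128x128 ECG images
               torch.randn(4, 1000, 12))      # 4 fake 12-lead signals, 1000 samples
print(logits.shape)  # → torch.Size([4, 2])
```

In this sketch the attention mechanism simply learns a softmax weighting over the two modality embeddings before classification; the paper's actual fusion may operate at a finer granularity (e.g. over time steps or feature channels).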
Collections
  • TUNICRIS-julkaisut
Kalevantie 5
PL 617
33014 Tampereen yliopisto
oa[@]tuni.fi | Privacy notice | Accessibility statement
 

 
