
Discerning Affect from Touch and Gaze During Interaction with a Robot Pet

Cang, Xi Laura; Bucci, Paul; Rantala, Jussi; Maclean, Karon (2021-07-07)

 
File: Cang_et_al._2021_Discerning_Affect_from_Touch_and_Gaze_During_Interaction_with_a_Robot_Pet.pdf (21.89 MB)




IEEE Transactions on Affective Computing
This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
doi:10.1109/TAFFC.2021.3094894
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202110067424

Description

Peer reviewed
Abstract
Practical affect recognition needs to be efficient and unobtrusive in interactive contexts. One approach to a robust real-time system is to sense and automatically integrate multiple nonverbal sources. We investigated how users' touch, and secondarily gaze, perform as affect-encoding modalities during physical interaction with a robot pet, in comparison to more-studied biometric channels. To elicit authentically experienced emotions, participants recounted two intense memories of opposing polarity in Stressed-Relaxed or Depressed-Excited conditions. We collected data (N=30) from a touch sensor embedded under robot fur (force magnitude and location), a robot-adjacent gaze tracker (location), and biometric sensors (skin conductance, blood volume pulse, respiration rate). Cross-validation of Random Forest classifiers achieved best-case accuracy for combined touch-with-gaze approaching that of biometric results: where training and test sets include adjacent temporal windows, subject-dependent prediction was 94% accurate. In contrast, subject-independent Leave-One-Participant-Out predictions resulted in 30% accuracy (chance 25%). Performance was best where participant information was available in both training and test sets. Addressing computational robustness for dynamic, adaptive real-time interactions, we analyzed subsets of our multimodal feature set, varying sample rates and window sizes. We summarize design directions based on these parameters for this touch-based, affective, and hard real-time robot interaction application.
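
The gap between the 94% subject-dependent result and the 30% Leave-One-Participant-Out result comes down to how cross-validation folds are split. The following is a minimal sketch of that distinction using scikit-learn; the array shapes, feature count, class count, and Random Forest settings are illustrative assumptions, not values taken from the paper.

    # Sketch: subject-dependent vs. subject-independent (Leave-One-Participant-Out)
    # evaluation of a Random Forest affect classifier. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)

    n_participants = 30           # N=30, as in the study
    windows_per_participant = 40  # hypothetical number of temporal windows
    n_features = 12               # stand-in for touch/gaze/biometric features

    X = rng.normal(size=(n_participants * windows_per_participant, n_features))
    y = rng.integers(0, 4, size=len(X))  # four affect classes (chance = 25%)
    groups = np.repeat(np.arange(n_participants), windows_per_participant)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # Subject-dependent: shuffled folds let windows from the same participant
    # appear in both training and test sets, so person-specific cues are usable.
    dep = cross_val_score(clf, X, y,
                          cv=KFold(n_splits=5, shuffle=True, random_state=0))

    # Subject-independent: every window from one held-out participant forms the
    # test set, so nothing about that person is seen during training.
    lopo = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

    print(f"subject-dependent accuracy:     {dep.mean():.2f}")
    print(f"leave-one-participant-out:      {lopo.mean():.2f}")

On this random data both protocols hover near chance; with real affect data, the subject-dependent split can exploit person-specific touch and gaze signatures, which is what produces the large accuracy gap the abstract reports.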
Collections
  • TUNICRIS-julkaisut [20740]