Multimodal Interface for Human–Robot Collaboration
Rautiainen, Samu; Pantano, Matteo; Traganos, Konstantinos; Ahmadi, Seyedamir; Saenz, José; Mohammed, Wael M.; Martinez Lastra, Jose L. (2022)
Persistent address of the publication:
https://urn.fi/URN:NBN:fi:tuni-202210257788
Description
Peer reviewed
Abstract
Human–robot collaboration (HRC) is one of the key aspects of Industry 4.0 (I4.0) and requires intuitive modalities for humans to communicate seamlessly with robots, such as speech, touch, or bodily gestures. However, utilizing these modalities alone is usually not enough to ensure a good user experience and consideration of human factors. Therefore, this paper presents a software component, Multi-Modal Offline and Online Programming (M2O2P), which takes such characteristics into account and establishes a communication channel with a robot through predefined yet configurable hand gestures. The solution was evaluated within a smart factory use case in the Smart Human Oriented Platform for Connected Factories (SHOP4CF) EU project. The evaluation focused on the effect of gesture personalization on users' perceived workload, measured with NASA-TLX, and on the usability of the component. The results showed that personalizing the gestures reduced physical and mental workload and was preferred by the participants, although the overall task workload did not differ significantly. Furthermore, the high system usability scale (SUS) score of the application, with a mean of 79.25, indicates good overall usability of the component. Additionally, the gesture recognition accuracy of M2O2P was measured at 99.05%, which is comparable to state-of-the-art applications.
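For context on the 79.25 figure reported above: the standard SUS scoring rule (not specific to this paper) maps ten 1–5 Likert responses onto a 0–100 scale. Odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the raw sum is multiplied by 2.5. A minimal sketch of this rule, with a hypothetical function name:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response).
    The raw sum (0-40) is scaled by 2.5 to map onto a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent answering 4 to every positive item and 2 to every negative
# item yields a raw sum of 5*3 + 5*3 = 30, i.e. a SUS score of 75.0.
print(sus_score([4, 2] * 5))  # 75.0
```

A mean SUS of 79.25 across participants thus sits well above the commonly cited average benchmark of 68, consistent with the paper's usability claim.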
Collections
- TUNICRIS-julkaisut [18531]