Multi-modal perception and sensor fusion for human-robot collaboration
Ekrekli, Akif (2023)
Master's Programme in Information Technology
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
Date of approval
2023-10-19
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202309288519
Abstract
In an era where human-robot collaboration is becoming increasingly common, sensor-based approaches are emerging as critical factors in enabling robots to work harmoniously alongside humans. In some scenarios, the use of multiple sensors becomes imperative due to the complexity of certain tasks. Such a multi-modal approach enhances cognitive capabilities and ensures higher levels of safety and reliability. The integration of diverse sensor modalities not only improves the accuracy of perception but also enables robots to adapt to their surrounding environment and analyze human intentions and commands more effectively. This thesis explores sensor fusion and multi-modal approaches within an industrial context. The proposed fusion framework adopts a holistic approach encompassing multiple modalities, integrating object detection, human gesture recognition, and speech recognition techniques. These components are pivotal for human-robot interaction, enabling robots to comprehend their environment and interpret human inputs efficiently. Each modality is tested and evaluated individually within the sensor fusion and multi-modal framework to ensure that it functions efficiently. Successful industrial experiments demonstrate the practicality and relevance of sensor fusion and multi-modality approaches. This work underscores the potential of sensor fusion while emphasizing the ongoing need for exploration and improvement in this field.
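The abstract does not spell out how the three modalities are combined at decision time. As a minimal sketch only, the Python snippet below shows one plausible late-fusion rule, in which a robot action is issued when the speech and vision modalities agree on a target object and an optional confirming gesture raises confidence. All names here (Detection, Gesture, SpeechCommand, fuse) and the agreement rule itself are illustrative assumptions, not the framework implemented in the thesis.

```python
from dataclasses import dataclass
from typing import Optional

# NOTE: hypothetical types; the thesis abstract does not define these.
@dataclass
class Detection:
    label: str          # object class from the vision module, e.g. "wrench"
    confidence: float   # detector score in [0, 1]

@dataclass
class Gesture:
    name: str           # recognized gesture, e.g. "point_left"
    confidence: float

@dataclass
class SpeechCommand:
    text: str           # transcribed utterance, e.g. "pick up the wrench"
    confidence: float

def fuse(detections: list[Detection],
         gesture: Optional[Gesture],
         speech: Optional[SpeechCommand],
         threshold: float = 0.5) -> Optional[str]:
    """Late fusion by cross-modal agreement: act only when the spoken
    command names an object the vision module also detects; a confirming
    gesture lowers the bar for acting on ambiguous inputs."""
    if speech is None or speech.confidence < threshold:
        return None
    # Objects that are both detected confidently and mentioned in speech.
    mentioned = [d for d in detections
                 if d.label in speech.text and d.confidence >= threshold]
    if not mentioned:
        return None
    target = max(mentioned, key=lambda d: d.confidence)
    confirmed = gesture is not None and gesture.confidence >= threshold
    # Without a gesture, require strong joint speech-vision confidence.
    if confirmed or (speech.confidence * target.confidence) >= threshold:
        return f"pick:{target.label}"
    return None

if __name__ == "__main__":
    action = fuse(
        detections=[Detection("wrench", 0.92), Detection("bolt", 0.60)],
        gesture=Gesture("point_left", 0.81),
        speech=SpeechCommand("pick up the wrench", 0.88),
    )
    print(action)  # -> pick:wrench
```

Late fusion of per-modality confidence scores is only one of several possible strategies; as the abstract itself emphasizes, the choice and tuning of such fusion rules remain an open area for exploration and improvement.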