Multimodal fusion of non-verbal cues to enhance the understanding of emotional and spatial uncertainties in navigational vocal commands for assistive robots [abstract]

dc.contributor.author: Gopura, RARC
dc.contributor.author: Jayasekara, AGBP
dc.contributor.author: Priyanayana, S
dc.date.accessioned: 2025-07-24T04:36:27Z
dc.date.issued: 2020
dc.description: The following papers were published based on the results of this research project: [1] Priyanayana, K. S., Jayasekara, A. B. P., & Gopura, R. A. R. C. (2021, September). Evaluation of Interactive Modalities Using Preference Factor Analysis for Wheelchair Users. In 2021 IEEE 9th Region 10 Humanitarian Technology Conference (R10-HTC) (pp. 01-06). IEEE. DOI: 10.1109/R10-HTC53172.2021.9641549. [2] Priyanayana, K. S., Jayasekara, A. B. P., & Gopura, R. A. R. C. (2020, December). Multimodal Behavior Analysis of Human-Robot Navigational Commands. In 2020 3rd International Conference on Control and Robots (ICCR) (pp. 79-84). IEEE. DOI: 10.1109/ICCR51572.2020.934441
dc.description.abstract: The world is experiencing a dramatically rising elderly population. As people age, they may become incapacitated to some degree and need assistance from caregivers with their daily activities. Since the gap between the supply of and demand for caregivers is widening, it is of great significance to develop devices that can support the elderly in their day-to-day lives. Human communication is a pillar of social organization. In domestic environments, humans prefer to interact with each other through multiple modalities: a discussion between two people may involve speech, hand gestures, head gestures, facial cues, etc. Speech paired with hand gestures is one of the most common multimodal pairings identified in human-human communication. Verbal instructions between humans carry uncertainties. Spatial uncertainties, such as direction- and distance-related uncertainties, and contextual emotional uncertainties in these verbal commands can affect human-robot interaction. Therefore, this research focused on enhancing the understanding of spatial and context-emotional uncertainties in vocal navigational commands using hand gesture cues. Intentional hand gestures were used to interpret these uncertainties, while unintentional hand gestures were filtered out using verbal-hand gesture timeline features and an ensemble model of LWNB and KNNDW classifiers. Further, direction- and distance-related spatial uncertainties were interpreted with a fuzzy logic approach, using the partial direction and distance information extracted from the hand gestures. For the context-emotional uncertainties, vocal prosody (tonal modality) and visual prosody (hand gestures) were fused using an ensemble deep learning approach. These intelligent systems successfully enhanced the understanding of these uncertainties, increasing the accuracy and human-likeness with which vocal navigational commands are understood.
dc.description.sponsorship: Senate Research Committee
dc.identifier.accno: SRC210
dc.identifier.srgno: SRC/LT/2020/40
dc.identifier.uri: https://dl.lib.uom.lk/handle/123/23925
dc.language.iso: en
dc.subject: SENATE RESEARCH COMMITTEE – Research Report
dc.subject: MULTIMODAL FUSION
dc.subject: NON-VERBAL CUES
dc.subject: NAVIGATIONAL VOCAL COMMANDS
dc.subject: ASSISTIVE ROBOTS
dc.title: Multimodal fusion of non-verbal cues to enhance the understanding of emotional and spatial uncertainties in navigational vocal commands for assistive robots [abstract]
dc.type: SRC-Report
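
The abstract above names an ensemble of LWNB (locally weighted naive Bayes) and KNNDW (k-nearest neighbours with distance weighting) for separating intentional from unintentional hand gestures using verbal-hand gesture timeline features. As a rough illustration of that filtering step only, the following sketch uses scikit-learn stand-ins (GaussianNB and a distance-weighted KNeighborsClassifier) in a soft-voting ensemble; the specific timeline features and the training data are assumptions for illustration, not those of the study.

```python
# Illustrative sketch: classify hand gestures as intentional vs. unintentional
# from verbal-gesture timeline features. GaussianNB and a distance-weighted
# KNN stand in for the LWNB and KNNDW models named in the abstract; the
# feature definitions below are assumptions, not the study's.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical timeline features per gesture:
# [onset lag vs. speech (s), overlap with utterance (s), gesture duration (s)]
X = np.array([
    [0.1, 0.8, 1.0],   # tightly coupled to speech -> intentional
    [0.2, 0.9, 1.2],
    [2.5, 0.0, 0.4],   # far from any utterance -> unintentional
    [3.0, 0.1, 0.3],
])
y = np.array([1, 1, 0, 0])  # 1 = intentional, 0 = unintentional

ensemble = VotingClassifier(
    estimators=[
        ("nb", make_pipeline(StandardScaler(), GaussianNB())),
        ("knn_dw", make_pipeline(
            StandardScaler(),
            KNeighborsClassifier(n_neighbors=3, weights="distance"))),
    ],
    voting="soft",  # average class probabilities from both models
)
ensemble.fit(X, y)

# A new gesture overlapping its utterance should be kept as intentional.
print(ensemble.predict([[0.15, 0.85, 1.1]]))
```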
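The abstract also describes interpreting direction- and distance-related uncertainties with fuzzy logic applied to partial gesture information. Below is a minimal sketch of that idea, assuming simple triangular membership functions over a pointing angle; the linguistic terms and membership ranges are illustrative assumptions, and the report's actual rule base may differ.

```python
# Illustrative fuzzy-logic sketch: resolve a vague spoken direction
# ("go that way") using the partial pointing angle of a hand gesture.
# Membership functions and labels are assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Membership of a pointing angle (degrees; 0 = straight ahead,
# negative = left, positive = right) in each direction term.
DIRECTION_TERMS = {
    "left":           lambda ang: tri(ang, -120.0, -90.0, -30.0),
    "slightly left":  lambda ang: tri(ang, -60.0, -30.0, 0.0),
    "ahead":          lambda ang: tri(ang, -20.0, 0.0, 20.0),
    "slightly right": lambda ang: tri(ang, 0.0, 30.0, 60.0),
    "right":          lambda ang: tri(ang, 30.0, 90.0, 120.0),
}

def interpret_direction(pointing_angle_deg):
    """Return direction terms ranked by fuzzy membership degree."""
    scores = {term: mf(pointing_angle_deg)
              for term, mf in DIRECTION_TERMS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A gesture pointing 25 degrees right mostly supports "slightly right".
print(interpret_direction(25.0))
```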

Files

Original bundle

Name: SRC210 - Prof. R. A. R. C Gopura SRCLT202040.pdf
Size: 970.95 KB
Format: Adobe Portable Document Format
Description: SRC Report

License bundle

Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission