Adapting concept of human-human multimodal interaction in human-robot applications

dc.contributor.author: Priyanayana, S
dc.contributor.author: Jayasekara, B
dc.contributor.author: Gopura, R
dc.date.accessioned: 2023-02-14T03:54:05Z
dc.date.available: 2023-02-14T03:54:05Z
dc.date.issued: 2022-12
dc.description.abstract: Human communication is multimodal in nature. In a normal environment, people interact with other humans and with the environment using more than one modality, or medium of communication. They speak, use gestures, and look at things to interact with nature and other humans. By listening to different voice tones and observing facial gazes and arm movements, people understand communication cues. A discussion between two people will involve vocal communication, hand gestures, head gestures, facial cues, etc. [1]. By the textbook definition, the synergistic use of these interaction methods is known as multimodal interaction [2]. For example, a wheelchair user might instruct the smart wheelchair or the assistant to go forward, as shown in Fig. 1(a). However, with the hand gesture shown in the figure, he or she might indicate a wish to go slowly. Similarly, as in Fig. 1(b), a person might give someone a direction with the vocal command 'that way' and gesture the direction with his or her hand. In most Human-Robot Interaction (HRI) developments, there is an assumption that human interactions are unimodal. This forces researchers to ignore the information that other modalities carry. Considering multiple modalities would therefore provide an additional dimension for the interpretation of human-robot interactions. This article provides a concise description of how to adapt the concept of multimodal interaction to human-robot applications.
dc.identifier.doi: https://doi.org/10.31705/BPRM.v2(2).2022.4
dc.identifier.issn: 2815-0082
dc.identifier.issue: 2
dc.identifier.journal: Bolgoda Plains Research Magazine
dc.identifier.pgnos: pp. 18-20
dc.identifier.uri: http://dl.lib.uom.lk/handle/123/20467
dc.identifier.volume: 2
dc.identifier.year: 2022
dc.language.iso: en
dc.publisher: Faculty of Graduate Studies
dc.subject: Human-Robot Applications
dc.title: Adapting concept of human-human multimodal interaction in human-robot applications
dc.type: Article-Full-text

Files

Original bundle

Name: Bolgoda Plains 2022.12.22 Final-18-20.pdf
Size: 281.65 KB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission