Multimodal supported chatbot framework for people with aphasia
| dc.contributor.author | Ayesha, RLAS | |
| dc.contributor.author | Vijayarathna, LBHM | |
| dc.contributor.author | Vithanage, NPA | |
| dc.contributor.author | Thayasivam, U | |
| dc.contributor.author | Alahakoon, D | |
| dc.contributor.author | Adikari, A. A. | |
| dc.contributor.author | Pallewela, N | |
| dc.date.accessioned | 2026-02-16T06:50:01Z | |
| dc.date.issued | 2024 | |
| dc.description.abstract | Chatbots have become a trending topic with the emergence of platforms like ChatGPT, Gemini, and Copilot for conversation assistance. Current chatbots mainly focus on the general public, assuming a natural flow of conversation. However, there is a need for a chatbot that supports people with various communication disabilities. This research fills this gap by offering a novel technique for a chatbot that assists people with Aphasia, a condition characterised by difficulties with language. We propose a multi-modal chatbot that is customised and designed to assist users with communication disabilities in navigating a website. Unlike typical chatbots, which rely on one form of communication, our architecture combines multiple modalities to improve comprehension and promote effective communication for people with Aphasia. We focus on gathering multimodal inputs by recognising and combining user intents from diverse sources. The use of Txtai, an all-in-one embeddings database for semantic search, improves our chatbot’s capacity to process various inputs efficiently. We leverage specialised models like Whisper for audio transcription and MediaPipe Gesture Recognizer for gesture detection to enhance user interactions. Additionally, Rasa Core integration improves conversational experiences for users. We propose that this new approach will make communication more accessible and inclusive for individuals with Aphasia. | |
| dc.identifier.conference | Moratuwa Engineering Research Conference 2024 | |
| dc.identifier.department | Engineering Research Unit, University of Moratuwa | |
| dc.identifier.email | sanduniayesha.19@cse.mrt.ac.lk | |
| dc.identifier.email | hasini.19@cse.mrt.ac.lk | |
| dc.identifier.email | nipunv.19@cse.mrt.ac.lk | |
| dc.identifier.email | rtuthaya@cse.mrt.ac.lk | |
| dc.identifier.email | d.alahakoon@latrobe.edu.au | |
| dc.identifier.email | Adikari@latrobe.edu.au | |
| dc.identifier.email | N.Pallewela@latrobe.edu.au | |
| dc.identifier.faculty | Engineering | |
| dc.identifier.isbn | 979-8-3315-2904-8 | |
| dc.identifier.pgnos | pp. 566-571 | |
| dc.identifier.place | Moratuwa, Sri Lanka | |
| dc.identifier.proceeding | Proceedings of Moratuwa Engineering Research Conference 2024 | |
| dc.identifier.uri | https://dl.lib.uom.lk/handle/123/24871 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.subject | chatbot | |
| dc.subject | multimodality | |
| dc.subject | intent classification | |
| dc.subject | aphasia | |
| dc.subject | communication disabilities | |
| dc.title | Multimodal supported chatbot framework for people with aphasia | |
| dc.type | Conference-Full-text | |
