Cross-domain bimodal SER for customer service and TV show domains
| dc.contributor.author | Premnath, N | |
| dc.contributor.author | Lakraj, P | |
| dc.contributor.author | Jayaweera, Y | |
| dc.contributor.editor | Gunawardena, S | |
| dc.date.accessioned | 2025-11-21T04:10:28Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Human speech is the most common and expedient means of communication, and understanding it is one of the complex processes the human brain performs. As technology advances, replicating this ability in machines has become essential, driving the rise of Speech Emotion Recognition (SER) as a key field in artificial intelligence and human-computer interaction. Accurately recognizing emotions from speech, however, is complicated by the variability of emotional expression across contexts [1]. In customer service interactions, emotions such as happiness or frustration are often conveyed subtly, whereas in TV shows they are exaggerated for dramatic effect. This contrast poses a challenge for SER models, since emotional expression differs significantly across domains. | |
| dc.identifier.conference | Applied Data Science & Artificial Intelligence (ADScAI) Symposium 2025 | |
| dc.identifier.department | Department of Computer Science & Engineering | |
| dc.identifier.doi | https://doi.org/10.31705/ADScAI.2025.35 | |
| dc.identifier.faculty | Engineering | |
| dc.identifier.place | Moratuwa, Sri Lanka | |
| dc.identifier.proceeding | Proceedings of Applied Data Science & Artificial Intelligence Symposium 2025 | |
| dc.identifier.uri | https://dl.lib.uom.lk/handle/123/24420 | |
| dc.language.iso | en | |
| dc.publisher | Department of Computer Science and Engineering | |
| dc.subject | Cross-domain SER | |
| dc.subject | Multimodal SER | |
| dc.subject | Customer Service | |
| dc.subject | TV Show | |
| dc.subject | Context | |
| dc.title | Cross-domain bimodal SER for customer service and TV show domains | |
| dc.type | Conference-Extended-Abstract |
