PoseView3D: dynamic multi-view angular encoding for skeleton-based action recognition
| dc.contributor.author | Ranasinghe, A | |
| dc.contributor.author | Karunanayake, A | |
| dc.contributor.author | Ratnayake, U | |
| dc.contributor.author | Ariyawangsha, V | |
| dc.contributor.author | Godaliyadda, R | |
| dc.contributor.author | Ekanayake, P | |
| dc.contributor.author | Herath, V | |
| dc.date.accessioned | 2025-12-10T04:40:24Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Human Action Recognition has gained prominence in computer vision due to its wide applicability. Among various approaches, skeleton-based methods stand out for their compact motion representation, effectively minimizing environmental noise. Despite advances, accurately recognizing human actions remains challenging, especially when distinguishing between fine-grained, visually similar motions. Existing methods often rely on complex neural networks to model joint relationships, but still struggle with subtle action differences. In this paper, we introduce PoseView3D, which enhances recognition by generating angular information from pose data. This angular representation acts as a complementary feature to improve action classification. Our method introduces temporally dynamic anchor points that provide a multi-view perspective of motion, enabling more discriminative and robust skeleton representations. These anchor points are learned through a customized Non-Local Neural Network, which uses self-attention to capture both spatial and temporal joint relationships effectively. PoseView3D outperforms current state-of-the-art methods that rely on single-stream angular data. We conduct comprehensive experiments using the NTU RGB+D and NTU RGB+D 120 datasets to evaluate performance across various configurations. Our results demonstrate that PoseView3D delivers competitive accuracy and robust recognition capabilities, validating its effectiveness in capturing nuanced motion features for human action recognition. | |
| dc.identifier.conference | Moratuwa Engineering Research Conference 2025 | |
| dc.identifier.department | Engineering Research Unit, University of Moratuwa | |
| dc.identifier.email | e18280@eng.pdn.ac.lk | |
| dc.identifier.email | e17154@eng.pdn.ac.lk | |
| dc.identifier.email | e18300@eng.pdn.ac.lk | |
| dc.identifier.email | e18027@eng.pdn.ac.lk | |
| dc.identifier.email | roshang@eng.pdn.ac.lk | |
| dc.identifier.email | mpb.ekanayake@ee.pdn.ac.lk | |
| dc.identifier.email | vijitha@eng.pdn.ac.lk | |
| dc.identifier.faculty | Engineering | |
| dc.identifier.isbn | 979-8-3315-6724-8 | |
| dc.identifier.pgnos | pp. 563-568 | |
| dc.identifier.proceeding | Proceedings of Moratuwa Engineering Research Conference 2025 | |
| dc.identifier.uri | https://dl.lib.uom.lk/handle/123/24559 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.subject | action recognition | |
| dc.subject | multi-view | |
| dc.subject | graph neural networks | |
| dc.subject | angular representation | |
| dc.title | PoseView3D: dynamic multi-view angular encoding for skeleton-based action recognition | |
| dc.type | Conference-Full-text |
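
The abstract describes encoding each frame's pose as angles measured from anchor points, used alongside the raw joint coordinates. The snippet below is a minimal sketch of that idea only, not the paper's implementation: it assumes a single fixed anchor per frame (whereas PoseView3D learns temporally dynamic anchors through a non-local self-attention module), and the `angular_features` helper is a hypothetical name introduced here for illustration.

```python
import numpy as np

def angular_features(joints, anchor):
    """Angle subtended at `anchor` by every pair of joints.

    joints : (J, 3) array of 3D joint coordinates for one frame.
    anchor : (3,) reference point (the paper learns such anchors per frame;
             a fixed point is used here purely for illustration).
    Returns a (J, J) matrix of pairwise angles in radians.
    """
    vecs = joints - anchor                          # vectors from anchor to each joint, (J, 3)
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    unit = vecs / np.clip(norms, 1e-8, None)        # normalize, guarding against zero-length vectors
    cos = np.clip(unit @ unit.T, -1.0, 1.0)         # pairwise cosine similarities, (J, J)
    return np.arccos(cos)                           # pairwise angles in radians

# Toy example: three joints of a single frame, anchor at the skeleton centroid.
frame = np.array([[ 0.0, 1.6, 0.0],   # head
                  [ 0.3, 1.0, 0.1],   # right hand
                  [-0.3, 1.0, 0.1]])  # left hand
angles = angular_features(frame, anchor=frame.mean(axis=0))
print(angles.shape)  # (3, 3)
```

Because the angle matrix is invariant to translation and uniform scaling of the skeleton, features of this kind can complement raw coordinates when distinguishing visually similar motions; moving the anchor over time (as the paper's dynamic anchors do) changes the "viewpoint" from which those angles are measured.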
