Distillation-boosted spiking neural network for gesture recognition
| dc.contributor.author | Aponso, GMMK | |
| dc.contributor.author | Thusyanthan, J | |
| dc.contributor.author | Wellahewa, WIUD | |
| dc.contributor.author | Hettiarachchi, C | |
| dc.date.accessioned | 2025-12-08T05:00:01Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Spiking Neural Networks (SNNs) are brain-inspired models known for their high computational efficiency, primarily due to their use of discrete spike signals that closely resemble biological neural processing. Although SNNs offer significant energy-efficiency advantages, a major challenge lies in training them, as traditional gradient-descent algorithms cannot be applied directly due to the non-differentiable nature of spikes. Several methods have been developed to address this issue, including direct training with surrogate gradients, ANN-to-SNN conversion, and knowledge distillation (KD). Among these, KD has often demonstrated superior accuracy: a high-performing, complex artificial neural network (ANN) serves as the teacher and a simpler SNN as the student. However, most KD-based approaches have focused on static image classification, with limited exploration in dynamic data domains such as action or gesture recognition. This study addresses this gap by proposing a KD-based training framework that extends the applicability of SNNs to dynamic visual tasks. Specifically, we focus on gesture recognition using event-based data and perform comprehensive evaluations comparing KD-trained SNNs with directly trained SNNs and the original ANN teacher model. Experimental results show that our KD-trained SNN achieves a Top-5 class accuracy of 89.73% and a Random-5 class accuracy of 85.80%, outperforming directly trained SNNs, which achieve 84.24% and 83.20%, respectively. These findings demonstrate the effectiveness of knowledge distillation in improving the accuracy and generalization of SNNs in dynamic gesture recognition tasks and in extending their applicability beyond static image domains. | |
| dc.identifier.conference | Moratuwa Engineering Research Conference 2025 | |
| dc.identifier.email | madhawa.20@cse.mrt.ac.lk | |
| dc.identifier.email | thusyanthan.20@cse.mrt.ac.lk | |
| dc.identifier.email | uditha.20@cse.mrt.ac.lk | |
| dc.identifier.email | chathuranga@cse.mrt.ac.lk | |
| dc.identifier.faculty | Engineering | |
| dc.identifier.isbn | 979-8-3315-6724-8 | |
| dc.identifier.pgnos | pp. 740-745 | |
| dc.identifier.proceeding | Proceedings of Moratuwa Engineering Research Conference 2025 | |
| dc.identifier.uri | https://dl.lib.uom.lk/handle/123/24526 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.subject | Spiking Neural Networks (SNNs) | |
| dc.subject | Knowledge Distillation (KD) | |
| dc.subject | Gesture Recognition | |
| dc.subject | Event-Based Data | |
| dc.subject | Surrogate Gradient Training | |
| dc.title | Distillation-boosted spiking neural network for gesture recognition | |
| dc.type | Conference-Full-text |
