Abstract:
Computer-vision-based sign language translation typically relies on thousands of images or video sequences for model training. This is not an issue for widely used languages such as American Sign Language. However, for low-resource languages such as Sinhala Sign Language, it is challenging to apply similar methods to develop translators, since no datasets are known to be available for such studies. In this study, we contribute a new dataset and develop a sign language translation method for the Sinhala Fingerspelling Alphabet. Our approach to recognizing fingerspelling signs involves decoupling pose classification from pose estimation and using postural synergies to reduce the dimensionality of features. As shown by our experiments, our method achieves an average accuracy of over 87%, while the dataset used is less than 12% of the size of datasets used by methods with comparable accuracy. We have made the source code and the dataset publicly available.
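The abstract mentions using postural synergies to reduce feature dimensionality. A common way to compute such synergies is principal component analysis over hand-pose features; the sketch below illustrates that idea under assumptions not stated in the abstract (a 21-landmark, 3-coordinate hand representation and 8 synergies are hypothetical choices, and the data here is random).

```python
import numpy as np

# Hypothetical sketch: PCA-style "postural synergies" for hand-pose features.
# The landmark layout (21 x 3 = 63-D) and synergy count are assumptions.
rng = np.random.default_rng(0)

# 100 hand poses, each a 63-dimensional feature vector
poses = rng.normal(size=(100, 63))

# Centre the data, then take the top principal components as synergies
mean = poses.mean(axis=0)
centered = poses - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

n_synergies = 8                  # assumed number of synergies
synergies = vt[:n_synergies]     # (8, 63) synergy basis

# Project each pose onto the synergy basis: 63-D -> 8-D classifier input
reduced = centered @ synergies.T
print(reduced.shape)             # (100, 8)
```

A downstream classifier would then operate on the low-dimensional `reduced` features rather than the raw pose vectors.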
Citation:
A. A. Weerasooriya and T. D. Ambegoda, "Sinhala Fingerspelling Sign Language Recognition with Computer Vision," 2022 Moratuwa Engineering Research Conference (MERCon), 2022, pp. 1-6, doi: 10.1109/MERCon55799.2022.9906281.