EEG-based brain-computer interface for inner speech classification

Abstract

Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) were initially designed to aid people with motor disabilities, but recent research explores their potential for non-clinical uses such as gaming. Non-motor imagery, in particular inner speech, has emerged as a promising approach for BCI control. Inner speech, the mental form of self-directed speech, serves in this study as the basis for decoding control commands (left, right, up, and down) for navigation in a game. This paper evaluates EEG signal-processing techniques across various applications. It employs passive-time inner speech classification and introduces a transfer learning method based on ResNet50, which achieves 45% accuracy (against a 25% chance level for four classes) when tested on data from subjects not seen during training. Fine-tuning with 50% of the new subjects' data raised the model's accuracy to 88%. The study also explores personalized models and assesses optimal dataset sizes. Additionally, it examines real-time applications, experimenting with neural network architectures for instantaneous classification. Connectivity between these components is also addressed, underscoring the significance of infrastructure in EEG-based BCI systems.
