dc.contributor.advisor |
Ambegoda TD |
|
dc.contributor.author |
Gunarathna TMTA |
|
dc.date.accessioned |
2022 |
|
dc.date.available |
2022 |
|
dc.date.issued |
2022 |
|
dc.identifier.citation |
Gunarathna, T.M.T.A. (2022). Self supervised learning of EEG (electroencephalogram) raw data to learn the hidden patterns of human brain activities [Master's thesis, University of Moratuwa]. Institutional Repository University of Moratuwa. http://dl.lib.uom.lk/handle/123/21635 |
|
dc.identifier.uri |
http://dl.lib.uom.lk/handle/123/21635 |
|
dc.description.abstract |
EEG is a non-invasive neuroimaging modality that measures changes in electrical voltage
on the scalp induced by cortical activity. In this research, we propose a method for
self-supervised learning of raw EEG data to learn the hidden patterns of human brain
activities. The work was performed through a pipeline of five phases, where each phase's
output is the input to the next. Phase 1 pre-processes raw EEG sequences into EEG
representations that capture the spatial and temporal properties of the original raw
EEG sequences; we followed a relatively simple method to pre-process the raw EEG
sequences. In phase 2, the pre-processed EEG representations are learnt by
self-supervised representation learning, for which self-supervised vision transformers
trained with DINO are used. These vision transformer models are computationally
demanding and require large amounts of training data, so more computational resources
and training data will be needed; given such resources, self-supervised vision
transformer architectures would be expected to produce the best results, outperforming
supervised learning architectures. In phase 3, a sequence of prototypes is generated
for each raw EEG data sequence in the dataset. To evaluate the prototypes generated
from the raw EEG data, phases 4 and 5 are used as the downstream task for the
self-supervised learning task. For phases 4 and 5, we again used a transformer
architecture, a BERT-based model called RoBERTa, to learn the synthetic language
generated by phase 3, that is, the context and language of the generated prototype
sequences; by performing multi-class prototype sequence classification, the prototype
generated for each representation at a specific time stamp of a raw EEG data sequence
can be evaluated. We believe that, since the models are computationally demanding and
require more training data, the five-phase pipeline described above should be improved
with further training and hyperparameter tuning in an environment rich in computational
resources and data. |
en_US |
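To make phase 2 of the abstract concrete, below is a minimal sketch of a DINO-style teacher-student update in PyTorch. The encoder, the 64x64 input shape, and all hyperparameters are illustrative assumptions, not the thesis implementation; only the centering, sharpening, and EMA scheme follows the published DINO recipe, and multi-crop augmentation plus the full ViT backbone are omitted for brevity.

```python
# Minimal sketch of a DINO-style teacher-student update on EEG inputs.
# SmallEncoder, the 64x64 input shape, and all hyperparameters are
# illustrative assumptions, not the thesis configuration.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy stand-in for the ViT backbone: maps an EEG 'image' to an embedding."""
    def __init__(self, in_ch=1, dim=64, out_dim=256):
        super().__init__()
        self.patch = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)  # crude patch embedding
        self.head = nn.Linear(dim * 8 * 8, out_dim)                  # assumes 64x64 inputs

    def forward(self, x):
        return self.head(self.patch(x).flatten(1))

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    # Teacher targets are centered and sharpened; gradients flow only to the student.
    t = F.softmax((teacher_out - center) / t_t, dim=-1)
    s = F.log_softmax(student_out / t_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()

student = SmallEncoder()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated only by EMA, never by gradients

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
center = torch.zeros(256)

# Two augmented "views" of the same pre-processed EEG representation
# (random tensors stand in for the phase-1 output here).
view1 = torch.randn(8, 1, 64, 64)
view2 = torch.randn(8, 1, 64, 64)

s_out = student(view1)
with torch.no_grad():
    t_out = teacher(view2)

loss = dino_loss(s_out, t_out, center)
opt.zero_grad()
loss.backward()
opt.step()

# EMA update of the teacher and running update of the center, as in DINO.
with torch.no_grad():
    m = 0.996
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)
    center = 0.9 * center + 0.1 * t_out.mean(dim=0)

print(f"DINO loss: {loss.item():.4f}")
```

The temperature asymmetry (teacher temperature below student temperature) combined with output centering is what prevents representation collapse in the DINO recipe.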
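Phases 3 to 5 can be sketched in the same hedged spirit: if phase 3 emits one prototype ID per time step, those IDs can be treated as tokens of a synthetic language and fed to a RoBERTa-style sequence classifier, here built from scratch with the Hugging Face transformers library. The vocabulary size, model size, sequence length, and label count below are placeholder assumptions, not values from the thesis.

```python
# Sketch of phases 3-5: per-time-step prototype IDs are treated as tokens of
# a synthetic language, and a RoBERTa model performs multi-class
# classification of each prototype sequence. All sizes are assumptions.
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

NUM_PROTOTYPES = 512  # assumed size of the learned prototype codebook
NUM_CLASSES = 4       # assumed number of downstream EEG classes

config = RobertaConfig(
    vocab_size=NUM_PROTOTYPES + 4,  # prototype tokens plus special tokens
    hidden_size=128,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=256,
    max_position_embeddings=514,
    num_labels=NUM_CLASSES,
)
model = RobertaForSequenceClassification(config)  # from scratch, no pretrained weights

# A batch of prototype-ID sequences (one ID per EEG time step) with labels.
input_ids = torch.randint(4, NUM_PROTOTYPES + 4, (2, 128))
labels = torch.tensor([0, 2])

out = model(input_ids=input_ids, labels=labels)
print(out.loss.item(), out.logits.shape)  # cross-entropy loss, logits (2, NUM_CLASSES)
```

Training such a model from a random initialization is what makes the classification accuracy a meaningful probe of the prototypes themselves: any signal the classifier finds must come from the phase-3 sequences rather than from pretrained language knowledge.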
dc.language.iso |
en |
en_US |
dc.subject |
ELECTROENCEPHALOGRAM |
en_US |
dc.subject |
SELF-SUPERVISED LEARNING |
en_US |
dc.subject |
VISION TRANSFORMERS |
en_US |
dc.subject |
NATURAL LANGUAGE PROCESSING |
en_US |
dc.subject |
INFORMATION TECHNOLOGY - Dissertation |
en_US |
dc.subject |
COMPUTER SCIENCE - Dissertation |
en_US |
dc.subject |
COMPUTER SCIENCE & ENGINEERING - Dissertation |
en_US |
dc.title |
Self supervised learning of EEG (electroencephalogram) raw data to learn the hidden patterns of human brain activities |
en_US |
dc.type |
Thesis-Abstract |
en_US |
dc.identifier.faculty |
Engineering |
en_US |
dc.identifier.degree |
MSc in Computer Science and Engineering |
en_US |
dc.identifier.department |
Department of Computer Science and Engineering |
en_US |
dc.date.accept |
2022 |
|
dc.identifier.accno |
TH4988 |
en_US |