MEETING REPORT
Year : 2020  |  Volume : 10  |  Issue : 3  |  Page : 208-216

The 2017 and 2018 Iranian Brain–Computer interface competitions


1 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
2 Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
3 Department of Electrical Engineering, Biomedical Signal and Image Processing Laboratory, Sharif University of Technology, Tehran, Iran
4 Department of Biomedical Engineering, Faculty of Engineering, Shahed University, Tehran, Iran
5 Control and Intelligent Processing Centre of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
6 Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
7 Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran

Date of Submission: 26-Nov-2019
Date of Decision: 22-Dec-2019
Date of Acceptance: 26-Dec-2019
Date of Web Publication: 03-Jul-2020

Correspondence Address:
Dr. Bahador Makkiabadi
Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran
Iran

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmss.JMSS_65_19

  Abstract 


This article summarizes the first and second Iranian brain–computer interface competitions held in 2017 and 2018 by the National Brain Mapping Lab. Two 64-channel electroencephalography (EEG) datasets were provided, covering motor imagery and motor execution of three limbs. In the first competition, the competitors were asked to classify the type of motor imagination or execution from the EEG signals; in the second, they were asked to classify the type of executed movement and to detect the movement onset. Here, we provide an overview of the datasets, the tasks, the evaluation criteria, and the methods proposed by the top-ranked teams. We also report the results achieved with the submitted algorithms and discuss the organizational strategies for future campaigns.

Keywords: Brain–computer interface, electroencephalography, motor execution, motor imagery, movement onset


How to cite this article:
Aghdam NS, Moradi MH, Shamsollahi MB, Nasrabadi AM, Setarehdan SK, Shalchyan V, Faradji F, Makkiabadi B. The 2017 and 2018 Iranian Brain–Computer interface competitions. J Med Signals Sens 2020;10:208-16

How to cite this URL:
Aghdam NS, Moradi MH, Shamsollahi MB, Nasrabadi AM, Setarehdan SK, Shalchyan V, Faradji F, Makkiabadi B. The 2017 and 2018 Iranian Brain–Computer interface competitions. J Med Signals Sens [serial online] 2020 [cited 2020 Aug 4];10:208-16. Available from: http://www.jmssjournal.net/text.asp?2020/10/3/208/288901




  Introduction


Brain–computer interface (BCI) is a broad multidisciplinary research field in which many researchers are active worldwide. BCI technology mainly aims at giving people who have lost motor function voluntary control over their environment, such as a computer or a wheelchair. Different technologies are available to measure brain activity, among which electroencephalography (EEG)-based BCIs have advantages including low cost, high temporal resolution, and portability.

The International BCI Award Foundation, a nonprofit organization founded in 2017 in Austria, annually recognizes outstanding and innovative research projects in the field of BCIs.[1] The award is donated by g.tec, a leading provider of BCI research equipment located in Austria. Every year, 12 projects are nominated before the winner is announced at the BCI Award Ceremony. However, the best-known BCI competitions are probably the Berlin BCI competitions, four series of which were held from 2001 to 2008.[2] Several laboratories and universities provided the datasets for these competitions. The tasks included control of cursor movement on a monitor screen, a P300 speller, self-paced key typing, motor imagery of fingers, hands, feet, and tongue, and decoding the direction of hand movement. A complete review of BCI Competition IV can be found in the literature.[3]

The Iranian BCI competitions were held by the National Brain Mapping Lab (NBML)[4] to provide a high-quality, publicly accessible national database for the neuroscience and neuroengineering communities. Researchers who do not have access to measurement systems can therefore easily obtain the data. Moreover, research groups that are relatively new to the field of BCI can attract attention and gain recognition if their algorithms perform well. Another important goal is to evaluate the accuracy of the signal processing and classification algorithms proposed by the competitors, so that the pros and cons of the methods can be identified and more attention can be directed to improving the most effective ones.

The competitions were held in 2017 and 2018, and participants were asked to take part in teams of two to five people to promote teamwork. Both competitions consisted of two stages. In the first stage, the competitors were given training and test datasets and asked to answer the scientific question and submit their results by the deadline. Based on the submitted results, the six and eight top-ranked teams were invited to the second stage of the 2017 and 2018 competitions, respectively, to compete in person on new test data.


  Materials and Methods


In order to be comparable with international competitions, two experimental scenarios were considered, namely multiclass motor imagery and motor execution. Moreover, to introduce some novelty relative to other competitions, movement onset detection was included in the second competition.

In this section, the experimental paradigm, the data recording, and the questions raised in both competitions are explained. The data are freely and publicly available, and researchers can contact the NBML for access.[4] Researchers wishing to publish results based on this dataset must acknowledge the NBML by citing this publication.

Experimental paradigm

The data measurement consisted of recording EEG, electrooculogram (EOG), and electromyogram (EMG) signals from 15 healthy, right-handed individuals (five females) aged between 26 and 42 years. The experiment was explained thoroughly to the participants, and they gave written informed consent. The cue-based BCI paradigm consisted of six different motor imagery and execution tasks, namely the execution and imagination of movements of the right thumb, right arm, and right foot. One session was recorded for each participant. Each session consisted of two runs separated by an approximately 10-min break. One run consisted of 120 trials (twenty for each of the six possible classes), yielding a total of 240 trials per participant.

At the beginning of each session, approximately 2 min of resting state were recorded: 1 min with eyes closed and then 1 min with eyes open. The participants sat in a comfortable armchair in front of a computer screen. At the beginning of a trial (t = 0 s), a fixation cross appeared on the black screen. After 2 s (t = 2 s), a cue in the form of a picture indicating the thumb, arm, or foot imagination or execution appeared and stayed on the screen for 2 s. After that, the word “Go!” appeared on the screen for 0.5 s, prompting the participants to perform the desired motor imagery or execution task. No feedback was provided. The participants were asked to carry out the task within 3 s. Each fixation cross was preceded by a short break during which the screen was black for 2 s. The paradigm is illustrated in [Figure 1].
Figure 1: Timing scheme of the paradigm



Data recording

A g.tec g.HIamp recording system was used to record the 64-channel EEG and three EOG signals. Moreover, three pairs of bipolar electrodes were used to record muscular activity during thumb, arm, and foot movements. The montage of EEG electrodes is shown in [Figure 2]. The right ear served as the reference and the FPz channel as the ground. The signals were sampled at 2400 Hz and notch filtered to remove the 50 Hz power-line noise.
Figure 2: Electroencephalography electrode montage by electrode name (left) and electrode number (right)

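As a minimal illustration of this preprocessing, the 50 Hz notch filtering can be sketched in Python as follows (the array shapes, function names, and quality factor are our own assumptions, not part of the released data-handling code):

import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 2400.0      # sampling rate used in the competitions (Hz)

def notch_powerline(eeg, fs=FS, f0=50.0, q=30.0):
    # Zero-phase 50 Hz notch filter applied channel-wise;
    # eeg has shape (n_channels, n_samples).
    b, a = iirnotch(w0=f0, Q=q, fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

# Random data standing in for a 10-s, 64-channel recording
eeg = np.random.randn(64, int(10 * FS))
eeg_clean = notch_powerline(eeg)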


The data format and the question of the first competition (2017)

The recorded continuous signals from seven participants were visually inspected to remove artifactual segments. Two datasets were then extracted separately from these signals for the motor execution and motor imagery tasks. Each dataset included the 3-s segments of the continuous data corresponding to the task-performing duration. According to the decision of the jury panel, 50% of the total number of trials for each type of task was assigned as training data and 30% as test data to be used in the first stage of the competition. This test dataset, together with the remaining 20% of the trials, composed the test dataset for the final stage of the competition. The teams were asked to classify the motor execution dataset into four classes corresponding to arm, thumb, and foot movements as well as the no-motion state. They were also asked to classify the motor imagery dataset into three classes corresponding to the imagination of arm, thumb, and foot movements. The classification procedure had to be applied separately to each participant's dataset.

The data format and the question of the second competition (2018)

The data for this competition included only the motor execution dataset, but from all 15 participants. In addition to movement classification based on each participant's EEG, the competitors were also asked to determine the movement onset instant. In the training dataset, the movement onset was determined from the EMG measurements in two steps: first, the onset time was obtained automatically using the Teager–Kaiser energy operator,[5],[6] and then it was fine-tuned through visual inspection. The onset time was marked by a 1 in the last row of the data matrix; for the no-motion class, no onset marker was present. The number of training and test trials was the same as in the 2017 competition.
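A minimal sketch of Teager–Kaiser-based EMG onset detection is given below; the smoothing window and the baseline threshold rule are our own assumptions for illustration, not the exact procedure used by the organizers:

import numpy as np

def teager_kaiser(x):
    # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_emg_onset(emg, fs, k=3.0, baseline_s=0.5):
    # First sample where the smoothed TKEO exceeds mean + k*std of a baseline segment
    psi = teager_kaiser(np.asarray(emg, dtype=float))
    win = max(1, int(0.05 * fs))                        # ~50 ms moving average
    smooth = np.convolve(psi, np.ones(win) / win, mode="same")
    base = smooth[: int(baseline_s * fs)]
    above = np.flatnonzero(smooth > base.mean() + k * base.std())
    return int(above[0]) if above.size else None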

Evaluation criteria

According to the decision of the jury panel, in the 2017 competition, the mean classification accuracy over the motor imagery and motor execution trials was selected as the criterion for evaluating the teams' results. Two confusion matrices were therefore computed, one each for the motor imagery and motor execution datasets of each of the seven participants. The classification accuracy was computed as the ratio of the sum of the diagonal elements of the confusion matrix to the sum of all its elements. These accuracies were then averaged across all seven participants. Finally, the average of the two accuracy values for motor imagery and motor execution was taken as the criterion for scoring the teams' performance. The jury panel also suggested considering the submission time of the results in case of a tie.
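In symbols, with C denoting the confusion matrix computed for one participant and one dataset, the per-dataset accuracy and the 2017 ranking score described above are

    Accuracy = tr(C) / sum_ij C_ij,        Score_2017 = (mean Acc_MI + mean Acc_ME) / 2,

where tr(C) is the sum of the diagonal elements and the means are taken over the seven participants.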

In the 2018 competition, the classification accuracy was computed in a similar way to 2017, except that only one accuracy was obtained, for the motor execution dataset. For scoring the movement onset detection, a trapezoidal membership function was used. The shorter and the longer parallel sides of the trapezoid spanned ±0.1 s and ±0.3 s, respectively, around the reference onset instant determined by the method explained in the previous section. Therefore, a team detecting the onset time within ±0.1 s received the full score, whereas an error larger than ±0.3 s received a score of 0. For errors between ±0.1 s and ±0.3 s, the score decreased linearly along the nonparallel sides of the trapezoid. This score was averaged across all test trials of each of the 15 participants. Finally, the evaluation criterion was the average of the classification accuracy and the onset detection score.
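A minimal Python sketch of this trapezoidal scoring rule (the function and parameter names are ours) is:

def onset_score(t_detected, t_reference, inner=0.1, outer=0.3):
    # 1.0 for |error| <= 0.1 s, 0.0 for |error| >= 0.3 s, linear ramp in between
    err = abs(t_detected - t_reference)
    if err <= inner:
        return 1.0
    if err >= outer:
        return 0.0
    return (outer - err) / (outer - inner)

# An error of 0.2 s falls halfway down the ramp and scores 0.5
print(onset_score(2.7, 2.5))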


  Results


Results of the first competition, 2017

A total of 46 teams registered for the first competition. Among them, 22 teams submitted results for the first stage. The teams were asked to submit the classification results as well as a report describing their preprocessing, feature selection, and classification steps. The average classification accuracy over the motor imagery and motor execution datasets was the criterion for selecting the first six teams. The teams and the scores they achieved are summarized in [Table 1]. In the following, we briefly review their proposed methods based on their submitted reports. The names of the participants are provided in [Appendix Table 1].
Table 1: Results of the first stage of the competition, 2017




Tumors applied band-pass filtering with cutoff frequencies of 0.5 and 70 Hz and independent component analysis (the Runica algorithm from the EEGLAB toolbox) as the preprocessing step. They considered both temporal features (the mean and the variance of the signals) and spectral features (such as the highest frequency bin in the spectrum, the average frequency, the median frequency, and the power spectral ratio). Overall, 640 features were extracted for each electrode. For feature reduction, they utilized the least absolute shrinkage and selection operator (LASSO).[7] Finally, for classification, they applied majority voting to the results obtained by a support vector machine (SVM), multilayer perceptron (MLP) and radial basis function (RBF) neural networks, and a random forest.
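A hedged sketch of such a majority-voting ensemble using scikit-learn is shown below; it is not the team's code, the RBF network has no direct scikit-learn counterpart and is omitted, and the feature matrix is assumed to be the output of the LASSO selection step:

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix and 4-class labels standing in for the real data
X, y = np.random.randn(120, 50), np.random.randint(0, 4, 120)

vote = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="hard",     # label = majority vote over the individual predictions
)
vote.fit(X, y)
predictions = vote.predict(X)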

MAZUST denoised the EEG signals using the wavelet transform.[8] The averages of the wavelet coefficients, features obtained from the Fourier transform domain, and statistical features such as the mean, standard deviation, entropy, skewness, and kurtosis were extracted as features. Using a genetic algorithm, they selected the superior features. They evaluated an MLP neural network and the k-nearest neighbor (KNN) algorithm for classifying the features and, based on the mean accuracy on 20% of the training data held out for validation, selected KNN as the final classifier since it reached better classification results.

Advanced Technologies in Medicine applied a median filter of length 13 to remove impulsive noise from the EEG signals. For each trial, 13 features were extracted, including the mean, Higuchi's fractal dimension, permutation entropy, the signal energy in different frequency bands, and three features obtained from the second-order spectrum. They included a calibration phase for feature selection, during which the best features were selected based on different criteria such as the t-test, the Wilcoxon test, the receiver operating characteristic curve, and the Bhattacharyya distance. In the calibration phase, they also selected the best classifier for each participant among SVM, KNN, and random forest.

Misagh used a series of three binary classifiers for movement classification. In the first and second classifiers, after band-pass filtering, the common spatial pattern (CSP) algorithm was applied to the signals to reduce the number of channels to m (m ∈ {4, 5, ..., 8}, depending on the dataset). Then, the variance of each projected channel was used as a feature. Finally, using linear discriminant analysis (LDA), the data were projected along the direction of maximum class separation. In the last classifier, they used joint diagonalization, a blind source separation method, for discriminating the classes. A similar procedure was performed for the motor imagery dataset.
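A compact sketch of the CSP-plus-LDA idea for one binary sub-problem is given below (our own implementation for illustration, not the team's code; trial arrays are assumed to be band-pass filtered, with shape trials × channels × samples):

import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, m=6):
    # Spatial filters from the generalized eigendecomposition of the
    # two class-average covariance matrices: Ca w = lambda (Ca + Cb) w
    Ca = np.mean([np.cov(t) for t in trials_a], axis=0)
    Cb = np.mean([np.cov(t) for t in trials_b], axis=0)
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    idx = np.r_[order[: m // 2], order[-(m - m // 2):]]   # both ends are discriminative
    return vecs[:, idx]                                    # (n_channels, m)

def log_var_features(trials, W):
    # Log of the normalized variance of each CSP component, one vector per trial
    z = np.einsum("cm,ncs->nms", W, trials)
    v = z.var(axis=-1)
    return np.log(v / v.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
a = rng.standard_normal((40, 64, 1200))    # placeholder class-A trials
b = rng.standard_normal((40, 64, 1200))    # placeholder class-B trials
W = csp_filters(a, b, m=6)
X = np.vstack([log_var_features(a, W), log_var_features(b, W)])
y = np.r_[np.zeros(40), np.ones(40)]
lda = LinearDiscriminantAnalysis().fit(X, y)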

Biological Signal Processing Laboratory of AmirKabir performed routine preprocessing steps such as removing the direct current (DC) offset by high-pass filtering. For feature extraction, the covariance matrices of all trials were obtained and their center of gravity was computed based on Riemannian geometry. Each covariance matrix was then projected onto the tangent space at this center of gravity, yielding a feature vector for each trial. Applying principal component analysis (PCA), they decorrelated the feature components and, finally, selected the superior features based on joint mutual information. They classified the data using two methods, namely minimum distance to mean in the Riemannian space and a learning fuzzy classifier.
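The tangent-space mapping can be sketched as follows (a minimal NumPy/SciPy illustration written by us, assuming per-trial covariance matrices have already been computed; it is not the team's code):

import numpy as np
from scipy.linalg import expm, fractional_matrix_power, logm

def riemannian_mean(covs, n_iter=20):
    # Iterative Frechet mean of SPD matrices under the affine-invariant metric
    G = np.mean(covs, axis=0)
    for _ in range(n_iter):
        G_is = fractional_matrix_power(G, -0.5)
        G_s = fractional_matrix_power(G, 0.5)
        S = np.mean([logm(G_is @ C @ G_is).real for C in covs], axis=0)
        G = G_s @ expm(S) @ G_s
    return G

def tangent_features(covs, G):
    # Project each covariance onto the tangent space at G and vectorize the
    # upper triangle (off-diagonal entries scaled by sqrt(2))
    G_is = fractional_matrix_power(G, -0.5)
    iu = np.triu_indices(G.shape[0])
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return np.array([w * logm(G_is @ C @ G_is).real[iu] for C in covs])

covs = np.array([np.cov(np.random.randn(8, 500)) for _ in range(20)])  # toy trials
G = riemannian_mean(covs)
features = tangent_features(covs, G)   # then PCA / feature selection / classifier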

OCD extracted ten linear features, including the mean, the variance, the mean frequency, and the power in seven sub-bands of the spectrum. The features were ranked based on the class scatter matrix. For classifying the motor execution and motor imagery datasets, they used random forest and SVM, respectively.

In the second stage of the competition, the 20% of the trials that were new to the competitors were mixed with the test data of the first stage, and the teams were asked to apply their methods to the whole test dataset. The teams' scores are presented in [Table 2]. The names of the participants are provided in [Appendix Table 1].
Table 2: Results of the final stage of the competition, 2017



Based on [Table 2], the scores of the teams were similar; therefore, according to the opinion of the jury panel, Misagh and OCD jointly won the first place, MAZUST and Biological Signal Processing Laboratory of AmirKabir jointly won the second place, and Advanced Technologies in Medicine won the third place.

Results of the second brain–computer interface competition, 2018

A total of 71 teams registered for the second competition. Among them, 15 teams submitted results for the first stage. The teams were asked to submit the classification results as well as a report describing the preprocessing, feature selection, and classification steps. The average of the classification accuracy and the movement onset detection score constituted the criterion for selecting the eight top-ranked teams. These teams, their classification methods, and their scores are summarized in [Table 3].
Table 3: Results of the first stage of the competition, 2018



In the following section, we briefly review the competitors' methods based on their submitted reports.

Zehn Plus Sharif computed the covariance matrix of each trial and the Riemannian distances and Riemannian means of both the training and test datasets,[9],[10],[11] mapped the trials onto the tangent space, and, finally, applying PCA, selected the feature vectors. They then used LDA as the classifier. For detecting the movement onset, they computed the cross-correlation between every pair of trials belonging to the same movement class and obtained the lag with maximum correlation. Applying PCA to the computed lags, they selected the features for every class and applied linear regression.

Rayan performed classification and onset detection based on Riemannian geometry. The covariance matrices of the training trials were computed and treated as points in the Riemannian space, and the mean of each class was computed. The same procedure was performed on the test trials, and each trial was assigned to the class whose mean was at minimum distance. For onset detection, they exploited the event-related synchronization that occurs after movement. Frequency decomposition was applied to windowed segments of the signals, and the fractal dimension of the signal in each frequency band was computed with the Katz algorithm. The time instant at which the fractal dimension was closest to 2 was selected as the movement onset.
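One common formulation of the Katz fractal dimension, which could be applied per window and per frequency band as described above, is sketched below (our own illustration, not the team's code):

import numpy as np

def katz_fd(x):
    # Katz fractal dimension of a 1-D signal:
    # L = total curve length, d = max deviation from the first sample, n = number of steps
    x = np.asarray(x, dtype=float)
    L = np.abs(np.diff(x)).sum()
    d = np.max(np.abs(x - x[0]))
    n = len(x) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Per the rule described above, the onset would be the window whose
# fractal dimension is closest to 2, e.g.:
# onset_window = int(np.argmin(np.abs(np.array(fd_per_window) - 2.0)))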

Kavoshgarane Dadeh, after downsampling, decomposed the data into several independent sub-bands using filter bank analysis, namely nine band-pass filters in the frequency range of 4–40 Hz.[12] The filter bank was applied to all channels, and then automated channel selection and the CSP algorithm[13],[14] were applied independently to every sub-band. After reducing the dimension of the features using PCA,[15] they applied a linear logistic regression classifier. For detecting the movement onset, they applied a Laplacian spatial filter to the electrodes located over the left motor cortex, extracted the mu band with a Chebyshev band-pass filter, and computed the power in the frequency range of 8.5–11.5 Hz. Finally, they selected the instant at which this power reached its maximum as the movement onset.
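The mu-power onset rule can be sketched as follows (a rough Python illustration; the electrode indices, filter order, and window length are our own assumptions):

import numpy as np
from scipy.signal import cheby1, sosfiltfilt

def laplacian(eeg, center, neighbours):
    # Small Laplacian: centre channel minus the mean of its neighbours
    return eeg[center] - eeg[neighbours].mean(axis=0)

def mu_power(x, fs, band=(8.5, 11.5), win_s=0.5):
    # Chebyshev band-pass to the mu range, then a moving-window power estimate
    sos = cheby1(4, 0.5, band, btype="band", fs=fs, output="sos")
    mu = sosfiltfilt(sos, x)
    win = max(1, int(win_s * fs))
    return np.convolve(mu ** 2, np.ones(win) / win, mode="same")

fs = 2400
eeg = np.random.randn(64, 5 * fs)                            # placeholder recording
x = laplacian(eeg, center=28, neighbours=[27, 29, 18, 38])   # hypothetical C3 neighbourhood
onset_sample = int(np.argmax(mu_power(x, fs)))               # maximum-power rule described above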

Misagh, for movement classification, performed a procedure similar to the one they used in the previous competition. For detecting the movement onset, they extracted intervals of the training dataset around the onset instant for the trials belonging to the same movement class. They then slid this interval over the test data and computed the correlation. The time instant corresponding to the maximum correlation value was selected as the onset time.
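The sliding-template idea can be illustrated with the following sketch (our own single-channel simplification, not the team's implementation):

import numpy as np

def sliding_correlation_onset(template, trial):
    # Slide a class-specific onset template over a test trial and return the
    # start index with the highest Pearson correlation
    n = len(template)
    t0 = (template - template.mean()) / (template.std() + 1e-12)
    best_idx, best_r = 0, -np.inf
    for start in range(len(trial) - n + 1):
        seg = trial[start:start + n]
        r = float(np.dot(t0, (seg - seg.mean()) / (seg.std() + 1e-12))) / n
        if r > best_r:
            best_r, best_idx = r, start
    return best_idx, best_r

template = np.sin(np.linspace(0, np.pi, 240))                # toy onset template
trial = np.concatenate([0.1 * np.random.randn(1000), template, 0.1 * np.random.randn(500)])
print(sliding_correlation_onset(template, trial)[0])         # approximately 1000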

Advanced Technologies in Medicine extracted time–space features including the mean, variance, Higuchi's fractal dimension, and permutation entropy. Using Daubechies wavelets, time–frequency features such as the mean of the coefficients were also extracted. They used step-wise LDA for feature selection; in this method, the features whose linear combination best separates the classes are selected. LDA was then used as the classifier. For the detection of the movement onset, they extracted a template signal of the movement onset for each participant and each movement class separately. To do this, the signals of electrodes number 2 and 3 were smoothed and averaged; the resulting signal was then aligned on the onset trigger and averaged. The correlation of this template with the average of electrodes 1 and 2 of the test trials was computed, and the lag with the maximum value was selected as the onset instant.

Behin Tadbir first applied high-pass filtering to remove low-frequency noise such as eye blinks, eye movements, and ECG. They band-pass filtered the EEG to extract the beta rebound feature, an event-related desynchronization (ERD) occurring approximately 2 s after movement onset and reflected in the power spectral density of the mu and beta bands (8–30 Hz). In addition, using Riemannian geometry, they mapped each trial to a point on a manifold and used a geodesic metric for measuring the similarity of points (trials) based on the nearest neighbor. Finally, they applied machine learning algorithms including LDA and SVM for classification. For finding the onset of movement execution, based on the ERD effect, they convolved the signal with a window of fixed length and considered the point at which the smoothed signal reached its minimum as the movement onset instant.

Tirdad downsampled the signals and filtered them in the alpha and beta bands, i.e., 8–30 Hz. For feature extraction, they decomposed every channel with the Morlet wavelet into 23 frequency bins and 75 time frames. A four-dimensional tensor of size channel × frequency bin × time frame × trial was then formed. Using the higher-order discriminant analysis algorithm, the tensor of the training data was factorized into a core tensor and space, time, and frequency factors. The core tensor formed the features, and discriminant features were selected based on Fisher scoring. Finally, SVM was used as the classifier. For onset detection, they used the wavelet transform with the Morlet wavelet, obtained the power of the alpha band at different times, and selected the first instant at which the alpha power increased as the movement onset time.

AMBCI, in the preprocessing step, removed the mean of the signals, low-pass filtered them with a cutoff frequency of 60 Hz, and downsampled them to reduce the computational burden. For onset detection, they defined a binary response based on features such as the signal phase, short-time energy, and the time-domain signal; this response was used to train a deep Levenberg–Marquardt backpropagation neural network. For classification, features such as the mean and median frequency, discrete sine transform, discrete cosine transform, short-time Fourier transform, CSP, and discrete wavelet transform were considered. An ANOVA test was used for feature selection, so that the features having a P value below a significance threshold were selected for training the algorithm.
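The ANOVA-based selection step can be sketched with scikit-learn as follows (the feature matrix and the 0.05 threshold are our own assumptions for illustration):

import numpy as np
from sklearn.feature_selection import f_classif

X, y = np.random.randn(200, 300), np.random.randint(0, 4, 200)  # placeholder features/labels

F, p = f_classif(X, y)              # one-way ANOVA F-test per feature
alpha = 0.05                        # significance threshold (assumed)
selected = np.flatnonzero(p < alpha)
X_selected = X[:, selected]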

According to [Table 4] and based on the averaged classification accuracy of motor execution and the score of onset detection, Zehn Plus Sharif, Behin Tadbir, and Kavoshgarane Dadeh won the first, second, and third places, respectively.
Table 4: Results of the final stage of the competition, 2018




  Conclusion


In this article, we presented the first and second Iranian BCI competitions held by the NBML in 2017 and 2018. The competitions aimed to provide a dataset for researchers in the field of BCI, to introduce new scientists to the field, and to review the methods used by the teams. In the first competition, 22 of the 46 registered teams submitted results, whereas in the second competition, the number of registered teams increased to 71. This shows that participating in these types of competitions is becoming more popular among Iranian researchers. However, the number of teams that submitted results dropped to 15, possibly because of the more challenging scientific question related to movement onset detection.

In the 2017 competition, the maximum classification accuracy for the motor execution dataset was 77%, achieved by Tumors. In the 2018 competition, this figure was around 69%, achieved by Tirdad. The former used majority voting over the results of SVM, MLP and RBF neural networks, and random forest classifiers, whereas the latter used an SVM classifier with features obtained from tensor factorization. Although the number of participants whose data had to be classified in the 2018 competition was around twice that of the 2017 competition (15 vs. 7), the classification was performed per participant, so the relatively lower accuracy should not be attributed to the larger dataset. It is therefore reasonable to conclude that applying different classifiers and combining their outputs in a majority-voting framework can increase the classification accuracy. For motor imagery classification, the maximum accuracy was around 42%, achieved by Biological Signal Processing Laboratory of AmirKabir, which utilized features based on Riemannian geometry and a learning fuzzy classifier. Overall, the low accuracy on this dataset might be attributed to its lower quality: no feedback or training session was provided, and we could not be sure whether the participants performed the imagery task accurately. For movement onset detection, the maximum score was 0.71, achieved by Zehn Plus Sharif.

In line with NBML policy, BCI competitions will continue to be held in the future with new and more challenging scientific questions.

Acknowledgments

The Iranian NBML, Tehran, Iran, was the sponsor of both competitions and provided the required datasets. We would like to thank all the study participants as well as the researchers who participated in the competitions.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.


  Biographies




Nasser Samadzadehaghdam received a B.Sc. degree in electrical engineering (bioelectric) from Sahand University of Technology, Tabriz, Iran, in 2009. He received M.Sc. and Ph.D. degrees in biomedical engineering (bioelectric) from Isfahan University of Medical Sciences, Isfahan, Iran, in 2014 and Tehran University of Medical Sciences, Tehran, Iran in 2019, respectively. Currently, he cooperates with Research Center for Biomedical Technologies and Robotics (RCBTR). His research interests include biomedical image and signal processing, EEG/MEG forward and inverse problems, neuroscience, and BCI.

Email: n-samadzadeh@razi.tums.ac.ir



Mohammad Hassan Moradi received B.Sc. and M.Sc. degrees in electronic engineering from Tehran University in 1988 and 1990, respectively, and a Ph.D. degree in electrical engineering (biomedical engineering) from University of Tarbiat Modarres, Tehran, in 1995. He is currently a Professor in the biomedical engineering department, Amirkabir University of Technology. He has published over 100 refereed research articles related to biomedical engineering. His primary research and teaching interests involve medical instrumentation, bioimpedance measurement, biomedical signal processing, wavelet theory and applications, time-frequency transforms, and fuzzy neural systems.

Email: mhmoradi@aut.ac.ir



Mohammad Bagher Shamsollahi received B.Sc., M.Sc. and Ph.D. degrees in electrical engineering from Tehran University in 1988, Sharif University of Technology in 1991, and University of Rennes 1, France in 1997, respectively. He is currently a Professor in the Biomedical Engineering Department, Sharif University of Technology. His research interests include biomedical signal processing, statistical signal processing, time-frequency analysis, wavelets, and pattern recognition.

Email: mbshams@sharif.edu



Ali Motie Nasrabadi received a B.Sc. degree in electronic engineering in 1994 and M.Sc. and Ph.D. degrees in biomedical engineering in 1999 and 2004, respectively, from Amirkabir University of Technology, Tehran, Iran. He joined Shahed University in 2005 and is currently a professor in its biomedical engineering department in Tehran, Iran. His research interests include biomedical signal processing, nonlinear time series analysis, and evolutionary algorithms. Particular applications include EEG signal processing in mental task activities, hypnosis, BCI, and epileptic seizure prediction.

Email: nasrabadi@shahed.ac.ir



Seyed Kamaledin Setarehdan is a Professor of Biomedical Engineering at the School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran. He received his B.Sc. in electronics engineering from the University of Tehran, his M.Sc. in biomedical engineering from Sharif University of Technology, Tehran, Iran, and his Ph.D. in medical signal/image processing from the University of Strathclyde, Glasgow, UK. He was a postdoctoral research fellow at the Signal Processing Division of the University of Strathclyde from 1998 to 2001. He joined the Biomedical Engineering Group, School of Electrical and Computer Engineering, University of Tehran, in January 2001, where he is now a professor. His main research interests are medical signal and image processing in general, medical ultrasound, medical optics, and medical applications of near-infrared spectroscopy (fNIRS).

Email: ksetareh@ut.ac.ir



Vahid Shalchyan received the M.Sc. degree in biomedical engineering from the Amirkabir University of Technology, Tehran, in 2002 and the Ph.D. degree in biomedical science and engineering from Aalborg University, Aalborg, Denmark, in 2013. From 2011 to 2013, he was a visiting researcher at the University Medical Center Göttingen, Georg-August University, Göttingen, Germany. He is an Assistant Professor in the Department of Biomedical Engineering, Faculty of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran. His main research interests include biomedical signal processing and pattern recognition, with emphasis on their application to neural signals for neuroscience, neurotechnology, and brain-computer interface research.

Email: shalchyan@iust.ac.ir



Farhad Faradji received the B.Sc. degree in electrical engineering, the B.Sc. degree in biomedical engineering, and the M.Sc. degree in electrical engineering from the Amirkabir University of Technology, Tehran, Iran, in 2005, 2007, and 2007, respectively, and the Ph.D. degree in electrical and computer engineering from The University of British Columbia, Vancouver, Canada, in 2012. He was an Assistant Professor with the Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran from 2013 to 2018. He is currently with the R&D Department of BroadbandTV, Vancouver, Canada. His research interests include data science, machine learning, pattern recognition, and signal and image processing.

Email: faradji@eetd.kntu.ac.ir



Bahador Makkiabadi received his B.Sc. degree in electrical engineering from Shiraz University, Shiraz, Iran in 1997. He received M.Sc. and Ph.D. degrees in biomedical engineering from Amirkabir University of Technology, Tehran, Iran, and University of Surrey, Guildford, Surrey, UK in 2000 and 2011, respectively. Currently, he is an assistant professor at Tehran University of Medical Sciences and Research Center for Biomedical Technology and Robotics (RCBTR). His research interests include blind source separation, BCI, and array signal processing.

Email: b-makkiabadi@sina.tums.ac.ir



 
  References

1. The Annual BCI Award. Available from: https://www.bci-award.com/Home. [Last accessed on 2019 Oct 12].
2. BCI Competitions. Available from: http://www.bbci.de/competition. [Last accessed on 2019 Oct 12].
3. Tangermann M, Müller KR, Aertsen A, Birbaumer N, Braun C, Brunner C, et al. Review of the BCI Competition IV. Front Neurosci 2012;6:55.
4. Scientific Tournament. Available from: https://nbml.ir/FA/scientific-tournament/. [Last accessed on 2019 Oct 12].
5. Tabie M, Kirchner EA. EMG onset detection: Comparison of different methods for a movement prediction task based on EMG. In: Biosignals; 2013.
6. Solnik S, Rider P, Steinweg K, DeVita P, Hortobágyi T. Teager-Kaiser energy operator signal conditioning improves EMG onset detection. Eur J Appl Physiol 2010;110:489-98.
7. Tibshirani R. Regression shrinkage and selection via the lasso. J Royal Statistical Soc Series B (Methodological) 1996;58:267-88.
8. Princy R, Thamarai P, Karthik B. Denoising EEG signal using wavelet transform. Int J Adv Res Comp Engineering Technol 2015;4:1070-4.
9. Barachant A, Bonnet S, Congedo M, Jutten C. Multiclass brain-computer interface classification by Riemannian geometry. IEEE Trans Biomed Eng 2012;59:920-8.
10. Barachant A, Bonnet S, Congedo M, Jutten C. Classification of covariance matrices using a Riemannian-based kernel for BCI applications. Neurocomputing 2013;112:172-8.
11. Congedo M, Barachant A, Bhatia R. Riemannian geometry for EEG-based brain-computer interfaces; a primer and a review. Brain-Comp Int 2017;4:155-74.
12. Ang KK, Chin ZY, Wang C, Guan C, Zhang H. Filter bank common spatial pattern algorithm on BCI Competition IV datasets 2a and 2b. Front Neurosci 2012;6:39.
13. Meng J, Liu G, Huang G, Zhu X. Automated selecting subset of channels based on CSP in motor imagery brain-computer interface system. In: 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE; 2009.
14. Ramoser H, Müller-Gerking J, Pfurtscheller G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans Rehabil Eng 2000;8:441-6.
15. Duda RO, Hart PE, Stork DG. Pattern Classification. New York: John Wiley & Sons; 2012.





 
