
ORIGINAL ARTICLE
Year : 2020  |  Volume : 10  |  Issue : 4  |  Page : 228-238

Thought-actuated wheelchair navigation with communication assistance using statistical cross-correlation-based features and extreme learning machine


1 Department of Mechatronics Engineering, AMA International University, Salmabad, Bahrain
2 Department of Computer Science and Engineering, Sri Ramakrishna Institute of Technology, Coimbatore, Tamil Nadu, India
3 Electrical, Electronic and Automation Section, Universiti Kuala Lumpur Malaysian Spanish Institute, Kedah, Malaysia
4 School of Mechatronic Engineering, Universiti Malaysia Perlis, Perlis, Malaysia

Date of Submission: 25-Sep-2019
Date of Decision: 31-Oct-2019
Date of Acceptance: 20-Mar-2020
Date of Web Publication: 11-Nov-2020

Correspondence Address:
Dr. Sathees Kumar Nataraj
Department of Mechatronics Engineering, AMA International University
Bahrain

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmss.JMSS_52_19

  Abstract 


Background: A simple data collection approach based on electroencephalogram (EEG) measurements has been proposed in this study to implement a brain–computer interface, i.e., a thought-controlled wheelchair navigation system with communication assistance. Method: EEG signals were recorded for seven simple tasks using the designed data acquisition procedure. These seven tasks are used to control wheelchair movement and to interact with others through an odd-ball paradigm. The proposed system records EEG signals from 10 individuals at eight channel locations while the individual executes seven different mental tasks. The acquired brainwave patterns were processed to eliminate noise, including artifacts and power-line interference, and were then partitioned into six frequency bands. The proposed cross-correlation procedure then employs the segmented frequency bands from each channel to extract features: cross-correlation coefficients are obtained in the frequency domain from consecutive frame samples, and statistical measures ("minimum," "mean," "maximum," and "standard deviation") are derived from the cross-correlated signals. Finally, the extracted feature sets were validated using the online sequential extreme learning machine (OS-ELM) algorithm. Results and Conclusion: The results of the classification networks were compared across feature sets, and the μ(r) feature set based on cross-correlation signals gave the best performance, with a recognition rate of 91.93%.

Keywords: Brain–computer interface, communication assistance, online sequential-extreme learning machine, statistical cross correlation-based features, wheelchair navigation system


How to cite this article:
Nataraj SK, Paulraj M P, Yaacob SB, Bin Adom AH. Thought-actuated wheelchair navigation with communication assistance using statistical cross-correlation-based features and extreme learning machine. J Med Signals Sens 2020;10:228-38

How to cite this URL:
Nataraj SK, Paulraj M P, Yaacob SB, Bin Adom AH. Thought-actuated wheelchair navigation with communication assistance using statistical cross-correlation-based features and extreme learning machine. J Med Signals Sens [serial online] 2020 [cited 2020 Nov 23];10:228-38. Available from: https://www.jmssjournal.net/text.asp?2020/10/4/228/300504




  Introduction


The fundamental needs of people in their day-to-day routine involve walking and interacting with other individuals. Individuals with specific disabilities, including motor neuron disease, severe spinal injuries, involuntary speech failure, and brainstem stroke, have limited mobility and limited ability to interact with others (loss of muscle coordination and speech). Such individuals have active brain functions and are often referred to as differentially disabled (DE).[1],[2] Under these conditions, DE patients find it hard to walk or communicate with the outside world. It is, therefore, important to provide the DE community with an assistive technology device (ATD), enabling them to lead healthy and normal lives. To date, various ATDs have been established using bioamplifiers, for example, cursor movement,[3] neuroprosthetic arms,[4] whole-body movement,[5] emotion recognition,[6] and driver sleepiness detection,[7] motivated by the transmission of noninvasive brain function measurements through effective electroencephalogram (EEG) amplifiers.[8],[9] This study is aimed at recognizing unspoken speech signals and controlling wheelchair mobility without voluntary muscle function.[10],[11],[12],[13],[14],[15],[16]

With regard to BCIs for speech communication, a BCI was proposed in[17] that can recognize seven words using electrical and magnetic brain waves acquired under three experimental conditions (electromyography [EMG] results, single-trial predictions, and subject-independent predictions). That research emphasized that brain waves contain significant information about mentally processed words, and therefore it is possible to recognize thought-controlled words using signal processing algorithms.[18] Researchers carried out experiments with five locked-in patients based on slow cortical potentials (SCPs), self-regulated as a control signal to select letters, words, or pattern symbols in a computer-based language support arrangement. From this analysis, it was observed that adequate learning speed and success rate are necessary for SCP-based experiments when transmitting binary decisions to the computer. Porbadnigk et al., 2009, introduced a BCI for speech production based on the dual-tree complex wavelet transform and a linear discriminant analysis (LDA) paradigm.[19] The developed model can recognize five words using a 16-channel EEG data acquisition system, but this methodology is not yet established in practice and has a low performance rate of 45.50%. Guenther et al., 2011, proposed a practical implementation of an assistive BCI with real-time speech synthesis; the developed system uses formant tuning analysis and a Kalman filter (linear Gaussian model) to drive a speech synthesizer.[20] In particular, this approach has been used to recognize vowel productions in a customized mode. The electrodes were implanted using MRI-guided stereotactic surgery, and the results suggest that the average performance rate reaches 40%–70% across sessions, with an information transfer rate (ITR) within 50 ms.
Salama et al.[21] indicated that the classification rate for unspoken speech recognition relies on the individual's concentration on the task and on capturing signals with fewer artifacts. The findings were based on a single EEG electrode channel used to distinguish the two Arabic words for "YES" and "NO."

With regard to BCIs for navigating an electric wheelchair, Tanaka et al., 2005, used a twelve-channel EEG acquisition system to acquire brain signals during motor imagery tasks (LEFT or RIGHT).[22] The results suggest that subject-independent analysis based on a recursive training algorithm achieves an average classification rate of 80%. Müller-Putz et al., 2008, and Pfurtscheller, 2008, have shown that a tetraplegic individual can use brain waves to direct a wheelchair trajectory in virtual reality by interpreting imagined foot movements.[4],[23] The system, based on a single patient with single-channel analysis, has an average performance rate of 90% and single runs of up to 100% using asynchronous tasks. Leeb et al., 2007, presented an experiment with two individuals in five different experimental sessions using imagery tasks and a simulated wheelchair.[24] The experiments showed that reliable user-specific EEG features improve the recognition of imagined motor activity. From the analysis, individual 1 was able to control the wheelchair with an average success rate of 80%, and could control the dynamic robot autonomously over prolonged periods without sophisticated evolutionary algorithms. The work in[10] introduced a four-tactile-stimulator BCI that helped participants maneuver through the stimuli shown in an odd-ball paradigm; the suggested procedure can also be used to control wheelchair movement and to support speech recognition. The outcomes were evaluated by the individuals who controlled the virtual wheelchair. A recent review[25] stated that the use of P300, sensorimotor rhythms (SMRs), or steady-state visual-evoked potentials (VEPs) in most BCIs has shown promising results, but has focused on offline evaluation of the acquired signals (databases).
BCIs are under investigation for real-time implementation with actual online testing of new feature extraction algorithms and classifiers. It is, therefore, important to use enhanced EEG recording technologies, optimized signal analysis algorithms, and real-time integration with online evaluation for effective interaction between the user and the BCI.

Lawhern et al., 2018, proposed a compact EEG-based BCI, EEGNet, using a convolutional neural network.[26] The network was evaluated across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses, movement-related cortical potentials, and SMR. The findings suggest that EEGNet establishes a reliable framework for learning a wide range of interpretable features across a variety of BCI tasks.

From the literature, it can be observed that no prior research has addressed the combined use of BCI technologies for both mobility and speech communication using a custom brain-activity measurement approach. Moreover, work on EEG-based navigation and communication technologies has shown that the efficiency of a BCI is largely contingent on the tasks being executed, the number of EEG channels (electrode positions) used, and the data acquisition procedure. It is also observed that a BCI's classification performance and ITR differ with the paradigm (SCPs, P300, YES/NO, and cursor movement), the individual (normal or DE), and the mode (custom or generalized). A successful BCI can be achieved by choosing proper data acquisition tasks and stimuli (thought-evoked potential [TEP] or VEP), robust signal processing algorithms, and suitable training for real-time implementation in different circumstances.[10],[13],[17],[21],[22],[26],[27],[28],[29],[30]

Certain aspects need to be explored to develop a wheelchair navigation system (WNS) with communication assistance. For example, backpropagation-based multilayer neural networks, hidden Markov models, support vector machines (SVMs), Gaussian mixture models, and LDA are the most efficient and widely used algorithms for classifying motor imagery tasks, although the mean square error and the number of training iterations can still be optimized.[19],[31],[32],[33] In addition, the training time participants need to build proficiency, together with effective integration of reliable features, has been considered for real-time implementation.[16],[24] Building on the above literature, noninvasive BCIs have contributed significantly to the implementation of the proposed WNS system in this research, as a first step toward speech and motor control through the proposed protocol. In the absence of muscle coordination and speech, the proposed system can be used to assist DE people. This research attempts to develop a wheelchair that provides mobility control and communication assistance via brainwave stimulation. The established device could then be used by DE and other speech-impaired individuals to move around and express their needs. The protocol used for data acquisition and preprocessing of the recorded signals is explained in Section 2.
Cross-correlation-based techniques are proposed in this analysis to derive features from the frequency-band signals at every electrode channel.[33] Although classifiers such as the backpropagation neural network,[34] SVM,[35] and LDA[36] have shown high recognition efficiency, the extreme learning machine (ELM) proposed by Huang et al., 2012, is an effective tuning-free algorithm for training on a feature set that employs a single-hidden-layer feed-forward neural network (SLFN).[37],[38] ELM reduces training time relative to conventional artificial neural networks and has been employed in other areas of research, particularly in BCI.[37],[38],[39] Accordingly, the statistical features extracted for the WNS classification system are linked with the corresponding imagery tasks and evaluated using the online sequential ELM (OS-ELM).[40] Section 3 explains the feature extraction method and the classifiers used in this research. [Figure 1] illustrates the schematic depiction of the proposed WNS system.
Figure 1: Schematic representation of the proposed wheelchair navigation system




  Wheelchair Navigation System Database


The data acquisition process was carried out in a laboratory environment at the School of Mechatronics Engineering, University Malaysia Perlis.[41] Prior to initiating data collection, the experimental procedure was registered with and accepted by the "National Medical Research Registration" (NMRR ID: NMRR-13-51-14570) and received ethical clearance from the Medical Research Ethics Committee (MREC), Ministry of Health Malaysia (Ref:[7]dlm. KKM/NIHSEC/800-2/2/2Jld2P13-179).[41] This section discusses the basic techniques used for the experimental setup, including the configuration of the wireless bioamplifier and the positioning of electrode channels for measuring brain activity. In addition, appropriate task identification, the data capture process, and the development of the WNS database are addressed. This procedure is important for classifying the captured tasks based on the TEPs that control the WNS.

Experimental Setup and Data Acquisition

The experimental setup has been configured with a bio-signal data acquisition system known as “g-mobilab+” (a device that captures EEG signals from eight-channel positions) to record brainwave responses.[42] The data acquisition framework comprises the following components:

  1. Electrode cap with nine individual screw-in electrodes
  2. A bio-signal amplifier
  3. Electrode gels, and
  4. Wireless data transmission through the MATLAB® integrated programming package.


In the experimental paradigm, the aim is to establish a BCI device suitable for the DE community to operate the wheelchair joystick and to interact with others (through an odd-ball paradigm) based on brainwave signals.[42],[43]

Therefore, three major tasks that describe the movement of the robotic wheelchair, as well as the selection of isolated words/phrases in an odd-ball paradigm, were incorporated in the data acquisition process: LEFT-, FORWARD-, and RIGHT-hand motion control. Additionally, to communicate with the outside world and to alert the caretaker in an emergency, three further tasks were introduced: "HELP," "YES," and "NO," to convey basic human needs. The EEG responses obtained for the RELAX (normal) task were used as the reference signal. The acquisition was performed in a protected semi-soundproof chamber, in which the individuals sat in a comfortable position and performed the seven tasks asynchronously. The signals were obtained while the individuals remained still; no overt movements were allowed during the 12-s recording process.

In this process, the motor imagery signals related to the tasks are measured from eight electrode locations: the "temporal lobe" (T3 and T4), "central lobe" (C3 and C4), "parietal lobe" (P3 and P4), and "occipital lobe" (O1 and O2), while the individual executes the seven tasks asynchronously. A referential recording scheme was used for electrode positioning: the electrodes were placed at these locations (T3 and T4, C3 and C4, P3 and P4, O1 and O2) with one reference electrode on the individual's left earlobe, and the potentials were maintained relatively constant.[44],[45]

The experimental WNS model measures patterns of brain activity to distinguish the rhythmic patterns of an individual's seven different thoughts. The brainwave signals were collected from a grid of eight Ag/AgCl scalp electrodes at a sampling rate of 256 Hz during the data acquisition procedure. The electrodes were mounted on the scalp according to the international 10–20 lead system,[46] and a g.tec impedance checker was used to monitor impedance levels. The impedance level was measured after each task was completed and kept below 10 kΩ.

Thought-Evoked Potential Data Acquisition and Wheelchair Navigation System Dataset

In the development of the WNS dataset, 10 healthy BCI-naive volunteers were selected for the data acquisition process (eight males, aged between 21 and 30 years, and two females, aged 24 years). Before the data collection for each task, a detailed demonstration of the task (a simulation) was given to the individuals on a liquid crystal display (LCD) monitor. The simulation demonstrates the movement of the wheelchair joystick in the LEFT, FORWARD, and RIGHT directions using right-hand, both-hand, and left-hand movements. The video shows a volunteer executing up-down and left-right head movements for the "YES" and "NO" tasks. For "HELP," the individual was instructed to mentally pronounce the word "HELP" (as he/she would normally call for help in an emergency), rather than contemplating hand movements. The LCD screen was then switched off and a 2-s blank screen was displayed, after which the participant was asked to perform the tasks asynchronously as shown in the simulation. The participant subsequently performed the given task, and the EEG responses were collected across the parietal lobe (P3 and P4), temporal lobe (T3 and T4), central lobe (C3 and C4), and occipital lobe (O1 and O2) locations for 12 s. According to the 10–20 method, the ground and reference electrodes were placed at Fpz and the left earlobe.[47]

The acquired EEG signal contains interference of unknown noise characteristics, such as power-line disruption at 50 Hz. Hence, a simple first-order IIR notch filter was designed to remove power-line disruption from the acquired signals, with the filter center frequency (F0) set at approximately 50 Hz and a bandwidth of ΔF = 4 Hz.[46],[48] The recorded signals were digitized at a sampling frequency of 256 Hz. Ten trials were made for each task during the acquisition process, and the participants were allowed 10-min breaks between tasks. This process was repeated for all ten participants, and the captured signals were labeled as the WNS dataset, comprising 10 participants × 7 tasks × 10 trials. The proposed TEP-protocol-based WNS database will also be extended in future research with more volunteers to develop a generalized system. The database was then validated using hypothesis testing based on an analysis-of-variance algorithm, and the P value was found to be below α (significance level: 0.05).[49]
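The 50 Hz notch described above can be sketched as follows (a minimal sketch assuming Python with NumPy/SciPy; note that the article specifies a first-order IIR notch, whereas `scipy.signal.iirnotch`, used here for convenience, designs a second-order biquad):

```python
import numpy as np
from scipy import signal

FS = 256          # sampling rate (Hz), as in the acquisition protocol
F0 = 50.0         # notch centre frequency (Hz)
BW = 4.0          # notch bandwidth (Hz), so Q = F0 / BW = 12.5

# Design the notch (second-order biquad; the article uses a first-order IIR).
b, a = signal.iirnotch(w0=F0, Q=F0 / BW, fs=FS)

# Apply it to a synthetic 12-s channel contaminated with 50 Hz interference.
rng = np.random.default_rng(0)
t = np.arange(12 * FS) / FS
eeg = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * F0 * t)
clean = signal.filtfilt(b, a, eeg)

def power_at(x, f, fs=FS):
    """Power of the FFT bin closest to frequency f."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]
```

Zero-phase filtering (`filtfilt`) is used so the notch does not distort the timing of the EEG waveform.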


  Statistical Cross-Correlation-Based Features


Frame Blocking and Frequency Band Extraction

In the feature extraction procedure, the eight-channel raw EEG signals were trimmed to 10-s segments by excluding one second at the beginning and end of each signal, to eliminate noise due to electrical interference (amplifier ON/OFF). The trimmed signals were then divided into frames of equal length (2 s, 512 samples/frame) with an overlap of 1 s (256 samples).

The first frame, therefore, comprises n = 512 samples. The second frame begins m = 256 samples later, overlapping the last n − m = 256 samples of the first frame. This frame segmentation procedure was continued until the entire EEG signal had been used as input for deriving the frequency bands.[50]
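The frame blocking above can be sketched as follows (assuming Python/NumPy; `frame_signal` is a hypothetical helper, not from the article):

```python
import numpy as np

def frame_signal(x, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping frames of frame_len samples,
    advancing hop samples per frame (2-s frames with a 1-s overlap at 256 Hz)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

# A 10-s signal at 256 Hz (2560 samples) yields 9 overlapping frames,
# matching the 9 frames/trial used later when sizing the feature set.
x = np.arange(10 * 256, dtype=float)
frames = frame_signal(x)
```

The second half of each frame is identical to the first half of the next, which is the n − m overlap described in the text.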

Frequencies above 100 Hz carry very little information about the tasks performed under the TEP protocol;[46],[51] therefore, the segmented frame signals are band-pass filtered to remove artifacts and EMG components above 100 Hz and below 0.5 Hz. The segmented frame signals are split into the following six bands: delta (δ), 0.1–4 Hz; theta (θ), 4–8 Hz; alpha (α), 8–16 Hz; beta (β), 16–32 Hz; gamma 1 (γ1), 32–64 Hz; and gamma 2 (γ2), 64–100 Hz. Band extraction is applied to each frame segmented from the eight channels, and the features are derived using the statistical cross-correlation (SCC) algorithm.
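The band splitting can be sketched as follows (illustrative only, assuming NumPy/SciPy; the article does not specify the filter family, so fourth-order Butterworth band-pass filters are used here, and the delta band is started at 0.5 Hz to match the stated 0.5 Hz lower cutoff):

```python
import numpy as np
from scipy import signal

FS = 256
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 16),
         "beta": (16, 32), "gamma1": (32, 64), "gamma2": (64, 100)}

def split_bands(frame, fs=FS):
    """Return a dict mapping band name -> band-limited copy of the frame."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = signal.sosfiltfilt(sos, frame)
    return out

frame = np.random.default_rng(0).standard_normal(512)   # one 2-s frame
bands = split_bands(frame)
```

Each frame thus yields six band-limited signals per channel, which feed the SCC feature extraction described next.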

Statistical Cross-Correlation-Based Features

Cross-correlation (r) analysis was used in this article to examine the segmented signals recorded under the TEP procedure. Cross-correlation is a template-matching tool between two input sequences that is especially prominent in identifying significant differences across neuronal action potentials.[52],[53] Because r is an effective method for extracting features in BCI studies, offering high efficiency even when the signals are affected by slight variations in electrode location,[33],[52] this methodology was applied here to determine the interrelationship between the corresponding sequences.

A Hamming window was applied to each frame in the feature extraction process, and the fast Fourier transform (FFT) was used to extract the frequency components from the time-domain sequence.[54] Accordingly, the six frequency-band signals (δ, θ, α, β, γ1, and γ2) were extracted using band-pass filters for each frame.[50] For all channels, the FFT frequencies are cross-correlated between the ith and (i + 1)th frames to determine the interrelationship between the two discrete frames. Hence, the cross-correlation sequence was determined band by band, starting from the δ-band frequencies of the ith frame and those of the (i + 1)th frame, where i represents the frame index and j represents the electrode-channel index.

The procedure is then repeated to compute the cross-correlation sequences for the remaining bands (θ, α, β, γ1, and γ2). The cross-correlated samples are used to derive four statistical measures: minimum (min[r]), mean (μ[r]), maximum (max[r]), and standard deviation (σ[r]). These feature samples characterize the distribution of the r sequence and reduce the dimensionality of the feature set.[33] As a result, 48 features (6 frequency bands × 8 channels) were obtained between each pair of frames. The features were derived in this way from each task across every trial, yielding a feature set of 6300 samples (10 participants × 10 trials × 9 frames × 7 tasks). The feature set is then used to design the classifier architecture and to recognize the tasks.

For the cross-correlation sequence rij(k), k = 1, …, N, obtained in the ith frame at the jth electrode location, the four statistical measures are:

min(r)ij = min_k rij(k)

μ(r)ij = (1/N) Σ_k rij(k)

max(r)ij = max_k rij(k)

σ(r)ij = √[(1/N) Σ_k (rij(k) − μ(r)ij)²]

where min(r) denotes the minimum, μ(r) the mean, max(r) the maximum, and σ(r) the standard deviation of the r sequence in the ith frame of the jth electrode location.
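One SCC feature computation can be sketched as follows (Python/NumPy assumed; `scc_features` is a hypothetical helper): the Hamming-windowed FFT magnitudes of two consecutive band-limited frames are cross-correlated, and the resulting sequence is summarized by the four statistics.

```python
import numpy as np

def scc_features(band_frame_i, band_frame_next):
    """Cross-correlate the Hamming-windowed FFT magnitudes of two
    consecutive band-limited frames and return (min, mean, max, std)
    of the cross-correlation sequence r."""
    w = np.hamming(len(band_frame_i))
    fi = np.abs(np.fft.rfft(band_frame_i * w))
    fj = np.abs(np.fft.rfft(band_frame_next * w))
    r = np.correlate(fi, fj, mode="full")
    return np.min(r), np.mean(r), np.max(r), np.std(r)

rng = np.random.default_rng(0)
feats = scc_features(rng.standard_normal(512), rng.standard_normal(512))
```

Applied per band and per channel, this yields the 6 bands × 8 channels = 48 features per frame pair described above.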

The procedures to implement the segmentation of frequency bands and cross-correlation features are illustrated in [Figure 2].
Figure 2: A flowchart procedure for the segmentation of frequency bands and cross-correlation feature




  Classification of Thought-Evoked Potential Tasks Using Online Sequential-Extreme Learning Machine Algorithm


ELMs are pattern recognition techniques built on SLFNs that provide fast training, flexibility in implementation, and reduced manual intervention (Huang, 2004).[38] Over the years, ELM has developed into a potentially promising technique for BCI technologies.[37],[38],[39] Liang et al.[40] introduced an effective OS-ELM, which processes feature vectors sequentially and updates the existing model as new samples arrive. The algorithm can learn one-by-one or chunk-by-chunk, with fixed or varying chunk size.[40] OS-ELM comprises two stages: an initialization phase and a sequential learning phase. The statistical features derived from the cross-correlation analysis were therefore evaluated with the OS-ELM algorithm used in this study.[55]
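The two OS-ELM phases can be sketched as follows (a hypothetical, minimal NumPy implementation of the usual OS-ELM update equations; the class name and parameters are illustrative, and a small ridge term is added in the initialization for numerical stability):

```python
import numpy as np

class OSELM:
    """Single-hidden-layer network trained in two phases: an initialization
    phase on a first chunk, then recursive least-squares style updates
    for each new chunk of samples."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1, 1, (n_in, n_hidden))  # fixed random input weights
        self.b = rng.uniform(-1, 1, n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))        # output weights (learned)
        self.P = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_phase(self, X0, T0):
        H0 = self._hidden(X0)
        self.P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(H0.shape[1]))
        self.beta = self.P @ H0.T @ T0

    def update(self, Xk, Tk):
        # Sequential phase: fold a new chunk (Xk, Tk) into P and beta.
        Hk = self._hidden(Xk)
        K = np.linalg.inv(np.eye(Hk.shape[0]) + Hk @ self.P @ Hk.T)
        self.P -= self.P @ Hk.T @ K @ Hk @ self.P
        self.beta += self.P @ Hk.T @ (Tk - Hk @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy run: 48 features in, 7 one-hot task classes out (synthetic data).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 48))
T = np.eye(7)[rng.integers(0, 7, 100)]
net = OSELM(48, 120, 7)
net.init_phase(X[:60], T[:60])
net.update(X[60:], T[60:])
```

Only the output weights `beta` are learned; the random input weights stay fixed, which is what makes ELM training fast.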

In this analysis, a simple WNS model was implemented employing OS-ELM for multiclass pattern recognition. The feature set for each statistical measure is processed and linked with the tasks (6300 × 48 features). The feature vectors are normalized using the bipolar normalization approach shown in Eq. 5, which transforms the dataset into the range −0.9 to 0.9 (Sivanandam, 2009).[56] Each feature set (min[r], μ[r], max[r], and σ[r]) was partitioned into training and testing sets based on the 5-fold cross-validation method.[57] The training set has 5040 × 48 samples (80% of the master dataset) and the test set has the remaining 1260 × 48 samples (20%) for recognizing the TEP tasks.[58] The OS-ELM architecture was formed with 48 neurons in the input layer and 7 neurons in the output layer. The hyperbolic tangent (tanh) activation function, shown in Eq. 6, was used to activate the output layer. To improve performance, the number of hidden neurons was increased linearly from 30 to 1400, and the architecture was trained over ten trials with each feature set. The initial chunk was configured to be equal to the number of samples in the training data. [Figure 3] shows the developed architecture's average training and testing performance across ten trials with varying numbers of hidden neurons.
Figure 3: Comparison of training and testing accuracy executed during the training of online sequential-extreme learning machine algorithm models



The maximum number of hidden neurons, 1400, was chosen based on the highest classification rate; the performance rate increased substantially as the number of hidden neurons approached 1400.



In Eq. 5, Sij is the normalized input sample in the ith row and jth column; in Eq. 6, Gij is the normalized output sample in the ith row and jth column.
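Since the equations themselves are not reproduced here, the following is an illustrative reconstruction of the pair (an assumption: the exact forms in the article may differ): a column-wise bipolar min–max scaling into [−0.9, 0.9] for Eq. 5, and the tanh activation for Eq. 6.

```python
import numpy as np

def bipolar_normalize(S, lo=-0.9, hi=0.9):
    """Column-wise min-max scaling of a feature matrix into [lo, hi]
    (a common bipolar normalization; assumed form of Eq. 5)."""
    smin = S.min(axis=0)
    smax = S.max(axis=0)
    return lo + (hi - lo) * (S - smin) / (smax - smin)

def tanh_activation(z):
    """Hyperbolic tangent activation (Eq. 6): (e^z - e^-z)/(e^z + e^-z)."""
    return np.tanh(z)

S = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
G = bipolar_normalize(S)
```

Scaling each feature column into the same bounded range keeps no single feature from dominating the randomly weighted hidden layer.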

For each subset, the OS-ELM architectures were trained with ten random weight initializations. In each fold, the network was trained with 8/10 of the feature set, and the recognition rate was measured with the remaining 2/10 subset. Kohavi, 1995, proposed this cross-validation method for evaluating classifiers developed from different sets of derived features.[58] The classifiers developed in this experiment were therefore validated with each 2/10 feature subset over 10 different trials. [Table 1] gives the performance of the OS-ELM classifier models under the 5-fold cross-validation approach: training time (seconds), testing time (seconds), training accuracy (%), and testing accuracy (%).
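The 8/10 vs. 2/10 partitioning can be sketched as follows (assuming NumPy; `five_fold_indices` is a hypothetical helper). With 6300 samples, each fold trains on 5040 and tests on the held-out 1260:

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Yield (train, test) index arrays for 5-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

splits = list(five_fold_indices(6300))
train, test = splits[0]
```

Each of the five folds holds out a different, disjoint 20% of the samples, so every sample is tested exactly once across the folds.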
Table 1: Comparison of wheelchair navigation system classification using statistical features and online sequential-extreme learning machine algorithm




  Results and Discussion


In this work, raw EEG signals are preprocessed and blocked into multiple frame samples. The frame samples are then used to extract the δ, θ, α, β, γ1, and γ2 band frequencies. The statistical features are obtained through the correlation between two sequential frequency-band frames, and the features are linked to a specific TEP task. The derived features are classified with the OS-ELM algorithm. The statistical analysis of the r features and the recognition rate of the system implementations are presented in [Table 1]. [Figure 4] and [Figure 5] show the overall training and testing accuracy attained for the feature sets.
Figure 4: Overall training accuracy obtained during training and testing the feature sets

Figure 5: Overall testing accuracy obtained during training and testing the feature sets



From [Figure 4], the OS-ELM models reach minimum average training accuracies of σ(r) – 39.91%, min(r) – 26.39%, max(r) – 48.30%, and μ(r) – 44.05%, and maximum average training accuracies of σ(r) – 90.69%, min(r) – 92.13%, max(r) – 94.09%, and μ(r) – 95.92%. The μ(r)-based classifier model thus achieves the highest average training accuracy (95.92%), and σ(r) the lowest (90.69%).

From [Figure 5], the OS-ELM models reach minimum average testing accuracies of σ(r) – 20.91%, min(r) – 11.80%, max(r) – 23.30%, and μ(r) – 22.48%, and maximum average testing accuracies of σ(r) – 84.32%, min(r) – 85.74%, max(r) – 89.25%, and μ(r) – 91.93%. Thus, μ(r) reaches the highest average recognition accuracy (91.93%), and σ(r) the lowest (84.32%). [Figure 3] and [Figure 6] similarly compare the training and testing performance during OS-ELM modeling and the total training time attained during OS-ELM training using μ(r).
Figure 6: The mean maximum training time obtained during the training of online sequential-extreme learning machine algorithm using μ(r)



From [Figure 3], the highest training accuracy of 95.92% and the highest testing accuracy of 91.93% were attained when the network was trained with 1399 hidden neurons. As shown in [Figure 3], the network's performance increased gradually as the number of hidden neurons was raised linearly. The analysis suggests that the feature set supports a robust classification system based on OS-ELM learning.

From [Table 1], the OS-ELM models have average training times of σ(r) – 60 ms, min(r) – 30 ms, max(r) – 140 ms, and μ(r) – 80 ms, and average testing times of σ(r) – 10,940 ms, min(r) – 14,210 ms, max(r) – 20,890 ms, and μ(r) – 36,610 ms. Thus, μ(r) requires the longest average testing time (36,610 ms), while min(r) requires the shortest average training time (30 ms).

Confusion Matrix for the Classification of Seven Tasks

The confusion matrix is a visualization method that reports the classifier's actual outputs against its predictions. The OS-ELM confusion matrix has a maximum recognition rate of 91.93% using μ(r) features, as shown in [Table 2]. From [Table 2], the "LEFT" task has the lowest recognition rate of 89.53% and the "HELP" task has the highest recognition rate of 95.81%. The "RELAX" task has the lowest false recognition rate of 41.18% (7/17 samples) and the "RIGHT" task has the highest false recognition rate of 76.47% (13/17 samples). The overall accuracy over the total number of predictions achieved by the proposed model is 91.94%. These results illustrate that the WNS with communication aid, using the proposed seven tasks together with the SCC-based features, shows promising performance. To validate the proposed algorithm in customized mode, the data acquired from the ten normal controls were used for classification. The proposed cross-correlation technique based on the different frequency bands was used to extract the μ(r) features from two consecutive frame samples, which were classified using the OS-ELM classifier. [Figure 7] depicts the comparison of average classification accuracy using μ(r) features in customized mode.
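The confusion-matrix bookkeeping can be sketched as follows (synthetic labels; NumPy assumed — the 10% corruption rate and helper name are illustrative, not the article's data):

```python
import numpy as np

TASKS = ["LEFT", "FORWARD", "RIGHT", "YES", "NO", "HELP", "RELAX"]

def confusion_matrix(y_true, y_pred, n_classes=len(TASKS)):
    """Rows are actual classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, 200)
y_pred = y_true.copy()
flip = rng.random(200) < 0.1                 # corrupt ~10% of predictions
y_pred[flip] = rng.integers(0, 7, int(flip.sum()))
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()           # overall recognition rate
```

The diagonal holds correct recognitions; per-task recognition rates come from dividing each diagonal entry by its row sum.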
Table 2: Confusion matrix for maximum classification accuracy of 91.93% using μ(r) feature set and online sequential-extreme learning machine algorithm classifier

Figure 7: The comparison of mean classification accuracy, using μ(r) features in customized modes



The results show that an average recognition rate of 92.59% was achieved. From [Figure 6], participant 9 has the highest classification rate (93.45%) and participant 8 the lowest (86.25%) under the customized classification system. The results from the generalized and customized classification systems suggest that OS-ELM for the WNS tasks attains the least training time with a dependable recognition rate when compared with other nonparametric pattern recognition methods reported in the literature.[10],[27],[28],[33],[59]


Conclusion


In accordance with the objectives of this analysis, a simple wheelchair navigation system with communication assistance (simulation model) has been established using cross-correlation features and the OS-ELM pattern recognition algorithm. The trained OS-ELM model representing the classification system can also be used to control the wheelchair or to choose isolated words in an odd-ball paradigm for hardware implementation. The developed classification system has shown promising results for the signals obtained from normal controls, indicating that the seven tasks introduced in the protocol are easy to memorize and execute. It can therefore be expected that these tasks are convenient for the DE community to grasp and perform for wheelchair navigation and letter/word selection in an odd-ball paradigm. The results indicate a robust classification rate for the proposed feature extraction algorithm; the recognition of tasks reflects the ability of the participant, as described in [Table 2], and classification performance differs across participants, as shown in [Figure 6]. The obtained results indicate a minimal misclassification of 4.54% (229/5040 samples) during the training stage and 8.96% (120/1340 samples) during the testing stage. For comparison, Lawhern et al., 2018,[26] used the Morlet wavelet approach to analyze EEG signals (0-40 Hz). In the present research, the protocol was specifically designed to elicit the tasks linked to the objectives of this work; the EEG signals are therefore evaluated over six different frequency bands, as each frequency band carries meaningful information related to brain function. For the classification of WNS tasks, the selection of frequency bands is thus important to obtain more discriminating and dominant features.[26] The reported results were based on the mean and standard error of classification performance for different paradigms.
In this study, the cross-correlation features extracted from two consecutive frames of each frequency band are used for classification with the OS-ELM pattern recognition algorithm. The overall performance of the designed classifiers was compared on overall training time, testing accuracy, and misclassification error. The results are robust, and the time taken to train the network on the acquired database was also short.
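The statistical cross-correlation (SCC) feature extraction described above can be sketched as follows. The function name, frame length, and use of `np.correlate` as the cross-correlation estimator are our own assumptions; the paper applies this per channel and per frequency band.

```python
import numpy as np

def scc_features(frame_a, frame_b):
    """Statistical cross-correlation features from two consecutive frames
    of one band-limited EEG channel (a sketch of the SCC-based features;
    the function name is ours)."""
    # Full cross-correlation sequence between the consecutive frames
    r = np.correlate(frame_a, frame_b, mode="full")
    # The four statistical measures used as the feature set:
    # min(r), mu(r), max(r), sigma(r)
    return np.array([r.min(), r.mean(), r.max(), r.std()])

# Example: two hypothetical 1-second frames (assumed fs = 256 Hz) of one band
rng = np.random.default_rng(0)
f1 = rng.standard_normal(256)
f2 = rng.standard_normal(256)
feats = scc_features(f1, f2)   # 4 features per channel per band
```

Stacking these four measures across all channels and all six frequency bands yields the μ(r), σ(r), min(r), and max(r) feature sets compared in the results.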

This study offers the DE community several potential applications and enhancements in WNS compared with related works such as EEG-based communication systems[17],[18],[19],[21],[60],[61] and EEG-based navigation systems.[4],[10],[22],[24] Those BCIs were developed using multi-electrode brainwave data-capturing devices (up to 16 electrodes) to recognize only two to five tasks, and their overall recognition levels vary from 40% to 80% on databases of 5-23 subjects and patients. Further development is still needed: future analyses should consider feature extraction based on connectivity features, statistical features, and power connectivity features,[62],[63] deep learning algorithms, and interactive training and testing modules to enhance the recognition of tasks used in the WNS system. Finally, it would also be useful to explore the properties of brain patterns in the spatial and frequency domains, together with alternative feature extraction and classification techniques.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.


Biographies




Sathees Kumar Nataraj is currently working as Assistant Professor (Grade 3) in the Department of Mechatronics Engineering, AMA International University, Bahrain. Before joining AMA International University, he worked as Senior Assistant Professor at Madanapalle Institute of Technology & Science, Andhra Pradesh, from August 2016 to December 2018. He received his Ph.D. and Master of Science in Mechatronic Engineering from Universiti Malaysia Perlis, Malaysia, and his Bachelor of Engineering in Mechatronic Engineering from K. S. Rangaswamy College of Technology, India. His research interests include biomedical signal processing, artificial intelligence, neural networks, and fuzzy systems.

Email: [email protected]



Paulraj Murugesa Pandiyan is currently Principal of Sri Ramakrishna Institute of Technology, Coimbatore, Tamil Nadu, India. He previously worked as Professor in the School of Mechatronic Engineering, Universiti Malaysia Perlis, Malaysia, and has also worked at Universiti Malaysia Sabah, Kota Kinabalu, and the Government College of Technology. He holds a PhD in Computer Science and carries 32 years of teaching experience and more than a decade of research and supervisory experience in the field of neural networks. He has published more than 400 technical papers in refereed journals and national and international conferences in the field of neural networks, and has guided, and continues to guide, many Masters and PhD students. He is enthusiastic, an eclectic thinker, simple, and God-loving, and is blessed with a caring wife and a loving son.

Email: [email protected]



Sazali Bin Yaacob received his BEng in Electrical Engineering from the University of Malaya, his MSc in System Engineering from the University of Surrey, and his PhD in Control Engineering from the University of Sheffield, United Kingdom. He is currently a Professor in the Department of Electrical Engineering, Universiti Kuala Lumpur Malaysian Spanish Institute, and head of the Intelligent Automotive Systems Research Cluster, which focuses on signal processing, driver behaviour, energy management, and related research. His previous appointments include Associate Professor and Dean of the School of Engineering and Information Technology, Universiti Malaysia Sabah, Kota Kinabalu, Sabah; Professor and Dean of the School of Mechatronic Engineering, Universiti Malaysia Perlis; and Deputy Vice-Chancellor (Academic & International), Universiti Malaysia Perlis. He has published 4 books, 159 journal papers, 215 international conference papers, 23 national conference papers, and 14 other publications. His research interests are in artificial intelligence applications in the fields of acoustics, vision, and robotics. In 2005, his journal paper on intelligent vision was awarded the Sir Thomas Ward Memorial Prize by the Institution of Engineers (India), and his work on a robotic force sensor and a navigation aid for the visually impaired received medals at national and international exhibitions, respectively. He received Chartered Engineer status from the Engineering Council, United Kingdom, in 2005 and is a member of the IET (UK).

Email: [email protected]



Abdul Hamid Bin Adom is currently a Professor in the Mechatronic Engineering Program (RK24), School of Mechatronic Engineering, Universiti Malaysia Perlis. He also works with the Centre of Excellence for Advanced Sensors Technology (CEASTech) and the UniMAP Automotive Racing Team (UniART). He received his BE, MSc, and PhD from LJMU, UK. His research interests include neural networks, system modeling and control, system identification, electronic nose/tongue, and mobile robots. He holds various research grants and has published several research papers, with an h-index of 19 and an i10-index of 42. His current research has ventured into artificial intelligence, robotics, brain-machine interfaces, multi-modality sensing, and human-mimicking electronic sensory systems, such as the electronic nose and tongue, for agricultural and environmental applications.

Email: [email protected]



 
References

1. Lacomis D, Petrella JT, Giuliani MJ. Causes of neuromuscular weakness in the intensive care unit: A study of ninety-two patients. Muscle Nerve 1998;21:610-7.
2. Lees AJ, Blackburn NA, Campbell VL. The nighttime problems of Parkinson's disease. Clin Neuropharmacol 1988;11:512-9.
3. Chávez DL, Cruz JR, Avilés C. Mouse pointer controlled by ocular movements. In: Proceedings of the 7th WSEAS International Conference on Computational Intelligence, Man-Machine Systems and Cybernetics. World Scientific and Engineering Academy and Society; 2008. p. 11-8.
4. Pfurtscheller G, Muller-Putz GR, Scherer R, Neuper C. Rehabilitation with brain-computer interface systems. Computer 2008;41:58-65.
5. Birbaumer N, Cohen LG. Brain-computer interfaces: Communication and restoration of movement in paralysis. J Physiol 2007;579:621-36.
6. Mohammadi Z, Frounchi J, Amiri M. Wavelet-based emotion recognition system using EEG signal. Neural Comput Appl 2017;28:1985-90.
7. Barua S, Ahmed MU, Ahlström C, Begum S. Automatic driver sleepiness detection using EEG, EOG and contextual information. Expert Syst Appl 2019;115:121-35.
8. Fazli S, Mehnert J, Steinbrink J, Curio G, Villringer A, Müller KR, et al. Enhanced performance by a hybrid NIRS-EEG brain computer interface. Neuroimage 2012;59:519-29.
9. Grierson LE, Zelek J, Lam I, Black SE, Carnahan H. Application of a tactile way-finding device to facilitate navigation in persons with dementia. Assist Technol 2011;23:108-15.
10. Kaufmann T, Herweg A, Kübler A. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials. J Neuroeng Rehabil 2014;11:7.
11. Lazarou I, Nikolopoulos S, Petrantonakis PC, Kompatsiaris I, Tsolaki M. EEG-based brain-computer interfaces for communication and rehabilitation of people with motor impairment: A novel approach of the 21st century. Front Hum Neurosci 2018;12:14.
12. Liu D, Liu C, Hong B. Bi-directional visual motion based BCI speller. In: 2019 9th International IEEE/EMBS Conference on Neural Engineering; 2019. p. 589-92.
13. Lotte F, Congedo M, Lécuyer A, Lamarche F, Arnaldi B. A review of classification algorithms for EEG-based brain-computer interfaces. J Neural Eng 2007;4:R1.
14. McFarland DJ, Miner LA, Vaughan TM, Wolpaw JR. Mu and beta rhythm topographies during motor imagery and actual movements. Brain Topogr 2000;12:177-86.
15. Moghimi S, Kushki A, Guerguerian AM, Chau T. A review of EEG-based brain-computer interfaces as access pathways for individuals with severe disabilities. Assist Technol 2013;25:99-110.
16. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol 2002;113:767-91.
17. Suppes P, Lu ZL, Han B. Brain wave recognition of words. Proc Natl Acad Sci U S A 1997;94:14965-9.
18. Birbaumer N, Kübler A, Ghanayim N, Hinterberger T, Perelmouter J, Kaiser J, et al. The thought translation device (TTD) for completely paralyzed patients. IEEE Trans Rehabil Eng 2000;8:190-3.
19. Porbadnigk A, Wester M, Calliess J, Schultz T. EEG-based speech recognition: Impact of temporal effects. In: 2nd International Conference on Bio-Inspired Systems and Signal Processing (BIOSIGNALS); 2009.
20. Guenther FH, Brumberg JS. Brain-machine interfaces for real-time speech synthesis. Conf Proc IEEE Eng Med Biol Soc 2011;2011:5360-3.
21. Salama M, Elsherif L, Lashin H, Gamal T. Recognition of unspoken words using electrode electroencephalographic signals. In: The Sixth International Conference on Advanced Cognitive Technologies and Applications; 2014. p. 51-5.
22. Tanaka K, Matsunaga K, Wang HO. Electroencephalogram-based control of an electric wheelchair. IEEE Trans Robot 2005;21:762-6.
23. Müller-Putz GR, Pfurtscheller G. Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans Biomed Eng 2008;55:361-4.
24. Leeb R, Friedman D, Müller-Putz GR, Scherer R, Slater M, Pfurtscheller G. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic. Comput Intell Neurosci 2007;2007:79642. doi:10.1155/2007/79642.
25. McFarland DJ, Wolpaw JR. EEG-based brain-computer interfaces. Curr Opin Biomed Eng 2017;4:194-200.
26. Lawhern VJ, Solon AJ, Waytowich NR, Gordon SM, Hung CP, Lance BJ. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J Neural Eng 2018;15:056013.
27. Blankertz B, Dornhege G, Krauledat M, Müller KR, Kunzmann V, Losch F, et al. The Berlin Brain-Computer Interface: EEG-based communication without subject training. IEEE Trans Neural Syst Rehabil Eng 2006;14:147-52.
28. Fabio B. Brain computer interfaces for communication and control. Front Neurosci 1900;4:767-91.
29. Lebedev MA, Nicolelis MA. Brain-machine interfaces: Past, present and future. Trends Neurosci 2006;29:536-46.
30. Vaughan TM, Wolpaw JR, Donchin E. EEG-based communication: Prospects and problems. IEEE Trans Rehabil Eng 1996;4:425-30.
31. Bhattacharyya S, Konar A, Tibarewala DN. Motor imagery, P300 and error-related EEG-based robot arm movement control for rehabilitation purpose. Med Biol Eng Comput 2014;52:1007-17.
32. Galán F, Nuttin M, Lew E, Ferrez PW, Vanacker G, Philips J, et al. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clin Neurophysiol 2008;119:2159-69.
33. Park SA, Hwang HJ, Lim JH, Choi JH, Jung HK, Im CH. Evaluation of feature extraction methods for EEG-based brain-computer interfaces in terms of robustness to slight changes in electrode locations. Med Biol Eng Comput 2013;51:571-9.
34. Jayalakshmi T, Santhakumaran A. Statistical normalization and back propagation for classification. Int J Comput Theory Eng 2011;3:1793-8201.
35. Almahasneh HS, Kamel N, Malik AS, Wlater N, Chooi WT. EEG based driver cognitive distraction assessment. In: 2014 5th International Conference on Intelligent and Advanced Systems; 2014.
36. Shojaedini SV, Morabbi S, Keyvanpour M. A new method for detecting P300 signals by using deep learning: Hyperparameter tuning in high-dimensional space by minimizing nonconvex error function. J Med Signals Sens 2018;8:205-14.
37. Huang GB, Zhou H, Ding X, Zhang R. Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B Cybern 2012;42:513-29.
38. Huang GB, Zhu Q, Siew C. Extreme learning machine: A new learning scheme of feedforward neural networks. In: IEEE International Joint Conference on Neural Networks 2004;2:985-90.
39. Shi LC, Lu BL. EEG-based vigilance estimation using extreme learning machines. Neurocomputing 2013;102:135-43.
40. Liang NY, Huang GB, Saratchandran P, Sundararajan N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 2006;17:1411-23.
41. Nataraj SK, Paulraj MP, Bin YS, Adom AH. Thought controlled IRCC using cross-correlation of different frequency band sequence. In: 2016 3rd International Conference on Advanced Computing and Communication Systems 2016;01:1-6.
42. Guger C, Allison B, Edlinger G. Brain-computer interface research: A state-of-the-art summary. In: Brain-Computer Interface Research. Cham, Switzerland: Springer; 2019. p. 1-9.
43. Donchin E, Spencer KM, Wijesinghe R. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Trans Rehabil Eng 2000;8:174-9.
44. Niedermeyer E, da Silva FH. Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. New York: Lippincott Williams and Wilkins; 2005.
45. Ortner R, Grünbacher E, Guger C. State of the art in sensors, signals and signal processing; 2013.
46. Teplan M. Fundamentals of EEG measurement. Measurement Sci Rev 2002;2:1-11.
47. Nataraj SK, Yaacob SB, Paulraj MP, Adom AH. EEG based intelligent robot chair with communication aid using statistical cross correlation based features. In: IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Belfast, UK; 2014. p. 12-8.
48. Aurup GM, Akgunduz A. Pair-wise preference comparisons using alpha-peak frequencies. J Integr Des Process Sci 2012;16:3-18.
49. Nataraj SK, Paulraj MP, Bin Yaacob S, Adom AH. Statistical cross-correlation band features based thought controlled communication system. AI Commun 2016;29:497-511.
50. Kiymik MK, Güler I, Dizibüyük A, Akin M. Comparison of STFT and wavelet transform methods in determining epileptic seizure activity in EEG signals for real-time application. Comput Biol Med 2005;35:603-16.
51. Webster JG. Medical instrumentation: Application and design. J Clin Eng 1978;3:306.
52. Siuly S, Li Y. Improving the separability of motor imagery EEG signals using a cross correlation-based least square support vector machine for brain-computer interface. IEEE Trans Neural Syst Rehabil Eng 2012;20:526-38.
53. Zygierewicz J, Mazurkiewicz J, Durka PJ, Franaszczuk PJ, Crone NE. Estimation of short-time cross-correlation between frequency bands of event related EEG. J Neurosci Methods 2006;157:294-302.
54. Jervis BW, Coelho M, Morgan GW. Spectral analysis of EEG responses. Med Biol Eng Comput 1989;27:230-8.
55. Liang NY, Saratchandran P, Huang GB, Sundararajan N. Classification of mental tasks from EEG signals using extreme learning machine. Int J Neural Syst 2006;16:29-38.
56. Sivanandam M. Introduction to Artificial Neural Networks. India: Vikas Publishing House Pvt. Ltd.; 2003.
57. Fahimi F, Zhang Z, Goh WB, Lee TS, Ang KK, Guan C. Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI. J Neural Eng 2019;16:026007.
58. Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In: International Joint Conference on Artificial Intelligence 1995;14:1137-45.
59. Hema CR, Paulraj MP, Yaacob S, Adom AH, Nagarajan R. Asynchronous brain machine interface-based control of a wheelchair. Adv Exp Med Biol 2011;696:565-72.
60. Brumberg JS, Kennedy PR, Guenther FH. Artificial speech synthesizer control by brain-computer interface. In: INTERSPEECH; 2009. p. 636-9.
61. Guenther FH, Brumberg JS, Wright EJ, Nieto-Castanon A, Tourville JA, Panko M, et al. A wireless brain-machine interface for real-time speech synthesis. PLoS One 2009;4:e8218.
62. Lahane P, Jagtap J, Inamdar A, Karne N, Dev R. A review of recent trends in EEG based brain-computer interface. In: 2019 International Conference on Computational Intelligence in Data Science; 2019. p. 1-6.
63. Mohammad M, Hussen HM. The state of the art in feature extraction methods for EEG classification. UHD J Sci Technol 2019;3:16-23.


    Figures

  [Figure 1], [Figure 2], [Figure 3], [Figure 4], [Figure 5], [Figure 6], [Figure 7]
 
 
    Tables

  [Table 1], [Table 2]



 
