<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2014.00199</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Online transcranial Doppler ultrasonographic control of an onscreen keyboard</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Lu</surname> <given-names>Jie</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/97373"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Mamun</surname> <given-names>Khondaker A.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/72421"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Chau</surname> <given-names>Tom</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/67999"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital</institution> <country>Toronto, ON, Canada</country></aff>
<aff id="aff2"><sup>2</sup><institution>Institute of Biomaterials and Biomedical Engineering, University of Toronto</institution> <country>Toronto, ON, Canada</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Srikantan S. Nagarajan, University of California, San Francisco, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Srikantan S. Nagarajan, University of California, San Francisco, USA; Theresa M. Vaughan, New York State Department of Health, USA</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Tom Chau, Holland Bloorview Kids Rehabilitation Hospital, 150 Kilgour Road, Toronto, ON M4G 1R8, Canada e-mail: <email>tom.chau&#x00040;utoronto.ca</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to the journal Frontiers in Human Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>04</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>199</elocation-id>
<history>
<date date-type="received">
<day>10</day>
<month>06</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>03</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Lu, Mamun and Chau.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Brain-computer interface (BCI) systems exploit brain activity for generating a control command and may be used by individuals with severe motor disabilities as an alternative means of communication. An emerging brain monitoring modality for BCI development is transcranial Doppler ultrasonography (TCD), which facilitates the tracking of cerebral blood flow velocities associated with mental tasks. However, TCD-BCI studies to date have exclusively been offline. The feasibility of a TCD-based BCI system hinges on its online performance. In this paper, an online TCD-BCI system was implemented, bilaterally tracking blood flow velocities in the middle cerebral arteries for system-paced control of a scanning keyboard. Target letters or words were selected by repetitively rehearsing the spelling while imagining the writing of the intended word, a left-lateralized task. Undesired letters or words were bypassed by performing visual tracking, a non-lateralized task. The keyboard scanning period was 15 s. With 10 able-bodied right-handed young adults, the two mental tasks were differentiated online using a Na&#x000EF;ve Bayes classification algorithm and a set of time-domain, user-dependent features. The system achieved an average specificity and sensitivity of 81.44 &#x000B1; 8.35 and 82.30 &#x000B1; 7.39%, respectively. The level of agreement between the intended and machine-predicted selections was moderate (&#x003BA; &#x0003D; 0.60). The average information transfer rate was 0.87 bits/min with an average throughput of 0.31 &#x000B1; 0.12 character/min. These findings suggest that an online TCD-BCI can achieve reasonable accuracies with an intuitive language task, but with modest throughput. Future interface and signal classification enhancements are required to improve communication rate.</p></abstract>
<kwd-group>
<kwd>TCD</kwd>
<kwd>BCI</kwd>
<kwd>middle cerebral artery</kwd>
<kwd>hemodynamic response</kwd>
<kwd>lateralization</kwd>
<kwd>communication</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="1"/>
<equation-count count="8"/>
<ref-count count="44"/>
<page-count count="11"/>
<word-count count="7857"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Individuals who are cognitively aware but living with severe motor disabilities such as muscular dystrophy, multiple sclerosis, high-level spinal cord injuries or locked-in syndrome may not be able to use conventional means of expression such as speech and gestures for communication. Brain-computer interface (BCI) systems offer an alternative means of communication for these individuals (Tai et al., <xref ref-type="bibr" rid="B37">2008</xref>). BCI systems enable users to generate a control command through mental activity alone (Tai et al., <xref ref-type="bibr" rid="B37">2008</xref>). Many portable brain monitoring modalities have been explored for BCI development. The majority of systems have used electroencephalography (EEG) (Wolpaw et al., <xref ref-type="bibr" rid="B44">2002</xref>), while hemodynamic-based monitoring modalities such as near-infrared spectroscopy (NIRS) (Sitaram et al., <xref ref-type="bibr" rid="B34">2009</xref>; Falk et al., <xref ref-type="bibr" rid="B13">2011</xref>) and transcranial Doppler (TCD) ultrasonography (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>) are emerging BCI alternatives. The cerebral hemodynamic response is inherently slower than the corresponding electrical response measured using EEG. In fact, there is a hemodynamic delay of 5&#x02013;10 s between the onset of mental activation and the manifestation of blood flow velocity changes (Harders et al., <xref ref-type="bibr" rid="B18">1989</xref>; Szirmai et al., <xref ref-type="bibr" rid="B36">2005</xref>). However, hemodynamic monitoring systems are not prone to electrogenic artifacts arising from muscle contractions or eye movements. In particular, TCD-based systems have recently demonstrated high accuracies in offline studies (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>; Aleem and Chau, <xref ref-type="bibr" rid="B3">2013</xref>).</p>
<p>TCD is a non-invasive ultrasound technology that detects the changes in cerebral blood flow velocity (CBFV). It was first introduced as a medical imaging device in 1982, and has been widely applied clinically (Aaslid et al., <xref ref-type="bibr" rid="B2">1982</xref>) for the detection of increased intracranial pressure in neurocritical care, evaluation of subarachnoid haemorrhage, detection of microembolism, and monitoring of cerebral circulation during cardiopulmonary bypass (White and Venkatesh, <xref ref-type="bibr" rid="B42">2006</xref>; Sarkar et al., <xref ref-type="bibr" rid="B32">2007</xref>; Tsivgoulis et al., <xref ref-type="bibr" rid="B39">2009</xref>; Reinsfelt et al., <xref ref-type="bibr" rid="B30">2012</xref>).</p>
<p>TCD has recently been used as a functional brain imaging tool to examine the effects of mental tasks on the blood flow velocities. In particular, functional TCD studies have focused on the middle cerebral arteries (MCAs), which perfuse 80% of the brain, and thus measurements of velocities therein reflect cognitive effort levels (Vingerhoets and Stroobant, <xref ref-type="bibr" rid="B40">1999</xref>; Stroobant and Vingerhoets, <xref ref-type="bibr" rid="B35">2000</xref>). Blood flow lateralization elicited by mental tasks, such as verbal fluency and visuospatial tasks, has been detected using TCD in many studies (Aaslid, <xref ref-type="bibr" rid="B1">1987</xref>; Vingerhoets and Stroobant, <xref ref-type="bibr" rid="B40">1999</xref>; Stroobant and Vingerhoets, <xref ref-type="bibr" rid="B35">2000</xref>; Haag et al., <xref ref-type="bibr" rid="B17">2009</xref>; Whitehouse et al., <xref ref-type="bibr" rid="B43">2009</xref>). Blood flow lateralization is due to the coupling between the cerebral blood flow and oxidative metabolism (Buxton and Lawrence, <xref ref-type="bibr" rid="B8">1997</xref>). The left hemisphere of the brain exhibits augmented blood flow velocity during verbal fluency tasks while the right hemisphere demonstrates heightened activation during visuospatial tasks (Vingerhoets and Stroobant, <xref ref-type="bibr" rid="B40">1999</xref>).</p>
<p>Recent functional TCD-BCI studies have reported promising rates of classifying different mental states (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>; Aleem and Chau, <xref ref-type="bibr" rid="B3">2013</xref>; Faress and Chau, <xref ref-type="bibr" rid="B14">2013</xref>). Myrden et al. (<xref ref-type="bibr" rid="B29">2011</xref>) first introduced TCD as a BCI measurement modality and discriminated between word generation and rest (average accuracy of 82.9 &#x000B1; 10.5%) and between mental rotation and rest (85.7 &#x000B1; 10%) in 9 able-bodied adults using 45 s task periods. The authors later followed up with a 3-class offline BCI, discerning among word generation, mental rotation and unconstrained rest with over 70% accuracy and reaching transmission rates of 1.2 bits per min (Myrden et al., <xref ref-type="bibr" rid="B28">2012</xref>). Subsequently, in a study of 18 adults, Aleem and Chau (<xref ref-type="bibr" rid="B3">2013</xref>) reduced the task period to 18 s and classified successive left and right lateralizations offline in a user-independent framework with accuracies up to 74.6 &#x000B1; 12.6%. Most recently, in an offline TCD-NIRS-BCI study, Faress and Chau (<xref ref-type="bibr" rid="B14">2013</xref>) achieved an average accuracy of 76.1 &#x000B1; 9.9% in the automatic differentiation between pre- and post-verbal fluency hemodynamics. Collectively, these past offline TCD-BCI studies have shown that language (e.g., verbal fluency) and spatial tasks (e.g., mental rotation) elicit machine-discernible lateralizations in cerebral blood flow velocities in the MCAs, with time intervals as short as 18 s. The fundamental challenge of TCD-BCIs remains their relatively low throughput. Further, the viability of an online TCD-BCI has yet to be demonstrated.</p>
<p>In light of the above, the aim of the present study was to ascertain the achievable accuracy and throughput of communication with an online TCD-BCI. In particular, we implemented an online spelling system (i.e., scanning keyboard) controlled via two mental states, namely, rest and activation. The activation task was repetitive mental spelling and imagined writing of the intended word and the rest mental task was the visual tracking of a display of TCD signals. We hypothesized that previously reported offline accuracies in excess of 80% could be replicated in the online setting using an activation task that intuitively combined language processing and right-handed motor imagery.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec>
<title>Participants</title>
<p>Thirteen able-bodied participants were recruited for this study. Participants had normal or corrected-to-normal vision, and no reported history of neurological, metabolic, respiratory, cardiovascular, or drug/alcohol-related conditions. One participant was excluded after the first session due to an inability to accurately describe the study protocol. A second participant was excluded after disclosing, post-study, a medical history that violated the inclusion criteria. A third participant was excluded due to inadequate transtemporal windows, which precluded the location of the MCAs. The ten remaining participants (aged 18&#x02013;40 years; all female) were all right-handed. All participants provided written informed consent. This study was approved by the research ethics boards of both Holland Bloorview Kids Rehabilitation Hospital and the University of Toronto.</p>
</sec>
<sec>
<title>Instrumentation</title>
<p>The Doppler spectra of blood flow velocities through the left and right MCAs were monitored using the MultiDop X-4 TCD (Compumedics Germany) and the accompanying bilateral headgear with two fixed 2 MHz ultrasonic transducers. The data were recorded at a sampling frequency of 100 Hz. The probes were positioned over the transtemporal insonation window according to an established insonation procedure (Alexandrov et al., <xref ref-type="bibr" rid="B4">2007</xref>) as seen in Figure <xref ref-type="fig" rid="F1">1</xref>.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>Sagittal (left panel) and axial (middle panel) view of the ultrasound probe set at the transtemporal insonation window, directed toward the MCA</bold>. Experimental setup (right panel) showing participant with TCD headgear and corresponding TCD spectrum.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0001.tif"/>
</fig>
<p>Ultrasound gel was applied between the probe and the user&#x00027;s skin to ensure proper signal transduction. Once the probe was placed over the transtemporal window, the TCD was turned on with an initial depth setting of 50 mm. The insonation angle and depth were then adjusted to find the bifurcation of the internal carotid artery into the middle cerebral artery (blood flowing toward the probe) and the anterior cerebral artery (blood flowing away from the probe). The insonation depth was then decreased until the maximum unidirectional flow toward the probe was detected. All participants were given 5 min breaks for every 15 min of TCD usage to provide sufficient time for probe cooling. Throughout the recording process, the cranial thermal index (TIC) of the probes did not exceed 1.5, thus avoiding discomfort or thermal injury to the participants, in accordance with the British Medical Ultrasound Society safety guidelines (Group, <xref ref-type="bibr" rid="B16">2010</xref>). The TCD device (MultiDop X-4) was approved by Health Canada&#x00027;s Medical Devices Directorate for investigational testing.</p>
</sec>
<sec>
<title>Mental tasks</title>
<p>Participants performed two mental tasks (i.e., activation and rest) throughout the study. Mental spelling accompanied by imagined writing of each letter with the right hand was used as the activation task, with the intent of eliciting left-lateralized brain activity. To restore CBFV to non-lateralized basal levels, visual tracking of a time-evolving strip chart of left and right mean CBFV (Figure <xref ref-type="fig" rid="F2">2</xref>) was used as the rest task. The participant performed mental spelling throughout each 15 s activation period and the visual tracking task throughout each 15 s rest period.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>TCD-BCI interface</bold>. The bottom graph is the dynamic feedback signal showing a 10-s segment of left and right mean CBFV. Each step of the graph represents an average of 1.3 s of CBFV data (sampling rate of 100 Hz). This display facilitated the visual tracking task for inducing the rest state.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0002.tif"/>
</fig>
<p>During the training session, participants were presented with either a single letter or multiple letters forming part of a word. Upon seeing this cue, participants were instructed to repetitively rehearse the spelling of the desired word while simultaneously imagining the writing of the word with their right hand. Likewise, participants were instructed to shift their gaze to the TCD feedback signal whenever an hourglass appeared on the screen. Both tasks were completed without any vocalization to avoid an increase in blood flow due to speech.</p>
<p>In the testing sessions, the participants used the TCD BCI to spell target words online. During these sessions, the participants were asked to perform the activation mental task when the desired letter appeared among the currently available letter choices and to perform the rest mental task when the desired letter was not displayed.</p>
</sec>
<sec>
<title>Dynamic keyboard</title>
<p>A custom on-screen keyboard was developed, based on the dynamic keyboard concept from the University of Victoria. In our implementation, each level of the keyboard hierarchy contained multiple bins, although at any given time, only one bin was displayed to minimize mental workload and user confusion. Each bin contained multiple letters or words. Figure <xref ref-type="fig" rid="F2">2</xref> depicts the user interface of the dynamic on-screen keyboard. The dynamic keyboard behaviors were governed by the following operating principles.</p>
<list list-type="order">
<list-item><p>Letters are grouped into bins on the basis of their frequencies of use in the English language. For example, the initial letter bin contains the 5 most frequently occurring first letters (t, a, s, i, o) of English words. When one or more letters have been selected, subsequent letter bins contain the set of most probable next letters.</p></list-item>
<list-item><p>Whenever a letter bin is selected, a word bin containing the most frequent words starting with the sequence of letters selected thus far is presented.</p></list-item>
<list-item><p>Whenever a word bin is selected, each word within the bin is presented sequentially.</p></list-item>
<list-item><p>When none of the letters or words in a sequence is selected, the keyboard returns to the previous level of the hierarchy.</p></list-item>
<list-item><p>Whenever a selection is made, an &#x0201C;undo&#x0201D; option is immediately presented as a means of confirming the user&#x00027;s selection. The &#x0201C;undo&#x0201D; bin also provides an opportunity to delete the most recent letter or word, upon which the interface returns to the previous level of the hierarchy.</p></list-item>
</list>
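<p>The scanning behavior described by these principles can be illustrated with a minimal sketch. The bin contents, loop limit, and undo step below are hypothetical simplifications for illustration, not the study&#x00027;s actual interface logic.</p>

```python
def scan(items, is_target, max_loops=3):
    """Present items in a bin sequentially. A True from is_target models an
    'activation' classification (select); False models 'rest' (bypass).
    If nothing is selected after max_loops passes, return None, standing in
    for the keyboard's return to the previous level of the hierarchy."""
    for _ in range(max_loops):
        for item in items:
            if is_target(item):
                return item
    return None

# Spelling "t": scan a hypothetical initial letter bin, then confirm the
# selection by bypassing the "undo" option that follows every selection.
initial_bin = ["t", "a", "s", "i", "o"]
chosen = scan(initial_bin, lambda c: c == "t")
confirmed = scan(["undo"], lambda c: False) is None  # bypassing undo = confirm
```

In the real system, the role of `is_target` was played by the online classifier&#x00027;s activation/rest decision during each 15 s scanning period.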
<p>Figure <xref ref-type="fig" rid="F3">3</xref> portrays an example of dynamic keyboard progression. For simplicity, only a subset of paths is shown. Here, the bin &#x0201C;t, a, s, i, o&#x0201D; is selected. Bypassing the &#x0201C;undo&#x0201D; bin confirms the selection. The first bin on the next level of the hierarchy contains the highest frequency words starting with one of &#x0201C;t, a, s, i, o.&#x0201D; Here, this word bin is bypassed, triggering the presentation of individual letters from the previous level of the hierarchy. The letter &#x0201C;t&#x0201D; is chosen and confirmed (bypassing undo), prompting the presentation of high frequency words starting with &#x0201C;t.&#x0201D; In this example, this word bin is selected and confirmed, resulting in the presentation of the individual words from this bin.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>An example of dynamic keyboard progression</bold>. In this example, the user has selected the letter &#x0201C;t&#x0201D; and the bin containing high frequency words &#x0201C;the-to-that-this-they.&#x0201D;</p></caption>
<graphic xlink:href="fnhum-08-00199-g0003.tif"/>
</fig>
</sec>
<sec>
<title>Experimental protocol</title>
<p>Each participant completed three sessions. At the beginning of the first session, each participant was given an information sheet highlighting the nature of each task. In addition, prior to each session, participants also received verbal instruction about how to perform the activation and rest tasks. The first session involved two training blocks and one testing block, while subsequent sessions contained one training block followed by two testing blocks. A one-minute baseline recording was obtained before each block for the purpose of normalizing data collected from the block. During baseline, participants performed the rest task. A five-minute rest period was offered between blocks.</p>
<p>For each training block, the participants performed a total of forty task segments. Each segment was either an activation or rest task. The sequence of task presentation was randomized (Figure <xref ref-type="fig" rid="F4">4</xref>). A 10 s recovery period was included after each activation task to allow the participant&#x00027;s blood flow velocities to return to baseline levels. During the recovery period, the participants performed the rest task to restore basal blood flow velocities. In the first session, participants had a 10 min break while the two blocks of training data were used to train the appropriate classifier. For sessions two and three, the 10 min break occurred after the first training block. During this break, the classifier was trained with data from the current and initial sessions. After each session, the participants&#x00027; level of fatigue was ascertained via a written survey.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Schematic diagram of the training block</bold>. The training block began with a 1-minute baseline period, followed by 40 randomized task segments. During each task segment, the screen randomly displayed either an hourglass or a letter. If a letter was presented, the participant performed the activation mental task for 15 s, followed by the rest task (visual tracking of the TCD feedback signal) for 10 s. If an hourglass was displayed instead of a letter at the beginning of the segment, the participant continued to perform the rest task for an additional 15 s.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0004.tif"/>
</fig>
<p>For each testing block, the participants were asked to spell a given target phrase to the best of their abilities using the dynamic keyboard. The participants performed the activation task only when the bin containing the intended selection was presented. If a false positive occurred, the participants were instructed to select the undo button and correct the error before continuing the spelling process. If a false negative occurred, the participants were instructed to simply wait until the keyboard looped back to the intended bin.</p>
</sec>
<sec>
<title>Data processing and classification</title>
<p>All data collected from the training blocks were used for classifier training. Therefore, for each participant, a total of forty activation data segments and forty rest data segments were used to train a user-specific classifier in session I. For each subsequent session, training data from session I (40 activation and 40 rest segments) and the session at hand (20 activation and 20 rest segments) were pooled for training (i.e., 60 activation and 60 rest data segments). Each segment was 15 s in duration. A total of forty-four features were extracted from each segment. Twenty-four features were based on the left and right CBFV signal mean, slope, standard deviation, and entropy over the following intervals: 0&#x02013;5, 5&#x02013;10, and 10&#x02013;15 s. Six features were extracted from the differences in mean and slope between the left and right signals over the same time intervals. Nine features were based on the correlation, dot product, and mutual information between the left and right signals over the aforementioned time intervals. The last five features were extracted from the left and right CBFV signal standard deviation and entropy from 0 to 15 s and the mutual information between the left and right CBFV signals over the 0 to 15 s interval. An example feature computation is presented in Figure <xref ref-type="fig" rid="F5">5</xref>. These features were chosen according to the findings of previous TCD brain lateralization studies (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>).</p>
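<p>The windowed time-domain feature set can be sketched as follows. This is an illustrative sketch only: the entropy estimator (a 10-bin amplitude histogram) is an assumption, since the exact estimator is not specified here, and the mutual-information and full-segment (0&#x02013;15 s) features are omitted for brevity.</p>

```python
import numpy as np

FS = 100  # sampling rate in Hz, as in the TCD recordings

def entropy(x, bins=10):
    # Shannon entropy of the amplitude histogram; the binning scheme is an
    # assumption, not the paper's specified estimator.
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

def segment_features(left, right):
    """Sketch of the windowed time-domain features for one 15 s segment.
    `left`/`right` are normalized mean-CBFV traces (1500 samples at 100 Hz).
    Mutual-information and 0-15 s full-segment features are omitted."""
    feats = {}
    t = np.arange(5 * FS) / FS  # time axis of one 5 s window, in seconds
    for name, sl in [("0-5", slice(0, 500)), ("5-10", slice(500, 1000)),
                     ("10-15", slice(1000, 1500))]:
        lw, rw = left[sl], right[sl]
        for side, w in (("L", lw), ("R", rw)):
            feats[f"mean_{side}_{name}"] = float(w.mean())
            feats[f"slope_{side}_{name}"] = float(np.polyfit(t, w, 1)[0])
            feats[f"std_{side}_{name}"] = float(w.std())
            feats[f"entropy_{side}_{name}"] = entropy(w)
        feats[f"mean_diff_{name}"] = float(lw.mean() - rw.mean())
        feats[f"slope_diff_{name}"] = float(np.polyfit(t, lw, 1)[0]
                                            - np.polyfit(t, rw, 1)[0])
        feats[f"corr_{name}"] = float(np.corrcoef(lw, rw)[0, 1])
        feats[f"dot_{name}"] = float(np.dot(lw, rw))
    return feats
```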
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Sample recording depicting the three most common features</bold>. (a) difference between left and right mean velocities, &#x003BC;L&#x02013;&#x003BC;R, at 10&#x02013;15 s (right graph); (b) difference between left and right mean velocities, &#x003BC;L&#x02013;&#x003BC;R, at 5&#x02013;10 s (right graph), and, (c) slope of the right MCA CBFV (mR) at 5&#x02013;10 s (left graph). Data shown are normalized and smoothed and represent one trial performed by participant 10. The left graph depicts a rest trial while the right graph portrays an activation trial, showing the difference between left and right mean CBFV at 5&#x02013;10 s and at 10&#x02013;15 s.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0005.tif"/>
</fig>
<p>Weighted sequential feature selection (WSFS) was used to algorithmically select three to five features for each session, for each participant, to train a Na&#x000EF;ve Bayes classifier. WSFS extended the sequential forward search (SFS) approach (Mamun et al., <xref ref-type="bibr" rid="B25">2012</xref>) by explicitly considering feature contributions (i.e., the number of times a specific feature was chosen). For each fold of a 10-fold cross-validation for feature selection, all features were first ranked according to F-score (Duda et al., <xref ref-type="bibr" rid="B12">2012</xref>) for interclass separability, using the training set. These ranked features were then organized into cumulative subsets such that the first subset contained the top ranked feature, the second subset contained the top two ranked features, and so on. The last subset contained all features. Within each fold, the subset with the highest validation accuracy was selected. Therefore, 10-fold cross-validation yielded 10 such subsets.</p>
<p>We enumerated the occurrence of each feature within these 10 subsets. Certain features appeared consistently across all subsets while others surfaced intermittently. The selected features were regrouped based on their frequency of occurrence, such that the <italic>m</italic>th group contained all the features that appeared at least <italic>m</italic> times, where <italic>m</italic> &#x0003D; {1, 2,&#x02026;,10}. These new subsets were evaluated through a subsequent constrained 10-fold cross-validation (i.e., only the <italic>m</italic> pre-determined feature subsets were cross-validated) with newly randomized testing and training sets. The final set of features was then selected as that with the highest average validation accuracy. The chosen features were used to train a Gaussian Na&#x000EF;ve Bayes classifier.</p>
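<p>The fold-wise ranking and occurrence-counting stage of WSFS can be sketched as below. This is an illustrative sketch under stated assumptions: the F-score here is one common between-class/within-class scatter ratio (the exact criterion follows Duda et al.), and a simple nearest-mean evaluator stands in for the Gaussian Na&#x000EF;ve Bayes classifier used in the study.</p>

```python
import numpy as np

def f_score(X, y):
    # One common two-class F-score: between-class scatter over within-class
    # scatter, per feature (an assumption standing in for the cited criterion).
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(0) - X.mean(0)) ** 2 + (b.mean(0) - X.mean(0)) ** 2
    den = a.var(0, ddof=1) + b.var(0, ddof=1)
    return num / (den + 1e-12)

def nm_eval(Xtr, ytr, Xval, yval):
    # Nearest-mean classifier as a stand-in evaluator for the sketch.
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xval - m1, axis=1)
            < np.linalg.norm(Xval - m0, axis=1)).astype(int)
    return float((pred == yval).mean())

def wsfs_counts(X, y, eval_subset, n_folds=10, seed=None):
    """For each fold: rank features by F-score on the training part, evaluate
    cumulative top-m subsets, keep the best subset, and count how often each
    feature appears across folds (the occurrence counts that WSFS regroups)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    counts = np.zeros(X.shape[1], dtype=int)
    for k in range(n_folds):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        order = np.argsort(f_score(X[train], y[train]))[::-1]
        best_acc, best_subset = -1.0, order[:1]
        for m in range(1, X.shape[1] + 1):
            subset = order[:m]
            acc = eval_subset(X[train][:, subset], y[train],
                              X[val][:, subset], y[val])
            if acc > best_acc:
                best_acc, best_subset = acc, subset
        counts[best_subset] += 1
    return counts
```

A feature that appears in all ten fold-wise subsets would belong to every group <italic>m</italic> = 1,&#x02026;,10 in the subsequent regrouping step.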
<p>Figure <xref ref-type="fig" rid="F5">5</xref> demonstrates a single trial of a rest task (left) and activation task (right). The three most common features selected across sessions and participants are highlighted. The least common of the three features (slope of the right MCA CBFV) was selected in six out of ten participants.</p>
</sec>
<sec>
<title>Performance evaluation</title>
<p>To capture the different nuances of online classification performance, several metrics were invoked as suggested by Thomas et al. (<xref ref-type="bibr" rid="B38">2013</xref>) and Schl&#x000F6;gl et al. (<xref ref-type="bibr" rid="B33">2007</xref>). To gauge the correctness of classification for a biased classifier (i.e., unequal performance for each class), sensitivity and specificity were estimated from the confusion matrix (Schl&#x000F6;gl et al., <xref ref-type="bibr" rid="B33">2007</xref>). Specificity is the number of true negatives divided by the actual number of negatives in the test set, while sensitivity is the number of true positives divided by the actual number of positives in the test set.</p>
<p>To measure the agreement between the predicted and desired selections (Cohen, <xref ref-type="bibr" rid="B9">1960</xref>; Thomas et al., <xref ref-type="bibr" rid="B38">2013</xref>) in the presence of unbalanced data (i.e., unequal number of samples per class due to the nature of the experiment), Cohen&#x00027;s kappa (&#x003BA;) coefficient was estimated. Kappa ranges from 1 (perfect match) to 0 (chance level). If all values of &#x003BA; within the 95% confidence interval around the mean are above 0 (<overline>&#x003BA;</overline> &#x000B1; 1.96 &#x000D7; &#x003C6;(&#x003BA;) &#x0003E; 0, where &#x003C6;(&#x003BA;) is the standard error), then the average kappa value is significantly above chance (Friedrich et al., <xref ref-type="bibr" rid="B15">2012</xref>). The classification accuracy ACC (overall agreement) was derived from the 2&#x000D7;2 confusion matrix <italic>H</italic>, as
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:mi>A</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle='true'><mml:msub><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:msub><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mstyle></mml:mrow><mml:mi>N</mml:mi></mml:mfrac></mml:mrow></mml:math></disp-formula>
where <italic>H</italic><sub><italic>ii</italic></sub> are the main diagonal elements (i.e., the number of correct classifications) of the confusion matrix <italic>H</italic> and <italic>N</italic> &#x0003D; &#x02211;<sub><italic>i</italic></sub>&#x02211;<sub><italic>j</italic></sub><italic>H<sub>ij</sub></italic> is the total number of trials. The chance-expected agreement <italic>p<sub>e</sub></italic> is the proportion of agreement expected from the marginal distributions of the confusion matrix alone, and is given by,
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle='true'><mml:msub><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:msub><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mtext>+</mml:mtext><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mtext>+</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mstyle></mml:mrow><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
where <italic>n</italic><sub>&#x0002B;<italic>i</italic></sub> and <italic>n</italic><sub><italic>i</italic>&#x0002B;</sub> are the marginal column and row sums, respectively. The estimate of the kappa coefficient <italic>&#x003BA;</italic> is thus,
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mrow><mml:mi>&#x003BA;</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
while its standard error &#x003C6;(&#x003BA;) is given by,
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mrow><mml:mi>&#x003C6;</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>&#x003BA;</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msqrt><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msubsup><mml:mo>&#x02212;</mml:mo><mml:mstyle displaystyle='true'><mml:msub><mml:mo>&#x02211;</mml:mo><mml:mi>i</mml:mi></mml:msub><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mo>+</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mo>+</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>/</mml:mo><mml:msup><mml:mi>N</mml:mi><mml:mn>3</mml:mn></mml:msup></mml:mrow></mml:mstyle></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mi>e</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:msqrt><mml:mi>N</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula></p>
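<p>For illustration, Equations (1&#x02013;4) can be computed directly from a confusion matrix. The following Python sketch was not part of the study&#x00027;s analysis pipeline, and the 2 &#x000D7; 2 matrix is hypothetical; it shows the accuracy, kappa, standard error, and chance-level test:</p>

```python
import numpy as np

def kappa_stats(H):
    """Cohen's kappa and its standard error from a confusion matrix H
    (rows = actual class, columns = predicted class)."""
    H = np.asarray(H, dtype=float)
    N = H.sum()
    p0 = np.trace(H) / N              # Eq. (1): overall agreement ACC
    row = H.sum(axis=1)               # n_{i+}: marginal row sums
    col = H.sum(axis=0)               # n_{+i}: marginal column sums
    pe = (row * col).sum() / N**2     # Eq. (2): chance expected agreement
    kappa = (p0 - pe) / (1 - pe)      # Eq. (3)
    # Eq. (4): standard error of kappa
    se = np.sqrt(p0 + pe**2 - ((row * col) * (row + col)).sum() / N**3) \
         / ((1 - pe) * np.sqrt(N))
    return kappa, se

# Hypothetical confusion matrix: 70 actual rest trials, 30 actual activations
H = [[60, 10],
     [5, 25]]
k, se = kappa_stats(H)
# Kappa is significantly above chance if the 95% CI excludes 0
significant = k - 1.96 * se > 0
```

<p>With this hypothetical matrix, the lower bound of the 95% confidence interval exceeds 0, so the kappa estimate would be deemed significantly above chance.</p>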
<p>This method of evaluation is preferred for problems with unbalanced classes (Danker-Hopfe et al., <xref ref-type="bibr" rid="B10">2004</xref>; Anderer et al., <xref ref-type="bibr" rid="B5">2005</xref>), such as sleep classification.</p>
<p>To gauge performance of the system as a communication channel, we estimated the Nykopp information transfer rate (ITR), which is recommended for classification problems with unbalanced class sizes (Thomas et al., <xref ref-type="bibr" rid="B38">2013</xref>). Letting <italic>x</italic><sub><italic>i</italic></sub> represent the actual input category (<italic>x</italic><sub>0</sub> &#x0003D; rest, <italic>x</italic><sub>1</sub> &#x0003D; activation) and <italic>y</italic><sub><italic>j</italic></sub> represent the predicted output (<italic>y</italic><sub>0</sub> &#x0003D; rest, <italic>y</italic><sub>1</sub> &#x0003D; activation), the ITR was given by
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mrow><mml:mi>I</mml:mi><mml:mi>T</mml:mi><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mtext>Nykopp</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mn>1</mml:mn></mml:munderover><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mn>1</mml:mn></mml:munderover><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>|</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi>log</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mfrac><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>|</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>]</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
where
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mn>1</mml:mn></mml:munderover><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>|</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<disp-formula id="E7"><label>(7)</label><mml:math id="M7"><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>|</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
while <italic>p</italic>(<italic>x</italic><sub>0</sub>) &#x0003D; 0.7 and <italic>p</italic>(<italic>x</italic><sub>1</sub>) &#x0003D; 0.3 are the prior probabilities of the rest and activation tasks, respectively, estimated from the average frequency of occurrence of each task when spelling an intended message with no mistakes. To calculate the bit-rate, we multiplied the Nykopp ITR by the average number of trials per minute (Thomas et al., <xref ref-type="bibr" rid="B38">2013</xref>).</p>
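<p>Under these definitions, the Nykopp ITR reduces to the mutual information between the actual and predicted classes with fixed priors. The following Python sketch is for illustration only (the confusion matrix is hypothetical; rows are actual classes, as in Equation 7):</p>

```python
import numpy as np

def nykopp_itr(H, priors):
    """Mutual-information ITR (bits/trial) from confusion matrix H
    (rows = actual class) and fixed prior probabilities p(x_i)."""
    H = np.asarray(H, dtype=float)
    p_y_given_x = H / H.sum(axis=1, keepdims=True)   # Eq. (7): H_ij / n_{i+}
    p_x = np.asarray(priors, dtype=float)
    p_y = p_x @ p_y_given_x                          # Eq. (6): output marginals
    # Eq. (5): sum over i, j of p(x_i) p(y_j|x_i) log2[p(y_j|x_i)/p(y_j)]
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_x[:, None] * p_y_given_x * np.log2(p_y_given_x / p_y)
    return np.nansum(terms)                          # treats 0*log(0) as 0

# Hypothetical 2x2 confusion matrix with the study's priors (0.7 rest, 0.3 activation)
H = [[60, 10],
     [5, 25]]
itr = nykopp_itr(H, [0.7, 0.3])
bit_rate = itr * 2.0   # e.g., an assumed 2 trials/min would give bits/min
```

<p>The trials-per-minute factor in the last line is an assumed value for illustration, not a figure from the study.</p>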
<p>To assess system efficiency, the average throughput, defined as the number of characters output per minute, was determined. Only correct characters were counted, while the measured duration included the time required to make error corrections. Since participants were asked to correct mistakes during the spelling process, the estimated throughputs were conservative estimates of the achievable character rate.</p>
<p>To measure the resemblance of the actual output to the intended output, the Levenshtein or edit distance was calculated. The edit distance compares the similarity between two strings of unequal length and is defined as the number of editorial operations required to convert the actual output into the intended output (Sankoff and Kruskal, <xref ref-type="bibr" rid="B31">1993</xref>). Each deletion and insertion of a character was given a weight of 1 while a substitution was given a weight of 2, being equivalent to a deletion followed by an insertion (Sankoff and Kruskal, <xref ref-type="bibr" rid="B31">1993</xref>). Since the intended outputs were of different lengths for the testing blocks of the three sessions, the edit distances were normalized based on the longest string length of the intended outputs (Equation 8). Other normalization methods more severely penalize a lack of input over an incorrect selection (Marzal and Vidal, <xref ref-type="bibr" rid="B26">1993</xref>; Weigel and Fein, <xref ref-type="bibr" rid="B41">1994</xref>; Li and Liu, <xref ref-type="bibr" rid="B24">2007</xref>). However, due to the study design, an incorrect selection should have a higher edit distance than a lack of input since the effort required to correct an incorrect selection is far greater than that needed to produce an intended output with no corrections. The normalized edit distance, <italic>D</italic><sub><italic>EN</italic></sub>, is given by,
<disp-formula id="E8"><label>(8)</label><mml:math id="M8"><mml:mrow><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mi>E</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>D</mml:mi><mml:mi>E</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>X</mml:mi><mml:mo>&#x0007C;</mml:mo></mml:mrow></mml:mfrac><mml:mo>&#x000D7;</mml:mo><mml:mo>&#x0007C;</mml:mo><mml:msup><mml:mi>X</mml:mi><mml:mo>*</mml:mo></mml:msup><mml:mo>&#x0007C;</mml:mo></mml:mrow></mml:math></disp-formula>
where |<italic>X</italic>| is the length of the intended output, <italic>D</italic><sub><italic>E</italic></sub> is the raw edit distance between the intended and actual outputs, and |<italic>X</italic><sup>&#x0002A;</sup>| is the length of the longest intended output across all sessions. Given that the longest string length in our experiment was 18, i.e., |<italic>X</italic><sup>&#x0002A;</sup>| &#x0003D; 18, a normalized edit distance of 18 indicated no output, while 0 indicated a perfect match between the intended and actual outputs. A score above 18 indicated an actual output requiring more corrective edits than producing no output at all. The larger the normalized edit distance, the further the actual output was from the intended output.</p>
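<p>As an illustration, the weighted edit distance and the normalization of Equation (8) can be sketched in Python; the example strings below are hypothetical, not taken from the study:</p>

```python
def edit_distance(actual, intended):
    """Levenshtein distance with insertion/deletion cost 1 and
    substitution cost 2 (equivalent to a deletion plus an insertion)."""
    m, n = len(actual), len(intended)
    # dp[i][j] = cost of converting actual[:i] into intended[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                     # delete all remaining characters
    for j in range(n + 1):
        dp[0][j] = j                     # insert all remaining characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if actual[i - 1] == intended[j - 1] else 2
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[m][n]

def normalized_edit_distance(actual, intended, longest_intended=18):
    # Eq. (8): D_EN = D_E / |X| * |X*|, with |X*| = 18 in this study
    return edit_distance(actual, intended) / len(intended) * longest_intended

# Producing no output yields the maximum "no input" score of 18
assert normalized_edit_distance("", "HELLO WORLD") == 18
```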
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Feature selection</title>
<p>Bilateral features were more frequently selected (Figure <xref ref-type="fig" rid="F6">6</xref>), which could be due to the left-lateralized nature of the language task. The higher selection frequency of bilateral features was consistent with that reported in a previous offline TCD-BCI study using verbal fluency (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>). Therefore, our modified verbal fluency task (i.e., rehearsing the spelling while imagining the writing of the target word) appeared to elicit machine-discernible left-hemispheric lateralization.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Normalized frequency of features (the number of times a feature was selected divided by the total number of times all features were selected) across all participants</bold>.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0006.tif"/>
</fig>
</sec>
<sec>
<title>Inter-participant analysis</title>
<p>The online performance on the testing blocks in sessions II and III is reported in Table <xref ref-type="table" rid="T1">1</xref>. The system achieved an average specificity and sensitivity of 81.44 &#x000B1; 8.35% and 82.30 &#x000B1; 7.39%, respectively, resulting in an average kappa coefficient of 0.60 &#x000B1; 0.03. All participants exhibited a kappa coefficient that exceeded chance. Seven out of eight participants achieved a kappa coefficient over 0.4, which is equivalent to an accuracy &#x0003E;70% had the classes been balanced (Friedrich et al., <xref ref-type="bibr" rid="B15">2012</xref>).</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Classification performance within individual sessions</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top"><bold>Participant</bold></th>
<th align="center" valign="top"><bold>Session</bold></th>
<th align="center" valign="top"><bold>&#x00023; Features selected</bold></th>
<th align="center" valign="top"><bold>Specificity (%)</bold></th>
<th align="center" valign="top"><bold>Sensitivity (%)</bold></th>
<th align="center" valign="top"><bold>Kappa <overline><bold><italic>k</italic></bold></overline> &#x000B1; &#x003C6;<bold>(<italic>k</italic>)</bold></bold></th>
<th align="center" valign="top"><bold>Information transfer rate <italic>ITR</italic><sub>Nykopp</sub> (bits/trial)</bold></th>
<th align="center" valign="top"><bold>Bit-rate (bits/min)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">94.23</td>
<td align="center" valign="top">25.00</td>
<td align="center" valign="top">0.23 &#x000B1; 0.19</td>
<td align="center" valign="top">0.05</td>
<td align="center" valign="top">0.16</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">84.27</td>
<td align="center" valign="top">70.97</td>
<td align="center" valign="top">0.53 &#x000B1; 0.14</td>
<td align="center" valign="top">0.21</td>
<td align="center" valign="top">0.65</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">70.24</td>
<td align="center" valign="top">88.89</td>
<td align="center" valign="top">0.51 &#x000B1; 0.13</td>
<td align="center" valign="top">0.23</td>
<td align="center" valign="top">0.73</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">78.26</td>
<td align="center" valign="top">71.43</td>
<td align="center" valign="top">0.43 &#x000B1; 0.18</td>
<td align="center" valign="top">0.16</td>
<td align="center" valign="top">0.51</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">75.00</td>
<td align="center" valign="top">86.11</td>
<td align="center" valign="top">0.54 &#x000B1; 0.13</td>
<td align="center" valign="top">0.24</td>
<td align="center" valign="top">0.77</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">82.93</td>
<td align="center" valign="top">94.74</td>
<td align="center" valign="top">0.72 &#x000B1; 0.14</td>
<td align="center" valign="top">0.42</td>
<td align="center" valign="top">1.33</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">80.00</td>
<td align="center" valign="top">46.67</td>
<td align="center" valign="top">0.26 &#x000B1; 0.17</td>
<td align="center" valign="top">0.05</td>
<td align="center" valign="top">0.16</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">82.72</td>
<td align="center" valign="top">83.78</td>
<td align="center" valign="top">0.63 &#x000B1; 0.14</td>
<td align="center" valign="top">0.30</td>
<td align="center" valign="top">0.93</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">5</td>
<td align="center" valign="top">82.72</td>
<td align="center" valign="top">78.38</td>
<td align="center" valign="top">0.59 &#x000B1; 0.14</td>
<td align="center" valign="top">0.25</td>
<td align="center" valign="top">0.78</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">32.65</td>
<td align="center" valign="top">72.73</td>
<td align="center" valign="top">0.03 &#x000B1; 0.08</td>
<td align="center" valign="top">&#x0003C;0.01</td>
<td align="center" valign="top">0.01</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">77.03</td>
<td align="center" valign="top">71.88</td>
<td align="center" valign="top">0.45 &#x000B1; 0.14</td>
<td align="center" valign="top">0.15</td>
<td align="center" valign="top">0.49</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">71.62</td>
<td align="center" valign="top">68.00</td>
<td align="center" valign="top">0.34 &#x000B1; 0.13</td>
<td align="center" valign="top">0.10</td>
<td align="center" valign="top">0.31</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">36.84</td>
<td align="center" valign="top">100.00</td>
<td align="center" valign="top">0.27 &#x000B1; 0.13</td>
<td align="center" valign="top">0.16</td>
<td align="center" valign="top">0.50</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">83.13</td>
<td align="center" valign="top">91.89</td>
<td align="center" valign="top">0.69 &#x000B1; 0.14</td>
<td align="center" valign="top">0.39</td>
<td align="center" valign="top">1.22</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">71.80</td>
<td align="center" valign="top">90.91</td>
<td align="center" valign="top">0.56 &#x000B1; 0.13</td>
<td align="center" valign="top">0.26</td>
<td align="center" valign="top">0.81</td>
</tr>
<tr>
<td align="left" valign="top">6</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">94.11</td>
<td align="center" valign="top">33.33</td>
<td align="center" valign="top">0.32 &#x000B1; 0.20</td>
<td align="center" valign="top">0.09</td>
<td align="center" valign="top">0.27</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">81.18</td>
<td align="center" valign="top">88.57</td>
<td align="center" valign="top">0.63 &#x000B1; 0.14</td>
<td align="center" valign="top">0.33</td>
<td align="center" valign="top">1.03</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">78.41</td>
<td align="center" valign="top">74.19</td>
<td align="center" valign="top">0.47 &#x000B1; 0.13</td>
<td align="center" valign="top">0.18</td>
<td align="center" valign="top">0.57</td>
</tr>
<tr>
<td align="left" valign="top">7</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">88.89</td>
<td align="center" valign="top">71.43</td>
<td align="center" valign="top">0.59 &#x000B1; 0.21</td>
<td align="center" valign="top">0.26</td>
<td align="center" valign="top">0.82</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">90.91</td>
<td align="center" valign="top">71.88</td>
<td align="center" valign="top">0.63 &#x000B1; 0.15</td>
<td align="center" valign="top">0.29</td>
<td align="center" valign="top">0.91</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">93.26</td>
<td align="center" valign="top">80.65</td>
<td align="center" valign="top">0.74 &#x000B1; 0.16</td>
<td align="center" valign="top">0.41</td>
<td align="center" valign="top">1.28</td>
</tr>
<tr>
<td align="left" valign="top">8</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">85.42</td>
<td align="center" valign="top">36.36</td>
<td align="center" valign="top">0.22 &#x000B1; 0.17</td>
<td align="center" valign="top">0.04</td>
<td align="center" valign="top">0.13</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">82.98</td>
<td align="center" valign="top">72.72</td>
<td align="center" valign="top">0.56 &#x000B1; 0.16</td>
<td align="center" valign="top">0.21</td>
<td align="center" valign="top">0.66</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">75.29</td>
<td align="center" valign="top">77.14</td>
<td align="center" valign="top">0.47 &#x000B1; 0.13</td>
<td align="center" valign="top">0.18</td>
<td align="center" valign="top">0.56</td>
</tr>
<tr>
<td align="left" valign="top">9</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">82.00</td>
<td align="center" valign="top">40.00</td>
<td align="center" valign="top">0.20 &#x000B1; 0.16</td>
<td align="center" valign="top">0.04</td>
<td align="center" valign="top">0.12</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">88.64</td>
<td align="center" valign="top">93.75</td>
<td align="center" valign="top">0.76 &#x000B1; 0.15</td>
<td align="center" valign="top">0.48</td>
<td align="center" valign="top">1.53</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">3</td>
<td align="center" valign="top">86.05</td>
<td align="center" valign="top">81.82</td>
<td align="center" valign="top">0.64 &#x000B1; 0.15</td>
<td align="center" valign="top">0.31</td>
<td align="center" valign="top">0.99</td>
</tr>
<tr>
<td align="left" valign="top">10</td>
<td align="center" valign="top">I</td>
<td align="center" valign="top">4</td>
<td align="center" valign="top">93.88</td>
<td align="center" valign="top">54.55</td>
<td align="center" valign="top">0.52 &#x000B1; 0.22</td>
<td align="center" valign="top">0.20</td>
<td align="center" valign="top">0.64</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">II</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">83.75</td>
<td align="center" valign="top">94.87</td>
<td align="center" valign="top">0.73 &#x000B1; 0.15</td>
<td align="center" valign="top">0.43</td>
<td align="center" valign="top">1.37</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">III</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">88.37</td>
<td align="center" valign="top">82.35</td>
<td align="center" valign="top">0.68 &#x000B1; 0.15</td>
<td align="center" valign="top">0.35</td>
<td align="center" valign="top">1.10</td>
</tr>
<tr>
<td align="left" valign="top" colspan="3">Average online performance (sessions II and III)</td>
<td align="center" valign="top">81.44 &#x000B1; 8.35</td>
<td align="center" valign="top">82.30 &#x000B1; 7.39</td>
<td align="center" valign="top">0.60 &#x000B1; 0.03</td>
<td align="center" valign="top">0.28</td>
<td align="center" valign="top">0.87</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Inter-session results</title>
<p>Classification performance across all sessions is summarized in Table <xref ref-type="table" rid="T1">1</xref>. In session I, only four out of ten participants achieved a kappa coefficient above chance level [<overline>&#x003BA;</overline> &#x000B1; 1.96 &#x000D7; &#x003C6;(&#x003BA;) &#x0003E; 0]. Of these four participants, three achieved moderate agreement (&#x003BA; &#x0003E; 0.4) within the first session. For sessions II and III, all participants achieved accuracies above chance, and moderate agreement between intended and predicted selections (&#x003BA; &#x0003E; 0.4) was achieved by nine out of ten participants.</p>
</sec>
<sec>
<title>Dynamic keyboard output and user feedback</title>
<p>The throughputs for all three sessions for all participants are shown in Figure <xref ref-type="fig" rid="F7">7</xref>. The average throughputs for sessions I, II, and III across participants were 0.04 &#x000B1; 0.05, 0.30 &#x000B1; 0.14, and 0.32 &#x000B1; 0.10 characters/minute, respectively.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Average throughput in characters/minute for sessions I, II, and III for all participants</bold>.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0007.tif"/>
</fig>
<p>Figure <xref ref-type="fig" rid="F8">8</xref> depicts the edit distances for each session. Using a paired <italic>t</italic>-test, we compared the edit distances for the 10 participants at a stringent significance level of 0.01. In the session I test block, there was no significant difference between the edit distances for the no-output case (|<italic>X</italic><sup>&#x0002A;</sup>| &#x0003D; 18) and the distances when something was spelled (<italic>p</italic> &#x0003D; 0.619). In other words, the composed output was statistically no closer to the target string than no output at all. In session II, testing blocks 1 and 2 showed significant reductions in edit distance relative to session I (<italic>p</italic> &#x0003D; 0.001; <italic>p</italic> &#x0003D; 0.005), though there was no significant difference between the edit distances of the two blocks (<italic>p</italic> &#x0003D; 0.019). In session III, testing blocks 1 and 2 again showed significant improvement over session I edit distances (<italic>p</italic> &#x0003D; 0.001; <italic>p</italic> &#x0003C; 0.001). In addition, there was no significant difference between the edit distances of the two testing blocks in session III (<italic>p</italic> &#x0003D; 0.790). Finally, there was no significant difference between edit distances from the corresponding blocks of sessions II and III (<italic>p</italic> &#x02265; 0.114).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p><bold>Edit distances for test blocks from Sessions I (left plot), II (middle plot) and III (right plot)</bold>. The horizontal line on each graph indicates an edit distance of 18 where no input was observed.</p></caption>
<graphic xlink:href="fnhum-08-00199-g0008.tif"/>
</fig>
<p>The correlations between tiredness levels and performance across all sessions were ascertained via Spearman&#x00027;s coefficient (<italic>r</italic><sub><italic>s</italic></sub>) (Brown and Hollander, <xref ref-type="bibr" rid="B7">1977</xref>). There were no significant correlations between tiredness levels and edit distance or throughput, nor between tiredness and specificity or sensitivity. However, there was a non-significant negative trend between participants&#x00027; pre-session tiredness and the specificity of the testing blocks (<italic>r</italic><sub><italic>s</italic></sub> &#x0003D; &#x02212;0.314, <italic>n</italic> &#x0003D; 20, <italic>p</italic> &#x0003D; 0.177).</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>This study investigated the potential of controlling an onscreen keyboard via an integrated mental spelling-motor imagery activation task. Previous studies have demonstrated the potential of TCD as a BCI modality, but strictly in an offline setting (Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>; Aleem and Chau, <xref ref-type="bibr" rid="B3">2013</xref>; Faress and Chau, <xref ref-type="bibr" rid="B14">2013</xref>). Using a mental spelling and motor imagery task for making selections, we achieved online accuracies comparable to previously reported offline accuracies, but with modest throughputs.</p>
<sec>
<title>Throughput of the online TCD-based BCI communication system</title>
<p>The throughputs for sessions II and III improved beyond those of session I, approaching the transmission rates of established BCI spelling devices (0.5 char/min) (Birbaumer et al., <xref ref-type="bibr" rid="B6">1999</xref>). The observed combination of low throughput (Figure <xref ref-type="fig" rid="F7">7</xref>) and high kappa coefficient (Table <xref ref-type="table" rid="T1">1</xref>) can be attributed to the cost (temporal penalty) of a false negative. If a bin was unintentionally bypassed, the participant had to wait between 4 and 15 additional slides before the target bin was presented again. This wait could translate into a temporal penalty of several minutes for a single missed selection, an inherent limitation of scanning keyboards. Additional practice may help to decrease response latency. Further, the Dynamic Keyboard interface could also be improved (e.g., with context-specific word prediction) to enhance the speed and accuracy of letter/word selection.</p>
</sec>
<sec>
<title>Feature selection</title>
<p>Some features were consistently selected across all participants. For most participants, the left lateralization of the mental task was pronounced. This finding confirms previous reports of left hemispheric lateralization accompanying verbal fluency tasks (Vingerhoets and Stroobant, <xref ref-type="bibr" rid="B40">1999</xref>; Myrden et al., <xref ref-type="bibr" rid="B29">2011</xref>). Due to the inherent lateralization, bilateral features were selected more frequently, as shown in Figure <xref ref-type="fig" rid="F6">6</xref>, particularly those corresponding to differences between the mean velocities of the left and right MCAs. Nine out of ten participants had the two most frequent features (i.e., difference of MCA means at 5&#x02013;10 and 10&#x02013;15 s) selected at least in one session. The difference in means between 10 and 15 s was the most frequently selected feature, followed by the difference in means between 5 and 10 s. The feature representing the difference in the means between 0 and 5 s was seldom selected. This is likely due to the inherent 5&#x02013;10 s hemodynamic delay post-mental activation (Harders et al., <xref ref-type="bibr" rid="B18">1989</xref>; Szirmai et al., <xref ref-type="bibr" rid="B36">2005</xref>).</p>
</sec>
<sec>
<title>Classification of mental spelling</title>
<p>All participants improved upon their session I performance in the latter two sessions. This improvement is attributable in part to the increase in training data available to the classifier. In addition, participants may have become more familiar and comfortable with the study protocol and the user interface. A longitudinal study of TCD-based BCI may help elucidate the effect of mental practice on functional performance.</p>
<p>Other factors (e.g., fatigue, extended trial duration or head motion) may have also impacted participant performance. For example, participant 4 reported a lack of concentration and physical fatigue, which may explain the lower accuracies for this individual.</p>
</sec>
<sec>
<title>Communication rate</title>
<p>The TCD-BCI achieved an average bit-rate of 0.87 bits/min and a maximum of 1.53 bits/min. If the 10 s post-activation recovery time were removed, the average bit-rate would improve to 1.10 bits/min. In addition, if a three-task TCD paradigm could be brought into an online setting, similar to the offline study by Aleem and Chau (<xref ref-type="bibr" rid="B3">2013</xref>), and assuming equal priors, the bit-rate could be further increased to 4.38 bits/min. Given the lack of published online TCD-BCIs at present, we compare our results to those of other hemodynamic BCIs. Recent fMRI BCI studies using a two-task algorithm attained an average of 2 bits/min (&#x0007E;80% accuracy), while other fMRI BCI studies with a four-task algorithm attained bit-rates between 0.9 and 1.5 bits/min (&#x0007E;90% accuracies) (Yoo et al., <xref ref-type="bibr" rid="B45">2004</xref>; LaConte et al., <xref ref-type="bibr" rid="B22">2007</xref>; Minati et al., <xref ref-type="bibr" rid="B27">2012</xref>). Thus, our system achieved a comparable bit-rate with a much simpler set-up. At present, EEG-BCIs still offer the most compelling bit-rates, typically on the order of 15&#x02013;30 bits/min (Donchin and Arbel, <xref ref-type="bibr" rid="B11">2009</xref>; Kansaku et al., <xref ref-type="bibr" rid="B21">2010</xref>).</p>
<p>The average throughput for session I was not significantly different from 0 characters/min at a significance level of 0.01 (<italic>p</italic> &#x0003D; 0.022), possibly due to the lower specificity and sensitivity across participants in that session. Throughput for sessions II and III was significantly different from that of session I (<italic>p</italic> &#x0003D; 0.001; <italic>p</italic> &#x0003C; 0.001) and from 0 characters/min (<italic>p</italic> &#x0003C; 0.001; <italic>p</italic> &#x0003C; 0.001). This improvement may be due, in part, to the user&#x00027;s increasing familiarity with the keyboard, facilitating more skilled navigation through the user interface. Given the temporal resolution of TCD and the sequential nature of the dynamic keyboard, the throughput may have approached its theoretical limit of 0.3 characters/min. Without modification of the user interface and the temporal window of data acquisition, further improvement of the throughput might not be possible. Incidentally, the change in throughput from session II to III was not significant (<italic>p</italic> &#x0003D; 0.653), but this does not preclude further improvements over extended periods of practice.</p>
<p>Similar to throughput, the edit distances for sessions II and III improved significantly beyond session I values. Within sessions II and III, the edit distances for the actual outputs did not differ significantly. This suggests that the duration of TCD usage did not affect the quality of the output, as testing block 2 typically occurred an hour after the initial TCD set-up. Therefore, prolonged TCD usage may be possible provided that breaks are taken every 15&#x02013;20 min.</p>
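The edit distance between an intended and a transcribed phrase is typically the Levenshtein distance, optionally normalized to [0, 1] (cf. Marzal and Vidal, 1993; Li and Liu, 2007). A minimal dynamic-programming sketch; the example strings are hypothetical, not phrases from the study.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def normalized_edit_distance(intended: str, typed: str) -> float:
    """Distance scaled to [0, 1]; 0 indicates a perfect transcription."""
    if not intended and not typed:
        return 0.0
    return levenshtein(intended, typed) / max(len(intended), len(typed))

print(levenshtein("water", "wafer"))  # 1
```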
</sec>
<sec>
<title>User feedback questionnaire</title>
<p>Feedback regarding the performance of the online TCD-BCI system was neutral to positive (except for the first sessions of participants 2 and 5). Participants 4 and 8 both indicated that they were &#x0201C;somewhat tired&#x0201D; prior to and &#x0201C;very tired&#x0201D; after every session. The lack of energy prior to a session may have impacted performance with the online TCD-BCI. The live feedback may have further frustrated the participants, exacerbating their fatigue and diminishing their concentration, thus forming a vicious cycle that further degraded performance. However, the lack of a significant overall correlation between tiredness levels and performance in terms of specificity, sensitivity, edit distance, and throughput suggests that perceived fatigue did not directly impact overall user performance.</p>
</sec>
<sec>
<title>Limitations</title>
<p>The inefficiency of the scanning keyboard undoubtedly constrained the observed BCI accuracies. Scanning keyboards are frequently used as an interface for assistive technology devices (Jans and Clark, <xref ref-type="bibr" rid="B20">1994</xref>; Lesher et al., <xref ref-type="bibr" rid="B23">1998</xref>). However, the existing keyboard interface was prone to long delays in the event of incorrect selections. For example, even individuals who achieved high accuracies (&#x0003E;85%) found it difficult to spell the intended phrase within the allotted time. Further improvement of the dynamic keyboard is necessary to achieve more efficient communication in future studies. In addition, the required periodic cooling of the TCD probes introduces further delays in communication. Future improvements in TCD technology may minimize the required duration of probe cooling.</p>
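The delay cost of incorrect selections can be illustrated with a generic linear scanning model. This sketch does not reproduce the study's dynamic keyboard; the item count, step duration, error rate, and recovery penalty are all assumed values for illustration.

```python
def expected_scan_steps(n_items: int) -> float:
    """Average number of highlight advances before the target item is
    reached in a linear scanning interface, assuming uniformly likely targets."""
    return (n_items + 1) / 2

def expected_selection_time(n_items: int, step_s: float,
                            error_rate: float, penalty_s: float) -> float:
    """Expected time per selection: scan time plus the average penalty
    of recovering from an incorrect selection."""
    return expected_scan_steps(n_items) * step_s + error_rate * penalty_s

# 9 items, 2 s per scan step, 15% misses, 30 s to undo a wrong selection:
print(expected_selection_time(9, 2.0, 0.15, 30.0))  # 14.5 s
```

Even a modest error rate inflates the expected selection time substantially once the recovery penalty is large, which is consistent with high-accuracy users still struggling to finish phrases in the allotted time.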
<p>One of the major determinants of participant performance was motivation and concentration. Participants who reported fatigue during specific sessions (e.g., participant 4 in session III) had lower overall accuracy rates than other participants. Conversely, participants who maintained concentration during the testing session (e.g., participants 7 and 9) achieved higher accuracies.</p>
<p>Despite the efforts to precisely locate the MCAs, unbalanced left and right CBFV magnitudes were occasionally observed. Probe placement errors may contribute to lower accuracies. Future TCD-BCI studies should endeavor to place the probes flush against the skin overlying the temporal bone and establish the same physiological insonation depth and sampling volume on either side of the head.</p>
<p>Another potential source of signal contamination for functional TCD studies is motion artifact. Conspicuous facial movements may shift the TCD probes, resulting in momentary or sustained deterioration of the recorded signals. Additionally, extensive body movements (e.g., swinging of the arms, crossing and uncrossing the legs, and shifting in the chair) may introduce CBFV changes unrelated to the mental tasks at hand. Moderate movements (e.g., moving the hands and shifting the feet) were observed in many participants during this experiment. However, the high classification accuracies achieved suggest a level of robustness to these motion artifacts.</p>
</sec>
<sec>
<title>Future outlook</title>
<p>Compared to EEG, TCD is robust to electrical artifacts but, like near-infrared spectroscopy, is subject to long hemodynamic time constants that are several orders of magnitude greater than their electrical counterparts. However, unlike near-infrared spectroscopy BCIs, TCD is immune to ambient lighting. Based on these relative merits and the findings reported herein, TCD may fill a niche for users who possess sufficient literacy skills for mental spelling but are unable to use electrical or optical alternatives, due, for example, to excessive myogenic noise or light absorption by dark hair. By addressing the aforementioned limitations, a TCD-BCI may eventually provide a means of bedside communication for non-verbal individuals with severe motor impairments.</p>
<p>Future research will however need to go beyond the controlled, distraction-free laboratory conditions of the present study to gauge feasibility in realistic environments such as the home or inpatient unit. Further, future work must engage clients with physical disabilities to ascertain tolerance for the instrumentation and feasibility of the task paradigm.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s5">
<title>Conclusion</title>
<p>Using an online TCD-BCI system with an onscreen keyboard and combined mental spelling-motor imagery as the activation task, an average specificity and sensitivity of 81.44 &#x000B1; 8.35% and 82.30 &#x000B1; 7.39%, respectively, were achieved with 10 able-bodied participants. The agreement between the intended and machine-predicted selections was moderate (&#x003BA; &#x0003D; 0.60 &#x000B1; 0.03), with an average information transfer rate of 0.87 bits/min. These results support further investigation of online bilateral TCD-BCI systems using intuitive language tasks.</p>
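The moderate agreement reported above is Cohen's kappa (Cohen, 1960), which discounts the agreement expected by chance given each sequence's label frequencies. A minimal sketch for two label sequences; the example labels are hypothetical, not data from the study.

```python
def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the chance agreement implied by each rater's marginals."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    p_e = sum((list(rater_a).count(c) / n) * (list(rater_b).count(c) / n)
              for c in labels)
    if p_e == 1.0:                      # degenerate case: a single shared label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Intended vs. machine-predicted selections (1 = activation, 0 = rest):
print(cohens_kappa([1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0]))  # ~0.67
```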
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>This research was supported by the University of Toronto and Holland Bloorview Kids Rehabilitation Hospital. Special thanks go to Dr. Young Don Ko, Dr. Saba, KeiYan Chui for their support and help.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aaslid</surname> <given-names>R.</given-names></name></person-group> (<year>1987</year>). <article-title>Visually evoked dynamic blood flow response of the human cerebral circulation</article-title>. <source>Stroke</source> <volume>18</volume>, <fpage>771</fpage>&#x02013;<lpage>775</lpage>. <pub-id pub-id-type="doi">10.1161/01.STR.18.4.771</pub-id><pub-id pub-id-type="pmid">3299883</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aaslid</surname> <given-names>R.</given-names></name> <name><surname>Markwalder</surname> <given-names>T.</given-names></name> <name><surname>Nornes</surname> <given-names>H.</given-names></name></person-group> (<year>1982</year>). <article-title>Noninvasive transcranial Doppler ultrasound recording of flow velocity in basal cerebral arteries</article-title>. <source>J. Neurosurg</source>. <volume>57</volume>, <fpage>769</fpage>&#x02013;<lpage>774</lpage>. <pub-id pub-id-type="doi">10.3171/jns.1982.57.6.0769</pub-id><pub-id pub-id-type="pmid">7143059</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aleem</surname> <given-names>I.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Towards a hemodynamic BCI using transcranial Doppler (TCD) without user-specific training data</article-title>. <source>J. Neural Eng</source>. <volume>10</volume>:<fpage>016005</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/10/1/016005</pub-id><pub-id pub-id-type="pmid">23234760</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alexandrov</surname> <given-names>A.</given-names></name> <name><surname>Sloan</surname> <given-names>M.</given-names></name> <name><surname>Wong</surname> <given-names>L.</given-names></name> <name><surname>Douville</surname> <given-names>C.</given-names></name> <name><surname>Razumovsky</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Practice standards for transcranial Doppler ultrasound: part I - test performance</article-title>. <source>J. Neuroimag</source>. <volume>17</volume>, <fpage>11</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1111/j.1552-6569.2006.00088.x</pub-id><pub-id pub-id-type="pmid">17238867</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderer</surname> <given-names>P.</given-names></name> <name><surname>Gruber</surname> <given-names>G.</given-names></name> <name><surname>Parapatics</surname> <given-names>S.</given-names></name> <name><surname>Woertz</surname> <given-names>M.</given-names></name> <name><surname>Miazhynskaia</surname> <given-names>T.</given-names></name> <name><surname>Klosch</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2005</year>). <article-title>An E-Health solution for automatic sleep classification according to Rechtschaffen and Kales: validation study of the Somnolyzer 24x7 utilizing the Siesta Database</article-title>. <source>Neuropsychobiology</source> <volume>51</volume>, <fpage>115</fpage>&#x02013;<lpage>133</lpage>. <pub-id pub-id-type="doi">10.1159/000085205</pub-id><pub-id pub-id-type="pmid">15838184</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Ghanayim</surname> <given-names>N.</given-names></name> <name><surname>Hinterberger</surname> <given-names>T.</given-names></name> <name><surname>Iversen</surname> <given-names>I.</given-names></name> <name><surname>Kotchoubey</surname> <given-names>B.</given-names></name> <name><surname>Kubler</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>1999</year>). <article-title>A spelling device for the paralysed</article-title>. <source>Nature</source> <volume>398</volume>, <fpage>297</fpage>&#x02013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1038/18581</pub-id><pub-id pub-id-type="pmid">10192330</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>B.</given-names></name> <name><surname>Hollander</surname> <given-names>M.</given-names></name></person-group> (<year>1977</year>). <source>Statistics: A Biomedical Introduction</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>John Wiley and Sons, Inc.</publisher-name> <pub-id pub-id-type="doi">10.1002/9780470316474</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Buxton</surname> <given-names>R.</given-names></name> <name><surname>Lawrence</surname> <given-names>F.</given-names></name></person-group> (<year>1997</year>). <article-title>A model for the coupling between cerebral blood flow and oxygen metabolism during neural stimulation</article-title>. <source>J. Cereb. Blood Flow Metab</source>. <volume>17</volume>, <fpage>64</fpage>&#x02013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1097/00004647-199701000-00009</pub-id><pub-id pub-id-type="pmid">8978388</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>J.</given-names></name></person-group> (<year>1960</year>). <article-title>A coefficient of agreement for nominal scales</article-title>. <source>Educ. Psychol. Meas</source>. <volume>20</volume>, <fpage>37</fpage>&#x02013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1177/001316446002000104</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Danker-Hopfe</surname> <given-names>H.</given-names></name> <name><surname>Kunz</surname> <given-names>D.</given-names></name> <name><surname>Gruber</surname> <given-names>G.</given-names></name> <name><surname>Klosch</surname> <given-names>G.</given-names></name> <name><surname>Lorenzo</surname> <given-names>J.</given-names></name> <name><surname>Himanen</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2004</year>). <article-title>Interrater reliability between scorers from eight European sleep laboratories in subjects with different sleep disorders</article-title>. <source>J. Sleep Res</source>. <volume>13</volume>, <fpage>63</fpage>&#x02013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1046/j.1365-2869.2003.00375.x</pub-id><pub-id pub-id-type="pmid">14996037</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Donchin</surname> <given-names>E.</given-names></name> <name><surname>Arbel</surname> <given-names>Y.</given-names></name></person-group> (<year>2009</year>). <article-title>P300 based brain computer interfaces: a progress report</article-title>, in <source>Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience</source>, eds <person-group person-group-type="editor"><name><surname>Schmorrow</surname> <given-names>D. D.</given-names></name> <name><surname>Estabrooke</surname> <given-names>I. V.</given-names></name> <name><surname>Grootjen</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>), <fpage>724</fpage>&#x02013;<lpage>731</lpage>.</citation>
</ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Duda</surname> <given-names>R.</given-names></name> <name><surname>Hart</surname> <given-names>P.</given-names></name> <name><surname>Stork</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <source>Pattern Classification</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>John Wiley and Sons</publisher-name>.</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Falk</surname> <given-names>T. H.</given-names></name> <name><surname>Guirgis</surname> <given-names>M.</given-names></name> <name><surname>Power</surname> <given-names>S.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>Taking NIRS-BCIs outside the lab: towards achieving robustness against environment noise</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng</source>. <volume>19</volume>, <fpage>136</fpage>&#x02013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2010.2078516</pub-id><pub-id pub-id-type="pmid">20876031</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Faress</surname> <given-names>A.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Towards a multimodal brain-computer interface: combining fNIRS and fTCD measurements to enable higher classification accuracy</article-title>. <source>Neuroimage</source> <volume>77</volume>, <fpage>186</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.03.028</pub-id><pub-id pub-id-type="pmid">23541802</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedrich</surname> <given-names>E.</given-names></name> <name><surname>Scherer</surname> <given-names>R.</given-names></name> <name><surname>Neuper</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>The effect of distinct mental strategies on classification performance for brain-computer interfaces</article-title>. <source>Int. J. Psychophysiol</source>. <volume>84</volume>, <fpage>86</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2012.01.014</pub-id><pub-id pub-id-type="pmid">22289414</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Group</surname> <given-names>B. S.</given-names></name></person-group> (<year>2010</year>). <article-title>Guidelines for the safe use of diagnostic ultrasound equipment</article-title>. <source>Ultrasound</source> <volume>18</volume>, <fpage>52</fpage>&#x02013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1258/ult.2010.100003</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haag</surname> <given-names>A.</given-names></name> <name><surname>Moeller</surname> <given-names>N.</given-names></name> <name><surname>Knake</surname> <given-names>S.</given-names></name> <name><surname>Hermsen</surname> <given-names>A.</given-names></name> <name><surname>Ortel</surname> <given-names>W.</given-names></name></person-group> (<year>2009</year>). <article-title>Language lateralization in children using functional transcranial Doppler sonography</article-title>. <source>Dev. Med. Child Neurol</source>. <volume>52</volume>, <fpage>331</fpage>&#x02013;<lpage>336</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8749.2009.03362.x</pub-id><pub-id pub-id-type="pmid">19732120</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harders</surname> <given-names>A.</given-names></name> <name><surname>Laborde</surname> <given-names>G.</given-names></name> <name><surname>Droste</surname> <given-names>D.</given-names></name> <name><surname>Rastogi</surname> <given-names>E.</given-names></name></person-group> (<year>1989</year>). <article-title>Brain activity and blood flow velocity changes: a transcranial Doppler study</article-title>. <source>Int. J. Neurosci</source>. <volume>47</volume>, <fpage>81</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.3109/00207458908987421</pub-id><pub-id pub-id-type="pmid">2676884</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jans</surname> <given-names>D.</given-names></name> <name><surname>Clark</surname> <given-names>S.</given-names></name></person-group> (<year>1994</year>). <source>Augmentative Communication in Practice: an Introduction</source>, ed <person-group person-group-type="editor"><name><surname>Wilson</surname> <given-names>A.</given-names></name></person-group> (<publisher-loc>Edinburgh, CALL Centre</publisher-loc>: <publisher-name>University of Edinburgh</publisher-name>).</citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kansaku</surname> <given-names>K.</given-names></name> <name><surname>Hata</surname> <given-names>N.</given-names></name> <name><surname>Takano</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>My thoughts through a robot&#x00027;s eye: an augmented reality-brain-machine interface</article-title>. <source>Neurosci. Res</source>. <volume>66</volume>, <fpage>219</fpage>&#x02013;<lpage>222</lpage>. <pub-id pub-id-type="doi">10.1016/j.neures.2009.10.006</pub-id><pub-id pub-id-type="pmid">19853630</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>LaConte</surname> <given-names>S.</given-names></name> <name><surname>Peltier</surname> <given-names>S.</given-names></name> <name><surname>Hu</surname> <given-names>X.</given-names></name></person-group> (<year>2007</year>). <article-title>Real-time fMRI using brain-state classification</article-title>. <source>Hum. Brain Mapp</source>. <volume>28</volume>, <fpage>1033</fpage>&#x02013;<lpage>1044</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.20326</pub-id><pub-id pub-id-type="pmid">17133383</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lesher</surname> <given-names>G.</given-names></name> <name><surname>Moulton</surname> <given-names>B.</given-names></name> <name><surname>Higginbotham</surname> <given-names>J.</given-names></name></person-group> (<year>1998</year>). <article-title>Techniques for augmenting scanning communication</article-title>. <source>Augmentative Altern. Commun</source>. <volume>14</volume>, <fpage>81</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1080/07434619812331278236</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>A normalized Levenshtein distance metric</article-title>. <source>IEEE Trans. Patt. Anal. Mach. Intell</source>. <volume>29</volume>, <fpage>1091</fpage>&#x02013;<lpage>1095</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2007.1078</pub-id><pub-id pub-id-type="pmid">17431306</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mamun</surname> <given-names>K.</given-names></name> <name><surname>Mace</surname> <given-names>M.</given-names></name> <name><surname>Lutman</surname> <given-names>M.</given-names></name> <name><surname>Stein</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Aziz</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>A robust strategy for decoding movements from deep brain local field potentials to facilitate brain machine interfaces. Biomedical robotics and biomechatronics (BioRob)</article-title>, in <source>2012 4th IEEE RAS and EMBS International Conference</source> (<publisher-loc>Southampton</publisher-loc>), <fpage>320</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1109/BioRob.2012.6290708</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marzal</surname> <given-names>A.</given-names></name> <name><surname>Vidal</surname> <given-names>E.</given-names></name></person-group> (<year>1993</year>). <article-title>Computation of normalized edit distance and applications</article-title>. <source>IEEE Trans. Patt. Anal. Mach. Intell</source>. <volume>15</volume>, <fpage>926</fpage>&#x02013;<lpage>932</lpage>. <pub-id pub-id-type="doi">10.1109/34.232078</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minati</surname> <given-names>L.</given-names></name> <name><surname>Nigri</surname> <given-names>A.</given-names></name> <name><surname>Rosazza</surname> <given-names>C.</given-names></name> <name><surname>Bruzzone</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Thoughts turned into high-level commands: proof-of-concept study of a vision-guided robot arm driven by functional MRI(fMRI) signals</article-title>. <source>Med. Eng. Phys</source>. <volume>34</volume>, <fpage>650</fpage>&#x02013;<lpage>658</lpage>. <pub-id pub-id-type="doi">10.1016/j.medengphy.2012.02.004</pub-id><pub-id pub-id-type="pmid">22405803</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myrden</surname> <given-names>A.</given-names></name> <name><surname>Kushki</surname> <given-names>A.</given-names></name> <name><surname>Sejdic</surname> <given-names>E.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2012</year>). <article-title>Towards increased data transmission rate for a three-class metabolic brain-computer interface based on transcranial Doppler ultrasound</article-title>. <source>Neurosci. Lett</source>. <volume>528</volume>, <fpage>99</fpage>&#x02013;<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1016/j.neulet.2012.09.030</pub-id><pub-id pub-id-type="pmid">23006241</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Myrden</surname> <given-names>A.</given-names></name> <name><surname>Kushki</surname> <given-names>A.</given-names></name> <name><surname>Sejdic</surname> <given-names>E.</given-names></name> <name><surname>Guerguerian</surname> <given-names>A.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>A brain-computer interface based on bilateral transcranial Doppler ultrasound</article-title>. <source>PLoS ONE</source> <volume>6</volume>:<fpage>e24170</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0024170</pub-id><pub-id pub-id-type="pmid">21915292</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reinsfelt</surname> <given-names>B.</given-names></name> <name><surname>Sesterlind</surname> <given-names>A.</given-names></name> <name><surname>Ioanes</surname> <given-names>D.</given-names></name> <name><surname>Zetterberg</surname> <given-names>H.</given-names></name> <name><surname>Freden-Lindqvist</surname> <given-names>J.</given-names></name> <name><surname>Ricksten</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Transcranial doppler microembolic signals and serum marker evidence of brain injury during transcatheter aortic valve implantation</article-title>. <source>Acta Anaesthesiol. Scand</source>. <volume>56</volume>, <fpage>240</fpage>&#x02013;<lpage>247</lpage>. <pub-id pub-id-type="doi">10.1111/j.1399-6576.2011.02563.x</pub-id><pub-id pub-id-type="pmid">22092012</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sankoff</surname> <given-names>D.</given-names></name> <name><surname>Kruskal</surname> <given-names>J.</given-names></name></person-group> (<year>1993</year>). <source>Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Addison Wesley Publishing Company</publisher-name>.</citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarkar</surname> <given-names>S.</given-names></name> <name><surname>Ghosh</surname> <given-names>S.</given-names></name> <name><surname>Ghosh</surname> <given-names>S.</given-names></name> <name><surname>Collier</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>Role of transcranial Doppler ultrasonography in stroke</article-title>. <source>Postgrad. Med. J</source>. <volume>83</volume>, <fpage>683</fpage>&#x02013;<lpage>689</lpage>. <pub-id pub-id-type="doi">10.1136/pgmj.2007.058602</pub-id><pub-id pub-id-type="pmid">17989267</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schl&#x000F6;gl</surname> <given-names>A.</given-names></name> <name><surname>Kronegg</surname> <given-names>J.</given-names></name> <name><surname>Huggins</surname> <given-names>J. E.</given-names></name> <name><surname>Mason</surname> <given-names>S. G.</given-names></name></person-group> (<year>2007</year>). <article-title>Evaluation criteria in BCI research</article-title>, in <source>Towards Brain-Computer Interfacing</source>, eds <person-group person-group-type="editor"><name><surname>Dornhege</surname> <given-names>G.</given-names></name> <name><surname>Millan</surname> <given-names>J. R.</given-names></name> <name><surname>Hinterberger</surname> <given-names>T.</given-names></name> <name><surname>McFarland</surname> <given-names>D.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K. R.</given-names></name></person-group> (<publisher-loc>Graz</publisher-loc>: <publisher-name>MIT Press</publisher-name>), <fpage>327</fpage>&#x02013;<lpage>342</lpage>.</citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sitaram</surname> <given-names>R.</given-names></name> <name><surname>Caria</surname> <given-names>A.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name></person-group> (<year>2009</year>). <article-title>Hemodynamic brain-computer interfaces for communication and rehabilitation</article-title>. <source>Neural Netw</source>. <volume>22</volume>, <fpage>1320</fpage>&#x02013;<lpage>1328</lpage>. <pub-id pub-id-type="doi">10.1016/j.neunet.2009.05.009</pub-id><pub-id pub-id-type="pmid">19524399</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stroobant</surname> <given-names>N.</given-names></name> <name><surname>Vingerhoets</surname> <given-names>G.</given-names></name></person-group> (<year>2000</year>). <article-title>Transcranial Doppler ultrasonography monitoring of cerebral hemodynamics during performance of cognitive tasks: a review</article-title>. <source>Neuropsychol. Rev</source>. <volume>10</volume>, <fpage>213</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1023/A:1026412811036</pub-id><pub-id pub-id-type="pmid">11132101</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Szirmai</surname> <given-names>I.</given-names></name> <name><surname>Amrein</surname> <given-names>I.</given-names></name> <name><surname>Palvolgyi</surname> <given-names>L.</given-names></name> <name><surname>Debreczeni</surname> <given-names>R.</given-names></name> <name><surname>Kamondi</surname> <given-names>A.</given-names></name></person-group> (<year>2005</year>). <article-title>Correlation between blood flow velocity in the middle cerebral artery and EEG during cognitive effort</article-title>. <source>Cogn. Brain Res</source>. <volume>24</volume>, <fpage>33</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.12.011</pub-id><pub-id pub-id-type="pmid">15922155</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tai</surname> <given-names>K.</given-names></name> <name><surname>Blain</surname> <given-names>S.</given-names></name> <name><surname>Chau</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>A review of emerging access technologies for individuals with severe motor impairments</article-title>. <source>Assist. Technol</source>. <volume>20</volume>, <fpage>204</fpage>&#x02013;<lpage>219</lpage>. <pub-id pub-id-type="doi">10.1080/10400435.2008.10131947</pub-id><pub-id pub-id-type="pmid">19160907</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thomas</surname> <given-names>E.</given-names></name> <name><surname>Dyson</surname> <given-names>M.</given-names></name> <name><surname>Clerc</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>An analysis of performance evaluation for motor-imagery based BCI</article-title>. <source>J. Neural Eng</source>. <volume>10</volume>:<fpage>031001</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/10/3/031001</pub-id><pub-id pub-id-type="pmid">23639955</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsivgoulis</surname> <given-names>G.</given-names></name> <name><surname>Alexandrov</surname> <given-names>A.</given-names></name> <name><surname>Sloan</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Advances in transcranial Doppler ultrasonography</article-title>. <source>Curr. Neurol. Neurosci. Rep</source>. <volume>9</volume>, <fpage>46</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1007/s11910-009-0008-7</pub-id><pub-id pub-id-type="pmid">19080753</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vingerhoets</surname> <given-names>G.</given-names></name> <name><surname>Stroobant</surname> <given-names>N.</given-names></name></person-group> (<year>1999</year>). <article-title>Lateralization of cerebral blood flow velocity changes during cognitive tasks. A simultaneous bilateral transcranial Doppler study</article-title>. <source>Stroke</source> <volume>30</volume>, <fpage>2152</fpage>&#x02013;<lpage>2158</lpage>. <pub-id pub-id-type="doi">10.1161/01.STR.30.10.2152</pub-id><pub-id pub-id-type="pmid">10512921</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Weigel</surname> <given-names>A.</given-names></name> <name><surname>Fein</surname> <given-names>F.</given-names></name></person-group> (<year>1994</year>). <article-title>Normalizing the weighted edit distance</article-title>, in <source>Proceedings of 12th IAPR International Conference on Pattern Recognition</source>, <volume>Vol. 2</volume>, (<publisher-loc>Kaiserslautern</publisher-loc>), <fpage>339</fpage>&#x02013;<lpage>402</lpage>.</citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>White</surname> <given-names>H.</given-names></name> <name><surname>Venkatesh</surname> <given-names>B.</given-names></name></person-group> (<year>2006</year>). <article-title>Applications of transcranial Doppler in the ICU: a review</article-title>. <source>Intensive Care Med</source>. <volume>32</volume>, <fpage>981</fpage>&#x02013;<lpage>994</lpage>. <pub-id pub-id-type="doi">10.1007/s00134-006-0173-y</pub-id><pub-id pub-id-type="pmid">16791661</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Whitehouse</surname> <given-names>A. J.</given-names></name> <name><surname>Badcock</surname> <given-names>N.</given-names></name> <name><surname>Groen</surname> <given-names>M.</given-names></name> <name><surname>Bishop</surname> <given-names>D. V.</given-names></name></person-group> (<year>2009</year>). <article-title>Reliability of a novel paradigm for determining hemispheric lateralization of visuospatial function</article-title>. <source>J. Int. Neuropsychol. Soc</source>. <volume>15</volume>, <fpage>1028</fpage>&#x02013;<lpage>1032</lpage>. <pub-id pub-id-type="doi">10.1017/S1355617709990555</pub-id><pub-id pub-id-type="pmid">19709454</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpaw</surname> <given-names>J.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>McFarland</surname> <given-names>D.</given-names></name> <name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Vaughan</surname> <given-names>T.</given-names></name></person-group> (<year>2002</year>). <article-title>Brain-computer interfaces for communication and control</article-title>. <source>Clin. Neurophysiol</source>. <volume>113</volume>, <fpage>767</fpage>&#x02013;<lpage>791</lpage>. <pub-id pub-id-type="doi">10.1016/S1388-2457(02)00057-3</pub-id><pub-id pub-id-type="pmid">12048038</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yoo</surname> <given-names>S.</given-names></name> <name><surname>Fairneny</surname> <given-names>T.</given-names></name> <name><surname>Chen</surname> <given-names>N. K.</given-names></name> <name><surname>Choo</surname> <given-names>S. E.</given-names></name> <name><surname>Panych</surname> <given-names>L.</given-names></name> <name><surname>Park</surname> <given-names>H.</given-names></name> <etal/></person-group> (<year>2004</year>). <article-title>Brain&#x02013;computer interface using fMRI: spatial navigation by thoughts</article-title>. <source>Neuroreport</source> <volume>15</volume>, <fpage>1591</fpage>&#x02013;<lpage>1595</lpage>. <pub-id pub-id-type="doi">10.1097/01.wnr.0000133296.39160.fe</pub-id><pub-id pub-id-type="pmid">15232289</pub-id></citation>
</ref>
</ref-list>
</back>
</article>
