<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurorobot.</journal-id>
<journal-title>Frontiers in Neurorobotics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurorobot.</abbrev-journal-title>
<issn pub-type="epub">1662-5218</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnbot.2018.00003</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Non-Uniform Sample Assignment in Training Set Improving Recognition of Hand Gestures Dominated with Similar Muscle Activities</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zhang</surname> <given-names>Yao</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Liao</surname> <given-names>Yanjian</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Wu</surname> <given-names>Xiaoying</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Chen</surname> <given-names>Lin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Xiong</surname> <given-names>Qiliang</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://frontiersin.org/people/u/483642"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Gao</surname> <given-names>Zhixian</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Zheng</surname> <given-names>Xiaolin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Guanglin</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://frontiersin.org/people/u/154332"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Hou</surname> <given-names>Wensheng</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="cor1">&#x0002A;</xref>
<uri xlink:href="http://frontiersin.org/people/u/458111"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Key Laboratory of Biorheological Science and Technology, Ministry of Education, Bioengineering College, Chongqing University</institution>, <addr-line>Chongqing</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Chongqing Engineering Research Center of Medical Electronics Technology</institution>, <addr-line>Chongqing</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences</institution>, <addr-line>Shenzhen</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Ganesh R. Naik, Western Sydney University, Australia</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Yinlai Jiang, University of Electro-Communications, Japan; Rifai Chai, University of Technology Sydney, Australia; Jose De Jesus Rubio, Instituto Polit&#x000E9;cnico Nacional, Mexico</p></fn>
<corresp content-type="corresp" id="cor1">&#x0002A;Correspondence: Wensheng Hou, <email>w.s.hou&#x00040;cqu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>12</day>
<month>02</month>
<year>2018</year>
</pub-date>
<pub-date pub-type="collection">
<year>2018</year>
</pub-date>
<volume>12</volume>
<elocation-id>3</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>07</month>
<year>2017</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>01</month>
<year>2018</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2018 Zhang, Liao, Wu, Chen, Xiong, Gao, Zheng, Li and Hou.</copyright-statement>
<copyright-year>2018</copyright-year>
<copyright-holder>Zhang, Liao, Wu, Chen, Xiong, Gao, Zheng, Li and Hou</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>So far, little is known about how the assignment of surface electromyogram (sEMG) feature samples in a training set influences the recognition of hand gestures, and the aim of this study was to explore the impact of different sample arrangements in the training set on the classification of hand gestures dominated by similar muscle activation patterns. Seven right-handed healthy subjects (24.2&#x02009;&#x000B1;&#x02009;1.2&#x02009;years) were recruited to perform similar grasping tasks (fist, spherical, and cylindrical grasping) and similar pinch tasks (finger, key, and tape pinch). Each task was sustained for 4&#x02009;s and followed by a 5-s rest interval to avoid fatigue, and the procedure was repeated 60 times for every task. sEMG signals were recorded from six forearm hand muscles during the grasping or pinch tasks, and the 4-s sEMG from each channel was segmented for empirical mode decomposition analysis trial by trial. The muscle activity was quantified with the zero crossing (ZC) and Willison amplitude (WAMP) of the first four resulting intrinsic mode functions. Thereafter, a sEMG feature vector was constructed from the ZC and WAMP of each sEMG channel, and a classifier combining a support vector machine with a genetic algorithm was used for hand gesture recognition. The sample number for each hand gesture was rearranged according to different sample proportions in the training set, and the corresponding recognition rate was calculated to evaluate the effect of the sample assignment change on gesture classification. For both similar grasping and similar pinch tasks, the sample assignment change in the training set affected the overall recognition rate of the candidate hand gestures. Compared with conventional results obtained with uniformly assigned training samples, the recognition rate of similar pinch gestures was significantly improved when the samples of the finger-, key-, and tape-pinch gestures were assigned as 60, 20, and 20%, respectively. 
Similarly, the recognition rate of similar grasping gestures also rose when the sample proportions of fist, spherical, and cylindrical grasping were 40, 30, and 30%, respectively. Our results suggest that the recognition rate of hand gestures can be regulated by changing the sample arrangement in the training set, which can potentially be used to improve fine-gesture recognition for myoelectric robotic hand exoskeleton control.</p>
</abstract>
<kwd-group>
<kwd>myoelectric control</kwd>
<kwd>training set</kwd>
<kwd>similar hand gestures</kwd>
<kwd>sample proportion</kwd>
<kwd>pattern recognition</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="3"/>
<equation-count count="7"/>
<ref-count count="43"/>
<page-count count="12"/>
<word-count count="7351"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="introduction">
<title>Introduction</title>
<p>Myoelectric control systems have been widely used to control assistive and rehabilitation devices, e.g., an EMG-controlled robotic hand exoskeleton (Leonardis et al., <xref ref-type="bibr" rid="B18">2015</xref>), which collects the surface electromyogram (sEMG) from the forearm muscles of the non-paretic hand to control the movement of the exoskeleton and to train and/or guide grasping or pinch tasks of the paretic hand as well. Feature classification of sEMG in the time and/or frequency domain is usually employed to recognize non-paretic hand gestures under the following principle: different hand motions/gestures are dominated by different muscle activity patterns, which result in distinguishable sEMG feature vectors (Lima et al., <xref ref-type="bibr" rid="B20">2016</xref>). Although a variety of myoelectric pattern identification strategies have been proposed to classify the sEMG signals of different hand gestures, very little attention has been paid to the recognition of hand gestures dominated by similar hand muscle activity patterns (AbdelMaseeh et al., <xref ref-type="bibr" rid="B1">2016</xref>). Improving the classification and identification of similar hand gestures is helpful for the development of exquisite myoelectric control systems (Amsuess et al., <xref ref-type="bibr" rid="B4">2015</xref>).</p>
<p>To date, increasing interest has focused on hand gesture recognition based on sEMG features, and high classification accuracies can be obtained (Khezri and Jahed, <xref ref-type="bibr" rid="B17">2007</xref>). Usually, hand gestures dominated by different hand muscle contractions (Young et al., <xref ref-type="bibr" rid="B42">2012</xref>), such as palm extension and closure, wrist flexion and extension, and supination and pronation, are used to test classification efficiency. Therefore, it is believed that high recognition rates of hand gestures strongly depend on the differentiation of EMG activities among these hand motions. Urwyler et al. (<xref ref-type="bibr" rid="B38">2015</xref>) reported a high classification accuracy (above 95%) for four or six movements. Peerdeman et al. (<xref ref-type="bibr" rid="B30">2011</xref>) improved the classification rate for daily hand movements by optimizing the sEMG feature sets and classification algorithm. Although numerous studies have focused on selecting the most suitable signal features and designing classification strategies (Sapsanis et al., <xref ref-type="bibr" rid="B35">2013</xref>), little effort has been devoted to the specific demands of similar-gesture recognition. As one of the most dexterous organs, the human hand can perform a variety of motions with different finger coordination patterns, and some of these motions are controlled by almost the same muscle contraction patterns, such as the hand pinch and hand tripod gestures. Unfortunately, hand gestures with similar muscle activity patterns have usually been excluded from hand motion classification studies because of their low sensitivity and poor classification performance (Castro et al., <xref ref-type="bibr" rid="B5">2015</xref>). 
According to our previous work, the accuracy of similar-gesture recognition for pinching different items or grasping bottles of different weights was less than 80% (Zhang et al., <xref ref-type="bibr" rid="B43">2016</xref>). However, to train the paretic hand after stroke with a robotic hand exoskeleton, it is necessary to identify gestures with high similarity based on sEMG features detected from the contralateral non-paretic hand.</p>
<p>The key obstacle to similar hand gesture recognition is that these hand movements are dominated by the same muscle contraction patterns (Liu et al., <xref ref-type="bibr" rid="B21">2014</xref>). A critical requirement for gesture classification is that the feature vectors of different gestures provide sufficient sensitivity and specificity (Chen et al., <xref ref-type="bibr" rid="B10">2016b</xref>). In other words, the distance between gesture classes in the myoelectric feature space must be sufficiently wide. Unfortunately, the distances between classes of similar gestures are diminished because feature vectors extracted from similar muscle activation patterns are difficult to distinguish, which deteriorates the final classification performance. In addition to feature selection and classification algorithm optimization, the performance of hand gesture recognition depends highly on the quality of the training set (Lorrain et al., <xref ref-type="bibr" rid="B22">2011</xref>). Growing evidence has shown that the design of the training sample assignment, both the sample size and the proportions in the training set, can impact the classification accuracy. Foody et al. (<xref ref-type="bibr" rid="B11">1995</xref>) verified that variations in the size of each class in the training set affected the pattern of class allocation; Chen et al. (<xref ref-type="bibr" rid="B8">2009</xref>) demonstrated that better classifier performance could be achieved by expanding the sample size of the training set. Wigdahl et al. (<xref ref-type="bibr" rid="B39">2013a</xref>) showed that a small training set size could achieve better overall classification results when they varied the number of normal controls in the corresponding training set. 
In general, the sample size and proportions of the training samples play a non-negligible role in classification efficiency, and better classification can be obtained by optimizing the constitution of the training set (Fratini et al., <xref ref-type="bibr" rid="B13">2015</xref>). Therefore, it can be presumed that optimizing the myoelectric training set could likewise impact similar-gesture recognition performance.</p>
<p>According to the principle of inter-limb coordination (Luft et al., <xref ref-type="bibr" rid="B23">2014</xref>), using voluntary movement of the non-paretic hand to control the activities of the paretic hand, i.e., bimanual training, is a promising approach for stroke rehabilitation (Oujamaa et al., <xref ref-type="bibr" rid="B29">2009</xref>; Cauraugh et al., <xref ref-type="bibr" rid="B7">2010</xref>). To accurately control the movement of a hand exoskeleton for paretic hand training, it is essential to detect dexterous hand motions performed with finger coordination patterns, which may sometimes be controlled by very similar hand muscle contractions. This study investigates how the sample arrangement in the training set affects hand gesture classification accuracy. sEMG signals were recorded from forearm hand muscles during similar grasping gestures or similar pinch gestures, and the impact of the sample proportions in the training set on the recognition efficiency of similar hand gestures was evaluated by changing the sample number of each candidate gesture.</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec id="S2-1">
<title>Participants</title>
<p>The protocol of this study was approved by the Institutional Review Board of the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Seven healthy subjects (aged 24.2&#x02009;&#x000B1;&#x02009;1.2&#x02009;years, six males and one female, all right-handed, height: 1.71&#x02009;&#x000B1;&#x02009;0.17&#x02009;m, weight: 65.62&#x02009;&#x000B1;&#x02009;8.1&#x02009;kg) without neurological or muscular disease participated in this study. An explanation of the experiment and protocol was provided to all participants. Written informed consent and permission to publish photographs for scientific and educational purposes were obtained before the procedure.</p>
</sec>
<sec id="S2-2">
<title>Data Acquisition</title>
<p>The sEMG signals were recorded using a surface EMG system (ME6000, Mega Electronics Ltd., Finland). Pairs of disposable surface electrodes were placed on six forearm hand muscles: (1) the extensor pollicis brevis (EPB), (2) extensor indicis proprius (EPI), (3) flexor digitorum sublimis (FDS), (4) palmaris longus (PL), (5) musculus brachioradialis (MB), and (6) extensor digitorum (ED) (Figures <xref ref-type="fig" rid="F1">1</xref>A,B). To minimize movement artifacts, the preamplified EMG sensor units were attached to the limbs with elastic gauze. The recording system bandwidth was 15&#x02013;500&#x02009;Hz, and the sampling rate was 1&#x02009;kHz for sEMG collection.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Placement of disposable surface electrodes on the forearm for surface electromyogram (sEMG) collection: <bold>(A)</bold> ch1(FDS), ch2(PL), and ch3(MB); <bold>(B)</bold> ch4(EPI), ch5(ED), and ch6(EPB); <bold>(C)</bold> sEMG recording during fist-grasping movement.</p></caption>
<graphic xlink:href="fnbot-12-00003-g001.tif"/>
</fig>
</sec>
<sec id="S2-3">
<title>Experiment Protocol</title>
<p>Subjects were required to sit on a chair with their upper limbs vertically relaxed in the sagittal plane and their forearms flexed to 90&#x000B0;, as shown in Figure <xref ref-type="fig" rid="F1">1</xref>C. To study desirable hand movements in daily life (Windrich et al., <xref ref-type="bibr" rid="B41">2016</xref>), two sets of similar hand gestures, grasping gestures (i.e., fist, spherical, and cylindrical grasping) and pinch gestures (finger, key, and tape pinch), were performed with the right hand (Figure <xref ref-type="fig" rid="F2">2</xref>). Verbal and visual cues were given to the participants to perform the designed movements. Each task was sustained for 4&#x02009;s and followed by a 5-s rest interval to avoid fatigue. The procedure was repeated 60 times for each task, giving each subject a total of 180 grasping trials and 180 pinch trials.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Six hand gestures assigned to two groups of similar gestures. The grasping group includes the fist-, spherical-, and cylindrical-grasping gestures <bold>(A&#x02013;C)</bold>; the pinch group includes the finger-, key-, and tape-pinch gestures <bold>(D&#x02013;F)</bold>.</p></caption>
<graphic xlink:href="fnbot-12-00003-g002.tif"/>
</fig>
</sec>
<sec id="S2-4">
<title>Data Analysis</title>
<sec id="S2-4-1">
<title>Pre-process</title>
<p>We analyzed the data off-line with a customized Matlab program (The MathWorks, Natick, MA, USA). The recorded sEMG signals were bandpass filtered with a Butterworth digital filter (10&#x02013;400&#x02009;Hz, fourth order, zero phase) followed by a 50-Hz digital notch filter to suppress power-line interference. Furthermore, within a 256-ms sliding window, the average IEMG (integrated sEMG) (Phinyomark et al., <xref ref-type="bibr" rid="B31">2012</xref>) value was calculated as
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:msub><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="italic">iemg</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>256</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mn>255</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mfenced separators="" open="|" close="|"><mml:mi>x</mml:mi><mml:mo class="MathClass-open">(</mml:mo><mml:mi>i</mml:mi><mml:mo class="MathClass-close">)</mml:mo></mml:mfenced></mml:math></disp-formula>
where <italic>x</italic>(<italic>i</italic>) was the <italic>i</italic>th sampled sEMG signal, and <italic>X<sub>iemg</sub></italic> was the IEMG value within the 256-ms time window. Once that value exceeded a predefined threshold, the muscle was considered activated for a grasping or pinch movement. Then, the next 4-s sEMG signal was segmented into 256-ms analysis windows with an overlap of 50&#x02009;ms for further processing.</p>
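<p>For illustration, the pre-processing chain above (Butterworth band-pass, 50-Hz notch, IEMG-based onset detection per Eq. 1, and sliding-window segmentation) can be sketched in Python with SciPy. The study itself used a customized Matlab program; the function names and the notch quality factor below are our own assumptions.</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # Hz, sampling rate used in the study

def preprocess(emg):
    """Band-pass 10-400 Hz (4th order, zero phase via filtfilt), then 50-Hz notch."""
    b, a = butter(4, [10, 400], btype="bandpass", fs=FS)
    emg = filtfilt(b, a, emg)
    bn, an = iirnotch(50, Q=30, fs=FS)  # Q=30 is an assumed quality factor
    return filtfilt(bn, an, emg)

def iemg(window):
    """Average rectified value over a 256-sample (256-ms at 1 kHz) window, Eq. (1)."""
    return np.mean(np.abs(window))

def detect_onset(emg, threshold, win=256):
    """First sample index whose 256-ms IEMG exceeds the predefined threshold."""
    for start in range(0, len(emg) - win + 1):
        if iemg(emg[start:start + win]) > threshold:
            return start
    return None  # muscle never activated

def segment(emg, win=256, overlap=50):
    """Slice the 4-s activated signal into 256-ms windows with 50-ms overlap."""
    step = win - overlap
    return [emg[s:s + win] for s in range(0, len(emg) - win + 1, step)]
```

With these defaults, a 5000-sample (5-s) signal yields 24 overlapping analysis windows.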
</sec>
<sec id="S2-4-2">
<title>sEMG Feature Vector Construction</title>
<p>The segmented sEMG signal of a grasping or pinch trial was processed according to the following flow diagram (Figure <xref ref-type="fig" rid="F3">3</xref>) to construct the feature vector for hand gesture recognition.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>The flow diagram for the surface electromyogram (sEMG) feature vector construction.</p></caption>
<graphic xlink:href="fnbot-12-00003-g003.tif"/>
</fig>
<p>The empirical mode decomposition (EMD) (Shang et al., <xref ref-type="bibr" rid="B37">2011</xref>; Hong et al., <xref ref-type="bibr" rid="B15">2016</xref>) was employed to extract multichannel-recorded sEMG features for pattern recognition. In each trial, the 4-s recorded sEMG was extracted per channel, and the EMD was used to decompose the sEMG into eight intrinsic mode functions (IMFs) as
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtext>sEMG</mml:mtext><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>8</mml:mn></mml:mrow></mml:munderover></mml:mstyle></mml:mstyle><mml:mspace width="0.3em"/><mml:mtext>IM</mml:mtext><mml:msub><mml:mrow><mml:mtext>F</mml:mtext></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:mi>r</mml:mi></mml:math></disp-formula>
where <italic>r</italic> is the residual component, representing the central tendency of the sEMG signal. An example of a sEMG segment during a fist-grasping trial and its first four IMFs is illustrated in Figure <xref ref-type="fig" rid="F4">4</xref>.
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Surface electromyogram (sEMG) activities recorded from right FDS(Ch1), PL(Ch2), MB(Ch3), EPI(Ch4), ED(Ch5), and EPB(Ch6) during fist-grasping or finger-pinch <bold>(A)</bold>. sEMG signal segment collected from right FDS(Ch1) and its first four intrinsic mode functions for a fist-grasping trial <bold>(B)</bold>. ED, extensor digitorum; MB, musculus brachioradialis; EPB, extensor pollicis brevis; FDS, flexor digitorum sublimis; EPI, extensor indicis proprius.</p></caption>
<graphic xlink:href="fnbot-12-00003-g004.tif"/>
</fig></p>
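<p>A minimal Python sketch of the EMD sifting procedure behind Eq. 2 is given below. It uses cubic-spline envelopes of the local extrema and a fixed number of sifting iterations per IMF; a production analysis would use a mature EMD implementation with proper stopping criteria, so the simplifications here are our own assumptions. By construction, the signal equals the sum of the extracted IMFs plus the residual, as in Eq. 2.</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope_mean(x):
    """Mean of the upper/lower cubic-spline envelopes; None if too few extrema."""
    t = np.arange(len(x))
    maxi = argrelextrema(x, np.greater)[0]  # interior local maxima
    mini = argrelextrema(x, np.less)[0]     # interior local minima
    if len(maxi) < 2 or len(mini) < 2:
        return None
    upper = CubicSpline(np.r_[0, maxi, len(x) - 1], np.r_[x[0], x[maxi], x[-1]])(t)
    lower = CubicSpline(np.r_[0, mini, len(x) - 1], np.r_[x[0], x[mini], x[-1]])(t)
    return (upper + lower) / 2

def emd(signal, n_imfs=8, n_sifts=10):
    """Decompose `signal` into up to `n_imfs` IMFs plus a residual (Eq. 2)."""
    residual = np.asarray(signal, dtype=float).copy()
    imfs = []
    for _ in range(n_imfs):
        if _envelope_mean(residual) is None:   # residual is (near-)monotone: stop
            break
        h = residual.copy()
        for _ in range(n_sifts):               # fixed sift count (a simplification)
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m                          # subtract the envelope mean
        imfs.append(h)
        residual = residual - h
    return imfs, residual
```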
<p>To quantify the sEMG intensity, the zero crossing (ZC) (Jain et al., <xref ref-type="bibr" rid="B16">2000</xref>) and Willison amplitude (WAMP) (Castro et al., <xref ref-type="bibr" rid="B6">2014</xref>) were computed for each IMF component within a 2-s window, channel by channel. The WAMP value of a resulting sEMG IMF within the 2-s window was calculated as
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtext>WAMP</mml:mtext><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2000</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mspace width="0.3em"/><mml:mfenced separators="" open="|" close="|"><mml:mrow><mml:mtext>sEMG</mml:mtext><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:mfenced><mml:mo class="MathClass-bin">&#x02212;</mml:mo><mml:mtext>sEMG</mml:mtext><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>k</mml:mi><mml:mo class="MathClass-bin">&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mfenced></mml:mrow></mml:mfenced></mml:math></disp-formula></p>
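<p>The two features can be sketched as follows. Note that Eq. 3 sums all absolute first differences, whereas the classical Willison amplitude counts only differences exceeding a threshold; the optional dead-band parameter eps in the zero-crossing count below is likewise our own addition for noise robustness.</p>

```python
import numpy as np

def zero_crossings(x, eps=0.0):
    """Count sign changes of the signal; `eps` is an optional noise dead-band."""
    s = np.signbit(x)
    crossings = np.logical_and(s[1:] != s[:-1], np.abs(np.diff(x)) > eps)
    return int(np.count_nonzero(crossings))

def wamp(x):
    """Sum of absolute first differences over the window, as written in Eq. (3)."""
    return float(np.sum(np.abs(np.diff(x))))
```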
<p>To reduce the dimension of the feature set, principal component analysis (Francini et al., <xref ref-type="bibr" rid="B12">2017</xref>) was applied to select the IMFs with the largest contributions. Here, the first four IMF components were selected, as their cumulative contribution ratio exceeded 90%. As a result, the dimension of the sEMG feature vector could be reduced to eight (2 features&#x02009;&#x000D7;&#x02009;4 IMFs) per channel, and a total of 48 sEMG features were extracted for each trial.
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable columnalign="left" class="align"><mml:mtr><mml:mtd columnalign="left" class="align-odd"><mml:msub><mml:mrow><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mfenced separators="" open="[" close=""><mml:mrow><mml:mtext>Z</mml:mtext><mml:msub><mml:mrow><mml:mtext>C</mml:mtext></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mtext>WAM</mml:mtext><mml:msub><mml:mrow><mml:mtext>P</mml:mtext></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mtext>, Z</mml:mtext><mml:msub><mml:mrow><mml:mtext>C</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mtext>WAM</mml:mtext><mml:msub><mml:mrow><mml:mtext>P</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mtext>Z</mml:mtext><mml:msub><mml:mrow><mml:mtext>C</mml:mtext></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mtext>WAM</mml:mtext><mml:msub><mml:mrow><mml:mtext>P</mml:mtext></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo></mml:mrow></mml:mfenced><mml:mfenced separators="" open="" close="]"><mml:mrow><mml:mtext>Z</mml:mtext><mml:msub><mml:mrow><mml:mtext>C</mml:mtext></mml:mrow><mml:mrow><mml:mn>4</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-punc">,</mml:mo><mml:mtext>WAM</mml:mtext><mml:msub><mml:mrow><mml:mtext>P</mml:mtext></mml:mrow><mml:mrow><mml:mn>4</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfenced></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mtable columnalign="left" class="align"><mml:mtr><mml:mtd columnalign="right" class="align-odd"><mml:mi>B</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mi>A</mml:mi><mml:mn>1</mml:mn><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>A</mml:mi><mml:mn>2</mml:mn><mml:mo class="MathClass-punc">,</mml:mo><mml:mo class="MathClass-op">&#x02026;</mml:mo><mml:mo class="MathClass-punc">,</mml:mo><mml:mi>A</mml:mi><mml:mn>6</mml:mn></mml:mrow></mml:mfenced></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <italic>A<sub>j</sub></italic> is the feature vector of the <italic>j</italic>th channel (<italic>j</italic>&#x02009;&#x0003D;&#x02009;1, &#x02026;&#x02009;, 6), and <italic>B</italic> is the myoelectric feature matrix of a gesture, comprising the six channels of sEMG features.</p>
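<p>The 90% cumulative-contribution criterion used above can be sketched with a small eigendecomposition of the feature covariance: keep the smallest number of principal components whose cumulative explained-variance ratio reaches the target. The function below is illustrative, not the authors' code.</p>

```python
import numpy as np

def n_components_for(X, target=0.90):
    """Smallest number of principal components whose cumulative explained
    variance ratio reaches `target` (90% in the study). X is (samples, features)."""
    Xc = X - X.mean(axis=0)                       # center the features
    cov = np.cov(Xc, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending variances
    ratio = np.cumsum(eigvals) / np.sum(eigvals)       # cumulative contribution
    return int(np.searchsorted(ratio, target) + 1)
```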
</sec>
<sec id="S2-4-3">
<title>Assessing the Recognition Efficiency with Different Sample Proportions in the Training Set</title>
<p>As mentioned above, 60 samples of the sEMG feature vector were extracted for each hand gesture in the grasping group or pinch group. For every grasping or pinch gesture, 70% of the 60 sEMG feature samples (42 samples) were randomly selected as candidate training samples, whereas the remaining 30% (18 samples) constituted the testing set. A modified classifier combining the support vector machine (SVM) (Alba et al., <xref ref-type="bibr" rid="B3">2007</xref>) and a genetic algorithm (GA) (Li et al., <xref ref-type="bibr" rid="B19">2017</xref>; Serdio et al., <xref ref-type="bibr" rid="B36">2017</xref>) was employed for its low computation cost (Marchetti et al., <xref ref-type="bibr" rid="B25">2013</xref>; Martins et al., <xref ref-type="bibr" rid="B26">2014</xref>). In brief, the GA-modified SVM classifier used the sEMG feature vector (B&#x02009;&#x0003D;&#x02009;[A1, A2, &#x02026;&#x02009;, A6]) as the training sample for hand gesture recognition. The GA was employed to filter and optimize the SVM penalty coefficient (<italic>c</italic>) and kernel parameter (<italic>g</italic>), with a maximum of 100 generations. Thereafter, the optimized SVM classifier with fivefold cross-validation was applied to classify the hand gestures and evaluate the solutions.</p>
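<p>A toy sketch of such a GA-tuned SVM is shown below using scikit-learn: the GA searches the penalty coefficient <italic>c</italic> and kernel parameter <italic>g</italic> in log2 space, with fivefold cross-validation accuracy as the fitness. The population size, selection scheme, crossover, and mutation scale are our own assumptions, not the study's implementation, which used up to 100 generations.</p>

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(params, X, y):
    """Fivefold cross-validation accuracy of an RBF SVM at (log2 c, log2 g)."""
    c, g = 2.0 ** params
    clf = SVC(C=c, gamma=g, kernel="rbf")
    return cross_val_score(clf, X, y, cv=5).mean()

def ga_svm(X, y, pop_size=10, generations=20, bounds=(-5.0, 5.0)):
    """Toy GA over (log2 c, log2 g); returns the best c, g, and CV accuracy."""
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p, X, y) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = (a + b) / 2 + rng.normal(0, 0.5, 2)   # crossover + mutation
            children.append(np.clip(child, *bounds))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(p, X, y) for p in pop])
    best = pop[np.argmax(scores)]
    return 2.0 ** best[0], 2.0 ** best[1], scores.max()
```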
<p>To assess the effect of the sample proportions of similar hand gestures on the recognition rate, we constructed a training set with a constant size of 52 samples for the grasping group or pinch group, while the sample number of each gesture was adjusted to alter the sample proportions in the training set. When studying how the sample proportion of a grasping or pinch gesture affects the classification rate, we increased the sample number of that gesture step by step and decreased the sample numbers of the other two gestures to maintain a constant training set size. As the example in Table <xref ref-type="table" rid="T1">1</xref> shows, when inspecting the fist-grasping gesture, we increased the sample proportion of fist grasping from &#x0007E;10% (6 samples out of 52) to &#x0007E;80% (42 samples) in the grasping group, while the sample proportions of spherical and cylindrical grasping decreased from &#x0007E;45% (23 samples) to &#x0007E;10% (5 samples). Overall, the training set always maintained a constant size of 52 samples, while the testing set maintained a size of 54 samples (18 samples each for fist, spherical, and cylindrical grasping). The overall recognition rate of the three gestures was then calculated on the testing set of the grasping group (fist, spherical, and cylindrical grasping). A similar evaluation procedure was applied to spherical or cylindrical grasping in the grasping group, and to finger, key, or tape pinch in the pinch group.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Example of sample number and proportion assignment in the training set and testing set for the fist grasping.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left">Gesture</th>
<th align="center" valign="top" colspan="9">Number of sample (sample proportion)<hr/></th>
</tr>
<tr>
<th align="center"/>
<th align="center" valign="top" colspan="8">Training set</th>
<th align="center">Testing set</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Fist grasping</td>
<td align="center">6 (11.5%)</td>
<td align="center">10 (19.2%)</td>
<td align="center">16 (30.8%)</td>
<td align="center">20 (38.5%)</td>
<td align="center">26 (50%)</td>
<td align="center">30 (57.7%)</td>
<td align="center">36 (69.2%)</td>
<td align="center">42 (80.8%)</td>
<td align="center">18 (33.3%)</td>
</tr>
<tr>
<td align="left">Spherical grasping</td>
<td align="center">23 (44.2%)</td>
<td align="center">21 (40.4%)</td>
<td align="center">18 (34.6%)</td>
<td align="center">16 (30.8%)</td>
<td align="center">13 (25%)</td>
<td align="center">11 (21.2%)</td>
<td align="center">8 (15.4%)</td>
<td align="center">5 (9.6%)</td>
<td align="center">18 (33.3%)</td>
</tr>
<tr>
<td align="left">Cylindrical grasping</td>
<td align="center">23 (44.2%)</td>
<td align="center">21 (40.4%)</td>
<td align="center">18 (34.6%)</td>
<td align="center">16 (30.8%)</td>
<td align="center">13 (25%)</td>
<td align="center">11 (21.2%)</td>
<td align="center">8 (15.4%)</td>
<td align="center">5 (9.6%)</td>
<td align="center">18 (33.3%)</td>
</tr>
<tr>
<td align="left">Total samples</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">52</td>
<td align="center">54</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="S2-4-4">
<title>Measurement of the Feature Space Distance among Similar Hand Gestures</title>
<p>The Mahalanobis distance (Al-Angari et al., <xref ref-type="bibr" rid="B2">2016</xref>) was used to quantify changes in the feature space between similar hand gestures. The inter-class distance (<italic>D</italic><sub>out</sub>) measures the separation between the classes of different motions:
<disp-formula id="E6"><mml:math id="M6"><mml:mtable columnalign="left" class="align"><mml:mtr><mml:mtd columnalign="right" class="align-odd"><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mtext>out</mml:mtext><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:msub><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mfenced separators="" open="(" close=""><mml:mrow><mml:mtext>mi</mml:mtext><mml:msub><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow><mml:mrow><mml:mtable class="array"><mml:mtr><mml:mtd><mml:mi>j</mml:mi><mml:mo class="MathClass-rel">&#x0003D;</mml:mo><mml:mn>1</mml:mn><mml:mo class="MathClass-punc">,</mml:mo><mml:mn>2</mml:mn><mml:mo class="MathClass-punc">,</mml:mo><mml:mn>3</mml:mn><mml:mo class="MathClass-punc">;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>j</mml:mi><mml:mo class="MathClass-rel">&#x02260;</mml:mo><mml:mi>i</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:msub><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:mrow></mml:mfenced><mml:mfenced separators="" open="" close=")"><mml:mrow><mml:mo class="MathClass-bin">&#x000D7;</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:msub><mml:mrow><mml:mn>&#x003BC;</mml:mn></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo 
class="MathClass-bin">&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mn>&#x003BC;</mml:mn></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfenced></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mfenced separators="" open="[" close="]"><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac><mml:mfenced separators="" open="(" close=")"><mml:mrow><mml:mo class="MathClass-op">&#x02211;</mml:mo><mml:mi>i</mml:mi><mml:mo class="MathClass-bin">&#x0002B;</mml:mo><mml:mo class="MathClass-op">&#x02211;</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:mfenced></mml:mrow></mml:mfenced></mml:mrow><mml:mrow><mml:mo class="MathClass-bin">&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo class="MathClass-open">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mn>&#x003BC;</mml:mn></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo class="MathClass-bin">&#x02212;</mml:mo><mml:msub><mml:mrow><mml:mn>&#x003BC;</mml:mn></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo class="MathClass-close">)</mml:mo></mml:mrow></mml:mrow></mml:msqrt></mml:mrow></mml:mfenced></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where &#x003BC;<italic><sub>i</sub></italic> is the centroid of motion <italic>i</italic>, &#x003BC;<italic><sub>j</sub></italic> is the centroid of motion <italic>j</italic>, and &#x003A3;<italic>i</italic> and &#x003A3;<italic>j</italic> are their covariance matrices. The equation takes, for each motion, the minimum Mahalanobis distance to the other motions. A smaller <italic>D</italic><sub>out</sub> indicates a shorter distance between the classes of different motions. For each sample proportion in the grasping or pinch training set, we calculated <italic>D</italic><sub>out</sub> between the sEMG feature vectors of any two similar hand gestures to compare the inter-class distances.</p>
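The computation can be sketched numerically as follows; NumPy and synthetic two-dimensional feature vectors are assumed, and the function and variable names are illustrative rather than the authors' implementation.

```python
import numpy as np

def d_out(features, labels):
    """Average, over the motion classes, of the minimum Mahalanobis
    distance to the other classes, using the pooled covariance
    0.5 * (Sigma_i + Sigma_j) as in the equation above."""
    classes = np.unique(labels)
    stats = {c: (features[labels == c].mean(axis=0),
                 np.cov(features[labels == c], rowvar=False))
             for c in classes}
    mins = []
    for i in classes:
        mu_i, cov_i = stats[i]
        dists = []
        for j in classes:
            if j == i:
                continue
            mu_j, cov_j = stats[j]
            diff = mu_i - mu_j
            pooled = 0.5 * (cov_i + cov_j)
            dists.append(float(np.sqrt(diff @ np.linalg.inv(pooled) @ diff)))
        mins.append(min(dists))
    return float(np.mean(mins))

# Synthetic 2-D feature vectors for three motion classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(30, 2))
               for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 30)
print(d_out(X, y))  # larger values indicate better-separated classes
```

Note that the Mahalanobis distance is invariant to a common rescaling of the features, so `d_out` reflects class separation relative to within-class scatter rather than raw Euclidean spread.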
</sec>
</sec>
</sec>
<sec id="S3">
<title>Results</title>
<sec id="S3-5">
<title>Recognition Rate of a Hand Gesture Increased with Its Proportion in the Training Set</title>
<p>In both task groups (grasping and pinch gestures), we tested how the sample proportion of a specific hand gesture of interest in the training set affected its classification performance. The proportion of the gesture of interest increased from &#x0007E;10 to &#x0007E;80%, while the proportions of the two other gestures decreased from &#x0007E;45 to &#x0007E;10%, with the training set maintaining a constant size of 52 samples (Table <xref ref-type="table" rid="T1">1</xref>). As shown in Table <xref ref-type="table" rid="T2">2</xref> and Figure <xref ref-type="fig" rid="F5">5</xref>A, the recognition rate, or classification accuracy (Acc.), of the fist-grasping gesture increased from 61.1 to 88.9% when its sample number increased from 6 to 42. Meanwhile, as the sample numbers for the spherical- and cylindrical-grasping gestures decreased from 23 to 5, their recognition rates decreased to 72.2 and 66.7%, respectively (Figure <xref ref-type="fig" rid="F5">5</xref>A, lower part). As illustrated in Figures <xref ref-type="fig" rid="F5">5</xref>B,C, the recognition rates of spherical and cylindrical grasping exhibited similar trends when the sample numbers were adjusted step by step.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>The sample proportions for fist, spherical, and cylindrical grasping and the corresponding classification accuracies (mean&#x02009;&#x000B1;&#x02009;SD).</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" colspan="2">Fist grasping<hr/></th>
<th align="center" valign="top" colspan="2">Spherical grasping<hr/></th>
<th align="center" valign="top" colspan="2">Cylindrical grasping<hr/></th>
<th align="center">Overall Acc. (%)</th>
</tr>
<tr>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center"/>
</tr>
</thead>
<tbody>
<tr>
<td align="left">6/52</td>
<td align="center">61.1&#x02009;&#x000B1;&#x02009;1.5</td>
<td align="center">23/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.5</td>
<td align="center">23/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.7</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;2.8</td>
</tr>
<tr>
<td align="left">10/52</td>
<td align="center">72.2&#x02009;&#x000B1;&#x02009;0.8</td>
<td align="center">21/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.3</td>
<td align="center">21/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.8</td>
<td align="center">79.0&#x02009;&#x000B1;&#x02009;2.1</td>
</tr>
<tr>
<td align="left">16/52</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;1.4</td>
<td align="center">18/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.4</td>
<td align="center">18/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.1</td>
<td align="center">79.6&#x02009;&#x000B1;&#x02009;1.7</td>
</tr>
<tr>
<td align="left">20/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;0.7</td>
<td align="center">16/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.2</td>
<td align="center">16/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.4</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.2</td>
</tr>
<tr>
<td align="left">26/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.4</td>
<td align="center">13/52</td>
<td align="center">77.9&#x02009;&#x000B1;&#x02009;0.8</td>
<td align="center">13/52</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">81.4&#x02009;&#x000B1;&#x02009;2.5</td>
</tr>
<tr>
<td align="left">30/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.2</td>
<td align="center">11/52</td>
<td align="center">72.2&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">11/52</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;1.5</td>
<td align="center">78.0&#x02009;&#x000B1;&#x02009;1.9</td>
</tr>
<tr>
<td align="left">36/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.3</td>
<td align="center">8/52</td>
<td align="center">66.7&#x02009;&#x000B1;&#x02009;1.3</td>
<td align="center">8/52</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;2.1</td>
<td align="center">77.5&#x02009;&#x000B1;&#x02009;2.2</td>
</tr>
<tr>
<td align="left">42/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">5/52</td>
<td align="center">66.7&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">5/52</td>
<td align="center">72.2&#x02009;&#x000B1;&#x02009;2.5</td>
<td align="center">76.0&#x02009;&#x000B1;&#x02009;2.6</td>
</tr>
<tr>
<td align="left">17/51</td>
<td align="center">79.3&#x02009;&#x000B1;&#x02009;2.5</td>
<td align="center">17/51</td>
<td align="center">80.2&#x02009;&#x000B1;&#x02009;1.6</td>
<td align="center">17/51</td>
<td align="center">82.3&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">80.8&#x02009;&#x000B1;&#x02009;2.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>The classification accuracy (Acc.) of a grasping gesture increased as its sample proportion in the training set increased and decreased as its sample proportion decreased. <bold>(A)</bold> The Acc. of fist grasping increased as its sample proportion in the training set increased (upper), and the Acc. of spherical/cylindrical grasping decreased as their sample proportions decreased (lower); <bold>(B)</bold> the Acc. of spherical grasping increased as its sample proportion increased (upper), and the Acc. of fist/cylindrical grasping decreased as their sample proportions decreased (lower); <bold>(C)</bold> the Acc. of cylindrical grasping increased as its sample proportion increased (upper), and the Acc. of fist/spherical grasping decreased as their sample proportions decreased (lower).</p></caption>
<graphic xlink:href="fnbot-12-00003-g005.tif"/>
</fig>
<p>For the pinch gesture group (see Figure <xref ref-type="fig" rid="F6">6</xref>), the impact of the sample proportion of a specific pinch gesture of interest on its recognition rate was similar to that in the grasping group. As listed in Table <xref ref-type="table" rid="T3">3</xref>, when the sEMG feature sample number of the finger-pinch gesture increased from 6 to 42, its recognition rate, or classification accuracy (Acc.), increased from 50 to 94.4% (see Figure <xref ref-type="fig" rid="F6">6</xref>A). Meanwhile, as the sample numbers for the key- and tape-pinch gestures decreased from 23 to 5, the corresponding recognition rates decreased to 66.7 and 77.8%, respectively (Figure <xref ref-type="fig" rid="F6">6</xref>A, lower part).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>The classification accuracy (Acc.) of a pinch gesture increased as its sample proportion in the training set increased and decreased as its sample proportion decreased. <bold>(A)</bold> The Acc. of finger pinch increased as its sample proportion in the training set increased (upper), and the Acc. of key/tape pinch decreased as their sample proportions decreased (lower); <bold>(B)</bold> the Acc. of key pinch increased as its sample proportion increased (upper), and the Acc. of finger/tape pinch decreased as their sample proportions decreased (lower); <bold>(C)</bold> the Acc. of tape pinch increased as its sample proportion increased (upper), and the Acc. of finger/key pinch decreased as their sample proportions decreased (lower).</p></caption>
<graphic xlink:href="fnbot-12-00003-g006.tif"/>
</fig>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>The sample proportions for finger, key, and tape pinch and the corresponding classification accuracies (mean&#x02009;&#x000B1;&#x02009;SD).</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top" colspan="2">Finger pinch<hr/></th>
<th align="center" valign="top" colspan="2">Key pinch<hr/></th>
<th align="center" valign="top" colspan="2">Tape pinch<hr/></th>
<th align="center">Overall Acc. (%)</th>
</tr>
<tr>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center">Sample proportion</th>
<th align="center">Acc. (%)</th>
<th align="center"/>
</tr>
</thead>
<tbody>
<tr>
<td align="left">6/52</td>
<td align="center">50.0&#x02009;&#x000B1;&#x02009;1.8</td>
<td align="center">23/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;3.1</td>
<td align="center">23/52</td>
<td align="center">94.4&#x02009;&#x000B1;&#x02009;2.3</td>
<td align="center">75.9&#x02009;&#x000B1;&#x02009;2.1</td>
</tr>
<tr>
<td align="left">10/52</td>
<td align="center">55.6&#x02009;&#x000B1;&#x02009;2.1</td>
<td align="center">21/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.8</td>
<td align="center">21/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.4</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;1.8</td>
</tr>
<tr>
<td align="left">16/52</td>
<td align="center">66.7&#x02009;&#x000B1;&#x02009;2.2</td>
<td align="center">18/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.5</td>
<td align="center">18/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;2.2</td>
<td align="center">79.6&#x02009;&#x000B1;&#x02009;1.6</td>
</tr>
<tr>
<td align="left">20/52</td>
<td align="center">72.2&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">16/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.4</td>
<td align="center">16/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.6</td>
<td align="center">81.5&#x02009;&#x000B1;&#x02009;2.8</td>
</tr>
<tr>
<td align="left">26/52</td>
<td align="center">88.9&#x02009;&#x000B1;&#x02009;1.5</td>
<td align="center">13/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">13/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.6</td>
<td align="center">85.1&#x02009;&#x000B1;&#x02009;2.2</td>
</tr>
<tr>
<td align="left">30/52</td>
<td align="center">94.4&#x02009;&#x000B1;&#x02009;1.4</td>
<td align="center">11/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.1</td>
<td align="center">11/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;2.7</td>
<td align="center">87.0&#x02009;&#x000B1;&#x02009;2.7</td>
</tr>
<tr>
<td align="left">36/52</td>
<td align="center">94.4&#x02009;&#x000B1;&#x02009;2.6</td>
<td align="center">8/52</td>
<td align="center">72.2&#x02009;&#x000B1;&#x02009;1.7</td>
<td align="center">8/52</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;1.9</td>
<td align="center">83.3&#x02009;&#x000B1;&#x02009;3.3</td>
</tr>
<tr>
<td align="left">42/52</td>
<td align="center">94.4&#x02009;&#x000B1;&#x02009;3.0</td>
<td align="center">5/52</td>
<td align="center">66.7&#x02009;&#x000B1;&#x02009;2.2</td>
<td align="center">5/52</td>
<td align="center">77.8&#x02009;&#x000B1;&#x02009;2.2</td>
<td align="center">79.6&#x02009;&#x000B1;&#x02009;1.8</td>
</tr>
<tr>
<td align="left">17/51</td>
<td align="center">70.5&#x02009;&#x000B1;&#x02009;2.2</td>
<td align="center">17/51</td>
<td align="center">80.6&#x02009;&#x000B1;&#x02009;2.9</td>
<td align="center">17/51</td>
<td align="center">80.1&#x02009;&#x000B1;&#x02009;1.8</td>
<td align="center">79.8&#x02009;&#x000B1;&#x02009;2.1</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="S3-6">
<title>Optimizing the Sample Proportion in the Training Set Improving Classification Performance of Similar Hand Gestures</title>
<p>As illustrated in Figure <xref ref-type="fig" rid="F7">7</xref>A, as the sample proportion for fist grasping increased from &#x0007E;10 to &#x0007E;80% and the sample proportions for spherical and cylindrical grasping decreased from &#x0007E;45 to &#x0007E;10%, the overall recognition rate of the three grasping gestures first increased and then decreased. The peak recognition rate reached 83.3% when the sample proportions for fist, spherical, and cylindrical grasping were &#x0007E;40, &#x0007E;30, and &#x0007E;30%, respectively. Similarly, when we adjusted the sample proportion for spherical or cylindrical grasping from &#x0007E;10 to &#x0007E;80%, the overall recognition rate also first increased and then decreased. The peak recognition rate (81.5%) occurred when the sample proportions for fist, spherical, and cylindrical grasping were &#x0007E;35, &#x0007E;30, and &#x0007E;35%, respectively (Figure <xref ref-type="fig" rid="F7">7</xref>B), and the overall recognition rate also reached a peak (81.5%) when the proportions were &#x0007E;30, &#x0007E;30, and &#x0007E;40%, respectively (Figure <xref ref-type="fig" rid="F7">7</xref>C). In contrast, when the gesture samples in the grasping training set were assigned uniformly (i.e., 17 samples each for fist, spherical, and cylindrical grasping), the overall recognition rate was 80.8% (Table <xref ref-type="table" rid="T2">2</xref>, last row).</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>The overall recognition rates of the fist-, spherical-, and cylindrical-grasping gestures varied with the sample proportion. <bold>(A)</bold> The overall Acc. varied with the sample proportion of fist grasping; <bold>(B)</bold> the overall Acc. varied with the sample proportion of spherical grasping; <bold>(C)</bold> the overall Acc. varied with the sample proportion of cylindrical grasping. &#x025B4; indicates the Acc. when the sample proportions of the fist-, spherical-, and cylindrical-grasping gestures were one third each; &#x025A0; indicates the peak Acc. when the sample proportion was optimized in the grasping training set.</p></caption>
<graphic xlink:href="fnbot-12-00003-g007.tif"/>
</fig>
<p>For the pinch gesture group, a similar trend in the overall recognition rate was observed. When the sample proportion for finger pinch varied from &#x0007E;10 to &#x0007E;80%, the peak overall recognition rate (87%) was obtained at a finger-pinch sample proportion of &#x0007E;60%, with the sample proportions for key and tape pinch at &#x0007E;20% each (see Figure <xref ref-type="fig" rid="F8">8</xref>A). A peak in the overall recognition rate (83.3%) also occurred when the training sample proportions for finger, key, and tape pinch were &#x0007E;30, &#x0007E;40, and &#x0007E;30%, respectively (Figure <xref ref-type="fig" rid="F8">8</xref>B), and another peak (81.5%) occurred when the proportions were &#x0007E;35, &#x0007E;35, and &#x0007E;30%, respectively (Figure <xref ref-type="fig" rid="F8">8</xref>C). When the samples of the pinch gestures were assigned equally in the training set (i.e., 17 samples each for finger, key, and tape pinch) (Table <xref ref-type="table" rid="T3">3</xref>, last row), the overall recognition rate was only 79.8%.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>The overall recognition rates of the finger-, key-, and tape-pinch gestures varied with the sample proportion. <bold>(A)</bold> The overall Acc. varied with the sample proportion of finger pinch; <bold>(B)</bold> the overall Acc. varied with the sample proportion of key pinch; <bold>(C)</bold> the overall Acc. varied with the sample proportion of tape pinch. &#x025B4; indicates the Acc. when the sample proportions of the finger-, key-, and tape-pinch gestures were one third each; &#x025A0; indicates the peak Acc. when the sample proportion was optimized in the pinch training set.</p></caption>
<graphic xlink:href="fnbot-12-00003-g008.tif"/>
</fig>
</sec>
<sec id="S3-7">
<title>Gesture Sample Proportion in the Training Set Affects the Inter-Class Distance in Feature Space</title>
<p>The sEMG feature vectors of a gesture can be considered as a cluster to be classified among the candidate hand gestures (see Figure <xref ref-type="fig" rid="F9">9</xref>). To determine how the sample proportion affects the recognition rate of hand gestures, we assessed the discriminability of the sEMG feature vectors in the feature space using the Mahalanobis distance between hand gesture classes (<italic>D</italic><sub>out</sub>). We compared the <italic>D</italic><sub>out</sub> values of any two gestures in the grasping group (i.e., fist grasping vs. cylindrical grasping, fist grasping vs. spherical grasping, and cylindrical grasping vs. spherical grasping) and in the pinch group (i.e., finger pinch vs. key pinch, finger pinch vs. tape pinch, and key pinch vs. tape pinch).</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>The distance between classes in the feature space of the grasping-gesture and pinch-gesture groups. <bold>(A)</bold> Cluster analysis plot when the samples of the fist-, cylindrical-, and spherical-grasping gestures were one third each; <bold>(B)</bold> cluster analysis plot when the samples of the fist-, cylindrical-, and spherical-grasping gestures were assigned as 40, 30, and 30%; <bold>(C)</bold> comparison of <italic>D</italic><sub>out</sub> values for the grasping-group gestures; <bold>(D)</bold> cluster analysis plot when the samples of the finger-, key-, and tape-pinch gestures were one third each; <bold>(E)</bold> cluster analysis plot when the samples of the finger-, key-, and tape-pinch gestures were assigned as 60, 20, and 20%; <bold>(F)</bold> comparison of <italic>D</italic><sub>out</sub> values for the pinch-group gestures.</p></caption>
<graphic xlink:href="fnbot-12-00003-g009.tif"/>
</fig>
<p>As shown in Figures <xref ref-type="fig" rid="F9">9</xref>C,F, changing the sample proportion assignment in the training set can significantly affect the Mahalanobis distance between any two hand gestures (<italic>D</italic><sub>out</sub>). In the grasping group, when the samples of the fist-, cylindrical-, and spherical-grasping gestures were conventionally set at one third each, the <italic>D</italic><sub>out</sub> values for the fist-cylindrical, fist-spherical, and cylindrical-spherical pairs were 0.5601&#x02009;&#x000B1;&#x02009;0.21, 0.7347&#x02009;&#x000B1;&#x02009;0.18, and 0.9366&#x02009;&#x000B1;&#x02009;0.15, respectively, and the corresponding overall recognition rate was 80.8% (see Figure <xref ref-type="fig" rid="F7">7</xref>). However, when the samples of the fist-, cylindrical-, and spherical-grasping gestures were assigned as 40, 30, and 30%, the <italic>D</italic><sub>out</sub> values for these pairs were extended to 0.8252&#x02009;&#x000B1;&#x02009;0.19, 1.4374&#x02009;&#x000B1;&#x02009;0.31, and 1.7255&#x02009;&#x000B1;&#x02009;0.46, respectively, and the corresponding overall recognition rate was 83.3% (see Figure <xref ref-type="fig" rid="F7">7</xref>A). The Mahalanobis distances for the fist-spherical and cylindrical-spherical pairs were nearly doubled. Similarly, in the pinch group, when the samples of the finger-, key-, and tape-pinch gestures were conventionally set at one third each, the <italic>D</italic><sub>out</sub> values for the finger-key, finger-tape, and key-tape pairs were 0.8753&#x02009;&#x000B1;&#x02009;0.18, 1.8635&#x02009;&#x000B1;&#x02009;0.21, and 1.0353&#x02009;&#x000B1;&#x02009;0.32, respectively. When the samples of the finger-, key-, and tape-pinch gestures were assigned as 60, 20, and 20%, the <italic>D</italic><sub>out</sub> values for finger-key, finger-tape, and key-tape were extended to 1.5461&#x02009;&#x000B1;&#x02009;0.19, 2.1367&#x02009;&#x000B1;&#x02009;0.36, and 1.3468&#x02009;&#x000B1;&#x02009;0.46, respectively. A paired-samples <italic>t</italic>-test (SPSS for Windows 13.0) indicated that the Mahalanobis distance for the finger-key pair increased significantly (<italic>p</italic>&#x02009;&#x0003C;&#x02009;0.05), by nearly twofold.</p>
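The paired comparison reported above can be reproduced in outline with SciPy's paired-samples t-test; the arrays below are synthetic stand-ins for the per-repetition <italic>D</italic><sub>out</sub> values, which are not given in the text, so the numbers are assumptions chosen only to mimic the reported means.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

# Hypothetical per-repetition D_out values for the finger-key pair:
# uniform (one-third) assignment vs. the 60/20/20% assignment.
d_uniform = rng.normal(loc=0.88, scale=0.18, size=10)
d_optimized = d_uniform * 1.77 + rng.normal(scale=0.02, size=10)

t_stat, p_value = ttest_rel(d_optimized, d_uniform)
print(p_value < 0.05)  # the increase is significant for these data
```

A paired test is appropriate here because the two distance values in each pair come from the same data split, differing only in the sample-proportion assignment.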
</sec>
</sec>
<sec id="S4" sec-type="discussion">
<title>Discussion</title>
<p>EMG-controlled robotic hand exoskeletons have been proposed to train the paretic hand after stroke (Leonardis et al., <xref ref-type="bibr" rid="B18">2015</xref>). Evidence indicates that simultaneous movement of the non-paretic and paretic hands helps the neuromuscular system regain some stability and improves usage of the impaired limb (McCombe Waller and Whitall, <xref ref-type="bibr" rid="B28">2008</xref>). Grasping and pinching are among the most common hand movements, each with different finger coordination patterns. This study recorded sEMG signals from forearm muscles during grasping or pinch tasks dominated by similar muscle activities, and the gesture-related myoelectric feature vector was constructed from the ZC and WAMP of the IMF components obtained by EMD decomposition of the sEMG. The impact of the gesture sample proportion in the training set on gesture recognition efficiency was assessed, and our preliminary results revealed that the recognition rate of similar hand gestures can be improved by optimizing the sample proportion according to the weight, or impact, of each gesture in the candidate gesture group.</p>
<p>Although the impact of the sample size or composition of the training set on recognition efficiency has been observed in hand pattern recognition (Fratini et al., <xref ref-type="bibr" rid="B13">2015</xref>), medical image classification (Wigdahl et al., <xref ref-type="bibr" rid="B40">2013b</xref>), and human limb gesture identification (Chen et al., <xref ref-type="bibr" rid="B9">2016a</xref>), this is the first study to quantify how the sample proportion of a candidate hand gesture influences its classification accuracy. Our results indicate that, for any grasping or pinch gesture, the recognition rate can be improved by increasing the sample proportion of the corresponding gesture in the training set. As shown in Figures <xref ref-type="fig" rid="F5">5</xref> and <xref ref-type="fig" rid="F6">6</xref>, the recognition rate of a single gesture improved quickly when its sample proportion in the training set increased. In fact, increasing the sample proportion of a gesture in the training set increases the weight of that gesture in the training stage, so the classifier learns more from it; as a result, improved recognition of the gesture of interest can be obtained. Conversely, the recognition rate of a gesture decreased when its sample proportion in the training set was reduced (Figures <xref ref-type="fig" rid="F5">5</xref> and <xref ref-type="fig" rid="F6">6</xref>, lower parts). Our results thus reveal that the recognition of one gesture can be improved by increasing its weight in the training set.</p>
<p>Unlike proportional allocation of samples for classifier training, the present work indicates that non-uniform sample assignment in the training set may significantly improve the recognition of similar hand gestures. In other words, each candidate grasping or pinch gesture has a different impact on the classifier; however, it is usually assumed that each class has an equal <italic>a priori</italic> probability of occurrence, and the same number of samples for each class has conventionally been allocated in the training set. In fact, as shown in Figures <xref ref-type="fig" rid="F5">5</xref> and <xref ref-type="fig" rid="F6">6</xref>, although the classification accuracy of each grasping or pinch gesture increased as its sample proportion rose and decreased as its sample proportion dropped, our study also shows that the slope of this curve differs across gestures. For example, the classification accuracy of finger pinch dropped faster than that of key pinch or tape pinch when the sample proportion decreased (Figures <xref ref-type="fig" rid="F6">6</xref>B,C). We can therefore assume that the sample proportion of the finger-pinch task has a greater impact on the recognition of pinch gestures, and that additional finger-pinch samples help achieve higher classification accuracy. Thus, assigning more finger-pinch samples (60%) in the training set yields the higher recognition rate (87%) for these similar pinch gestures.</p>
<p>Although a classifier trained with equally assigned samples is sufficient for recognizing hand gestures with distinct muscle activity patterns (Urwyler et al., <xref ref-type="bibr" rid="B38">2015</xref>), obstacles remain for the recognition of similar hand gestures owing to their similar muscle activity patterns and similar sEMG features (Geng et al., <xref ref-type="bibr" rid="B14">2014</xref>). In the tasks tested in our study, the grasping or pinch gestures require similar muscular contraction patterns, which makes it challenging to discriminate the feature vectors of similar gestures in the feature space. As illustrated in Figure <xref ref-type="fig" rid="F9">9</xref>A, the distances between the classes of gestures in the grasping group are too short for the gestures to be distinguished; however, when the classifier was trained with unequally assigned samples of the candidate grasping or pinch gestures, the Mahalanobis distances between these similar gestures were significantly enlarged, and the classification accuracy improved as well. Furthermore, additional factors should be considered when assessing the overall recognition rate for candidate gestures, such as the slopes of the ascending recognition-rate curves with increasing sample proportion and of the descending curves with decreasing sample proportion. For a training set of constant size, as investigated in the present study, the sample proportion should be carefully selected. As shown in Figures <xref ref-type="fig" rid="F7">7</xref> and <xref ref-type="fig" rid="F8">8</xref>, with an appropriate assignment of the sample proportions in the training set, the highest overall recognition rate was obtained for similar grasping or pinch gestures.</p>
<p>To the best of our knowledge, the present work is the first to evaluate the effect of the sample proportion in the training set on the recognition rate of hand gestures, showing that the classification accuracy of similar grasping or pinch gestures can be improved by unequally assigning the samples in the training set. Thus, an alternative way to improve classification efficiency is to optimize the sample proportions of the candidate patterns in the training set according to each pattern's impact on the recognition rate. In other words, more samples can be assigned to a candidate gesture with a higher weight to obtain better recognition. However, these preliminary results only suggest that the weights of the candidate gestures may differ and that the sample proportions in the training set should be optimized to improve classification. Further studies are needed to explore how to set the optimal sample proportions in the training set, from which the classifier will benefit as well (Pratama et al., <xref ref-type="bibr" rid="B32">2016</xref>; Lughofer et al., <xref ref-type="bibr" rid="B24">2017</xref>; Rubio, <xref ref-type="bibr" rid="B33">2017a</xref>,<xref ref-type="bibr" rid="B34">b</xref>). On the other hand, enlarging the number of samples of one candidate gesture may induce overfitting or overlearning in the classifier; we therefore suggest comparing the slopes of the sample proportion vs. Acc. curves among the candidate gestures (see Figure <xref ref-type="fig" rid="F5">5</xref> or Figure <xref ref-type="fig" rid="F6">6</xref>) and focusing the allocation of sample proportions on the quickly ascending and descending parts of these curves. For bimanual rehabilitation after stroke with a robotic hand exoskeleton, both the gesture and the force of hand movement should be incorporated into paretic-hand training. Muscle activation can also act as a good reference guide in bilateral training (McCombe Waller et al., <xref ref-type="bibr" rid="B27">2006</xref>); in particular, the grasping or pinch force can be estimated from the sEMG of the non-paretic hand and then replicated as robotic assistance for the paretic hand (Leonardis et al., <xref ref-type="bibr" rid="B18">2015</xref>). In addition to the recognition of hand gestures, our next step is to estimate, from sEMG recordings, the finger forces or finger joint angles of the non-paretic hand dominated by similar muscle activities.</p>
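The selection procedure discussed above, evaluating candidate sample proportions at a fixed total training-set size and keeping the assignment with the highest overall accuracy, can be sketched as follows. This is a toy illustration with synthetic two-dimensional features and a simple nearest-mean classifier, not the study's sEMG features or classifier; `make_class` and the proportion grid are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_class(mean, n):
    """Draw n synthetic 2-D feature vectors around a class mean."""
    return rng.normal(loc=mean, scale=0.6, size=(n, 2))

def nearest_mean_acc(train_sets, test_sets):
    """Overall accuracy of a minimum-distance (nearest class mean) classifier."""
    means = [X.mean(axis=0) for X in train_sets]
    correct, total = 0, 0
    for label, X in enumerate(test_sets):
        # Distance from every test sample to every class mean: shape (n_classes, n_samples)
        d = np.stack([np.linalg.norm(X - m, axis=1) for m in means])
        correct += int((d.argmin(axis=0) == label).sum())
        total += len(X)
    return correct / total

# Three hypothetical "similar" gestures with closely spaced means,
# and a fixed total training-set size of 120 samples.
class_means = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([0.0, 0.5])]
test_sets = [make_class(m, 100) for m in class_means]

# Grid over candidate sample-proportion assignments (each sums to 120)
accs = {}
for proportions in [(40, 40, 40), (60, 30, 30), (30, 30, 60)]:
    train_sets = [make_class(m, n) for m, n in zip(class_means, proportions)]
    accs[proportions] = nearest_mean_acc(train_sets, test_sets)

best = max(accs, key=accs.get)  # assignment with the highest overall accuracy
```

In practice the candidate proportions would be chosen by inspecting the ascending and descending parts of the per-gesture proportion vs. accuracy curves rather than by an exhaustive grid.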
</sec>
<sec id="S5">
<title>Author Contributions</title>
<p>YZ and ZG collected the data; ZY analyzed the data. WH, YL, XW, and XZ designed the work. WH drafted the work. WH and GL interpreted the data. LC helped to revise the manuscript. WH and QX created the final report.</p>
</sec>
<sec id="S6">
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>The authors gratefully acknowledge the help of all the volunteers in this study. We also would like to thank the Department of Rehabilitation Center, Children&#x02019;s Hospital of Chongqing Medical University for help in sEMG data collection.</p>
</ack>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was supported in part by the National High-Tech Research and Development Program of China (863 Program, Grant No. 2015AA042303), the National Natural Science Foundation of China (31470953, 31771069), and the Chongqing Science and Technology Program (cstc2015jcyjB0538).</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>AbdelMaseeh</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>T.-W.</given-names></name> <name><surname>Stashuk</surname> <given-names>D. W.</given-names></name></person-group> (<year>2016</year>). <article-title>Extraction and classification of multichannel electromyographic activation trajectories for hand movement recognition</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng.</source> <volume>24</volume>, <fpage>662</fpage>&#x02013;<lpage>673</lpage>.<pub-id pub-id-type="doi">10.1109/TNSRE.2015.2447217</pub-id><pub-id pub-id-type="pmid">26099148</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Angari</surname> <given-names>H. M.</given-names></name> <name><surname>Kanitz</surname> <given-names>G.</given-names></name> <name><surname>Tarantino</surname> <given-names>S.</given-names></name> <name><surname>Cipriani</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>Distance and mutual information methods for EMG feature and channel subset selection for classification of hand movements</article-title>. <source>Biomed. Signal Process. Control</source> <volume>27</volume>, <fpage>24</fpage>&#x02013;<lpage>31</lpage>.<pub-id pub-id-type="doi">10.1016/j.bspc.2016.01.011</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alba</surname> <given-names>E.</given-names></name> <name><surname>Garcia-Nieto</surname> <given-names>J.</given-names></name> <name><surname>Jourdan</surname> <given-names>L.</given-names></name> <name><surname>Talbi</surname> <given-names>E. G.</given-names></name></person-group> (<year>2007</year>). <article-title>Gene selection in cancer classification using PSO/SVM and GA/SVM hybrid algorithms</article-title>. <source>IEEE Congress Evol. Comput.</source> <volume>4</volume>, <fpage>284</fpage>&#x02013;<lpage>290</lpage>.<pub-id pub-id-type="doi">10.1109/CEC.2007.4424483</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amsuess</surname> <given-names>S.</given-names></name> <name><surname>Goebel</surname> <given-names>P.</given-names></name> <name><surname>Graimann</surname> <given-names>B.</given-names></name> <name><surname>Farina</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>A multi-class proportional myocontrol algorithm for upper limb prosthesis control: validation in real-life scenarios on amputees</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng.</source> <volume>23</volume>, <fpage>827</fpage>&#x02013;<lpage>836</lpage>.<pub-id pub-id-type="doi">10.1109/TNSRE.2014.2361478</pub-id><pub-id pub-id-type="pmid">25296406</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castro</surname> <given-names>M. C.</given-names></name> <name><surname>Arjunan</surname> <given-names>S. P.</given-names></name> <name><surname>Kumar</surname> <given-names>D. K.</given-names></name></person-group> (<year>2015</year>). <article-title>Selection of suitable hand gestures for reliable myoelectric human computer interface</article-title>. <source>Biomed. Eng. Online</source> <volume>14</volume>, <fpage>30</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-015-0025-5</pub-id><pub-id pub-id-type="pmid">25889735</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castro</surname> <given-names>M. C. F.</given-names></name> <name><surname>Colombini</surname> <given-names>E. L.</given-names></name> <name><surname>Aquino</surname> <given-names>P. T.</given-names></name> <name><surname>Arjunan</surname> <given-names>S. P.</given-names></name> <name><surname>Kumar</surname> <given-names>D. K.</given-names></name></person-group> (<year>2014</year>). <article-title>sEMG feature evaluation for identification of elbow angle resolution in graded arm movement</article-title>. <source>Biomed. Eng. Online</source> <volume>13</volume>, <fpage>155</fpage>.<pub-id pub-id-type="doi">10.1186/1475-925X-13-155</pub-id><pub-id pub-id-type="pmid">25422006</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cauraugh</surname> <given-names>J. H.</given-names></name> <name><surname>Lodha</surname> <given-names>N.</given-names></name> <name><surname>Naik</surname> <given-names>S. K.</given-names></name> <name><surname>Summers</surname> <given-names>J. J.</given-names></name></person-group> (<year>2010</year>). <article-title>Bilateral movement training and stroke motor recovery progress: a structured review and meta-analysis</article-title>. <source>Hum. Mov. Sci.</source> <volume>29</volume>, <fpage>853</fpage>&#x02013;<lpage>870</lpage>.<pub-id pub-id-type="doi">10.1016/j.humov.2009.09.004</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>X. L.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Shan</surname> <given-names>S. G.</given-names></name> <name><surname>Wang</surname> <given-names>R. P.</given-names></name> <name><surname>Gao</surname> <given-names>W.</given-names></name></person-group> (<year>2009</year>). <article-title>Optimization of a training set for more robust face detection</article-title>. <source>Pattern Recognit.</source> <volume>42</volume>, <fpage>2828</fpage>&#x02013;<lpage>2840</lpage>.<pub-id pub-id-type="doi">10.1016/j.patcog.2009.02.006</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>X. M.</given-names></name> <name><surname>Pun</surname> <given-names>S. H.</given-names></name> <name><surname>Zhao</surname> <given-names>J. F.</given-names></name> <name><surname>Mak</surname> <given-names>P. U.</given-names></name> <name><surname>Liang</surname> <given-names>B. D.</given-names></name> <name><surname>Vai</surname> <given-names>M. I.</given-names></name></person-group> (<year>2016a</year>). <article-title>Effects of human limb gestures on galvanic coupling intra-body communication for advanced healthcare system</article-title>. <source>Biomed. Eng. Online</source> <volume>15</volume>, <fpage>60</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-016-0192-z</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>X. M.</given-names></name> <name><surname>Pun</surname> <given-names>S. H.</given-names></name> <name><surname>Zhao</surname> <given-names>J. F.</given-names></name> <name><surname>Mak</surname> <given-names>P. U.</given-names></name> <name><surname>Liang</surname> <given-names>B. D.</given-names></name> <name><surname>Vai</surname> <given-names>M. I.</given-names></name></person-group> (<year>2016b</year>). <article-title>Effects of human limb gestures on galvanic coupling intra-body communication for advanced healthcare system</article-title>. <source>Biomed. Eng. Online</source> <volume>15</volume>, <fpage>60</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-016-0192-z</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foody</surname> <given-names>G. M.</given-names></name> <name><surname>Mcculloch</surname> <given-names>M. B.</given-names></name> <name><surname>Yates</surname> <given-names>W. B.</given-names></name></person-group> (<year>1995</year>). <article-title>The effect of training set size and composition on artificial neural network classification</article-title>. <source>Int. J. Remote Sens.</source> <volume>16</volume>, <fpage>1707</fpage>&#x02013;<lpage>1723</lpage>.<pub-id pub-id-type="doi">10.1080/01431169508954396</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Francini</surname> <given-names>A.</given-names></name> <name><surname>Romeo</surname> <given-names>S.</given-names></name> <name><surname>Cifelli</surname> <given-names>M.</given-names></name> <name><surname>Gori</surname> <given-names>D.</given-names></name> <name><surname>Domenici</surname> <given-names>V.</given-names></name> <name><surname>Sebastiani</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>H-1 NMR and PCA-based analysis revealed variety dependent changes in phenolic contents of apple fruit after drying</article-title>. <source>Food Chem.</source> <volume>221</volume>, <fpage>1206</fpage>&#x02013;<lpage>1213</lpage>.<pub-id pub-id-type="doi">10.1016/j.foodchem.2016.11.038</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fratini</surname> <given-names>A.</given-names></name> <name><surname>Sansone</surname> <given-names>M.</given-names></name> <name><surname>Bifulco</surname> <given-names>P.</given-names></name> <name><surname>Cesarelli</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Individual identification via electrocardiogram analysis</article-title>. <source>Biomed. Eng. Online</source> <volume>14</volume>, <fpage>78</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-015-0072-y</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geng</surname> <given-names>Y. J.</given-names></name> <name><surname>Zhang</surname> <given-names>X. F.</given-names></name> <name><surname>Zhang</surname> <given-names>Y. T.</given-names></name> <name><surname>Li</surname> <given-names>G. L.</given-names></name></person-group> (<year>2014</year>). <article-title>A novel channel selection method for multiple motion classification using high-density electromyography</article-title>. <source>Biomed. Eng. Online</source> <volume>13</volume>, <fpage>102</fpage>.<pub-id pub-id-type="doi">10.1186/1475-925X-13-102</pub-id><pub-id pub-id-type="pmid">25060509</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>T.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ma</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name></person-group> (<year>2016</year>). <article-title>Fatiguing effects on the multi-scale entropy of surface electromyography in children with cerebral palsy</article-title>. <source>Entropy</source> <volume>18</volume>, <fpage>177</fpage>.<pub-id pub-id-type="doi">10.3390/e18050177</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jain</surname> <given-names>A. K.</given-names></name> <name><surname>Duin</surname> <given-names>R. P. W.</given-names></name> <name><surname>Mao</surname> <given-names>J. C.</given-names></name></person-group> (<year>2000</year>). <article-title>Statistical pattern recognition: a review</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>22</volume>, <fpage>4</fpage>&#x02013;<lpage>37</lpage>.<pub-id pub-id-type="doi">10.1109/34.824819</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khezri</surname> <given-names>M.</given-names></name> <name><surname>Jahed</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Real-time intelligent pattern recognition algorithm for surface EMG signals</article-title>. <source>Biomed. Eng. Online</source> <volume>6</volume>, <fpage>45</fpage>.<pub-id pub-id-type="doi">10.1186/1475-925X-6-45</pub-id><pub-id pub-id-type="pmid">18053184</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leonardis</surname> <given-names>D.</given-names></name> <name><surname>Barsotti</surname> <given-names>M.</given-names></name> <name><surname>Loconsole</surname> <given-names>C.</given-names></name> <name><surname>Solazzi</surname> <given-names>M.</given-names></name> <name><surname>Troncossi</surname> <given-names>M.</given-names></name> <name><surname>Mazzotti</surname> <given-names>C.</given-names></name> <etal/></person-group> (<year>2015</year>). <article-title>An EMG-controlled robotic hand exoskeleton for bilateral rehabilitation</article-title>. <source>IEEE Trans. Haptics</source> <volume>8</volume>, <fpage>140</fpage>&#x02013;<lpage>151</lpage>.<pub-id pub-id-type="doi">10.1109/TOH.2015.2417570</pub-id><pub-id pub-id-type="pmid">25838528</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>H. Q.</given-names></name> <name><surname>Yuan</surname> <given-names>D. Y.</given-names></name> <name><surname>Ma</surname> <given-names>X. D.</given-names></name> <name><surname>Cui</surname> <given-names>D. Y.</given-names></name> <name><surname>Cao</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>Genetic algorithm for the optimization of features and neural networks in ECG signals classification</article-title>. <source>Sci. Rep.</source> <volume>7</volume>, <fpage>41011</fpage>.<pub-id pub-id-type="doi">10.1038/srep41011</pub-id><pub-id pub-id-type="pmid">28139677</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lima</surname> <given-names>C. A. M.</given-names></name> <name><surname>Coelho</surname> <given-names>A. L. V.</given-names></name> <name><surname>Madeo</surname> <given-names>R. C. B.</given-names></name> <name><surname>Peres</surname> <given-names>S. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Classification of electromyography signals using relevance vector machines and fractal dimension</article-title>. <source>Neural Comput. Appl.</source> <volume>27</volume>, <fpage>791</fpage>&#x02013;<lpage>804</lpage>.<pub-id pub-id-type="doi">10.1007/s00521-015-1953-5</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>X. Y.</given-names></name> <name><surname>Li</surname> <given-names>G. L.</given-names></name> <name><surname>Zhou</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>EMG feature assessment for myoelectric pattern recognition and channel selection: a study with incomplete spinal cord injury</article-title>. <source>Med. Eng. Phys.</source> <volume>36</volume>, <fpage>975</fpage>&#x02013;<lpage>980</lpage>.<pub-id pub-id-type="doi">10.1016/j.medengphy.2014.04.003</pub-id><pub-id pub-id-type="pmid">24844608</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lorrain</surname> <given-names>T.</given-names></name> <name><surname>Jiang</surname> <given-names>N.</given-names></name> <name><surname>Farina</surname> <given-names>D.</given-names></name></person-group> (<year>2011</year>). <article-title>Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses</article-title>. <source>J. Neuroeng. Rehabil.</source> <volume>8</volume>, <fpage>25</fpage>.<pub-id pub-id-type="doi">10.1186/1743-0003-8-25</pub-id><pub-id pub-id-type="pmid">21554700</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luft</surname> <given-names>A. R.</given-names></name> <name><surname>Mccombewaller</surname> <given-names>S.</given-names></name> <name><surname>Whitall</surname> <given-names>J.</given-names></name> <name><surname>Forrester</surname> <given-names>L. W.</given-names></name> <name><surname>Macko</surname> <given-names>R.</given-names></name> <name><surname>Sorkin</surname> <given-names>J. D.</given-names></name> <etal/></person-group> (<year>2005</year>). <article-title>Repetitive bilateral arm training and motor cortex activation in chronic stroke: a randomized controlled trial</article-title>. <source>JAMA</source> <volume>292</volume>, <fpage>1853</fpage>&#x02013;<lpage>1861</lpage>.<pub-id pub-id-type="doi">10.1001/jama.292.15.1853</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lughofer</surname> <given-names>E.</given-names></name> <name><surname>Pratama</surname> <given-names>M.</given-names></name> <name><surname>Skrjanc</surname> <given-names>I.</given-names></name></person-group> (<year>2017</year>). <article-title>Incremental rule splitting in generalized evolving fuzzy systems for autonomous drift compensation</article-title>. <source>IEEE Transact. Fuzzy Syst.</source> <volume>99</volume>, <fpage>1</fpage>.<pub-id pub-id-type="doi">10.1109/TFUZZ.2017.2753727</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marchetti</surname> <given-names>M.</given-names></name> <name><surname>Onorati</surname> <given-names>F.</given-names></name> <name><surname>Matteucci</surname> <given-names>M.</given-names></name> <name><surname>Mainardi</surname> <given-names>L.</given-names></name> <name><surname>Piccione</surname> <given-names>F.</given-names></name> <name><surname>Silvoni</surname> <given-names>S.</given-names></name> <etal/></person-group> (<year>2013</year>). <article-title>Improving the efficacy of ERP-based BCIs using different modalities of covert visuospatial attention and a genetic algorithm-based classifier</article-title>. <source>PLoS ONE</source> <volume>8</volume>:<fpage>e53946</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pone.0053946</pub-id><pub-id pub-id-type="pmid">23342043</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martins</surname> <given-names>M.</given-names></name> <name><surname>Costa</surname> <given-names>L.</given-names></name> <name><surname>Frizera</surname> <given-names>A.</given-names></name> <name><surname>Ceres</surname> <given-names>R.</given-names></name> <name><surname>Santos</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait</article-title>. <source>Comput. Met. Programs Biomed.</source> <volume>113</volume>, <fpage>736</fpage>&#x02013;<lpage>748</lpage>.<pub-id pub-id-type="doi">10.1016/j.cmpb.2013.12.005</pub-id><pub-id pub-id-type="pmid">24444751</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCombe Waller</surname> <given-names>S.</given-names></name> <name><surname>Harris-Love</surname> <given-names>M.</given-names></name> <name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Whitall</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>Temporal coordination of the arms during bilateral simultaneous and sequential movements in patients with chronic hemiparesis</article-title>. <source>Exp. Brain Res.</source> <volume>168</volume>, <fpage>450</fpage>&#x02013;<lpage>454</lpage>.<pub-id pub-id-type="doi">10.1007/s00221-005-0235-3</pub-id><pub-id pub-id-type="pmid">16331507</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCombe Waller</surname> <given-names>S.</given-names></name> <name><surname>Whitall</surname> <given-names>J.</given-names></name></person-group> (<year>2008</year>). <article-title>Bilateral arm training: why and who benefits?</article-title> <source>NeuroRehabilitation</source> <volume>23</volume>, <fpage>29</fpage>&#x02013;<lpage>41</lpage>.<pub-id pub-id-type="pmid">18356587</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oujamaa</surname> <given-names>L.</given-names></name> <name><surname>Relave</surname> <given-names>I.</given-names></name> <name><surname>Froger</surname> <given-names>J.</given-names></name> <name><surname>Mottet</surname> <given-names>D.</given-names></name> <name><surname>Pelissier</surname> <given-names>J. Y.</given-names></name></person-group> (<year>2009</year>). <article-title>Rehabilitation of arm function after stroke. Literature review</article-title>. <source>Ann. Phys. Rehabil. Med.</source> <volume>52</volume>, <fpage>269</fpage>&#x02013;<lpage>293</lpage>.<pub-id pub-id-type="doi">10.1016/j.rehab.2008.10.003</pub-id><pub-id pub-id-type="pmid">19398398</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peerdeman</surname> <given-names>B.</given-names></name> <name><surname>Boere</surname> <given-names>D.</given-names></name> <name><surname>Witteveen</surname> <given-names>H.</given-names></name> <name><surname>In&#x02019;t Veld</surname> <given-names>R. H.</given-names></name> <name><surname>Hermens</surname> <given-names>H.</given-names></name> <name><surname>Stramigioli</surname> <given-names>S.</given-names></name> <etal/></person-group> (<year>2011</year>). <article-title>Myoelectric forearm prostheses: state of the art from a user-centered perspective</article-title>. <source>J. Rehabil. Res. Dev.</source> <volume>48</volume>, <fpage>719</fpage>&#x02013;<lpage>737</lpage>.<pub-id pub-id-type="doi">10.1682/JRRD.2010.08.0161</pub-id><pub-id pub-id-type="pmid">21938658</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Phinyomark</surname> <given-names>A.</given-names></name> <name><surname>Hu</surname> <given-names>H.</given-names></name> <name><surname>Phukpattaranont</surname> <given-names>P.</given-names></name> <name><surname>Limsakul</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>Application of linear discriminant analysis in dimensionality reduction for hand motion classification</article-title>. <source>Meas. Sci. Rev.</source> <volume>12</volume>, <fpage>82</fpage>&#x02013;<lpage>89</lpage>.<pub-id pub-id-type="doi">10.2478/v10048-012-0015-8</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pratama</surname> <given-names>M.</given-names></name> <name><surname>Lughofer</surname> <given-names>E.</given-names></name> <name><surname>Meng</surname> <given-names>J. E.</given-names></name> <name><surname>Anavatti</surname> <given-names>S.</given-names></name> <name><surname>Lim</surname> <given-names>C. P.</given-names></name></person-group> (<year>2016</year>). <article-title>Data driven modelling based on recurrent interval-valued metacognitive scaffolding fuzzy neural network</article-title>. <source>Neurocomputing</source> <volume>262</volume>, <fpage>4</fpage>&#x02013;<lpage>27</lpage>.<pub-id pub-id-type="doi">10.1016/j.neucom.2016.10.093</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rubio</surname> <given-names>J. D. J.</given-names></name></person-group> (<year>2017a</year>). <article-title>A method with neural networks for the classification of fruits and vegetables</article-title>. <source>Soft Comput.</source> <volume>21</volume>, <fpage>7207</fpage>&#x02013;<lpage>7220</lpage>.<pub-id pub-id-type="doi">10.1007/s00500-016-2263-2</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rubio</surname> <given-names>J. D. J.</given-names></name></person-group> (<year>2017b</year>). <article-title>Stable Kalman filter and neural network for the chaotic systems identification</article-title>. <source>J. Franklin Inst.</source> <volume>354</volume>, <fpage>7444</fpage>&#x02013;<lpage>7462</lpage>.<pub-id pub-id-type="doi">10.1016/j.jfranklin.2017.08.038</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sapsanis</surname> <given-names>C.</given-names></name> <name><surname>Georgoulas</surname> <given-names>G.</given-names></name> <name><surname>Tzes</surname> <given-names>A.</given-names></name> <name><surname>Lymberopoulos</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Improving EMG based classification of basic hand movements using EMD</article-title>. <source>Conf. Proc. IEEE Eng. Med. Biol. Soc.</source> <volume>2013</volume>, <fpage>5754</fpage>&#x02013;<lpage>5757</lpage>.<pub-id pub-id-type="doi">10.1109/EMBC.2013.6610858</pub-id><pub-id pub-id-type="pmid">24111045</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Serdio</surname> <given-names>F.</given-names></name> <name><surname>Lughofer</surname> <given-names>E.</given-names></name> <name><surname>Zavoianua</surname> <given-names>A. C.</given-names></name> <name><surname>Pichler</surname> <given-names>K.</given-names></name> <name><surname>Pichler</surname> <given-names>M.</given-names></name> <name><surname>Buchegger</surname> <given-names>T.</given-names></name> <etal/></person-group> (<year>2017</year>). <article-title>Improved fault detection employing hybrid memetic fuzzy modeling and adaptive filters</article-title>. <source>Appl. Soft Comput.</source> <volume>51</volume>, <fpage>60</fpage>&#x02013;<lpage>82</lpage>.<pub-id pub-id-type="doi">10.1016/j.asoc.2016.11.038</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shang</surname> <given-names>X. J.</given-names></name> <name><surname>Tian</surname> <given-names>Y. T.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2011</year>). <article-title>Feature extraction and classification of sEMG based on ICA and EMD decomposition of AR model</article-title>. <source>Int Conf. Electron. Commun. Control (Icecc)</source> <fpage>1464</fpage>&#x02013;<lpage>1467</lpage>.<pub-id pub-id-type="doi">10.1109/ICECC.2011.6067702</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Urwyler</surname> <given-names>P.</given-names></name> <name><surname>Rampa</surname> <given-names>L.</given-names></name> <name><surname>Stucki</surname> <given-names>R.</given-names></name> <name><surname>Buchler</surname> <given-names>M.</given-names></name> <name><surname>Muri</surname> <given-names>R.</given-names></name> <name><surname>Mosimann</surname> <given-names>U. P.</given-names></name> <etal/></person-group> (<year>2015</year>). <article-title>Recognition of activities of daily living in healthy subjects using two ad-hoc classifiers</article-title>. <source>Biomed. Eng. Online</source> <volume>14</volume>, <fpage>54</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-015-0050-4</pub-id><pub-id pub-id-type="pmid">26048452</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wigdahl</surname> <given-names>J.</given-names></name> <name><surname>Agurto</surname> <given-names>C.</given-names></name> <name><surname>Murray</surname> <given-names>V.</given-names></name> <name><surname>Barriga</surname> <given-names>S.</given-names></name> <name><surname>Soliz</surname> <given-names>P.</given-names></name></person-group> (<year>2013a</year>). <article-title>Training set optimization and classifier performance in a top-down diabetic retinopathy screening system</article-title>. <source>Med. Imaging Comput. Aided Diagn.</source> <fpage>8670</fpage>.<pub-id pub-id-type="doi">10.1117/12.2007931</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wigdahl</surname> <given-names>J.</given-names></name> <name><surname>Murray</surname> <given-names>V.</given-names></name> <name><surname>Barriga</surname> <given-names>S.</given-names></name> <name><surname>Soliz</surname> <given-names>P.</given-names></name></person-group> (<year>2013b</year>). <article-title>Training set optimization and classifier performance in a top-down diabetic retinopathy screening system</article-title>. <source>SPIE Med. Imag.</source> <fpage>8670</fpage>.<pub-id pub-id-type="doi">10.1117/12.2007931</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Windrich</surname> <given-names>M.</given-names></name> <name><surname>Grimmer</surname> <given-names>M.</given-names></name> <name><surname>Christ</surname> <given-names>O.</given-names></name> <name><surname>Rinderknecht</surname> <given-names>S.</given-names></name> <name><surname>Beckerle</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>Active lower limb prosthetics: a systematic review of design issues and solutions</article-title>. <source>Biomed. Eng. Online</source> <volume>15</volume>, <fpage>140</fpage>.<pub-id pub-id-type="doi">10.1186/s12938-016-0284-9</pub-id><pub-id pub-id-type="pmid">28105948</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Young</surname> <given-names>A. J.</given-names></name> <name><surname>Hargrove</surname> <given-names>L. J.</given-names></name> <name><surname>Kuiken</surname> <given-names>T. A.</given-names></name></person-group> (<year>2012</year>). <article-title>Improving myoelectric pattern recognition robustness to electrode shift by changing interelectrode distance and electrode configuration</article-title>. <source>IEEE Trans. Biomed. Eng.</source> <volume>59</volume>, <fpage>645</fpage>&#x02013;<lpage>652</lpage>.<pub-id pub-id-type="doi">10.1109/TBME.2011.2177662</pub-id><pub-id pub-id-type="pmid">22147289</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Hou</surname> <given-names>W.</given-names></name> <name><surname>Luo</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>X.</given-names></name> <name><surname>Liao</surname> <given-names>Y.</given-names></name> <name><surname>Fan</surname> <given-names>X.</given-names></name> <etal/></person-group> (<year>2016</year>). <article-title>The impact of sEMG feature weight on the recognition of similar grasping gesture</article-title>. <source>IEEE Int. Conf. Adv. Robot. Mechatronics</source> <fpage>260</fpage>&#x02013;<lpage>265</lpage>.<pub-id pub-id-type="doi">10.1109/ICARM.2016.7606929</pub-id></citation></ref>
</ref-list>
</back>
</article>