<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurorobot.</journal-id>
<journal-title>Frontiers in Neurorobotics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurorobot.</abbrev-journal-title>
<issn pub-type="epub">1662-5218</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnbot.2024.1393738</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Optimization method for human-robot command combinations of hexapod robot based on multi-objective constraints</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Chen</surname> <given-names>Xiaolei</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2647299/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
<contrib contrib-type="author" corresp="yes"><name><surname>You</surname> <given-names>Bo</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref><xref ref-type="aff" rid="aff2"><sup>2</sup></xref><xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/938967/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author"><name><surname>Dong</surname> <given-names>Zheng</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>The Key Laboratory of Intelligent Technology for Cutting and Manufacturing Ministry of Education, Harbin University of Science and Technology</institution>, <addr-line>Harbin</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>The Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology</institution>, <addr-line>Harbin</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0001">
<p>Edited by: Alois C. Knoll, Technical University of Munich, Germany</p>
</fn>
<fn fn-type="edited-by" id="fn0002">
<p>Reviewed by: Shiyin Qiu, Tianjin University, China</p>
<p>Wentao Sheng, Nanjing University of Science and Technology, China</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Bo You, <email>youbo@hrbust.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>04</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>18</volume>
<elocation-id>1393738</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>02</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>03</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Chen, You and Dong.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Chen, You and Dong</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Due to the heavy burden on human drivers when remotely controlling hexapod robots in complex terrain environments, there is a critical need for robot intelligence to assist in generating control commands. This study therefore proposes a mapping process framework that generates combinations of human-robot commands based on decision target values, focusing on the task of robot intelligence assisting drivers in generating human-robot command combinations. Human-robot state constraints are quantified as geometric constraints on robot motion and driver fatigue constraints. By optimizing and filtering the feasible set of human-robot commands based on these state constraints, command combinations are formed and recommended to the driver in real time, thereby enhancing the efficiency and safety of human-machine coordination. To validate the effectiveness of the proposed method, a remote human-robot collaborative driving control system based on wearable devices is designed and implemented. Experimental results demonstrate that drivers using the human-robot command recommendation system achieve significantly improved robot walking stability and reduced collision rates compared with driving alone.</p>
</abstract>
<kwd-group>
<kwd>hexapod robot</kwd>
<kwd>collaborative control</kwd>
<kwd>command combinations</kwd>
<kwd>driving fatigue</kwd>
<kwd>remote control</kwd>
</kwd-group>
<contract-num rid="cn1">52175012</contract-num>
<contract-sponsor id="cn1">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content></contract-sponsor>
<counts>
<fig-count count="14"/>
<table-count count="0"/>
<equation-count count="19"/>
<ref-count count="29"/>
<page-count count="12"/>
<word-count count="9317"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Neuroscience</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Different from conventional terrestrial mobile equipment such as wheeled or tracked vehicles, a legged robot&#x2019;s track on the ground is a series of discrete footprints, and this non-continuous support characteristic effectively increases its adaptability to uneven roads. Legged robot designs have drawn extensively on the structural characteristics and movement patterns of legged animals and insects. For example, quadruped robots have drawn inspiration from the musculoskeletal structures of animals such as gazelles (<xref ref-type="bibr" rid="ref11">Li et al., 2022a</xref>), cheetahs (<xref ref-type="bibr" rid="ref10">Lei et al., 2022</xref>), and mice (<xref ref-type="bibr" rid="ref2">Bing et al., 2023</xref>), as well as the movement patterns of quadruped animals (<xref ref-type="bibr" rid="ref15">Massi et al., 2019</xref>). Considering the impressive load-bearing capacity and motion stability of arthropod leg structures, hexapod robots have likewise borrowed from creatures such as cockroaches (<xref ref-type="bibr" rid="ref15">Massi et al., 2019</xref>), ants (<xref ref-type="bibr" rid="ref27">Zhakypov et al., 2019</xref>), and lobsters (<xref ref-type="bibr" rid="ref22">Shim et al., 2016</xref>). In recent years, an increasing number of scholars have recognized that hexapod robots, whose contact points with the ground are non-continuous, can adapt to terrain environments with varying geometric and physical features. They exhibit high load-bearing capacity and stability, making them an ideal mobile system for outdoor environments.</p>
<p>Unlike conventional robots with simple structures, hexapod robots have as many as 18 degrees of freedom in their legs alone. This high level of complexity, especially when carrying out tasks in complex environments, can impose a heavy burden on the operator and significantly reduce the overall motion coordination of the robot. Consequently, conventional approaches to legged-robot self-locomotion on uneven terrain have yielded increasingly complex self-training architectures. Many rely on training a locomotion controller by reinforcement learning in simulation and then transferring the trained result to real terrain. ETH Zurich&#x2019;s ANYmal is one of the most promising legged systems of this kind (<xref ref-type="bibr" rid="ref24">Wangbo et al., 2019</xref>); its quadrupedal robot system deploys learned agile and dynamic motor skills. Other systems use rapid adaptation training at the robot motors (<xref ref-type="bibr" rid="ref4">Choi et al., 2023</xref>), which succeeded in 70% of trials when walking downstairs along a hiking trail (<xref ref-type="bibr" rid="ref9">Kumar et al., 2021</xref>).</p>
<p>Research on autonomous intelligent systems for robots in recent years has shown that the emergence and development of artificial intelligence technology provides many new methods for robot intelligence, greatly advancing its progress. An autonomous intelligent system for a robot is a highly complex control system that integrates functions such as environmental perception, dynamic decision-making and planning, and behavior control and execution. However, because such systems lack the human driver&#x2019;s ability to handle unexpected and imprecise events, their overall intelligence, flexibility, and adaptability remain greatly limited. This is particularly true for legged mobile robots, whose walking environments are mostly unknown and ruggedly complex, making it difficult for them to rely solely on autonomous intelligent systems. In practice, legged mobile robots often use a human-in-the-loop collaboration approach to accomplish mobility tasks.</p>
<p>Different from early human-robot collaborative methods that required real-time switching of control between humans and robots (<xref ref-type="bibr" rid="ref16">Merat et al., 2008</xref>, <xref ref-type="bibr" rid="ref17">2014</xref>; <xref ref-type="bibr" rid="ref5">Eriksson and Stanton, 2016</xref>), the current mainstream human-robot collaboration method is human-in-the-loop coordination. According to the position of the human operator, it can be divided into two main categories: manned shared control and driver remote-participation coordination. The first type, manned shared control, has been widely applied in intelligent manufacturing and intelligent vehicle driving. For example, Ma proposed a shared steering controller based on a Nash game strategy that accounts for differences in human-machine goal consistency (<xref ref-type="bibr" rid="ref14">Ma et al., 2019</xref>). They used a non-cooperative MPC method to model the interactive path-tracking task between the driver and the automated system, achieving correct cooperative path-tracking control between the driver and the vehicle&#x2019;s onboard intelligent system. Huang proposed a human-driver-in-the-loop coordination/shared steering control framework, applying state-space small-gain theory to the driver-vehicle coupled system so that the onboard intelligent system works in coordination with the driver to achieve ideal lane-keeping performance (<xref ref-type="bibr" rid="ref7">Huang et al., 2019</xref>). In addition, manned shared control theory not only enables machine intelligence at the operational control layer (<xref ref-type="bibr" rid="ref29">Zhou et al., 2022</xref>; <xref ref-type="bibr" rid="ref25">Xu et al., 2023</xref>) but has also begun to share human work at the robot&#x2019;s motion planning layer (<xref ref-type="bibr" rid="ref25">Xu et al., 2023</xref>).</p>
<p>The second type of human-in-the-loop collaborative method, driver remote-participation coordination, is mostly used for hexapod robots in underwater (<xref ref-type="bibr" rid="ref26">Yoo et al., 2016</xref>; <xref ref-type="bibr" rid="ref19">Picardi et al., 2020</xref>), planetary-surface (<xref ref-type="bibr" rid="ref1">Arm et al., 2023</xref>), resource-extraction, and other hazardous environments, because the mobile operating environment poses risks that make manned shared control unsuitable (<xref ref-type="bibr" rid="ref23">Si et al., 2022</xref>). Li developed a new semi-autonomous bilateral-control dual-master/single-slave tactile teleoperation system for hexapod robots. Through this system, environmental haptic information is shared between the robot and the operator, and the maneuverability and local autonomy of the robot&#x2019;s teleoperation system are improved (<xref ref-type="bibr" rid="ref12">Li et al., 2022b</xref>). Schwarz developed a control system for the rescue robot Momaro that can perform multi-task collaborative processing (<xref ref-type="bibr" rid="ref20">Schwarz et al., 2017</xref>). By coordinating multiple operators to manipulate the robot, they supervised and controlled the robot&#x2019;s entire operation process. However, the main issue currently faced by driver remote-participation coordination is that status information between humans and robots cannot be exchanged in a timely manner, severely limiting the effectiveness of human-robot collaboration.</p>
<p>To address the insufficient flow of state-constraint information between humans and machines, particularly the inability of robots to perceive drivers&#x2019; dynamically adjusted collaborative strategies, researchers use wearable physiological-signal acquisition equipment to detect and assess driver states. For example, wearable electromyography acquisition devices are used to sense and identify drivers&#x2019; motion intentions, facilitating human-robot collaborative control (<xref ref-type="bibr" rid="ref28">Zhang et al., 2022</xref>; <xref ref-type="bibr" rid="ref13">Lyu et al., 2023</xref>). After obtaining driver status information, Seet determines the required assistance level based on the driver&#x2019;s workload and performance, increasing the involvement of the assistance system when the driver is overloaded or distracted and reducing the assistance level when the driver&#x2019;s workload is moderate, so as to ensure driving stability and safety (<xref ref-type="bibr" rid="ref21">Seet et al., 2023</xref>). Nguyen proposed a human-machine collaborative steering control strategy that considers the driver&#x2019;s behavioral state (<xref ref-type="bibr" rid="ref18">Nguyen et al., 2017</xref>). They allocate assistance weights based on the driver&#x2019;s behavioral state and use fuzzy control theory to address speed and assistance-weight variability, reducing human-machine conflicts and enhancing collaborative performance between humans and vehicles. Bueno et al. analyzed the impact of changes in driver cognitive load on the switching of human-machine driving authority by simulating non-driving tasks, showing that regardless of the cognitive load, engaging in non-driving tasks reduces concentration and thus negatively affects authority switching (<xref ref-type="bibr" rid="ref3">Bueno et al., 2016</xref>). Additionally, in driver remote-participation collaborative control, the intelligent system interacts with the driver through tactile, visual, and auditory information to stimulate driver focus. Ji experimentally verified that tactile seats effectively enhance driver focus during driving, improving safety and smoothness during human-machine driving-authority switching (<xref ref-type="bibr" rid="ref8">Ji et al., 2011</xref>), and Forster used voice prompts and warning sounds to alert drivers to upcoming authority switches (<xref ref-type="bibr" rid="ref6">Forster and Naujoks, 2017</xref>). These methods aim to enhance mutual perception between humans and machines, using perceptual information to promote and assist the emerging trend of remote collaborative control between drivers and robots more effectively.</p>
<p>Based on the above discussion, this paper considers how to quantitatively analyze the state constraints between humans and robots in remote control mode, assisting drivers in forming reasonable human-robot collaborative control commands. In particular, we emphasize the geometric motion constraints of hexapod robots on irregular terrain and the fatigue-state constraints of drivers. Using these two types of human-robot constraint conditions, we filter the feasible set of all human-robot collaborative control command solutions. The driver then selects from the recommended human-robot command combinations and issues them to the robot, greatly reducing the driver&#x2019;s burden and enhancing the safety and efficiency of remote collaboration. The remainder of this paper is organized as follows: Section 2 proposes the mapping process framework from human-robot decision target values to command combinations. Section 3 quantifies the geometric motion constraints of hexapod robots on irregular terrain and the fatigue-state constraints of drivers. Experimental investigations are conducted in Section 4.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Method for generating command combinations from human-robot decision target values</title>
<sec id="sec3">
<label>2.1</label>
<title>Framework of overall process</title>
<p>For robots performing tasks in unstructured terrain environments, the complexity of behavioral decision-making and control by remote operators is a crucial issue that cannot be ignored. In particular, unlike structurally simple conventional wheeled robots, hexapod robots have as many as 18 degrees of freedom. If these are controlled one by one, a heavy driving burden is imposed on the driver and the overall motion coordination of the robot is significantly reduced. During phases of command issuance with a high control workload, it is particularly necessary to use the intelligent system carried by the robot to assist in rapid and efficient command issuance, thereby reducing the driver&#x2019;s workload.</p>
<p>Our team recorded and summarized the real-time decision-making and control processes of highly experienced hexapod robot drivers through a large number of experiments. It was found that both drivers and robot decision intelligence tend to focus on the top-level decision-making of hexapod robot motion behavior, specifically the robot&#x2019;s target walking distance, walking speed, and walking direction at the next moment, forming decision goal values mutually recognized by humans and machines. The driver or the robot&#x2019;s intelligent system then decomposes and maps these decision goal values into corresponding specific control commands. In this process, the driver formulates instructions mentally based on the observed environment, the robot&#x2019;s state information, and driving experience, and implements them by operating external hardware devices; the robot, in turn, establishes theoretical formulas based on its kinematic characteristics to autonomously calculate positions and speeds at the bottom execution layer and generate instructions.</p>
<p>Specifically, as shown in <xref ref-type="fig" rid="fig1">Figure 1</xref>, this article outlines the main steps in the process from behavior decision goal values to recommended selectable human-robot command combinations as follows: (1) confirming and inputting behavior decision goal values; (2) mapping and calculating all human-robot commands from the decision goal values to form a feasible set of human-robot commands, covering all four human-robot collaborative modes (driver control, human-primary/machine-auxiliary, machine-primary/human-auxiliary, and robot autonomous mode); (3) filtering the command combinations in the feasible set based on command constraints, which include geometric motion constraints of the robot and driver fatigue constraints; (4) after constraint-based filtering, outputting the recommended human-robot command combinations to assist the driver in control.</p>
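<p>The four-step process above can be read as a small generate-filter-recommend pipeline. The following Python sketch is illustrative only: the command fields, mode names, and constraint thresholds are assumptions introduced for this example, not the authors&#x2019; implementation.</p>

```python
# Sketch of the mapping pipeline: decision goal values -> feasible set of
# human-robot commands -> constraint filtering -> recommended combinations.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Command:
    mode: str          # one of the four collaborative modes
    gait: str
    stride_m: float    # step stride (m)
    period_s: float    # gait period (s)

def feasible_set(target_distance_m: float, target_speed_ms: float) -> List[Command]:
    """Step (2): enumerate candidate commands realizing the decision goal values."""
    candidates = []
    for mode in ("driver", "human-primary", "machine-primary", "autonomous"):
        for stride in (0.10, 0.15, 0.20):
            period = stride / target_speed_ms  # period that yields the target speed
            candidates.append(Command(mode, "tripod", stride, period))
    return candidates

def recommend(candidates: List[Command],
              constraints: List[Callable[[Command], bool]]) -> List[Command]:
    """Step (3): keep only commands satisfying every human-robot state constraint."""
    return [c for c in candidates if all(ok(c) for ok in constraints)]

# Example constraints (assumed values): a geometric stride limit, and a fatigue
# rule that excludes the driver-only mode when driver fatigue is high.
geometric_ok = lambda c: c.stride_m <= 0.18
fatigue = 0.8  # assumed normalized driver-fatigue level
fatigue_ok = lambda c: not (fatigue > 0.7 and c.mode == "driver")

shortlist = recommend(feasible_set(2.0, 0.3), [geometric_ok, fatigue_ok])
```

The shortlist is what step (4) would present to the driver for final selection and online modification.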
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Mapping process framework of generating command combinations.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g001.tif"/>
</fig>
<p>For example, when a robot is moving on sloped terrain, a command combination generally includes the selection command for the robot&#x2019;s gait type and commands for gait period, step stride, step stroke, and body posture adjustment. Moreover, each command in the combination corresponds to a specific recommended value and the authority for human-robot modification. The primary function of a command combination is therefore to provide the human operator with the types, values, and permissions of the recommended commands. Additionally, the driver can modify commands online in real time before the robot carries them out.</p>
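<p>One command combination of this kind can be represented as a set of (value, permission) pairs. The field names and values below are assumptions made for illustration, not the system&#x2019;s actual data format.</p>

```python
# Illustrative command combination for sloped terrain: each command carries a
# recommended value plus the human-robot modification permission.
command_combination = {
    "gait_type":    {"value": "tripod",     "editable_by_driver": True},
    "gait_period":  {"value": 1.2,          "editable_by_driver": True},   # s
    "step_stride":  {"value": 0.15,         "editable_by_driver": True},   # m
    "step_stroke":  {"value": 0.08,         "editable_by_driver": False},  # m
    "body_posture": {"value": "pitch+5deg", "editable_by_driver": False},
}

def apply_driver_edit(combo: dict, name: str, new_value) -> bool:
    """Apply an online driver modification only if the permission allows it."""
    entry = combo[name]
    if entry["editable_by_driver"]:
        entry["value"] = new_value
        return True
    return False  # command stays locked to the recommended value

edited = apply_driver_edit(command_combination, "step_stride", 0.12)  # allowed
locked = apply_driver_edit(command_combination, "step_stroke", 0.10)  # refused
```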
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Hexapod robot motion characteristics</title>
<p>Unlike drivers, who rely on experience to generate control commands, machine intelligence needs to establish a kinematic model based on the robot&#x2019;s motion characteristics to generate control commands. The physical prototype of the hexapod mobile robot is shown in <xref ref-type="fig" rid="fig2">Figure 2</xref>; it is an insect-like, electrically driven multi-legged robot. The robot mainly consists of a body and six legs. The body is hexagonal, with the six legs evenly distributed along its sides. Each leg has three degrees of freedom and is composed of a coxa segment, a thigh segment, and a shank segment. The coxa segment is connected to the body via a base joint, the thigh segment is connected to the coxa segment by a hip joint, and the shank segment is connected to the thigh segment by a knee joint. The robot&#x2019;s foot is rigidly connected to the end of the shank segment. Each of these rotating joints is driven by a motor.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Hexapod robot physical prototype.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g002.tif"/>
</fig>
<p><inline-formula>
<mml:math id="M1">
<mml:msub>
<mml:mi>&#x03A3;</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Z</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, <inline-formula>
<mml:math id="M2">
<mml:msub>
<mml:mi>&#x03A3;</mml:mi>
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Z</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, and <inline-formula>
<mml:math id="M3">
<mml:msub>
<mml:mi>&#x03A3;</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:msub>
<mml:mi>Z</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> represent the global, body and single-leg coordinate systems, respectively. The base joint angle is denoted by <inline-formula>
<mml:math id="M4">
<mml:mi>&#x03B1;</mml:mi>
</mml:math>
</inline-formula>, the hip joint angle is denoted by <inline-formula>
<mml:math id="M5">
<mml:mi>&#x03B2;</mml:mi>
</mml:math>
</inline-formula>, and the knee joint angle is denoted by <inline-formula>
<mml:math id="M6">
<mml:mi>&#x03B3;</mml:mi>
</mml:math>
</inline-formula>. The length of the coxa segment is represented by <inline-formula>
<mml:math id="M7">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. The length of the thigh segment is represented by <inline-formula>
<mml:math id="M8">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, and the length of the shank segment is represented by <inline-formula>
<mml:math id="M9">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. The vertical height from the body&#x2019;s centroid to the ground is denoted by <italic>H</italic>. The forward kinematics and inverse kinematics models of a single leg of the hexapod mobile robot can be determined by <xref ref-type="disp-formula" rid="EQ1">Formula (1)</xref> and <xref ref-type="disp-formula" rid="EQ2">Formula (2)</xref>:</p>
<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M10">
<mml:mo stretchy="true">{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mmultiscripts>
<mml:mi>P</mml:mi>
<mml:mi>X</mml:mi>
<mml:none/>
<mml:mprescripts/>
<mml:mi>L</mml:mi>
</mml:mmultiscripts>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mo>cos</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x03B3;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mo>sin</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mmultiscripts>
<mml:mi>P</mml:mi>
<mml:mi>Y</mml:mi>
<mml:none/>
<mml:mprescripts/>
<mml:mi>L</mml:mi>
</mml:mmultiscripts>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi mathvariant="normal">c</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mo>cos</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x03B3;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mmultiscripts>
<mml:mi>P</mml:mi>
<mml:mi>Z</mml:mi>
<mml:none/>
<mml:mprescripts/>
<mml:mi>L</mml:mi>
</mml:mmultiscripts>
<mml:mo>=</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>H</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>sin</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.25em"/>
<mml:mo>cos</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x03B3;</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
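<p>As a numeric reading of Formula (1), the forward kinematics can be transcribed term by term as follows. This sketch assumes the term grouping exactly as printed, and treats the symbol <italic>L<sub>B</sub></italic> appearing in the sum with the shank length as a foot offset; both are assumptions of this illustration rather than the authors&#x2019; code.</p>

```python
import math

# Term-by-term transcription of Formula (1): foot position in the single-leg
# frame from joint angles (alpha, beta, gamma) and link lengths.
def forward_kinematics(alpha, beta, gamma, L_c, L_t, L_s, L_B, H):
    # Shared shank/foot term (L_s + L_B) * cos(gamma + beta) from Formula (1)
    reach = (L_s + L_B) * math.cos(gamma + beta)
    p_x = L_c + L_t * math.cos(beta) + reach * math.sin(alpha)
    p_y = L_c + L_t * math.cos(beta) + reach * math.cos(alpha)
    p_z = -H + L_t * math.sin(beta) + reach
    return p_x, p_y, p_z

# Example with assumed link lengths (m) and body height H = 0.2 m, all joints
# at zero: the leg extends straight out in the leg frame.
p = forward_kinematics(0.0, 0.0, 0.0, L_c=0.05, L_t=0.10, L_s=0.12, L_B=0.02, H=0.2)
```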
<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M11">
<mml:mo stretchy="true">{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mi>&#x03B1;</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo>arctan</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>X</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>Y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo>arctan</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>Y</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>Z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mo>arcsin</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>Z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>Z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mspace width="0.25em"/>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>S</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>b</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:msup>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mn>0</mml:mn>
<mml:mi>Z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>Z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>&#x03B3;</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mo>sin</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mspace width="0.25em"/>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mo>sin</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</sec>
</sec>
<sec id="sec5">
<label>3</label>
<title>Quantitative methods for human-robot state constraints</title>
<sec id="sec6">
<label>3.1</label>
<title>Geometric motion constraints of the robot</title>
<p>To ensure the safety of hexapod robots walking in complex terrain environments, specific constraints must be imposed on the commands generated for both the robot and the driver according to terrain features. This article establishes geometric constraint models between terrain and joint space for sloped terrain, obstacle terrain, and ditch terrain, thereby ensuring that the robot&#x2019;s joint motion space remains within a safe range. These models include constraint equations for joint motion based on terrain feature values, yielding constraints on the target step stride, step stroke, pitch angle, and roll angle for the robot&#x2019;s body pose changes.</p>
<p>Specifically, because terrain geometry features can greatly affect the robot&#x2019;s joint motion space, the control process requires real-time monitoring of the joints&#x2019; safe working space to prevent issues such as joint positions exceeding their limits, instability and overturning during movement, and body collisions. In this section, we first utilize the robot&#x2019;s body perception to estimate terrain features, such as the slope of sloped terrain, the dimensions of obstacles in obstacle terrain, and the width of ditches in ditch terrain. Subsequently, based on body collision safety constraints, joint limit constraints, and walking safety constraints, a mathematical model for the joint constraints of the hexapod robot is established. Finally, the constraints on the target commands for the robot&#x2019;s body pose changes are obtained from the constraints that the terrain imposes on the joints. This rationalizes the feasible command set for human-robot instructions, narrowing the range of recommended commands while improving their rationality, so that both the driver and the robot intelligence can use the feasible command set to improve the efficiency and safety of controlling the robot&#x2019;s movement.</p>
<sec id="sec7">
<label>3.1.1</label>
<title>Geometric constraint model for sloped terrain</title>
<p>When a hexapod robot walks on sloped terrain, it needs to adjust the pitch and roll angles of its body as well as the step length in real time to adapt to the changing terrain based on the estimated slope of the ground and joint constraints. Specifically, when the robot is traversing sloped terrain, constraints need to be established based on the joint&#x2019;s extreme positions or potential interference between the robot&#x2019;s body and the geometric terrain, in order to obtain constraints for the numerical values of the hexapod robot&#x2019;s motion commands.</p>
<p>Specifically, during the uphill process, in order to ensure the stability margin of the hexapod robot, a uniformly distributed standing method is adopted, as shown in <xref ref-type="fig" rid="fig3">Figure 3</xref>. When the terrain slope is steep, the knee joints of the front and rear legs will reach their limit positions. Therefore, by establishing geometric constraints on the height from the body to the slope surface, the geometric relationship between the terrain and the robot&#x2019;s body can be mapped. The geometric relationships between the joints and the ground during the transition phase from flat ground to a slope for the hexapod robot are shown in <xref ref-type="fig" rid="fig3">Figure 3</xref>. The height of the front leg base joint position from the slope surface is determined by the knee joint&#x2019;s limit position and the walking step length. The defined limit height of the base joint from the slope surface is denoted as <inline-formula>
<mml:math id="M12">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula>, where the vertical distance is the length of segment AB. According to forward kinematic analysis, the limit position <inline-formula>
<mml:math id="M13">
<mml:msub>
<mml:mi>&#x03B3;</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula> of the knee joint will mainly affect the value of <inline-formula>
<mml:math id="M14">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula>. Based on the limit height of <inline-formula>
<mml:math id="M15">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula>, the limit value of the knee joint position can be determined by <xref ref-type="disp-formula" rid="EQ3">Formula (3)</xref>:</p>
<disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M16">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>&#x03B3;</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>&#x03C0;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mo>sin</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03BB;</mml:mi>
<mml:mo>tan</mml:mo>
<mml:mi>&#x03B8;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03BB;</mml:mi>
<mml:mo>tan</mml:mo>
<mml:mi>&#x03B8;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mo>sin</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03BB;</mml:mi>
<mml:mo>tan</mml:mo>
<mml:mi>&#x03B8;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03BB;</mml:mi>
<mml:mo>tan</mml:mo>
<mml:mi>&#x03B8;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:msub>
<mml:mi>z</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M17">
<mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mtext>max</mml:mtext>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo>arg</mml:mo>
<mml:mo>max</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x03B3;</mml:mi>
<mml:mfenced open="(" close=")" separators=",,">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mi>&#x03B8;</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Robot in slope terrain.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g003.tif"/>
</fig>
<p>Since the limit position of the knee joint depends on the robot&#x2019;s leg mechanical structure and joint motor limits, it is a fixed value. According to <xref ref-type="disp-formula" rid="EQ3">Formula (3)</xref>, the limit height of the base joint above the slope surface, the robot&#x2019;s real-time step stride, and the slope angle determine the real-time position of the knee joint. Because the limit height of the base joint above the slope surface is predetermined for safety reasons, and the slope angle is estimated from the robot&#x2019;s body perception, the real-time step length is the key factor determining the knee joint position in real time. To ensure that the knee joint does not exceed its maximum set position, the real-time step length must not exceed a maximum limit value, as shown in <xref ref-type="disp-formula" rid="EQ4">Formula (4)</xref>. Establishing the maximum real-time step length for a hexapod robot walking uphill imposes practical constraints on step length. This improves the effectiveness of the instruction sets used by both the driver and the robot for controlling robot motion, enhancing human-robot interaction during driving.</p>
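The relationship between step length and knee angle in Formulas (3) and (4) can be sketched numerically: evaluate the knee angle from the base-to-foot geometry, then search a grid of step lengths for the largest one that keeps the knee within its limit. This is a minimal illustration only; the link lengths, the planar simplification, and all numeric values are placeholder assumptions, not the paper's parameters.

```python
import math

# Hypothetical link lengths (m): coxa (base), femur, tibia.
L_C, L_S, L_T = 0.10, 0.30, 0.50
ALPHA = 0.0  # hip abduction angle used in Formula (3), assumed zero here

def knee_angle(h_lim, lam, theta, p_lz=0.0):
    """Knee angle per Formula (3): gamma = pi - asin(A) - asin(B), where both
    arcsin arguments share the base-to-foot distance d."""
    dz = h_lim - lam * math.tan(theta) - p_lz          # (h_lim - lambda*tan(theta) - P_Lz)
    dx = p_lz / math.cos(ALPHA) - L_C                  # (P_Lz / cos(alpha) - L_c)
    d = math.hypot(dz, dx)
    a = d / (2 * L_S) + (L_S**2 - L_T**2) / (2 * L_S * d)
    b = d / (2 * L_T) + (L_S**2 - L_T**2) / (2 * L_T * d)
    clamp = lambda v: max(-1.0, min(1.0, v))           # guard asin's domain
    return math.pi - math.asin(clamp(a)) - math.asin(clamp(b))

def max_step_length(h_lim, theta, gamma_max):
    """Grid-search analogue of Formula (4): largest step length whose knee
    angle stays within gamma_max."""
    grid = [i * 0.005 for i in range(1, 101)]          # 5 mm .. 0.5 m
    feasible = [lam for lam in grid
                if knee_angle(h_lim, lam, theta) <= gamma_max]
    return max(feasible) if feasible else None
```

In practice the grid search would be replaced by the closed-form limit, but it makes the monotone dependence of the knee angle on step length easy to inspect.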
</sec>
<sec id="sec8">
<label>3.1.2</label>
<title>Geometric constraint model for obstacle terrain</title>
<p>Obstacle terrain is the most common non-flat terrain encountered by hexapod robots in complex outdoor environments. According to the geometric dimensions of the obstacles, obstacle terrain can be divided into two categories: obstacles that can be crossed (the obstacle width is less than the leg support width, and the obstacle height is lower than the robot&#x2019;s standing height), as shown in <xref ref-type="fig" rid="fig4">Figure 4</xref>; and obstacles that can be climbed (the obstacle width is greater than the leg support width, and the obstacle height is lower than the robot&#x2019;s standing height), as shown in <xref ref-type="fig" rid="fig5">Figure 5</xref>. For obstacles that can be crossed, the long thigh and shank joints allow the robot&#x2019;s standing height to be raised above the obstacle, so a normal walking gait can pass through the obstacle terrain smoothly; when the robot&#x2019;s standing height is greater than the obstacle height, the leg posture can be adjusted to achieve a new body standing height. The constraints that need to be satisfied in this state are shown in <xref ref-type="disp-formula" rid="EQ5">Formula (5)</xref>.</p>
<disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M18">
<mml:mfenced open="{" close="}">
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>&#x003E;</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>sin</mml:mo>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x03B3;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>sin</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>&#x003E;</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:math>
</disp-formula>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Obstacle that can be crossed.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g004.tif"/>
</fig>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Obstacle that can be climbed.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g005.tif"/>
</fig>
<p>Where <inline-formula>
<mml:math id="M19">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the length of the hexapod robot&#x2019;s leg base joint; <inline-formula>
<mml:math id="M20">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the length of the hexapod robot&#x2019;s leg tibia joint; <inline-formula>
<mml:math id="M21">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the length of the hexapod robot&#x2019;s leg femur joint; <inline-formula>
<mml:math id="M22">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>B</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the width of the hexapod robot&#x2019;s body, <inline-formula>
<mml:math id="M23">
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:math>
</inline-formula> represents the width of a local obstacle in the terrain environment, <inline-formula>
<mml:math id="M24">
<mml:mi>h</mml:mi>
</mml:math>
</inline-formula> represents the height of a local obstacle in the terrain environment, and <inline-formula>
<mml:math id="M25">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the safety distance between the hexapod robot&#x2019;s leg base joint and the obstacle.</p>
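Using the symbols just defined, the two conditions of Formula (5) can be sketched as a feasibility check: the body plus leg span must straddle the obstacle width, and the raised legs must clear the obstacle height plus the safety margin. The default dimensions below are illustrative placeholders, not the robot's actual parameters.

```python
import math

def can_cross_obstacle(w1, h, beta, gamma,
                       L_B=0.40, L_c=0.10, L_s=0.30, L_t=0.50, h_s=0.05):
    """Check both inequalities of Formula (5) for a crossable obstacle.
    beta and gamma are the relevant leg joint angles (rad)."""
    # L_B + 2*L_c + 2*L_t*cos(beta) > w1  (straddle the obstacle width)
    straddle_ok = L_B + 2 * L_c + 2 * L_t * math.cos(beta) > w1
    # L_s*sin(gamma - beta) - L_s*sin(beta) > h + h_s  (height clearance)
    clearance_ok = L_s * math.sin(gamma - beta) - L_s * math.sin(beta) > h + h_s
    return straddle_ok and clearance_ok
```

A command-filtering layer could call this check before recommending a normal walking gait over the obstacle.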
<p>For obstacles in a climbable form, where the obstacle width is greater than the leg&#x2019;s support width and the obstacle height is lower than the robot&#x2019;s maximum standing height, the legs can step onto the obstacle and perform climbing actions. By setting a limit value <inline-formula>
<mml:math id="M26">
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> for the distance between the leg base and the obstacle surface, we can determine the limit value <inline-formula>
<mml:math id="M27">
<mml:msub>
<mml:mi>&#x03B7;</mml:mi>
<mml:mi mathvariant="normal">robot</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> for the body&#x2019;s pitch angle and establish a geometric constraint model between joint space and obstacle terrain. When the robot&#x2019;s front legs land on the obstacle surface, the joint motion space of the front legs is limited, requiring adjustment of the body&#x2019;s pitch angle to adapt to the terrain changes. The constraints that need to be satisfied in this state are shown in <xref ref-type="disp-formula" rid="EQ6">Formula (6)</xref>:</p>
<disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M28">
<mml:mfenced open="{" close="}">
<mml:mtable columnalign="left">
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>sin</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mi>&#x03C0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B3;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x003E;</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr columnalign="left">
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>&#x03B7;</mml:mi>
<mml:mi mathvariant="normal">robot</mml:mi>
</mml:msub>
<mml:mo>&#x003E;</mml:mo>
<mml:mo>arccos</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mfrac>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mi>l</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>r</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mfrac>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>Therefore, for climbable obstacles, the motion instructions of the hexapod robot must adhere to the above constraints, which provide reasonable and effective limits on the pitch angle for both the driver and the robot intelligence when utilizing the feasible instruction set for robot motion control. This enhances the effectiveness of the feasible instruction set in assisting human-machine interaction during driving and control.</p>
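A direct reading of Formula (6) can be sketched as the check below. The link lengths and safety margin are placeholders, and the quantities l and r follow the paper's notation without further interpretation; the arccos argument is clamped defensively since it is not guaranteed to lie in [-1, 1] for arbitrary inputs.

```python
import math

def climb_constraints_ok(h, beta, gamma, eta_robot, l, r,
                         L_s=0.30, L_t=0.50, h_s=0.05):
    """Check both conditions of Formula (6) for a climbable obstacle.
    eta_robot is the body pitch angle (rad); all defaults are assumptions."""
    # L_t*sin(beta) + L_s*cos(pi/2 - beta - gamma) > h + h_s  (front-leg reach)
    reach_ok = (L_t * math.sin(beta)
                + L_s * math.cos(math.pi / 2 - beta - gamma)) > h + h_s
    # eta_robot > arccos(sqrt(l^2 - r^2) / h_s)  (pitch-angle limit)
    arg = math.sqrt(l * l - r * r) / h_s
    pitch_ok = eta_robot > math.acos(max(-1.0, min(1.0, arg)))
    return reach_ok and pitch_ok
```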
</sec>
<sec id="sec9">
<label>3.1.3</label>
<title>Geometric constraint model for ditch terrain</title>
<p>Based on their geometric dimensions, ditch terrain can be divided into two categories: ditches that can be crossed in a single step, where the width of the channel is less than the robot&#x2019;s single support width; and ditches that must be crossed in multiple steps, where the width of the channel is greater than the robot&#x2019;s single support width. For ditches that can be crossed in a single step, the robot can increase its step length to cross the channel autonomously, as shown in <xref ref-type="fig" rid="fig6">Figure 6</xref>. The constraints that need to be satisfied in this state are shown in <xref ref-type="disp-formula" rid="EQ7">Formula (7)</xref>:</p>
<disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M29">
<mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mtext>min</mml:mtext>
</mml:msub>
<mml:mo>&#x003E;</mml:mo>
<mml:mi>w</mml:mi>
</mml:math>
</disp-formula>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Ditch that can be crossed in a single step.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g006.tif"/>
</fig>
<p>Where <inline-formula>
<mml:math id="M30">
<mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mtext>min</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula> represents the real-time minimum step length of the hexapod robot, and <inline-formula>
<mml:math id="M31">
<mml:mi>w</mml:mi>
</mml:math>
</inline-formula> represents the width of the channel.</p>
<p>For ditches that can be crossed in multiple steps, where the width of the channel is greater than the robot&#x2019;s single support width, the crossing cannot be completed with a single adjustment. However, the robot can achieve the crossing by making multiple adjustments with its legs. In this case, the supporting legs need to take larger steps, which may cause a joint to reach its limit position, as shown in <xref ref-type="fig" rid="fig7">Figure 7</xref>. The constraints that need to be satisfied in this state are shown in <xref ref-type="disp-formula" rid="EQ8">Formula (8)</xref>:</p>
<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M32">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>cos</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
<mml:mo>sin</mml:mo>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mi>&#x03C0;</mml:mi>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B2;</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x03B3;</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
<mml:mo>&#x003E;</mml:mo>
<mml:mi>w</mml:mi>
</mml:math>
</disp-formula>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Ditch that can be crossed in multiple steps.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g007.tif"/>
</fig>
<p>Where <inline-formula>
<mml:math id="M33">
<mml:msub>
<mml:mi>&#x03B3;</mml:mi>
<mml:mtext>lim</mml:mtext>
</mml:msub>
</mml:math>
</inline-formula> represents the knee joint limit value, <inline-formula>
<mml:math id="M34">
<mml:msub>
<mml:mi>&#x03BB;</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> represents the real-time dynamic step length of the hexapod robot, and <inline-formula>
<mml:math id="M35">
<mml:mi>w</mml:mi>
</mml:math>
</inline-formula> represents the width of the channel.</p>
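The left-hand side of Formula (8) gives the largest channel width the supporting leg can span at the knee joint limit, which can be sketched as follows; the link lengths are placeholder assumptions.

```python
import math

def max_ditch_width(beta, gamma_lim, lam_D,
                    L_c=0.10, L_s=0.30, L_t=0.50):
    """Left-hand side of Formula (8): the largest channel width w the
    supporting leg can span, given the knee joint limit gamma_lim."""
    return (L_s + L_t * math.cos(beta)
            - L_c * math.sin(math.pi / 2 - beta - gamma_lim) - lam_D)

def can_cross_ditch(w, beta, gamma_lim, lam_D):
    """Formula (8) as a predicate: the reachable span must exceed w."""
    return max_ditch_width(beta, gamma_lim, lam_D) > w
```

When `can_cross_ditch` fails, the leg and body positions would be readjusted and the check repeated, matching the multi-step crossing procedure described in the text.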
<p>Through the above equation, the robot&#x2019;s real-time dynamic maximum step stride can be calculated. When a single leg reaches its maximum step stride and still cannot cross the channel, the positions of each leg and the body must be readjusted before the crossing is retried. Based on the geometric constraint model of the channel terrain above, the robot can autonomously adjust its steps within the range of the leg joint limit positions to perform the crossing action. Moreover, when encountering obstacles that cannot be overcome, or when the landing area is complex and a suitable landing point must be found, the robot&#x2019;s external visual perception can be used to model the terrain and detect landing points. By modeling the terrain with external visual sensors and approximating the robot&#x2019;s envelope as a virtual body model, obstacle detection and avoidance are carried out based on artificial potential field methods. Furthermore, by analyzing the ruggedness of the terrain, the terrain height, and the area of safe landing zones from the visual information, an evaluation function for the terrain is established to select landing points, avoiding instability caused by walking on hazardous terrain.</p>
<p>Therefore, for ditches that can be crossed in multiple steps, the motion commands for the hexapod robot should prioritize the constraints mentioned above. This effectively realizes collaboration between the driver and the machine intelligence when using the feasible command set for robot motion control, providing reasonable and effective constraints on the dynamic step length, and it enhances the effectiveness of the feasible command set in assisting human-machine collaboration in driving and control tasks.</p>
</sec>
</sec>
<sec id="sec10">
<label>3.2</label>
<title>Driver fatigue constraint</title>
<p>Due to their inherent stability under high load and their ability to maneuver in extreme environments, hexapod robots are better suited than other types of mobile robots to performing tasks in complex environments. To ensure the passability and safety of hexapod robots in complex, unknown environments, the robot&#x2019;s motion behavior is often remotely operated and controlled through human-robot collaboration. However, the redundancy of the robot&#x2019;s control degrees of freedom and the complexity of environmental tasks impose a significant burden on remote operators. This not only significantly affects operator comfort but also harms the safety and efficiency of the hexapod robot&#x2019;s movement.</p>
<p>Therefore, it is necessary to assess the driver&#x2019;s fatigue status in real-time to determine the optimal human-robot collaborative control mode, which can then be used to optimize the combined form of control commands for humans and robots. For example, when the driver is not fatigued or only mildly fatigued, the control system can switch to manual control mode, allowing the driver to participate in the position control of the hexapod robot&#x2019;s single leg, foot end, and joints. When the driver is moderately fatigued, the control system can switch to human primary and machine auxiliary mode, enabling the driver to participate in the control of the hexapod robot&#x2019;s body posture and gait parameters while disabling manual control mode. In cases of severe fatigue, the control system can switch to machine primary and human auxiliary mode, where the driver is only required to monitor and intervene in emergency situations concerning the hexapod robot.</p>
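<p>The mode-switching logic described above can be sketched as a simple threshold rule. The following is a minimal illustrative sketch, assuming a normalized fatigue score in [0, 1]; the threshold values, the function name, and the mode labels are hypothetical and are not taken from the paper.</p>

```python
# Hypothetical sketch of the fatigue-based mode arbitration described in the text.
# Thresholds and mode names are illustrative assumptions, not values from the paper.

def select_control_mode(fatigue_score: float) -> str:
    """Map a normalized driver-fatigue score in [0, 1] to a collaboration mode."""
    if fatigue_score < 0.3:       # not fatigued or only mildly fatigued
        return "manual"           # driver controls single leg, foot end, joints
    elif fatigue_score < 0.7:     # moderately fatigued
        return "human_primary"    # driver controls body posture and gait parameters
    else:                         # severely fatigued
        return "machine_primary"  # driver only monitors and intervenes
```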
<p>As shown in <xref ref-type="fig" rid="fig8">Figure 8</xref>, for the quantitative analysis of driver arm fatigue, this paper designs a framework for quantifying upper limb fatigue. The main process includes: real-time collection of the driver&#x2019;s raw electromyography signals from the upper limbs using a myoelectric armband; preprocessing the raw electromyography signals of the upper limbs using data processing methods to extract feature signals; training a BP neural network using the feature signals and the driver&#x2019;s subjective fatigue values as training samples, thereby ultimately establishing and utilizing a neural network model for real-time assessment of driver arm fatigue.</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>The framework for quantifying driver upper limb fatigue.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g008.tif"/>
</fig>
<p>Specifically, in the process of collecting the driver&#x2019;s raw electromyography signals from the upper limbs, considering that electromyography signals are the electrophysiological signals generated when muscle tissue contracts, this paper collects 8-channel electromyography signal data using the gForcePro+ myoelectric acquisition armband. In the preprocessing stage of the raw signals, to improve the accuracy and anti-interference ability of the data, the sampling frequency is set to 1,000&#x2009;Hz, and methods such as linear noise elimination, band-pass filtering, and moving average filtering are used to preprocess the original sEMG signals. This stage involves five sub-processes: first, removal of the DC component (linear noise); second, full-wave rectification of the denoised signal; third, band-pass filtering of the rectified signal; fourth, normalization of the filtered signal; and fifth, moving average envelope processing of the normalized signal using a 50-sample moving window. The main formulas and their meanings involved in each process are as follows.</p>
<p>Sub-process 1: Denoising of the original signal by subtracting the mean amplitude of the signal from the signal amplitude within the window, as shown in <xref ref-type="disp-formula" rid="EQ9">Formula (9)</xref>:</p>
<disp-formula id="EQ9">
<label>(9)</label>
<mml:math id="M36">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>N</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>Where <inline-formula>
<mml:math id="M37">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represents the sEMG value with linear noise removed; <inline-formula>
<mml:math id="M38">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represents the value of the original sEMG signal; <inline-formula>
<mml:math id="M39">
<mml:mi>N</mml:mi>
</mml:math>
</inline-formula> represents the sampling window size; <inline-formula>
<mml:math id="M40">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> represents the instantaneous moment of processing the sEMG signal.</p>
<p>Sub-process 2: Full-wave rectification of the signal obtained from sub-process 1, as shown in <xref ref-type="disp-formula" rid="EQ10">Formula (10)</xref>:</p>
<disp-formula id="EQ10">
<label>(10)</label>
<mml:math id="M41">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">abs</mml:mi>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>Where <inline-formula>
<mml:math id="M42">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represents the amplitude of the sEMG signal after full-wave rectification, where the abs operation ensures that the amplitude is entirely non-negative; N represents the sampling window size; <inline-formula>
<mml:math id="M43">
<mml:mi>i</mml:mi>
</mml:math>
</inline-formula> represents the instantaneous moment of processing the sEMG signal.</p>
<p>Sub-process 3: Using a 4th-order Butterworth band-pass filter to limit the frequency to the range of 30&#x2013;100&#x2009;Hz, the signal is filtered to remove high-frequency noise. This mainly involves processing the amplitude of the denoised signal, as shown in <xref ref-type="disp-formula" rid="EQ11">Formula (11)</xref>:</p>
<disp-formula id="EQ11">
<label>(11)</label>
<mml:math id="M44">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mi mathvariant="italic">filter</mml:mi>
<mml:mspace width="0.25em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi mathvariant="italic">cut</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>Sub-process 4: Normalizing the sEMG signal obtained from sub-process 3, as shown in <xref ref-type="disp-formula" rid="EQ12">Formula (12)</xref>:</p>
<disp-formula id="EQ12">
<label>(12)</label>
<mml:math id="M45">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo stretchy="true">/</mml:mo>
<mml:mi>M</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>C</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>Where <inline-formula>
<mml:math id="M46">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represents the signal amplitude, and MVC represents the maximum voluntary contraction strength of the muscle.</p>
<p>Sub-process 5: Smoothing the normalized signal, as shown in <xref ref-type="disp-formula" rid="EQ13">Formula (13)</xref>:</p>
<disp-formula id="EQ13">
<label>(13)</label>
<mml:math id="M47">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:munderover>
<mml:msup>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:math>
</disp-formula>
<p>Where <inline-formula>
<mml:math id="M48">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mi>i</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represents the amplitude of the signal after processing, and <inline-formula>
<mml:math id="M49">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> represents the time difference value.</p>
<p>The sEMG signal obtained through the above five processing steps can directly reflect the changing characteristics of the sEMG signal, including the linear variation pattern of the sEMG signal amplitude.</p>
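<p>The five sub-processes of Formulas (9)&#x2013;(13) can be sketched in Python as follows. This is a minimal sketch using NumPy and SciPy, assuming the input is a single analysis window of the signal sampled at 1,000&#x2009;Hz; the function name and the <monospace>mvc</monospace> and <monospace>win</monospace> parameter defaults are illustrative.</p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_semg(s0, fs=1000, band=(30.0, 100.0), mvc=1.0, win=50):
    """Five-step sEMG preprocessing sketch following Formulas (9)-(13)."""
    s0 = np.asarray(s0, dtype=float)
    # 1) remove the DC component (linear noise): subtract the window mean
    s1 = s0 - s0.mean()
    # 2) full-wave rectification
    s2 = np.abs(s1)
    # 3) 4th-order Butterworth band-pass filter, 30-100 Hz
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    s3 = filtfilt(b, a, s2)
    # 4) normalize by the maximum voluntary contraction (MVC)
    s4 = s3 / mvc
    # 5) moving RMS envelope over a 50-sample window
    s5 = np.sqrt(np.convolve(s4 ** 2, np.ones(win) / win, mode="same"))
    return s5
```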
<p>After the original electromyographic (EMG) signals are collected, it is necessary to preprocess the EMG signals and extract features based on the processed signals. The purpose is to extract components of the EMG signals that can reflect the degree of fatigue. Different degrees of fatigue have their own characteristics, and the more representative the feature selection, the more accurate the pattern recognition. Based on the common time-domain and frequency-domain features of EMG signals and their clinical significance, this study selects four main features&#x2014;mean power frequency (MPF), median frequency (MF), root mean square (RMS), and integrated electromyogram (IEMG)&#x2014;to reflect the muscle&#x2019;s fatigue state.</p>
<p>The time-domain features of muscle fatigue can be used to describe the amplitude changes in electromyographic (EMG) signals during the process of muscle fatigue. Calculating the integrated electromyogram (IEMG) and root mean square (RMS) can visually reflect this change. Let <inline-formula>
<mml:math id="M50">
<mml:mi>y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>t</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula> represent the preprocessed original EMG signal, the calculation formulas are shown in <xref ref-type="disp-formula" rid="EQ14">Formula (14)</xref> and <xref ref-type="disp-formula" rid="EQ15">Formula (15)</xref>:</p>
<disp-formula id="EQ14">
<label>(14)</label>
<mml:math id="M51">
<mml:mi mathvariant="italic">IEMG</mml:mi>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mfenced open="|" close="|">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>t</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:math>
</disp-formula>
<disp-formula id="EQ15">
<label>(15)</label>
<mml:math id="M52">
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>T</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mi>t</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
</mml:msup>
</mml:math>
</disp-formula>
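<p>The two time-domain features can be approximated from a sampled window as follows; a minimal sketch, assuming the integrals in Formulas (14) and (15) are replaced by sums over samples spaced <monospace>dt</monospace> apart.</p>

```python
import numpy as np

def iemg(y, dt=0.001):
    """Integrated EMG: discrete approximation of the integral of |y(t)| dt
    over the window (Formula 14); dt = 1/fs for a 1,000 Hz signal."""
    return float(np.sum(np.abs(y)) * dt)

def rms(y):
    """Root mean square of the windowed signal (discrete form of Formula 15)."""
    return float(np.sqrt(np.mean(np.asarray(y, dtype=float) ** 2)))
```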
<p>The frequency-domain features of electromyographic (EMG) signals are obtained by transforming their time-domain signals into frequency-domain signals using Fourier transform, and then analyzing the signal&#x2019;s power spectrum or frequency spectrum. The selected features in this study are median frequency (MF) and mean power frequency (MPF). Let <inline-formula>
<mml:math id="M53">
<mml:mi>P</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>represent the power spectral density and <inline-formula>
<mml:math id="M54">
<mml:mi>d</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>represent the signal&#x2019;s frequency resolution, the calculation formulas are as shown in <xref ref-type="disp-formula" rid="EQ16">Formula (16)</xref> and <xref ref-type="disp-formula" rid="EQ17">Formula (17)</xref>:</p>
<disp-formula id="EQ16">
<label>(16)</label>
<mml:math id="M55">
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
</mml:math>
</disp-formula>
<disp-formula id="EQ17">
<label>(17)</label>
<mml:math id="M56">
<mml:mi>M</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>F</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mi>f</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x222B;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>f</mml:mi>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
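<p>The two frequency-domain features can be estimated from the power spectral density. The sketch below uses Welch&#x2019;s method from SciPy as the PSD estimator, which is an assumption on our part; the paper does not specify how the power spectrum is computed.</p>

```python
import numpy as np
from scipy.signal import welch

def mf_mpf(y, fs=1000):
    """Median frequency (MF) and mean power frequency (MPF) from the PSD
    (discrete forms of Formulas 16 and 17), using Welch's PSD estimate."""
    f, p = welch(y, fs=fs, nperseg=256)
    cum = np.cumsum(p)
    # MF: frequency that splits the total spectral power in half
    mf = f[np.searchsorted(cum, cum[-1] / 2.0)]
    # MPF: power-weighted mean frequency
    mpf = np.sum(f * p) / np.sum(p)
    return float(mf), float(mpf)
```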
<p>Compared with sEMG analysis of general sports movements, the types of sEMG features involved are all covered by the four features listed in this paper. The difference is that general sports movements have larger amplitude and intensity but are relatively uniform, so sports-related sEMG features show large numerical values but span few feature types, usually 1&#x2013;2 of the four. During the driving operation of a hexapod robot, by contrast, the driver&#x2019;s movements are smaller in amplitude but more diverse and of longer duration, generally involving all four listed features. All four features must therefore be analyzed together to determine the driver&#x2019;s fatigue state.</p>
<p>In order to establish and utilize a neural network model for real-time assessment of driver upper limb fatigue, this study recorded the electromyographic (EMG) signal data of several hexapod robot drivers while they operated the robot, along with their subjective perception of upper limb fatigue. A BP neural network was used to correlate the EMG feature data with the upper limb fatigue data. The subjective fatigue data come from the drivers&#x2019; self-rated fatigue scores, where higher scores indicate higher levels of current fatigue. The EMG feature data, which include the integrated EMG value, median frequency, and root mean square value and which change as the muscles fatigue, were used as inputs, and the participants&#x2019; comprehensive fatigue values were used as outputs for model training. The model consists of an input layer, a hidden layer, and an output layer. A total of 500 sets of data were collected from different participants, with 400 sets chosen for training and 100 sets for testing. The training set includes EMG feature data and drivers&#x2019; subjective fatigue levels. During the training process, the connection weights and thresholds of each layer were calculated to obtain the network model for predicting driver arm fatigue (DAF).</p>
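<p>The training procedure described above can be sketched as a one-hidden-layer network trained by plain back-propagation. The data below are synthetic stand-ins with an assumed toy relation between the four features and the fatigue score; the hidden-layer size, learning rate, and epoch count are illustrative choices, not values from the paper.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 500 samples described in the text:
# inputs are the four sEMG features (MPF, MF, RMS, IEMG), output a fatigue score.
# The linear relation below is an assumed toy mapping for illustration only.
X = rng.uniform(0.0, 1.0, size=(500, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]))[:, None]

X_train, y_train = X[:400], y[:400]      # 400 training sets
X_test, y_test = X[400:], y[400:]        # 100 testing sets

# One hidden layer with sigmoid units, linear output, trained by back-propagation.
H = 8
W1 = rng.normal(0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    h = sigmoid(X_train @ W1 + b1)       # forward pass
    pred = h @ W2 + b2
    err = pred - y_train                 # gradient of the squared error
    gW2 = h.T @ err / len(X_train)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)      # back-propagate through the sigmoid
    gW1 = X_train.T @ dh / len(X_train)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test_mse = float(np.mean((sigmoid(X_test @ W1 + b1) @ W2 + b2 - y_test) ** 2))
```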
<p>Since drivers may still persist in operating the vehicle with mental strength when experiencing muscular fatigue, sEMG features may not reflect the driver&#x2019;s mental fatigue at this time. Considering that the driver&#x2019;s mental fatigue can be reflected by operational error rate and trajectory deviation, this paper utilizes the Driver Manipulation Error Rate (DMER) and Trajectory Offset Rate (TOR) to assist sEMG features in determining the driver&#x2019;s fatigue state together.</p>
<p>Specifically, the Driver Manipulation Error Rate (DMER) is used to describe the rate of inappropriate manipulation by the driver when issuing control commands to the hexapod robot. Let <inline-formula>
<mml:math id="M57">
<mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mn>0</mml:mn>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:msubsup>
</mml:math>
</inline-formula> represent the set of instructions given by the driver in a non-fatigued driving state for a particular terrain, and let <inline-formula>
<mml:math id="M58">
<mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mi>i</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi>k</mml:mi>
</mml:mfenced>
</mml:msubsup>
</mml:math>
</inline-formula> represent the control instructions given in different fatigue states on the same terrain, where <italic>k</italic>&#x2009;=&#x2009;1, 2&#x2026; <italic>N</italic> and <italic>i</italic>&#x2009;=&#x2009;1, 2&#x2026; <italic>n</italic>. <italic>N</italic> represents the number of instruction sets, and <italic>n</italic> represents the number of different fatigue states. The DMER can then be expressed as <xref ref-type="disp-formula" rid="EQ18">Formula (18)</xref>:</p>
<disp-formula id="EQ18">
<label>(18)</label>
<mml:math id="M59">
<mml:mi>M</mml:mi>
<mml:mi>E</mml:mi>
<mml:mi>R</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>k</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mn>0</mml:mn>
<mml:mi>k</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:msubsup>
<mml:mi>I</mml:mi>
<mml:mn>0</mml:mn>
<mml:mi>k</mml:mi>
</mml:msubsup>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x22C5;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
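<p>Formula (18) can be computed directly from the instruction sets; a minimal sketch, assuming the instructions are numeric and the non-fatigued baseline values are non-zero.</p>

```python
import numpy as np

def manipulation_error_rate(I0, I):
    """Formula (18): average relative deviation of the instructions issued in
    n fatigue states (I, shape n x N) from the non-fatigued baseline
    instruction set (I0, shape N)."""
    I0 = np.asarray(I0, dtype=float)
    I = np.asarray(I, dtype=float)
    n, N = I.shape
    return float(np.sum((I - I0) / I0) / (N * n))
```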
<p>Specifically, the Trajectory Offset (TO) can be described as the deviation of the actual path traveled by the hexapod robot from the average trajectory while being driven by the driver. Assuming the sampling interval for the distance traveled by the hexapod robot is T, the average speed of the hexapod robot within this interval is v, and the number of samples is N, with the actual position traveled denoted as S, the trajectory offset TO can be expressed as <xref ref-type="disp-formula" rid="EQ19">Formula (19)</xref>:</p>
<disp-formula id="EQ19">
<label>(19)</label>
<mml:math id="M60">
<mml:mi>T</mml:mi>
<mml:mi>O</mml:mi>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mi>v</mml:mi>
<mml:mi>T</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo stretchy="true">&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>T</mml:mi>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
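<p>Formula (19), as written, can be evaluated from the sampled positions; the function below is an illustrative transcription assuming scalar average speed <monospace>v</monospace> and sampling interval <monospace>T</monospace>.</p>

```python
import numpy as np

def trajectory_offset(S, v, T):
    """Formula (19) as written: vT-weighted deviation of the N sampled
    positions S_j from the vT-scaled mean trajectory, summed over samples."""
    S = np.asarray(S, dtype=float)
    N = len(S)
    mean_term = np.sum(S * v * T) / N   # (sum_i S_i v T) / N
    return float(np.sum(v * T * (S - mean_term)))
```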
</sec>
</sec>
<sec id="sec11">
<label>4</label>
<title>Experiment</title>
<p>In order to enable drivers to remotely control hexapod robots in a human-machine collaborative manner and to validate the effectiveness of the proposed method, this study built a remote human-machine collaborative driving control system based on wearable VR glasses, EMG armbands, and other human-machine interaction devices. Using wireless network signals, drivers can control the robot from an operating platform 20 meters away. Specifically, considering that the perception and feedback loops between humans and machines constrain the efficiency of human-machine collaborative decision-making, and to effectively enhance the depth of human-machine integration, this study processes the robot&#x2019;s visual camera data and transmits it to virtual reality devices, allowing drivers to experience immersive driving from a first-person perspective. Additionally, drivers wear EMG armbands on both arms to monitor upper limb fatigue in real time. The system integrates various control hardware, such as multifunctional joysticks, throttle levers, and touchscreens, greatly enhancing the driver&#x2019;s sense of presence during remote driving control and enabling better collaborative decision-making with the robot. The remote human-machine collaborative driving control system is shown in <xref ref-type="fig" rid="fig9">Figure 9</xref>.</p>
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption>
<p>The remote human-machine collaborative driving control system.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g009.tif"/>
</fig>
<p>Based on the remote human-machine collaborative driving control system described above, this study conducted physical experiments on integrated terrains with obstacles, gullies, and slopes. As shown in <xref ref-type="fig" rid="fig10">Figure 10</xref>, the speed of the robot&#x2019;s center of mass when driven alone through the integrated terrains is recorded (green dashed line). In addition, with the assistance of the auxiliary system proposed in this study, the driver navigates the same terrains and the speed of the robot&#x2019;s center of mass is recorded (blue dashed line). The auxiliary system generates a feasible set of instructions in real time during the robot&#x2019;s travel based on decision-making target values and, after filtering through the constraint conditions, provides real-time recommended human-machine instructions to improve the efficiency of the driver&#x2019;s instruction issuance. The red solid line in the figure represents the maximum limit of the center-of-mass speed prompted by the auxiliary system. It can be observed that from 0 to 18 s, the robot walks on flat terrain, during which the prompted maximum speed limit is 0.25&#x2009;m/s. From 18 to 60 s, the robot moves through obstacle terrain, during which the prompted maximum speed limit decreases to 0.15&#x2009;m/s. From 60 to 78 s, the robot again walks on flat terrain, and the prompted maximum speed limit returns to 0.25&#x2009;m/s. From 78 to 106 s, the robot walks through gully terrain, during which the prompted maximum speed limit decreases to 0.09&#x2009;m/s. From 106 to 122 s, the robot walks on flat terrain, during which the prompted maximum speed limit is 0.25&#x2009;m/s. From 122 to 170 s, the robot walks on uphill terrain, during which the prompted maximum speed limit decreases to 0.09&#x2009;m/s. From 170 to 185 s, the robot walks on flat terrain, and the prompted maximum speed limit returns to 0.25&#x2009;m/s. In summary, when the driver navigates the integrated terrains alone, there is a slight lag in speed control during terrain transitions. With the auxiliary system&#x2019;s maximum speed prompts, however, the driver can adjust the speed promptly before the terrain changes and, knowing the maximum speed limit, can also raise the speed in a reasonable manner in real time, ensuring travel safety and efficiency.</p>
<fig position="float" id="fig10">
<label>Figure 10</label>
<caption>
<p>The velocity of robot in terrain.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g010.tif"/>
</fig>
<p>As shown in <xref ref-type="fig" rid="fig11">Figure 11</xref>, the robot&#x2019;s stride when driven alone through the integrated terrains is recorded (green dashed line). Additionally, with the assistance of the auxiliary system proposed in this study, the driver navigates the same terrains and the robot&#x2019;s stride is recorded (blue dashed line). The red solid lines represent the maximum and minimum limit values prompted by the auxiliary system. It can be observed that from 18 to 60 s, the robot walks on obstacle terrain, during which the maximum stride limit prompted by the auxiliary system is 0.5&#x2009;m and the minimum limit is 0.1&#x2009;m; from 78 to 106 s, the robot walks on gully terrain, during which the maximum stride limit is 0.14&#x2009;m and the minimum limit is 0.1&#x2009;m; from 122 to 170 s, the robot walks on uphill terrain, during which the maximum stride limit is 0.09&#x2009;m. In summary, when the driver navigates the integrated terrains alone, there is a slight lag in stride control during terrain transitions. With the auxiliary system&#x2019;s stride prompts, however, the driver can adjust the stride promptly before the terrain changes and, knowing the real-time stride limits, can increase it in a reasonable manner in real time, further ensuring travel safety and efficiency.</p>
<fig position="float" id="fig11">
<label>Figure 11</label>
<caption>
<p>The leg stride of robot in terrain.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g011.tif"/>
</fig>
<p>As shown in <xref ref-type="fig" rid="fig12">Figure 12</xref>, the robot&#x2019;s step height when driven alone through the integrated terrains is recorded (green dashed line). Additionally, with the assistance of the auxiliary system proposed in this study, the driver navigates the same terrains and the robot&#x2019;s step height is recorded (blue dashed line). The red solid lines represent the maximum and minimum limit values prompted by the auxiliary system. It can be observed that from 18 to 60 s, the robot walks on obstacle terrain, during which the minimum step height limit prompted by the auxiliary system is 0.05&#x2009;m; from 78 to 106&#x2009;s, the robot walks on gully terrain, during which the minimum step height limit is 0.01&#x2009;m; from 122 to 170&#x2009;s, the robot walks on uphill terrain, during which the maximum step height limit is 0.06&#x2009;m. In conclusion, compared to driving alone through the integrated terrains, the auxiliary system&#x2019;s step height prompts allow the driver to reduce the step height in a reasonable manner in real time based on the known limit values, further ensuring travel safety and efficiency.</p>
<fig position="float" id="fig12">
<label>Figure 12</label>
<caption>
<p>The leg stroke of robot in terrain.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g012.tif"/>
</fig>
<p>To further compare the impact of driving alone versus using the driving assistance system on the performance of the hexapod robot, this study utilized two driving evaluation indicators: static stability margin and collision coefficient. Stability during the robot&#x2019;s travel was quantitatively evaluated using the established static stability margin criterion, while collisions were counted by detecting pauses of the swinging legs. As shown in <xref ref-type="fig" rid="fig13">Figure 13</xref>, when the driver utilizes the driving assistance system to remotely control the robot, the average stability is higher than when driving alone; this is particularly evident when traversing obstacle and uphill terrains, where the robot&#x2019;s stability is significantly higher. As depicted in <xref ref-type="fig" rid="fig14">Figure 14</xref>, when the driver utilizes the driving assistance system for remote control, the number of collisions between the robot and the environment is noticeably lower than when driving alone, especially when traversing obstacle and gully terrains. Further analysis indicates that, compared with driving alone, a hexapod driver assisted by the auxiliary system improves robot stability by 12.5% and reduces the number of collisions with the surrounding environment by 50%.</p>
<fig position="float" id="fig13">
<label>Figure 13</label>
<caption>
<p>Stability margin of the robot in the integrated terrains.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g013.tif"/>
</fig>
<fig position="float" id="fig14">
<label>Figure 14</label>
<caption>
<p>Number of collisions of the robot in the integrated terrains.</p>
</caption>
<graphic xlink:href="fnbot-18-1393738-g014.tif"/>
</fig>
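The static stability margin used as the first indicator is conventionally computed as the minimum distance from the ground projection of the center of mass to the edges of the support polygon formed by the stance feet. The following Python sketch shows this standard computation under that assumption; the foot positions and function names are illustrative, not the paper&#x2019;s data.

```python
import math

def static_stability_margin(com_xy, support_polygon):
    """Minimum distance from the CoM's ground projection to the edges of
    the support polygon; positive when the CoM lies inside the polygon.
    `support_polygon` is a counter-clockwise list of (x, y) foot contacts."""
    cx, cy = com_xy
    margin = math.inf
    n = len(support_polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = support_polygon[i], support_polygon[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        # Signed distance to the edge line: positive when the CoM is on
        # the interior (left) side of a counter-clockwise edge.
        d = (ex * (cy - y1) - ey * (cx - x1)) / length
        margin = min(margin, d)
    return margin

# Illustrative tripod support phase: three feet in contact, CoM at the origin.
feet = [(0.3, 0.0), (-0.2, 0.25), (-0.2, -0.25)]
print(static_stability_margin((0.0, 0.0), feet))  # positive => statically stable
```

A negative margin would indicate that the center of mass has left the support polygon, i.e., a statically unstable posture.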
<p>Based on the analysis of the experimental results, the auxiliary system, by providing command combinations, shifts the driver&#x2019;s task from making decisions to making choices, effectively reducing the driver&#x2019;s decision-making burden. At the same time, by providing the extreme values of each command, it not only enhances the safety of robot locomotion but also improves, to some extent, the robot&#x2019;s moving speed and traversal efficiency in complex environments.</p>
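The decision-to-choice shift can be pictured as enumerating candidate command combinations, discarding those that violate the constraints, and ranking the survivors against the decision target. This is a hypothetical Python sketch of that pipeline; all numeric bounds, grids, and names are assumptions for illustration, not the paper&#x2019;s actual constraint models.

```python
from itertools import product

# Hypothetical sketch of constraint-based command filtering: enumerate
# (step_height, step_length) candidates, keep the geometrically feasible
# ones, and recommend the few that best meet the decision target.

def feasible(step_height, step_length, max_h=0.06, max_l=0.30, reach=0.32):
    """Assumed geometric constraint: commands must stay inside the leg
    workspace (axis bounds plus an overall reach limit)."""
    return (step_height <= max_h and step_length <= max_l
            and math.hypot(step_height, step_length) <= reach)

def recommend(target_speed, period=1.0, k=3):
    """Rank feasible combinations by how closely body speed
    (step_length / gait period) matches the decision target value."""
    heights = [0.02, 0.04, 0.06, 0.08]   # candidate step heights (m)
    lengths = [0.10, 0.20, 0.30]         # candidate step lengths (m)
    candidates = [(h, l) for h, l in product(heights, lengths) if feasible(h, l)]
    return sorted(candidates, key=lambda c: abs(c[1] / period - target_speed))[:k]

import math
print(recommend(0.25))  # a short list of choices instead of an open decision
```

The driver then merely selects one of the few recommended combinations, which is the "choice" the text contrasts with unconstrained decision-making.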
</sec>
<sec sec-type="conclusions" id="sec12">
<label>5</label>
<title>Conclusion</title>
<p>The main contribution of this paper is a novel human-robot command combination method that improves the hexapod robot&#x2019;s walking performance and reduces the driver&#x2019;s control burden. This article first proposes a mapping process that generates human-robot command combinations from decision target values, focusing the robot&#x2019;s intelligence on assisting the driver by generating such combinations. It then quantifies the robot&#x2019;s motion geometry constraints and the driver&#x2019;s fatigue constraints, and uses these constraints to optimize and filter the feasible set of human-robot commands into a small number of command combinations. A human-robot command assistance recommendation system was developed to recommend command combinations to the driver in real time, and experiments on the designed platform demonstrate the system&#x2019;s validity. In future work, considering situations in which both human and machine hold operational authority over the same command combination, we will continue to research human-robot command fusion based on game theory.</p>
</sec>
<sec sec-type="data-availability" id="sec13">
<title>Data availability statement</title>
<p>The datasets presented in this article are not readily available because they pertain to the hexapod robot used in this study or a robot with the same configuration. Requests to access the datasets should be directed to <email>cxl_920101@163.com</email>.</p>
</sec>
<sec sec-type="author-contributions" id="sec14">
<title>Author contributions</title>
<p>XC: Methodology, Writing &#x2013; original draft. BY: Funding acquisition, Writing &#x2013; review &#x0026; editing. ZD: Software, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="sec15">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported in part by the National Natural Science Foundation of China under grant 52175012.</p>
</sec>
<sec sec-type="COI-statement" id="sec16">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</back>
</article>