<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2021.628728</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Methods</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Controlling Video Stimuli in Sign Language and Gesture Research: The <italic>OpenPoseR</italic> Package for Analyzing <italic>OpenPose</italic> Motion-Tracking Data in <italic>R</italic></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Trettenbrein</surname> <given-names>Patrick C.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/189667/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zaccarella</surname> <given-names>Emiliano</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/217030/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences</institution>, <addr-line>Leipzig</addr-line>, <country>Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom)</institution>, <addr-line>Leipzig</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Antonio Ben&#x00ED;tez-Burraco, Seville University, Spain</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Suzanne Aussems, University of Warwick, United Kingdom; James P. Trujillo, Radboud University Nijmegen, Netherlands</p></fn>
<corresp id="c001">&#x002A;Correspondence: Patrick C. Trettenbrein, <email>trettenbrein@cbs.mpg.de</email>; <email>patrick.trettenbrein@gmail.com</email></corresp>
<fn fn-type="other" id="fn002"><p><sup>&#x2020;</sup><bold>ORCID:</bold> Patrick C. Trettenbrein, <ext-link ext-link-type="uri" xlink:href="http://orcid.org/0000-0003-2233-6720">orcid.org/0000-0003-2233-6720</ext-link>; Emiliano Zaccarella, <ext-link ext-link-type="uri" xlink:href="http://orcid.org/0000-0002-5703-1778">orcid.org/0000-0002-5703-1778</ext-link></p></fn>
<fn fn-type="other" id="fn004"><p>This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>02</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>12</volume>
<elocation-id>628728</elocation-id>
<history>
<date date-type="received">
<day>12</day>
<month>11</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>29</day>
<month>01</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2021 Trettenbrein and Zaccarella.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Trettenbrein and Zaccarella</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor&#x2019;s movements. Computer vision methods such as <italic>OpenPose</italic> enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The <italic>OpenPoseR</italic> package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using <italic>OpenPose</italic>, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.</p>
</abstract>
<kwd-group>
<kwd><italic>R</italic></kwd>
<kwd>linguistics</kwd>
<kwd>psychology</kwd>
<kwd>neuroscience</kwd>
<kwd>sign language</kwd>
<kwd>gesture</kwd>
<kwd>video stimuli</kwd>
</kwd-group>
<contract-sponsor id="cn001">Max-Planck-Gesellschaft <named-content content-type="fundref-id">10.13039/501100004189</named-content></contract-sponsor>
<counts>
<fig-count count="1"/>
<table-count count="1"/>
<equation-count count="2"/>
<ref-count count="26"/>
<page-count count="7"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1">
<title>Introduction</title>
<p>Researchers in linguistics, psychology, and neuroscience who are studying sign language and gesture frequently present their participants with pre-recorded video stimuli showing actors performing manual gestures. Such gestures may be lexicalized signs of a natural sign language which can be combined to build up complex meanings (<xref ref-type="bibr" rid="B9">Klima et al., 1979</xref>; <xref ref-type="bibr" rid="B12">Mathur and Rathmann, 2014</xref>; <xref ref-type="bibr" rid="B5">Cecchetto, 2017</xref>) and are primarily processed by the brain&#x2019;s left-hemispheric core language network (<xref ref-type="bibr" rid="B6">Emmorey, 2015</xref>; <xref ref-type="bibr" rid="B22">Trettenbrein et al., 2021</xref>). Alternatively, non-signers may use a wide variety of so-called co-speech gestures to support different communicative functions in situations where gestures are produced spontaneously alongside speech (<xref ref-type="bibr" rid="B13">McNeill, 1985</xref>; <xref ref-type="bibr" rid="B10">Krauss, 1998</xref>; <xref ref-type="bibr" rid="B14">&#x00D6;zy&#x00FC;rek, 2018</xref>).</p>
<p>The reliance on video clips as stimulus materials presents researchers in sign language and gesture studies with a number of challenges. While controlling for primarily technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.) is rather straightforward, it is strikingly more difficult to capture aspects of the video material which are related to the actor and the specific movements they performed. For example, while the length of video clips in an experiment may be perfectly matched across the different conditions, systematic differences could nevertheless exist with regard to the speed, duration, and extent of the movements performed by the actor in each condition. In a hypothetical neuroimaging experiment on sign language, such systematic differences could, for example, lead to unexpected responses in parts of the cortex that are sensitive to biological motion, thus distorting the actual contrasts of interest. By quantifying the bodily movements of the actor in a video clip, it becomes possible to control for potential differences in these movement patterns across different conditions, or to use information about velocity or acceleration as further regressors in a statistical model.</p>
<p>Up to now, researchers have usually focused on controlling their video stimuli only with respect to certain technical properties. Some have used device-based optic marker motion-tracking systems, but only to create stimulus materials which displayed gestures in the form of point-light displays instead of a human actor (<xref ref-type="bibr" rid="B16">Poizner et al., 1981</xref>; <xref ref-type="bibr" rid="B2">Campbell et al., 2011</xref>). A number of markerless motion-tracking systems (e.g., Microsoft Kinect; <xref ref-type="bibr" rid="B26">Zhang, 2012</xref>) and tools for analyzing these data exist (<xref ref-type="bibr" rid="B24">Trujillo et al., 2019</xref>), but using them to create video stimuli with ultimately only two dimensions may in many cases constitute too great an expenditure in terms of time, effort, and hardware requirements. Moreover, neither optic marker nor markerless motion-tracking systems can be applied retroactively to videos that have already been recorded. As a result, the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor&#x2019;s movements has at times been disregarded in sign language and gesture research.</p>
<p>Recent technical advances in video-based tracking systems offer an exciting opportunity for creating means to control video stimuli that go beyond technical aspects such as clip duration by recovering different parameters (e.g., velocity or acceleration) of the actor&#x2019;s movements performed in the video. A number of existing tools may be used to quantify motion using pixel-based frame-differencing methods (e.g., <xref ref-type="bibr" rid="B15">Paxton and Dale, 2013</xref>; <xref ref-type="bibr" rid="B8">Kleinbub and Ramseyer, 2020</xref>; <xref ref-type="bibr" rid="B19">Ramseyer, 2020</xref>), but they tend to require a static camera position and very stable lighting conditions, and do not allow the actor to change their location on the screen. In contrast, computer vision methods such as <italic>OpenPose</italic> (<xref ref-type="bibr" rid="B4">Cao et al., 2017</xref>, <xref ref-type="bibr" rid="B3">2019</xref>) deploy machine learning methods which enable the fitting of a variety of different body-pose models to the consecutive frames of a video clip. By extracting body-pose information in this automated and model-based fashion, such machine learning-based methods essentially make it possible to recover the position of the actor&#x2019;s body or parts of their body (e.g., head and hands) in every frame of a video clip. Based on this information contained in the body-pose model, the movements performed by the actor in a particular video clip can be recovered in a two-dimensional space without the use of a point-based or markerless motion-tracking system during recording. Consequently, such video-based tracking methods do not require any special equipment and can also be applied retroactively to materials which have been recorded without the aid of motion-tracking systems.</p>
</sec>
<sec id="S2">
<title>Methods</title>
<p>Here we present <italic>OpenPoseR</italic>, a package of specialized functions for the <italic>R</italic> statistics language and environment (<xref ref-type="bibr" rid="B18">R Core Team, 2019</xref>), which provides a straightforward and reproducible way of working with body-pose model data extracted from video clips using <italic>OpenPose</italic>. The source code for <italic>OpenPoseR</italic> is freely available from the project&#x2019;s GitHub page.<sup><xref ref-type="fn" rid="footnote1">1</xref></sup> In essence, <italic>OpenPoseR</italic> allows researchers in the fields of sign language and gesture studies to quantify the amount of motion pertaining only to the movements performed by the actor in a video clip (<xref ref-type="fig" rid="F1">Figure 1A</xref>) by using information from body-pose models fit by <italic>OpenPose</italic> (<xref ref-type="fig" rid="F1">Figure 1B</xref>). In its current version, <italic>OpenPoseR</italic> provides quantitative measures of motion based on velocity and acceleration of the actor in the video which can be used for controlling differences in these movement parameters, for example, between different conditions of an experiment. More precisely, the package makes it possible to straightforwardly compute the Euclidean norms of sums of all velocity or acceleration vectors (<xref ref-type="fig" rid="F1">Figure 1C</xref>) and thereby provide a quantitative measure of motion for an entire video clip. <italic>OpenPoseR</italic> has already successfully been used to automatically detect onset and offset for a set of more than 300 signs from German Sign Language (<xref ref-type="bibr" rid="B23">Trettenbrein et al., in press</xref>). The onset and offset of the sign &#x201C;psychology&#x201D; are also clearly visible in the time series depicted in <xref ref-type="fig" rid="F1">Figure 1D</xref>. 
Moreover, an approach similar to the one presented here has shown the general validity of <italic>OpenPose</italic> data (<xref ref-type="bibr" rid="B17">Pouw et al., 2020</xref>). In addition to its core functionality the package also provides some functions for generating basic plots for illustration purposes, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files. See <xref ref-type="table" rid="T1">Table 1</xref> for an overview of all functions included in the current version. Due to its integration into the larger <italic>R</italic> environment, <italic>OpenPoseR</italic> supports reproducible workflows (e.g., using <italic>R Markdown</italic>; <xref ref-type="bibr" rid="B1">Allaire et al., 2020</xref>) and allows for seamless interaction with other <italic>R</italic> packages for further statistical analysis of movement parameters and technical properties of the video materials analyzed. Hence, results of <italic>OpenPoseR</italic> analyses can readily be integrated with other data such as, for example, manual annotation data created using <italic>ELAN</italic> (<xref ref-type="bibr" rid="B11">Lausberg and Sloetjes, 2009</xref>).</p>
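<p>In practice, such an analysis reduces to a short chain of function calls. The following sketch, using placeholder paths and file names and illustrative argument values, shows how the wrapper functions described below could be combined into a complete pipeline from raw <italic>OpenPose</italic> output to a single per-frame motion measure:</p>
<disp-quote>
<p>create_csv("json/path/", "file_name", "csv/path/")</p>
<p>file_clean("csv/path/file_name_body25.csv", cutoff=0.3, overwrite=FALSE)</p>
<p>file_velocity("csv/path/file_name_body25_cleaned.csv")</p>
<p>file_en_velocity("csv/path/file_name_body25_cleaned_velocity_x.csv",</p>
<p>"csv/path/file_name_body25_cleaned_velocity_y.csv")</p>
</disp-quote>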
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Using body-pose estimation for controlling video stimuli. <bold>(A)</bold> Representative frames of an example video file showing an actor producing the German Sign Language sign for &#x201C;psychology&#x201D; (video courtesy of Henrike Maria Falke, gebaerdenlernen.de; license: CC BY-NC-SA 3.0). <bold>(B)</bold> Representative frames from the example input video file illustrating the body-pose model which was fit automatically using <italic>OpenPose</italic>. <bold>(C)</bold> The information from the fit body-pose model is then used by <italic>OpenPoseR</italic> to compute the vertical (y-axis) and horizontal (x-axis) velocity of the different points of the model. Based on these calculations, the software then computes the Euclidean norm of the sums of the velocity vectors. <bold>(D)</bold> The illustrated procedure makes it possible to quantify the total amount of bodily motion in the video using a single measure. Onset and offset of the sign are clearly visible as peaks in the plot (shaded areas).</p></caption>
<graphic xlink:href="fpsyg-12-628728-g001.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>List of functions available in <italic>OpenPoseR</italic>.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Function</td>
<td valign="top" align="left">Description</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">acceleration_x</td>
<td valign="top" align="left">Computes acceleration for points on x-axis for a given data frame in <italic>OpenPoseR</italic> format.</td>
</tr>
<tr>
<td valign="top" align="left">acceleration_y</td>
<td valign="top" align="left">Computes acceleration for points on y-axis for a given data frame in <italic>OpenPoseR</italic> format.</td>
</tr>
<tr>
<td valign="top" align="left">clean_data</td>
<td valign="top" align="left">Discards points with confidence values below a pre-defined cut-off as well as zero values and imputes the missing data.</td>
</tr>
<tr>
<td valign="top" align="left">create_csv</td>
<td valign="top" align="left">Creates a CSV file in <italic>OpenPoseR</italic> format from raw <italic>OpenPose</italic> JSON output.</td>
</tr>
<tr>
<td valign="top" align="left">det_hand</td>
<td valign="top" align="left">Determines whether the left, right, or both arms were moved by the actor.</td>
</tr>
<tr>
<td valign="top" align="left">en_acceleration</td>
<td valign="top" align="left">Computes Euclidean norm of sums of acceleration vectors (x,y).</td>
</tr>
<tr>
<td valign="top" align="left">en_velocity</td>
<td valign="top" align="left">Computes Euclidean norm of sums of velocity vectors (x,y).</td>
</tr>
<tr>
<td valign="top" align="left">file_acceleration</td>
<td valign="top" align="left">Computes acceleration for a CSV file generated by create_csv(), using acceleration_x() and acceleration_y().</td>
</tr>
<tr>
<td valign="top" align="left">file_clean</td>
<td valign="top" align="left">Cleans a CSV file generated by create_csv() using clean_data().</td>
</tr>
<tr>
<td valign="top" align="left">file_en_acceleration</td>
<td valign="top" align="left">Computes en_acceleration() for given CSV files generated using file_acceleration().</td>
</tr>
<tr>
<td valign="top" align="left">file_en_velocity</td>
<td valign="top" align="left">Computes en_velocity() for given CSV files generated using file_velocity().</td>
</tr>
<tr>
<td valign="top" align="left">file_velocity</td>
<td valign="top" align="left">Computes velocity for CSV file generated by create_csv(), using velocity_x() and velocity_y().</td>
</tr>
<tr>
<td valign="top" align="left">file_video_index</td>
<td valign="top" align="left">Creates a video index using video_index() for a given directory and saves it as a CSV file.</td>
</tr>
<tr>
<td valign="top" align="left">plot_frontal</td>
<td valign="top" align="left">Plots a heatmap of points extracted from <italic>OpenPose</italic> data using create_csv().</td>
</tr>
<tr>
<td valign="top" align="left">plot_timeseries</td>
<td valign="top" align="left">Plots a time series (velocity or acceleration), derived from <italic>OpenPose</italic> data using <italic>OpenPoseR</italic>.</td>
</tr>
<tr>
<td valign="top" align="left">velocity_x</td>
<td valign="top" align="left">Computes velocity for points on x-axis for a given data frame in <italic>OpenPoseR</italic> format.</td>
</tr>
<tr>
<td valign="top" align="left">velocity_y</td>
<td valign="top" align="left">Computes velocity for points on y-axis for a given data frame in <italic>OpenPoseR</italic> format.</td>
</tr>
<tr>
<td valign="top" align="left">video_index</td>
<td valign="top" align="left">Creates an index of video files and their properties (length, frame rate, etc.) in a given directory.</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<attrib><italic>Notice that for all of the software&#x2019;s core functionality, so-called wrapper functions with the prefix &#x201C;file_&#x201D; make it possible to directly work with files as input and output arguments, thereby making it easier to handle data derived from large sets of video clips.</italic></attrib>
</table-wrap-foot>
</table-wrap>
<sec id="S2.SS1">
<title>Installation and Prerequisites</title>
<p><italic>OpenPoseR</italic> was created using a current version of <italic>R</italic> (3.1 or newer; <xref ref-type="bibr" rid="B18">R Core Team, 2019</xref>). In addition, we recommend using a current version of <italic>RStudio</italic> (<xref ref-type="bibr" rid="B20">RStudio Team, 2020</xref>). Upon installation, <italic>OpenPoseR</italic> will automatically resolve its dependencies and, if they are missing, install the additional packages for plotting and reading video files which it requires to run. Notice that <italic>OpenPoseR</italic> provides a means of analyzing motion-tracking data generated by <italic>OpenPose</italic> (<xref ref-type="bibr" rid="B4">Cao et al., 2017</xref>, <xref ref-type="bibr" rid="B3">2019</xref>), which is freely available for download.<sup><xref ref-type="fn" rid="footnote2">2</xref></sup> You will need to install and run <italic>OpenPose</italic> on your system or a compute cluster to fit body-pose models (<xref ref-type="fig" rid="F1">Figure 1B</xref>) which can then be analyzed with <italic>OpenPoseR</italic>. Accordingly, an installation of <italic>OpenPose</italic> is not required on the same machine on which you use <italic>OpenPoseR</italic>, meaning that you can, for example, fit body-pose models on a central compute cluster with more powerful GPUs and analyze the output of <italic>OpenPose</italic> locally on your workstation or laptop. A number of so-called &#x201C;cloud computing&#x201D; companies now also offer options to purchase GPU computing time in case such hardware is not available locally. <italic>OpenPoseR</italic> can be used to analyze data from all three standard models that can be fit with <italic>OpenPose</italic> (&#x201C;body25,&#x201D; &#x201C;face,&#x201D; or &#x201C;hand&#x201D;), though your choice of model will of course depend on what exactly you want to control for. In the context of stimulus control, we recommend fitting the &#x201C;body25&#x201D; model by default. In the example below, we describe the analysis of data from the &#x201C;body25&#x201D; model; however, the &#x201C;face&#x201D; and &#x201C;hand&#x201D; models can be fit and analyzed in the same manner.</p>
<p>The most straightforward way of installing <italic>OpenPoseR</italic> on your machine is to use the install_github() function provided by the <italic>devtools</italic> package:</p>
<disp-quote>
<p>install.packages("devtools")</p>
<p>devtools::install_github("trettenbrein/OpenPoseR")</p>
</disp-quote>
<p>This will download and install the latest version of <italic>OpenPoseR</italic> on your system. Alternatively, it is also possible to use <italic>R</italic>&#x2019;s ability to manually install packages from a downloaded archive of the source code; however, we strongly recommend installing the package directly from GitHub.</p>
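<p>For reference, a manual installation from a downloaded source archive would look something like the following (the archive path and name are hypothetical and depend on the downloaded file):</p>
<disp-quote>
<p>install.packages("path/to/OpenPoseR-master.tar.gz", repos=NULL, type="source")</p>
</disp-quote>
<p>Note that, unlike install_github(), this route does not resolve missing dependencies automatically, which is one reason for preferring installation directly from GitHub.</p>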
</sec>
</sec>
<sec id="S3">
<title>Example</title>
<p>Below we outline an example workflow for using <italic>OpenPoseR</italic> to analyze <italic>OpenPose</italic> motion-tracking data derived from an example video clip. Throughout, we will use the results of the analysis of a short video clip showing the German Sign Language sign for PSYCHOLOGY (<xref ref-type="fig" rid="F1">Figure 1A</xref>) in order to demonstrate the capabilities of <italic>OpenPoseR</italic>. All of <italic>OpenPoseR</italic>&#x2019;s functionality can either be called directly in <italic>R</italic> scripts on data loaded into the <italic>R</italic> workspace or, alternatively, be accessed indirectly using wrapper functions (prefixed &#x201C;file_&#x201D;) which make it easier to work with data derived from large sets of video files (see <xref ref-type="table" rid="T1">Table 1</xref> for an overview). The present example will use these wrapper functions to demonstrate how <italic>OpenPoseR</italic> would be used to compute the velocity of the body-pose model for the PSYCHOLOGY video clip. In addition to this abridged example, an interactive and reproducible demonstration of the approach including code examples and example data is available as an <italic>R Markdown</italic> file from the project&#x2019;s GitHub repository.<sup><xref ref-type="fn" rid="footnote3">3</xref></sup></p>
<sec id="S3.SS1">
<title>Convert OpenPose Data</title>
<p><italic>OpenPose</italic> creates a JSON file with information about the fit body-pose model(s) for every frame of the input video. Consequently, an input video file with a duration of exactly 5 s and a rate of 25 frames per second will already generate 125 individual files. To make these data easier to handle, <italic>OpenPoseR</italic> combines the output for an entire video file into a single CSV file with a human-readable tabular data structure. Depending on the model (&#x201C;body25,&#x201D; &#x201C;face,&#x201D; or &#x201C;hand&#x201D;) that has been fit using <italic>OpenPose</italic>, the CSV file generated by <italic>OpenPoseR</italic> will contain consecutive columns for the x and y position of the points in the body-pose model, followed by the confidence value (ranging from 0 to 1) for every point in the model. Each consecutive row contains the data for one frame of the input video clip. This conversion into the <italic>OpenPoseR</italic> format can be achieved using the create_csv() function:</p>
<disp-quote>
<p>create_csv("input/path/", "file_name", "output/path/")</p>
</disp-quote>
<p>The function will combine the JSON files stored in the specified input directory into a single CSV file in the given output directory, named with the suffix of the body-pose model (e.g., &#x201C;_body25&#x201D;). In addition, the raw motion-tracking data may be inspected by plotting a heatmap of the data using the plot_frontal() function.</p>
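<p>Assuming the CSV file was created as described above, such a heatmap could be generated along the following lines (the path is a placeholder, and we assume here that plot_frontal() accepts a data frame in <italic>OpenPoseR</italic> format loaded into the workspace):</p>
<disp-quote>
<p>data &#x003C;- read.csv("output/path/file_name_body25.csv", sep=",")</p>
<p>plot_frontal(data)</p>
</disp-quote>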
</sec>
<sec id="S3.SS2">
<title>Clean Motion-Tracking Data</title>
<p>On rare occasions, the model fit using <italic>OpenPose</italic> may contain zero values for x, y, and c of a point, indicating that the model fit failed for this particular point for the given frame. This is rather unlikely when working with stimulus video recordings which have been produced in the well-lit setting of professional video-recording facilities, but may occur with materials recorded in less optimal conditions. Similarly, points with very low confidence values (e.g., &#x003C;0.3) might reasonably be excluded from further analysis as these values indicate that <italic>OpenPose</italic> had trouble detecting a point in a particular frame. Including zero values and frames with very low confidence values could lead to a situation where points may appear to be &#x201C;jumping around&#x201D; from frame to frame due to incorrect x and y values, thereby misrepresenting the actual velocity or acceleration of the model in the video clip.</p>
<p>Accordingly, to increase the accuracy of our calculations the data of the body-pose model should be cleaned and thresholded before further processing. <italic>OpenPoseR</italic> provides the clean_data() function which will take care of this step by automatically imputing zero values as well as values below a pre-defined cutoff using the mean of the previous and consecutive frames. By using file_clean() we can run this function for a large set of files:</p>
<disp-quote>
<p>file_clean("path/to/file.csv", cutoff=0.3, overwrite=FALSE)</p>
</disp-quote>
<p>This will create a file called &#x201C;filename_model_cleaned.csv&#x201D; because the argument &#x201C;overwrite&#x201D; was set to FALSE upon calling the function, thereby preventing <italic>OpenPoseR</italic> from overwriting the input CSV file with the cleaned data. Using the &#x201C;cutoff&#x201D; argument, the threshold for the confidence value that a point has to surpass can be adjusted. If everything went well, the function will return TRUE.</p>
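<p>The imputation performed during cleaning can be illustrated for a single coordinate. The following minimal sketch, which is not the package&#x2019;s internal implementation, replaces a failed (zero) value with the mean of the neighboring frames:</p>
<disp-quote>
<p>x &#x003C;- c(102, 104, 0, 108, 110) # x-coordinate of one point across 5 frames; fit failed in frame 3</p>
<p>x[3] &#x003C;- (x[2] + x[4]) / 2 # impute frame 3; x is now 102, 104, 106, 108, 110</p>
</disp-quote>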
</sec>
<sec id="S3.SS3">
<title>Compute Velocity</title>
<p>After having successfully converted and cleaned the data, we can now compute the velocity of the different points of the body-pose model using <italic>OpenPoseR</italic>&#x2019;s file_velocity() function. As already mentioned above, all functions prefixed with &#x201C;file_&#x201D; provide wrappers for <italic>OpenPoseR</italic> functions which make it possible to directly pass a filename to the function (see <xref ref-type="table" rid="T1">Table 1</xref>). In this case, file_velocity() will compute the velocity of a point on either the x- or y-axis using the velocity_x() and velocity_y() functions. Both functions compute the velocity of a point on either the x- or y-axis according to the following formula:</p>
<disp-formula id="S3.Ex1"><mml:math id="M1"><mml:mfrac><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula>
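<p>Because time is measured in frames, the denominator equals one, and the per-frame velocity of a single coordinate reduces to the difference between consecutive frames. In <italic>R</italic>, this can be sketched for one point as follows (values are for illustration only; this is not the package&#x2019;s internal implementation):</p>
<disp-quote>
<p>p &#x003C;- c(120, 125, 133, 133, 128) # position of one point across 5 frames</p>
<p>v &#x003C;- diff(p) # per-frame velocity: 5, 8, 0, -5</p>
</disp-quote>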
<p>By using the file_velocity() function we can disregard these computational details and simply pass to the function the filename of, or path to, a CSV file in the <italic>OpenPoseR</italic> format, as specified above in section &#x201C;Clean Motion-Tracking Data&#x201D;:</p>
<disp-quote>
<p>file_velocity("path/to/file_model_cleaned.csv")</p>
</disp-quote>
<p>Notice that if your video was recorded at a framerate other than 25 frames per second, you will have to specify this when calling the function by setting the argument &#x201C;fps&#x201D; to the appropriate value. Output files will be created automatically and carry either the suffix &#x201C;_velocity_x&#x201D; or &#x201C;_velocity_y.&#x201D; If everything went well, the function will return TRUE.</p>
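<p>For example, for a video clip recorded at 30 frames per second the call would be:</p>
<disp-quote>
<p>file_velocity("path/to/file_model_cleaned.csv", fps=30)</p>
</disp-quote>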
</sec>
<sec id="S3.SS4">
<title>Compute Euclidean Norm of Sums of Velocity Vectors</title>
<p>Given that our goal is to capture the total amount of bodily motion between frames in a single value, our final step in this analysis will be to compute the Euclidean norm of sums of velocity vectors (<xref ref-type="fig" rid="F1">Figure 1C</xref>). In <italic>OpenPoseR</italic>, this is implemented in the en_velocity() function which takes the velocity vectors of the x- and y-axis as its input. The Euclidean norm of sums for velocity as the given motion feature then is computed using the following formula:</p>
<disp-formula id="S3.Ex2"><mml:math id="M2"><mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x2062;</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo fence="true">||</mml:mo><mml:mrow><mml:munderover><mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo fence="true">||</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
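<p>For a single frame, this amounts to summing the velocity vectors of all N points of the model and taking the length of the resulting vector. A minimal sketch for one frame, which is not the package&#x2019;s internal implementation (values are for illustration only):</p>
<disp-quote>
<p>vx &#x003C;- c(1.5, -0.5, 2.0) # x-velocities of three points in one frame</p>
<p>vy &#x003C;- c(0.5, 1.0, -1.5) # y-velocities of the same points</p>
<p>sqrt(sum(vx)^2 + sum(vy)^2) # Euclidean norm of the summed vectors: here 3</p>
</disp-quote>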
<p>Again, we may disregard the details of the implementation by using the wrapper function file_en_velocity(), which takes the filenames of (or paths to) the two CSV files created in the previous step as its input:</p>
<disp-quote>
<p>file_en_velocity("path/to/file_model_cleaned_velocity_x.csv",</p>
<p>"path/to/file_model_cleaned_velocity_y.csv")</p>
</disp-quote>
<p>The output file will be created automatically and carries the suffix &#x201C;_en_velocity.&#x201D; If everything went well, the function will return TRUE.</p>
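For a single frame, the formula amounts to summing the velocity vectors of all N keypoints and taking the Euclidean norm of the resulting vector, which collapses the motion of the whole body-pose model into one scalar. A minimal Python sketch of this computation (illustration only, not OpenPoseR code; the function name is made up):

```python
import math

# Illustration only (not OpenPoseR code): the Euclidean norm of sums for
# one frame. vx and vy hold the x- and y-velocity components of the N
# keypoints; summing them gives one resultant vector whose norm is the
# single motion value for that frame.
def en_of_sums(vx, vy):
    """vx, vy: per-keypoint velocity components for a single frame."""
    return math.hypot(sum(vx), sum(vy))

# Two keypoints whose summed velocities form the vector (3, 4).
print(en_of_sums([1.0, 2.0], [4.0, 0.0]))  # -> 5.0
```

Applying this per frame yields the timeseries of motion values that the next section visualizes.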
</sec>
<sec id="S3.SS5">
<title>Plotting Results</title>
<p>Finally, the result of the previous computations can be visualized using basic plotting functions included in the package. <italic>OpenPoseR</italic> uses the functionality of <italic>ggplot2</italic> (<xref ref-type="bibr" rid="B25">Wickham, 2016</xref>) to provide some straightforward time series plots which can be created using the plot_timeseries() function:</p>
<disp-quote>
<p>example &#x003C;- read.csv("path/to/file_model_cleaned_en_velocity.csv",</p>
<p>sep=",")</p>
<p>plot_timeseries(example)</p>
</disp-quote>
<p>This will create the simple plot in <xref ref-type="fig" rid="F1">Figure 1D</xref>, which illustrates how we have quantified the total amount of bodily motion occurring in the input video clip of the German Sign Language sign PSYCHOLOGY. The sign is articulated with an initial large movement of the signer&#x2019;s dominant right hand to the chest, followed by a small hand movement at the same place on the chest, and then another large movement of the hand back to its original position. The onset and offset of this movement are clearly visible as peaks in the timeseries in <xref ref-type="fig" rid="F1">Figure 1D</xref>, indicating that we succeeded in capturing the bodily motion occurring in the example video clip.</p>
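In the simplest case, such peaks can be turned into rough onset/offset estimates by thresholding the per-frame motion values. The following Python toy example sketches that idea only; it is not the validated detection procedure from the literature, and the function name and threshold are invented for illustration:

```python
# Toy illustration of threshold-based onset/offset detection on a motion
# timeseries: the onset is the first frame whose motion value exceeds the
# threshold, the offset the last such frame.
def onset_offset(series, threshold):
    """Return (onset_index, offset_index), or None if nothing exceeds threshold."""
    above = [i for i, v in enumerate(series) if v > threshold]
    return (above[0], above[-1]) if above else None

motion = [0.1, 0.2, 3.0, 5.0, 1.2, 4.0, 0.3, 0.1]
print(onset_offset(motion, 1.0))  # -> (2, 5)
```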
</sec>
</sec>
<sec id="S4">
<title>Discussion and Outlook</title>
<p>The method and workflow presented here provide researchers in sign language and gesture studies with a straightforward and reproducible means for using body-pose estimation data derived from <italic>OpenPose</italic> for controlling their video stimuli. By quantifying the bodily movements of the actor in a video clip in terms of velocity or acceleration of a body-pose model it becomes possible to control for potential differences in these movement patterns, for example, across the different conditions of an experiment. In addition, this approach has already been successfully used to automatically detect the onset and offset for a large set of signs from German Sign Language (<xref ref-type="bibr" rid="B23">Trettenbrein et al., in press</xref>). <italic>OpenPoseR</italic>&#x2019;s core functionality of computing measures of velocity and acceleration for <italic>OpenPose</italic> data is furthermore supplemented by functions for generating basic plots as well as an easy-to-use way of extracting metadata (e.g., duration, framerate, etc.) from large sets of video files. With its integration into the larger R environment, <italic>OpenPoseR</italic> supports reproducible workflows and enables seamless interaction with other packages in order to subject the motion-tracking results to further statistical analysis.</p>
<p><italic>OpenPoseR</italic> was developed to assist sign language and gesture researchers with stimulus control in experiments that present participants with video recordings of an actor signing or gesturing, by reconstructing motion parameters of the actor using the data of body-pose models fit with <italic>OpenPose</italic>. However, we believe that the package&#x2019;s functionality may also be useful for other domains of sign language and gesture research, especially as it can be continuously expanded due to the open-source nature of the project. For example, given that unlike other methods (e.g., <xref ref-type="bibr" rid="B19">Ramseyer, 2020</xref>) <italic>OpenPose</italic> does not require a static camera position, is sensitive to the actual body pose of the actor in the video, and is not biased by other motion or changes in the background, the method and workflow described here have the potential to enable the use of naturalistic stimuli in sign language research, similar to the increasing popularity of naturalistic stimuli in research on spoken language (<xref ref-type="bibr" rid="B7">Hamilton and Huth, 2020</xref>). Similarly, we believe that the functionality provided by <italic>OpenPoseR</italic> may prove useful in the automated analysis of large-scale <italic>OpenPose</italic> data derived from sign language corpora (e.g., <xref ref-type="bibr" rid="B21">Schulder and Hanke, 2019</xref>).</p>
</sec>
<sec id="S5">
<title>Data Availability Statement</title>
<p>Publicly available datasets were analyzed in this study. This data can be found here: <ext-link ext-link-type="uri" xlink:href="https://github.com/trettenbrein/OpenPoseR">https://github.com/trettenbrein/OpenPoseR</ext-link>.</p>
</sec>
<sec id="S6">
<title>Author Contributions</title>
<p>PT wrote <italic>R</italic> code and curated online resources. EZ supervised the project. Both authors conceptualized the functionality of the package, implemented algorithms, contributed to the article, and approved the submitted version.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was funded by the Max Planck Society.</p>
</fn>
</fn-group>
<ack>
<p>We are grateful to Angela D. Friederici for supporting us in the development of this software.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allaire</surname> <given-names>J.</given-names></name> <name><surname>Xie</surname> <given-names>Y.</given-names></name> <name><surname>McPherson</surname> <given-names>J.</given-names></name> <name><surname>Luraschi</surname> <given-names>J.</given-names></name> <name><surname>Ushey</surname> <given-names>K.</given-names></name> <name><surname>Atkins</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2020</year>). <source><italic>R Markdown: Dynamic Documents for R.</italic></source> <comment>Available online at:</comment> <ext-link ext-link-type="uri" xlink:href="https://github.com/rstudio/rmarkdown">https://github.com/rstudio/rmarkdown</ext-link> <comment>(accessed February 25, 2020)</comment>.</citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campbell</surname> <given-names>R.</given-names></name> <name><surname>Capek</surname> <given-names>C. M.</given-names></name> <name><surname>Gazarian</surname> <given-names>K.</given-names></name> <name><surname>MacSweeney</surname> <given-names>M.</given-names></name> <name><surname>Woll</surname> <given-names>B.</given-names></name> <name><surname>David</surname> <given-names>A. S.</given-names></name><etal/></person-group> (<year>2011</year>). <article-title>The signer and the sign: cortical correlates of person identity and language processing from point-light displays.</article-title> <source><italic>Neuropsychologia</italic></source> <volume>49</volume> <fpage>3018</fpage>&#x2013;<lpage>3026</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.06.029</pub-id> <pub-id pub-id-type="pmid">21767555</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Z.</given-names></name> <name><surname>Hidalgo</surname> <given-names>G.</given-names></name> <name><surname>Simon</surname> <given-names>T.</given-names></name> <name><surname>Wei</surname> <given-names>S.-E.</given-names></name> <name><surname>Sheikh</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>OpenPose: realtime multi-person 2D pose estimation using part affinity fields.</article-title> <source><italic>ArXiv</italic></source> <comment>[Preprint]. ArXiv:1812.08008</comment></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>Z.</given-names></name> <name><surname>Simon</surname> <given-names>T.</given-names></name> <name><surname>Wei</surname> <given-names>S.-E.</given-names></name> <name><surname>Sheikh</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>Realtime multi-person 2D pose estimation using part affinity fields.</article-title> <source><italic>ArXiv</italic></source> <comment>[Preprint]. arXiv:1611.08050</comment></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cecchetto</surname> <given-names>C.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>The syntax of sign language and Universal grammar</article-title>,&#x201D; in <source><italic>The Oxford handbook of Universal Grammar</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Roberts</surname> <given-names>I.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford UP</publisher-name>).</citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Emmorey</surname> <given-names>K.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>The neurobiology of sign language</article-title>,&#x201D; in <source><italic>Brain Mapping: An Encyclopedic Reference</italic></source>, <volume>Vol. 3</volume> <role>eds</role> <person-group person-group-type="editor"><name><surname>Toga</surname> <given-names>A. W.</given-names></name> <name><surname>Bandettini</surname> <given-names>P.</given-names></name> <name><surname>Thompson</surname> <given-names>P.</given-names></name> <name><surname>Friston</surname> <given-names>K.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Academic Press</publisher-name>), <fpage>475</fpage>&#x2013;<lpage>479</lpage>. <pub-id pub-id-type="doi">10.1016/b978-0-12-397025-1.00272-4</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamilton</surname> <given-names>L. S.</given-names></name> <name><surname>Huth</surname> <given-names>A. G.</given-names></name></person-group> (<year>2020</year>). <article-title>The revolution will not be controlled: natural stimuli in speech neuroscience.</article-title> <source><italic>Lang. Cogn. Neurosci.</italic></source> <volume>35</volume> <fpage>573</fpage>&#x2013;<lpage>582</lpage>. <pub-id pub-id-type="doi">10.1080/23273798.2018.1499946</pub-id> <pub-id pub-id-type="pmid">32656294</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleinbub</surname> <given-names>J. R.</given-names></name> <name><surname>Ramseyer</surname> <given-names>F. T.</given-names></name></person-group> (<year>2020</year>). <article-title>rMEA: an R package to assess nonverbal synchronization in motion energy analysis time-series.</article-title> <source><italic>Psychother. Res.</italic></source> <fpage>1</fpage>&#x2013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1080/10503307.2020.1844334</pub-id> <pub-id pub-id-type="pmid">33225873</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klima</surname> <given-names>E. S.</given-names></name> <name><surname>Bellugi</surname> <given-names>U.</given-names></name> <name><surname>Battison</surname> <given-names>R.</given-names></name> <name><surname>Boyes-Braem</surname> <given-names>P.</given-names></name> <name><surname>Fischer</surname> <given-names>S.</given-names></name> <name><surname>Frishberg</surname> <given-names>N.</given-names></name><etal/></person-group> (<year>1979</year>). <source><italic>The Signs of Language.</italic></source> <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard UP</publisher-name>.</citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krauss</surname> <given-names>R. M.</given-names></name></person-group> (<year>1998</year>). <article-title>Why do we gesture when we speak?</article-title> <source><italic>Curr. Direct. Psychol. Sci.</italic></source> <volume>7</volume> <fpage>54</fpage>&#x2013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1111/1467-8721.ep13175642</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lausberg</surname> <given-names>H.</given-names></name> <name><surname>Sloetjes</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>Coding gestural behavior with the NEUROGES-ELAN system.</article-title> <source><italic>Behav. Res. Methods</italic></source> <volume>41</volume> <fpage>841</fpage>&#x2013;<lpage>849</lpage>. <pub-id pub-id-type="doi">10.3758/BRM.41.3.841</pub-id> <pub-id pub-id-type="pmid">19587200</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathur</surname> <given-names>G.</given-names></name> <name><surname>Rathmann</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>The structure of sign languages</article-title>,&#x201D; in <source><italic>The Oxford Handbook of Language Production</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Goldrick</surname> <given-names>M. A.</given-names></name> <name><surname>Ferreira</surname> <given-names>V. S.</given-names></name> <name><surname>Miozzo</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford UP</publisher-name>), <fpage>379</fpage>&#x2013;<lpage>392</lpage>.</citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McNeill</surname> <given-names>D.</given-names></name></person-group> (<year>1985</year>). <article-title>So you think gestures are nonverbal?</article-title> <source><italic>Psychol. Rev.</italic></source> <volume>92</volume> <fpage>350</fpage>&#x2013;<lpage>371</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.92.3.350</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>&#x00D6;zy&#x00FC;rek</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Role of gesture in language processing: toward a unified account for production and comprehension</article-title>,&#x201D; in <source><italic>The Oxford Handbook of Psycholinguistics</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Rueschemeyer</surname> <given-names>S.-A.</given-names></name> <name><surname>Gaskell</surname> <given-names>M. G.</given-names></name></person-group> (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>591</fpage>&#x2013;<lpage>607</lpage>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780198786825.013.25</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paxton</surname> <given-names>A.</given-names></name> <name><surname>Dale</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>Frame-differencing methods for measuring bodily synchrony in conversation.</article-title> <source><italic>Behav. Res. Methods</italic></source> <volume>45</volume> <fpage>329</fpage>&#x2013;<lpage>343</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-012-0249-2</pub-id> <pub-id pub-id-type="pmid">23055158</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poizner</surname> <given-names>H.</given-names></name> <name><surname>Bellugi</surname> <given-names>U.</given-names></name> <name><surname>Lutes-Driscoll</surname> <given-names>V.</given-names></name></person-group> (<year>1981</year>). <article-title>Perception of American sign language in dynamic point-light displays.</article-title> <source><italic>J. Exp. Psychol. Hum. Percept. Perform.</italic></source> <volume>7</volume> <fpage>430</fpage>&#x2013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.7.2.430</pub-id> <pub-id pub-id-type="pmid">6453935</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pouw</surname> <given-names>W.</given-names></name> <name><surname>Trujillo</surname> <given-names>J. P.</given-names></name> <name><surname>Dixon</surname> <given-names>J. A.</given-names></name></person-group> (<year>2020</year>). <article-title>The quantification of gesture&#x2013;speech synchrony: a tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking.</article-title> <source><italic>Behav. Res. Methods</italic></source> <volume>52</volume> <fpage>723</fpage>&#x2013;<lpage>740</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-019-01271-9</pub-id> <pub-id pub-id-type="pmid">31659689</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><collab>R Core Team</collab> (<year>2019</year>). <source><italic>R: A Language and Environment for Statistical Computing.</italic></source> <publisher-loc>Vienna</publisher-loc>: <publisher-name>R Foundation for Statistical Computing</publisher-name>.</citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramseyer</surname> <given-names>F. T.</given-names></name></person-group> (<year>2020</year>). <article-title>Motion energy analysis (MEA): a primer on the assessment of motion from video.</article-title> <source><italic>J. Counsel. Psychol.</italic></source> <volume>67</volume> <fpage>536</fpage>&#x2013;<lpage>549</lpage>. <pub-id pub-id-type="doi">10.1037/cou0000407</pub-id> <pub-id pub-id-type="pmid">32614233</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><collab>RStudio Team</collab> (<year>2020</year>). <source><italic>RStudio: Integrated Development for R.</italic></source> <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>RStudio, PBC</publisher-name>.</citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schulder</surname> <given-names>M.</given-names></name> <name><surname>Hanke</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <source><italic>OpenPose in the Public DGS Corpus.</italic></source> <publisher-loc>Hamburg</publisher-loc>: <publisher-name>University of Hamburg</publisher-name>. <pub-id pub-id-type="doi">10.25592/UHHFDM.842</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trettenbrein</surname> <given-names>P. C.</given-names></name> <name><surname>Papitto</surname> <given-names>G.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name> <name><surname>Zaccarella</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>Functional neuroanatomy of language without speech: an ALE meta&#x2013;analysis of sign language.</article-title> <source><italic>Hum. Brain Mapp.</italic></source> <volume>42</volume> <fpage>699</fpage>&#x2013;<lpage>712</lpage>. <pub-id pub-id-type="doi">10.1002/hbm.25254</pub-id> <pub-id pub-id-type="pmid">33118302</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trettenbrein</surname> <given-names>P. C.</given-names></name> <name><surname>Pendzich</surname> <given-names>N.-K.</given-names></name> <name><surname>Cramer</surname> <given-names>J.-M.</given-names></name> <name><surname>Steinbach</surname> <given-names>M.</given-names></name> <name><surname>Zaccarella</surname> <given-names>E.</given-names></name></person-group> (<year>in press</year>). <article-title>Psycholinguistic norms for more than 300 lexical signs in German Sign Language (DGS).</article-title> <source><italic>Behav. Res. Methods.</italic></source> In press. <pub-id pub-id-type="doi">10.3758/s13428-020-01524-y</pub-id> <pub-id pub-id-type="pmid">33575986</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trujillo</surname> <given-names>J. P.</given-names></name> <name><surname>Vaitonyte</surname> <given-names>J.</given-names></name> <name><surname>Simanova</surname> <given-names>I.</given-names></name> <name><surname>&#x00D6;zy&#x00FC;rek</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Toward the markerless and automatic analysis of kinematic features: a toolkit for gesture and movement research.</article-title> <source><italic>Behav. Res. Methods</italic></source> <volume>51</volume> <fpage>769</fpage>&#x2013;<lpage>777</lpage>. <pub-id pub-id-type="doi">10.3758/s13428-018-1086-8</pub-id> <pub-id pub-id-type="pmid">30143970</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wickham</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <source><italic>ggplot2: Elegant Graphics for Data Analysis.</italic></source> <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>.</citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>2012</year>). <article-title>Microsoft Kinect Sensor and Its Effect.</article-title> <source><italic>IEEE Multimedia</italic></source> <volume>19</volume> <fpage>4</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1109/MMUL.2012.24</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="footnote1">
<label>1</label>
<p><ext-link ext-link-type="uri" xlink:href="https://github.com/trettenbrein/OpenPoseR">https://github.com/trettenbrein/OpenPoseR</ext-link></p></fn>
<fn id="footnote2">
<label>2</label>
<p><ext-link ext-link-type="uri" xlink:href="https://github.com/CMU-Perceptual-Computing-Lab/openpose">https://github.com/CMU-Perceptual-Computing-Lab/openpose</ext-link></p></fn>
<fn id="footnote3">
<label>3</label>
<p><ext-link ext-link-type="uri" xlink:href="https://github.com/trettenbrein/OpenPoseR/blob/master/demo.zip">https://github.com/trettenbrein/OpenPoseR/blob/master/demo.zip</ext-link></p></fn>
</fn-group>
</back>
</article>