<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Anim. Sci.</journal-id>
<journal-title>Frontiers in Animal Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Anim. Sci.</abbrev-journal-title>
<issn pub-type="epub">2673-6225</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fanim.2021.791290</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Animal Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Across-Species Pose Estimation in Poultry Based on Images Using Deep Learning</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Doornweerd</surname> <given-names>Jan Erik</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1508395/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Kootstra</surname> <given-names>Gert</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Veerkamp</surname> <given-names>Roel F.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1577821/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ellen</surname> <given-names>Esther D.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/81280/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>van der Eijk</surname> <given-names>Jerine A. J.</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1195477/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>van de Straat</surname> <given-names>Thijs</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Bouwman</surname> <given-names>Aniek C.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/795535/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Animal Breeding and Genomics, Wageningen University and Research</institution>, <addr-line>Wageningen</addr-line>, <country>Netherlands</country></aff>
<aff id="aff2"><sup>2</sup><institution>Farm Technology, Wageningen University and Research</institution>, <addr-line>Wageningen</addr-line>, <country>Netherlands</country></aff>
<aff id="aff3"><sup>3</sup><institution>Animal Health and Welfare, Wageningen University and Research</institution>, <addr-line>Wageningen</addr-line>, <country>Netherlands</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Dan B&#x000F8;rge Jensen, University of Copenhagen, Denmark</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Madonna Benjamin, Michigan State University, United States; E. Tobias Krause, Friedrich-Loeffler-Institute, Germany</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Jan Erik Doornweerd <email>janerik.doornweerd&#x00040;wur.nl</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Precision Livestock Farming, a section of the journal Frontiers in Animal Science</p></fn></author-notes>
<pub-date pub-type="epub">
<day>15</day>
<month>12</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>2</volume>
<elocation-id>791290</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>11</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Doornweerd, Kootstra, Veerkamp, Ellen, van der Eijk, van de Straat and Bouwman.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Doornweerd, Kootstra, Veerkamp, Ellen, van der Eijk, van de Straat and Bouwman</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license></permissions>
<abstract><p>Animal pose-estimation networks enable automated estimation of key body points in images or videos. This allows animal breeders to collect pose information repeatedly on a large number of animals. However, the success of pose-estimation networks depends in part on the availability of data to learn the representation of key body points. Especially with animals, data collection is not always easy, and data annotation is laborious and time-consuming. The available data are therefore often limited, but data from other species might be useful, either by themselves or in combination with the target species. In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Broilers and turkeys were video recorded during a walkway test representative of the situation in practice. Two single-species and one multi-species model were trained by using DeepLabCut and tested on two single-species test sets. Overall, the within-species models outperformed the multi-species model, and the models applied across species, as shown by a lower raw pixel error, normalized pixel error, and higher percentage of keypoints remaining (PKR). The multi-species model had slightly higher errors with a lower PKR than the within-species models but had less than half the number of annotated frames available from each species. Compared to the single-species broiler model, the multi-species model achieved lower errors for the head, left foot, and right knee keypoints, although with a lower PKR. Across species, keypoint predictions resulted in high errors and low to moderate PKRs and are unlikely to be of direct use for pose and gait assessments. A multi-species model may reduce annotation needs without a large impact on performance for pose assessment, though it is recommended only when the species are comparable. If a single-species model already exists, it could be used as a pre-trained model for training a new model, possibly requiring only a limited amount of new data. Future studies should investigate the accuracy needed for pose and gait assessments and estimate genetic parameters for the new phenotypes before pose-estimation networks can be applied in practice.</p></abstract>
<kwd-group>
<kwd>broilers</kwd>
<kwd>computer vision</kwd>
<kwd>deep learning</kwd>
<kwd>gait</kwd>
<kwd>multi-species</kwd>
<kwd>pose-estimation</kwd>
<kwd>turkeys</kwd>
<kwd>within-species</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="6"/>
<equation-count count="4"/>
<ref-count count="48"/>
<page-count count="11"/>
<word-count count="8477"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>In poultry production, locomotion is an important health and welfare trait. Impaired locomotion is a major welfare concern (Scientific Committee on Animal Health and Animal Welfare, <xref ref-type="bibr" rid="B34">2000</xref>; van Staaveren et al., <xref ref-type="bibr" rid="B42">2020</xref>), and a cause of economic losses in both turkeys and broilers (Sullivan, <xref ref-type="bibr" rid="B37">1994</xref>; van Staaveren et al., <xref ref-type="bibr" rid="B42">2020</xref>). Impaired locomotion has been linked to high growth rate, high body weight, infection, and housing conditions (e.g., light and feeding regime) in broilers (Bradshaw et al., <xref ref-type="bibr" rid="B6">2002</xref>). Birds with impaired locomotion have trouble accessing feed and water (Weeks et al., <xref ref-type="bibr" rid="B45">2000</xref>), performing motivated behaviors like dust bathing (Vestergaard and Sanotra, <xref ref-type="bibr" rid="B44">1999</xref>), and likely avoiding pecks (Erasmus, <xref ref-type="bibr" rid="B11">2018</xref>). Studies have reported impaired locomotion in approximately 15&#x02013;28% of examined broilers and approximately 8&#x02013;13% of examined turkeys (Kestin et al., <xref ref-type="bibr" rid="B17">1992</xref>; Bassler et al., <xref ref-type="bibr" rid="B4">2013</xref>; Sharafeldin et al., <xref ref-type="bibr" rid="B35">2015</xref>; Vermette et al., <xref ref-type="bibr" rid="B43">2016</xref>; Kittelsen et al., <xref ref-type="bibr" rid="B18">2017</xref>).</p>
<p>Gait-scoring systems have been developed for both turkeys and broilers (e.g., Kestin et al., <xref ref-type="bibr" rid="B17">1992</xref>; Garner et al., <xref ref-type="bibr" rid="B12">2002</xref>; Quinton et al., <xref ref-type="bibr" rid="B32">2011</xref>; Kapell et al., <xref ref-type="bibr" rid="B16">2017</xref>). Generally, a human expert judges the gait of an animal from behind, or the side, based on several locomotion factors, which often relate to the fluidity of movement and leg conformation. Gait scores were found to be heritable in turkeys [<italic>h</italic><sup>2</sup>: 0.08&#x02013;0.13 &#x000B1; 0.01 (Kapell et al., <xref ref-type="bibr" rid="B16">2017</xref>) and 0.25&#x02013;0.26 &#x000B1; 0.01 (Quinton et al., <xref ref-type="bibr" rid="B32">2011</xref>)]. The gait scores are valuable to breeding programs, yet the gait-scoring process is laborious, and gait scores are prone to subjectivity. Sensor technologies could provide relatively effortless, non-invasive, and objective gait assessments, while also allowing for the assessment of a larger number of animals with higher frequency.</p>
<p>Several technologies for objective gait assessment have been introduced over the years. These technologies include pressure-sensitive walkways (PSW) (N&#x000E4;&#x000E4;s et al., <xref ref-type="bibr" rid="B27">2010</xref>; Paxton et al., <xref ref-type="bibr" rid="B30">2013</xref>; Oviedo-Rond&#x000F3;n et al., <xref ref-type="bibr" rid="B29">2017</xref>; Kremer et al., <xref ref-type="bibr" rid="B19">2018</xref>; Stevenson et al., <xref ref-type="bibr" rid="B36">2019</xref>), rotarods (Malchow et al., <xref ref-type="bibr" rid="B23">2019</xref>), video analysis (Abourachid, <xref ref-type="bibr" rid="B1">1991</xref>, <xref ref-type="bibr" rid="B2">2001</xref>; Caplen et al., <xref ref-type="bibr" rid="B8">2012</xref>; Paxton et al., <xref ref-type="bibr" rid="B30">2013</xref>; Oviedo-Rond&#x000F3;n et al., <xref ref-type="bibr" rid="B29">2017</xref>), accelerometers (Stevenson et al., <xref ref-type="bibr" rid="B36">2019</xref>), and inertial measurement units (IMUs; provide 3D accelerometer, gyroscope, and, occasionally, magnetometer data) (Bouwman et al., <xref ref-type="bibr" rid="B5">2020</xref>). The on-farm application of the sensor technologies is limited due to equipment costs (PSW), habituation requirements (PSW, accelerometers, IMUs), or increased animal handling (rotarod, accelerometers, and IMUs). On-farm application of cameras can be more practical, however, the investigated camera-based methods rely on physical markers placed on key body points to assess the gait of an animal (Caplen et al., <xref ref-type="bibr" rid="B8">2012</xref>; Paxton et al., <xref ref-type="bibr" rid="B30">2013</xref>; Oviedo-Rond&#x000F3;n et al., <xref ref-type="bibr" rid="B29">2017</xref>).</p>
<p>Pose-estimation networks that use deep learning can be trained to predict the spatial location of key body points in an image or video frame, and hence make physical markers placed on key body points obsolete. Pose-estimation networks enable repeated pose assessment on a large number of animals, which is needed to achieve accurate breeding values. Pose-estimation methods that use deep learning (Lecun et al., <xref ref-type="bibr" rid="B21">2015</xref>) can learn the representation of key body points from annotated training data. In brief, these pose-estimation methods based on deep learning consist of two parts: a feature extractor that extracts visual features from a video image (frame), and a predictor that uses the output of the feature extractor to predict the body part and its location in the frame (Mathis et al., <xref ref-type="bibr" rid="B25">2020</xref>). In part, the success of a supervised deep learning model depends on the availability of annotated data to learn these representations (Sun et al., <xref ref-type="bibr" rid="B38">2017</xref>).</p>
<p>In the human domain, markerless pose estimation has been an active field of research for many years (e.g., Toshev and Szegedy, <xref ref-type="bibr" rid="B40">2014</xref>; Insafutdinov et al., <xref ref-type="bibr" rid="B15">2016</xref>; Sun et al., <xref ref-type="bibr" rid="B39">2019</xref>; Cheng et al., <xref ref-type="bibr" rid="B9">2020</xref>) and large datasets have been collected over the years [e.g., MPII (Andriluka et al., <xref ref-type="bibr" rid="B3">2014</xref>), COCO (Lin et al., <xref ref-type="bibr" rid="B22">2014</xref>)]. Animal pose estimation has been investigated in more recent studies (e.g., Mathis et al., <xref ref-type="bibr" rid="B24">2018</xref>; Graving et al., <xref ref-type="bibr" rid="B13">2019</xref>; Pereira et al., <xref ref-type="bibr" rid="B31">2019</xref>), but large datasets remain scarce. One dataset (Cao et al., <xref ref-type="bibr" rid="B7">2019</xref>) is publicly available, however, it is smaller than the human pose-estimation datasets and does not include broilers or turkeys. The creation of large datasets is difficult; large-scale animal data collection is not always easy, and data annotation is laborious and time-consuming. Therefore, efforts should be made to investigate methods that could permit deep-learning-based pose-estimation networks to work with limited data, and with that reduce annotation needs.</p>
<p>One method to work with limited data could be the use of data from different sources, like different species. Only a few studies have investigated the use of pose data from one or multiple species on another species (Sanakoyeu et al., <xref ref-type="bibr" rid="B33">2020</xref>; Mathis et al., <xref ref-type="bibr" rid="B26">2021</xref>). In Sanakoyeu et al. (<xref ref-type="bibr" rid="B33">2020</xref>), a chimpanzee pose-estimation network was trained on chimpanzee pseudo-labels originating from a network trained on data of humans and other species (bear, dog, elephant, cat, horse, cow, bird, sheep, zebra, giraffe, and mouse). Pseudo-labels are labels that are predicted by a model and not the result of manual annotation. In Mathis et al. (<xref ref-type="bibr" rid="B26">2021</xref>), a part of the research focused on the generalization of a pose-estimation network across species (horse, dog, sheep, cat, and cow). The pose-estimation network was trained on one or all other animal species whilst withholding either sheep or cow as test data. In both Mathis et al. (<xref ref-type="bibr" rid="B26">2021</xref>) and Sanakoyeu et al. (<xref ref-type="bibr" rid="B33">2020</xref>), despite differences in approach, pre-training with multiple species or training with multiple species resulted in better performance on the unseen species than when pre-training or training with one species. However, it is unclear whether the improved performance stems from a larger data availability or the multi-species data since no notion of dataset size was given. Furthermore, the investigated species were visually distinct, this might have affected the performance of the networks.</p>
<p>The objective of this study is to investigate the across-species performance of an animal pose-estimation network trained on broilers and tested on turkeys, and vice versa. Furthermore, since the interest is in working with limited data, the performance of an animal pose-estimation network trained on a multi-species training dataset (turkeys and broilers) will also be investigated. A multi-species dataset could potentially reduce annotation needs in both species without a negative effect on performance.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec>
<title>Data Collection</title>
<p>The data used in this research were collected in two different trials, one for turkeys and one for broilers. The data were not specifically collected for this study but are representative of the situation in practice. The data collection for each species is presented separately, though with a similar structure for easier comparison.</p>
</sec>
<sec>
<title>Turkeys</title>
<p>Data were collected on 83 male breeder turkeys at 20 weeks of age. This is approximately the slaughter age for commercial turkeys (Wood, <xref ref-type="bibr" rid="B46">2009</xref>). The animals were subjected to a standard walkway test applied in the turkey breeding program of Hybrid Turkeys (Hendrix Genetics, Boxmeer, The Netherlands). The birds were stimulated to walk along a corridor (width: &#x0007E;1.5 m, length: &#x0007E;6 m) within the barn. Video recordings (RGB) were made from behind with an Intel&#x000AE; RealSense&#x02122; Depth Camera D415 (Intel Corporation, Santa Clara, United States; Resolution: 1,280 &#x000D7; 720, Frame Rate: 30). The camera was set up on a small tripod on a bucket to get a clear view of the legs of the birds. The camera was parallel to the ground and in the center of the walkway. A person trailed behind the birds to stimulate walking, if needed by waving a hand or tapping on the back of the bird. During the trial, the birds were equipped with three IMUs: one around the neck, the other two just above the hocks. The IMU data were not used in this study, but the IMUs were visible in the videos. Other birds were visible through wire-mesh fencing. The videos were cropped to a size of 600 &#x000D7; 720 to reduce the visibility of other turkeys through the wire-mesh fencing. The birds were housed under typical commercial circumstances.</p>
</sec>
<sec>
<title>Broilers</title>
<p>Data were collected on 47 conventional broilers at 37 days of age. The broilers were in the finishing stage and nearing the slaughter age of 41 days (Van Horne, <xref ref-type="bibr" rid="B41">2020</xref>). The birds were stimulated to walk along a corridor (width: &#x0007E;0.4 m, length: &#x0007E;3 m) within the pen. Video recordings (RGB) were made from behind with the same Intel&#x000AE; RealSense&#x02122; Depth Camera D415 as used in the turkey experiment. The camera was set up in a fixed position on a metal rig attached to the front panel of the runway to get a clear view of the legs of the birds from behind. The camera was parallel to the ground and in the center of the walkway. The birds were stimulated to walk with a black screen made of wire netting on a stick. Other birds were not visible due to non-transparent side panels. The videos were not cropped since other broilers were not visible. The birds were housed in an experimental facility with a low stocking density (25 birds on 6 m<sup>2</sup>) but with a standard light and feeding regime.</p>
</sec>
<sec>
<title>Frame Extraction and Annotation</title>
<p>The toolbox of DeepLabCut 2.0 (version 2.1.8.2; Nath et al., <xref ref-type="bibr" rid="B28">2019</xref>) was used to extract and annotate the frames from the collected RGB-videos (<xref ref-type="table" rid="T1">Table 1</xref>). For the turkeys, 20 frames per video/turkey were manually extracted to ensure no other animals were visible within the walkway and to exclude frames with human&#x02013;animal interaction. For two turkeys, 50 frames were extracted. These two turkeys were part of our initial trial with DeepLabCut and hence had more annotated frames available. For the broilers, 40 frames per video/broiler were extracted, randomly sampled from a uniform distribution across time. The number of frames per broiler was roughly double the number of frames per turkey because the number of available broiler videos was roughly half the number of available turkey videos.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Data overview per species.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Species</bold></th>
<th valign="top" align="center"><bold>Resolution</bold></th>
<th valign="top" align="center"><bold>No. frames</bold></th>
<th valign="top" align="center"><bold>No. animals</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="3"><bold>No. frames per animal</bold></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th/>
<th valign="top" align="center"><bold>Min</bold></th>
<th valign="top" align="center"><bold>Mean</bold></th>
<th valign="top" align="center"><bold>Max</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Turkey</td>
<td valign="top" align="center">600 &#x000D7; 720</td>
<td valign="top" align="center">1,747</td>
<td valign="top" align="center">83</td>
<td valign="top" align="center">20</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">50</td>
</tr>
<tr>
<td valign="top" align="left">Broiler</td>
<td valign="top" align="center">1,280 &#x000D7; 720</td>
<td valign="top" align="center">1,530</td>
<td valign="top" align="center">47</td>
<td valign="top" align="center">15</td>
<td valign="top" align="center">32</td>
<td valign="top" align="center">39</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In principle, eight keypoints were annotated in each frame: head, neck, left knee, left hock, left foot, right knee, right hock, right foot (<xref ref-type="fig" rid="F1">Figure 1</xref>). However, in some frames not all keypoints were visible (e.g., the rump obscuring the head because the bird put its head down); these frames were retained, but the occluded keypoints were not annotated. The annotations are visually estimated locations based on morphological knowledge, but can deviate from ground truth, particularly for keypoints obscured by plumage. The head was annotated at the top, the neck at the base, the knees at the estimated location of the knee, the hocks at the transition of the feathers into scales, and the feet approximately at the height of the first toe in the middle. The annotated data consisted of the <italic>x</italic> and <italic>y</italic> coordinates of the visible keypoints within the frames.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Example of broiler <bold>(A)</bold> and turkey <bold>(B)</bold> annotations. The keypoints are head (red), neck (blue), left knee (green), left hock (purple), left foot (yellow), right knee (brown), right hock (pink), and right foot (gray). Images are cropped.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fanim-02-791290-g0001.tif"/>
</fig>
<p>Extracted frames with no animal in view or no visible keypoints (i.e., animal too close to the camera) were not annotated and subsequently removed. This only occurred in broiler frames, due to the random frame extraction for the broilers vs. the manual frame extraction for the turkeys. Altogether, 350 broiler frames were removed. There was no threshold on the minimum number of keypoints within a frame. In total, 3,277 frames were annotated by one annotator, consisting of 1,747 turkey frames and 1,530 broiler frames. The number of frames differed per animal (<xref ref-type="table" rid="T1">Table 1</xref>).</p>
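The broiler frames were sampled randomly from a uniform distribution across the video's timeline. A minimal sketch of such sampling; the function name is illustrative and this is not the DeepLabCut toolbox implementation:

```python
import numpy as np

def sample_frame_indices(n_video_frames, n_samples, seed=0):
    """Sample frame indices uniformly across a video, without replacement,
    as done for the broiler recordings (40 frames per video)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(n_video_frames, size=n_samples, replace=False)
    return np.sort(idx)

# e.g., 40 frames from a 30 fps recording of ~60 s (1,800 frames)
indices = sample_frame_indices(1800, 40)
```

Sampling without replacement guarantees distinct frames; sorting keeps them in temporal order for annotation.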
</sec>
<sec>
<title>Datasets for Training and Testing</title>
<p>Five datasets were created from the annotated frames to train and test pose-estimation networks: two turkey datasets, two broiler datasets, and one multi-species training (turkey and broiler) dataset (<xref ref-type="table" rid="T2">Table 2</xref>). The single-species datasets were created by splitting the total number of frames into a training and a test set (80 and 20%, respectively). Animals in the test set did not occur in the training set. Most animals in the test set were randomly selected; some were selected deliberately to obtain a proper 80/20 split, since the number of frames differed per animal. The remainder of the frames made up the training data. The multi-species dataset was a combination of turkey and broiler training frames. Most animals in the multi-species dataset were randomly selected from the animals in the turkey and broiler training sets; some were selected to reach the correct total number of frames. The five datasets thus consisted of three training datasets (turkey, broiler, multi-species) and two test datasets (turkey and broiler). An overview of the datasets is provided in <xref ref-type="table" rid="T2">Table 2</xref>.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Dataset configuration.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Species</bold></th>
<th valign="top" align="center"><bold>No. training frames</bold><break/> <bold>(No. animals)</bold></th>
<th valign="top" align="center"><bold>No. testing frames</bold><break/> <bold>(No. animals)</bold></th>
<th valign="top" align="center"><bold>No. total frames</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>Turkey</bold></td>
<td valign="top" align="center">1,397 (67)</td>
<td valign="top" align="center">350 (16)</td>
<td valign="top" align="center">1,747</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Broiler</bold></td>
<td valign="top" align="center">1,224 (37)</td>
<td valign="top" align="center">306 (10)</td>
<td valign="top" align="center">1,530</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Multi-species</bold></td>
<td valign="top" align="center">600/601 (30/19)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">1,201</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The multi-species dataset reports two numbers, the first relates to turkeys and the second to broilers</italic>.</p>
</table-wrap-foot>
</table-wrap>
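The animal-level split described above (no animal in both the training and the test set, with uneven frame counts per animal) can be sketched as follows. The function name and the greedy selection are illustrative assumptions, not the exact procedure used in the study:

```python
import random

def split_by_animal(frames_per_animal, test_fraction=0.2, seed=42):
    """Split annotated frames into train/test sets at the animal level,
    so no animal occurs in both sets (as in Table 2).

    frames_per_animal: dict mapping animal id to its number of frames.
    Returns (train_animals, test_animals).
    """
    rng = random.Random(seed)
    animals = list(frames_per_animal)
    rng.shuffle(animals)
    target = test_fraction * sum(frames_per_animal.values())
    test, n_test = [], 0
    for a in animals:                      # add animals until ~20% of frames
        if n_test >= target:
            break
        test.append(a)
        n_test += frames_per_animal[a]
    train = [a for a in animals if a not in test]
    return train, test
```

Because whole animals are moved between sets, the realized test fraction only approximates 20%, which is why some animals were selected deliberately in the study.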
</sec>
<sec>
<title>Pose-Estimation</title>
<p>DeepLabCut is an open-source deep-learning-based pose-estimation tool (Mathis et al., <xref ref-type="bibr" rid="B24">2018</xref>; Nath et al., <xref ref-type="bibr" rid="B28">2019</xref>). In DeepLabCut, the feature detector from DeeperCut (Insafutdinov et al., <xref ref-type="bibr" rid="B15">2016</xref>) is followed by deconvolutional layers to produce a score-map and a location refinement field for each keypoint. The score-map encodes the location probabilities of the keypoints (<xref ref-type="fig" rid="F2">Figure 2</xref>). The location refinement field predicts an offset to counteract the effect of the down sampled score-map. The feature detector is a variant of deep residual neural networks (ResNet-50; He et al., <xref ref-type="bibr" rid="B14">2016</xref>) pre-trained on ImageNet&#x02014;a large-scale dataset for object recognition (Deng et al., <xref ref-type="bibr" rid="B10">2009</xref>). The pre-trained network was fine-tuned for our task. This fine-tuning improves performance, reduces computational time, and reduces data requirements (Yosinski et al., <xref ref-type="bibr" rid="B47">2014</xref>). During fine-tuning, the weights of the pre-trained network are iteratively adjusted on the training data of our task to ensure that the network returns high probabilities for the annotated keypoint locations (Mathis et al., <xref ref-type="bibr" rid="B24">2018</xref>). 
DeepLabCut returns the location <inline-formula><mml:math id="M1"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mover accent='true'><mml:mi>y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula> with the highest likelihood (&#x003B8;<sub><italic>i</italic></sub>) for each predicted keypoint in each frame (<xref ref-type="fig" rid="F2">Figure 2</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Example of a broiler score-map. The score-map encodes the location probabilities of the keypoints.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fanim-02-791290-g0002.tif"/>
</fig>
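The decoding step described above can be sketched as follows: the argmax of the down-sampled score-map is scaled back by the network stride and corrected with the location refinement field. The stride value, half-stride centering, and function name are illustrative assumptions rather than DeepLabCut's exact implementation:

```python
import numpy as np

def decode_keypoint(score_map, offset_field, stride=8):
    """Decode one keypoint from a down-sampled score-map plus a location
    refinement field (sketch).

    score_map: (H, W) location probabilities for one keypoint.
    offset_field: (H, W, 2) predicted (dx, dy) offsets in pixels.
    Returns ((x, y), likelihood) in full-image coordinates.
    """
    # location of the highest probability in the down-sampled map
    r, c = np.unravel_index(np.argmax(score_map), score_map.shape)
    dx, dy = offset_field[r, c]
    # scale back to image coordinates and apply the refinement offset
    x = c * stride + 0.5 * stride + dx
    y = r * stride + 0.5 * stride + dy
    return (x, y), score_map[r, c]
```

The refinement field counteracts the quantization introduced by the down-sampled score-map, as described in the paragraph above.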
</sec>
<sec>
<title>Analyses</title>
<p>DeepLabCut (core, version 2.1.8.1; Mathis et al., <xref ref-type="bibr" rid="B24">2018</xref>) was used to train three networks, one for each training dataset [turkey (T), broiler (B), and multi-species (M)]. All three networks were tested on both test datasets (turkey and broiler), thus within-species and across-species (<xref ref-type="table" rid="T2">Table 2</xref>). The model and test set will be indicated with the following convention; the first letter denotes the model, and the second letter the test set, i.e., MT stands for multi-species model on turkey test set and BB stands for broiler model on broiler test set.</p>
<p>All three networks were trained with default parameters for 1.03 million iterations (the default). The number of epochs&#x02014;the number of times the entire dataset is passed through the network&#x02014;differed between networks due to the different training set sizes (turkey: 737 epochs; broiler: 841 epochs; multi-species: 858 epochs).</p>
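The epoch counts follow from the iteration count and the training set sizes, assuming DeepLabCut's default batch size of 1 (one iteration presents one frame); the reported values match this arithmetic within rounding:

```python
def epochs(iterations, n_training_frames):
    """Epochs ~= iterations / training frames, assuming a batch size of 1."""
    return iterations / n_training_frames

# 1.03 million default iterations over the training set sizes of Table 2:
# turkey 1,397 frames -> ~737; broiler 1,224 -> ~841; multi-species 1,201 -> ~858
```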
<p>In <xref ref-type="table" rid="T2">Table 2</xref>, a testing scheme is presented. The testing scheme shows within-species (TT and BB), across-species (TB and BT) and multi-species model (MT and MB) testing. The within-species test established the performance of the networks on the species on which the model was trained. The across-species test was used to assess a network&#x00027;s performance across species, i.e., on the species on which the model was not trained. The multi-species model was tested on both test sets to assess the performance of a network trained with a combination of species and fewer annotations per species.</p>
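The naming convention above yields six model/test-set combinations, which can be enumerated as:

```python
# First letter denotes the model, second letter the test set.
models = ["T", "B", "M"]   # turkey, broiler, multi-species
test_sets = ["T", "B"]     # turkey, broiler
scheme = [m + t for m in models for t in test_sets]

within = [c for c in scheme if c in ("TT", "BB")]   # within-species
across = [c for c in scheme if c in ("TB", "BT")]   # across-species
multi = [c for c in scheme if c.startswith("M")]    # multi-species model
```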
</sec>
<sec>
<title>Evaluation Metrics</title>
<p>The performance of the models was evaluated with the raw pixel error, the normalized pixel error, and the percentage of keypoints remaining (PKR). The raw pixel error and normalized pixel error were calculated both for all keypoints and for keypoints with a likelihood higher than or equal to 0.6 (the default in DeepLabCut).</p>
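Filtering predictions at the 0.6 likelihood cutoff and computing the PKR can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def filter_by_likelihood(preds, likelihoods, threshold=0.6):
    """Keep only keypoint predictions at or above the likelihood cutoff
    (0.6 is DeepLabCut's default) and report the percentage of keypoints
    remaining (PKR).

    preds: (N, 2) predicted (x, y) coordinates; likelihoods: (N,).
    """
    keep = likelihoods >= threshold
    pkr = 100.0 * keep.sum() / keep.size
    return preds[keep], pkr
```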
<p>The raw pixel error was expressed as the Euclidean distance between the <italic>x</italic> and <italic>y</italic> coordinates of the model predictions and the human annotator.</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x0002B;</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x00177;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msqrt></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <italic>d</italic><sub><italic>ij</italic></sub> is the Euclidean distance between the predicted location of keypoint <italic>i</italic>, <inline-formula><mml:math id="M3"><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mover accent='true'><mml:mi>y</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>, and its annotated location, (<italic>x</italic><sub><italic>i</italic></sub>, <italic>y</italic><sub><italic>i</italic></sub>), in frame <italic>j</italic>.</p>
<p>The average Euclidean distance was calculated per keypoint over all frames.</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x02032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msubsup><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Where <inline-formula><mml:math id="M5"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the average Euclidean distance of keypoint <italic>i</italic>, <italic>N</italic> is the total number of frames, and <italic>N</italic>&#x02032; is the number of frames in which keypoint <italic>i</italic> was annotated, and thus visible.</p>
<p>The overall average Euclidean distance was calculated over all keypoints over all frames.</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M6"><mml:mrow><mml:mover accent='true'><mml:mi>d</mml:mi><mml:mo>&#x000AF;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mo>&#x0007C;</mml:mo><mml:mi>&#x02133;</mml:mi><mml:mo>&#x0007C;</mml:mo></mml:mrow></mml:mfrac><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mtext>&#x000A0;</mml:mtext><mml:mo>&#x02208;</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mi>&#x02133;</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mstyle></mml:mrow></mml:math></disp-formula>
<p>Where <inline-formula><mml:math id="M7"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> is the overall average Euclidean distance and <inline-formula><mml:math id="M8"><mml:mi>&#x02133;</mml:mi></mml:math></inline-formula> is the set of all valid annotations of all keypoints <italic>i</italic> in all frames <italic>j</italic>.</p>
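<p>As a minimal sketch (not the authors&#x00027; code), Equations (1)&#x02013;(3) could be computed with NumPy as follows; the array layout and all names are assumptions for illustration:</p>

```python
import numpy as np

# Assumed layout: `pred` and `annot` are arrays of shape
# (n_frames, n_keypoints, 2) holding (x, y) coordinates, with NaN in
# `annot` for keypoints that were not annotated (not visible) in a frame.
def pixel_errors(pred, annot):
    """Raw pixel error d_ij per keypoint i and frame j (Equation 1)."""
    return np.sqrt(np.sum((pred - annot) ** 2, axis=-1))

def per_keypoint_error(pred, annot):
    """Average error of keypoint i over the N' frames where it is annotated (Equation 2)."""
    return np.nanmean(pixel_errors(pred, annot), axis=0)

def overall_error(pred, annot):
    """Average error over the set of all valid annotations (Equation 3)."""
    return np.nanmean(pixel_errors(pred, annot))
```

NaN entries propagate through the distance calculation, so `nanmean` averages only over annotated keypoints, matching the restriction to visible keypoints in Equations (2) and (3).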
<p>As the animal moves away from the camera, its size relative to the frame decreases. The normalized pixel error therefore corrects the raw pixel error for the size of the animal in the frame: an error of five pixels when the animal is near the camera is better than an error of five pixels when the animal is further away. The raw pixel errors were normalized by the square root of the bounding box area of the legs, as the head and neck keypoints were not always visible. The bounding box was constructed from the annotated keypoints to ensure that the normalization of the raw pixel error was independent of the predictions. The square root of the bounding box area penalized the pixel errors less for large bounding boxes than for small bounding boxes.</p>
<p>The normalized pixel error was calculated as follows:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M9"><mml:mrow><mml:mi>N</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>m</mml:mi><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msqrt><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:munder><mml:mtext>max</mml:mtext><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>&#x02112;</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:munder><mml:mtext>min</mml:mtext><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>&#x02112;</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x02217;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:munder><mml:mtext>max</mml:mtext><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>&#x02112;</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:munder><mml:mtext>min</mml:mtext><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:mi>&#x02112;</mml:mi></mml:mrow></mml:munder><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msqrt></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>Where <italic>d</italic><sub><italic>ij</italic></sub> is the raw pixel error as in Equation (1) and <inline-formula><mml:math id="M11"><mml:mi>&#x02112;</mml:mi></mml:math></inline-formula> is the set of annotated leg keypoint coordinates, (<italic>x</italic><sub><italic>ij</italic></sub>, <italic>y</italic><sub><italic>ij</italic></sub>), in frame <italic>j</italic>. The leg keypoints consist of the knees, the hocks, and the feet. The normalized pixel error was reported either as the average normalized error per keypoint, as in Equation (2), or as the overall average normalized error, as in Equation (3), with <italic>d</italic><sub><italic>ij</italic></sub> substituted by <italic>Normd</italic><sub><italic>ij</italic></sub>.</p>
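<p>A minimal sketch of Equation (4) (not the authors&#x00027; code); the array layout and names are assumptions for illustration:</p>

```python
import numpy as np

# `legs` is assumed to be an array of shape (n_leg_keypoints, 2) with the
# annotated leg keypoints (knees, hocks, feet) of one frame; `d_ij` is the
# raw pixel error of one keypoint in that frame (Equation 1).
def normalized_error(d_ij, legs):
    """Divide the raw pixel error by the square root of the leg bounding-box area."""
    width = np.max(legs[:, 0]) - np.min(legs[:, 0])
    height = np.max(legs[:, 1]) - np.min(legs[:, 1])
    return d_ij / np.sqrt(width * height)
```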
<p>The PKR is the percentage of keypoints with a likelihood greater than or equal to 0.6 over the total number of keypoints with a Euclidean distance. Only annotated keypoints have a Euclidean distance (see also Equation 1). The PKR is a proxy for the confidence of the model and should always be considered in unison with the pixel error: a model with a high PKR and a low pixel error is confidently right.</p>
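<p>A minimal sketch of the PKR (not the authors&#x00027; code), assuming <monospace>likelihoods</monospace> holds the model&#x00027;s confidence for every keypoint that has a Euclidean distance, i.e., every annotated keypoint; 0.6 is the DeepLabCut default cutoff:</p>

```python
import numpy as np

def pkr(likelihoods, cutoff=0.6):
    """Percentage of annotated keypoints with likelihood >= cutoff."""
    likelihoods = np.asarray(likelihoods)
    return 100.0 * np.mean(likelihoods >= cutoff)
```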
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>The models were used to investigate the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data. The models were tested according to the testing scheme in <xref ref-type="table" rid="T3">Table 3</xref>. The performances of all models over all keypoints are shown in <xref ref-type="table" rid="T4">Tables 4</xref>, <xref ref-type="table" rid="T5">5</xref>.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Testing scheme.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Test set (species)</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="3"><bold>Model</bold></th>
</tr>
<tr>
<th/>
<th valign="top" align="left"><bold>Turkey (T)</bold></th>
<th valign="top" align="left"><bold>Broiler (B)</bold></th>
<th valign="top" align="left"><bold>Multi-species (M)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Turkey (T)</td>
<td valign="top" align="left">Within-species performance (TT)</td>
<td valign="top" align="left">Across-species performance (BT)</td>
<td valign="top" align="left">Multi-species performance (MT)</td>
</tr>
<tr>
<td valign="top" align="left">Broiler (B)</td>
<td valign="top" align="left">Across-species performance (TB)</td>
<td valign="top" align="left">Within-species performance (BB)</td>
<td valign="top" align="left">Multi-species Performance (MB)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The following convention is used to convey the model and test set: the first letter denotes the model, the second letter the test set, e.g., BT stands for the broiler model on the turkey test set</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Train and test performance of turkey and broiler model within species.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Train error</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Test error</bold></th>
<th valign="top" align="center"><bold>PKR<xref ref-type="table-fn" rid="TN1"><sup>a</sup></xref> (%)</bold></th>
</tr>
<tr>
<th/>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Raw<xref ref-type="table-fn" rid="TN2"><sup>b</sup></xref></bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Normalized<xref ref-type="table-fn" rid="TN3"><sup>c</sup></xref></bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Raw</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Normalized</bold></th>
<th/>
</tr>
<tr>
<th/>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>Turkey</bold></td>
<td valign="top" align="center">2.34</td>
<td valign="top" align="center">2.34</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">6.76</td>
<td valign="top" align="center">6.27</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">99</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Broiler</bold></td>
<td valign="top" align="center">2.26</td>
<td valign="top" align="center">2.26</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">12.05</td>
<td valign="top" align="center">7.56</td>
<td valign="top" align="center">0.11</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">94</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Model performance reported in raw pixel error, normalized pixel error, and the PKR over all keypoints. Filter implies that only keypoints with &#x003B8; &#x02265; 0.6 were considered in the error calculation. Errors closer to zero are better; PKR closer to 100% is better.</italic></p>
<fn id="TN1">
<label>a</label>
<p><italic>The percentage of keypoints with a likelihood greater than or equal to 0.6 over the total number of keypoints with a Euclidean distance.</italic></p></fn>
<fn id="TN2">
<label>b</label>
<p><italic>The Euclidean distance in pixels between the annotated and predicted keypoints.</italic></p></fn>
<fn id="TN3">
<label>c</label>
<p><italic>The Euclidean distance normalized to the square root of the bounding box area of the legs</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Performance of all models on the test sets.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="left"><bold>Test set</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Test error</bold></th>
<th valign="top" align="center"><bold>PKR<xref ref-type="table-fn" rid="TN4"><sup>a</sup></xref> (%)</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Trained on</bold></th>
<th valign="top" align="left"><bold>Species</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Raw<xref ref-type="table-fn" rid="TN5"><sup>b</sup></xref></bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Normalized<xref ref-type="table-fn" rid="TN6"><sup>c</sup></xref></bold></th>
<th/>
</tr>
<tr>
<th/>
<th/>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Filter</bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Turkey</td>
<td valign="top" align="left">Broiler</td>
<td valign="top" align="center">132.50</td>
<td valign="top" align="center">32.13</td>
<td valign="top" align="center">2.35</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">42</td>
</tr>
<tr>
<td valign="top" align="left">Broiler</td>
<td valign="top" align="left">Turkey</td>
<td valign="top" align="center">82.37</td>
<td valign="top" align="center">51.03</td>
<td valign="top" align="center">0.88</td>
<td valign="top" align="center">0.54</td>
<td valign="top" align="center">58</td>
</tr>
<tr>
<td valign="top" align="left">Multi-species</td>
<td valign="top" align="left">Turkey</td>
<td valign="top" align="center">8.89</td>
<td valign="top" align="center">7.33</td>
<td valign="top" align="center">0.07</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">97</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Broiler</td>
<td valign="top" align="center">22.19</td>
<td valign="top" align="center">9.98</td>
<td valign="top" align="center">0.16</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">89</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Model performance reported in raw pixel error, normalized pixel error, and the PKR over all keypoints. Filter implies that only keypoints with &#x003B8; &#x02265; 0.6 were considered in the error calculation. Errors closer to zero are better; PKR closer to 100% is better.</italic></p>
<fn id="TN4">
<label>a</label>
<p><italic>The percentage of keypoints with a likelihood greater than or equal to 0.6 over the total number of keypoints with a Euclidean distance.</italic></p></fn>
<fn id="TN5">
<label>b</label>
<p><italic>The Euclidean distance in pixels between the annotated and predicted keypoints.</italic></p></fn>
<fn id="TN6">
<label>c</label>
<p><italic>The Euclidean distance normalized to the square root of the bounding box area of the legs</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<sec>
<title>Comparison Between Within-Species, Across-Species, and Multi-Species</title>
<p>On all evaluation metrics calculated over all keypoints, the within-species models (TT, BB) outperformed the multi-species model (MT, MB) and the models applied across species (TB, BT) (<xref ref-type="table" rid="T4">Tables 4</xref>, <xref ref-type="table" rid="T5">5</xref>). The within-species models had lower raw pixel errors, lower normalized pixel errors, and higher PKRs than the multi-species model and the models applied across species. Compared to the within-species models, the multi-species model had slightly higher normalized errors (&#x0002B;0.01). However, the errors across species were considerably higher (&#x0002B;0.57; &#x0002B;0.49) than they were for the within-species models.</p>
<p>Performance varied per keypoint, not only within models but also between models (<xref ref-type="table" rid="T6">Table 6</xref>). In general, the head, neck, and knee keypoints were predicted with the highest errors. Applied across species, the models always performed worse than their within-species counterparts and the multi-species model. On the broiler test set, the multi-species model outperformed the broiler model for the head and right knee keypoints, although this did coincide with a lower PKR. The turkey model had either a similar or better performance than the multi-species model on the turkey test set, but the multi-species model did generally have a lower PKR.</p>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p>Performance of all models on the test sets per keypoint.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Turkey</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Broiler</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="4"><bold>Multi-species</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Test species</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Turkey</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Broiler</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Turkey</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Broiler</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Turkey</bold></th>
<th valign="top" align="center" style="border-bottom: thin solid #000000;" colspan="2"><bold>Broiler</bold></th>
</tr>
<tr>
<th valign="top" align="left"><bold>Keypoint</bold></th>
<th valign="top" align="center"><bold>Normalized<xref ref-type="table-fn" rid="TN7"><sup>a</sup></xref></bold></th>
<th valign="top" align="center"><bold>PKR<xref ref-type="table-fn" rid="TN8"><sup>b</sup></xref> (%)</bold></th>
<th valign="top" align="center"><bold>Normalized</bold></th>
<th valign="top" align="center"><bold>PKR (%)</bold></th>
<th valign="top" align="center"><bold>Normalized</bold></th>
<th valign="top" align="center"><bold>PKR (%)</bold></th>
<th valign="top" align="center"><bold>Normalized</bold></th>
<th valign="top" align="center"><bold>PKR (%)</bold></th>
<th valign="top" align="center"><bold>Normalized</bold></th>
<th valign="top" align="center"><bold>PKR (%)</bold></th>
<th valign="top" align="center"><bold>Normalized</bold></th>
<th valign="top" align="center"><bold>PKR (%)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Head</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">11.14</td>
<td valign="top" align="center">1</td>
<td valign="top" align="center">1.17</td>
<td valign="top" align="center">64</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">95</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">97</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">84</td>
</tr>
<tr>
<td valign="top" align="left">Neck</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">1</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">67</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">93</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">97</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">78</td>
</tr>
<tr>
<td valign="top" align="left">Left knee</td>
<td valign="top" align="center">0.05</td>
<td valign="top" align="center">100</td>
<td valign="top" align="center">0.20</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">0.21</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">86</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">97</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">84</td>
</tr>
<tr>
<td valign="top" align="left">Left hock</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">100</td>
<td valign="top" align="center">0.30</td>
<td valign="top" align="center">46</td>
<td valign="top" align="center">0.39</td>
<td valign="top" align="center">52</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">97</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">0.12</td>
<td valign="top" align="center">93</td>
</tr>
<tr>
<td valign="top" align="left">Left foot</td>
<td valign="top" align="center">0.03</td>
<td valign="top" align="center">100</td>
<td valign="top" align="center">0.13</td>
<td valign="top" align="center">47</td>
<td valign="top" align="center">0.15</td>
<td valign="top" align="center">68</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">98</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">97</td>
</tr>
<tr>
<td valign="top" align="left">Right knee</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">96</td>
<td valign="top" align="center">0.35</td>
<td valign="top" align="center">63</td>
<td valign="top" align="center">0.13</td>
<td valign="top" align="center">45</td>
<td valign="top" align="center">0.14</td>
<td valign="top" align="center">88</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">92</td>
<td valign="top" align="center">0.12</td>
<td valign="top" align="center">81</td>
</tr>
<tr>
<td valign="top" align="left">Right hock</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">100</td>
<td valign="top" align="center">0.41</td>
<td valign="top" align="center">46</td>
<td valign="top" align="center">0.24</td>
<td valign="top" align="center">73</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">95</td>
<td valign="top" align="center">0.04</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">0.12</td>
<td valign="top" align="center">96</td>
</tr>
<tr>
<td valign="top" align="left">Right foot</td>
<td valign="top" align="center">0.06</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">1.72</td>
<td valign="top" align="center">73</td>
<td valign="top" align="center">0.17</td>
<td valign="top" align="center">72</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">99</td>
<td valign="top" align="center">0.08</td>
<td valign="top" align="center">100</td>
<td valign="top" align="center">0.10</td>
<td valign="top" align="center">93</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Model performance reported in normalized pixel error and the PKR per keypoint. Only keypoints with &#x003B8; &#x02265; 0.6 were considered in the error calculation. Errors closer to zero are better; PKR closer to 100% is better.</italic></p>
<fn id="TN7">
<label>a</label>
<p><italic>The Euclidean distance normalized to the square root of the bounding box area of the legs.</italic></p></fn>
<fn id="TN8">
<label>b</label>
<p><italic>The percentage of keypoints with a likelihood greater than or equal to 0.6 over the total number of keypoints with a Euclidean distance</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>Within-Species</title>
<p>On the training dataset, both within-species models (TT, BB) showed comparable raw pixel errors and normalized pixel errors (<xref ref-type="table" rid="T4">Table 4</xref>). On the test set, the turkey model (TT) had a lower raw and normalized pixel error and a higher PKR than the broiler model (BB). The turkey model had the lowest errors for the left hock and left foot keypoints and the highest error for the right knee keypoint (<xref ref-type="table" rid="T6">Table 6</xref>); the right knee keypoint error was 0.03 higher than the left knee keypoint error. The leg keypoint errors of the broiler model were rather consistent within each leg, except for the right knee keypoint.</p>
</sec>
<sec>
<title>Multi-Species</title>
<p>Multi-species model performance differed between species (MT, MB; <xref ref-type="table" rid="T5">Table 5</xref>). The multi-species model performed better on the turkey test set (MT) than on the broiler test set (MB). On the turkey test set, the multi-species model had the highest error for the neck keypoint and the lowest error for the left hock keypoint (<xref ref-type="table" rid="T6">Table 6</xref>). On the broiler test set, it had the highest errors for the hocks and right knee keypoints, and the lowest error for the head keypoint.</p>
</sec>
<sec>
<title>Across-Species</title>
<p>Across species, the turkey and broiler models had high errors (TB, BT; <xref ref-type="table" rid="T5">Table 5</xref>). The turkey model on the broiler test set had the highest error for the head keypoint, whereas the left foot keypoint had the lowest error (<xref ref-type="table" rid="T6">Table 6</xref>). The broiler model on the turkey test set also had the highest error for the head keypoint and the lowest error for the left foot keypoint.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. The results showed that the models performed best within species, followed by the multi-species model, and worst across species, as illustrated by the raw pixel errors, normalized pixel errors, and PKRs. However, the multi-species model outperformed the broiler model on the broiler test set for the head, left foot, and right knee keypoints, though with a lower PKR.</p>
<sec>
<title>Data Availability and Model Performance</title>
<p>The turkey model outperformed the broiler model on the within-species test set (<xref ref-type="table" rid="T4">Table 4</xref>), even though both models had comparable raw and normalized pixel errors on the training dataset. For the turkeys, the training set was slightly larger (<italic>n</italic> = 1,397) than the broiler training set (<italic>n</italic> = 1,224), which might explain the difference in performance. However, the turkey test set was likely less challenging, as the difference between the unfiltered and filtered error was smaller for the turkey model than for the broiler model. The difference in difficulty can partly be explained by the difference in frame extraction: the broiler dataset consisted of frames randomly sampled from a uniform distribution across time, whereas the turkey dataset consisted of consecutive frames. The temporal correlation between consecutive frames may explain why the turkey test set was less challenging.</p>
<p>Overall, the multi-species model had higher errors and a lower PKR than the single-species models. Yet, compared to the within-species models, the multi-species model had less than half the number of annotated frames of the tested species. Interestingly, for certain keypoints the multi-species model performed better than or similarly to the single-species models, but with less confidence, hence a lower PKR. This suggests that data from the other species helped to improve performance for certain keypoints but lowered the PKR. The lower PKR is more apparent on the broiler test set but also noticeable on the turkey test set, and may be caused by an interplay between the inclusion of training data from the other species and a lower variability within the species-specific training data.</p>
<p>The pose-estimation networks applied across species had no training data on the target species and could still estimate keypoints. These keypoint estimates appear to be relatively informed, as indicated by the normalized errors. This suggests that, for comparable species, an existing model could be fine-tuned on limited data of a new target species when little data on that species is available. The performance of the pose-estimation models confirmed that the success of a supervised deep learning model depends on the availability of data, as was noted by Sun et al. (<xref ref-type="bibr" rid="B38">2017</xref>).</p>
<p>Across species, the head and neck showed high normalized pixel errors for both turkeys and broilers. Across-species pose estimation is influenced by differences in the appearance of the animals and differences in the environment. There are inherent differences in appearance between turkeys and broilers, especially concerning the head and neck: a turkey head is featherless with a light-blue tint, whereas a broiler head is feathered and white. In our case, it appears that DeepLabCut was dependent on the color of the keypoints. The broiler model tended to predict the turkey head in the white overhead lights, on workers&#x00027; white boots, and on turkeys at the end of the walkway. These locations were relatively far away from the bird, as indicated by the pixel error. A model that uses the spatial information of other keypoints within a frame could notice that such predictions are too far off and search for the second-best location closer to the other keypoints. This suggests that single-animal DeepLabCut could benefit from the use of spatial information of other keypoints within a frame, as was also noted by Labuguen et al. (<xref ref-type="bibr" rid="B20">2021</xref>).</p>
</sec>
<sec>
<title>Data Collection</title>
<p>In this study, the data were collected in two different trials, one for turkeys and one for broilers, neither of which was specific to this study. Recording both species in the same setting under the same conditions might have been better for comparing model performance between the two species, but that can only be done in an experimental setting, which often translates poorly to practical implementation. The datasets used here were representative of the situation in practice for poultry breeding programs. In the end, the models will have to work in less regulated environments, i.e., barns and pens, to be of use.</p>
<p>In the turkey trial, multiple sensors collected data to assess the gait of the animals. The trial did not only involve a video camera: the animals were also equipped with IMUs, and a force plate was hidden underneath the bedding. The IMUs were attached to both legs and the neck, and hence were visible in the turkey video frames. The presence of the IMUs was likely picked up by the pose-estimation network, as the hocks often had the lowest normalized pixel error and the highest PKR of all keypoints within a turkey leg. Likewise, when the broiler model was tested on the turkey test set, it tended to predict the hocks at the transition from the Velcro strap of the IMU to the feathers, instead of at the transition from scales into feathers. The presence of external sensors thus seems to have influenced the performance of the pose-estimation networks on the turkey test set.</p>
<p>The turkey trial was conducted during a standard walkway test applied in the turkey breeding program of Hybrid Turkeys (Hendrix Genetics, Boxmeer, The Netherlands), and was therefore representative of a practical gait scoring situation. The turkeys were stimulated to walk by a worker, causing occlusions in the frames. Occlusions could also occur when another bird queued while the bird of interest was still walking. In the turkey dataset, only frames without occlusion by a worker or another bird were included. These occlusions limit the amount of usable data available for gait and pose estimation. The occlusions did not hinder the human expert, who can move around freely, whereas the camera is in a fixed position. In an ideal situation, each animal walks the full extent of the walkway one by one, as was done with the broilers. This would not only make the videos more usable but also allow for better sampling of the frames to train a network.</p>
</sec>
<sec>
<title>Annotation</title>
<p>During the annotation process, not all keypoints could be annotated equally accurately. For both turkeys and broilers, the knees were annotated at their estimated location, as the knees of the birds cannot be observed directly. The uncertainty in labeling, and thereby the variability in labeling, declined when the animal was further away from the camera, since the likely knee area simply shrank, but annotator uncertainty was still present. The larger likely knee area when the animal was near the camera, coupled with the annotator uncertainty, is likely to increase the raw pixel errors. The annotator uncertainty probably increased the variability of the knee keypoint annotations, which would have a negative effect on the PKR, as the network would have more trouble learning the knee keypoint. The annotator uncertainty becomes evident when we look at the normalized pixel error and the PKR of the turkey and broiler models applied within species: the knees had the highest normalized pixel error and/or the lowest PKR of the keypoints within each leg. Ideally, the normalized pixel error of the knees would reflect the decline of the likely knee area by being equal to the normalized pixel error of the other keypoints within the leg. However, this was the case only for the left leg of the broilers; in all other cases it was higher, showing that labeling uncertainty was still present.</p>
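<p>The effect of annotator uncertainty on measured error can be illustrated with a toy simulation, under the assumption that annotation noise is isotropic Gaussian: even a predictor that returns the true knee location is scored against noisy labels, so its measured pixel error is bounded below by the annotation noise, and that floor grows with the size of the likely knee area (animal near vs. far from the camera).</p>

```python
# Toy Monte Carlo illustration (assumed Gaussian annotation noise): the mean
# distance between the true point and noisy annotations of it is the
# irreducible error floor a perfect predictor would still be charged.
import numpy as np

def expected_label_error(noise_sigma, n=100_000, seed=0):
    """Mean distance between the true point and noisy 2-D annotations of it."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(0.0, noise_sigma, size=(n, 2))
    return float(np.linalg.norm(offsets, axis=1).mean())
```

<p>For isotropic 2-D Gaussian noise this mean distance is sigma times sqrt(pi/2), roughly 1.25 sigma, so doubling the radius of the likely knee area doubles the error floor regardless of how well the network has learned the keypoint.</p>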
</sec>
<sec>
<title>Prospects</title>
<p>This study provides insight into the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data. Accurate pose-estimation networks enable automated estimation of key body points in images or video frames, which is a prerequisite for using cameras for objective assessment of poses and gaits. Models trained within species would therefore perform best, provided sufficient annotated data are available on the species. Within-species models will provide more accurate keypoints, from which more accurate spatiotemporal (e.g., step time and speed) and kinematic (e.g., joint angles) gait and pose parameters can be estimated. In case of limited data availability, a multi-species model could be considered for pose assessment without a large impact on performance, if the species used are comparable. The across-species keypoint estimates may not be precise enough for accurate gait and pose assessments, but still appear to be relatively informed, as indicated by the normalized errors. A pose-estimation network may not be directly applicable across species, but it could serve as a pre-trained network that is fine-tuned on the target species when little data is available. An alternative could be the use of Generative Adversarial Networks (GANs; Zhu et al., <xref ref-type="bibr" rid="B48">2017</xref>). However, recent GANs appear to work better for changing coat color than for changing a dog into a cat (Cao et al., <xref ref-type="bibr" rid="B7">2019</xref>). Furthermore, even if the species change is successful, the accuracy of the converted keypoint labels could be negatively impacted (Cao et al., <xref ref-type="bibr" rid="B7">2019</xref>).</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Across species, keypoint predictions resulted in high errors and low to moderate PKRs, and are unlikely to be of direct use for pose and gait assessments. The multi-species model had slightly higher errors and a lower PKR than the within-species models, but had less than half the number of annotated frames available from each species. The within-species models had the overall best performance and will provide more accurate keypoints, from which more accurate spatiotemporal and kinematic&#x02014;geometric and time-dependent aspects of motion&#x02014;gait and pose parameters can be estimated. A multi-species model could reduce annotation needs without a large impact on performance in pose assessment, although it is recommended only when the species are comparable. Future studies should investigate the accuracy actually needed for pose and gait assessments and estimate genetic parameters for the new phenotypes before pose-estimation networks can be applied in practice.</p>
</sec>
<sec sec-type="data-availability" id="s6">
<title>Data Availability Statement</title>
<p>The turkey dataset analyzed for this study is not publicly available as it is the intellectual property of Hendrix Genetics. Requests to access the dataset should be directed to Bram Visser, <email>bram.visser&#x00040;hendrixgenetics.com</email>. The broiler dataset analyzed for this study is available upon reasonable request from Wageningen Livestock Research. Requests to access the dataset should be directed to Aniek C. Bouwman, <email>aniek.bouwman&#x00040;wur.nl</email>.</p>
</sec>
<sec id="s7">
<title>Ethics Statement</title>
<p>The Animal Welfare Body of Wageningen Research decided that ethical review was not necessary because the turkeys were not isolated, were semi-familiar with the corridor, and the IMUs were low in weight (1% of body weight) and attached for no longer than one hour. The Animal Welfare Body of Wageningen University noted that the broiler study did not constitute an animal experiment under Dutch law, as the experimental procedures described in the protocol of the broiler study would cause less pain or distress than the insertion of a needle under good veterinary practice.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>JD, AB, GK, and RV contributed to the conceptualization of the study. JD, AB, and JE were involved with broiler data collection. JD and TS performed annotation and analysis. JD wrote the first draft of the manuscript. JD, AB, EE, JE, GK, and RV reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>This study was financially supported by the Dutch Ministry of Economic Affairs (TKI Agri and Food project 16022) and the Breed4Food partners Cobb Europe, CRV, Hendrix Genetics and Topigs Norsvin.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that this study received funding from Cobb Europe, CRV, Hendrix Genetics and Topigs Norsvin. The funders had the following involvement in the study: All funders were involved in the study design. Hendrix Genetics was involved in data collection. None of the funders was involved in the analysis, interpretation of data, the writing of this article or the decision to submit it for publication.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack><p>We would like to thank the Hybrid Turkeys team (Kitchener, Canada) for the collection of the turkey data. We thank Henk Gunnink and Stephanie Melis (Animal Health and Welfare, Wageningen University and Research) for their help with broiler data collection. We would also like to thank Ard Nieuwenhuizen (Agrosystems Research, Wageningen University and Research) for his work on the turkey data and introducing us to DeepLabCut. The use of the HPC cluster has been made possible by CAT-AgroFood (Shared Research Facilities Wageningen UR).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abourachid</surname> <given-names>A</given-names></name></person-group> (<year>1991</year>). <article-title>Comparative gait analysis of two strains of turkey, meleagris gallopavo</article-title>. <source>Br. Poult. Sci.</source> <volume>32</volume>, <fpage>271</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1080/00071669108417350</pub-id><pub-id pub-id-type="pmid">1868368</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abourachid</surname> <given-names>A</given-names></name></person-group> (<year>2001</year>). <article-title>Kinematic parameters of terrestrial locomotion in cursorial (ratites), swimming (ducks), and striding birds (quail and guinea fowl)</article-title>. <source>Comp. Biochem. Physiol. A Mol. Integr. Physiol.</source> <volume>131</volume>, <fpage>113</fpage>&#x02013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.1016/S1095-6433(01)00471-8</pub-id><pub-id pub-id-type="pmid">11733170</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Andriluka</surname> <given-names>M.</given-names></name> <name><surname>Pishchulin</surname> <given-names>L.</given-names></name> <name><surname>Gehler</surname> <given-names>P.</given-names></name> <name><surname>Schiele</surname> <given-names>B.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;2D human pose estimation: new benchmark and state of the art analysis,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Columbus, OH</publisher-loc>), <fpage>3686</fpage>&#x02013;<lpage>3693</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2014.471</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bassler</surname> <given-names>A. W.</given-names></name> <name><surname>Arnould</surname> <given-names>C.</given-names></name> <name><surname>Butterworth</surname> <given-names>A.</given-names></name> <name><surname>Colin</surname> <given-names>L.</given-names></name> <name><surname>De Jong</surname> <given-names>I. C.</given-names></name> <name><surname>Ferrante</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Potential risk factors associated with contact dermatitis, lameness, negative emotional state, and fear of humans in broiler chicken flocks</article-title>. <source>Poult. Sci.</source> <volume>92</volume>, <fpage>2811</fpage>&#x02013;<lpage>2826</lpage>. <pub-id pub-id-type="doi">10.3382/ps.2013-03208</pub-id><pub-id pub-id-type="pmid">24135583</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bouwman</surname> <given-names>A.</given-names></name> <name><surname>Savchuk</surname> <given-names>A.</given-names></name> <name><surname>Abbaspourghomi</surname> <given-names>A.</given-names></name> <name><surname>Visser</surname> <given-names>B.</given-names></name></person-group> (<year>2020</year>). <article-title>Automated Step detection in inertial measurement unit data from turkeys</article-title>. <source>Front. Genet.</source> <volume>11</volume>:<fpage>207</fpage>. <pub-id pub-id-type="doi">10.3389/fgene.2020.00207</pub-id><pub-id pub-id-type="pmid">32265981</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bradshaw</surname> <given-names>R. H.</given-names></name> <name><surname>Kirkden</surname> <given-names>R. D.</given-names></name> <name><surname>Broom</surname> <given-names>D. M.</given-names></name></person-group> (<year>2002</year>). <article-title>A review of the aetiology and pathology of leg weakness in broilers in relation to their welfare</article-title>. <source>Avian Poult. Biol. Rev.</source> <volume>13</volume>, <fpage>45</fpage>&#x02013;<lpage>104</lpage>. <pub-id pub-id-type="doi">10.3184/147020602783698421</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>J.</given-names></name> <name><surname>Tang</surname> <given-names>H.</given-names></name> <name><surname>Fang</surname> <given-names>H. S.</given-names></name> <name><surname>Shen</surname> <given-names>X.</given-names></name> <name><surname>Lu</surname> <given-names>C.</given-names></name> <name><surname>Tai</surname> <given-names>Y. W.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Cross-domain adaptation for animal pose estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF International Conference on Computer Vision</source> Vol. <volume>1</volume> (<publisher-loc>Seoul</publisher-loc>), <fpage>9497</fpage>&#x02013;<lpage>9506</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2019.00959</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Caplen</surname> <given-names>G.</given-names></name> <name><surname>Hothersall</surname> <given-names>B.</given-names></name> <name><surname>Murrell</surname> <given-names>J. C.</given-names></name> <name><surname>Nicol</surname> <given-names>C. J.</given-names></name> <name><surname>Waterman-Pearson</surname> <given-names>A. E.</given-names></name> <name><surname>Weeks</surname> <given-names>C. A.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Kinematic analysis quantifies gait abnormalities associated with lameness in broiler chickens and identifies evolutionary gait differences</article-title>. <source>PLoS ONE</source> <volume>7</volume>:<fpage>e40800</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0040800</pub-id><pub-id pub-id-type="pmid">22815823</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cheng</surname> <given-names>B.</given-names></name> <name><surname>Xiao</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Shi</surname> <given-names>H.</given-names></name> <name><surname>Huang</surname> <given-names>T. S.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Higherhrnet: scale-aware representation learning for bottom-up human pose estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Seattle, WA</publisher-loc>), <fpage>5386</fpage>&#x02013;<lpage>5395</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>J.</given-names></name> <name><surname>Dong</surname> <given-names>W.</given-names></name> <name><surname>Socher</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>L.-J.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Fei-Fei</surname> <given-names>L.</given-names></name></person-group> (<year>2009</year>). <article-title>ImageNet: a large-scale hierarchical image database</article-title>. <source>IEEE Conf. Comp. Vis. Patt. Recogn.</source> (<publisher-loc>Miami, FL</publisher-loc>), <fpage>248</fpage>&#x02013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1109/cvpr.2009.5206848</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Erasmus</surname> <given-names>M. A</given-names></name></person-group> (<year>2018</year>). <article-title>Welfare issues in turkey production.</article-title> in <source>Advances in Poultry Welfare</source>, ed <person-group person-group-type="editor"><name><surname>Mench</surname> <given-names>J. A.</given-names></name></person-group> (<publisher-loc>Duxford</publisher-loc>: <publisher-name>Woodhead Publishing</publisher-name>), <fpage>263</fpage>&#x02013;<lpage>291</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-08-100915-4.00013-0</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garner</surname> <given-names>J. P.</given-names></name> <name><surname>Falcone</surname> <given-names>C.</given-names></name> <name><surname>Wakenell</surname> <given-names>P.</given-names></name> <name><surname>Martin</surname> <given-names>M.</given-names></name> <name><surname>Mench</surname> <given-names>J. A.</given-names></name></person-group> (<year>2002</year>). <article-title>Reliability and validity of a modified gait scoring system and its use in assessing tibial dyschondroplasia in broilers</article-title>. <source>Br. Poult. Sci.</source> <volume>43</volume>, <fpage>355</fpage>&#x02013;<lpage>363</lpage>. <pub-id pub-id-type="doi">10.1080/00071660120103620</pub-id><pub-id pub-id-type="pmid">12195794</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Graving</surname> <given-names>J. M.</given-names></name> <name><surname>Chae</surname> <given-names>D.</given-names></name> <name><surname>Naik</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Koger</surname> <given-names>B.</given-names></name> <name><surname>Costelloe</surname> <given-names>B. R.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Deepposekit, a software toolkit for fast and robust animal pose estimation using deep learning</article-title>. <source>Elife</source> <volume>8</volume>, <fpage>1</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.7554/eLife.47994</pub-id><pub-id pub-id-type="pmid">31570119</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Deep residual learning for image recognition,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Las Vegas, NV</publisher-loc>), <fpage>770</fpage>&#x02013;<lpage>778</lpage>.</citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Insafutdinov</surname> <given-names>E.</given-names></name> <name><surname>Pishchulin</surname> <given-names>L.</given-names></name> <name><surname>Andres</surname> <given-names>B.</given-names></name> <name><surname>Andriluka</surname> <given-names>M.</given-names></name> <name><surname>Schiele</surname> <given-names>B.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;DeeperCut: a deeper, stronger, and faster multi-person pose estimation model,&#x0201D;</article-title> in <source>European Conference on Computer Vision</source> (<publisher-loc>Amsterdam</publisher-loc>), <fpage>34</fpage>&#x02013;<lpage>50</lpage>.</citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kapell</surname> <given-names>D. N. R. G.</given-names></name> <name><surname>Hocking</surname> <given-names>P. M.</given-names></name> <name><surname>Glover</surname> <given-names>P. K.</given-names></name> <name><surname>Kremer</surname> <given-names>V. D.</given-names></name> <name><surname>Avenda&#x000F1;o</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Genetic basis of leg health and its relationship with body weight in purebred turkey lines</article-title>. <source>Poult. Sci.</source> <volume>96</volume>, <fpage>1553</fpage>&#x02013;<lpage>1562</lpage>. <pub-id pub-id-type="doi">10.3382/ps/pew479</pub-id><pub-id pub-id-type="pmid">28339774</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kestin</surname> <given-names>S. C.</given-names></name> <name><surname>Knowles</surname> <given-names>T. G.</given-names></name> <name><surname>Tinch</surname> <given-names>A. E.</given-names></name> <name><surname>Gregory</surname> <given-names>N. G.</given-names></name></person-group> (<year>1992</year>). <article-title>Prevalence of leg weakness in broiler chickens and its relationship with genotype</article-title>. <source>Vet. Rec.</source> <volume>131</volume>, <fpage>190</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1136/vr.131.9.190</pub-id><pub-id pub-id-type="pmid">1441174</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kittelsen</surname> <given-names>K. E.</given-names></name> <name><surname>David</surname> <given-names>B.</given-names></name> <name><surname>Moe</surname> <given-names>R. O.</given-names></name> <name><surname>Poulsen</surname> <given-names>H. D.</given-names></name> <name><surname>Young</surname> <given-names>J. F.</given-names></name> <name><surname>Granquist</surname> <given-names>E. G.</given-names></name></person-group> (<year>2017</year>). <article-title>Associations among gait score, production data, abattoir registrations, and postmortem tibia measurements in broiler chickens</article-title>. <source>Poult. Sci.</source> <volume>96</volume>, <fpage>1033</fpage>&#x02013;<lpage>1040</lpage>. <pub-id pub-id-type="doi">10.3382/ps/pew433</pub-id><pub-id pub-id-type="pmid">27965410</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kremer</surname> <given-names>J. A.</given-names></name> <name><surname>Robison</surname> <given-names>C. I.</given-names></name> <name><surname>Karcher</surname> <given-names>D. M.</given-names></name></person-group> (<year>2018</year>). <article-title>Growth dependent changes in pressure sensing walkway data for Turkeys</article-title>. <source>Front. Vet. Sci.</source> <volume>5</volume>:<fpage>241</fpage>. <pub-id pub-id-type="doi">10.3389/fvets.2018.00241</pub-id><pub-id pub-id-type="pmid">30356777</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Labuguen</surname> <given-names>R.</given-names></name> <name><surname>Matsumoto</surname> <given-names>J.</given-names></name> <name><surname>Negrete</surname> <given-names>S. B.</given-names></name> <name><surname>Nishimaru</surname> <given-names>H.</given-names></name> <name><surname>Nishijo</surname> <given-names>H.</given-names></name> <name><surname>Takada</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>MacaquePose: a novel &#x0201C;in the wild&#x0201D; macaque monkey pose dataset for markerless motion capture</article-title>. <source>Front. Behav. Neurosci.</source> <volume>14</volume>:<fpage>581154</fpage>. <pub-id pub-id-type="doi">10.3389/fnbeh.2020.581154</pub-id><pub-id pub-id-type="pmid">33584214</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lecun</surname> <given-names>Y.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Deep learning</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>436</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1038/nature14539</pub-id><pub-id pub-id-type="pmid">26017442</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>T. Y.</given-names></name> <name><surname>Maire</surname> <given-names>M.</given-names></name> <name><surname>Belongie</surname> <given-names>S.</given-names></name> <name><surname>Hays</surname> <given-names>J.</given-names></name> <name><surname>Perona</surname> <given-names>P.</given-names></name> <name><surname>Ramanan</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>&#x0201C;Microsoft COCO: common objects in context,&#x0201D;</article-title> in <source>European Conference on Computer Vision</source> (<publisher-loc>Zurich</publisher-loc>), <fpage>740</fpage>&#x02013;<lpage>755</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-10602-1_48</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malchow</surname> <given-names>J.</given-names></name> <name><surname>Dudde</surname> <given-names>A.</given-names></name> <name><surname>Berk</surname> <given-names>J.</given-names></name> <name><surname>Krause</surname> <given-names>E. T.</given-names></name> <name><surname>Sanders</surname> <given-names>O.</given-names></name> <name><surname>Puppe</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Is the rotarod test an objective alternative to the gait score for evaluating walking ability in chickens?</article-title> <source>Anim. Welf.</source> <volume>28</volume>, <fpage>261</fpage>&#x02013;<lpage>269</lpage>. <pub-id pub-id-type="doi">10.7120/109627286.28.3.261</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathis</surname> <given-names>A.</given-names></name> <name><surname>Mamidanna</surname> <given-names>P.</given-names></name> <name><surname>Cury</surname> <given-names>K. M.</given-names></name> <name><surname>Abe</surname> <given-names>T.</given-names></name> <name><surname>Murthy</surname> <given-names>V. N.</given-names></name> <name><surname>Mathis</surname> <given-names>M. W.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>DeepLabCut: markerless pose estimation of user-defined body parts with deep learning</article-title>. <source>Nat. Neurosci.</source> <volume>21</volume>, <fpage>1281</fpage>&#x02013;<lpage>1289</lpage>. <pub-id pub-id-type="doi">10.1038/s41593-018-0209-y</pub-id><pub-id pub-id-type="pmid">30127430</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathis</surname> <given-names>A.</given-names></name> <name><surname>Schneider</surname> <given-names>S.</given-names></name> <name><surname>Lauer</surname> <given-names>J.</given-names></name> <name><surname>Mathis</surname> <given-names>M. W.</given-names></name></person-group> (<year>2020</year>). <article-title>A primer on motion capture with deep learning: principles, pitfalls, and perspectives</article-title>. <source>Neuron</source> <volume>108</volume>, <fpage>44</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2020.09.017</pub-id><pub-id pub-id-type="pmid">33294876</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mathis</surname> <given-names>A.</given-names></name> <name><surname>Y&#x000FC;ksekg&#x000F6;n&#x000FC;l</surname> <given-names>M.</given-names></name> <name><surname>Rogers</surname> <given-names>B.</given-names></name> <name><surname>Bethge</surname> <given-names>M.</given-names></name> <name><surname>Mathis</surname> <given-names>M. W.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Pretraining boosts out-of-domain robustness for pose estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</source> (<publisher-loc>Waikoloa, HI</publisher-loc>), <fpage>1859</fpage>&#x02013;<lpage>1868</lpage>.</citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>N&#x000E4;&#x000E4;s</surname> <given-names>I. D. A.</given-names></name> <name><surname>de Lima Almeida Paz</surname> <given-names>I. C.</given-names></name> <name><surname>Baracho</surname> <given-names>M. d. S.</given-names></name> <name><surname>de Menezes</surname> <given-names>A. G.</given-names></name> <name><surname>de Lima</surname> <given-names>K. A. O.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>Assessing locomotion deficiency in broiler chicken</article-title>. <source>Sci. Agric.</source> <volume>67</volume>, <fpage>129</fpage>&#x02013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1590/S0103-90162010000200001</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nath</surname> <given-names>T.</given-names></name> <name><surname>Mathis</surname> <given-names>A.</given-names></name> <name><surname>Chen</surname> <given-names>A. C.</given-names></name> <name><surname>Patel</surname> <given-names>A.</given-names></name> <name><surname>Bethge</surname> <given-names>M.</given-names></name> <name><surname>Mathis</surname> <given-names>M. W.</given-names></name></person-group> (<year>2019</year>). <article-title>Using DeepLabCut for 3D markerless pose estimation across species and behaviors</article-title>. <source>Nat. Protoc.</source> <volume>14</volume>, <fpage>2152</fpage>&#x02013;<lpage>2176</lpage>. <pub-id pub-id-type="doi">10.1038/s41596-019-0176-0</pub-id><pub-id pub-id-type="pmid">31227823</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oviedo-Rond&#x000F3;n</surname> <given-names>E. O.</given-names></name> <name><surname>Lascelles</surname> <given-names>B. D. X.</given-names></name> <name><surname>Arellano</surname> <given-names>C.</given-names></name> <name><surname>Mente</surname> <given-names>P. L.</given-names></name> <name><surname>Eusebio-Balcazar</surname> <given-names>P.</given-names></name> <name><surname>Grimes</surname> <given-names>J. L.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Gait parameters in four strains of turkeys and correlations with bone strength</article-title>. <source>Poult. Sci.</source> <volume>96</volume>, <fpage>1989</fpage>&#x02013;<lpage>2005</lpage>. <pub-id pub-id-type="doi">10.3382/ps/pew502</pub-id><pub-id pub-id-type="pmid">28204753</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paxton</surname> <given-names>H.</given-names></name> <name><surname>Daley</surname> <given-names>M. A.</given-names></name> <name><surname>Corr</surname> <given-names>S. A.</given-names></name> <name><surname>Hutchinson</surname> <given-names>J. R.</given-names></name></person-group> (<year>2013</year>). <article-title>The gait dynamics of the modern broiler chicken: a cautionary tale of selective breeding</article-title>. <source>J. Exp. Biol.</source> <volume>216</volume>, <fpage>3237</fpage>&#x02013;<lpage>3248</lpage>. <pub-id pub-id-type="doi">10.1242/jeb.080309</pub-id><pub-id pub-id-type="pmid">23685968</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>T. D.</given-names></name> <name><surname>Aldarondo</surname> <given-names>D. E.</given-names></name> <name><surname>Willmore</surname> <given-names>L.</given-names></name> <name><surname>Kislin</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>S. S. H.</given-names></name> <name><surname>Murthy</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Fast animal pose estimation using deep neural networks</article-title>. <source>Nat. Methods</source> <volume>16</volume>, <fpage>117</fpage>&#x02013;<lpage>125</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-018-0234-5</pub-id><pub-id pub-id-type="pmid">30573820</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quinton</surname> <given-names>C. D.</given-names></name> <name><surname>Wood</surname> <given-names>B. J.</given-names></name> <name><surname>Miller</surname> <given-names>S. P.</given-names></name></person-group> (<year>2011</year>). <article-title>Genetic analysis of survival and fitness in turkeys with multiple-trait animal models</article-title>. <source>Poult. Sci.</source> <volume>90</volume>, <fpage>2479</fpage>&#x02013;<lpage>2486</lpage>. <pub-id pub-id-type="doi">10.3382/ps.2011-01604</pub-id><pub-id pub-id-type="pmid">22010232</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sanakoyeu</surname> <given-names>A.</given-names></name> <name><surname>Khalidov</surname> <given-names>V.</given-names></name> <name><surname>McCarthy</surname> <given-names>M. S.</given-names></name> <name><surname>Vedaldi</surname> <given-names>A.</given-names></name> <name><surname>Neverova</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Transferring dense pose to proximal animal classes,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Seattle, WA</publisher-loc>), <fpage>5233</fpage>&#x02013;<lpage>5242</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR42600.2020.00528</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><collab>Scientific Committee on Animal Health and Animal Welfare</collab></person-group> (<year>2000</year>). <source>The Welfare of Chickens Kept for Meat Production (Broilers)</source>. <publisher-name>European Commission</publisher-name>.</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sharafeldin</surname> <given-names>T. A.</given-names></name> <name><surname>Mor</surname> <given-names>S. K.</given-names></name> <name><surname>Bekele</surname> <given-names>A. Z.</given-names></name> <name><surname>Verma</surname> <given-names>H.</given-names></name> <name><surname>Noll</surname> <given-names>S. L.</given-names></name> <name><surname>Goyal</surname> <given-names>S. M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Experimentally induced lameness in turkeys inoculated with a newly emergent turkey reovirus</article-title>. <source>Vet. Res.</source> <volume>46</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1186/s13567-015-0144-9</pub-id><pub-id pub-id-type="pmid">25828424</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevenson</surname> <given-names>R.</given-names></name> <name><surname>Dalton</surname> <given-names>H. A.</given-names></name> <name><surname>Erasmus</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Validity of micro-data loggers to determine walking activity of turkeys and effects on turkey gait</article-title>. <source>Front. Vet. Sci.</source> <volume>5</volume>:<fpage>319</fpage>. <pub-id pub-id-type="doi">10.3389/fvets.2018.00319</pub-id><pub-id pub-id-type="pmid">30766875</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sullivan</surname> <given-names>T. W</given-names></name></person-group> (<year>1994</year>). <article-title>Skeletal problems in poultry: estimated annual cost and descriptions</article-title>. <source>Poult. Sci.</source> <volume>73</volume>, <fpage>879</fpage>&#x02013;<lpage>882</lpage>. <pub-id pub-id-type="doi">10.3382/ps.0730879</pub-id><pub-id pub-id-type="pmid">8072932</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>C.</given-names></name> <name><surname>Shrivastava</surname> <given-names>A.</given-names></name> <name><surname>Singh</surname> <given-names>S.</given-names></name> <name><surname>Gupta</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Revisiting unreasonable effectiveness of data in deep learning era,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Venice</publisher-loc>), <fpage>843</fpage>&#x02013;<lpage>852</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2017.97</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>K.</given-names></name> <name><surname>Xiao</surname> <given-names>B.</given-names></name> <name><surname>Liu</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Deep high-resolution representation learning for human pose estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Long Beach, CA</publisher-loc>), <fpage>5686</fpage>&#x02013;<lpage>5696</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Toshev</surname> <given-names>A.</given-names></name> <name><surname>Szegedy</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;DeepPose: human pose estimation via deep neural networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Columbus, OH</publisher-loc>), <fpage>1653</fpage>&#x02013;<lpage>1660</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2014.214</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Van Horne</surname> <given-names>P. L. M</given-names></name></person-group> (<year>2020</year>). <source>Economics of Broiler Production Systems in the Netherlands: Economic Aspects Within the Greenwell Sustainability Assessment Model</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.wur.nl/en/show/Report-Economics-of-broiler-production-systems-in-the-Netherlands.htm">https://www.wur.nl/en/show/Report-Economics-of-broiler-production-systems-in-the-Netherlands.htm</ext-link> (accessed September 11, 2021).</citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Staaveren</surname> <given-names>N.</given-names></name> <name><surname>Leishman</surname> <given-names>E. M.</given-names></name> <name><surname>Wood</surname> <given-names>B. J.</given-names></name> <name><surname>Harlander-Matauschek</surname> <given-names>A.</given-names></name> <name><surname>Baes</surname> <given-names>C. F.</given-names></name></person-group> (<year>2020</year>). <article-title>Farmers&#x00027; perceptions about health and welfare issues in turkey production</article-title>. <source>Front. Vet. Sci.</source> <volume>7</volume>:<fpage>332</fpage>. <pub-id pub-id-type="doi">10.3389/fvets.2020.00332</pub-id><pub-id pub-id-type="pmid">32596273</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vermette</surname> <given-names>C.</given-names></name> <name><surname>Schwean-Lardner</surname> <given-names>K.</given-names></name> <name><surname>Gomis</surname> <given-names>S.</given-names></name> <name><surname>Grahn</surname> <given-names>B. H.</given-names></name> <name><surname>Crowe</surname> <given-names>T. G.</given-names></name> <name><surname>Classen</surname> <given-names>H. L.</given-names></name></person-group> (<year>2016</year>). <article-title>The impact of graded levels of day length on turkey health and behavior to 18 weeks of age</article-title>. <source>Poult. Sci.</source> <volume>95</volume>, <fpage>1223</fpage>&#x02013;<lpage>1237</lpage>. <pub-id pub-id-type="doi">10.3382/ps/pew078</pub-id><pub-id pub-id-type="pmid">26994194</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vestergaard</surname> <given-names>K. S.</given-names></name> <name><surname>Sanotra</surname> <given-names>G. S.</given-names></name></person-group> (<year>1999</year>). <article-title>Relationships between leg disorders and changes in the behaviour of broiler chickens</article-title>. <source>Vet. Rec.</source> <volume>144</volume>, <fpage>205</fpage>&#x02013;<lpage>210</lpage>.</citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weeks</surname> <given-names>C. A.</given-names></name> <name><surname>Danbury</surname> <given-names>T. D.</given-names></name> <name><surname>Davies</surname> <given-names>H. C.</given-names></name> <name><surname>Hunt</surname> <given-names>P.</given-names></name> <name><surname>Kestin</surname> <given-names>S. C.</given-names></name></person-group> (<year>2000</year>). <article-title>The behaviour of broiler chickens and its modification by lameness</article-title>. <source>Appl. Anim. Behav. Sci.</source> <volume>67</volume>, <fpage>111</fpage>&#x02013;<lpage>125</lpage>. <pub-id pub-id-type="doi">10.1016/S0168-1591(99)00102-1</pub-id><pub-id pub-id-type="pmid">10719194</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wood</surname> <given-names>B. J</given-names></name></person-group> (<year>2009</year>). <article-title>Calculating economic values for turkeys using a deterministic production model</article-title>. <source>Can. J. Anim. Sci.</source> <volume>89</volume>, <fpage>201</fpage>&#x02013;<lpage>213</lpage>. <pub-id pub-id-type="doi">10.4141/CJAS08105</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yosinski</surname> <given-names>J.</given-names></name> <name><surname>Clune</surname> <given-names>J.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Lipson</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;How transferable are features in deep neural networks?&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 27</source> (<publisher-loc>Montreal, QC</publisher-loc>), <fpage>3320</fpage>&#x02013;<lpage>3328</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://proceedings.neurips.cc/paper/2014/file/375c71349b295fbe2dcdca9206f20a06-Paper.pdf">https://proceedings.neurips.cc/paper/2014/file/375c71349b295fbe2dcdca9206f20a06-Paper.pdf</ext-link></citation>
</ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>J.-Y.</given-names></name> <name><surname>Park</surname> <given-names>T.</given-names></name> <name><surname>Isola</surname> <given-names>P.</given-names></name> <name><surname>Efros</surname> <given-names>A. A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Unpaired image-to-image translation using cycle-consistent adversarial networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Venice</publisher-loc>), <fpage>2223</fpage>&#x02013;<lpage>2232</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>