<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Phys.</journal-id>
<journal-title>Frontiers in Physics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Phys.</abbrev-journal-title>
<issn pub-type="epub">2296-424X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1108393</article-id>
<article-id pub-id-type="doi">10.3389/fphy.2023.1108393</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Physics</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Evaluation of edge detection algorithm of frontal image of facial contour in plastic surgery</article-title>
<alt-title alt-title-type="left-running-head">Yang</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fphy.2023.1108393">10.3389/fphy.2023.1108393</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Yang</surname>
<given-names>Chunxia</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2115092/overview"/>
</contrib>
</contrib-group>
<aff>
<institution>Plastic and Cosmetic Surgery</institution>, <institution>Sanya Traditional Chinese Medicine Hospital</institution>, <addr-line>Sanya</addr-line>, <addr-line>Hainan</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1605897/overview">Amrit Mukherjee</ext-link>, University of South Bohemia in &#x10c;esk&#xe9; Bud&#x11b;jovice, Czechia</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1814578/overview">Jiapeng Dai</ext-link>, Nanjing University, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1929905/overview">Chen Lifeng</ext-link>, Zhejiang University, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Chunxia Yang, <email>160210128@stu.cuz.edu.cn</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Quantum Engineering and Technology, a section of the journal Frontiers in Physics</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>01</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>11</volume>
<elocation-id>1108393</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>11</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>01</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Yang.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Yang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>With the advancement of medical technology and the continuous rise in living standards, public demand for cosmetic improvement keeps growing, and the plastic surgery industry has developed by leaps and bounds. Dissatisfaction with one&#x2019;s facial appearance, facial injuries, and other factors have prompted people to undergo facial reconstruction, and facial plastic surgery has developed rapidly. However, in current facial plastic surgery, the edge detection of contour images is only mediocre. To improve the edge detection of facial contour lines in medical images, this paper proposed a facial contour line generation algorithm. First, the detection effects of four operators were compared, and the Sobel operator was chosen to supply the input data of the edge detection algorithm. Then, the grayscale features of the tissue in the image and the symmetry of the image were used to perform bidirectional contour tracking on the detected image and extract facial contour lines. In addition, for facial contour features, the midpoint method can be used to generate auxiliary contours. The algorithm was verified on a set of facial CT (Computed Tomography) images. The results showed that the new generation algorithm accelerated edge detection, had good denoising performance, and enhanced the edge detection effect by about 12.05% compared with the traditional edge detection algorithm. The validity and practicability of facial edge detection were verified, providing a theoretical basis for the further design of a facial contour digital image processing system.</p>
</abstract>
<kwd-group>
<kwd>image edge detection algorithm</kwd>
<kwd>image segmentation</kwd>
<kwd>plastic surgery</kwd>
<kwd>facial contour</kwd>
<kwd>evaluation of edge detection algorithm</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Facial plastic surgery has an important position as a major branch of plastic surgery. Due to the irregular contours of the human face, the correction of facial structural parameters, facial measurement, and structural analysis are both key and difficult points in plastic surgery. With the continuous development of face detection and recognition technology based on digital images, the detection of facial feature parameters has also begun to enter the medical field, and various studies have provided new ideas and solutions for problems such as facial image edge detection and image segmentation.</p>
<p>Scholars have already conducted considerable research on image edge detection algorithms. Luo C proposed an improved algorithm based on Canny edge detection and grayscale difference preprocessing to address the problems of traditional image edge detection algorithms: low efficiency and accuracy, poor anti-noise ability, and the need to set thresholds manually [<xref ref-type="bibr" rid="B1">1</xref>]. According to whether the dataset contains intuitive label features, Zhang Q divided current deep learning-based edge detection techniques into supervised and unsupervised image edge detection methods [<xref ref-type="bibr" rid="B2">2</xref>]. An edge identification technique based on the binary wavelet transform and morphological operators was suggested by Zhi-Bin H U; through the binary wavelet transform, accurate data extraction and recognition analysis of the frontal facial contour image can be realized [<xref ref-type="bibr" rid="B3">3</xref>]. To create a novel picture edge identification technique, Feng J multiplied the wavelet coefficients of adjacent scales and identified the local modulus maxima of the scale-product coefficients as edges; applying this technique to the edge detection of frontal facial contour images can effectively improve the accuracy of facial image detection [<xref ref-type="bibr" rid="B4">4</xref>]. On a board with a field-programmable gate array, Menaka R suggested a filter-based edge detection approach [<xref ref-type="bibr" rid="B5">5</xref>]. With contour detection at its core and the green-channel data of the eye image as input, Rujov F created fuzzy rules based on Mamdani (Type-2) theory [<xref ref-type="bibr" rid="B6">6</xref>]. Edge detection is essential for image segmentation. Xu Z studied a typical Canny-operator-based edge identification technique for medical images, in which a double-threshold detection approach enhances the Canny operator&#x2019;s edge detection. The process is simulated in MATLAB, and the mean square error and information entropy are used as two objective evaluation indices for the experimental results. The experiments demonstrated that the enhanced adaptive double-threshold Canny algorithm has better edge detection performance than the conventional Canny method: the image details are rich, the noise suppression effect is good, and there are fewer false edges [<xref ref-type="bibr" rid="B7">7</xref>]. The above research analyzed image edge detection algorithms.</p>
<p>Many scholars have studied image segmentation. Sami R developed fresh-keeping antibacterial technology by studying microscopic image segmentation and morphological characterization [<xref ref-type="bibr" rid="B8">8</xref>]. Suban I B used the level-set function to capture the boundaries between objects in the image, combined the lattice Boltzmann method with fuzzy clustering, and used the graphics processing unit for parallel processing to speed up image segmentation [<xref ref-type="bibr" rid="B9">9</xref>]. Li H A combined the AdaBoost algorithm with the Gabor texture analysis algorithm to segment images containing several faces, significantly lowering the false detection rate of face image segmentation [<xref ref-type="bibr" rid="B10">10</xref>]. Rangayya put forward a new face recognition model whose suggested method comprises four primary components: data collection, segmentation, feature extraction, and recognition [<xref ref-type="bibr" rid="B11">11</xref>]. For face segmentation and 3D face reconstruction, Yin X suggested a new face mask removal model that can automatically remove all kinds of face masking, even blurred masking; the suggested model includes a 3D face reconstruction module, a face segmentation module, and an image generation module [<xref ref-type="bibr" rid="B12">12</xref>]. The research of the above scholars has achieved fruitful progress in image segmentation.</p>
<p>Edge detection of medical images is the foundation of medical image processing, and the accuracy of the edge detection result directly affects how easily the image can be processed later. Due to the complexity and variety of medical images, there is currently no universal segmentation method; appropriate algorithms can only be designed according to specific applications and image characteristics. To improve the speed, accuracy, and contour segmentation effect of facial contour image segmentation in plastic surgery, this paper designs and studies the related algorithms.</p>
</sec>
<sec id="s2">
<title>2 Facial image preprocessing</title>
<sec id="s2-1">
<title>2.1 Preprocessing process</title>
<p>From an information-theoretic point of view, the best results are obtained without preprocessing, since preprocessing reduces the amount of information in the image [<xref ref-type="bibr" rid="B13">13</xref>]. Preprocessing refers to the preparation carried out before final processing and improvement, and it is interpreted differently in different industries and fields. In image processing, preprocessing prepares the input image for feature extraction, segmentation, and matching. Therefore, work should focus on acquiring high-quality image data when conditions permit. Preprocessing is not without value, however: it can suppress image distortion and strengthen the image features required for subsequent processing [<xref ref-type="bibr" rid="B14">14</xref>].</p>
<p>Grayscale transformation maps each pixel&#x2019;s gray value to a new one. If the gray levels of an image are concentrated in a narrow range, a grayscale transformation can stretch them over a wider dynamic range, increasing contrast. The new gray value of a pixel is obtained through coordinate transformation and grayscale interpolation within its neighborhood.</p>
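<p>As a minimal illustration of stretching a narrow gray range (a sketch, not from the paper, written in Python/NumPy rather than the MATLAB environment the paper uses later):</p>

```python
import numpy as np

def stretch_gray(img, lo, hi):
    """Linearly map the gray range [lo, hi] onto the full 8-bit range [0, 255]."""
    g = img.astype(np.float64)
    out = (g - lo) / (hi - lo) * 255.0   # linear grayscale transformation
    return np.clip(out, 0, 255).astype(np.uint8)
```

<p>For example, an image whose gray values all lie between 50 and 100 is remapped so that 50 becomes 0 and 100 becomes 255, using the full dynamic range.</p>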
<p>Local preprocessing computes new pixel gray values from small neighborhoods. Depending on the goal, it can be split into two groups: image smoothing and image enhancement. Image grayscale transformation converts the gray value of each point in the original image into another gray value according to a specific mapping function. Smoothing is low-pass filtering, since its goal is to reduce noise, which is typically high frequency. However, because the spatial distribution of the image grayscale function also contains meaningful high-frequency components (such as edges, corners, and lines), smoothing eliminates part of this high-frequency information as well. Conversely, image enhancement aims to emphasize image details (i.e., edges, lines, and corners), that is, to amplify the high-frequency parts of the image, but this also amplifies image noise. Neither simple smoothing nor simple enhancement is ideal, and this article looks for a way to balance the two.</p>
</sec>
<sec id="s2-2">
<title>2.2 Image reading</title>
<p>The images use the DICOM standard. The National Electrical Manufacturers Association (NEMA) and the American College of Radiology (ACR) jointly established the DICOM (Digital Imaging and Communications in Medicine) standard, which is used in digital medical imaging and communications [<xref ref-type="bibr" rid="B15">15</xref>]. In the past, CT films were sent into the computer by scanning, photographing, and other means, and saved in bitmap format to obtain and display a two-dimensional image. However, when the original CT image is viewed in this way, part of the facial feature information is lost due to the characteristics of the CT image itself, and film scanning and data transfer easily lose a great deal of information, affecting subsequent operations. MATLAB 7 provides functions for reading DICOM images, which can read CT images directly and reduce the loss of information [<xref ref-type="bibr" rid="B16">16</xref>]. To this end, this paper uses MATLAB to program image operations such as CT image reading and display. Because CT images contain some useless information, they all need to be preprocessed. In the preprocessing operation, the multiplication operation shields the useless information and preserves the useful slice information.</p>
</sec>
<sec id="s2-3">
<title>2.3 Image enhancement</title>
<p>Due to the physical characteristics of X-rays, the gray distribution of muscle and soft tissue in medical CT images is narrow, and the contrast between the target and the background is low, which is not conducive to identification [<xref ref-type="bibr" rid="B17">17</xref>]. When the original CT image is viewed directly, it is easy to overlook detailed information, which hinders the extraction of facial contours. Histogram equalization is a grayscale transformation technique and a powerful means of improving image brightness [<xref ref-type="bibr" rid="B18">18</xref>]. The contrast of the image after histogram equalization is significantly enhanced. The histogram changes are shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Histogram change.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g001.tif"/>
</fig>
<p>The abscissa of the histogram represents 0&#x2013;255 gray levels, and the ordinate represents the number of pixels corresponding to this gray level. It can be seen from <xref ref-type="fig" rid="F1">Figure 1</xref> that the pixels with gray levels between 0 and 50 in the histogram before equalization account for the vast majority, and the pixels in the histogram after equalization are more evenly distributed in the entire gray range. In addition, the histogram equalization technique can increase the contrast around the maximum value of the histogram and reduce the contrast around the minimum value.</p>
<p>Histogram equalization is a convenient and efficient image enhancement technique. It alters the image histogram to change each pixel&#x2019;s gray level and is mostly used to boost the dynamic range and contrast of an image. Because the gray distribution is concentrated in a small interval, the original image may not be clearly defined: in an overexposed photograph, the gray levels are concentrated in the high-brightness range, whereas in an underexposed one they are concentrated in the low-brightness range. Histogram equalization increases the dynamic range of the gray-value differences between pixels and improves the overall contrast of the image by converting the original histogram into an evenly distributed, equalized form.</p>
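<p>The equalization described above can be sketched in a few lines. This is an illustrative Python/NumPy version (the paper's own experiments use MATLAB), assuming an 8-bit grayscale input; it builds the cumulative distribution and uses it as a lookup table:</p>

```python
import numpy as np

def equalize_hist(img):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-gray-level pixel counts
    cdf = np.cumsum(hist).astype(np.float64)         # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # CDF at the first occupied level
    # Map gray levels so the output histogram is approximately uniform over 0-255.
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]
```

<p>Applied to a low-contrast image whose gray values occupy only a narrow band, the output spans the full 0&#x2013;255 range, matching the spread-out histogram shown after equalization in Figure 1.</p>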
</sec>
</sec>
<sec id="s3">
<title>3 Construction of edge detection algorithm</title>
<sec id="s3-1">
<title>3.1 Edge detection</title>
<p>After acquiring the enhanced image, image edge detection is required. In practical applications, the edge contains most of the information in the image, and the information of the facial contour that needs to be found is contained in the edge of the image. Therefore, extracting image edges reliably and effectively is a preprocessing measure that must be taken. <xref ref-type="fig" rid="F2">Figure 2</xref> shows some typical edge profiles.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Typical edge profile.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g002.tif"/>
</fig>
<p>Edges are defined as vectors: at each point of the image function, the edge magnitude is the magnitude of the gradient, and the edge direction is perpendicular to the gradient direction. The edge of an image is the part of the image where the brightness changes significantly within a local area. The gray-level profile of such an area can generally be seen as a step, that is, a sharp change within a small buffer zone from one gray value to another, very different, gray value. Edge detection therefore involves two tasks: locating the position of the edge and measuring its magnitude. In an area of uniform gray level, the edge measure is zero or approximately zero; wherever the gray level of the image changes, there is an edge. In this paper, the gradient of the image function is used to detect changes in the image function and thereby detect edges. Gradient-based edge detection operators include the Robert operator, the Laplace operator, the Sobel operator, and the LoG operator based on the zero-crossings of the second derivative.</p>
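<p>The gradient-based view of edges sketched above can be made concrete. The following Python/NumPy fragment (illustrative only, not the paper's MATLAB code) computes the edge measure as the gradient magnitude via central differences, together with the gradient direction to which the edge is perpendicular:</p>

```python
import numpy as np

def gradient_edges(img):
    """Edge magnitude and gradient direction via central differences."""
    gy, gx = np.gradient(img.astype(np.float64))  # derivatives along rows, columns
    mag = np.hypot(gx, gy)       # edge measure = gradient magnitude
    ang = np.arctan2(gy, gx)     # gradient direction; the edge runs perpendicular
    return mag, ang
```

<p>On a linear ramp whose gray level rises by one per column, the magnitude is 1 everywhere and the gradient points along the columns, as expected for a region of uniform change.</p>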
</sec>
<sec id="s3-2">
<title>3.2 Comparison of four operators and their effects</title>
<p>
<list list-type="simple">
<list-item>
<p>1) The Robert operator uses only the <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> neighborhood of the current pixel, which is very easy to calculate. Its mask is defined as:</p>
</list-item>
</list>
<disp-formula id="e1">
<mml:math id="m2">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mtext>&#x2003;</mml:mtext>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>
</p>
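<p>The two masks of Eq. (1) can be applied by differencing along the two diagonals. A minimal Python/NumPy sketch (illustrative; the paper itself works in MATLAB):</p>

```python
import numpy as np

def roberts_edges(img):
    """Edge magnitude from the 2x2 Roberts cross masks h1, h2 of Eq. (1)."""
    g = img.astype(np.float64)
    d1 = g[:-1, :-1] - g[1:, 1:]   # h1 = [[1, 0], [0, -1]]: main-diagonal difference
    d2 = g[:-1, 1:] - g[1:, :-1]   # h2 = [[0, 1], [-1, 0]]: anti-diagonal difference
    return np.hypot(d1, d2)        # output is one pixel smaller along each axis
```

<p>A uniform image produces zero response everywhere, while a gray-level step produces a strong response along the step boundary.</p>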
<p>This operator is also known as the cross operator because it approximates the first-order partial derivatives using cross differences around the current pixel. Because it uses such a small neighborhood, it is very sensitive to noise [<xref ref-type="bibr" rid="B19">19</xref>]. Additionally, because of its even size, the operator&#x2019;s neighborhood is asymmetric about the current pixel, which produces an unsatisfactory result. 2) Laplace operator.</p>
<p>The Laplace operator approximates the second-order partial derivative of the image function at the current pixel, and its size is <inline-formula id="inf2">
<mml:math id="m3">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and has a center point, so its result is better than that of the Robert operator. The Laplace operator is the second-order differential of the grayscale function and is defined as follows:<disp-formula id="e2">
<mml:math id="m4">
<mml:mrow>
<mml:msup>
<mml:mo>&#x2207;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mo>&#x2202;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mo>&#x2202;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>
</p>
<p>It is discretized, as shown in Formula (3):<disp-formula id="e3">
<mml:math id="m5">
<mml:mrow>
<mml:msup>
<mml:mo>&#x2207;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="[" close="]" separators="|">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
</p>
<p>In the formula <inline-formula id="inf3">
<mml:math id="m6">
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the grayscale image.</p>
<p>Collecting the coefficients of this discretization gives the corresponding convolution mask:<disp-formula id="e4">
<mml:math id="m7">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
<p>The diagonal direction is added to the definition, resulting in the following:<disp-formula id="e5">
<mml:math id="m8">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>Y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>8</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>
</p>
<p>Since the Laplace operator calculates the second derivative of the image function, it is very sensitive to noise and has a double response to some edges in the image.</p>
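<p>The discretization of Eq. (3), equivalently the 4-neighbor mask of Eq. (4), can be sketched as follows in Python/NumPy (an illustration, not the paper's implementation; borders are handled here by replication, an assumption not specified in the text):</p>

```python
import numpy as np

def laplacian4(img):
    """Discrete Laplacian of Eq. (3): sum of the 4 neighbors minus 4x the center."""
    g = np.pad(img.astype(np.float64), 1, mode="edge")  # replicate the borders
    return (g[2:, 1:-1] + g[:-2, 1:-1]        # g(x+1, y) + g(x-1, y)
            + g[1:-1, 2:] + g[1:-1, :-2]      # g(x, y+1) + g(x, y-1)
            - 4.0 * g[1:-1, 1:-1])            # - 4 g(x, y)
```

<p>Consistent with a second derivative, the response vanishes on the interior of a linear gray-level ramp and is strong on an isolated bright pixel, which also shows the operator&#x2019;s sensitivity to point noise.</p>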
</sec>
<sec id="s3-3">
<title>3.3 Sobel operator</title>
<p>The Sobel operator uses a convolution mask of size <inline-formula id="inf4">
<mml:math id="m9">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. In the image area <inline-formula id="inf5">
<mml:math id="m10">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, the difference between the third row and the first row approximates the image&#x2019;s first-order partial derivative across the rows, and the transposed mask approximates the derivative across the columns; the masks are defined as:<disp-formula id="e6">
<mml:math id="m11">
<mml:mrow>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mtext>&#x2003;</mml:mtext>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>
</p>
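<p>Applying the two Sobel masks of Eq. (6) and combining their responses into an edge magnitude can be sketched as below (Python/NumPy, illustrative only; replicated borders are an assumption not stated in the text):</p>

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from the two 3x3 Sobel masks h1, h2 of Eq. (6)."""
    h1 = np.array([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
    h2 = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    g = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                     # correlate the image with each mask
        for j in range(3):
            patch = g[i:i + h, j:j + w]
            gx += h1[i, j] * patch         # response across the rows
            gy += h2[i, j] * patch         # response across the columns
    return np.hypot(gx, gy)
```

<p>A uniform image yields zero response, and a vertical gray-level step yields a strong response only near the step, illustrating how the weighted center row/column of the Sobel masks gives some built-in smoothing compared with the Robert operator.</p>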
</sec>
<sec id="s3-4">
<title>3.4 LoG operator</title>
<p>Let <inline-formula id="inf6">
<mml:math id="m12">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> be the image grayscale function, and consider its one-dimensional profile along a line through the neighborhood of a point <inline-formula id="inf7">
<mml:math id="m13">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>; then <inline-formula id="inf8">
<mml:math id="m14">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf9">
<mml:math id="m15">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2033;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are its first and second derivatives. By comparing the images, it can be found that the inflection point <inline-formula id="inf10">
<mml:math id="m16">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> corresponds to the extreme point <inline-formula id="inf11">
<mml:math id="m17">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and the zero-crossing point <inline-formula id="inf12">
<mml:math id="m18">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2033;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> at the same time. It can be considered that the inflection point <inline-formula id="inf13">
<mml:math id="m19">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, the extreme value point <inline-formula id="inf14">
<mml:math id="m20">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, and the zero-crossing point of <inline-formula id="inf15">
<mml:math id="m21">
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mo>&#x2033;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> all mark the center point of the edge. Therefore, the center position of the edge can be determined by searching for the zero-crossing points.</p>
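<p>The correspondence above can be checked numerically. The following sketch (not from the paper; the smoothed step edge g(x) = tanh(x) and the sampling grid are illustrative assumptions) locates the extreme point of the first derivative and the zero crossing of the second derivative of a 1-D edge profile:</p>

```python
import numpy as np

# Model a smoothed step edge g(x) = tanh(x); its inflection point is at x = 0.
# An even number of samples keeps x = 0 off the grid, so the zero crossing
# is detected between two samples rather than landing exactly on one.
x = np.linspace(-5.0, 5.0, 1000)
g = np.tanh(x)

# First and second derivatives by finite differences.
g1 = np.gradient(g, x)
g2 = np.gradient(g1, x)

# Extreme point of g'(x): location of the maximum gradient magnitude.
x_extremum = x[np.argmax(np.abs(g1))]

# Zero crossing of g''(x): a sign change between neighbouring samples.
crossings = x[:-1][g2[:-1] * g2[1:] < 0]
```

Both estimates land at the edge centre x = 0, up to the grid resolution.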
<p>The definition of the Gaussian filter is shown in Formula (7):<disp-formula id="e7">
<mml:math id="m22">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>
</p>
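<p>Formula (7) can be sampled directly to obtain a discrete smoothing kernel. A minimal sketch (the kernel radius and the choice sigma = 1 are illustrative assumptions; the discrete weights are renormalized to sum to 1):</p>

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Sample G(x, y) of Formula (7) on a (2*radius+1)^2 grid and normalize."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # renormalize so the discrete weights sum to 1

k = gaussian_kernel(sigma=1.0, radius=2)
```

The resulting 5 × 5 kernel is symmetric, peaks at the centre, and sums to 1, as a smoothing filter should.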
<p>Among them <inline-formula id="inf16">
<mml:math id="m23">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the only parameter of the Gaussian filter; it determines the size of the neighborhood over which the filter acts [<xref ref-type="bibr" rid="B20">20</xref>]. First, the original image is smoothed with the Gaussian filter. The Laplacian operator is then applied to the smoothed image. Next, the zero-crossing points of the result are located to determine the edge positions in the image. Finally, the edge strength is obtained by calculating the gradient magnitude of the original image at each edge position. The process is shown in Formula (8):<disp-formula id="e8">
<mml:math id="m24">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mo>&#x2207;</mml:mo>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x2217;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
<disp-formula id="e9">
<mml:math id="m25">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>
</p>
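<p>The pipeline of Formulas (8) and (9) can be sketched as follows (an illustrative numpy version, not the paper's implementation; the synthetic step-edge image, sigma = 1, and the 5 × 5 sampling radius are assumptions):</p>

```python
import numpy as np

def conv2(img, kernel):
    """'Same'-size 2-D convolution using explicit shifts and replicate padding."""
    kr, kc = kernel.shape
    pr, pc = kr // 2, kc // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kr):
        for j in range(kc):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Step 1: Gaussian smoothing, G(x, y) sampled with sigma = 1 on a 5x5 grid.
ax = np.arange(-2, 3)
gauss = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / 2.0)
gauss /= gauss.sum()

# Step 2: Laplacian of the smoothed image (Formula (8)).
laplace = np.array([[0.0, 1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0, 1.0, 0.0]])

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # synthetic vertical step edge
f = conv2(conv2(img, gauss), laplace)  # f(x, y) of Formula (8)

# Step 3: edge positions are the zero crossings of f (Formula (9)).
row = f[16]
zc_cols = np.where(row[:-1] * row[1:] < 0)[0]
```

On the synthetic step edge, the only zero crossing of f along a row lies between the two columns that straddle the edge.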
<p>The LoG operator is discretized to obtain a mask of <inline-formula id="inf17">
<mml:math id="m26">
<mml:mrow>
<mml:mn>5</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, as shown in Formula (10):<disp-formula id="e10">
<mml:math id="m27">
<mml:mrow>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mtext>&#x200a;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mtext>&#x200a;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>16</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mtext>&#x200a;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mtext>&#x200a;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mtext>&#x2003;</mml:mtext>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>
</p>
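<p>The mask of Formula (10) can be written out and sanity-checked directly; a brief sketch (the checks are generic properties of a Laplacian-type mask, not from the paper):</p>

```python
import numpy as np

# The discretized 5x5 LoG mask of Formula (10).
log_mask = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0, -1, -2, -1,  0],
    [-1, -2, 16, -2, -1],
    [ 0, -1, -2, -1,  0],
    [ 0,  0, -1,  0,  0],
])

# A well-formed Laplacian-type mask is symmetric about its centre and its
# weights sum to zero, so flat image regions produce no response.
print(log_mask.sum())  # → 0
```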
<p>Comparing the results, the first-derivative operators (the Roberts operator, the Sobel operator, and so on) give acceptable detection results, with little noise in the two output images, whereas the second-derivative operator (the Laplacian operator) performs less well, because higher-order derivatives are more sensitive to gray-level changes. The LoG operator has the strongest edge detection capability; however, measured against the demands of the subsequent segmentation algorithm, its detection effect is poor because it produces a large amount of extraneous data. After assessing the detection performance of each operator, this research ultimately adopts the Sobel operator as the foundation for algorithm optimization.</p>
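<p>Since the Sobel operator is chosen as the basis for optimization, its two 3 × 3 kernels and the gradient-magnitude computation can be sketched as follows (the synthetic test image is an illustrative assumption):</p>

```python
import numpy as np

# Horizontal and vertical Sobel kernels.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) on interior pixels."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            shifted = img[i:h - 2 + i, j:w - 2 + j]
            gx[1:h - 1, 1:w - 1] += KX[i, j] * shifted
            gy[1:h - 1, 1:w - 1] += KY[i, j] * shifted
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # vertical step edge between columns 3 and 4
mag = sobel_magnitude(img)
```

The response peaks on the two columns straddling the step and vanishes in the flat regions, which is the behaviour the thresholding step in the next section relies on.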
</sec>
</sec>
<sec id="s4">
<title>4 Facial contour segmentation</title>
<sec id="s4-1">
<title>4.1 Facial contour generation</title>
<p>The new facial edge detection algorithm proposed in this paper ultimately feeds into contour tracking, through which accurate and clear facial contours can be obtained. In face CT image segmentation, an appropriate threshold is first set according to the gray-level characteristics of the facial soft tissue, and the edges of each CT image are detected to obtain a series of edge detection images. Then, the gradient magnitude of each pixel in the image is calculated, and the gray value of each pixel is set equal to the gradient magnitude at that point. Finally, the starting point of the contour in the edge-detected image is identified, and the facial edge is traced bidirectionally to obtain the contour of the facial soft tissue. Edges are a low-level image feature, and bidirectional detection of them improves the accuracy of facial contour detection. The generation steps of the entire facial contour line are shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Steps for generating facial contour lines.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g003.tif"/>
</fig>
</sec>
<sec id="s4-2">
<title>4.2 Bidirectional contour generation</title>
<p>After performing Sobel edge detection on the image, the edges of the tissue in the image are obtained. In view of these characteristics, this paper improves the contour tracking algorithm and realizes the bidirectional tracking of facial contours. The pixel position coordinates are shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Pixel position coordinates.</p>
</caption>
<table>
<tbody valign="top">
<tr>
<td align="center">
<inline-formula id="inf18">
<mml:math id="m28">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf19">
<mml:math id="m29">
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf20">
<mml:math id="m30">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="center">
<inline-formula id="inf21">
<mml:math id="m31">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf22">
<mml:math id="m32">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf23">
<mml:math id="m33">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="center">
<inline-formula id="inf24">
<mml:math id="m34">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf25">
<mml:math id="m35">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf26">
<mml:math id="m36">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="center">
<inline-formula id="inf27">
<mml:math id="m37">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf28">
<mml:math id="m38">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf29">
<mml:math id="m39">
<mml:mrow>
<mml:mfenced open="(" close=")" separators="|">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To reduce the tracking time, this paper adopts a bidirectional tracking method: tracking proceeds in the clockwise and counterclockwise directions simultaneously, with the two directions running in parallel. Clockwise tracking, called right tracking, traces the right half of the image contour; counterclockwise tracking, called left tracking, traces the left half. The midpoint of the initial edge is selected as the starting point, the right half of the contour is obtained by tracing clockwise, and the left half is obtained by tracing counterclockwise. Bidirectional tracking shortens the contour generation time and correspondingly increases the contour generation speed. Since the facial contour lies at the top of the image, all detections are performed in order from top to bottom. A two-dimensional edge point array E is created and initialized to 0, and the initial midpoint M is determined; tracking then proceeds from the edge point array E toward the initial midpoint M.</p>
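<p>The bidirectional tracking described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the two trackers run sequentially rather than in parallel, and the one-pixel-wide "V"-shaped contour and the rule of preferring the larger or smaller x coordinate are assumptions:</p>

```python
# A one-pixel-wide open contour (an illustrative "V" shape, one point per column).
contour = {(x, abs(x - 10)) for x in range(21)}
start = (10, 0)            # initial midpoint M at the top of the contour
visited = {start}          # shared record of traced edge points

def track(origin, prefer_right):
    """Trace one half of the contour from `origin` through the 8-neighbourhood
    of Table 1; prefer_right selects the clockwise (increasing-x) branch,
    otherwise the counterclockwise one."""
    path, current = [origin], origin
    while True:
        cands = [(current[0] + dx, current[1] + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        cands = [p for p in cands if p in contour and p not in visited]
        if not cands:
            return path
        # right tracking prefers the neighbour with the largest x, left tracking
        # the smallest, so the two halves grow away from the midpoint.
        current = max(cands) if prefer_right else min(cands)
        visited.add(current)
        path.append(current)

right_half = track(start, prefer_right=True)
left_half = track(start, prefer_right=False)
```

Together the two traces cover the whole contour, sharing only the starting midpoint.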
</sec>
<sec id="s4-3">
<title>4.3 Auxiliary contour generation</title>
<p>The tomographic images are separated by a fixed spacing. If there are not enough CT images, the number of soft tissue contour lines obtained is insufficient. With this in mind, the midpoint method is used to create auxiliary contour lines. The midpoint method works as follows: each candidate path is first divided into quadrants, direct comparison of coordinate values is used to quickly eliminate some candidate paths, and the midpoint criterion is then applied to determine the direction of the candidate paths that are difficult to distinguish. Since the midpoint criterion requires only shift, addition, and subtraction operations and avoids cumbersome angle and intersection calculations, its computational efficiency is high.</p>
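<p>The midpoint construction of an auxiliary contour between two adjacent slice contours can be sketched as follows (a simplified illustration assuming the two contours have already been resampled into corresponding point pairs; the synthetic arcs are assumptions):</p>

```python
import numpy as np

# Two corresponding soft-tissue contours from adjacent CT slices
# (synthetic open arcs of slightly different radius, as an illustration).
t = np.linspace(0.0, np.pi, 50)
contour_a = np.stack([30.0 * np.cos(t), 30.0 * np.sin(t)], axis=1)
contour_b = np.stack([34.0 * np.cos(t), 34.0 * np.sin(t)], axis=1)

# Midpoint method: each auxiliary point is the midpoint of a corresponding
# pair, which needs only an addition and a shift (division by two).
aux = (contour_a + contour_b) / 2.0
```

The auxiliary contour lies halfway between the two slice contours, doubling the contour density without any angle or intersection computation.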
</sec>
</sec>
<sec id="s5">
<title>5 Experiments of the improved edge detection algorithm</title>
<p>This paper selects 20 sets of CT facial images as samples, numbered A&#x223c;T, with an effective rate of 100%. The improved edge detection algorithm is named IM, and the traditional edge detection algorithm is named TR. During the experiment, a set of facial CT tomographic images is first input, and the images in the specified folder are read and displayed by the program. The tomographic image spacing used is 0.5&#xa0;mm. Edge detection is performed on all input images with both the thresholded Sobel method and the traditional algorithm, and bidirectional contour tracking of the image is then carried out on the detected edges.</p>
<sec id="s5-1">
<title>5.1 Comparison of edge detection speed</title>
<p>In the improved algorithm proposed in this paper, bidirectional contour tracking means that the contour is tracked outward from the midpoint, with the left and right tracking performed synchronously. Since the amount of contour line data is not very large, memory use is hardly affected by adding auxiliary contour lines. Although generating the auxiliary contour lines takes a certain amount of time, contour generation as a whole takes less time, so the generation efficiency of the contour lines is improved while their number is correspondingly increased. The edge detection speed comparison is shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Edge detection speed.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g004.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F4">Figure 4</xref> reflects the edge detection speed comparison of the two algorithms. The measured speed of the improved IM algorithm agrees with the theoretical inference: compared with the traditional algorithm, the edge detection speed is increased by about 81.37%, almost doubling it. The reason is that the Sobel operator involves 8 different pixels. The optimization records the color values of 8 consecutive pixels in 8 variables, each color value represented as 16-bit data. During the computation each integer is converted to a floating-point number and then back to an integer. Because the resulting data are likely to exceed the range representable by a byte, an anti-saturation down-packing function is applied in the final step, when the 8 int values must be converted back to byte type.</p>
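<p>The final saturating conversion to byte type can be illustrated with numpy (a sketch of the idea only; np.clip stands in for the anti-saturation down-packing function, which is an interpretation rather than the paper's actual routine):</p>

```python
import numpy as np

# Sixteen-bit intermediate Sobel results can fall outside the 0-255 byte
# range, so they are packed back to bytes with saturation (clamping)
# rather than letting the values wrap around.
vals = np.array([-40, 0, 127, 255, 300, 1020], dtype=np.int16)

saturated = np.clip(vals, 0, 255).astype(np.uint8)  # saturating pack
```

A plain cast to uint8 would instead wrap out-of-range values modulo 256 and corrupt the edge magnitudes.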
</sec>
<sec id="s5-2">
<title>5.2 Comparison of edge detection accuracy</title>
<p>The so-called accuracy is the judgment of edge points. The most direct and effective way is to use the differential operator to perform numerical differentiation on the image data to judge the edge strength and position. Most traditional differential operators use small matrix convolution, which leads to too strong directionality and poor edge detection effect in different directions. The edge detection accuracy comparison is shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Edge detection accuracy.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g005.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F5">Figure 5</xref> reflects the comparison of edge detection accuracy. The numbers of valid edge points obtained by the two algorithms differ considerably, and the gap is pronounced. The larger the number of valid points, the more accurate the algorithm&#x2019;s judgment of the edge points, which lays a good foundation for the subsequent segmentation of the facial contour image.</p>
</sec>
<sec id="s5-3">
<title>5.3 Comparison of contour segmentation effects</title>
<p>After edge detection is performed and edge points are obtained, further optimization is required, mainly in the form of image denoising. The method is chiefly to remove isolated points and hanging points so as to facilitate contour segmentation. The removal threshold should be chosen according to the specific conditions of the image; if it is set improperly, the main contours in the edge image will be destroyed. The comparison of contour segmentation effects is shown in <xref ref-type="fig" rid="F6">Figure 6</xref>.</p>
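<p>The removal of isolated points can be sketched as follows (an illustrative numpy version; the rule of deleting edge pixels that have no 8-connected edge neighbour is an assumption about the method):</p>

```python
import numpy as np

def remove_isolated(edges):
    """Delete edge pixels that have no 8-connected edge neighbour."""
    h, w = edges.shape
    padded = np.pad(edges, 1)
    # Count the 8 neighbours of every pixel by summing shifted copies.
    count = np.zeros((h, w), dtype=int)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            if (di, dj) != (1, 1):
                count += padded[di:di + h, dj:dj + w]
    return edges & (count > 0)

edges = np.zeros((7, 7), dtype=int)
edges[3, 1:6] = 1      # a genuine connected edge segment
edges[0, 6] = 1        # an isolated noise point
cleaned = remove_isolated(edges)
```

The isolated point is removed while the connected segment, whose pixels all have at least one edge neighbour, is kept intact.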
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Contour segmentation effect.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g006.tif"/>
</fig>
<p>
<xref ref-type="fig" rid="F6">Figure 6</xref> shows the comparison of contour segmentation effects. The numbers of noise points removed by the two algorithms differ greatly: the improved algorithm has good denoising capability, and after denoising the interference factors in the image are much reduced, which benefits the extraction of facial features. Therefore, the optimized algorithm segments facial contours well, whereas the traditional algorithm does not.</p>
</sec>
<sec id="s5-4">
<title>5.4 Comparison of comprehensive effects of edge detection</title>
<p>The results obtained in the above experiments are weighted to generate a score, and a comparison of the comprehensive edge detection effects of the two algorithms is obtained, as shown in <xref ref-type="fig" rid="F7">Figure 7</xref>.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Comprehensive effect of edge detection.</p>
</caption>
<graphic xlink:href="fphy-11-1108393-g007.tif"/>
</fig>
<p>The comparison of the two techniques&#x2019; overall edge detection effects is shown in <xref ref-type="fig" rid="F7">Figure 7</xref>. Compared with the conventional edge detection algorithm, the new approach improved the edge detection effect by roughly 12.05%. With mean or median filtering applied before the algorithm, the upgraded algorithm finds edges more accurately and with less computation time.</p>
</sec>
</sec>
<sec sec-type="conclusion" id="s6">
<title>6 Conclusion</title>
<p>Image edge detection is a basic and central component of digital image processing. At the initial stage of machine recognition, whether edges are detected correctly directly affects the results of subsequent image processing. Due to the complexity of image edges and the influence of noise, there is no universal algorithm for image edge detection. This paper introduced the concept of image segmentation. To address the time-consuming and inaccurate generation of tissue contours, a contour generation algorithm was studied. In addition, this paper selected the facial image as the research target, focused on the facial contour generation algorithm according to the grayscale characteristics of the facial image, and proposed the IM algorithm. The experimental results showed that extracting the facial soft tissue contour with threshold edge detection and the bidirectional contour tracking algorithm can reduce the contour line generation time and remove different tissue contours according to the threshold setting. In addition, according to the characteristics of facial contour lines, the midpoint method was used to generate auxiliary contour lines and increase their number. The facial contour line is a non-closed curve; if a closed curve is to be generated, the termination condition of the bidirectional contour tracking needs to be changed accordingly. There are many ways to generate auxiliary contour lines, and this paper adopted different generation methods according to the contour lines&#x2019; characteristics. The experiments yielded comparison results for edge detection speed, edge detection accuracy, contour segmentation effect, and overall edge detection effect. In the future, work will focus on the auxiliary contour generation method and the contour surface, in the hope of finding a general auxiliary contour generation algorithm and further optimizing the IM algorithm.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="s8">
<title>Author contributions</title>
<p>The author confirms being the sole contributor of this work and has approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="s9">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Luo</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Improved Harris corner detection algorithm based on Canny edge detection and gray difference preprocessing</article-title>. <source>J Phys Conf Ser</source> (<year>2021</year>) <volume>1971</volume>(<issue>1</issue>):<fpage>012088</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1088/1742-6596/1971/1/012088</pub-id>
</citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>The supervised CNN image edge detection algorithm in scotopic vision environment</article-title>. In: <conf-name>2021 IEEE 9th international conference on bioinformatics and computational biology (ICBCB)</conf-name>. <publisher-name>IEEE</publisher-name> (<year>2021</year>) <volume>9</volume>(<issue>5</issue>):<fpage>23</fpage>.</citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhi-Bin</surname>
<given-names>HU</given-names>
</name>
<name>
<surname>Deng</surname>
<given-names>CX</given-names>
</name>
<name>
<surname>Shao</surname>
<given-names>YH</given-names>
</name>
</person-group>. <article-title>Image edge detection algorithm based on dyadic wavelet transform and improved morphology</article-title>. <source>Comput Eng Des</source> (<year>2020</year>) <volume>24</volume>(<issue>5</issue>):<fpage>56</fpage>&#x2013;<lpage>85</lpage>.</citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Feng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>Image edge detection algorithm based on adjacent scale product coefficient</article-title>. In: <conf-name>2020 5th international conference on electromechanical control technology and transportation (ICECTT)</conf-name>. <year>2020</year>, <volume>8</volume>(<issue>5</issue>):<fpage>126</fpage>.</citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Menaka</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Janarthanan</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Deeba</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>FPGA implementation of low power and high-speed image edge detection algorithm</article-title>. <source>Microprocessors and Microsystems</source> (<year>2020</year>) <volume>75</volume>(<issue>10</issue>):<fpage>103053</fpage>&#x2013;<lpage>3</lpage>. <pub-id pub-id-type="doi">10.1016/j.micpro.2020.103053</pub-id>
</citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Orujov</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Maskeliunas</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Damasevicius</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Fuzzy-based image edge detection algorithm for blood vessel detection in retinal images</article-title>. <source>Appl Soft Comput</source> (<year>2020</year>) <volume>13</volume>(<issue>6</issue>):<fpage>106452</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2020.106452</pub-id>
</citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Ji</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>X</given-names>
</name>
</person-group>. <article-title>Edge detection algorithm of medical image based on Canny operator</article-title>. <source>J Phys Conf Ser</source> (<year>2021</year>) <volume>1955</volume>(<issue>1</issue>):<fpage>012080</fpage>&#x2013;<lpage>0</lpage>. <pub-id pub-id-type="doi">10.1088/1742-6596/1955/1/012080</pub-id>
</citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sami</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Soltane</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Helal</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Microscopic image segmentation and morphological characterization of novel chitosan/silica nanoparticle/nisin films using antimicrobial technique for blueberry preservation</article-title>. <source>Membranes</source> (<year>2021</year>) <volume>11</volume>(<issue>5</issue>):<fpage>303</fpage>&#x2013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.3390/membranes11050303</pub-id>
</citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suban</surname>
<given-names>IB</given-names>
</name>
<name>
<surname>Suyoto</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Pranowo</surname>
<given-names>P</given-names>
</name>
</person-group>. <article-title>Medical image segmentation using a combination of lattice Boltzmann method and fuzzy clustering based on GPU CUDA parallel processing</article-title>. <source>Int J Online Biomed Eng (Ijoe)</source> (<year>2021</year>) <volume>17</volume>(<issue>11</issue>):<fpage>76</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.3991/ijoe.v17i11.24459</pub-id>
</citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>HA</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>He</surname>
<given-names>D</given-names>
</name>
<etal/>
</person-group> <article-title>Facial image segmentation based on Gabor filter</article-title>. <source>Math Probl Eng</source> (<year>2021</year>) <volume>2021</volume>(<issue>6</issue>):<fpage>1</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1155/2021/6620742</pub-id>
</citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rangayya</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Patil</surname>
<given-names>N</given-names>
</name>
</person-group>. <article-title>An enhanced segmentation technique and improved support vector machine classifier for facial image recognition</article-title>. <source>Int J Intell Comput Cybernetics (English)</source> (<year>2022</year>) <volume>15</volume>(<issue>2</issue>):<fpage>302</fpage>&#x2013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1108/ijicc-08-2021-0172</pub-id>
</citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yin</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Segmentation-reconstruction-guided facial image de-occlusion</article-title>. <source>Comput Electr Eng</source> (<year>2021</year>) <volume>6</volume>(<issue>5</issue>):<fpage>5</fpage>&#x2013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.48550/arXiv.2112.08022</pub-id>
</citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Islam</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mahmud</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Hossain</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>A facial region segmentation based approach to recognize human emotion using fusion of HOG &#x26; LBP features and artificial neural network</article-title>. In: <conf-name>2018 4th International Conference on Electrical Engineering and Information &#x26; Communication Technology (iCEEiCT)</conf-name>. <source>IEEE</source> (<year>2019</year>) <volume>89</volume>(<issue>1</issue>):<fpage>15</fpage>&#x2013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.13140/RG.2.2.12027.16160</pub-id>
</citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Portrait style transfer using deep convolutional neural networks and facial segmentation</article-title>. <source>Comput Electr Eng</source> (<year>2020</year>) <volume>85</volume>(<issue>6</issue>):<fpage>106655</fpage>&#x2013;<lpage>155</lpage>. <pub-id pub-id-type="doi">10.1016/j.compeleceng.2020.106655</pub-id>
</citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Islam</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mahmud</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Hossain</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>A facial region segmentation based approach to recognize human emotion using fusion of HOG &#x26; LBP features and artificial neural network</article-title>. In: <conf-name>2018 4th International Conference on Electrical Engineering and Information &#x26; Communication Technology (iCEEiCT)</conf-name> (<year>2018</year>) <volume>15</volume>(<issue>6</issue>):<fpage>565</fpage>&#x2013;<lpage>84</lpage>.</citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bharti</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>GU</given-names>
</name>
<name>
<surname>Arora</surname>
<given-names>SK</given-names>
</name>
</person-group>. <article-title>Measurement of round shape object from the image using MATLAB</article-title>. <source>JETIR</source> (<year>2021</year>) <volume>41</volume>(<issue>5</issue>):<fpage>2</fpage>&#x2013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kong</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Zou</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>A novel natural image edge detection algorithm based on depth image and feature extraction</article-title>. <source>Comput Intell Neurosci</source> (<year>2017</year>) <volume>34</volume>(<issue>3</issue>):<fpage>432</fpage>&#x2013;<lpage>543</lpage>.</citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teng</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zhang</surname>
</name>
</person-group>. <article-title>A new medical image edge detection algorithm based on BC-ACO</article-title>. <source>Int J Pattern Recognit Artif Intell</source> (<year>2017</year>) <volume>45</volume>(<issue>9</issue>):<fpage>51</fpage>&#x2013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1142/S0218001417570026</pub-id>
</citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cao</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Min</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>Y</given-names>
</name>
</person-group>. <article-title>Implementing a parallel image edge detection algorithm based on the Otsu-Canny operator on the Hadoop platform</article-title>. <source>Comput Intell Neurosci</source> (<year>2018</year>) <volume>2018</volume>(<issue>4</issue>):<fpage>1</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1155/2018/3598284</pub-id>
</citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wei</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Dong</surname>
</name>
<name>
<surname>Dong</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Developing an image manipulation detection algorithm based on edge detection and Faster R-CNN</article-title>. <source>Symmetry</source> (<year>2019</year>) <volume>11</volume>(<issue>10</issue>):<fpage>1223</fpage>&#x2013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.3390/sym11101223</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>