<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Organ. Psychol.</journal-id>
<journal-title>Frontiers in Organizational Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Organ. Psychol.</abbrev-journal-title>
<issn pub-type="epub">2813-771X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/forgp.2025.1500016</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Organizational Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Influence of deliverable evaluation feedback and additional reward on worker&#x00027;s motivation in crowdsourcing services</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Hijikata</surname> <given-names>Yoshinori</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2222000/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/funding-acquisition/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/project-administration/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/software/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/visualization/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ishizaki</surname> <given-names>Hikari</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Graduate School of Information Science, University of Hyogo</institution>, <addr-line>Kobe, Hyogo</addr-line>, <country>Japan</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Business Administration, Kwansei Gakuin University</institution>, <addr-line>Nishinomiya, Hyogo</addr-line>, <country>Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Ron Landis, Wilbur O. and Ann Powers College of Business, Clemson University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Hale Erden, Final International University, Cyprus</p>
<p>Enrique Estell&#x000E9;s-Arolas, Catholic University of Valencia San Vicente M&#x000E1;rtir, Spain</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Yoshinori Hijikata <email>contact&#x00040;soc-research.org</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>04</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>3</volume>
<elocation-id>1500016</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>09</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>03</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2025 Hijikata and Ishizaki.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Hijikata and Ishizaki</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<sec>
<title>Introduction</title>
<p>To create training data for AI systems, correct labels must be manually assigned to a large number of objects; this task is often performed via crowdsourcing. The task is usually divided into a number of smaller, more manageable segments, and workers complete them one after another. In this study, assuming such a task, we investigated whether deliverable evaluation feedback and the provision of additional rewards contribute to improving workers&#x00027; motivation, that is, their persistence with the tasks and their performance.</p></sec>
<sec>
<title>Method</title>
<p>We conducted a user experiment on a real crowdsourcing service platform. The experiment comprised a first and a second round of tasks, which asked workers to assign the correct species label to flower images. We developed an experimental system that assessed the work products of the first-round task performed by a worker and presented the results to the worker. Six hundred forty-five workers participated in this experiment. They were divided into high- and low-performing groups according to their first-round scores (correct answer ratio). The workers&#x00027; performance and task continuation ratio were compared between the high- and low-performing groups and between the conditions with and without evaluation feedback and additional rewards.</p></sec>
<sec>
<title>Results</title>
<p>We found that the presentation of deliverable evaluations increased the task continuation rate of high-quality workers but did not contribute to an increase in task performance (correct answer rate) for either type of worker. Providing additional rewards reduced workers&#x00027; task continuation rate, and the reduction was larger for low-quality workers than for high-quality workers. However, it substantially increased low-quality workers&#x00027; task performance. Although not statistically significant, low-quality workers&#x00027; task performance in the second round was highest for those who were shown both feedback and additional rewards.</p></sec>
<sec>
<title>Discussion</title>
<p>Previous studies found that rewards positively affected worker motivation, which is inconsistent with the results of our study. One possible reason is that previous studies examined workers&#x00027; future engagement with different tasks, whereas our study examined workers&#x00027; successive attempts at almost the same task. In conclusion, it is better to offer both feedback and additional rewards when the quality of the deliverables is a priority, and to give only feedback when the quantity of deliverables is a priority.</p></sec></abstract>
<kwd-group>
<kwd>crowdsourcing</kwd>
<kwd>deliverable assessment</kwd>
<kwd>evaluation feedback</kwd>
<kwd>additional reward</kwd>
<kwd>worker motivation</kwd>
<kwd>task continuation</kwd>
<kwd>task performance</kwd>
</kwd-group>
<contract-num rid="cn001">JP19K12242</contract-num>
<contract-num rid="cn001">JP23K28194</contract-num>
<contract-sponsor id="cn001">Japan Society for the Promotion of Science<named-content content-type="fundref-id">https://doi.org/10.13039/501100001691</named-content></contract-sponsor>
<contract-sponsor id="cn002">Japan Science and Technology Agency<named-content content-type="fundref-id">https://doi.org/10.13039/501100002241</named-content></contract-sponsor>
<counts>
<fig-count count="3"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="44"/>
<page-count count="10"/>
<word-count count="7666"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Performance and Development</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>In recent years, crowdsourcing services have been used by companies to diversify work styles and improve production efficiency. In crowdsourcing, workers are recruited via the Internet, which makes recruitment easier than conventional means such as flyers and job magazines (Howe, <xref ref-type="bibr" rid="B17">2006</xref>). In addition, workers can work anytime and from anywhere; therefore, they have the flexibility to work during their spare time. However, since the work provider (hereinafter, &#x0201C;crowdsourcer&#x0201D;) cannot directly monitor workers, it is impossible to know whether they are diligently working on their tasks, and crowdsourcers must devise ways to encourage diligent work. In addition, crowdsourcing services must introduce mechanisms that can guarantee the quality of deliverables (Kashima et al., <xref ref-type="bibr" rid="B21">2016</xref>; Soliman and Tuunainen, <xref ref-type="bibr" rid="B32">2015</xref>).</p>
<p>In particular, when collecting training data for machine learning, it is necessary to have people assign correct labels to examples such as photos or sounds. For such tasks, it is desirable to have workers work on many tasks while improving the quality of the work they perform. However, many crowdsourcers cannot ensure that workers work consistently on similar or subsequent tasks after their initial participation (Rockmann and Ballinger, <xref ref-type="bibr" rid="B29">2017</xref>; Yang et al., <xref ref-type="bibr" rid="B41">2008</xref>). If a participant does not work on more than a few microtasks, the effort to break tasks down into microtasks and post them on the crowdsourcing platform is wasted. Therefore, it is necessary to encourage continuous worker participation, not only to obtain large volumes of participation in similar tasks in the short term but also to maintain the total volume of effort in the crowdsourcing community in the long term (Sun et al., <xref ref-type="bibr" rid="B34">2015</xref>). Consequently, both academia and industry are looking for ways to encourage workers to continue participating in similar or subsequent tasks (Liu and Liu, <xref ref-type="bibr" rid="B26">2019</xref>; Sun et al., <xref ref-type="bibr" rid="B33">2012</xref>). However, there is a trade-off between the quality and quantity of deliverables. On the one hand, the quantity of deliverables decreases as the evaluation criteria are raised to ensure reliable deliverable quality. On the other hand, if the deliverables are not evaluated and payment to workers is high, the quantity of deliverables increases but their quality decreases. Therefore, maintaining a certain level of quality while ensuring a sufficient quantity of deliverables is an extremely difficult problem.</p>
<p>General approaches to this problem involve devising the task and its work environment. Examples include making the task socially meaningful rather than purely profit-oriented (Rogstadius et al., <xref ref-type="bibr" rid="B30">2011</xref>; Cappa et al., <xref ref-type="bibr" rid="B4">2019</xref>) and framing it as a game so that workers can enjoy themselves while working (Hong et al., <xref ref-type="bibr" rid="B16">2013</xref>; Goncalves et al., <xref ref-type="bibr" rid="B13">2014</xref>; Feng et al., <xref ref-type="bibr" rid="B11">2018</xref>; Morschheuser et al., <xref ref-type="bibr" rid="B28">2019</xref>; Uhlgren et al., <xref ref-type="bibr" rid="B37">2024</xref>). Other possible approaches include excluding workers who do not satisfy the necessary conditions before they perform the task (Matsubara and Wang, <xref ref-type="bibr" rid="B27">2014</xref>; Allahbakhsh et al., <xref ref-type="bibr" rid="B1">2013</xref>), and excluding those who complete the task in an extremely short time by measuring the time taken to perform it (Cosley et al., <xref ref-type="bibr" rid="B6">2005</xref>). Furthermore, it is possible to set effective rewards that serve as incentives for the tasks workers perform (Watts and Mason, <xref ref-type="bibr" rid="B39">2009</xref>; Ho et al., <xref ref-type="bibr" rid="B15">2015</xref>; Yin et al., <xref ref-type="bibr" rid="B43">2013</xref>; Feng et al., <xref ref-type="bibr" rid="B11">2018</xref>; Cappa et al., <xref ref-type="bibr" rid="B4">2019</xref>), to provide evaluation feedback on the deliverables (Feng et al., <xref ref-type="bibr" rid="B11">2018</xref>), and to inform workers about the evaluation criteria to encourage them to take their work more seriously (Dow et al., <xref ref-type="bibr" rid="B9">2012</xref>). Another way is to evaluate workers&#x00027; abilities in advance by testing them and to distribute tasks according to those abilities (Tauchmann et al., <xref ref-type="bibr" rid="B35">2020</xref>).</p>
<p>Some studies have shown that explaining the objectives or social significance of a project improves workers&#x00027; motivation. For example, showing a project&#x00027;s technical features has been found to lead to sustained participation (Jackson et al., <xref ref-type="bibr" rid="B19">2015</xref>), and showing the social significance of the task has been found to increase the number of participants (Cappa et al., <xref ref-type="bibr" rid="B4">2019</xref>). The relationship between task instructions and workers&#x00027; participation intentions or motivations has also been investigated. For example, the type of task instructions (i.e., unbounded, suggestive, or prohibitive) has been found to affect the creativity of participants (Chaffois et al., <xref ref-type="bibr" rid="B5">2015</xref>). Yin et al. (<xref ref-type="bibr" rid="B44">2022</xref>) examined how requirement-oriented and reward-oriented strategies in task instructions affect worker participation and found that the usage of restrictive words affects the number of participants.</p>
<p>Some studies have combined the methods described above to increase workers&#x00027; motivation to engage in tasks. Feng et al. (<xref ref-type="bibr" rid="B11">2018</xref>) studied the effects of rewards for deliverables and feedback on motivation to participate in tasks for 295 workers in a crowdsourcing service. Furthermore, they examined whether the four intrinsic motivations (self-presentation, self-efficacy, social bonding, and playfulness) in the crowdsourcing context have a mediating effect on workers&#x00027; motivation. The results confirmed the mediating effects of intrinsic motivations of self-presentation, self-efficacy, and playfulness on the effect of rewards and feedback on respondents&#x00027; willingness to participate.</p>
<p>Using a public database of crowdsourcing tasks, Cappa et al. (<xref ref-type="bibr" rid="B4">2019</xref>) collected data related to &#x0201C;crowdsourcing invention activities&#x0201D; campaigns conducted between 2007 and 2014. The data were used to investigate the impact of financial rewards and explanations of social significance on the number of project participants. Regression analysis revealed that financial rewards showed only a trend-level effect on the number of participants, whereas explaining the social significance had a significant impact.</p>
<p>While the aforementioned studies examined whether task devices affect worker motivation, other studies have investigated the psychological elements that constitute worker motivation in crowdsourcing. According to self-determination theory (SDT), two types of motivation drive human behavior: intrinsic and extrinsic (Ryan and Deci, <xref ref-type="bibr" rid="B31">2000</xref>). Participants in collaborative work are likewise motivated in these two ways (Wasko and Faraj, <xref ref-type="bibr" rid="B38">2005</xref>; Antikainen et al., <xref ref-type="bibr" rid="B2">2010</xref>). Soliman and Tuunainen (<xref ref-type="bibr" rid="B32">2015</xref>) investigated the extrinsic and intrinsic motivations that influence the use of crowdsourcing services; they found that intrinsic motivation comprised curiosity, enjoyment, and altruism, while extrinsic motivation comprised financial rewards, skill development, future employment, and publicity. Kaufmann et al. (<xref ref-type="bibr" rid="B22">2011</xref>) investigated whether intrinsic or extrinsic motivation was stronger among crowdsourcing service workers. The survey results showed that immediate rewards, an extrinsic motivator, were rated highest; among the intrinsic motivators, enjoyment-based motivation tended to be higher than the others.</p>
<p>Huang et al. (<xref ref-type="bibr" rid="B18">2020</xref>) conducted a questionnaire survey on crowdsourcing to investigate whether workers intended to continue working on tasks from the same crowdsourcer. They investigated whether the flexibility and enjoyment of previous tasks, as well as trust in the crowdsourcer, affected the worker&#x00027;s intention to continue working. The results showed that enjoyment of the previous task and trust in the crowdsourcer had positive effects on task continuation.</p>
<p>Thus, studies investigating the factors that correlate with workers&#x00027; motivation to engage in tasks and their task persistence have mainly been conducted using questionnaires. The results of these questionnaire-based studies will be more reliable in terms of causality if they are supported by results from psychological experiments. Therefore, in this study, we used a psychological experiment to test whether evaluation feedback on deliverables, and additional rewards based on it, increase workers&#x00027; persistence in participating in tasks and improve the quality of the tasks they perform.</p>
<p>We targeted tasks that require a large number of identical inputs and responses, such as the task of collecting training data for machine learning. In this simple repetitive input/response task, the crowdsourcer can divide the tasks into a certain number of smaller and more manageable segments (microtasks) and request the workers in crowdsourcing services to complete them (Deng et al., <xref ref-type="bibr" rid="B7">2016</xref>). We developed an experimental system that assessed the work products of the first task performed by a worker and presented the results to the worker (feedback). While feedback in crowdsourcing services typically includes numeric reviews and textual comments by crowdsourcers (Feng et al., <xref ref-type="bibr" rid="B11">2018</xref>; Jian et al., <xref ref-type="bibr" rid="B20">2019</xref>), in this study, the correct answer rate, which can be systematically calculated from responses and immediately presented to the worker, was used. By allowing workers to check the evaluation of their work products, it is expected that if the evaluation is good, the worker will be more motivated for the task. Moreover, we paid additional rewards only if the evaluation was good because workers were expected to be more motivated when they knew that their rewards would change depending on their effort.</p>
<p>We examined the effects of the abovementioned system by categorizing workers into two groups: high-quality and low-quality workers. We prepared a task that could be worked on twice in succession and grouped workers according to their performance on the first task: high-quality workers were those with high correct answer ratios (hereinafter, &#x0201C;scores&#x0201D;) and low-quality workers were those with low scores. For high-quality workers, the presentation of work product evaluations (feedback) and additional rewards was expected to positively affect motivation, because those who work sincerely are likely to feel more satisfied with their work when they know that their efforts are evaluated fairly. For low-quality workers, however, it may negatively affect motivation: because they do not work honestly, knowing that the crowdsourcer is carefully evaluating their work may lead them to think that they cannot complete the task with ease, even if they continue working on it. Nevertheless, it is also possible that some workers, even low-quality ones, change their motivation and try to work on the task with integrity when they know that their work products are being evaluated. Therefore, the following eight hypotheses were formulated:</p>
<p>Hypothesis 1-1: High-quality workers will continue with the tasks when they receive feedback.</p>
<p>Hypothesis 1-2: High-quality workers will continue with the tasks when they receive rewards in addition to feedback.</p>
<p>Hypothesis 1-3: High-quality workers will achieve a higher score on the second task when feedback on the first task is provided.</p>
<p>Hypothesis 1-4: High-quality workers will achieve a higher score on the second task when they receive rewards in addition to feedback.</p>
<p>Hypothesis 2-1: Low-quality workers are less likely to continue with the tasks when they receive feedback.</p>
<p>Hypothesis 2-2: Low-quality workers are less likely to continue with the tasks when they receive rewards in addition to feedback.</p>
<p>Hypothesis 2-3: Low-quality workers will achieve a higher score on the second task when feedback for the first task is provided.</p>
<p>Hypothesis 2-4: Low-quality workers will achieve a higher score on the second task when they receive rewards in addition to feedback.</p></sec>
<sec id="s2">
<title>2 Method</title>
<sec>
<title>2.1 Experimental system and tasks</title>
<p>In this study, an experiment was conducted on an actual crowdsourcing service with people who typically undertake crowdsourcing jobs. Among the crowdsourcing services available in Japan, we chose &#x0201C;CrowdWorks,&#x0201D; as it had the largest number of workers in Japan. We developed an experimental system designed to collect training data for machine learning (hereinafter referred to as the &#x0201C;experimental system&#x0201D;). The experimental system was implemented in PHP. There exist three versions (conditions) in the developed system: Condition (1) Without feedback (FB) and without additional reward (AR), Condition (2) with FB and without AR, and Condition (3) with FB and with AR.</p>
<p>The task prepared for the experiment was to identify the species of a flower image displayed on the screen. Specifically, an image of a flower of either the species &#x0201C;halcyon&#x0201D; (scientific name: <italic>Erigeron philadelphicus</italic>) or &#x0201C;daisy&#x0201D; (scientific name: <italic>Erigeron annuus</italic>) was displayed, and workers had to guess which of the two species was shown (<xref ref-type="fig" rid="F1">Figure 1</xref>). Because it is not easy to distinguish between these two species, an explanation of how to tell them apart was always displayed below the question. Each task consisted of 20 identification questions asked in succession, and the task was presented twice. After completing the first task, workers could proceed to the second task at their discretion; that is, they could end the experiment after completing the first task or continue to the second.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Screenshot of the experimental system (question page).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="forgp-03-1500016-g0001.tif"/>
</fig>
<p>To test our hypothesis, we prepared the following three conditions:</p>
<p>(1) Feedback (percentage of correct answers) was not provided and no additional reward was given.</p>
<p>(2) Feedback (percentage of correct answers) was provided but no additional reward was given.</p>
<p>(3) Feedback (percentage of correct answers) was provided and additional rewards were paid to those with a high score (correct answer ratio).</p>
<p>By comparing conditions (1) and (2), we could clarify the relationship between feedback and task performance (i.e., the score in the second task), and between feedback and continuation rate (i.e., the ratio of workers who proceeded to the second task after completing the first task). Comparing (2) and (3) allowed us to determine whether there was a difference in task performance or continuation rate based on an additional reward when feedback was provided.</p>
</sec>
<sec>
<title>2.2 Experimental conditions</title>
<p>The first and second tasks consisted of 20 questions each. The score (correct answer ratio) was calculated by dividing the number of correct answers by 20 and was presented to the workers as a percentage (<xref ref-type="fig" rid="F2">Figure 2</xref>). To test the hypotheses, it was necessary to establish a threshold score to distinguish between high- and low-quality workers. Thus, we asked 13 students in our laboratory to complete the task. The results are shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. Based on these results, we adopted a correct answer ratio of 80% as the threshold for distinguishing between high- and low-quality workers. In the actual experiment, workers who answered at least 80% of the first task correctly were considered high-quality workers, and the others were considered low-quality workers. All workers were paid a flat rate of 50 yen per task for the first and second tasks. In experimental condition (3), the additional reward for high-quality workers was 25 yen; it was paid through a dummy task on the crowdsourcing service, which authorized workers could access to obtain the money by entering a password.</p>
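<p>The scoring and grouping described above can be sketched as follows. This is a minimal illustration with hypothetical function names, not the actual experimental system (which was implemented in PHP):</p>

```python
def score(answers, correct_answers):
    # Correct answer ratio: matches among the 20 scored questions, divided
    # by 20 (the dummy question described below is excluded from scoring).
    return sum(a == c for a, c in zip(answers, correct_answers)) / 20

def worker_group(first_round_score):
    # The 80% threshold was chosen from the lab pre-test (Figure 3).
    return "high" if first_round_score >= 0.8 else "low"
```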
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Screenshot of the experimental system (evaluation page).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="forgp-03-1500016-g0002.tif"/>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Histogram of scores (correct answers ratio) in pre-tests in the lab.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="forgp-03-1500016-g0003.tif"/>
</fig>
<p>In a crowdsourcing experiment, workers who are demotivated or who answer in a random manner can be eliminated by simple rules (Matsubara and Wang, <xref ref-type="bibr" rid="B27">2014</xref>; Allahbakhsh et al., <xref ref-type="bibr" rid="B1">2013</xref>; Cosley et al., <xref ref-type="bibr" rid="B6">2005</xref>), and such rules were also applied in this experiment. The participants were asked their birth year before starting the task and their zodiac sign (the sexagenary cycle traditionally used in East Asia, represented in Japan by the name of an animal) after completing it. In addition, an image that was clearly incorrect (an image of a large lily flower) was inserted in the middle of the task (hereafter, the dummy question). The usual number of answer choices was two; for the dummy question, however, three choices were given by adding a &#x0201C;neither&#x0201D; option. Therefore, 21 questions were presented to the participants. The experimental system also recorded the start and end times of the task and calculated the time required to answer. Workers whose birth year and zodiac sign did not match, who answered anything other than &#x0201C;neither&#x0201D; to the dummy question, or who took &#x0003C; 60 s to answer were considered unreliable (these criteria are called the filtering rules) and were excluded from the experimental results. The 60 s threshold was chosen because inputting all the answers in the Web forms in less time was considered physically impossible. A total of 300 workers were recruited for each experimental condition. They were recruited for what was described simply as a plant image identification task, and the experiment was conducted with their consent to participate. This study was reviewed and approved by the Research Ethics Review Committee of the authors&#x00027; institution.</p>
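<p>The filtering rules can be summarized in code as follows. This is an illustrative sketch with assumed field names, not the actual PHP implementation:</p>

```python
# Twelve-animal zodiac cycle as used in Japan; 1984 was a Rat year.
ZODIAC = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
          "Horse", "Goat", "Monkey", "Rooster", "Dog", "Boar"]

def zodiac_of(birth_year):
    return ZODIAC[(birth_year - 4) % 12]

def is_reliable(worker):
    # worker: dict with keys birth_year, reported_zodiac, dummy_answer,
    # elapsed_seconds (field names are assumptions for illustration).
    zodiac_ok = zodiac_of(worker["birth_year"]) == worker["reported_zodiac"]
    dummy_ok = worker["dummy_answer"] == "neither"
    time_ok = worker["elapsed_seconds"] >= 60  # under 60 s: physically impossible
    return zodiac_ok and dummy_ok and time_ok
```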
</sec>
</sec>
<sec id="s3">
<title>3 Results</title>
<sec>
<title>3.1 Analysis of all workers</title>
<p><xref ref-type="table" rid="T1">Table 1</xref> shows the numbers of participants (workers) in each experimental condition, those excluded as unreliable by the filtering rules, and those who remained. The results showed that few workers provided unreliable answers. We focused on workers&#x00027; scores (correct answer ratios). First, a Kolmogorov-Smirnov test was performed on the scores for each experimental condition (first and second tasks), and the scores were found not to be normally distributed. The average scores of the first and second tasks in experimental conditions (1)&#x02013;(3) are shown in <xref ref-type="table" rid="T2">Table 2</xref> (second and third columns). The mean score across conditions (1)&#x02013;(3) was 0.76 in the first task and 0.83 in the second. The mean score was higher in the second task than in the first under every experimental condition. Because workers could perform both tasks, their paired scores were used: Wilcoxon signed-rank tests were conducted for workers who performed both tasks, and significant differences were found, with <italic>p</italic> = 9.99e-13, 1.34e-11, and 2.65e-07 (all &#x0003C; 0.05) for experimental conditions (1), (2), and (3), respectively.</p>
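<p>The within-worker analysis above can be outlined in code as follows. This is an illustrative sketch on synthetic data, not the study data; it uses the SciPy implementations of the Kolmogorov-Smirnov and Wilcoxon signed-rank tests named in the text:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic paired scores for workers who completed both tasks.
first = rng.uniform(0.5, 1.0, size=200)
second = np.clip(first + rng.normal(0.07, 0.05, size=200), 0.0, 1.0)

# Normality check (the paper applied a Kolmogorov-Smirnov test per condition).
ks_stat, ks_p = stats.kstest((first - first.mean()) / first.std(), "norm")

# Non-parametric paired comparison of first- vs. second-task scores.
w_stat, w_p = stats.wilcoxon(first, second)
```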
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>The number of workers in each experimental condition.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#8f9496;color:#ffffff">
<th valign="top" align="left"><bold>Experimental condition</bold></th>
<th valign="top" align="center"><bold>All</bold></th>
<th valign="top" align="center"><bold>Unreliable</bold></th>
<th valign="top" align="center"><bold>Target</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Condition (1)-First</td>
<td valign="top" align="center">282</td>
<td valign="top" align="center">11</td>
<td valign="top" align="center">271</td>
</tr> <tr>
<td valign="top" align="left">Condition (1)-Second</td>
<td valign="top" align="center">231</td>
<td valign="top" align="center">11</td>
<td valign="top" align="center">220</td>
</tr> <tr>
<td valign="top" align="left">Condition (2)-First</td>
<td valign="top" align="center">268</td>
<td valign="top" align="center">9</td>
<td valign="top" align="center">259</td>
</tr> <tr>
<td valign="top" align="left">Condition (2)-Second</td>
<td valign="top" align="center">226</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">213</td>
</tr> <tr>
<td valign="top" align="left">Condition (3)-First</td>
<td valign="top" align="center">295</td>
<td valign="top" align="center">8</td>
<td valign="top" align="center">287</td>
</tr> <tr>
<td valign="top" align="left">Condition (3)-Second</td>
<td valign="top" align="center">213</td>
<td valign="top" align="center">22</td>
<td valign="top" align="center">191</td>
</tr></tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Average score in the first and second tasks and continuation rate of all workers in each experimental condition.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#8f9496;color:#ffffff">
<th valign="top" align="left"><bold>Experimental condition</bold></th>
<th valign="top" align="center"><bold>Score (first)</bold></th>
<th valign="top" align="center"><bold>Score (second)</bold></th>
<th valign="top" align="center"><bold>Continuation rate</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Condition (1)<break/> Without FB and without AR</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.81</td>
</tr> <tr>
<td valign="top" align="left">Condition (2)<break/> With FB and without AR</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.83</td>
<td valign="top" align="center">0.82</td>
</tr> <tr>
<td valign="top" align="left">Condition (3)<break/> With FB and with AR</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.67</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>FB, Feedback; AR, Additional reward.</p>
</table-wrap-foot>
</table-wrap>
<p>Although not a hypothesis of this study, we first examined whether feedback and additional rewards affected the continuation rates and scores of all participants, including both high- and low-quality workers. Specifically, we checked whether the difference between the first- and second-task scores varied across experimental conditions. A Kruskal-Wallis test yielded <italic>p</italic> = 0.69 (&#x0003E;0.05), and Wilcoxon rank-sum tests with Bonferroni adjustment showed no significant differences, with <italic>p</italic> = 1.00, 1.00, and 1.00 (all &#x0003E;0.05) for experimental conditions (1) and (2), (2) and (3), and (1) and (3), respectively. Therefore, the score difference did not depend on the experimental condition.</p>
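<p>The omnibus-plus-pairwise procedure above can be sketched as follows; the per-worker score differences are simulated placeholders, not the study&#x00027;s data, so the printed p-values are illustrative only.</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical score differences (second-task minus first-task score)
# for each experimental condition; placeholders for the real data.
diffs = {c: rng.normal(0.08, 0.15, size=150) for c in (1, 2, 3)}

# Omnibus Kruskal-Wallis test across the three conditions.
h_stat, kw_p = stats.kruskal(*diffs.values())

# Pairwise Wilcoxon rank-sum tests with Bonferroni correction
# (three comparisons, so each raw p-value is multiplied by 3 and capped at 1).
pairs = [(1, 2), (2, 3), (1, 3)]
adj_p = {pair: min(1.0, stats.ranksums(diffs[pair[0]], diffs[pair[1]]).pvalue
                   * len(pairs))
         for pair in pairs}
print(f"Kruskal-Wallis p = {kw_p:.3g}; Bonferroni-adjusted pairwise p = {adj_p}")
```

<p>The same pattern (omnibus test followed by Bonferroni-adjusted pairwise tests) applies to the later analyses of score differences by worker quality.</p>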
<p>The continuation rates for experimental conditions (1)&#x02013;(3) are shown in <xref ref-type="table" rid="T2">Table 2</xref> (fourth column). We statistically tested whether the continuation rate differed depending on the experimental condition. A chi-square test showed a significant difference, with <italic>p</italic> = 4.84e-6 (&#x0003C; 0.05); Cramer&#x00027;s coefficient of association was 0.17. Residual analysis revealed adjusted standardized residuals of 2.32, 2.73, and &#x02212;4.94 for experimental conditions (1), (2), and (3), respectively (all absolute values &#x0003E;1.96), indicating significant differences. Thus, experimental condition (3), with feedback and additional rewards, had a lower continuation rate than the other conditions, whereas experimental condition (2), with feedback but no additional reward, and experimental condition (1), with neither, had higher continuation rates. However, the continuation rates of conditions (1) and (2) did not differ from each other, suggesting that feedback alone was ineffective in increasing the overall continuation rate.</p>
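<p>The chi-square test, Cramer&#x00027;s coefficient, and residual analysis above can be sketched as follows. The contingency table is reconstructed from the &#x0201C;Target&#x0201D; counts in Table 1 (first task: 271, 259, 287; second task: 220, 213, 191); because the paper&#x00027;s exact inputs may differ slightly, the resulting statistics are close to, but not necessarily identical to, the reported values.</p>

```python
import numpy as np
from scipy import stats

# Continued / not-continued counts per condition, reconstructed from the
# "Target" rows of Table 1; illustrative reconstruction, not the paper's
# exact analysis input.
table = np.array([[220, 51],
                  [213, 46],
                  [191, 96]])

chi2, p, dof, expected = stats.chi2_contingency(table)

# Cramer's coefficient of association (Cramer's V) for an r x c table.
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Adjusted standardized residuals: (observed - expected) scaled by each
# cell's standard error, so |value| > 1.96 flags a significant cell.
row = table.sum(axis=1, keepdims=True) / n
col = table.sum(axis=0, keepdims=True) / n
adj_resid = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
print(f"p = {p:.3g}, Cramer's V = {cramers_v:.2f}")
print(adj_resid.round(2))
```

<p>With these counts, Cramer&#x00027;s V rounds to 0.17 and the residual for condition (3) falls well below &#x02212;1.96, matching the direction of the reported result.</p>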
</sec>
<sec>
<title>3.2 Analysis based on worker quality</title>
<p>Next, we examined whether task continuation rate and score on the second task differed depending on the quality of the workers. Specifically, we calculated the average score (correct answer ratio) and continuation rate for the high- and low-quality groups. The results for the high- and low-quality workers are shown in <xref ref-type="table" rid="T3">Tables 3</xref>, <xref ref-type="table" rid="T4">4</xref>, respectively.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Average score in the first and second tasks and continuation rate of high-quality workers in each experimental condition.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#8f9496;color:#ffffff">
<th valign="top" align="left"><bold>Experimental condition</bold></th>
<th valign="top" align="center"><bold>Score (first)</bold></th>
<th valign="top" align="center"><bold>Score (second)</bold></th>
<th valign="top" align="center"><bold>Continuation rate</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Condition (1)<break/> Without FB and without AR</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.80</td>
</tr> <tr>
<td valign="top" align="left">Condition (2)<break/> With FB and without AR</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.85</td>
</tr> <tr>
<td valign="top" align="left">Condition (3)<break/> With FB and with AR</td>
<td valign="top" align="center">0.87</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.71</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>FB, Feedback; AR, Additional reward.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Average score in the first and second tasks and continuation rate of low-quality workers in each experimental condition.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#8f9496;color:#ffffff">
<th valign="top" align="left"><bold>Experimental condition</bold></th>
<th valign="top" align="center"><bold>Score (first)</bold></th>
<th valign="top" align="center"><bold>Score (second)</bold></th>
<th valign="top" align="center"><bold>Continuation rate</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Condition (1)<break/> Without FB and without AR</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.82</td>
</tr> <tr>
<td valign="top" align="left">Condition (2)<break/> With FB and without AR</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.80</td>
</tr> <tr>
<td valign="top" align="left">Condition (3)<break/> With FB and with AR</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.61</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>FB, Feedback; AR, Additional reward.</p>
</table-wrap-foot>
</table-wrap>
<p>We statistically examined whether the continuation rate differed between the experimental conditions for high- and low-quality workers. First, we focused on high-quality workers. A chi-square test of experimental condition and task continuation showed a significant difference, with <italic>p</italic> = 0.017 (&#x0003C; 0.05); Cramer&#x00027;s coefficient of association was 0.14. Residual analysis revealed adjusted standardized residuals of 0.86, 2.06, and &#x02212;2.72 for experimental conditions (1), (2), and (3), respectively, with conditions (2) and (3) exceeding the significance threshold (|stdres| &#x0003E; 1.96). Thus, experimental condition (3), with the additional reward, had a lower continuation rate than experimental condition (2) without it, and Hypothesis 1-2 was not supported. The continuation rate was higher in experimental condition (2), in which feedback was provided, than in experimental condition (1), in which no feedback was provided; thus, Hypothesis 1-1 was supported. This suggests that feedback helped maintain the motivation of high-quality workers to some extent. In other words, workers with high judgment ability or motivation may have been able to maintain their motivation by learning about their own performance.</p>
<p>Regarding low-quality workers, a chi-square test of experimental condition and task continuation showed a significant difference, with <italic>p</italic> = 5.98e-5 (&#x0003C; 0.05); Cramer&#x00027;s coefficient of association was 0.22. Residual analysis revealed adjusted standardized residuals of 2.37, 1.93, and &#x02212;4.40 for experimental conditions (1), (2), and (3), respectively, with conditions (1) and (3) exceeding the significance threshold (|stdres| &#x0003E; 1.96). Thus, experimental condition (3), with the additional reward, had a lower continuation rate than the other conditions, and Hypothesis 2-2 was supported. It is likely that low-quality workers, that is, workers with poor judgment ability or low motivation, lost the motivation to continue the task even when additional rewards were offered. Experimental condition (1), with neither additional reward nor feedback, had the highest continuation rate. For experimental condition (2), in which only feedback was provided, no significant association was found, so Hypothesis 2-1 was not supported.</p>
<p>Next, we focused on the scores (correct answer ratio). The scores of high-quality workers showed little change, with the means of the first- and second-task scores both approximately 0.86. To confirm this, Wilcoxon signed-rank tests were conducted on the first- and second-task scores of high-quality workers who proceeded to the second task. The p-values were 0.60, 1.00, and 0.43 (all &#x0003E;0.05) for conditions (1), (2), and (3), respectively; no significant differences were found. We also checked whether the difference between the second- and first-task scores depended on the experimental condition. A Kruskal-Wallis test yielded <italic>p</italic> = 0.76 (&#x0003E;0.05), and Wilcoxon rank-sum tests with Bonferroni adjustment revealed no significant differences, with <italic>p</italic> = 1.00, 1.00, and 1.00 (all &#x0003E;0.05) for experimental conditions (1) and (2), (2) and (3), and (1) and (3), respectively. Therefore, Hypotheses 1-3 and 1-4 were not supported.</p>
<p>In contrast, low-quality workers&#x00027; second-task scores were higher than their first-task scores in all experimental conditions. Wilcoxon signed-rank tests revealed significant differences, with <italic>p</italic> = 4.49e-15, 1.11e-14, and 1.07e-5 (all &#x0003C; 0.05) for experimental conditions (1), (2), and (3), respectively. We then tested whether the difference between the second- and first-task scores depended on the experimental condition. A Kruskal-Wallis test yielded <italic>p</italic> = 0.14 (&#x0003E;0.05), and Wilcoxon rank-sum tests with Bonferroni adjustment revealed no significant differences, with <italic>p</italic> = 1.00, 0.20, and 0.26 (all &#x0003E;0.05) for experimental conditions (1) and (2), (2) and (3), and (1) and (3), respectively.</p>
<p>Moreover, we checked whether the second-task score differed according to the experimental condition. A Kruskal-Wallis test yielded <italic>p</italic> = 0.0033 (&#x0003C; 0.05), and Wilcoxon rank-sum tests with Bonferroni adjustment revealed partially significant differences, with <italic>p</italic> = 1.00, 0.039, and 0.0026 (significance threshold <italic>p</italic> &#x0003C; 0.05) for experimental conditions (1) and (2), (2) and (3), and (1) and (3), respectively. The highest score was thus obtained in experimental condition (3), with feedback and additional rewards, and Hypothesis 2-4 was partially supported. That is, although the scores of individual workers did not increase, an overall high score was maintained in the second task because some workers did not proceed to it.</p>
<p>Surprisingly, the second-task scores of low-quality workers were almost identical to those of high-quality workers. This may be because workers who did not work hard on the first task worked hard on the second to receive the additional reward, and because only the low-quality workers who were confident of receiving the additional reward proceeded to the second task. In experimental condition (2), with feedback and no additional reward, the increase in scores was much smaller than in experimental condition (3). Therefore, Hypothesis 2-3 was not supported. Low-quality workers were likely not motivated by feedback on work-product evaluations alone.</p>
</sec>
</sec>
<sec id="s4">
<title>4 Discussion</title>
<p>An analysis of all workers showed that additional rewards reduced task continuation. An analysis of high- and low-quality workers showed that providing feedback on work-product evaluations contributed to a higher continuation rate among high-quality workers but had little effect on the continuation rate of low-quality workers. Feedback did not improve the second-task score for either group. Although the additional reward greatly reduced the continuation rate of low-quality workers, it also slightly reduced that of high-quality workers. Finally, although the additional reward did not improve the individual scores of low-quality workers, high scores were maintained among those who proceeded to the second task.</p>
<p>Previous studies on motivation for crowdsourcing task efforts (Feng et al., <xref ref-type="bibr" rid="B11">2018</xref>; Cappa et al., <xref ref-type="bibr" rid="B4">2019</xref>; Kaufmann et al., <xref ref-type="bibr" rid="B22">2011</xref>) found that rewards positively affect worker motivation. This is inconsistent with our finding that additional rewards did not lead to higher continuation rates. One reason may be that the previous studies involved distinct, one-shot tasks independent of each other, whereas our study asked workers to proceed immediately to a task with almost the same content as the first. Deciding whether to continue was not a significant cognitive burden, so the reward may not have positively affected worker motivation. Another possibility is that because the tasks were microtasks aimed at acquiring machine-learning training data, the base reward and the additional reward (half of the base reward) were small in absolute terms, although the compensation per working hour was not low. This small amount may not have motivated workers to work diligently on the second task to receive the additional reward.</p>
<p>Regarding motivation in relation to worker quality, additional rewards did not increase the continuation rate of high-quality workers. Rewards may not be the only motivating factor for such workers; the additional reward may have given them a certain degree of satisfaction and thereby discouraged them from continuing the task. This can be explained by the attribution process (Kelley and Michela, <xref ref-type="bibr" rid="B23">1980</xref>) from the field of social psychology, which describes how people infer why events in the real world, including their own actions, occur. The factors affecting human behavior can be broadly divided into internal and external factors (Weiner, <xref ref-type="bibr" rid="B40">1974</xref>). The task in this study required workers&#x00027; own effort to guess the names of flowers. In our experimental task, the internal factors might include the pleasure and enjoyment of engaging in the crowdsourcing task itself (Deng and Joshi, <xref ref-type="bibr" rid="B8">2016</xref>; Ye and Kankanhalli, <xref ref-type="bibr" rid="B42">2017</xref>) and the sense of personal growth obtained by finishing it (Deng and Joshi, <xref ref-type="bibr" rid="B8">2016</xref>; Feller et al., <xref ref-type="bibr" rid="B10">2012</xref>), whereas the external factor might be the reward obtained by working on the task (Taylor and Joshi, <xref ref-type="bibr" rid="B36">2019</xref>). Generally, monetary rewards are used as incentives in exchange for contributions to crowdsourcing tasks (Hann et al., <xref ref-type="bibr" rid="B14">2013</xref>; Khern-am-nuai et al., <xref ref-type="bibr" rid="B24">2018</xref>). In crowdsourcing services, being paid on task completion is common practice; therefore, the standard payment is not a strong external factor. In fact, the rewards for the tasks in our experiment were not higher than those for other crowdsourcing tasks: the task was divided into microtasks, and a standard payment was set considering the actual working hours. High-quality workers appear to have begun the task intending to take it seriously; in other words, they were motivated by internal factors. However, the additional reward offered after working on the task may have shifted the attributed cause of their action from an internal factor to the external factor of obtaining the reward. A study on young children&#x00027;s play (Lepper et al., <xref ref-type="bibr" rid="B25">1973</xref>) showed that switching attribution from internal to external factors results in a loss of motivation for the task, and our results appear consistent with this. Cappa et al. (<xref ref-type="bibr" rid="B4">2019</xref>) showed that financial and social rewards (explanations of social benefit) attracted more participation, indicating that both extrinsic and intrinsic motivation should be leveraged to increase participation and contributions; they also suggested that reward methods that negatively affect intrinsic motivation should be avoided. Based on the above, it is necessary to consider how to reward high-quality workers without damaging their motivation.</p>
<p>This study had some limitations. The experiment used a simple task, judging whether an example was true or false, modeled on the acquisition of training data for machine learning. Our results show that feedback on performance evaluation increases the continuation rate of high-quality workers and that additional rewards increase the overall performance of low-quality workers. We cannot confirm whether these results apply to more creative tasks, such as creating a tagline for a product or composing a theme song, or to tasks that require logical thinking to make judgments. The feedback in this experiment was simply the number of correct answers out of 20 questions and the percentage of correct answers. If emotional expressions were added to the feedback, the continuation rate might change. A person&#x00027;s altruistic behavior is influenced by empathy for the other person (Batson et al., <xref ref-type="bibr" rid="B3">1981</xref>); expressions of empathy and emotional praise for workers&#x00027; efforts may therefore influence task continuation. It has also been found that workers committed to the crowdsourcing community are more likely to engage in tasks voluntarily (Ghosh et al., <xref ref-type="bibr" rid="B12">2012</xref>). It may therefore help to indicate how much a worker&#x00027;s deliverables contributed to the target community (e.g., the discipline of biology or environmental studies, for the flower-identification task used in this experiment). Future work should conduct similar experiments on a variety of tasks and investigate the effects of different types of feedback, including emotional feedback. In particular, emotional appeals may influence workers&#x00027; internal motivation; for example, simply showing the difficulties a researcher faces may motivate workers to participate in a task. What kinds of appeals should be included in task instructions is an issue for future research.</p>
<p>In conclusion, this study examined whether presenting evaluations of work products (feedback) and additional rewards in crowdsourcing services can encourage workers to continue working on tasks and improve the quality of deliverables. We developed an experimental system in which feedback and additional rewards can each be switched on or off, and conducted an experiment on a real crowdsourcing service using this system. When only feedback was provided, the task continuation rate increased for high-quality workers; for low-quality workers, however, feedback did not reduce the task continuation rate. Feedback did not contribute to an increase in the correct-answer rate for either type of worker. Although providing both feedback and additional rewards reduced workers&#x00027; task continuation rate, the reduction was larger for low-quality workers than for high-quality workers. Furthermore, the combination significantly increased low-quality workers&#x00027; second-task scores, and across all workers, the second-task score was highest when both feedback and additional rewards were provided, although this difference was not statistically significant.</p>
<p>These results are not simple, but they suggest that businesses ordering crowdsourcing tasks should offer both feedback and additional rewards when the quality of deliverables is the priority, and only feedback when the quantity of deliverables is the priority. In future work, we aim to improve the quality of low-quality workers&#x00027; subsequent deliverables without decreasing the continuation rate of high-quality workers by refining the reward method and the messages in the feedback.</p></sec>
</body>
<back>
<sec sec-type="data-availability" id="s5">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec sec-type="ethics-statement" id="s6">
<title>Ethics statement</title>
<p>The studies involving humans were approved by the Research Ethics Review Committee of &#x0201C;Behavioral Studies on Human Subjects&#x0201D; at Kwansei Gakuin University. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required from the participants or the participants&#x00027; legal guardians/next of kin because the experiment was conducted online.</p>
</sec>
<sec sec-type="author-contributions" id="s7">
<title>Author contributions</title>
<p>YH: Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing, Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization. HI: Data curation, Investigation, Writing &#x02013; review &#x00026; editing.</p>
</sec>
<sec sec-type="funding-information" id="s8">
<title>Funding</title>
<p>The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by JSPS KAKENHI Grant Number JP19K12242 and JP23K28194, and was partially supported by JST CREST Grant Number JPMJCR20D4.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="s9">
<title>Generative AI statement</title>
<p>The author(s) declare that no Gen AI was used in the creation of this manuscript.</p></sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allahbakhsh</surname> <given-names>M.</given-names></name> <name><surname>Benatallah</surname> <given-names>B.</given-names></name> <name><surname>Ignjatovic</surname> <given-names>A.</given-names></name> <name><surname>Motahari-Nezhad</surname> <given-names>H. R.</given-names></name> <name><surname>Bertino</surname> <given-names>E.</given-names></name> <name><surname>Dustdar</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Quality control in crowdsourcing systems: issues and directions</article-title>. <source>IEEE Internet Comput.</source> <volume>17</volume>, <fpage>76</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1109/MIC.2013.20</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antikainen</surname> <given-names>M.</given-names></name> <name><surname>M&#x000E4;kip&#x000E4;&#x000E4;</surname> <given-names>M.</given-names></name> <name><surname>Ahonen</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Motivating and supporting collaboration in open innovation</article-title>. <source>Eur. J. Innov. Manag.</source> <volume>13</volume>, <fpage>100</fpage>&#x02013;<lpage>119</lpage>. <pub-id pub-id-type="doi">10.1108/14601061011013258</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Batson</surname> <given-names>C. D.</given-names></name> <name><surname>Duncan</surname> <given-names>B. D.</given-names></name> <name><surname>Ackerman</surname> <given-names>P.</given-names></name> <name><surname>Buckley</surname> <given-names>T.</given-names></name> <name><surname>Birch</surname> <given-names>K.</given-names></name></person-group> (<year>1981</year>). <article-title>Is empathic emotion a source of altruistic motivation?</article-title> <source>J. Pers. Soc. Psychol.</source> <volume>40</volume>, <fpage>290</fpage>&#x02013;<lpage>302</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.40.2.290</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cappa</surname> <given-names>F.</given-names></name> <name><surname>Rosso</surname> <given-names>F.</given-names></name> <name><surname>Hayes</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Monetary and social rewards for crowdsourcing</article-title>. <source>Sustainability</source> <volume>11</volume>, <fpage>2834</fpage>. <pub-id pub-id-type="doi">10.3390/su11102834</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chaffois</surname> <given-names>C.</given-names></name> <name><surname>Gillier</surname> <given-names>T.</given-names></name> <name><surname>Belkhouja</surname> <given-names>M.</given-names></name> <name><surname>Roth</surname> <given-names>Y.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;How task instructions impact the creativity of designers and ordinary participants in online idea generation,&#x0201D;</article-title> in <source>Proceedings of the 22nd Innovation Product Development Management Conference (IPDMC)</source> (<publisher-loc>Lyon</publisher-loc>: <publisher-name>HAL</publisher-name>).</citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cosley</surname> <given-names>D.</given-names></name> <name><surname>Frankowski</surname> <given-names>D.</given-names></name> <name><surname>Kiesler</surname> <given-names>S.</given-names></name> <name><surname>Terveen</surname> <given-names>L.</given-names></name> <name><surname>Riedl</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;How oversight improves member-maintained communities,&#x0201D;</article-title> in <source>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI &#x00027;05)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>). <pub-id pub-id-type="doi">10.1145/1054972.1054975</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>X.</given-names></name> <name><surname>Joshi</surname> <given-names>K. D.</given-names></name> <name><surname>Galliers</surname> <given-names>R. D.</given-names></name></person-group> (<year>2016</year>). <article-title>The duality of empowerment and marginalization in microtask crowdsourcing: giving voice to the less powerful through value sensitive design</article-title>. <source>MIS Q.</source> <volume>40</volume>, <fpage>279</fpage>&#x02013;<lpage>302</lpage>. <pub-id pub-id-type="doi">10.25300/MISQ/2016/40.2.01</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>X. N.</given-names></name> <name><surname>Joshi</surname> <given-names>K. D.</given-names></name></person-group> (<year>2016</year>). <article-title>Why individuals participate in micro-task crowdsourcing work environment: revealing crowdworkers&#x00027; perceptions</article-title>. <source>J. Assoc. Inf. Syst.</source> <volume>17</volume>, <fpage>711</fpage>&#x02013;<lpage>736</lpage>. <pub-id pub-id-type="doi">10.17705/1jais.00441</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dow</surname> <given-names>S.</given-names></name> <name><surname>Kulkarni</surname> <given-names>A. P.</given-names></name> <name><surname>Klemmer</surname> <given-names>S. R.</given-names></name> <name><surname>Hartmann</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Shepherding the crowd yields better work,&#x0201D;</article-title> in <source>Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW &#x00027;12)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>) <fpage>1013</fpage>&#x02013;<lpage>1022</lpage>. <pub-id pub-id-type="doi">10.1145/2145204.2145355</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feller</surname> <given-names>J.</given-names></name> <name><surname>Finnegan</surname> <given-names>P.</given-names></name> <name><surname>Hayes</surname> <given-names>J.</given-names></name> <name><surname>O&#x00027;Reilly</surname> <given-names>P.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x02018;Orchestrating&#x00027; sustainable crowdsourcing: a characterisation of solver brokerages</article-title>. <source>J. Strateg. Inf. Syst.</source> <volume>21</volume>, <fpage>216</fpage>&#x02013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1016/j.jsis.2012.03.002</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>Y.</given-names></name> <name><surname>Ye</surname> <given-names>H. J.</given-names></name> <name><surname>Yu</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Cui</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Gamification artifacts and crowdsourcing participation: examining the mediating role of intrinsic motivations</article-title>. <source>Comput. Hum. Behav</source>. <volume>81</volume>, <fpage>124</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2017.12.018</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghosh</surname> <given-names>R.</given-names></name> <name><surname>Reio</surname> <given-names>J. T. G.</given-names></name> <name><surname>Haynes</surname> <given-names>R. K.</given-names></name></person-group> (<year>2012</year>). <article-title>Mentoring and organizational citizenship behavior: estimating the mediating effects of organization-based self-esteem and affective commitment</article-title>. <source>Hum. Resour. Dev. Q.</source> <volume>23</volume>, <fpage>41</fpage>&#x02013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1002/hrdq.21121</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Goncalves</surname> <given-names>J.</given-names></name> <name><surname>Hosio</surname> <given-names>S.</given-names></name> <name><surname>Ferreira</surname> <given-names>D.</given-names></name> <name><surname>Kostakos</surname> <given-names>V.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;Game of words: tagging places through crowdsourcing on public displays,&#x0201D;</article-title> in <source>Proceedings of the 2014 ACM Conference on Designing Interactive Systems (DIS &#x00027;14)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>705</fpage>&#x02013;<lpage>714</lpage>. <pub-id pub-id-type="doi">10.1145/2598510.2598514</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hann</surname> <given-names>I.-H.</given-names></name> <name><surname>Roberts</surname> <given-names>J. A.</given-names></name> <name><surname>Slaughter</surname> <given-names>S. A.</given-names></name></person-group> (<year>2013</year>). <article-title>All are not equal: an examination of the economic returns to different forms of participation in open source software communities</article-title>. <source>Inf. Syst. Res.</source> <volume>24</volume>, <fpage>520</fpage>&#x02013;<lpage>538</lpage>. <pub-id pub-id-type="doi">10.1287/isre.2013.0474</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ho</surname> <given-names>C.-J.</given-names></name> <name><surname>Slivkins</surname> <given-names>A.</given-names></name> <name><surname>Suri</surname> <given-names>S.</given-names></name> <name><surname>Vaughan</surname> <given-names>J. W.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Incentivizing high quality crowdwork,&#x0201D;</article-title> in <source>Proceedings of the 24th International Conference on World Wide Web (WWW &#x00027;15)</source> (<publisher-loc>Republic and Canton of Geneva</publisher-loc>: <publisher-name>International World Wide Web Conferences Steering Committee</publisher-name>), <fpage>419</fpage>&#x02013;<lpage>429</lpage>. <pub-id pub-id-type="doi">10.1145/2736277.2741102</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>Y.</given-names></name> <name><surname>Kwak</surname> <given-names>H.</given-names></name> <name><surname>Baek</surname> <given-names>Y.</given-names></name> <name><surname>Moon</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Tower of babel: a crowdsourcing game building sentiment lexicons for resource-scarce languages,&#x0201D;</article-title> in <source>Proceedings of the ACM 22nd International Conference on World Wide Web (WWW &#x00027;13)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>549</fpage>&#x02013;<lpage>556</lpage>. <pub-id pub-id-type="doi">10.1145/2487788.2487993</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Howe</surname> <given-names>J.</given-names></name></person-group> (<year>2006</year>). <article-title>The Rise of Crowdsourcing</article-title>. <source>Wired</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.wired.com/wired/archive/14.06/crowds_pr.html">http://www.wired.com/wired/archive/14.06/crowds_pr.html</ext-link> (accessed June 1, 2006).</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>L.</given-names></name> <name><surname>Xie</surname> <given-names>G.</given-names></name> <name><surname>Blenkinsopp</surname> <given-names>J.</given-names></name> <name><surname>Huang</surname> <given-names>R.</given-names></name> <name><surname>Bin</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Crowdsourcing for sustainable urban logistics: exploring the factors influencing crowd workers&#x00027; participative behavior</article-title>. <source>Sustainability</source> <volume>12</volume>, <fpage>3091</fpage>. <pub-id pub-id-type="doi">10.3390/su12083091</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jackson</surname> <given-names>C.</given-names></name> <name><surname>&#x000D8;sterlund</surname> <given-names>C. S.</given-names></name> <name><surname>Mugar</surname> <given-names>G.</given-names></name> <name><surname>Hassman</surname> <given-names>K. D. V.</given-names></name> <name><surname>Crowston</surname> <given-names>K.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Motivations for sustained participation in crowdsourcing: case studies of citizen science on the role of talk,&#x0201D;</article-title> in <source>Proceedings of the 48th Hawaii International Conference on System Sciences</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1624</fpage>&#x02013;<lpage>1634</lpage>. <pub-id pub-id-type="doi">10.1109/HICSS.2015.196</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jian</surname> <given-names>L.</given-names></name> <name><surname>Yang</surname> <given-names>S.</given-names></name> <name><surname>Ba</surname> <given-names>S.</given-names></name> <name><surname>Lu</surname> <given-names>L.</given-names></name> <name><surname>Jiang</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Managing the crowds: the effect of prize guarantees and in-process feedback on participation in crowdsourcing contests</article-title>. <source>MIS Q.</source> <volume>43</volume>, <fpage>97</fpage>&#x02013;<lpage>112</lpage>. <pub-id pub-id-type="doi">10.25300/MISQ/2019/13649</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kashima</surname> <given-names>H.</given-names></name> <name><surname>Oyama</surname> <given-names>S.</given-names></name> <name><surname>Baba</surname> <given-names>Y.</given-names></name></person-group> (<year>2016</year>). <source>Human Computation and Crowdsourcing</source>. <publisher-loc>Tokyo</publisher-loc>: <publisher-name>Kodansha</publisher-name>.</citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kaufmann</surname> <given-names>N.</given-names></name> <name><surname>Schulze</surname> <given-names>T.</given-names></name> <name><surname>Veit</surname> <given-names>D.</given-names></name></person-group> (<year>2011</year>). <article-title>&#x0201C;More than fun and money. Worker motivation in crowdsourcing: a study on Mechanical Turk,&#x0201D;</article-title> in <source>Proceedings of the Seventeenth Americas Conference on Information Systems (AMCIS &#x00027;11)</source> (<publisher-loc>Atlanta, GA</publisher-loc>: <publisher-name>AIS</publisher-name>).</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kelley</surname> <given-names>H. H.</given-names></name> <name><surname>Michela</surname> <given-names>J. L.</given-names></name></person-group> (<year>1980</year>). <article-title>Attribution theory and research</article-title>. <source>Annu. Rev. Psychol.</source> <volume>31</volume>, <fpage>457</fpage>&#x02013;<lpage>501</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.ps.31.020180.002325</pub-id><pub-id pub-id-type="pmid">20809783</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khern-am-nuai</surname> <given-names>W.</given-names></name> <name><surname>Kannan</surname> <given-names>K.</given-names></name> <name><surname>Ghasemkhani</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Extrinsic vs. intrinsic rewards for contributing reviews in an online platform</article-title>. <source>Inf. Syst. Res.</source> <volume>29</volume>, <fpage>871</fpage>&#x02013;<lpage>892</lpage>. <pub-id pub-id-type="doi">10.1287/isre.2017.0750</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lepper</surname> <given-names>M. R.</given-names></name> <name><surname>Greene</surname> <given-names>D.</given-names></name> <name><surname>Nisbett</surname> <given-names>R. E.</given-names></name></person-group> (<year>1973</year>). <article-title>Undermining children&#x00027;s intrinsic interest with extrinsic reward: a test of the &#x0201C;overjustification&#x0201D; hypothesis</article-title>. <source>J. Pers. Soc. Psychol.</source> <volume>28</volume>, <fpage>129</fpage>&#x02013;<lpage>137</lpage>. <pub-id pub-id-type="doi">10.1037/h0035519</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>The effect of workers&#x00027; justice perception on continuance participation intention in the crowdsourcing market</article-title>. <source>Internet Res.</source> <volume>29</volume>, <fpage>1485</fpage>&#x02013;<lpage>1508</lpage>. <pub-id pub-id-type="doi">10.1108/INTR-02-2018-0060</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matsubara</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Preventing participation of insincere workers in crowdsourcing by using pay-for-performance payments</article-title>. <source>IEICE Trans. Inf. Syst.</source> <volume>E97D</volume>, <fpage>2415</fpage>&#x02013;<lpage>2422</lpage>. <pub-id pub-id-type="doi">10.1587/transinf.2013EDP7441</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morschheuser</surname> <given-names>B.</given-names></name> <name><surname>Hamari</surname> <given-names>J.</given-names></name> <name><surname>Maedche</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Cooperation or competition&#x02013;When do people contribute more? A field experiment on gamification of crowdsourcing</article-title>. <source>Int. J. Hum.-Comput. Stud.</source> <volume>127</volume>, <fpage>7</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijhcs.2018.10.001</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rockmann</surname> <given-names>K. W.</given-names></name> <name><surname>Ballinger</surname> <given-names>G. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Intrinsic motivation and organizational identification among on-demand workers</article-title>. <source>J. Appl. Psychol.</source> <volume>102</volume>, <fpage>1305</fpage>&#x02013;<lpage>1316</lpage>. <pub-id pub-id-type="doi">10.1037/apl0000224</pub-id><pub-id pub-id-type="pmid">28394148</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rogstadius</surname> <given-names>J.</given-names></name> <name><surname>Kostakos</surname> <given-names>V.</given-names></name> <name><surname>Kittur</surname> <given-names>A.</given-names></name> <name><surname>Smus</surname> <given-names>B.</given-names></name> <name><surname>Laredo</surname> <given-names>J.</given-names></name> <name><surname>Vukovic</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <article-title>An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets</article-title>. <source>Proc. 5th Int. AAAI Conf. Web Soc. Media</source> <volume>5</volume>, <fpage>321</fpage>&#x02013;<lpage>328</lpage>. <pub-id pub-id-type="doi">10.1609/icwsm.v5i1.14105</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ryan</surname> <given-names>R. M.</given-names></name> <name><surname>Deci</surname> <given-names>E. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being</article-title>. <source>Am. Psychol.</source> <volume>55</volume>, <fpage>68</fpage>&#x02013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1037/0003-066X.55.1.68</pub-id><pub-id pub-id-type="pmid">11392867</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soliman</surname> <given-names>W.</given-names></name> <name><surname>Tuunainen</surname> <given-names>V. K.</given-names></name></person-group> (<year>2015</year>). <article-title>Understanding continued use of crowdsourcing systems: an interpretive study</article-title>. <source>J. Theor. Appl. Electron. Commer. Res.</source> <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.4067/S0718-18762015000100002</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Fang</surname> <given-names>Y.</given-names></name> <name><surname>Lim</surname> <given-names>K. H.</given-names></name></person-group> (<year>2012</year>). <article-title>Understanding sustained participation in transactional virtual communities</article-title>. <source>Decis. Support Syst.</source> <volume>53</volume>, <fpage>12</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.dss.2011.10.006</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>N.</given-names></name> <name><surname>Yin</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name></person-group> (<year>2015</year>). <article-title>Understanding the relationships between motivators and effort in crowdsourcing marketplaces: a nonlinear analysis</article-title>. <source>Int. J. Inf. Manag.</source> <volume>35</volume>, <fpage>267</fpage>&#x02013;<lpage>276</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijinfomgt.2015.01.009</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tauchmann</surname> <given-names>C.</given-names></name> <name><surname>Daxenberger</surname> <given-names>J.</given-names></name> <name><surname>Mieskes</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;The influence of input data complexity on crowdsourcing quality,&#x0201D;</article-title> in <source>Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (IUI &#x00027;20)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>71</fpage>&#x02013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1145/3379336.3381499</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taylor</surname> <given-names>J.</given-names></name> <name><surname>Joshi</surname> <given-names>K. D.</given-names></name></person-group> (<year>2019</year>). <article-title>Joining the crowd: the career anchors of information technology workers participating in crowdsourcing</article-title>. <source>Inf. Syst. J.</source> <volume>29</volume>, <fpage>641</fpage>&#x02013;<lpage>673</lpage>. <pub-id pub-id-type="doi">10.1111/isj.12225</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Uhlgren</surname> <given-names>V.-V.</given-names></name> <name><surname>Laato</surname> <given-names>S.</given-names></name> <name><surname>Hamari</surname> <given-names>J.</given-names></name> <name><surname>Nummenmaa</surname> <given-names>T.</given-names></name></person-group> (<year>2024</year>). <article-title>&#x0201C;Gamification to motivate the crowdsourcing of dynamic nature data: a field experiment in northern Europe,&#x0201D;</article-title> in <source>Proceedings of the 27th International Academic Mindtrek Conference (Mindtrek&#x00027;24)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>230</fpage>&#x02013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1145/3681716.3689440</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wasko</surname> <given-names>M. M.</given-names></name> <name><surname>Faraj</surname> <given-names>S.</given-names></name></person-group> (<year>2005</year>). <article-title>Why should I share? Examining social capital and knowledge contribution in electronic networks of practice</article-title>. <source>MIS Q.</source> <volume>29</volume>, <fpage>35</fpage>&#x02013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.2307/25148667</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Watts</surname> <given-names>D. J.</given-names></name> <name><surname>Mason</surname> <given-names>W.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;Financial incentives and the &#x02018;performance of crowds&#x00027;,&#x0201D;</article-title> in <source>Proceedings of the ACM SIGKDD Workshop on Human Computation (HCOMP &#x00027;09)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>77</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1145/1600150.1600175</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Weiner</surname> <given-names>B.</given-names></name></person-group> (<year>1974</year>). <source>Achievement Motivation and Attribution Theory</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>General Learning Press</publisher-name>.</citation>
</ref>
<ref id="B41">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Adamic</surname> <given-names>L. A.</given-names></name> <name><surname>Ackerman</surname> <given-names>M. S.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Crowdsourcing and knowledge sharing: strategic user behavior on TaskCN,&#x0201D;</article-title> in <source>Proceedings of the 9th ACM Conference on Electronic Commerce (EC&#x00027;08)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>246</fpage>&#x02013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1145/1386790.1386829</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ye</surname> <given-names>H. J.</given-names></name> <name><surname>Kankanhalli</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Solvers&#x00027; participation in crowdsourcing platforms: examining the impacts of trust, and benefit and cost factors</article-title>. <source>J. Strateg. Inf. Syst.</source> <volume>26</volume>, <fpage>101</fpage>&#x02013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1016/j.jsis.2017.02.001</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Sun</surname> <given-names>Y.-A.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;The effects of performance-contingent financial incentives in online labor markets,&#x0201D;</article-title> in <source>Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI &#x00027;13)</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>AAAI</publisher-name>), <fpage>1191</fpage>&#x02013;<lpage>1197</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v27i1.8461</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>X.</given-names></name> <name><surname>Zhu</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>W.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name></person-group> (<year>2022</year>). <article-title>Motivating participation in crowdsourcing contests: the role of instruction-writing strategy</article-title>. <source>Inf. Manag.</source> <volume>59</volume>, <fpage>103616</fpage>. <pub-id pub-id-type="doi">10.1016/j.im.2022.103616</pub-id></citation>
</ref>
</ref-list>
</back>
</article> 