<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.00307</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Weight lifting can facilitate appreciative comprehension for museum exhibits</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Yamada</surname> <given-names>Yuki</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/50443"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Harada</surname> <given-names>Shinya</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/149402"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Choi</surname> <given-names>Wonje</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Fujino</surname> <given-names>Rika</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/149227"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tokunaga</surname> <given-names>Akinobu</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Gao</surname> <given-names>YueYun</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/149248"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Miura</surname> <given-names>Kayo</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Faculty of Arts and Science, Kyushu University</institution> <country>Fukuoka, Japan</country></aff>
<aff id="aff2"><sup>2</sup><institution>Graduate School of Human-Environment Studies, Kyushu University</institution> <country>Fukuoka, Japan</country></aff>
<aff id="aff3"><sup>3</sup><institution>Graduate School of Integrated Frontier Sciences, Kyushu University</institution> <country>Fukuoka, Japan</country></aff>
<aff id="aff4"><sup>4</sup><institution>Faculty of Human-Environment Studies, Kyushu University</institution> <country>Fukuoka, Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Bettina Forster, City University London, UK</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Serge Thill, University of Sk&#x000F6;vde, Sweden; Cosimo Urgesi, University of Udine, Italy</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: Yuki Yamada, Faculty of Arts and Science, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan e-mail: <email>yamadayuk&#x00040;gmail.com</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Cognitive Science, a section of the journal Frontiers in Psychology.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>14</day>
<month>04</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>307</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>11</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>25</day>
<month>03</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Yamada, Harada, Choi, Fujino, Tokunaga, Gao and Miura.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Appreciation of exhibits in a museum can be equated to a virtual experience of lives in the contexts originally surrounding the exhibits. Here we focused on the importance of weight information and tested whether experiencing a weight during museum exhibit appreciation affects the beholders&#x00027; satisfaction and recognition memory for the exhibits. An experiment was performed at a museum exhibiting skeletal preparations of animals. We used nine preparations and prepared four weight stimuli as weight cues in accordance with the actual weights of four of the preparations; the remaining five preparations were displayed without weight stimuli. In the cued condition, participants were asked to lift the weight stimuli during their observation of the four exhibits. In the uncued condition, participants observed the exhibits without touching the weight stimuli. After observing the exhibits, the participants completed a questionnaire that measured their impressions of the exhibits and the museum, and performed a recognition test on the exhibits. Results showed that memory performance was better and viewing duration was longer with the weight-lifting instruction than without it. A factor analysis on the questionnaires revealed four factors (likability, contentment, value, and quality). A path analysis showed indirect effects of viewing duration on memory performance and willingness-to-pay (WTP) for the museum appreciation through the impression factors. Our findings provide insight into a new interactive exhibition style that enables long appreciation, producing positive effects on visitors&#x00027; impressions, memory, and value estimation for exhibits.</p>
</abstract>
<kwd-group>
<kwd>museology</kwd>
<kwd>memory</kwd>
<kwd>haptic</kwd>
<kwd>information integration</kwd>
<kwd>appreciation</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="23"/>
<page-count count="7"/>
<word-count count="4755"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Through visiting museums, we are able to come into contact with things not usually seen in our daily lives, such as rare creatures, historical relics, archaeological remains, and artwork. By appreciating museum exhibits, we can vividly experience virtual lives in the spatial and temporal contexts that originally encompassed the exhibited items. Great progress in recent technologies related to virtual and augmented reality has allowed museums to incorporate three-dimensional (3D) representations into their exhibits (Walczak and White, <xref ref-type="bibr" rid="B21">2003</xref>; Hirose, <xref ref-type="bibr" rid="B11">2006</xref>; Petridis et al., <xref ref-type="bibr" rid="B17">2006</xref>). Simultaneously, multimodal displays based on similar technology have been installed to appeal not only to vision but also to the auditory, haptic, and olfactory modalities (Butler and Neave, <xref ref-type="bibr" rid="B4">2008</xref>; Figueroa et al., <xref ref-type="bibr" rid="B7">2009</xref>). Such new methods contribute to the diversity of presentation within and between museums, forming a new category of &#x0201C;virtual museum&#x0201D; that distributes digital replicas of exhibits to each individual, going beyond the conventional static presentations that remain inside glass cases.</p>
<p>Multimodal displays are likely to facilitate a deeper understanding of museum exhibits. Indeed, it has been found that when haptic devices are incorporated into exhibits, visitors take more time to appreciate them (Butler and Neave, <xref ref-type="bibr" rid="B4">2008</xref>); moreover, the inclusion of haptic devices in exhibits has received positive evaluations from not only visitors, but also museum curators and researchers (Asano et al., <xref ref-type="bibr" rid="B2">2005</xref>). Thus, given that one of the objectives of museums is to educate visitors about their exhibits, it is important to improve visitors&#x00027; impressions of exhibits and to encourage longer appreciation times by adding haptic information.</p>
<p>Directly touching museum exhibits can provide visitors with more information about the texture, weight, and materials of items than viewing alone can. In cognitive psychology, it has been suggested that providing more information about unknown things can elicit positive affect (Biederman and Vessel, <xref ref-type="bibr" rid="B3">2006</xref>; Yamada et al., <xref ref-type="bibr" rid="B22">2012</xref>, <xref ref-type="bibr" rid="B23">2013</xref>). These findings may explain why the addition of haptic information to exhibits leads to more positive impressions of them. However, there is a dilemma in introducing such touchable museum exhibits. Exhibits have traditionally been susceptible to age-related natural deterioration and to damage during transport, which can affect their condition to varying degrees; touching by visitors&#x00027; hands can further accelerate this deterioration.</p>
<p>To overcome this dilemma, in our study we focused on the addition of a simple object, a box, that has the same weight as the exhibit. Weight is information for which &#x0201C;surrogates&#x0201D; (i.e., information carriers that do not require the direct touching of original exhibit items) can be easily created, and it does not seem to be directly related to positive or negative emotions regarding the exhibit. Moreover, for experimental investigation, weight stimuli are easier to prepare and control than texture and material stimuli. Humans perceive the weight of a stimulus based on outputs from the proprioceptive and cutaneous sensors that constitute haptic perception (Jones, <xref ref-type="bibr" rid="B13">1986</xref>; Flanagan et al., <xref ref-type="bibr" rid="B9">1995</xref>; Flanagan and Wing, <xref ref-type="bibr" rid="B8">1997</xref>). Moreover, many previous psychophysical studies have revealed that haptic perception sometimes alters the appearance of visual stimuli (Ernst et al., <xref ref-type="bibr" rid="B6">2000</xref>; Violentyev et al., <xref ref-type="bibr" rid="B20">2005</xref>; Ide and Hidaka, <xref ref-type="bibr" rid="B12">2013</xref>). Thus, we hypothesized that weight cues would provide a portion of the available haptic information about museum exhibit items without direct touch, thereby having positive effects on visitors&#x00027; visual museum appreciation.</p>
<p>The goal of the present study was to examine whether weight cues affect visitor appreciation of museum exhibits. In our experiment, we prepared polystyrene foam boxes containing sand in accordance with the actual weight of each exhibit. We divided the participants into two conditions; in one, the participants had the opportunity to pick up the weight stimuli with their hands (cued condition), and in the other the participants had no opportunity to do so (uncued condition). We predicted that if our hypothesis was correct, that is, that weight cues affect participants&#x00027; appreciation of exhibit items, then the participants in the cued condition would (1) have significantly more positive perceptions of the exhibit appreciation experience and of the museum itself, (2) recognize a greater number of the exhibits, (3) look at exhibits for longer periods of time, and (4) be willing to pay more money to experience such appreciation compared with the participants in the uncued condition.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Forty-two graduate and undergraduate students attending Kyushu University participated in the experiment (12 men, 30 women; mean age &#x0003D; 21.7 years). The participants were unaware of the purpose of the experiment. The experiment was conducted according to the principles laid down in the Declaration of Helsinki. Written informed consent was obtained from all participants after the nature and possible consequences of the study were explained to them. The ethics committee of Kyushu University approved the protocol. A total of 20 (3 men, 17 women) and 22 (9 men, 13 women) participants were randomly assigned to the cued and uncued conditions, respectively, as described below.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>The experiment was conducted at a museum belonging to Kyushu University (Figure <xref ref-type="fig" rid="F1">1A</xref>; First pavilion of The Kyushu University Museum: <ext-link ext-link-type="uri" xlink:href="http://www.museum.kyushu-u.ac.jp/english/index.html">http://www.museum.kyushu-u.ac.jp/english/index.html</ext-link>). An exhibition room displaying skeletal preparations of animals in glass cases was used (Figure <xref ref-type="fig" rid="F1">1C</xref>). There were nine skeletal preparations in total; weight stimuli were presented with only four of them (box-present condition), and no weight stimuli were presented with the remaining five (box-absent condition). The exhibit items for the box-present condition were the babirusa [<italic>Babyrousa babyrussa</italic>], Indian elephant [<italic>Elephas maximus indicus</italic>], short-finned pilot whale [<italic>Globicephala macrorhynchus</italic>], and water buffalo [<italic>Bubalus arnee</italic>]; the items for the box-absent condition were the camel [<italic>Camelidae</italic>], reindeer [<italic>Rangifer tarandus</italic>], pygmy sperm whale [<italic>Kogia breviceps</italic>], sheep [<italic>Ovis aries</italic>], and sun bear [<italic>Helarctos malayanus</italic>] (Figure <xref ref-type="fig" rid="F1">1D</xref>). Participants in both the cued and uncued conditions appreciated all of these skeletal preparations. As weight cues, we created weight stimuli with identical appearances but masses corresponding to the actual weights of the four skeletal preparations (babirusa: 792.7 g; Indian elephant: 5114.6 g; short-finned pilot whale: 3276.4 g; water buffalo: 1554.7 g) used in the box-present condition (Figure <xref ref-type="fig" rid="F1">1B</xref>). The weight stimuli were blue polystyrene foam boxes that contained amounts of sand giving each box its respective weight. Each weight stimulus was placed in front of the corresponding skeletal preparation. No boxes were prepared for the five skeletal preparations in the box-absent condition.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Appearance of the first pavilion of The Kyushu University Museum. <bold>(B)</bold> A box of the weight cue used in the experiment. <bold>(C)</bold> Layout and use of the weight cues. <bold>(D)</bold> Examples of skeletal preparations used in the experiment.</p></caption>
<graphic xlink:href="fpsyg-05-00307-g0001.tif"/>
</fig>
<p>A paper-based questionnaire with a 7-point Likert-style scale (from 1 &#x0201C;I don&#x00027;t think so at all&#x0201D; to 7 &#x0201C;I strongly think so&#x0201D;) was employed. The questionnaire consisted of two parts. The first part included 14 questions regarding the participants&#x00027; impressions of their museum appreciation experience, namely, &#x0201C;<italic>like</italic>,&#x0201D; &#x0201C;<italic>dislike</italic>,&#x0201D; &#x0201C;<italic>satisfied</italic>,&#x0201D; &#x0201C;<italic>dissatisfied</italic>,&#x0201D; &#x0201C;<italic>approachable</italic>,&#x0201D; &#x0201C;<italic>novel</italic>,&#x0201D; &#x0201C;<italic>surprised</italic>,&#x0201D; &#x0201C;<italic>boring</italic>,&#x0201D; &#x0201C;<italic>pleased</italic>,&#x0201D; &#x0201C;<italic>interesting</italic>,&#x0201D; &#x0201C;<italic>awful</italic>,&#x0201D; &#x0201C;<italic>exciting</italic>,&#x0201D; &#x0201C;<italic>enjoyable</italic>,&#x0201D; and &#x0201C;<italic>refreshing</italic>.&#x0201D; The second part included 11 questions regarding the participants&#x00027; impressions of the museum itself, namely, &#x0201C;<italic>scientific</italic>,&#x0201D; &#x0201C;<italic>better-than-expected</italic>,&#x0201D; &#x0201C;<italic>less-than-expected</italic>,&#x0201D; &#x0201C;<italic>dignified</italic>,&#x0201D; &#x0201C;<italic>ingenious</italic>,&#x0201D; &#x0201C;<italic>realistic</italic>,&#x0201D; &#x0201C;<italic>exhibits-enriched</italic>,&#x0201D; &#x0201C;<italic>recommendable to friends</italic>,&#x0201D; &#x0201C;<italic>increased my desire to learn</italic>,&#x0201D; &#x0201C;<italic>historic</italic>,&#x0201D; and &#x0201C;<italic>made me want to visit again</italic>.&#x0201D;</p>
</sec>
<sec>
<title>Procedure</title>
<p>Participants were instructed that they could freely view the exhibits in the museum. The participants assigned to the cued condition were additionally instructed as follows: &#x0201C;You&#x00027;ll find boxes in front of some exhibits. These boxes were prepared by the museum staff, and they have the same weight as the exhibit. Please make an effort to pick each box up to experience the actual weight of each exhibit.&#x0201D; On the other hand, the participants assigned to the uncued condition were instructed as follows: &#x0201C;You&#x00027;ll find boxes in front of some exhibits. These boxes contain documents available only to the museum staff. Please do not touch them.&#x0201D; After the participants had viewed the museum exhibits, we asked them how much they would be willing to pay, in Japanese yen, for the same museum experience (100 Japanese yen was nearly equivalent to one USD at the time of the experiment). Then, we carried out a memory (recognition) test on the names of the exhibits, with no time limit. The test items consisted of the nine items displayed in the museum and 10 filler items that were not actually displayed (capybara, crocodile, giraffe, great Indian rhinoceros, hartebeest, hippopotamus, Malayan tapir, Reeves&#x00027;s muntjac, shearwater, and striped dolphin). Furthermore, we measured each participant&#x00027;s viewing duration with a stopwatch.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>A post-experiment interview revealed that one female participant had previously visited the museum, so her data were excluded from analysis. We compared recognition performance, viewing duration, and willingness-to-pay (WTP) between the cued and uncued conditions. Recognition performance was indexed by the A&#x00027; (Figure <xref ref-type="fig" rid="F2">2A</xref>) and B&#x0201D;D measures (Donaldson, <xref ref-type="bibr" rid="B5">1992</xref>). The viewing duration for each exhibit was calculated by dividing the total duration by the number of exhibits (i.e., nine) and was analyzed after log-transformation (Figure <xref ref-type="fig" rid="F2">2B</xref>).</p>
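<p>As a minimal sketch, the two nonparametric recognition indices used here can be computed from hit and false-alarm rates with the standard formulas described by Donaldson (1992); the function names and the example rates below are illustrative, not taken from the study&#x00027;s data.</p>

```python
def a_prime(h, f):
    """Nonparametric sensitivity index A' from hit rate h and
    false-alarm rate f; 0.5 means chance, 1.0 perfect recognition."""
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form when false alarms exceed hits
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def b_double_prime_d(h, f):
    """Nonparametric response-bias index B''_D (Donaldson, 1992);
    0 means no bias, positive values a conservative criterion."""
    return ((1 - h) * (1 - f) - h * f) / ((1 - h) * (1 - f) + h * f)

# Illustrative rates: 8 of 9 old items recognized,
# 1 of 10 filler items falsely recognized
sensitivity = a_prime(8 / 9, 1 / 10)
bias = b_double_prime_d(8 / 9, 1 / 10)
```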
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>The experimental results for (A) recognition performance, (B) viewing duration of each exhibit, and (C) willingness to pay</bold>. Error bars denote standard errors of the means.</p></caption>
<graphic xlink:href="fpsyg-05-00307-g0002.tif"/>
</fig>
<p>For recognition performance, a mixed analysis of variance of the A&#x00027; measure with cue condition as a between-subjects factor and box condition as a within-subjects factor showed that only the main effect of cue was significant, <italic>F</italic><sub>(1, 39)</sub> &#x0003D; 5.31, <italic>p</italic> &#x0003C; 0.03, &#x003B7;<sup>2</sup><sub><italic>p</italic></sub> &#x0003D; 0.14. Although there was no significant interaction between the cue and box conditions, we conducted paired <italic>t</italic>-tests separately; these showed that the difference between the box-present and box-absent conditions was significant for participants in the cued condition, <italic>t</italic><sub>(18)</sub> &#x0003D; 2.32, <italic>p</italic> &#x0003C; 0.04, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.82, whereas no significant difference between these conditions was found for participants in the uncued condition, <italic>t</italic><sub>(21)</sub> &#x0003D; 0.29, <italic>p</italic> &#x0003E; 0.77, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.08. A two-sample <italic>t</italic>-test showed that the difference between the cued and uncued conditions was significant for the box-present condition, <italic>t</italic><sub>(39)</sub> &#x0003D; 2.60, <italic>p</italic> &#x0003C; 0.02, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.85. In contrast, the ANOVA and individual <italic>t</italic>-tests showed no significant effects for the B&#x0201D;D measure.</p>
<p>As for the viewing duration, a two-sample <italic>t</italic>-test revealed that participants in the cued condition viewed the exhibits significantly longer than participants in the uncued condition, <italic>t</italic><sub>(39)</sub> &#x0003D; 2.50, <italic>p</italic> &#x0003C; 0.02, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.80.</p>
<p>As for WTP, a two-sample <italic>t</italic>-test showed no significant difference between the cued and uncued conditions, <italic>t</italic><sub>(39)</sub> &#x0003D; 1.53, <italic>p</italic> &#x0003E; 0.13, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.49 (Figure <xref ref-type="fig" rid="F2">2C</xref>).</p>
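<p>The two-sample comparisons above can be reproduced with standard tools; the sketch below uses SciPy&#x00027;s two-sample <italic>t</italic>-test and computes Cohen&#x00027;s <italic>d</italic> with the pooled standard deviation. The data are simulated placeholders, not the study&#x00027;s measurements.</p>

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                  / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
cued = rng.normal(0.85, 0.10, 19)    # simulated A' scores, cued group (n = 19)
uncued = rng.normal(0.75, 0.10, 22)  # simulated A' scores, uncued group (n = 22)

t, p = stats.ttest_ind(cued, uncued)  # two-sample t-test, df = 39
d = cohens_d(cued, uncued)
```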
<p>To investigate the relationship between the indices analyzed above and participants&#x00027; impressions of the exhibit experience and the museum, we conducted an exploratory factor analysis on the individual questionnaire data, extracting factors with the unweighted least-squares method and promax rotation after reverse-scored items were adjusted. Bartlett&#x00027;s test of sphericity showed significant &#x003C7;<sup>2</sup> values [exhibit appreciation scale: &#x003C7;<sup>2</sup><sub>(91)</sub> &#x0003D; 423.73, <italic>p</italic> &#x0003C; 0.0001; museum scale: &#x003C7;<sup>2</sup><sub>(55)</sub> &#x0003D; 163.54, <italic>p</italic> &#x0003C; 0.0001]. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.78 and 0.67 for the exhibit appreciation and museum scales, respectively, suggesting that these data were suitable for factor analysis. The scree plots suggested that a two-factor solution could be extracted from each scale (Tables <xref ref-type="table" rid="T1">1</xref>, <xref ref-type="table" rid="T2">2</xref>). These factors explained 59.2% and 48.1% of the total variance in the exhibit appreciation and museum scales, respectively. For the participants&#x00027; exhibit appreciation, the first and second factors comprised items on the &#x0201C;likability&#x0201D; of and &#x0201C;contentment&#x0201D; with the exhibit appreciation experience, respectively. For the impressions of the museum, the first and second factors comprised items on the &#x0201C;value&#x0201D; and &#x0201C;quality&#x0201D; of the museum, respectively.</p>
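<p>Bartlett&#x00027;s test of sphericity has a closed form: &#x003C7;<sup>2</sup> &#x0003D; &#x02212;(n &#x02212; 1 &#x02212; (2p &#x0002B; 5)/6) ln|R|, with p(p &#x02212; 1)/2 degrees of freedom, where R is the item correlation matrix, n the number of respondents, and p the number of items. A minimal sketch (the function name and sample data are illustrative):</p>

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: tests whether the item correlation
    matrix differs from the identity, i.e., whether the items share enough
    correlation to be worth factor-analyzing."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)          # p x p correlation matrix
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)        # statistic, df, p-value

# Any n-respondents x p-items Likert matrix works; 41 x 14 mirrors
# the appreciation scale's dimensions and yields df = 91.
chi2, df, p_value = bartlett_sphericity(
    np.random.default_rng(1).normal(size=(41, 14)))
```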
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Factor loadings for the items in the appreciation scale after promax rotation</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" colspan="2"><bold>Item</bold></th>
<th align="center" valign="top" colspan="2"><bold>Factor</bold></th>
<th align="left" valign="top" ><bold><italic>h</italic><sup>2</sup></bold></th>
</tr>
<tr>
<th/>
<th/>
<th align="center" valign="top"><bold>Likability</bold></th>
<th align="center" valign="top"><bold>Contentment</bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top" colspan="2">Pleased</td>
<td align="center" valign="top"><bold>0.948</bold></td>
<td align="center" valign="top">0.129</td>
<td align="center" valign="top">0.824</td>
</tr>
<tr>
<td align="left" colspan="2">Interesting</td>
<td align="center" valign="top"><bold>0.892</bold></td>
<td align="center" valign="top">0.116</td>
<td align="center" valign="top">0.777</td>
</tr>
<tr>
<td align="left" colspan="2">Like</td>
<td align="center" valign="top"><bold>0.868</bold></td>
<td align="center" valign="top">0.012</td>
<td align="center" valign="top">0.804</td>
</tr>
<tr>
<td align="left" colspan="2">Enjoyable</td>
<td align="center" valign="top"><bold>0.823</bold></td>
<td align="center" valign="top">&#x02212;0.031</td>
<td align="center" valign="top">0.850</td>
</tr>
<tr>
<td align="left" colspan="2">Dislike</td>
<td align="center" valign="top"><bold>0.758</bold></td>
<td align="center" valign="top">&#x02212;0.146</td>
<td align="center" valign="top">0.845</td>
</tr>
<tr>
<td align="left" colspan="2">Refleshing</td>
<td align="center" valign="top"><bold>0.750</bold></td>
<td align="center" valign="top">0.037</td>
<td align="center" valign="top">0.760</td>
</tr>
<tr>
<td align="left" colspan="2">Approachable</td>
<td align="center" valign="top"><bold>0.697</bold></td>
<td align="center" valign="top">0.066</td>
<td align="center" valign="top">0.569</td>
</tr>
<tr>
<td align="left" colspan="2">Awful</td>
<td align="center" valign="top"><bold>0.486</bold></td>
<td align="center" valign="top">&#x02212;0.041</td>
<td align="center" valign="top">0.604</td>
</tr>
<tr>
<td align="left" colspan="2">Dissatisfied</td>
<td align="center" valign="top">0.117</td>
<td align="center" valign="top"><bold>0.769</bold></td>
<td align="center" valign="top">0.594</td>
</tr>
<tr>
<td align="left" colspan="2">Satisfied</td>
<td align="center" valign="top">&#x02212;0.325</td>
<td align="center" valign="top"><bold>0.769</bold></td>
<td align="center" valign="top">0.806</td>
</tr>
<tr>
<td align="left" colspan="2">Boring</td>
<td align="center" valign="top">0.030</td>
<td align="center" valign="top"><bold>0.714</bold></td>
<td align="center" valign="top">0.555</td>
</tr>
<tr>
<td align="left" colspan="2">Novel</td>
<td align="center" valign="top">&#x02212;0.151</td>
<td align="center" valign="top"><bold>&#x02212;0.550</bold></td>
<td align="center" valign="top">0.481</td>
</tr>
<tr>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;&#x000A0;Correlation</td>
<td align="center" valign="top">F1</td>
<td align="center" valign="top">1.000</td>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;&#x000A0;between factors</td>
<td align="center" valign="top">F2</td>
<td align="center" valign="top">&#x02212;0.431</td>
<td align="center" valign="top">1.000</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Bold values indicate the items that constitute the corresponding factor</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Factor loadings for the items in the museum scale after promax rotation</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" colspan="2"><bold>Item</bold></th>
<th align="center" colspan="2"><bold>Factor</bold></th>
<th align="center" valign="top"><bold><italic>h</italic><sup>2</sup></bold></th>
</tr>
<tr>
<th align="left" colspan="2"></th>
<th align="center" valign="top"><bold>Likability</bold></th>
<th align="center" valign="top"><bold>Contentment</bold></th>
<th/>
</tr>
</thead>
<tbody>
<tr>
<td align="left" colspan="2">Made me want to visit again</td>
<td align="center" valign="top"><bold>0.996</bold></td>
<td align="center" valign="top">&#x02212;0.168</td>
<td align="center" valign="top">0.673</td>
</tr>
<tr>
<td align="left" colspan="2">Recommendable to friends</td>
<td align="center" valign="top"><bold>0.744</bold></td>
<td align="center" valign="top">0.240</td>
<td align="center" valign="top">0.747</td>
</tr>
<tr>
<td align="left" colspan="2">Increased my desire to learn</td>
<td align="center" valign="top"><bold>0.607</bold></td>
<td align="center" valign="top">0.094</td>
<td align="center" valign="top">0.514</td>
</tr>
<tr>
<td align="left" colspan="2">Historic</td>
<td align="center" valign="top"><bold>0.381</bold></td>
<td align="center" valign="top">0.058</td>
<td align="center" valign="top">0.282</td>
</tr>
<tr>
<td align="left" colspan="2">Exhibits-enriched</td>
<td align="center" valign="top">&#x02212;0.162</td>
<td align="center" valign="top"><bold>0.908</bold></td>
<td align="center" valign="top">0.492</td>
</tr>
<tr>
<td align="left" colspan="2">Better-than-expected</td>
<td align="center" valign="top">0.158</td>
<td align="center" valign="top"><bold>0.625</bold></td>
<td align="center" valign="top">0.537</td>
</tr>
<tr>
<td align="left" colspan="2">Dignified</td>
<td align="center" valign="top">0.113</td>
<td align="center" valign="top"><bold>0.496</bold></td>
<td align="center" valign="top">0.479</td>
</tr>
<tr>
<td align="left" colspan="2">Scientific</td>
<td align="center" valign="top">0.024</td>
<td align="center" valign="top"><bold>0.478</bold></td>
<td align="center" valign="top">0.256</td>
</tr>
<tr>
<td align="left" colspan="2">Less-than-expected</td>
<td align="center" valign="top">0.129</td>
<td align="center" valign="top"><bold>0.450</bold></td>
<td align="center" valign="top">0.451</td>
</tr>
<tr>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;&#x000A0;Correlation</td>
<td align="center" valign="top">F1</td>
<td align="center" valign="top">1.000</td>
<td/>
<td/>
</tr>
<tr>
<td align="left" valign="top">&#x000A0;&#x000A0;&#x000A0;&#x000A0;between factors</td>
<td align="center" valign="top">F2</td>
<td align="center" valign="top">0.588</td>
<td align="center" valign="top">1.000</td>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Bold values indicate the items that constitute the corresponding factor</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>Moreover, using each of the indices (recognition performance, viewing duration, and WTP) and the four factor scores, we performed a path analysis (Figure <xref ref-type="fig" rid="F3">3</xref>). The goodness of fit of the model to the data was high, &#x003C7;<sup>2</sup><sub>(15)</sub> &#x0003D; 24.95, <italic>p</italic> &#x0003E; 0.05; RMR &#x0003D; 0.038; GFI &#x0003D; 0.856; AGFI &#x0003D; 0.731. We estimated 95% confidence intervals using a Bayesian framework with 100,000 iterations after a burn-in of 20,000 iterations. Through this resampling, all parameters converged under the convergence criterion of 1.002 (Gelman et al., <xref ref-type="bibr" rid="B10">2004</xref>). The indirect effects of viewing duration on recognition performance and WTP were significant; the 95% confidence intervals of the standardized indirect effects were 0.019 to 0.222 and 0.047 to 0.246, respectively.</p>
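<p>An indirect effect in such a path model is the product of the component path coefficients (e.g., the duration &#x02192; impression path times the impression &#x02192; WTP path). The paper estimated its interval with Bayesian resampling; a percentile bootstrap of the product, sketched below with simulated data and illustrative variable names, is a common nonparametric alternative for the same quantity.</p>

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect x -> m -> y:
    the slope of m on x times the slope of y on m (controlling for x),
    recomputed over resampled data sets."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    n = len(x)
    prods = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                       # path x -> m
        design = np.column_stack([ms, xs, np.ones(n)])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][0]  # path m -> y, given x
        prods[i] = a * b
    return np.percentile(prods, [2.5, 97.5])

# Simulated mediation: x raises m, which in turn raises y
rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.6 * x + rng.normal(scale=0.5, size=200)
y = 0.5 * m + rng.normal(scale=0.5, size=200)
lo, hi = bootstrap_indirect(x, m, y)  # CI excluding 0 => significant effect
```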
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Result of a path analysis</bold>. The path coefficients represent standardized partial regression coefficients. <sup>&#x0002A;</sup><italic>p</italic> &#x0003C; 0.05; <sup>&#x0002A;&#x0002A;</sup><italic>p</italic> &#x0003C; 0.01; <sup>&#x0002A;&#x0002A;&#x0002A;</sup><italic>p</italic> &#x0003C; 0.001.</p></caption>
<graphic xlink:href="fpsyg-05-00307-g0003.tif"/>
</fig>
<p>Furthermore, we compared the four factor scores between the cued and uncued conditions. The results showed no differences, suggesting that the manipulation of the weight cue did not directly affect the participants&#x00027; impressions.</p>
<p>Finally, since one could argue that gender differences might have affected our results, we ran male vs. female comparisons with two-sample <italic>t</italic>-tests for viewing duration, recognition performance (A&#x00027; and B&#x0201D;D values), and WTP. The results did not show any significant differences between males and females for viewing duration: <italic>t</italic><sub>(39)</sub> &#x0003D; 1.29, <italic>p</italic> &#x0003E; 0.20, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.45; A&#x00027;: <italic>t</italic><sub>(39)</sub> &#x0003D; 0.69, <italic>p</italic> &#x0003E; 0.49, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.24; B&#x0201D;D: <italic>t</italic><sub>(39)</sub> &#x0003D; 0.23, <italic>p</italic> &#x0003E; 0.81, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.08; nor WTP: <italic>t</italic><sub>(39)</sub> &#x0003D; 1.35, <italic>p</italic> &#x0003E; 0.18, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.47. Hence, an unexpected effect of gender on our results was ruled out.</p>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The mean differences between the groups defined by the presence of weight cueing suggest effects of the cue on each behavioral index. First, participants who were instructed to lift the weights showed significantly higher memory performance relative to the other participants. Although this was not confirmed by a significant interaction, the cued participants&#x00027; memory performance for the box-present exhibits was higher than that for the box-absent exhibits. This effect was not due to a gender difference. Moreover, considering that all the participants viewed the boxes, the explanation that the box merely served as a visual marker for memory retrieval seems implausible. Instead, the results might stem from the instruction itself, perhaps by increasing the participants&#x00027; arousal or motivation toward the appreciation.</p>
<p>Second, consistent with the results of a previous study (Butler and Neave, <xref ref-type="bibr" rid="B4">2008</xref>), our results showed that participants with access to haptic experience took a significantly longer time to appreciate exhibits than participants without such access. One could argue that this time difference reflects the time spent on the lifting action itself. The mean difference in total viewing duration between groups was approximately 106.6 s, whereas the total duration required for the lifting action, measured in a supplementary experiment, was approximately 19.7 s (see Appendix). Even when this lifting duration was subtracted from the total viewing duration in the cued condition, the significant difference between the cued and uncued conditions in viewing duration per exhibit was preserved, <italic>t</italic><sub>(39)</sub> &#x0003D; 2.26, <italic>p</italic> &#x0003C; 0.04, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.71. Therefore, it is unlikely that merely the action of lifting four boxes produced the difference observed in the main experiment. Third, the difference in WTP (102.99 Japanese yen higher in the cued condition) did not reach significance, suggesting that the weight cue alone was not sufficient to significantly change the participants&#x00027; WTP directly.</p>
<p>Analyses of the questionnaire data provided further evidence concerning the group differences. The factor analysis showed that the questionnaire items could be categorized into two groups of factors: &#x0201C;likability&#x0201D; and &#x0201C;contentment&#x0201D; with respect to the participants&#x00027; impressions of the exhibit appreciation experience, and &#x0201C;value&#x0201D; and &#x0201C;quality&#x0201D; with respect to the participants&#x00027; impressions of the museum itself. The results of the path analysis suggest that the formation of participants&#x00027; impressions regarding viewing the museum exhibits was influenced by likability-based processing regarding exhibit appreciation and by quality-based processing regarding the museum itself after establishing the impression of value. One implication of these results is that curators&#x00027; efforts to facilitate visitors&#x00027; memories of exhibits may not necessarily lead to positive attitudes toward paying more for similar museum experiences. Our finding of no significant correlation between WTP and recognition performance, <italic>r</italic> &#x0003D; 0.02, <italic>p</italic> &#x0003E; 0.91, supports this implication.</p>
<p>How did the weight cues affect the results? In light of the internal mechanisms of participants, as mentioned in the introduction, we first surmised that the greater amount of information provided by multiple modalities may lead to higher processing fluency for the corresponding objects, thereby enhancing the esthetic pleasure experienced through perceiving those objects (Reber et al., <xref ref-type="bibr" rid="B19">1998</xref>, <xref ref-type="bibr" rid="B18">2004</xref>; Kuchinke et al., <xref ref-type="bibr" rid="B15">2009</xref>). However, this might not be true, or it is at least somewhat premature to draw a conclusion on this issue, because the results showed that the manipulation of the weight cue did not directly affect any factor scores. The results of the path analysis instead indicate another possibility. We found only that the weight cue significantly prolonged the viewing duration, and a model with viewing duration as a causal factor could explain the results. Hence, an indirect effect of the manipulation of the weight cue might exist. That is, the lifting action prolongs the viewing duration; the viewing duration then changes participants&#x00027; impressions of the exhibits and the museum (for example, a significant direct effect of viewing duration on value was found); and these impression changes in turn affect recognition performance and WTP through separate paths. These findings have an important implication for future museum exhibitions: interaction with even a box of neutral appearance changes visitors&#x00027; impressions, memory, and value estimation for exhibits.</p>
<p>In conclusion, both the data for each behavioral index and the questionnaire data indicate that interaction with physical surrogate objects is (at least indirectly) sufficient to facilitate visitors&#x00027; appreciation of museum exhibits. Future studies should address to what degree the weight information is important. For example, one could hypothesize that weight information does not contribute to the appreciation of pictorial art exhibits because the esthetic value of such paintings is obviously unrelated to their physical weight. Furthermore, it is unclear whether the weight information itself is important. Although visitors do not know the actual weight of the exhibits, they can roughly estimate it on the basis of the exhibits&#x00027; visually perceived size and texture. Hence, a loose matching between the weight cues and the exhibits might matter. Moreover, depending on the shape of the exhibits, the weight distribution of the weight cues differs from that of the exhibits. How these issues influence visitors&#x00027; appreciation should be clarified in future research. Owing to the recent progress in 3D printing technology, 3D printers can now easily produce precise replicas of all kinds of items for application in museum exhibits (Allard et al., <xref ref-type="bibr" rid="B1">2005</xref>; Kelley et al., <xref ref-type="bibr" rid="B14">2007</xref>; Niven et al., <xref ref-type="bibr" rid="B16">2009</xref>). Such surrogate exhibits allow visitors to touch items freely without having to worry about damaging the originals. Experiments based on 3D replicas would resolve the issues about weight information discussed above. Further cross-disciplinary investigations between museology and psychology that focus on the role of such surrogate exhibit items are needed for a deeper understanding of not only the mental mechanisms involved in the appreciation of museum exhibits, but also their creation and presentation.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank CongLei Cao and Kyoshiro Sasaki for helping in collecting data. We also thank Dr. Keiko Ihaya for her helpful suggestion on the analysis performed in the present study. In addition, the authors truly thank Drs. Kyoko Funahashi, Shozo Iwanaga, and Misako Mishima of The Kyushu University Museum for their comprehensive support and cooperation on the main experiment. This research was supported by Grant-in-Aid for Scientific Research (A) (&#x00023;25242060) and KAKENHI (Grant-in-Aid for Scientific Research) Application Encouragement Program given to Yuki Yamada.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Allard</surname> <given-names>T. T.</given-names></name> <name><surname>Sitchon</surname> <given-names>M.</given-names></name> <name><surname>Sawatzky</surname> <given-names>R.</given-names></name> <name><surname>Hoppa</surname> <given-names>R. D.</given-names></name></person-group> (<year>2005</year>). <article-title>Use of hand-held laser scanning and 3D printing for creation of a museum exhibit</article-title>, in <source>Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage: Short and Project Papers</source>, eds <person-group person-group-type="editor"><name><surname>Mudge</surname> <given-names>M.</given-names></name> <name><surname>Ryan</surname> <given-names>N.</given-names></name> <name><surname>Scopigno</surname> <given-names>R.</given-names></name></person-group> (<publisher-loc>Pisa</publisher-loc>), <fpage>97</fpage>&#x02013;<lpage>101</lpage>.</citation>
</ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Asano</surname> <given-names>T.</given-names></name> <name><surname>Ishibashi</surname> <given-names>Y.</given-names></name> <name><surname>Minezawa</surname> <given-names>S.</given-names></name> <name><surname>Fujimoto</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <article-title>Surveys of exhibition planners and visitors about a distributed haptic museum</article-title>, in <source>Proceedings of the 2005 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>246</fpage>&#x02013;<lpage>249</lpage>. <pub-id pub-id-type="doi">10.1145/1178477.1178518</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Biederman</surname> <given-names>I.</given-names></name> <name><surname>Vessel</surname> <given-names>E.</given-names></name></person-group> (<year>2006</year>). <article-title>Perceptual pleasure and the brain: a novel theory explains why the brain craves information and seeks it through the senses</article-title>. <source>Am. Sci</source>. <volume>94</volume>, <fpage>247</fpage>&#x02013;<lpage>253</lpage>. <pub-id pub-id-type="doi">10.1511/2006.59.995</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Butler</surname> <given-names>M.</given-names></name> <name><surname>Neave</surname> <given-names>P.</given-names></name></person-group> (<year>2008</year>). <article-title>Object appreciation through haptic interaction</article-title>, in <source>Hello! Where are you in the landscape of educational technology? Proceedings Ascilite Melbourne 2008</source> (<publisher-loc>Melbourne, VIC</publisher-loc>), <fpage>133</fpage>&#x02013;<lpage>141</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donaldson</surname> <given-names>W.</given-names></name></person-group> (<year>1992</year>). <article-title>Measuring recognition memory</article-title>. <source>J. Exp. Psychol. Gen</source>. <volume>121</volume>, <fpage>275</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1037/0096-3445.121.3.275</pub-id><pub-id pub-id-type="pmid">1402701</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ernst</surname> <given-names>M. O.</given-names></name> <name><surname>Banks</surname> <given-names>M. S.</given-names></name> <name><surname>B&#x000FC;lthoff</surname> <given-names>H. H.</given-names></name></person-group> (<year>2000</year>). <article-title>Touch can change visual slant perception</article-title>. <source>Nat. Neurosci</source>. <volume>3</volume>, <fpage>69</fpage>&#x02013;<lpage>73</lpage>. <pub-id pub-id-type="doi">10.1038/71140</pub-id><pub-id pub-id-type="pmid">10607397</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Figueroa</surname> <given-names>P.</given-names></name> <name><surname>Coral</surname> <given-names>M.</given-names></name> <name><surname>Boulanger</surname> <given-names>P.</given-names></name> <name><surname>Borda</surname> <given-names>J.</given-names></name> <name><surname>Londo&#x000F1;o</surname> <given-names>E.</given-names></name> <name><surname>Vega</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Multi-modal exploration of small artifacts: an exhibition at the Gold Museum in Bogota</article-title>, in <source>Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology</source> (<publisher-loc>Kyoto</publisher-loc>), <fpage>67</fpage>&#x02013;<lpage>74</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Flanagan</surname> <given-names>J. R.</given-names></name> <name><surname>Wing</surname> <given-names>A. M.</given-names></name></person-group> (<year>1997</year>). <article-title>Effects of surface texture and grip force on the discrimination of hand-held loads</article-title>. <source>Percept. Psychophys</source>. <volume>59</volume>, <fpage>111</fpage>&#x02013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.3758/BF03206853</pub-id><pub-id pub-id-type="pmid">9038413</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Flanagan</surname> <given-names>J. R.</given-names></name> <name><surname>Wing</surname> <given-names>A. M.</given-names></name> <name><surname>Allison</surname> <given-names>S.</given-names></name> <name><surname>Spenceley</surname> <given-names>A.</given-names></name></person-group> (<year>1995</year>). <article-title>Effects of surface texture on weight perception when lifting objects with a precision grip</article-title>. <source>Percept. Psychophys</source>. <volume>57</volume>, <fpage>282</fpage>&#x02013;<lpage>290</lpage>. <pub-id pub-id-type="doi">10.3758/BF03213054</pub-id><pub-id pub-id-type="pmid">7770320</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gelman</surname> <given-names>A.</given-names></name> <name><surname>Carlin</surname> <given-names>J.</given-names></name> <name><surname>Stern</surname> <given-names>H.</given-names></name> <name><surname>Rubin</surname> <given-names>D.</given-names></name></person-group> (<year>2004</year>). <source>Bayesian Data Analysis</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Chapman &#x00026; Hall</publisher-name>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hirose</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Virtual reality technology and museum exhibit</article-title>. <source>Int. J. Virtual Real</source>. <volume>5</volume>, <fpage>31</fpage>&#x02013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1007/11590361_1</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ide</surname> <given-names>M.</given-names></name> <name><surname>Hidaka</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Tactile stimulation can suppress visual perception</article-title>. <source>Sci. Rep</source>. <volume>3</volume>:<fpage>3453</fpage>. <pub-id pub-id-type="doi">10.1038/srep03453</pub-id><pub-id pub-id-type="pmid">24336391</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>L. A.</given-names></name></person-group> (<year>1986</year>). <article-title>Perception of force and weight: theory and research</article-title>. <source>Psychol. Bull</source>. <volume>100</volume>, <fpage>29</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.100.1.29</pub-id><pub-id pub-id-type="pmid">2942958</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kelley</surname> <given-names>D. J.</given-names></name> <name><surname>Farhoud</surname> <given-names>M.</given-names></name> <name><surname>Meyerand</surname> <given-names>M. E.</given-names></name> <name><surname>Nelson</surname> <given-names>D. L.</given-names></name> <name><surname>Ramirez</surname> <given-names>L. F.</given-names></name> <name><surname>Dempsey</surname> <given-names>R. J.</given-names></name> <etal/></person-group>. (<year>2007</year>). <article-title>Creating physical 3D stereolithograph models of brain and skull</article-title>. <source>PLoS ONE</source> <volume>2</volume>:<fpage>e1119</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0001119.s002</pub-id><pub-id pub-id-type="pmid">17971879</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuchinke</surname> <given-names>L.</given-names></name> <name><surname>Trapp</surname> <given-names>S.</given-names></name> <name><surname>Jacobs</surname> <given-names>A. M.</given-names></name> <name><surname>Leder</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>Pupillary responses in art appreciation: effects of aesthetic emotions</article-title>. <source>Psychol. Aesthet. Creat. Arts</source> <volume>3</volume>, <fpage>156</fpage>&#x02013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1037/a0014464</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Niven</surname> <given-names>L.</given-names></name> <name><surname>Steele</surname> <given-names>T. E.</given-names></name> <name><surname>Finke</surname> <given-names>H.</given-names></name> <name><surname>Gernat</surname> <given-names>T.</given-names></name> <name><surname>Hublin</surname> <given-names>J.-J.</given-names></name></person-group> (<year>2009</year>). <article-title>Virtual skeletons: using a structured light scanner to create a 3D faunal comparative collection</article-title>. <source>J. Archaeol. Sci</source>. <volume>36</volume>, <fpage>2018</fpage>&#x02013;<lpage>2023</lpage>. <pub-id pub-id-type="doi">10.1016/j.jas.2009.05.021</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petridis</surname> <given-names>P.</given-names></name> <name><surname>Mania</surname> <given-names>K.</given-names></name> <name><surname>Pletinckx</surname> <given-names>D.</given-names></name> <name><surname>White</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>The EPOCH multimodal interface for interacting with digital heritage artefacts</article-title>. <source>Lect. Notes Comput. Sci</source>. <volume>4270</volume>, <fpage>408</fpage>&#x02013;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1007/11890881_45</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reber</surname> <given-names>R.</given-names></name> <name><surname>Schwarz</surname> <given-names>N.</given-names></name> <name><surname>Winkielman</surname> <given-names>P.</given-names></name></person-group> (<year>2004</year>). <article-title>Processing fluency and aesthetic pleasure: is beauty in the perceiver&#x00027;s processing experience?</article-title> <source>Pers. Soc. Psychol. Rev</source>. <volume>8</volume>, <fpage>364</fpage>&#x02013;<lpage>382</lpage>. <pub-id pub-id-type="doi">10.1207/s15327957pspr0804_3</pub-id><pub-id pub-id-type="pmid">15582859</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Reber</surname> <given-names>R.</given-names></name> <name><surname>Winkielman</surname> <given-names>P.</given-names></name> <name><surname>Schwarz</surname> <given-names>N.</given-names></name></person-group> (<year>1998</year>). <article-title>Effects of perceptual fluency on affective judgments</article-title>. <source>Psychol. Sci</source>. <volume>9</volume>, <fpage>45</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00008</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Violentyev</surname> <given-names>A.</given-names></name> <name><surname>Shimojo</surname> <given-names>S.</given-names></name> <name><surname>Shams</surname> <given-names>L.</given-names></name></person-group> (<year>2005</year>). <article-title>Touch-induced visual illusion</article-title>. <source>Neuroreport</source> <volume>16</volume>, <fpage>1107</fpage>&#x02013;<lpage>1110</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200507130-00015</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Walczak</surname> <given-names>K.</given-names></name> <name><surname>White</surname> <given-names>M.</given-names></name></person-group> (<year>2003</year>). <article-title>Cultural heritage applications of virtual reality</article-title>, in <source>Web3D &#x00027;03: Proceedings of the Eighth International Conference on 3D Web Technology</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>182</fpage>&#x02013;<lpage>183</lpage>. <pub-id pub-id-type="doi">10.1145/636593.636623</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamada</surname> <given-names>Y.</given-names></name> <name><surname>Kawabe</surname> <given-names>T.</given-names></name> <name><surname>Ihaya</surname> <given-names>K.</given-names></name></person-group> (<year>2012</year>). <article-title>Can you eat it? A link between categorization difficulty and food likability</article-title>. <source>Adv. Cogn. Psychol</source>. <volume>8</volume>, <fpage>248</fpage>&#x02013;<lpage>254</lpage>. <pub-id pub-id-type="doi">10.2478/v10053-008-0120-2</pub-id><pub-id pub-id-type="pmid">22956990</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamada</surname> <given-names>Y.</given-names></name> <name><surname>Kawabe</surname> <given-names>T.</given-names></name> <name><surname>Ihaya</surname> <given-names>K.</given-names></name></person-group> (<year>2013</year>). <article-title>Categorization difficulty is associated with negative evaluation in the &#x0201C;uncanny valley&#x0201D; phenomenon</article-title>. <source>Jpn. Psychol. Res</source>. <volume>55</volume>, <fpage>20</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-5884.2012.00538.x</pub-id></citation>
</ref>
</ref-list>
<app-group>
<app id="A1">
<title>Appendix</title>
<p>To support our dismissal of the possible contamination by the lifting time itself, we additionally conducted an experiment that measured the time needed for the lifting action. In this supplementary experiment, participants were asked to freely lift up the boxes used in the main experiment under the instruction &#x0201C;Please freely lift all the boxes up.&#x0201D; This experiment was conducted in an indoor space of Kyushu University. The stimuli were the same as the weight cues (boxes) used in the main experiment. Other than the weight cues, there were no objects or exhibits. We measured the lifting time with a stopwatch. Eighteen participants took part in this experiment (5 men, 13 women; mean age &#x0003D; 22.3 years). The results showed that the total lifting duration was 19.66 s (<italic>SD</italic> &#x0003D; 3.62 s). This lifting duration was subtracted from the total viewing duration in the cued condition. Even so, there was a significant difference between the cued and uncued conditions in the viewing duration per exhibit, <italic>t</italic><sub>(39)</sub> &#x0003D; 2.26, <italic>p</italic> &#x0003C; 0.04, Cohen&#x00027;s <italic>d</italic> &#x0003D; 0.71. Thus, we dismissed the possibility that merely the action of lifting four boxes produced the difference in the main experiment.</p>
</app>
</app-group>
</back>
</article>
