<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2023.1249997</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Mini Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Differential privacy in collaborative filtering recommender systems: a review</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>M&#x000FC;llner</surname> <given-names>Peter</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2200373/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Lex</surname> <given-names>Elisabeth</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/484887/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Schedl</surname> <given-names>Markus</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/696384/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Kowald</surname> <given-names>Dominik</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c003"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/975410/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Know-Center Gmbh</institution>, <addr-line>Graz</addr-line>, <country>Austria</country></aff>
<aff id="aff2"><sup>2</sup><institution>Institute of Interactive Systems and Data Science, Graz University of Technology</institution>, <addr-line>Graz</addr-line>, <country>Austria</country></aff>
<aff id="aff3"><sup>3</sup><institution>Institute of Computational Perception, Johannes Kepler University Linz</institution>, <addr-line>Linz</addr-line>, <country>Austria</country></aff>
<aff id="aff4"><sup>4</sup><institution>Linz Institute of Technology</institution>, <addr-line>Linz</addr-line>, <country>Austria</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Yassine Himeur, University of Dubai, United Arab Emirates</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Chaochao Chen, Zhejiang University, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Peter M&#x000FC;llner <email>pmuellner&#x00040;know-center.at</email>; <email>pmuellner&#x00040;student.tugraz.at</email></corresp>
<corresp id="c002">Elisabeth Lex <email>elisabeth.lex&#x00040;tugraz.at</email></corresp>
<corresp id="c003">Dominik Kowald <email>dkowald&#x00040;know-center.at</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>12</day>
<month>10</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>6</volume>
<elocation-id>1249997</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>06</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>25</day>
<month>09</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 M&#x000FC;llner, Lex, Schedl and Kowald.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>M&#x000FC;llner, Lex, Schedl and Kowald</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license></permissions>
<abstract>
<p>State-of-the-art recommender systems produce high-quality recommendations to support users in finding relevant content. However, through the utilization of users&#x00027; data for generating recommendations, recommender systems threaten users&#x00027; privacy. To alleviate this threat, often, differential privacy is used to protect users&#x00027; data via adding random noise. This, however, leads to a substantial drop in recommendation quality. Therefore, several approaches aim to improve this trade-off between accuracy and user privacy. In this work, we first overview threats to user privacy in recommender systems, followed by a brief introduction to the differential privacy framework that can protect users&#x00027; privacy. Subsequently, we review recommendation approaches that apply differential privacy, and we highlight research that improves the trade-off between recommendation quality and user privacy. Finally, we discuss open issues, e.g., considering the relation between privacy and fairness, and the users&#x00027; different needs for privacy. With this review, we hope to provide other researchers an overview of the ways in which differential privacy has been applied to state-of-the-art collaborative filtering recommender systems.</p></abstract>
<kwd-group>
<kwd>differential privacy</kwd>
<kwd>collaborative filtering</kwd>
<kwd>recommender systems</kwd>
<kwd>accuracy-privacy trade-off</kwd>
<kwd>review</kwd>
</kwd-group>
<counts>
<fig-count count="1"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="62"/>
<page-count count="7"/>
<word-count count="6040"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Recommender Systems</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Several previous research works have revealed multiple privacy threats for users in recommender systems, for example, the disclosure of users&#x00027; private data to untrusted third parties (Calandrino et al., <xref ref-type="bibr" rid="B8">2011</xref>) or the inference of users&#x00027; sensitive attributes, such as gender or age (Zhang et al., <xref ref-type="bibr" rid="B59">2023</xref>). Moreover, users themselves are increasingly concerned about their privacy in recommender systems (Herbert et al., <xref ref-type="bibr" rid="B23">2021</xref>). For these reasons, privacy-enhancing techniques have been applied, most prominently <italic>differential privacy (DP)</italic> (Dwork, <xref ref-type="bibr" rid="B12">2008</xref>). DP injects random noise into the recommender system and formally guarantees a certain degree of privacy. However, this random noise degrades the quality of the recommendations (Berkovsky et al., <xref ref-type="bibr" rid="B5">2012</xref>). Many works aim to address this trade-off between recommendation quality and user privacy by applying DP carefully in specific ways. Friedman et al. (<xref ref-type="bibr" rid="B17">2016</xref>) show that, in the case of matrix factorization, DP can be applied to three different parts of the recommender system: (i) to the input of the recommender system, (ii) within the training process of the model, and (iii) to the model after training. However, a concise overview of works with respect to these three categories does not exist yet.</p>
<p>Therefore, in the paper at hand, we address this gap and identify 26 papers from relevant venues that deal with DP in collaborative filtering recommender systems. We briefly review these 26 papers and make two key observations about the state of the art: first, the vast majority of works use datasets from the same non-sensitive domain, i.e., movies; second, research on applying DP after model training is scarce. Finally, we discuss our findings and present two open questions that may be relevant for future research: <italic>How does applying DP impact fairness?</italic> and <italic>How to quantify the user&#x00027;s perceived privacy?</italic></p>
<p>Our work is structured as follows: In Section 2, we present threats to the privacy of users in recommender systems and introduce the DP framework. In Section 3, we outline our methodology for obtaining the set of 26 relevant papers. In Section 4, we review these papers and group them into three categories according to the way in which they apply DP. In Section 5, we discuss our findings and propose the open issues that we identified.</p></sec>
<sec id="s2">
<title>2. Background</title>
<p>In recent years, users of recommender systems have shown increasing concerns with respect to keeping their data private (Herbert et al., <xref ref-type="bibr" rid="B23">2021</xref>). In fact, several research works (Bilge et al., <xref ref-type="bibr" rid="B6">2013</xref>; Jeckmans et al., <xref ref-type="bibr" rid="B26">2013</xref>; Friedman et al., <xref ref-type="bibr" rid="B18">2015</xref>; Beigi and Liu, <xref ref-type="bibr" rid="B4">2020</xref>; Majeed and Lee, <xref ref-type="bibr" rid="B37">2020</xref>; Himeur et al., <xref ref-type="bibr" rid="B24">2022</xref>) have revealed multiple privacy threats, for example, the inadvertent disclosure of users&#x00027; interaction data, or the inference of users&#x00027; sensitive attributes (e.g., gender, age).</p>
<p>Typically, a recommender system utilizes historic interaction data to generate recommendations. Ramakrishnan et al. (<xref ref-type="bibr" rid="B45">2001</xref>) show that in <italic>k</italic> nearest neighbors recommender systems, the recommendations can disclose the interaction data of the neighbors, i.e., the users whose interaction data is utilized to generate the recommendations. Similarly, Calandrino et al. (<xref ref-type="bibr" rid="B8">2011</xref>) inject fake users to make the recommendations more likely to disclose the neighbors&#x00027; interaction data; in addition, they infer users&#x00027; interaction data based on the public outputs of a recommender system, e.g., public interaction data or public product reviews. Furthermore, Hashemi et al. (<xref ref-type="bibr" rid="B22">2022</xref>) and Xin et al. (<xref ref-type="bibr" rid="B54">2023</xref>) learn user behavior by observing many recommendations and, in this way, can disclose parts of a user&#x00027;s interaction data. Weinsberg et al. (<xref ref-type="bibr" rid="B52">2012</xref>) show that an adversary can infer sensitive attributes, in this case gender, based on a user&#x00027;s interaction data. Their attack relies on a classifier that leverages a small set of training examples to learn the correlation between a user&#x00027;s preferences and gender. Likewise, Ganh&#x000F6;r et al. (<xref ref-type="bibr" rid="B19">2022</xref>) show that recommender systems based on autoencoder architectures are vulnerable to attacks that infer the user&#x00027;s gender from the latent user representation; the authors also propose an adversarial training regime to mitigate this problem. Similarly, Zhang et al. (<xref ref-type="bibr" rid="B59">2023</xref>) infer the age and gender of users in a federated learning recommender system. In summary, many of a user&#x00027;s sensitive attributes can be inferred by thoroughly analyzing the user&#x00027;s digital footprint, e.g., the behavior in a recommender system or on a social media platform (Kosinski et al., <xref ref-type="bibr" rid="B30">2013</xref>).</p>
<p>Overall, the utilization of users&#x00027; interaction data for generating recommendations poses a privacy risk for users. Therefore, privacy-enhancing techniques, such as homomorphic encryption (Gentry, <xref ref-type="bibr" rid="B21">2009</xref>), federated learning (McMahan et al., <xref ref-type="bibr" rid="B38">2017</xref>), or, most prominently, <italic>differential privacy (DP)</italic> (Dwork, <xref ref-type="bibr" rid="B12">2008</xref>) have been applied to protect users&#x00027; privacy. Specifically, DP is applied by injecting noise into the recommender system, which ensures that the system operates on noisy data instead of the real data. For example, an additive mechanism samples random noise from the Laplace or Gaussian distribution and adds it to the users&#x00027; rating data (Dwork and Roth, <xref ref-type="bibr" rid="B14">2014</xref>). Alternatively, the randomized response mechanism flips a fair coin to decide whether to report the real data or random data and, in this way, ensures DP (Warner, <xref ref-type="bibr" rid="B51">1965</xref>; Dwork and Roth, <xref ref-type="bibr" rid="B14">2014</xref>). The degree of noise is controlled by the parameter &#x003F5;, i.e., the privacy budget. Intuitively, the smaller the &#x003F5;-value, the stronger the privacy guarantee, but the larger the expected drop in accuracy. Therefore, choosing &#x003F5; is non-trivial and depends on the specific use case (Dwork, <xref ref-type="bibr" rid="B12">2008</xref>).</p></sec>
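<p>The two mechanisms described above can be sketched as follows; this is a minimal illustration, and the rating range, function names, and parameter values are our own assumptions rather than details from any cited work:</p>

```python
import numpy as np

def laplace_mechanism(rating, sensitivity, epsilon, rng):
    """Additive mechanism: perturb a value with Laplace noise whose
    scale is sensitivity / epsilon."""
    return rating + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def randomized_response(value, candidates, rng):
    """Randomized response: a fair coin decides whether to report the
    real value or a uniformly random one."""
    if rng.random() < 0.5:
        return value                # report the real data
    return rng.choice(candidates)   # report random data

rng = np.random.default_rng(42)
# Ratings lie in [1, 5], so changing one rating shifts it by at most 4.
noisy_rating = laplace_mechanism(4.0, sensitivity=4.0, epsilon=1.0, rng=rng)
reported = randomized_response(4, candidates=[1, 2, 3, 4, 5], rng=rng)
```

<p>Smaller &#x003F5;-values enlarge the Laplace scale and hence the injected noise, which makes the accuracy-privacy trade-off directly visible.</p>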
<sec id="s3">
<title>3. Review methodology</title>
<p>To conduct our review, we chose relevant conferences in the field, i.e., ACM SIGIR, TheWebConf, ACM KDD, IJCAI, ACM CIKM, and ACM RecSys, and journals, i.e., TOIS, TIST, UMUAI, and TKDE. Adopting a keyword-based search, we identified relevant publications in the proceedings by querying the full texts for &#x0201C;differential privacy&#x0201D; in combination with &#x0201C;recommender system&#x0201D;, &#x0201C;recommend&#x0201D;, &#x0201C;recommendation&#x0201D;, or &#x0201C;recommender&#x0201D;. We manually checked the resulting papers for their relevance and retrieved 16 publications. In addition, we conducted a literature search on Google Scholar using the same keywords and procedure, which yielded 10 further publications. Overall, we consider 26 publications in the paper at hand.</p></sec>
<sec id="s4">
<title>4. Recommender systems with differential privacy</title>
<p>According to Friedman et al. (<xref ref-type="bibr" rid="B17">2016</xref>), DP can be applied by (i) adding noise to the input of a collaborative filtering recommender system, e.g., the user data or other user representations, (ii) adding noise to the training process of the model, i.e., the model updates, or (iii) adding noise to the model after training, i.e., to the resulting latent factors. In <xref ref-type="table" rid="T1">Table 1</xref>, we group the selected publications into these three categories.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Overview of the reviewed 26 publications.</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th/>
<th/>
<th valign="top" align="center" colspan="3"><bold>DP applied to</bold></th>
</tr>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Domain(s)</bold></th>
<th valign="top" align="left"><bold>User represent</bold>.</th>
<th valign="top" align="left"><bold>Model updates</bold></th>
<th valign="top" align="left"><bold>After training</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Long et al. (<xref ref-type="bibr" rid="B35">2023</xref>)</td>
<td valign="top" align="left">Location</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">M&#x000FC;llner et al. (<xref ref-type="bibr" rid="B42">2023</xref>)</td>
<td valign="top" align="left">Movies, Music, Books, Social</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Neera et al. (<xref ref-type="bibr" rid="B43">2023</xref>)</td>
<td valign="top" align="left">Movies, Jokes, Dating</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Wang et al. (<xref ref-type="bibr" rid="B50">2023</xref>)</td>
<td valign="top" align="left">Movies, Music</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Chai et al. (<xref ref-type="bibr" rid="B9">2022</xref>)</td>
<td valign="top" align="left">Movies, Location</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Chen et al. (<xref ref-type="bibr" rid="B10">2022</xref>)</td>
<td valign="top" align="left">Movies, Music, Books</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Jiang et al. (<xref ref-type="bibr" rid="B27">2022</xref>)</td>
<td valign="top" align="left">Movies, Music, Location, Groceries</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Liu et al. (<xref ref-type="bibr" rid="B34">2022</xref>)</td>
<td valign="top" align="left">Social</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Ning et al. (<xref ref-type="bibr" rid="B44">2022</xref>)</td>
<td valign="top" align="left">Movies</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Ran et al. (<xref ref-type="bibr" rid="B46">2022</xref>)</td>
<td valign="top" align="left">Movies, Music</td>
<td/>
<td/>
<td valign="top" align="left">&#x02022;</td>
</tr>
<tr>
<td valign="top" align="left">Ren et al. (<xref ref-type="bibr" rid="B47">2022</xref>)</td>
<td valign="top" align="left">Social</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Wu et al. (<xref ref-type="bibr" rid="B53">2022</xref>)</td>
<td valign="top" align="left">Advertisement</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Li et al. (<xref ref-type="bibr" rid="B32">2021</xref>)</td>
<td valign="top" align="left">Movies, Dating</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Minto et al. (<xref ref-type="bibr" rid="B41">2021</xref>)</td>
<td valign="top" align="left">Movies</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Zhang et al. (<xref ref-type="bibr" rid="B58">2021</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
</tr>
<tr>
<td valign="top" align="left">Chen et al. (<xref ref-type="bibr" rid="B11">2020</xref>)</td>
<td valign="top" align="left">Location</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B20">2020</xref>)</td>
<td valign="top" align="left">Movies, Smartphone</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Ma et al. (<xref ref-type="bibr" rid="B36">2019</xref>)</td>
<td valign="top" align="left">Health</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Meng et al. (<xref ref-type="bibr" rid="B40">2018</xref>)</td>
<td valign="top" align="left">Social</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Shin et al. (<xref ref-type="bibr" rid="B48">2018</xref>)</td>
<td valign="top" align="left">Movies, Dating</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td/>
</tr>
<tr>
<td valign="top" align="left">Liu et al. (<xref ref-type="bibr" rid="B33">2017</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Yang et al. (<xref ref-type="bibr" rid="B55">2017</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Li et al. (<xref ref-type="bibr" rid="B31">2016</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Hua et al. (<xref ref-type="bibr" rid="B25">2015</xref>)</td>
<td valign="top" align="left">Movies</td>
<td/>
<td valign="top" align="left">&#x02022;</td>
<td valign="top" align="left">&#x02022;</td>
</tr>
<tr>
<td valign="top" align="left">Zhu et al. (<xref ref-type="bibr" rid="B62">2013</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
<tr>
<td valign="top" align="left">Zhao et al. (<xref ref-type="bibr" rid="B60">2011</xref>)</td>
<td valign="top" align="left">Movies</td>
<td valign="top" align="left">&#x02022;</td>
<td/>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>We mark whether DP is applied to the user representation, to the model updates, or after training. Domain(s) refers to the domain(s) in which the recommendations are evaluated. We sort the publications with respect to recency.</p>
</table-wrap-foot>
</table-wrap>
<sec>
<title>4.1. Differential privacy applied to the user representation</title>
<p>In collaborative filtering recommender systems, the input to the system is typically given by interaction or rating data. However, more complex user representations exist, e.g., neural-based user embeddings.</p>
<p>Chen et al. (<xref ref-type="bibr" rid="B11">2020</xref>) protect users&#x00027; POI (point of interest) interaction data, e.g., that a user visited a restaurant, with DP. Specifically, they use this data to privately calculate POI features, e.g., the number of visitors per restaurant, which are subsequently used for generating recommendations instead of the DP-protected interaction data. This way, they can increase recommendation accuracy. Similarly, Long et al. (<xref ref-type="bibr" rid="B35">2023</xref>) use DP to recommend POIs, but in a decentralized fashion: a central server collects public data to train a recommendation model and to privately identify groups of similar users, where DP is used for privately calculating user-user similarities. Users then locally use information from similar users, which leads to a better trade-off between recommendation quality and privacy than comparable approaches.</p>
<p>Liu et al. (<xref ref-type="bibr" rid="B33">2017</xref>) add noise to users&#x00027; rating data and to the user-user covariance matrix to ensure DP of a KNN-based recommender system. They show that this leads to better privacy than when only the covariance matrix is protected via DP. Besides revealing users&#x00027; rating data, an attacker could also aim to infer sensitive attributes (e.g., gender) of the users. Therefore, Chai et al. (<xref ref-type="bibr" rid="B9">2022</xref>) propose an obfuscation model to protect gender information; after applying this obfuscation model, users protect their data via DP and send it to a central server. Yang et al. (<xref ref-type="bibr" rid="B55">2017</xref>) use the Johnson-Lindenstrauss transform (Blocki et al., <xref ref-type="bibr" rid="B7">2012</xref>), i.e., they ensure DP by multiplying the original interaction matrix with a random matrix. Using this protected matrix, their approach guarantees DP and can even generate more accurate recommendations than a non-private approach. Neera et al. (<xref ref-type="bibr" rid="B43">2023</xref>) underline that adding Laplacian noise can lead to &#x0201C;unrealistic&#x0201D; rating values, i.e., values outside the rating range, through which recommendation accuracy can drop severely. Therefore, they bound the noisy ratings to a &#x0201C;realistic&#x0201D; value range without harming DP. In addition, they use a Gaussian mixture model to estimate and then remove noise in the recommendation process to preserve recommendation accuracy.</p>
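<p>The bounding idea of Neera et al. can be sketched as follows; the rating range [1, 5] and the &#x003F5;-value are illustrative assumptions, and clipping to a fixed, data-independent range is post-processing, so it does not weaken the DP guarantee of the noise addition:</p>

```python
import numpy as np

def noisy_bounded_ratings(ratings, epsilon, lo=1.0, hi=5.0, rng=None):
    """Add Laplace noise to each rating, then bound the noisy values
    to the realistic rating range [lo, hi]."""
    rng = rng or np.random.default_rng()
    sensitivity = hi - lo  # maximum effect of changing a single rating
    noisy = ratings + rng.laplace(0.0, sensitivity / epsilon, size=ratings.shape)
    return np.clip(noisy, lo, hi)

protected = noisy_bounded_ratings(np.array([1.0, 3.5, 5.0]), epsilon=1.0)
```

<p>Without the final clipping step, a strongly perturbed rating could, for instance, become negative, which no real user profile contains.</p>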
<p>Cross-domain recommendation models can increase recommendation accuracy in the target domain by exploiting data from multiple source domains. To protect user privacy when data from the source domain is made available to the target domain, Chen et al. (<xref ref-type="bibr" rid="B10">2022</xref>) use the Johnson-Lindenstrauss transform. Due to the high sparsity of the rating matrix, they employ a variant that performs better on sparse matrices (Ailon and Chazelle, <xref ref-type="bibr" rid="B2">2009</xref>). Ren et al. (<xref ref-type="bibr" rid="B47">2022</xref>) utilize data from different social network platforms to generate recommendations and apply DP to the user attributes and the connections in the social network graphs. In addition, they apply a variant of DP to protect textual data (Fernandes et al., <xref ref-type="bibr" rid="B16">2019</xref>). Moreover, to increase the click-through rate of recommended advertisements, Wu et al. (<xref ref-type="bibr" rid="B53">2022</xref>) leverage user interaction data from multiple platforms: first, user embeddings are generated per platform and protected with DP; second, the recommender system collects and aggregates a user&#x00027;s DP-protected embeddings across platforms and applies DP again to the aggregated user embedding. According to the authors, applying DP after aggregation allows for smaller noise levels when applying DP to the per-platform user embeddings, which results in higher accuracy. Typically, users are active on a variety of different online platforms. Therefore, Li et al. (<xref ref-type="bibr" rid="B31">2016</xref>) leverage these multiple data sources per user to increase recommendation accuracy. Specifically, they use DP-protected item-item similarities from dataset <italic>B</italic> as auxiliary data to generate more accurate recommendations for users in dataset <italic>A</italic> (cf. Zhao et al., <xref ref-type="bibr" rid="B60">2011</xref>).</p>
<p>Gao et al. (<xref ref-type="bibr" rid="B20">2020</xref>) compute item-item similarities from DP-protected user interaction data. With these item-item similarities, users can locally generate recommendations on their own devices, without harming their privacy. The item-based KNN recommender system proposed by Zhu et al. (<xref ref-type="bibr" rid="B62">2013</xref>) utilizes DP in two ways: first, they randomly rearrange the most similar neighbors to foster privacy; second, they measure how the item-item similarity would change if a specific user interaction were not present and, with this, add the necessary level of noise to each user interaction. This way, recommendation accuracy can be better preserved than with approaches that apply the same level of noise to all user interactions. For user-based KNN, M&#x000FC;llner et al. (<xref ref-type="bibr" rid="B42">2023</xref>) identify neighbors that can be reused for many recommendations. This way, only a small set of users serve as neighbors for many recommendations and need to be protected with DP; many users are only rarely utilized as neighbors and therefore do not need to be protected. Overall, this yields more accurate recommendations than when DP has to be applied to all users.</p>
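<p>To make the item-based setting above concrete, the following sketch derives item-item similarities from interaction data that was first protected per entry; the per-entry randomized-response step and the cosine measure are our own illustrative choices, not the exact mechanism of any reviewed work:</p>

```python
import numpy as np

def randomized_response_matrix(interactions, rng):
    """Per 0/1 entry, a fair coin decides whether to keep the real
    interaction or replace it with a random bit."""
    random_bits = rng.integers(0, 2, size=interactions.shape)
    keep = rng.random(interactions.shape) < 0.5
    return np.where(keep, interactions, random_bits)

def item_item_similarities(protected):
    """Cosine similarity between the item columns of the protected matrix."""
    norms = np.linalg.norm(protected, axis=0, keepdims=True)
    normalized = protected / np.maximum(norms, 1e-12)
    return normalized.T @ normalized

rng = np.random.default_rng(3)
interactions = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # users x items
sims = item_item_similarities(randomized_response_matrix(interactions, rng))
```

<p>Because the similarities are computed only from the protected matrix, they can be shipped to the users&#x00027; devices for local recommendation generation without exposing the raw interactions.</p>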
</sec>
<sec>
<title>4.2. Differential privacy applied to the model updates</title>
<p>Some recommender systems do not process user data and create user representations on a central server; instead, they compute the model updates, i.e., gradients, locally on the users&#x00027; devices. The recommender system then collects these gradients to adapt its recommendation model. To prevent the leakage of user data through these gradients (Bagdasaryan et al., <xref ref-type="bibr" rid="B3">2020</xref>), DP can be applied.</p>
<p>For example, Hua et al. (<xref ref-type="bibr" rid="B25">2015</xref>) add noise to the gradients of the recommendation model to ensure DP. However, due to the sparsity of the gradients, the application of DP can be ineffective, and information about which items have been rated by the user can be disclosed. To address this problem, Shin et al. (<xref ref-type="bibr" rid="B48">2018</xref>) use DP to mask whether a user appears in the dataset. They also formally show that the noise added to the gradients hinders a fast convergence of the recommendation model and, in this way, increases the training time; therefore, they introduce a stabilization factor to enable better training of the recommendation model. Wang et al. (<xref ref-type="bibr" rid="B50">2023</xref>) propose a recommender system that uses a special DP mechanism (Zhao et al., <xref ref-type="bibr" rid="B61">2020</xref>) to simultaneously protect the rating values and the set of items rated by a user. The DP-protected item-vectors are then sent to a central server, which performs dimensionality reduction to reduce the accuracy drop (cf. Shin et al., <xref ref-type="bibr" rid="B48">2018</xref>). In Minto et al. (<xref ref-type="bibr" rid="B41">2021</xref>), users receive a global model from a central server and then compute their respective updates locally. These updates are protected via DP before being sent back to the server, and the number of updates per user is restricted to further improve privacy. Moreover, the authors highlight that high-dimensional gradients can negatively impact the recommendation quality, as they are especially prone to sparsity (cf. Hua et al., <xref ref-type="bibr" rid="B25">2015</xref>; Shin et al., <xref ref-type="bibr" rid="B48">2018</xref>). When DP is applied, the gradients become dense, since noise is added to the entire gradient, including its zero entries. In practice, this leads to additional communication overhead, since all non-zero entries need to be transmitted (Ning et al., <xref ref-type="bibr" rid="B44">2022</xref>). Therefore, Ning et al. add noise only to the non-zero gradient entries. This reduces the communication overhead; however, DP cannot be guaranteed anymore.</p>
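<p>A generic sketch of such DP-protected gradient updates follows; the clipping norm and noise multiplier are illustrative, and none of the reviewed works is implemented exactly this way:</p>

```python
import numpy as np

def privatize_gradient(grad, clip_norm, noise_multiplier, rng):
    """Clip a per-user gradient to bound its sensitivity, then add
    Gaussian noise before it is sent to the central server."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(0)
private_grad = privatize_gradient(np.array([0.3, -2.0, 1.1]),
                                  clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

<p>Note that the Gaussian noise is drawn for every entry, which is exactly why the privatized gradient is dense even when the original gradient is sparse.</p>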
<p>Jiang et al. (<xref ref-type="bibr" rid="B27">2022</xref>) reduce the accuracy drop via an adaptive DP mechanism that depends on the number of training steps. Intuitively, after many training steps, the model fine-tunes its predictions, and the gradients need to be measured more accurately than at the beginning of the model training. Thus, they add more noise at the beginning and less noise toward the end of the training process. This yields more accurate recommendations than a static DP mechanism that always adds the same level of noise. Li et al. (<xref ref-type="bibr" rid="B32">2021</xref>) also use noisy model updates to ensure DP. They observe that noise can lead to large values in the user embeddings, which increases the sensitivity and therefore also the level of noise required to ensure DP. To foster recommendation quality, they map the user embeddings to a fixed range, which bounds the sensitivity and requires less noise. Liu et al. (<xref ref-type="bibr" rid="B34">2022</xref>) leverage user interactions and social connections to generate recommendations via a federated graph neural network. To ensure DP, they add noise to the gradients that are sent to a central server. However, gradients with different magnitudes have different sensitivities (cf. Li et al., <xref ref-type="bibr" rid="B32">2021</xref>) and thus need different levels of noise to ensure DP. Therefore, they fit the noise level to the gradient magnitudes to satisfy DP while preserving recommendation accuracy.</p>
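<p>The adaptive mechanism of Jiang et al. can be approximated by a simple decaying noise schedule; the linear interpolation and the start/end values below are our own illustrative choices:</p>

```python
def adaptive_noise_scale(step, total_steps, scale_start=2.0, scale_end=0.5):
    """Return a noise scale that decays linearly from scale_start at the
    first training step to scale_end at the last one, so early (coarse)
    updates receive more noise than late (fine-tuning) updates."""
    frac = step / max(total_steps - 1, 1)
    return scale_start + frac * (scale_end - scale_start)

schedule = [adaptive_noise_scale(s, 5) for s in range(5)]
```

<p>Each per-step scale then replaces the constant scale of a static DP mechanism when the gradient noise is sampled.</p>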
<p>Ma et al. (<xref ref-type="bibr" rid="B36">2019</xref>) employ federated tensor factorization in the health domain. A global model is distributed to hospitals, which locally update the model based on their data. To protect privacy, a variant of DP is applied to the model updates, which are subsequently sent to the global server to adapt the global model. Meng et al. (<xref ref-type="bibr" rid="B40">2018</xref>) randomly divide users&#x00027; ratings into non-sensitive and sensitive ratings and apply more noise to sensitive than to non-sensitive ratings. With this, their approach preserves higher recommendation accuracy than if the same noise level were used for sensitive and non-sensitive data.</p>
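A two-level Laplace mechanism in the spirit of Meng et al. (2018) can be sketched as follows; the sensitivity mask, the two epsilon values, and all other parameters are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def perturb_ratings(ratings: np.ndarray, sensitive_mask: np.ndarray,
                    eps_sensitive: float = 0.5, eps_nonsensitive: float = 2.0,
                    sensitivity: float = 1.0) -> np.ndarray:
    """Two-level Laplace mechanism: sensitive ratings get a smaller
    epsilon (more noise), non-sensitive ratings a larger epsilon
    (less noise), so accuracy is lost mainly where protection is
    actually needed."""
    scales = np.where(sensitive_mask,
                      sensitivity / eps_sensitive,
                      sensitivity / eps_nonsensitive)
    return ratings + rng.laplace(scale=scales)

ratings = np.array([4.0, 2.0, 5.0, 3.0])
sensitive = np.array([True, False, False, True])
noisy = perturb_ratings(ratings, sensitive)
```

Because the non-sensitive ratings receive only light noise, aggregate statistics computed from them stay closer to the true values than under a uniform noise level.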
</sec>
<sec>
<title>4.3. Differential privacy applied after training</title>
<p>Only a few works apply DP to the recommendation model after training. In the case of a matrix factorization approach, noise can be added to the learned user- and item-vectors to ensure DP. Our selected publications (see Section 3) do not include any works that apply DP exclusively to the model after training. Nevertheless, we describe works that apply DP to the user representation or the model updates and, in addition, to the model after training.</p>
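Output perturbation of a trained matrix factorization model, as described above, can be sketched as follows. This is a minimal illustration under assumed toy dimensions; a rigorous scheme would have to derive the sensitivity from the concrete training procedure rather than take it as a free parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_model_after_training(user_factors: np.ndarray,
                                 item_factors: np.ndarray,
                                 sensitivity: float, epsilon: float):
    """Output perturbation: train the factorization as usual, then add
    Laplace noise once to the learned factors before publishing them."""
    scale = sensitivity / epsilon
    return (user_factors + rng.laplace(scale=scale, size=user_factors.shape),
            item_factors + rng.laplace(scale=scale, size=item_factors.shape))

# Toy learned factors for 3 users and 4 items with latent dimension 2.
U = rng.normal(size=(3, 2))
V = rng.normal(size=(4, 2))
U_priv, V_priv = perturb_model_after_training(U, V, sensitivity=1.0, epsilon=1.0)

# All published predictions are computed from the noisy factors only.
scores = U_priv @ V_priv.T
```

Since the noise is added only once, after training, the optimization itself is unaffected; the price is that the full noise appears in every prediction derived from the released factors.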
<p>For example, Hua et al. (<xref ref-type="bibr" rid="B25">2015</xref>) consider a matrix factorization model in which the model sends item-vectors back to the users, and in this way, users&#x00027; data can be leaked. To prevent this, Hua et al. perturb the model&#x00027;s objective function after training by adding noise to the latent item-vectors. Similarly, Ran et al. (<xref ref-type="bibr" rid="B46">2022</xref>) also use DP to prevent data leakage through the item-vectors that are sent to the users. Specifically, a trusted recommender system generates a matrix factorization model. Instead of publishing the item-vectors of this model, they learn new item-vectors on the DP-protected user-vectors. Through this, they minimize the introduced noise and thus improve recommendation accuracy over comparable approaches. Zhang et al. (<xref ref-type="bibr" rid="B58">2021</xref>) apply DP to the user representation and also to the model after training. Specifically, they use a polynomial approximation of the model&#x00027;s loss function to efficiently compute the sensitivity of the dataset and, accordingly, adapt the level of noise that is added to the loss function.</p></sec>
</sec>
<sec id="s5">
<title>5. Summary and open questions</title>
<p>In this review, we investigate research works that apply DP to collaborative filtering recommender systems. We identify 26 relevant works and categorize them based on how they apply DP, i.e., to the user representation, to the model updates, or to the model after training (see <xref ref-type="table" rid="T1">Table 1</xref>). In addition, we briefly summarize these relevant works to obtain a broad overview of the state of the art. Furthermore, we identify the main concepts of the relevant works in <xref ref-type="fig" rid="F1">Figure 1</xref> to help readers understand the diverse ways in which the reviewed papers apply DP to improve the accuracy-privacy trade-off. Our main findings from reviewing the discussed literature are two-fold: (i) the majority of works use datasets from the same non-sensitive domain, i.e., movies, and (ii) applying DP to the model after training seems to be an understudied topic.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Overview of the main concepts of the reviewed papers. <italic>Use auxiliary data to foster accuracy</italic> refers to the incorporation of data from other domains, datasets or users to increase recommendation accuracy. <italic>Reduce noise level that is needed</italic> refers to designing recommender systems that require a minimal amount of noise to ensure DP. <italic>Limit where/when to apply DP</italic> refers to carefully minimizing the application of DP. <italic>Other</italic> refers to approaches that do not fit into the previous categories.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1249997-g0001.tif"/>
</fig>
<p>Many research works use datasets from the movie domain, which, in general, does not include sensitive data. For research on DP in collaborative filtering recommender systems, however, datasets from sensitive domains, for example, the health, finance, or job domain, may be better suited to resemble real-world privacy threats. Moreover, the majority of research focuses on applying DP either to the user representation or to the model updates. Research on applying DP to the model after training is scarce, which opens up an opportunity for future work to fill this gap.</p>
<p>Our review of relevant work allows us to grasp the state of the art and to identify the following open research questions:</p>
<p><italic>Q1: How does applying DP impact fairness?</italic> Dwork et al. (<xref ref-type="bibr" rid="B13">2012</xref>) and Zemel et al. (<xref ref-type="bibr" rid="B57">2013</xref>) suggest that, in theory, privacy can lead to fairness and fairness can lead to privacy. The reason is that, in both cases, a user&#x00027;s data shall be hidden, either to ensure privacy or to prevent discrimination based on this data. However, in practice, correlations in private data can still lead to unfairness (Ekstrand et al., <xref ref-type="bibr" rid="B15">2018</xref>; Agarwal, <xref ref-type="bibr" rid="B1">2020</xref>). Only recently, Yang et al. (<xref ref-type="bibr" rid="B56">2023</xref>) and Sun et al. (<xref ref-type="bibr" rid="B49">2023</xref>) investigated the connection between privacy and fairness in recommender systems. For example, Sun et al. (<xref ref-type="bibr" rid="B49">2023</xref>) use DP-protected information to re-rank the items in the recommendation list and, in this way, promote a fairer exposure of items. Nonetheless, the impact of DP on fairness remains an understudied topic.</p>
<p><italic>Q2: How to quantify the user&#x00027;s perceived privacy?</italic> Users perceive privacy differently, e.g., some users tolerate disclosing their gender, while others refuse to do so (Joshaghani et al., <xref ref-type="bibr" rid="B28">2018</xref>). This perceived privacy depends on many factors, e.g., context or situational factors (Knijnenburg and Kobsa, <xref ref-type="bibr" rid="B29">2013</xref>; Mehdy et al., <xref ref-type="bibr" rid="B39">2021</xref>). However, measuring users&#x00027; perceived privacy is hard and is usually done via questionnaires (Knijnenburg and Kobsa, <xref ref-type="bibr" rid="B29">2013</xref>). This is in stark contrast to how privacy is measured in the DP framework, i.e., by quantifying to what extent a user&#x00027;s data impacts the output of the recommender system. Therefore, developing methods to better quantify users&#x00027; perceived privacy is an important future research avenue.</p></sec>
<sec sec-type="author-contributions" id="s6">
<title>Author contributions</title>
<p>PM: literature analysis, conceptualization, and writing. MS: conceptualization and writing. EL and DK: conceptualization, writing, and supervision. All authors contributed to the article and approved the submitted version.</p></sec>
</body>
<back>
<sec sec-type="funding-information" id="s7">
<title>Funding</title>
<p>This work was supported by the DDAI COMET Module within the COMET-Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT), the Austrian Federal Ministry for Digital and Economic Affairs (BMDW), the Austrian Research Promotion Agency (FFG), the province of Styria (SFG), and partners from industry and academia. The COMET Programme is managed by FFG. In addition, the work received funding from the TU Graz Open Access Publishing Fund and from the Austrian Science Fund (FWF): DFH-23 and P33526.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>PM was employed by Know-Center Gmbh. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Agarwal</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <source>Trade-offs between fairness, interpretability, and privacy in machine learning</source> (Master&#x00027;s thesis). <publisher-name>University of Waterloo, Waterloo</publisher-name>, <publisher-loc>ON, Canada</publisher-loc>.</citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ailon</surname> <given-names>N.</given-names></name> <name><surname>Chazelle</surname> <given-names>B.</given-names></name></person-group> (<year>2009</year>). <article-title>The fast Johnson&#x02013;Lindenstrauss transform and approximate nearest neighbors</article-title>. <source>SIAM J. Comput.</source> <volume>39</volume>, <fpage>302</fpage>&#x02013;<lpage>322</lpage>. <pub-id pub-id-type="doi">10.1137/060673096</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bagdasaryan</surname> <given-names>E.</given-names></name> <name><surname>Veit</surname> <given-names>A.</given-names></name> <name><surname>Hua</surname> <given-names>Y.</given-names></name> <name><surname>Estrin</surname> <given-names>D.</given-names></name> <name><surname>Shmatikov</surname> <given-names>V.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;How to backdoor federated learning,&#x0201D;</article-title> in <source>International Conference on Artificial Intelligence and Statistics</source> (<publisher-loc>Palermo</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>2938</fpage>&#x02013;<lpage>2948</lpage>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beigi</surname> <given-names>G.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>A survey on privacy in social media: identification, mitigation, and applications</article-title>. <source>ACM Trans. Data Sci.</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1145/3343038</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berkovsky</surname> <given-names>S.</given-names></name> <name><surname>Kuflik</surname> <given-names>T.</given-names></name> <name><surname>Ricci</surname> <given-names>F.</given-names></name></person-group> (<year>2012</year>). <article-title>The impact of data obfuscation on the accuracy of collaborative filtering</article-title>. <source>Expert Syst. Appl.</source> <volume>39</volume>, <fpage>5033</fpage>&#x02013;<lpage>5042</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2011.11.037</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bilge</surname> <given-names>A.</given-names></name> <name><surname>Kaleli</surname> <given-names>C.</given-names></name> <name><surname>Yakut</surname> <given-names>I.</given-names></name> <name><surname>Gunes</surname> <given-names>I.</given-names></name> <name><surname>Polat</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>A survey of privacy-preserving collaborative filtering schemes</article-title>. <source>Int. J. Softw. Eng. Knowledge Eng.</source> <volume>23</volume>, <fpage>1085</fpage>&#x02013;<lpage>1108</lpage>. <pub-id pub-id-type="doi">10.1142/S0218194013500320</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Blocki</surname> <given-names>J.</given-names></name> <name><surname>Blum</surname> <given-names>A.</given-names></name> <name><surname>Datta</surname> <given-names>A.</given-names></name> <name><surname>Sheffet</surname> <given-names>O.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;The Johnson-Lindenstrauss transform itself preserves differential privacy,&#x0201D;</article-title> in <source>2012 IEEE 53rd Annual Symposium on Foundations of Computer Science</source> (<publisher-loc>New Brunswick, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>410</fpage>&#x02013;<lpage>419</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Calandrino</surname> <given-names>J. A.</given-names></name> <name><surname>Kilzer</surname> <given-names>A.</given-names></name> <name><surname>Narayanan</surname> <given-names>A.</given-names></name> <name><surname>Felten</surname> <given-names>E. W.</given-names></name> <name><surname>Shmatikov</surname> <given-names>V.</given-names></name></person-group> (<year>2011</year>). <article-title>&#x0201C;&#x0201C;You might also like:&#x0201D; privacy risks of collaborative filtering,&#x0201D;</article-title> in <source>Proc. of S&#x00026;P&#x00027;11</source> (<publisher-loc>Oakland, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>231</fpage>&#x02013;<lpage>246</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chai</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>K.</given-names></name> <name><surname>Yang</surname> <given-names>Q.</given-names></name></person-group> (<year>2022</year>). <article-title>Efficient federated matrix factorization against inference attacks</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1145/3501812</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Wu</surname> <given-names>H.</given-names></name> <name><surname>Su</surname> <given-names>J.</given-names></name> <name><surname>Lyu</surname> <given-names>L.</given-names></name> <name><surname>Zheng</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Differential private knowledge transfer for privacy-preserving cross-domain recommendation,&#x0201D;</article-title> in <source>Proc. of ACM WWW&#x00027;22</source> (<publisher-loc>Lyon</publisher-loc>).</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>B.</given-names></name> <name><surname>Fang</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Qi</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Practical privacy preserving POI recommendation</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>11</volume>, <fpage>1455</fpage>&#x02013;<lpage>1465</lpage>. <pub-id pub-id-type="doi">10.1145/3394138</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dwork</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>&#x0201C;Differential privacy: a survey of results,&#x0201D;</article-title> in <source>International Conference on Theory and Applications of Models of Computation</source> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>19</lpage>.</citation>
</ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dwork</surname> <given-names>C.</given-names></name> <name><surname>Hardt</surname> <given-names>M.</given-names></name> <name><surname>Pitassi</surname> <given-names>T.</given-names></name> <name><surname>Reingold</surname> <given-names>O.</given-names></name> <name><surname>Zemel</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Fairness through awareness,&#x0201D;</article-title> in <source>Proc. of ITCS&#x00027;12</source> (<publisher-loc>Cambridge, MA</publisher-loc>), <fpage>214</fpage>&#x02013;<lpage>226</lpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dwork</surname> <given-names>C.</given-names></name> <name><surname>Roth</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>The algorithmic foundations of differential privacy</article-title>. <source>Found. Trends Theoret. Comput. Sci.</source> <volume>9</volume>, <fpage>211</fpage>&#x02013;<lpage>407</lpage>. <pub-id pub-id-type="doi">10.1561/0400000042</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ekstrand</surname> <given-names>M. D.</given-names></name> <name><surname>Joshaghani</surname> <given-names>R.</given-names></name> <name><surname>Mehrpouyan</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Privacy for all: Ensuring fair and equitable privacy protections,&#x0201D;</article-title> in <source>Proc. of FAccT&#x00027;18</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>35</fpage>&#x02013;<lpage>47</lpage>.</citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fernandes</surname> <given-names>N.</given-names></name> <name><surname>Dras</surname> <given-names>M.</given-names></name> <name><surname>McIver</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Generalised differential privacy for text document processing,&#x0201D;</article-title> in <source>Principles of Security and Trust: 8th International Conference, POST 2019</source> (<publisher-loc>Prague</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>123</fpage>&#x02013;<lpage>148</lpage>.</citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedman</surname> <given-names>A.</given-names></name> <name><surname>Berkovsky</surname> <given-names>S.</given-names></name> <name><surname>Kaafar</surname> <given-names>M. A.</given-names></name></person-group> (<year>2016</year>). <article-title>A differential privacy framework for matrix factorization recommender systems</article-title>. <source>User Model. User Adapt. Interact.</source> <volume>26</volume>, <fpage>425</fpage>&#x02013;<lpage>458</lpage>. <pub-id pub-id-type="doi">10.1007/s11257-016-9177-7</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Friedman</surname> <given-names>A.</given-names></name> <name><surname>Knijnenburg</surname> <given-names>B. P.</given-names></name> <name><surname>Vanhecke</surname> <given-names>K.</given-names></name> <name><surname>Martens</surname> <given-names>L.</given-names></name> <name><surname>Berkovsky</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Privacy aspects of recommender systems,&#x0201D;</article-title> in <source>Recommender Systems Handbook</source>, eds F. Ricci, L. Rokach, and B. Shapira (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>649</fpage>&#x02013;<lpage>688</lpage>.</citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ganh&#x000F6;r</surname> <given-names>C.</given-names></name> <name><surname>Penz</surname> <given-names>D.</given-names></name> <name><surname>Rekabsaz</surname> <given-names>N.</given-names></name> <name><surname>Lesota</surname> <given-names>O.</given-names></name> <name><surname>Schedl</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Unlearning protected user attributes in recommendations with adversarial training,&#x0201D;</article-title> in <source>SIGIR &#x00027;22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>, eds E. Amig&#x000F3;, P. Castells, J. Gonzalo, B. Carterette, J. S. Culpepper, and G. Kazai (<publisher-loc>Madrid</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>2142</fpage>&#x02013;<lpage>2147</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>C.</given-names></name> <name><surname>Huang</surname> <given-names>C.</given-names></name> <name><surname>Lin</surname> <given-names>D.</given-names></name> <name><surname>Jin</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;DPLCF: differentially private local collaborative filtering,&#x0201D;</article-title> in <source>Proc. of SIGIR&#x00027;20</source> (<publisher-loc>Xi&#x00027;an</publisher-loc>), <fpage>961</fpage>&#x02013;<lpage>970</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gentry</surname> <given-names>C.</given-names></name></person-group> (<year>2009</year>). <source>A Fully Homomorphic Encryption Scheme</source> (Dissertation). <publisher-name>Stanford University</publisher-name>, <publisher-loc>Stanford, CA</publisher-loc>.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hashemi</surname> <given-names>H.</given-names></name> <name><surname>Xiong</surname> <given-names>W.</given-names></name> <name><surname>Ke</surname> <given-names>L.</given-names></name> <name><surname>Maeng</surname> <given-names>K.</given-names></name> <name><surname>Annavaram</surname> <given-names>M.</given-names></name> <name><surname>Suh</surname> <given-names>G. E.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Data leakage via access patterns of sparse features in deep learning-based recommendation systems</article-title>. <source>arXiv preprint arXiv:2212.06264</source>.</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herbert</surname> <given-names>C.</given-names></name> <name><surname>Marschin</surname> <given-names>V.</given-names></name> <name><surname>Erb</surname> <given-names>B.</given-names></name> <name><surname>Mei&#x000DF;ner</surname> <given-names>D.</given-names></name> <name><surname>Aufheimer</surname> <given-names>M.</given-names></name> <name><surname>B&#x000F6;sch</surname> <given-names>C.</given-names></name></person-group> (<year>2021</year>). <article-title>Are you willing to self-disclose for science? Effects of privacy awareness and trust in privacy on self-disclosure of personal and health data in online scientific studies&#x02013;an experimental study</article-title>. <source>Front. Big Data</source> <volume>4</volume>, <fpage>763196</fpage>. <pub-id pub-id-type="doi">10.3389/fdata.2021.763196</pub-id><pub-id pub-id-type="pmid">35005619</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Himeur</surname> <given-names>Y.</given-names></name> <name><surname>Sohail</surname> <given-names>S. S.</given-names></name> <name><surname>Bensaali</surname> <given-names>F.</given-names></name> <name><surname>Amira</surname> <given-names>A.</given-names></name> <name><surname>Alazab</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Latest trends of security and privacy in recommender systems: a comprehensive review and future perspectives</article-title>. <source>Comput. Sec.</source> <volume>118</volume>, <fpage>102746</fpage>. <pub-id pub-id-type="doi">10.1016/j.cose.2022.102746</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hua</surname> <given-names>J.</given-names></name> <name><surname>Xia</surname> <given-names>C.</given-names></name> <name><surname>Zhong</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Differentially private matrix factorization,&#x0201D;</article-title> in <source>International Joint Conference on Artificial Intelligence</source> (<publisher-loc>Buenos Aires</publisher-loc>).</citation>
</ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jeckmans</surname> <given-names>A. J.</given-names></name> <name><surname>Beye</surname> <given-names>M.</given-names></name> <name><surname>Erkin</surname> <given-names>Z.</given-names></name> <name><surname>Hartel</surname> <given-names>P.</given-names></name> <name><surname>Lagendijk</surname> <given-names>R. L.</given-names></name> <name><surname>Tang</surname> <given-names>Q.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Privacy in recommender systems,&#x0201D;</article-title> in <source>Social Media Retrieval</source>, eds N. Ramzan, R. van Zwol, J.-S. Lee, K. Cl&#x000FC;ver, and X.-S. Hua (<publisher-loc>London</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>263</fpage>&#x02013;<lpage>281</lpage>.</citation>
</ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>B.</given-names></name> <name><surname>Qin</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Qian</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;FedNCF: federated neural collaborative filtering for privacy-preserving recommender system,&#x0201D;</article-title> in <source>2022 International Joint Conference on Neural Networks (IJCNN)</source> (<publisher-loc>Padua</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B28">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Joshaghani</surname> <given-names>R.</given-names></name> <name><surname>Ekstrand</surname> <given-names>M. D.</given-names></name> <name><surname>Knijnenburg</surname> <given-names>B.</given-names></name> <name><surname>Mehrpouyan</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Do different groups have comparable privacy tradeoffs?&#x0201D;</article-title> in <source>Workshop on Moving Beyond a &#x0201C;One-Size Fits All&#x0201D; Approach: Exploring Individual Differences in Privacy, in Conjunction with the ACM CHI Conference on Human Factors in Computing Systems, CHI 2018</source> (<publisher-loc>Montreal, QC</publisher-loc>).</citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knijnenburg</surname> <given-names>B. P.</given-names></name> <name><surname>Kobsa</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Making decisions about privacy: information disclosure in context-aware recommender systems</article-title>. <source>ACM Trans. Interact. Intell. Syst.</source> <volume>3</volume>, <fpage>1</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1145/2499670</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kosinski</surname> <given-names>M.</given-names></name> <name><surname>Stillwell</surname> <given-names>D.</given-names></name> <name><surname>Graepel</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Private traits and attributes are predictable from digital records of human behavior</article-title>. <source>Proc. Nat. Acad. Sci. U.S.A.</source> <volume>110</volume>, <fpage>5802</fpage>&#x02013;<lpage>5805</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1218772110</pub-id><pub-id pub-id-type="pmid">23479631</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>J.-J.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>B.</given-names></name> <name><surname>Zhou</surname> <given-names>M.</given-names></name> <name><surname>Bi</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Q.</given-names></name></person-group> (<year>2016</year>). <article-title>Enforcing differential privacy for shared collaborative filtering</article-title>. <source>IEEE Access</source> <volume>5</volume>, <fpage>35</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2016.2600258</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Ding</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>N.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>Federated matrix factorization with privacy guarantee</article-title>. <source>Proc. VLDB Endowment</source> <volume>15</volume>, <fpage>900</fpage>&#x02013;<lpage>913</lpage>. <pub-id pub-id-type="doi">10.14778/3503585.3503598</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>Zhao</surname> <given-names>L.</given-names></name> <name><surname>Zhou</surname> <given-names>X.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;When differential privacy meets randomized perturbation: a hybrid approach for privacy-preserving recommender system,&#x0201D;</article-title> in <source>International Conference on Database Systems for Advanced Applications</source> (<publisher-loc>Suzhou</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>576</fpage>&#x02013;<lpage>591</lpage>.</citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Yang</surname> <given-names>L.</given-names></name> <name><surname>Fan</surname> <given-names>Z.</given-names></name> <name><surname>Peng</surname> <given-names>H.</given-names></name> <name><surname>Yu</surname> <given-names>P. S.</given-names></name></person-group> (<year>2022</year>). <article-title>Federated social recommendation with graph neural network</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1145/3501815</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Long</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Nguyen</surname> <given-names>Q. V. H.</given-names></name> <name><surname>Yin</surname> <given-names>H.</given-names></name></person-group> (<year>2023</year>). <article-title>Decentralized collaborative learning framework for next POI recommendation</article-title>. <source>ACM Trans. Inf. Syst.</source> <volume>41</volume>, <fpage>1</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1145/3555374</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Q.</given-names></name> <name><surname>Lou</surname> <given-names>J.</given-names></name> <name><surname>Ho</surname> <given-names>J. C.</given-names></name> <name><surname>Xiong</surname> <given-names>L.</given-names></name> <name><surname>Jiang</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Privacy-preserving tensor factorization for collaborative health data analysis,&#x0201D;</article-title> in <source>Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM &#x00027;19</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1291</fpage>&#x02013;<lpage>1300</lpage>. <pub-id pub-id-type="pmid">31897355</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Majeed</surname> <given-names>A.</given-names></name> <name><surname>Lee</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Anonymization techniques for privacy preserving data publishing: a comprehensive survey</article-title>. <source>IEEE Access</source> <volume>9</volume>, <fpage>8512</fpage>&#x02013;<lpage>8545</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3045700</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>McMahan</surname> <given-names>B.</given-names></name> <name><surname>Moore</surname> <given-names>E.</given-names></name> <name><surname>Ramage</surname> <given-names>D.</given-names></name> <name><surname>Hampson</surname> <given-names>S.</given-names></name> <name><surname>Arcas</surname> <given-names>B. A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Communication-efficient learning of deep networks from decentralized data,&#x0201D;</article-title> in <source>Artificial Intelligence and Statistics</source> (<publisher-loc>Fort Lauderdale, FL</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>1273</fpage>&#x02013;<lpage>1282</lpage>.</citation>
</ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mehdy</surname> <given-names>A. N.</given-names></name> <name><surname>Ekstrand</surname> <given-names>M. D.</given-names></name> <name><surname>Knijnenburg</surname> <given-names>B. P.</given-names></name> <name><surname>Mehrpouyan</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Privacy as a planned behavior: effects of situational factors on privacy perceptions and plans,&#x0201D;</article-title> in <source>Proc. of UMAP&#x00027;21</source> (<publisher-loc>Utrecht</publisher-loc>).</citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Meng</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name> <name><surname>Shu</surname> <given-names>K.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>B.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>&#x0201C;Personalized privacy-preserving social recommendation,&#x0201D;</article-title> in <source>Proceedings of the AAAI Conference on Artificial Intelligence</source> (<publisher-loc>New Orleans, LA</publisher-loc>).</citation>
</ref>
<ref id="B41">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Minto</surname> <given-names>L.</given-names></name> <name><surname>Haller</surname> <given-names>M.</given-names></name> <name><surname>Livshits</surname> <given-names>B.</given-names></name> <name><surname>Haddadi</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Stronger privacy for federated collaborative filtering with implicit feedback,&#x0201D;</article-title> in <source>Proceedings of the 15th ACM Conference on Recommender Systems</source> (<publisher-loc>Amsterdam</publisher-loc>), <fpage>342</fpage>&#x02013;<lpage>350</lpage>.</citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>M&#x000FC;llner</surname> <given-names>P.</given-names></name> <name><surname>Lex</surname> <given-names>E.</given-names></name> <name><surname>Schedl</surname> <given-names>M.</given-names></name> <name><surname>Kowald</surname> <given-names>D.</given-names></name></person-group> (<year>2023</year>). <article-title>ReuseKNN: neighborhood reuse for differentially-private KNN-based recommendations</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>14</volume>, <fpage>1</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1145/3608481</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neera</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Aslam</surname> <given-names>N.</given-names></name> <name><surname>Wang</surname> <given-names>K.</given-names></name> <name><surname>Shu</surname> <given-names>Z.</given-names></name></person-group> (<year>2023</year>). <article-title>Private and utility enhanced recommendations with local differential privacy and Gaussian mixture model</article-title>. <source>IEEE Trans. Knowledge Data Eng.</source> <volume>35</volume>, <fpage>4151</fpage>&#x02013;<lpage>4163</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2021.3126577</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ning</surname> <given-names>L.</given-names></name> <name><surname>Chien</surname> <given-names>S.</given-names></name> <name><surname>Song</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>M.</given-names></name> <name><surname>Xue</surname> <given-names>Y.</given-names></name> <name><surname>Berlowitz</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;EANA: reducing privacy risk on large-scale recommendation models,&#x0201D;</article-title> in <source>Proceedings of the 16th ACM Conference on Recommender Systems</source> (<publisher-loc>Seattle, WA</publisher-loc>), <fpage>399</fpage>&#x02013;<lpage>407</lpage>.</citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramakrishnan</surname> <given-names>N.</given-names></name> <name><surname>Keller</surname> <given-names>B. J.</given-names></name> <name><surname>Mirza</surname> <given-names>B. J.</given-names></name> <name><surname>Grama</surname> <given-names>A. Y.</given-names></name> <name><surname>Karypis</surname> <given-names>G.</given-names></name></person-group> (<year>2001</year>). <article-title>When being weak is brave: privacy in recommender systems</article-title>. <source>IEEE Internet Comput.</source> <volume>5</volume>, <fpage>54</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1109/4236.968832</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ran</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>L. Y.</given-names></name> <name><surname>Ma</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>A differentially private matrix factorization based on vector perturbation for recommender system</article-title>. <source>Neurocomputing</source> <volume>483</volume>, <fpage>32</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2022.01.079</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ren</surname> <given-names>J.</given-names></name> <name><surname>Jiang</surname> <given-names>L.</given-names></name> <name><surname>Peng</surname> <given-names>H.</given-names></name> <name><surname>Lyu</surname> <given-names>L.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Chen</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>&#x0201C;Cross-network social user embedding with hybrid differential privacy guarantees,&#x0201D;</article-title> in <source>Proceedings of the 31st ACM International Conference on Information &#x00026; Knowledge Management, CIKM &#x00027;22</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1685</fpage>&#x02013;<lpage>1695</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shin</surname> <given-names>H.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Shin</surname> <given-names>J.</given-names></name> <name><surname>Xiao</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). <article-title>Privacy enhanced matrix factorization for recommendation with local differential privacy</article-title>. <source>IEEE Trans. Knowledge Data Eng.</source> <volume>30</volume>, <fpage>1770</fpage>&#x02013;<lpage>1782</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2018.2805356</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>J. A.</given-names></name> <name><surname>Pentyala</surname> <given-names>S.</given-names></name> <name><surname>Cock</surname> <given-names>M. D.</given-names></name> <name><surname>Farnadi</surname> <given-names>G.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Privacy-preserving fair item ranking,&#x0201D;</article-title> in <source>Advances in Information Retrieval</source>, eds J. Kamps, L. Goeuriot, F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, U. Kruschwitz, and A. Caputo (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer Nature</publisher-name>), <fpage>188</fpage>&#x02013;<lpage>203</lpage>.</citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Gao</surname> <given-names>M.</given-names></name> <name><surname>Ran</surname> <given-names>X.</given-names></name> <name><surname>Ma</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>L. Y.</given-names></name></person-group> (<year>2023</year>). <article-title>An improved matrix factorization with local differential privacy based on piecewise mechanism for recommendation systems</article-title>. <source>Expert Syst. Appl.</source> <volume>216</volume>, <fpage>119457</fpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2022.119457</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warner</surname> <given-names>S. L.</given-names></name></person-group> (<year>1965</year>). <article-title>Randomized response: a survey technique for eliminating evasive answer bias</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>60</volume>, <fpage>63</fpage>&#x02013;<lpage>69</lpage>. <pub-id pub-id-type="pmid">12261830</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Weinsberg</surname> <given-names>U.</given-names></name> <name><surname>Bhagat</surname> <given-names>S.</given-names></name> <name><surname>Ioannidis</surname> <given-names>S.</given-names></name> <name><surname>Taft</surname> <given-names>N.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;BlurMe: inferring and obfuscating user gender based on ratings,&#x0201D;</article-title> in <source>Proceedings of the Sixth ACM Conference on Recommender Systems</source> (<publisher-loc>Dublin</publisher-loc>), <fpage>195</fpage>&#x02013;<lpage>202</lpage>.</citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>C.</given-names></name> <name><surname>Wu</surname> <given-names>F.</given-names></name> <name><surname>Lyu</surname> <given-names>L.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Xie</surname> <given-names>X.</given-names></name></person-group> (<year>2022</year>). <article-title>FedCTR: federated native ad CTR prediction with cross-platform user behavior data</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1145/3506715</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xin</surname> <given-names>X.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Ma</surname> <given-names>J.</given-names></name> <name><surname>Ren</surname> <given-names>P.</given-names></name> <name><surname>Luo</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>On the user behavior leakage from recommender system exposure</article-title>. <source>ACM Trans. Inf. Syst.</source> <volume>41</volume>, <fpage>1</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1145/3568954</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>M.</given-names></name> <name><surname>Zhu</surname> <given-names>T.</given-names></name> <name><surname>Ma</surname> <given-names>L.</given-names></name> <name><surname>Xiang</surname> <given-names>Y.</given-names></name> <name><surname>Zhou</surname> <given-names>W.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Privacy preserving collaborative filtering via the Johnson-Lindenstrauss transform,&#x0201D;</article-title> in <source>2017 IEEE Trustcom/BigDataSE/ICESS</source> (<publisher-loc>Sydney, NSW</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>417</fpage>&#x02013;<lpage>424</lpage>.</citation>
</ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Z.</given-names></name> <name><surname>Ge</surname> <given-names>Y.</given-names></name> <name><surname>Su</surname> <given-names>C.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Zhao</surname> <given-names>X.</given-names></name> <name><surname>Ying</surname> <given-names>Y.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Fairness-aware differentially private collaborative filtering,&#x0201D;</article-title> in <source>Companion Proceedings of the ACM Web Conference 2023, WWW &#x00027;23 Companion</source> (<publisher-loc>Austin, TX</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>927</fpage>&#x02013;<lpage>931</lpage>.</citation>
</ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zemel</surname> <given-names>R.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Swersky</surname> <given-names>K.</given-names></name> <name><surname>Pitassi</surname> <given-names>T.</given-names></name> <name><surname>Dwork</surname> <given-names>C.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Learning fair representations,&#x0201D;</article-title> in <source>Proc. of ICML&#x00027;13</source> (<publisher-loc>Atlanta, GA</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>325</fpage>&#x02013;<lpage>333</lpage>.</citation>
</ref>
<ref id="B58">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Yin</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Huang</surname> <given-names>Z.</given-names></name> <name><surname>Cui</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Graph embedding for recommendation against attribute inference attacks,&#x0201D;</article-title> in <source>Proceedings of the Web Conference 2021, WWW &#x00027;21</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>3002</fpage>&#x02013;<lpage>3014</lpage>.</citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Yuan</surname> <given-names>W.</given-names></name> <name><surname>Yin</surname> <given-names>H.</given-names></name></person-group> (<year>2023</year>). <article-title>Comprehensive privacy analysis on federated recommender system against attribute inference attacks</article-title>. <source>IEEE Trans. Knowledge Data Eng.</source>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2023.3295601</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Feng</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>B.</given-names></name></person-group> (<year>2011</year>). <article-title>&#x0201C;Shared collaborative filtering,&#x0201D;</article-title> in <source>Proceedings of the Fifth ACM Conference on Recommender Systems</source> (<publisher-loc>Chicago, IL</publisher-loc>), <fpage>29</fpage>&#x02013;<lpage>36</lpage>.</citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>N.</given-names></name> <name><surname>Lyu</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Local differential privacy-based federated learning for internet of things</article-title>. <source>IEEE Internet Things J.</source> <volume>8</volume>, <fpage>8836</fpage>&#x02013;<lpage>8853</lpage>. <pub-id pub-id-type="doi">10.1109/JIOT.2020.3037194</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>T.</given-names></name> <name><surname>Li</surname> <given-names>G.</given-names></name> <name><surname>Ren</surname> <given-names>Y.</given-names></name> <name><surname>Zhou</surname> <given-names>W.</given-names></name> <name><surname>Xiong</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Differential privacy for neighborhood-based collaborative filtering,&#x0201D;</article-title> in <source>Proc. of IEEE/ACM ASONAM&#x00027;13</source> (<publisher-loc>Niagara Falls, ON</publisher-loc>), <fpage>752</fpage>&#x02013;<lpage>759</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>