<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frai.2026.1774013</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Original Research</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Real-time dynamic graph learning with temporal attention for financial fraud detection</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Chen</surname>
<given-names>Jundong</given-names>
</name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/3326901"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Data curation" vocab-term-identifier="https://credit.niso.org/contributor-roles/data-curation/">Data curation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Yan</given-names>
</name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="visualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/visualization/">Visualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><label>1</label><institution>Finance and Banking Division, Southern Power Grid Digital Enterprise Technology (Guangdong) Co., Ltd.</institution>, <city>Guangzhou</city>, <country country="cn">China</country></aff>
<aff id="aff2"><label>2</label><institution>Strategic Development Department, Southern Power Grid Capital Holding Co., Ltd.</institution>, <city>Guangzhou</city>, <country country="cn">China</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Jundong Chen, <email xlink:href="mailto:yxklww1@163.com">yxklww1@163.com</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-26">
<day>26</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>9</volume>
<elocation-id>1774013</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>09</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Chen and Yang.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Chen and Yang</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-26">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Financial transaction risk control is a cornerstone of intelligent finance platforms, yet existing approaches remain limited. Early frameworks modeled user behaviors independently, while later graph-based systems extracted handcrafted features from capital-flow networks. Although these methods improved detection, they struggle to capture fine-grained temporal dynamics and evolving topological patterns, and they depend heavily on manual feature engineering. In this work, we present a unified real-time dynamic graph learning framework that directly learns representations from raw streaming transaction graphs. Central to our design is a continuous-time, context-aware graph attention transformer (C2GAT), which models both higher-order structural dependencies and temporal patterns. We further decouple multi-role interaction paths and local neighborhood structures into dedicated subgraph modules, enabling complementary views of fraud behaviors. Evaluated on an industrial credit-cashback fraud detection scenario, our framework delivers substantial improvements in accuracy and false-alarm reduction over industry-standard baselines, while meeting stringent real-time latency requirements for deployment in large-scale financial systems.</p>
</abstract>
<kwd-group>
<kwd>attention mechanisms</kwd>
<kwd>deep learning</kwd>
<kwd>financial transaction risk management</kwd>
<kwd>real-time dynamic graphs</kwd>
<kwd>temporal modeling</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="12"/>
<table-count count="6"/>
<equation-count count="11"/>
<ref-count count="34"/>
<page-count count="15"/>
<word-count count="9654"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>AI in Finance</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Advances in data ingestion, storage, and processing have driven financial transaction risk-control frameworks toward greater precision, automation, and scalability (<xref ref-type="bibr" rid="ref20">Li et al., 2024</xref>; <xref ref-type="bibr" rid="ref8">Challa et al., 2024</xref>; <xref ref-type="bibr" rid="ref34">Zhu and Guo, 2024</xref>). First-generation methods treated each user or account as an isolated sequence of actions, applying sequential models to extract behavioral embeddings (<xref ref-type="bibr" rid="ref2">Albalawi and Dardouri, 2025</xref>). However, this perspective ignores the rich interplay among multiple entities in transaction events, which often provides critical signals for detecting coordinated fraud (<xref ref-type="bibr" rid="ref25">Paleti et al., 2024</xref>; <xref ref-type="bibr" rid="ref7">Bello et al., 2023</xref>; <xref ref-type="bibr" rid="ref4">Ali et al., 2022</xref>; <xref ref-type="bibr" rid="ref5">Almazroi and Ayub, 2023</xref>).</p>
<p>To address these shortcomings, second-generation frameworks construct real-time capital-flow graphs, where nodes represent accounts/users and edges represent transactions (<xref ref-type="bibr" rid="ref18">Khalid et al., 2024</xref>; <xref ref-type="bibr" rid="ref9">Chatterjee et al., 2024</xref>; <xref ref-type="bibr" rid="ref3">Al-Hashedi and Magalingam, 2021</xref>). Domain experts then define business rules to compute aggregated statistics (e.g., transaction counts, total transferred amounts, average transaction value) on these graphs, and these features feed into downstream classifiers. While this paradigm introduces richer interaction data and has achieved widespread adoption, it suffers from three key limitations (<xref ref-type="bibr" rid="ref22">Motie and Raahemi, 2024</xref>). First, handcrafted graph statistics provide only coarse aggregates and fail to capture subtle behavioral patterns. Second, manual feature engineering demands extensive expert effort and cannot keep pace with emerging fraud schemes. Third, rule-based pipelines cannot fully model the continuously evolving graph topology and timing of transactions (<xref ref-type="bibr" rid="ref32">Xu et al., 2024</xref>; <xref ref-type="bibr" rid="ref24">Oguntibeju et al., 2024</xref>; <xref ref-type="bibr" rid="ref23">Mutemi and Bacao, 2024</xref>).</p>
<p>Graph Neural Networks (GNNs) offer a powerful approach to learning from irregular, non-Euclidean graph data (<xref ref-type="bibr" rid="ref21">Lyu et al., 2025</xref>; <xref ref-type="bibr" rid="ref31">Wu et al., 2022</xref>). Static graph models (e.g., graph convolutional networks and attention-based networks) have proven effective in many domains, but they assume a fixed, unchanging graph (<xref ref-type="bibr" rid="ref28">Tiukhova et al., 2024</xref>; <xref ref-type="bibr" rid="ref19">Khemani et al., 2024</xref>). In contrast, financial transactions naturally form dynamic graphs: edges (transactions) arrive continuously, and node connectivity evolves over time (<xref ref-type="bibr" rid="ref12">Corso et al., 2024</xref>). Existing dynamic graph representation techniques generally fall into three categories. Random-walk-based methods incorporate time into random walk sequences but often overlook rich node and edge attributes and struggle to generalize to unseen nodes (<xref ref-type="bibr" rid="ref11">Cheng et al., 2025</xref>; <xref ref-type="bibr" rid="ref13">Cui et al., 2025</xref>). Time-slice approaches divide the graph into discrete time intervals, learn embeddings per slice, then merge them, ignoring within-interval dynamics that are essential for timely fraud detection. Continuous-time models encode timestamps directly but typically do not account for the contextual influence of neighboring events when generating temporal embeddings (<xref ref-type="bibr" rid="ref16">Innan et al., 2024</xref>).</p>
<p>In response to these challenges, we propose a real-time dynamic graph unified learning framework for financial transaction risk control. The principal innovations of our framework are:</p>
<list list-type="simple">
<list-item>
<p>1) Our framework operates on raw streaming transaction events, eliminating the need for handcrafted rules and capturing fine-grained behavioral details directly from the data.</p>
</list-item>
<list-item>
<p>2) We design a continuous-time, context-aware graph attention transformer (C2GAT) module that dynamically focuses on relevant historical interactions and evolving graph structure, effectively modeling high-order temporal and topological dependencies.</p>
</list-item>
<list-item>
<p>3) We explicitly separate joint transaction patterns involving multiple roles from independent account actions into dedicated subgraph learning modules, and jointly optimize them to produce richer, more precise embeddings.</p>
</list-item>
<list-item>
<p>4) Our framework integrates seamlessly into large-scale transaction monitoring systems, delivering low-latency inference and high throughput under extreme concurrency.</p>
</list-item>
</list>
<p>We demonstrate the effectiveness of our framework in a credit-cashback &#x201C;cash-out&#x201D; fraud detection scenario, showing that it significantly outperforms both sequence-based and rule-based graph systems in detection accuracy, adaptability to evolving fraud patterns, and computational efficiency. Our results underscore the potential of unified real-time dynamic graph learning for next-generation financial risk management.</p>
<p>The remainder of this paper is organized as follows: Section 2 introduces the credit-cashback cash-out transaction detection scenario. Section 3 details the architecture of our unified real-time graph representation learning framework. Section 4 presents the C2GAT approach. Section 5 analyzes results on both public datasets and industrial-scale application scenarios. Finally, Section 6 provides concluding remarks.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>Scenario and terminology</title>
<sec id="sec3">
<label>2.1</label>
<title>Credit-cashback cash-out transaction detection</title>
<p>Ant Credit&#x2019;s &#x201C;Pay Later&#x201D; service offers users a revolving credit line for purchases, enabling a buy-now-pay-later experience. A small subset of users, however, exploits this facility not for genuine consumption but to withdraw loan funds for other purposes, a process known as cash-out. Since no real goods or services change hands, these transactions diverge from true consumption and pose significant financial risk. In practice, cash-out schemes often exhibit distinctive short-term money-flow anomalies between buyer and seller accounts (<xref ref-type="bibr" rid="ref10">Chen et al., 2024</xref>; <xref ref-type="bibr" rid="ref17">Ji et al., 2022</xref>). Common patterns include:</p>
<list list-type="order">
<list-item>
<p>A buyer or seller conducts a flurry of transactions with another account that was previously flagged for cash-out.</p>
</list-item>
<list-item>
<p>The sequence of transfers shows atypical timing (e.g., late-night bursts) or sudden spikes in value that deviate from normal spending behavior.</p>
</list-item>
<list-item>
<p>Funds cycle through a network of intermediary accounts and eventually return to the originator, attempting to obscure the cash-out.</p>
</list-item>
<list-item>
<p>A third-party &#x201C;mule&#x201D; account appears in the transaction chain to disguise the true beneficiaries.</p>
</list-item>
</list>
<p>An effective detection framework must recognize these multi-party, time-sensitive patterns in real time to block cash-out activity before funds are irreversibly disbursed.</p>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Real-time dynamic graph model</title>
<p>We model the stream of transactions as a real-time dynamic graph with millisecond-level query latency and minute-level update propagation. Nodes represent user or account entities, and time-stamped edges represent transactions (transfers or payments) with attributes such as amount and platform. The graph retains edges within a sliding window of k days (typically a few days); edges older than k days are automatically removed (<xref ref-type="bibr" rid="ref6">Barros et al., 2021</xref>). This ensures the graph reflects only the most recent activity.</p>
<p>Fraudulent signals often manifest within minutes or hours before a suspicious transaction. Therefore, our framework focuses on modeling recent behavior via the live graph. Longer-term historical patterns for each user are captured offline through persisted feature profiles, balancing detection accuracy with system efficiency (<xref ref-type="bibr" rid="ref30">Weng et al., 2023</xref>). <xref ref-type="fig" rid="fig1">Figure 1</xref> illustrates three successive one-minute snapshots of the dynamic capital-flow graph between 11:58&#x202F;a.m. and 12:00&#x202F;p.m. on March 5. Green edges represent transfers and blue edges represent payments. Note that the edge labeled &#x201C;G &#x2194; B&#x201D; disappears at 11:59&#x202F;a.m. after exceeding the retention window, while a new payment edge appears between nodes F and B at the same time. This example highlights both edge expiry and real-time insertion as the graph evolves.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Three one-minute snapshots of the real-time dynamic capital-flow graph (11:58&#x2013;12:00&#x202F;p.m., March 5). Green edges are transfers; blue edges are payments. Edges older than k days are pruned, and new edges appear as transactions occur.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g001.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart illustration showing interactions between individuals and locations labeled A, B, C, D, E, and G with colored icons and directional arrows, representing time-stamped contact events on March third to fifth; a vertical timeline on the left marks the sequence of events.</alt-text>
</graphic>
</fig>
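<p>As a toy illustration of the retention logic above, the sliding-window graph can be sketched as a minimal in-memory edge store (a stand-in for the production graph database; all names are hypothetical):</p>

```python
from collections import deque

class SlidingWindowGraph:
    """Toy dynamic graph retaining only edges newer than `window_seconds`."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.edges = deque()  # (timestamp, src, dst, attrs), ordered by time

    def add_edge(self, ts, src, dst, **attrs):
        self.edges.append((ts, src, dst, attrs))

    def expire(self, now):
        # Drop edges that have fallen outside the k-day window.
        while self.edges and self.edges[0][0] < now - self.window:
            self.edges.popleft()

    def neighbors(self, node, now):
        self.expire(now)
        return [(s, d) for _, s, d, _ in self.edges if node in (s, d)]

g = SlidingWindowGraph(window_seconds=3 * 24 * 3600)  # 3-day window
g.add_edge(0, "G", "B", kind="transfer")
g.add_edge(200_000, "F", "B", kind="payment")
g.expire(now=3 * 24 * 3600 + 100)  # the older G-B edge has expired
```

<p>Queries against such a store only ever see the most recent activity, mirroring the automatic edge expiry and real-time insertion illustrated in Figure 1.</p>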
</sec>
</sec>
<sec id="sec5">
<label>3</label>
<title>Unified real-time graph representation learning framework</title>
<p>Before detailing our framework, we clarify the nature of our contributions. The C2GAT mechanism (Section 4) represents our primary methodological innovation, extending continuous-time graph attention with node-specific temporal encoding and context-aware neighbor aggregation. These components are general-purpose and applicable beyond financial fraud detection. In contrast, the subgraph construction strategies (Section 3.1) and system architecture (Section 3.3) are application-driven designs optimized for the specific constraints of real-time financial transaction monitoring&#x2014;namely, the asymmetric degree distributions between buyers and sellers, the importance of multi-hop fund flows in cashback fraud, and the stringent latency requirements of production payment systems. We present both types of contributions to provide a complete picture of deploying advanced graph learning in industrial settings, while clearly distinguishing transferable methodological advances from domain-specific engineering choices.</p>
<p>Our proposed framework consists of three major components: a buyer-centric subgraph module, a seller-centric subgraph module, and a joint prediction module. <xref ref-type="fig" rid="fig2">Figure 2</xref> overviews the unified learning architecture integrating these components, and <xref ref-type="fig" rid="fig3">Figure 3</xref> shows the system deployment flow (offline training vs. online inference). In the following, we describe the subgraph construction logic and the joint learning algorithm.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Overall unified learning framework integrating buyer-centric, seller-centric, and interaction path subgraphs.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g002.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Diagram depicting a network-based classification model for e-commerce transactions, showing buyer-centered, seller-centered, and path graphs processed through discrete time encoding and attention networks, followed by concatenation, a multilayer perceptron, and classification.</alt-text>
</graphic>
</fig>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>System architecture with offline data preparation/model training and online real-time scoring.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g003.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Flowchart depicting an offline system and an online system, each enclosed in orange boxes. The offline system includes log replay, simulation, transformation, and algorithm training platforms connected sequentially, with event, feature, and tactic tables feeding into the log replay. The online system includes a payment monitoring system, dynamic graph control component, and platforms for graph data storage, feature extraction, and scoring. Black arrows show data flow, while a blue arrow connects the real-time model in the offline system back to the scoring platform in the online system.</alt-text>
</graphic>
</fig>
<sec id="sec6">
<label>3.1</label>
<title>Graph construction logic</title>
<p>At the time of each target transaction (the transaction being evaluated), we extract a snapshot of the dynamic graph centered on the entities involved. Since only the buyer and seller participating in the current transaction are relevant, we extract from the full graph subgraphs that contain the information necessary for detecting cashback fraud. Our subgraph construction is carefully designed to balance scale and performance (<xref ref-type="bibr" rid="ref15">Deng et al., 2022</xref>, <xref ref-type="bibr" rid="ref14">2023</xref>; <xref ref-type="bibr" rid="ref33">Zhong et al., 2023</xref>):</p>
<list list-type="bullet">
<list-item>
<p>The subgraph must include sufficient information to capture the classic fraud patterns described in Section 2.1; overly aggressive pruning could omit important signals.</p>
</list-item>
<list-item>
<p>The online transaction system may process billions of events per day. Under such extreme concurrency, even slight increases in per-transaction processing time (due to complex graph construction) can cause performance bottlenecks or system failures, leading to unacceptable risk.</p>
</list-item>
</list>
<p>Following conventional link prediction heuristics, we first construct neighbor-centered subgraphs around the buyer and around the seller. By learning the clustering characteristics of the neighbors in these subgraphs, the model can effectively capture fraud patterns 1 and 2 defined earlier.</p>
<p>Buyer-centered subgraph: In financial transaction networks, buyer nodes typically exhibit a relatively low degree, meaning they interact with a smaller number of entities. To capture potentially complex fraud patterns, such as a buyer interacting with a mule account that then interacts with other suspicious entities, it is both feasible and beneficial to explore a wider neighborhood. Therefore, we include all neighbors up to 2 hops from the buyer. This wider view allows the model to learn from the buyer&#x2019;s immediate and secondary connections. To manage computational load, we limit the sampling to a maximum of 30 edges per hop, sorted by time to select the 30 most recent interactions. This ensures the subgraph remains computationally tractable while retaining the most timely and relevant activity. <xref ref-type="fig" rid="fig4">Figure 4</xref> shows an example buyer-centric subgraph.</p>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Example of a buyer-centric neighbor subgraph.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g004.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Diagram illustrating &#x201C;Buyer A&#x201D; in the center, encircled by a dashed red line, connected by blue and green lines to multiple groups of stylized people and market stall icons, indicating various buyer-seller relationships.</alt-text>
</graphic>
</fig>
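<p>The recency-based, per-hop sampling for the buyer side can be sketched as follows (an illustrative sketch assuming a simple adjacency-list input; function and variable names are hypothetical):</p>

```python
def sample_buyer_subgraph(edges, buyer, hops=2, max_edges_per_hop=30):
    """Recency-based neighbor sampling: at each hop keep the most recent edges.

    `edges` maps node -> list of (timestamp, neighbor) pairs.
    Returns the sampled edge set as (src, dst, timestamp) triples.
    """
    frontier, sampled, visited = {buyer}, [], {buyer}
    for _ in range(hops):
        candidates = [(ts, u, v) for u in frontier for ts, v in edges.get(u, [])]
        # Keep only the `max_edges_per_hop` most recent interactions.
        candidates.sort(key=lambda e: e[0], reverse=True)
        kept = candidates[:max_edges_per_hop]
        sampled.extend((u, v, ts) for ts, u, v in kept)
        frontier = {v for _, _, v in kept} - visited
        visited |= frontier
    return sampled
```

<p>With hops=2 and a cap of 30 edges per hop, the subgraph stays small and fresh even when the buyer&#x2019;s neighborhood grows, matching the trade-off described above.</p>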
<p>Seller-centered subgraph: Conversely, sellers, especially large merchants, often act as high-degree hub nodes, potentially involved in tens of thousands of transactions within a short period. Constructing a 2-hop subgraph for a major seller would lead to a combinatorial explosion in size, making real-time processing computationally infeasible and violating the strict low-latency requirements of a live payment system. To balance performance and signal, we adopt a more conservative 1-hop neighborhood for sellers. This captures the seller&#x2019;s direct interactions, which are crucial for fraud detection, without overwhelming the system. To further focus on the most relevant signals, we sample the seller&#x2019;s transaction edges based on transaction amount, guided by the known monetary distribution of cashback fraud. This heuristic sampling maximizes the probability of including fraudulent patterns while keeping the subgraph size manageable for real-time inference. This asymmetric design is a deliberate engineering choice to optimize the information-to-computation ratio in a production environment. <xref ref-type="fig" rid="fig5">Figure 5</xref> shows an example seller-centric subgraph.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Example of a seller-centric neighbor subgraph.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g005.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Diagram showing six customer icons, four red and two yellow, connected by colored lines to a central dashed red circle labeled "Seller B" with a store icon, illustrating a multi-customer vendor relationship.</alt-text>
</graphic>
</fig>
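<p>The seller-side, amount-guided 1-hop sampling can be sketched as a ranking over the seller&#x2019;s edges; the concrete monetary band below is a placeholder assumption, not the production fraud-amount distribution:</p>

```python
def sample_seller_subgraph(seller_edges, max_edges=30,
                           fraud_amount_range=(500.0, 5000.0)):
    """1-hop amount-guided sampling for high-degree seller nodes.

    `seller_edges`: list of (neighbor, amount) pairs for the seller.
    Edges whose amount falls inside the suspected fraud band are ranked
    first; remaining edges are ranked by distance to that band.
    """
    lo, hi = fraud_amount_range

    def priority(edge):
        _, amount = edge
        if lo <= amount <= hi:
            return 0.0                                   # inside the band
        return min(abs(amount - lo), abs(amount - hi))   # distance to band

    return sorted(seller_edges, key=priority)[:max_edges]
```

<p>Restricting the seller to one hop while prioritizing fraud-typical amounts keeps even hub merchants tractable for real-time inference.</p>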
<p>Interaction path subgraph: Fraud patterns 3 and 4 involve multi-hop fund flows between specific buyers and sellers. To capture these, we preserve the interaction paths between the buyer and seller. Because of the sampling above, the buyer&#x2013;seller connection, especially when mediated by intermediaries, might not appear in either centered subgraph. Thus, we construct a separate path subgraph connecting the buyer to the seller. Starting from the buyer node, we perform a breadth-first search (BFS) on the full graph to find all paths to the seller up to length 3 (yielding chains like buyer&#x202F;&#x2192;&#x202F;intermediary&#x202F;&#x2192;&#x202F;intermediary&#x202F;&#x2192;&#x202F;seller). We then extract the subgraph induced by those path nodes and edges. This path-focused subgraph explicitly captures the transactional chains between the buyer and seller, facilitating the model&#x2019;s learning of fraud patterns 3 and 4. <xref ref-type="fig" rid="fig6">Figure 6</xref> illustrates an example path subgraph.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Example of a buyer-to-seller interaction path subgraph.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g006.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Diagram illustrating communication between Buyer A on the left and Seller B on the right via multiple horizontal lines, with three intermediaries represented as yellow icons connected to both parties by dark blue lines and linked by a green line.</alt-text>
</graphic>
</fig>
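<p>The bounded-depth path enumeration and induced-subgraph extraction can be sketched as follows (a minimal illustration over a plain adjacency list; names are hypothetical):</p>

```python
from collections import deque

def buyer_seller_paths(adj, buyer, seller, max_len=3):
    """Enumerate all simple paths from buyer to seller with at most
    `max_len` edges, via breadth-first expansion of partial paths."""
    paths, queue = [], deque([[buyer]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == seller and len(path) > 1:
            paths.append(path)
            continue
        if len(path) - 1 >= max_len:   # path already uses max_len edges
            continue
        for nxt in adj.get(node, []):
            if nxt not in path:        # keep paths simple (no revisits)
                queue.append(path + [nxt])
    return paths

def induced_subgraph(paths):
    """Collect the node and edge sets induced by the discovered paths."""
    nodes = {n for p in paths for n in p}
    edges = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    return nodes, edges
```

<p>The induced subgraph keeps exactly the transactional chains between the two parties, which is what the model needs to recognize cycling funds and mule intermediaries.</p>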
</sec>
<sec id="sec7">
<label>3.2</label>
<title>Unified learning algorithm</title>
<p>Our framework jointly learns from all three subgraphs (buyer-centric, seller-centric, and buyer&#x2013;seller interaction). The buyer and seller neighbor subgraphs both aim to capture local neighborhood clustering features, so we use a single graph aggregation network (with shared parameters) for both. The path subgraph focuses on modeling fund flow patterns between the buyer and seller, which is a different objective; hence, it is handled by a separate graph aggregation network with its own parameters (<xref ref-type="bibr" rid="ref1">Aburbeian and Fern&#x00E1;ndez-Veiga, 2024</xref>; <xref ref-type="bibr" rid="ref26">Raju et al., 2024</xref>). Through these two networks, we obtain four representation vectors: a buyer&#x2019;s neighborhood (clustering) representation, a seller&#x2019;s neighborhood representation, a buyer&#x2019;s interaction-path representation, and a seller&#x2019;s interaction-path representation. We concatenate these representations and feed them into a final discriminative model (such as a feedforward neural network) to produce the fraud risk score for the transaction.</p>
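<p>The joint scoring head described above, concatenating the four representation vectors and passing them through a feedforward discriminator, can be sketched numerically as follows (weights here are random placeholders standing in for learned parameters):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_score(reps, W1, b1, W2, b2):
    """Concatenate the four subgraph representations and score with a
    two-layer feedforward head, producing a fraud probability."""
    x = np.concatenate(reps)              # [4 * d] joint representation
    h = np.maximum(0.0, W1 @ x + b1)      # ReLU hidden layer
    logit = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid -> risk score in (0, 1)

d, hidden = 8, 16
# Four views: buyer/seller neighborhood and buyer/seller interaction-path.
reps = [rng.normal(size=d) for _ in range(4)]
W1, b1 = rng.normal(size=(hidden, 4 * d)), np.zeros(hidden)
W2, b2 = rng.normal(size=hidden), 0.0
score = mlp_score(reps, W1, b1, W2, b2)
```

<p>In training, the two aggregation networks and this head would be optimized jointly end-to-end; the sketch only shows the forward scoring path.</p>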
</sec>
<sec id="sec8">
<label>3.3</label>
<title>System architecture</title>
<p>Our framework is deployed in a production environment using a two-tier architecture (<xref ref-type="fig" rid="fig3">Figure 3</xref>). The offline system handles data preparation and model training, while the online system performs real-time inference. Offline, we consume log data from transaction servers, organize it into event streams, sample streams, and feature streams, and feed these into a simulation platform to construct the historical graph data. The simulation generates training samples for the graph learning model. Model training (including subgraph sampling and C2GAT computations) is performed on a distributed training platform.</p>
<p>The online system maintains the live dynamic graph and performs real-time risk scoring. A dynamic graph controller manages data flows and scheduling between components. The main online processes (labeled in <xref ref-type="fig" rid="fig3">Figure 3</xref>) are: (1) retrieving the relevant subgraph structures from the online graph database; (2) retrieving associated node/edge features from the feature platform; and (3) combining the subgraph and features as input to the online inference service, which computes the risk score.</p>
<p>In our industrial setting, the scale of data is enormous: roughly 300 million unique buyers and sellers active per day, about 1 billion transactions per day, and a lookback window of 3&#x202F;days. Thus, the online graph database must handle on the order of 900 million nodes and 3 billion edges in memory at any given time. Despite this scale, our system can operate with millisecond latency, as discussed in Section 5.3.</p>
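<p>The three online steps enumerated above can be sketched as a simple scoring function; the graph database, feature platform, and inference service are represented by hypothetical callables, and the toy &#x201C;model&#x201D; below merely flags a known mule node:</p>

```python
from dataclasses import dataclass

@dataclass
class Txn:
    buyer: str
    seller: str

def score_transaction(txn, fetch_subgraphs, lookup_features, predict):
    """Online scoring flow: (1) subgraph lookup from the graph database,
    (2) feature lookup from the feature platform, (3) model inference."""
    subgraph = fetch_subgraphs(txn.buyer, txn.seller)   # step (1)
    feats = lookup_features(subgraph["nodes"])          # step (2)
    return predict(subgraph, feats)                     # step (3)

# Toy wiring: flag any buyer-seller pair connected through a flagged node.
flagged = {"mule1"}
sub = lambda b, s: {"nodes": [b, "mule1", s]}
feat = lambda nodes: {n: float(n in flagged) for n in nodes}
pred = lambda g, f: max(f.values())
risk = score_transaction(Txn("A", "B"), sub, feat, pred)  # 1.0: mule present
```

<p>In production each of these calls is a networked service under a millisecond-level latency budget, which is why the subgraph sampling in Section 3.1 is kept so aggressive.</p>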
</sec>
</sec>
<sec id="sec9">
<label>4</label>
<title>Dynamic graph learning algorithms</title>
<p>We leverage C2GAT to model temporal changes in the transaction graph. C2GAT was originally devised for point-of-interest recommendation tasks to model users&#x2019; evolving preferences over time; here, we adapt the technique to the financial fraud detection scenario. In this section, we introduce the key components of C2GAT and explain how they are applied to our problem.</p>
<sec id="sec10">
<label>4.1</label>
<title>Temporal encoding</title>
<p>Timestamps on transactions (edges) are critical in financial fraud detection. We design a temporal encoder that maps time information into a high-dimensional vector space, capturing fine-grained temporal patterns and ordering of events. First, we transform absolute timestamps into relative times with respect to the current transaction of interest. This centers the time features on the event being scored. Building on this, and inspired by Mercer&#x2019;s theorem, we represent the temporal kernel function using a learned mapping. Formally, we define a temporal mapping function as:</p>
<disp-formula id="E1">
<mml:math id="M1">
<mml:mi>t</mml:mi>
<mml:mo>&#x21A6;</mml:mo>
<mml:msup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>M</mml:mi>
</mml:msup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2254;</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>&#x03D5;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>&#x03D5;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
</mml:math>
<label>(1)</label>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M2">
<mml:msub>
<mml:mi>&#x03D5;</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> are basis functions and <inline-formula>
<mml:math id="M3">
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> are coefficients.</p>
<p>Empirically, temporal patterns can be characterized by a series of periodic kernel functions. According to the theorem introduced in (<xref ref-type="bibr" rid="ref29">Wang et al., 2021</xref>), a mapping function <inline-formula>
<mml:math id="M4">
<mml:mi>&#x03A6;</mml:mi>
<mml:mo stretchy="true">(</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> with frequency <inline-formula>
<mml:math id="M5">
<mml:mi>&#x03C9;</mml:mi>
</mml:math>
</inline-formula> can be further formalized as:</p>
<disp-formula id="E2">
<mml:math id="M6">
<mml:mi>t</mml:mi>
<mml:mo>&#x21A6;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>&#x03C9;</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2254;</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>cos</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>sin</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>cos</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>j</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>sin</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
</mml:math>
<label>(2)</label>
</disp-formula>
<p>This Fourier series representation provides better truncation properties because the truncated d-dimensional mapping function <inline-formula>
<mml:math id="M7">
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> can approximate the original infinite-dimensional mapping. Subsequently, the mapping functions of <inline-formula>
<mml:math id="M8">
<mml:mi>k</mml:mi>
</mml:math>
</inline-formula> periods (i.e., <inline-formula>
<mml:math id="M9">
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>) are concatenated to form the final temporal encoding:</p>
<disp-formula id="E3">
<mml:math id="M10">
<mml:mi>t</mml:mi>
<mml:mo>&#x21A6;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2254;</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
</mml:math>
<label>(3)</label>
</disp-formula>
<p>This temporal encoding, defined by <xref ref-type="disp-formula" rid="E1">Equations 1</xref>&#x2013;<xref ref-type="disp-formula" rid="E3">3</xref>, is initially node-independent: any two nodes have the same encoding for the same time difference <inline-formula>
<mml:math id="M11">
<mml:mi mathvariant="italic">&#x0394;t</mml:mi>
</mml:math>
</inline-formula>. However, in our fraud scenario, the significance of a given time interval can vary by node. For example, a transaction at 2&#x202F;a.m. might be unusual for one user (indicating risk) but normal for another user who frequently transacts at night. Therefore, we introduce a node-specific temporal encoding. For a specific node v, we define:</p>
<disp-formula id="E4">
<mml:math id="M12">
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x21A6;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>&#x03C9;</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2254;</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>cos</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>sin</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>cos</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>j</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>sin</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>&#x03C9;</mml:mi>
</mml:mfrac>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
</mml:math>
<label>(4)</label>
</disp-formula>
<disp-formula id="E5">
<mml:math id="M13">
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x21A6;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2254;</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03C9;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
</mml:math>
<label>(5)</label>
</disp-formula>
<p><xref ref-type="disp-formula" rid="E5">Equation 5</xref> combines the node-specific temporal encodings across all k frequency components into a unified representation, where <inline-formula>
<mml:math id="M14">
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>:</mml:mo>
<mml:msup>
<mml:mi>&#x211D;</mml:mi>
<mml:mi>d</mml:mi>
</mml:msup>
<mml:mo>&#x2192;</mml:mo>
<mml:mi>&#x211D;</mml:mi>
</mml:math>
</inline-formula> <inline-formula>
<mml:math id="M15">
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> is a set of mapping functions that use node attributes as input to calculate Fourier coefficients. Multilayer perceptrons are used in experiments to implement <inline-formula>
<mml:math id="M16">
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> because of their excellent modeling capability for complex interactions, i.e., <inline-formula>
<mml:math id="M17">
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>MLP</mml:mi>
<mml:mo stretchy="true">(</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>, where <inline-formula>
<mml:math id="M18">
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi>&#x211D;</mml:mi>
<mml:mi>d</mml:mi>
</mml:msup>
</mml:math>
</inline-formula> is the attribute feature vector of node v. Furthermore, by constraining the last layer of the perceptron to output positive values, we satisfy the positivity condition required by Mercer&#x2019;s theorem.</p>
<p>Frequency Selection Rationale. The temporal encoding frequencies &#x03C9;k are selected based on domain knowledge of financial transaction patterns and validated through sensitivity analysis. We use three primary frequencies:</p>
<list list-type="bullet">
<list-item>
<p>&#x03C9;1&#x202F;=&#x202F;1/3600 (hourly): Captures intra-day patterns such as business hours vs. late-night transactions, which are strong indicators of anomalous behavior in offline merchant fraud.</p>
</list-item>
<list-item>
<p>&#x03C9;2&#x202F;=&#x202F;1/86,400 (daily): Models day-level recency, reflecting how recent interactions influence current transaction risk.</p>
</list-item>
<list-item>
<p>&#x03C9;3&#x202F;=&#x202F;1/604,800 (weekly): Captures weekly cycles in legitimate spending behavior, helping distinguish routine weekend shopping from suspicious activity.</p>
</list-item>
</list>
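The node-specific encoding of Equations 3&#x2013;5 can be sketched as follows. This is an illustrative Python sketch, not the authors&#x2019; implementation: the MLP for the coefficients is stood in for by a fixed random projection with a softplus (to keep coefficients positive, per Mercer&#x2019;s theorem), and the dimension `d`, the function names, and the use of periods in seconds are all our assumptions.

```python
import numpy as np

# Periods (seconds) corresponding to the hourly / daily / weekly
# frequencies discussed above -- an assumption for illustration.
OMEGAS = [3600.0, 86400.0, 604800.0]

def node_coefficients(f_v: np.ndarray, d: int) -> np.ndarray:
    """Stand-in for c(v) = MLP(f_v): a fixed random projection followed
    by a softplus so that all Fourier coefficients are positive."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((f_v.shape[0], 2 * d))
    return np.log1p(np.exp(f_v @ W))  # softplus -> positive values

def temporal_encoding(f_v: np.ndarray, delta_t: float, d: int = 4) -> np.ndarray:
    """Concatenate the d-dimensional per-frequency encodings (Eq. 5)."""
    parts = []
    for omega in OMEGAS:
        c = node_coefficients(f_v, d)            # node-specific coefficients
        j = np.arange(1, d + 1)                  # harmonic indices
        cos_part = c[:d] * np.cos(j * np.pi * delta_t / omega)
        sin_part = c[d:] * np.sin(j * np.pi * delta_t / omega)
        parts.append(np.concatenate([cos_part, sin_part]))
    return np.concatenate(parts)                 # shape (k * 2d,)

enc = temporal_encoding(np.ones(8), delta_t=7200.0)
print(enc.shape)  # (24,)
```

Two nodes with different attribute vectors `f_v` thus obtain different encodings for the same time difference, which is the point of the node-specific design.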
</sec>
<sec id="sec11">
<label>4.2</label>
<title>Continuous-time context-aware graph attention (C2GAT)</title>
<p>We incorporate the above temporal encoding into a graph attention mechanism that operates in continuous time. As illustrated in <xref ref-type="fig" rid="fig2">Figure 2</xref>, the C2GAT module learns to weigh a node&#x2019;s neighbors based on both structural context and temporal relevance. It computes attention scores between a target node and each of its neighbors, taking into account when interactions occurred and the context of those interactions (<xref ref-type="bibr" rid="ref26">Raju et al., 2024</xref>).</p>
<p>Specifically, at layer <inline-formula>
<mml:math id="M19">
<mml:mi>l</mml:mi>
</mml:math>
</inline-formula> of C2GAT, given a target node <inline-formula>
<mml:math id="M20">
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> at time <inline-formula>
<mml:math id="M21">
<mml:mi>t</mml:mi>
</mml:math>
</inline-formula>, an attention distribution is created for the neighbor node set <inline-formula>
<mml:math id="M22">
<mml:msub>
<mml:mi>N</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo stretchy="true">{</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2223;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo>&#x003C;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">}</mml:mo>
</mml:math>
</inline-formula> to fuse the representations of its neighbors. Because the temporal kernel defined above is translation invariant, we use the relative times <inline-formula>
<mml:math id="M23">
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>. At time <italic>t</italic>, the continuous-time and context-aware attention value between the node pair&#x2014;the target node <inline-formula>
<mml:math id="M24">
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> and any of its neighbors <inline-formula>
<mml:math id="M25">
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>, can be determined via <xref ref-type="disp-formula" rid="E6">Equations 6</xref> and <xref ref-type="disp-formula" rid="E7">7</xref> as follows:</p>
<disp-formula id="E6">
<mml:math id="M26">
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:msubsup>
<mml:mi>K</mml:mi>
<mml:mi>v</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
<mml:msqrt>
<mml:mi>d</mml:mi>
</mml:msqrt>
</mml:mfrac>
</mml:math>
<label>(6)</label>
</disp-formula>
<disp-formula id="E7">
<mml:math id="M27">
<mml:msub>
<mml:mi>Q</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>&#x2223;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
<mml:msup>
<mml:mi>W</mml:mi>
<mml:mi>Q</mml:mi>
</mml:msup>
</mml:math>
<label>(7)</label>
</disp-formula>
<disp-formula id="E8">
<mml:math id="M28">
<mml:msub>
<mml:mi>K</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo stretchy="true">[</mml:mo>
<mml:msup>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
<mml:msubsup>
<mml:mi>&#x03A6;</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">]</mml:mo>
<mml:msup>
<mml:mi>W</mml:mi>
<mml:mi>K</mml:mi>
</mml:msup>
</mml:math>
<label>(8)</label>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M29">
<mml:msup>
<mml:mi>W</mml:mi>
<mml:mi>Q</mml:mi>
</mml:msup>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M30">
<mml:msup>
<mml:mi>W</mml:mi>
<mml:mi>K</mml:mi>
</mml:msup>
</mml:math>
</inline-formula> are the projection matrices that produce the &#x201C;Query&#x201D; and &#x201C;Key&#x201D; representations, respectively; <inline-formula>
<mml:math id="M31">
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> is the edge feature vector between nodes <inline-formula>
<mml:math id="M32">
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula> and v at time t; <inline-formula>
<mml:math id="M33">
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is the node output at layer <inline-formula>
<mml:math id="M34">
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:math>
</inline-formula> of C2GAT. For more stable training, this attention mechanism can be extended to multi-head attention. As noted above, a key factor in the attention values between a target node and its neighbors is the temporal context and the mutual influence among those neighbors. To describe this influence explicitly, we design a context aggregation function <inline-formula>
<mml:math id="M35">
<mml:msup>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> to implement this functionality. For a target node <inline-formula>
<mml:math id="M36">
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, any neighbor node v, and its corresponding time <inline-formula>
<mml:math id="M37">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>, the following neighbor set is defined: <inline-formula>
<mml:math id="M38">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mo stretchy="true">{</mml:mo>
<mml:msup>
<mml:mi>v</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:msup>
<mml:mi>v</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
</mml:msub>
<mml:mo>&#x2264;</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">}</mml:mo>
</mml:math>
</inline-formula>. Furthermore, we use two aggregation functions to implement <inline-formula>
<mml:math id="M39">
<mml:mi>F</mml:mi>
<mml:mo stretchy="true">(</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>:</p>
<list list-type="simple">
<list-item>
<p>1) To give C2GAT strong expressive power, the recurrent aggregator applies an LSTM to the sequential context:</p>
</list-item>
</list>
<disp-formula id="E9">
<mml:math id="M40">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>R</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mtext>LSTM</mml:mtext>
<mml:mo stretchy="true">(</mml:mo>
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:msup>
<mml:mi>v</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>;</mml:mo>
<mml:msup>
<mml:mi>v</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
<label>(9)</label>
</disp-formula>
<list list-type="simple">
<list-item>
<p>2) To make C2GAT scalable to larger datasets, the convolutional aggregator uses deep convolution operations:</p>
</list-item>
</list>
<disp-formula id="E10">
<mml:math id="M41">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>C</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo movablelimits="false">&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:munderover>
<mml:mo movablelimits="false">&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2223;</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:mi>j</mml:mi>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
<label>(10)</label>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M42">
<mml:mi>W</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:msup>
<mml:mi>&#x211D;</mml:mi>
<mml:mrow>
<mml:mo>&#x2223;</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="true">(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>v</mml:mi>
</mml:msub>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2223;</mml:mo>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>d</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is the convolution kernel.</p>
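The two aggregators of Equations 9 and 10 can be sketched as follows. This is an illustrative Python sketch under our own assumptions: the recurrent variant is shown as a minimal gated recurrence standing in for the LSTM, and the convolutional variant as the weighted sum of Equation 10; all names and shapes are hypothetical.

```python
import numpy as np

d = 4  # assumed hidden dimension

def recurrent_aggregate(H: np.ndarray) -> np.ndarray:
    """H: (n_context, d) states of the neighbors preceding v in time,
    iterated in temporal order. A toy gated recurrence in place of LSTM(...)."""
    state = np.zeros(d)
    for h in H:
        gate = 1.0 / (1.0 + np.exp(-h))          # sigmoid gate
        state = gate * np.tanh(h) + (1 - gate) * state
    return state

def conv_aggregate(H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """W: (n_context, d, d) convolution kernel, as in Equation 10.
    Returns sum_j H[j] @ W[j], a single d-dimensional context vector."""
    return np.einsum('jd,jde->e', H, W)

H = np.ones((3, d))
out = conv_aggregate(H, np.ones((3, d, d)))
print(out)  # each entry sums 3 * 4 ones = 12
```

The convolutional variant has no sequential dependency between context elements, which is what makes it cheaper and more scalable than the recurrent one.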
<p>By stacking L of the aforementioned C2GAT layers, we can leverage higher-order structural information in the dynamic graph. Thus, each node&#x2019;s representation at time t is denoted as <inline-formula>
<mml:math id="M43">
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:mi>v</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>L</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mi>V</mml:mi>
</mml:math>
</inline-formula>. Since user representation is the focus of this work, we rewrite each user&#x2019;s representation at time <italic>t</italic> as <inline-formula>
<mml:math id="M44">
<mml:msubsup>
<mml:mi>h</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>.</p>
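Pulling Equations 6&#x2013;8 together, a single attention score can be sketched as below. This is an illustrative Python sketch, not the authors&#x2019; implementation: the node state, edge features, context summary, and temporal encoding are precomputed toy inputs, and all names, shapes, and the stand-in `phi(.)` are our assumptions.

```python
import numpy as np

d = 8  # assumed per-slot dimension
rng = np.random.default_rng(1)
W_Q = rng.standard_normal((3 * d, d))  # "Query" projection
W_K = rng.standard_normal((3 * d, d))  # "Key" projection

def phi(delta_t: float) -> np.ndarray:
    """Toy stand-in for the d-dimensional temporal encoding Phi^M_d."""
    j = np.arange(1, d // 2 + 1)
    return np.concatenate([np.cos(j * delta_t), np.sin(j * delta_t)])

def attention_score(h_target, F_neighbor, e_edge, t, t_v):
    # Query (Eq. 7): target state, a zero edge slot, and the encoding of 0
    q = np.concatenate([h_target, np.zeros(d), phi(0.0)]) @ W_Q
    # Key (Eq. 8): neighbor context summary, edge features, encoding of t - t_v
    k = np.concatenate([F_neighbor, e_edge, phi(t - t_v)]) @ W_K
    # Scaled dot product (Eq. 6); a softmax over neighbors would follow
    return q @ k / np.sqrt(d)

score = attention_score(rng.standard_normal(d), rng.standard_normal(d),
                        rng.standard_normal(d), t=10.0, t_v=4.0)
print(np.isfinite(score))
```

Computing one such score per neighbor and normalizing over the neighbor set yields the attention distribution used to fuse neighbor representations.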
</sec>
<sec id="sec12">
<label>4.3</label>
<title>Distinction from prior work</title>
<p>While C2GAT shares the general paradigm of temporal graph attention with methods such as TGAT and TGN, it introduces two key innovations:</p>
<p>(1) Node-Specific Temporal Encoding. Unlike TGAT&#x2019;s fixed Fourier basis where all nodes share identical time representations, C2GAT learns node-dependent Fourier coefficients (<xref ref-type="disp-formula" rid="E4">Equation 4</xref>). This allows the model to capture that the same time interval (e.g., 2&#x202F;h) may have different significance for different entities; a late-night transaction may be anomalous for one user but routine for another. Our ablation (Section 5.3) shows this contributes approximately 4.9 percentage points to P@20R.</p>
<p>(2) Explicit Context Aggregation. TGAT computes attention scores independently for each neighbor without considering inter-neighbor relationships. C2GAT introduces the context function (<xref ref-type="disp-formula" rid="E8">Equations 8</xref>&#x2013;<xref ref-type="disp-formula" rid="E10">10</xref>) that explicitly aggregates information from neighbors preceding a given neighbor in time. This captures sequential dependencies among a node&#x2019;s interactions, critical for detecting fraud patterns where the ordering of transactions matters. The recurrent variant (Our-R) consistently outperforms the convolutional variant, confirming the value of sequential context modeling.</p>
</sec>
<sec id="sec13">
<label>4.4</label>
<title>Model learning</title>
<p>In the cashback fraud detection scenario, we formally define a sample D as a quadruple <inline-formula>
<mml:math id="M45">
<mml:mi>D</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula>. Here, u represents the buyer; v represents the seller; t represents the timestamp when the sample occurred; y is the label, which in the cashback fraud detection scenario indicates whether the transaction was confirmed as fraudulent during the subsequent tracking period. The final dataset we use is <inline-formula>
<mml:math id="M46">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>. Due to the large number of users and transactions involved, we use random negative sampling to approximate the calculation. Following a maximum likelihood estimation strategy, the final loss function is given by <xref ref-type="disp-formula" rid="E11">Equation 11</xref>:</p>
<disp-formula id="E11">
<mml:math id="M47">
<mml:munder>
<mml:mo movablelimits="false">&#x2211;</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>&#x2208;</mml:mo>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
</mml:mrow>
</mml:munder>
<mml:mi>C</mml:mi>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo stretchy="true">&#x0302;</mml:mo>
</mml:mover>
<mml:mspace width="0.25em"/>
<mml:mo stretchy="true">(</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>&#x2223;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>v</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:mi>&#x03C9;</mml:mi>
<mml:mo stretchy="true">&#x2016;</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:math>
<label>(11)</label>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M48">
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
</mml:msubsup>
<mml:mo>&#x222A;</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
</mml:msubsup>
</mml:math>
</inline-formula> is the set of negative samples retained after fixed-ratio downsampling, <inline-formula>
<mml:math id="M49">
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
</mml:msubsup>
</mml:math>
</inline-formula> is the set of all positive samples in the training set; <inline-formula>
<mml:math id="M50">
<mml:mi>C</mml:mi>
<mml:mo stretchy="true">(</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>&#x22C5;</mml:mo>
<mml:mo stretchy="true">)</mml:mo>
</mml:math>
</inline-formula> represents the cross-entropy loss function; <inline-formula>
<mml:math id="M51">
<mml:msup>
<mml:mrow>
<mml:mo stretchy="true">&#x2016;</mml:mo>
<mml:mi>&#x03C9;</mml:mi>
<mml:mo stretchy="true">&#x2016;</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:math>
</inline-formula> is the L<sub>2</sub> regularization term applied to the model weights.</p>
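<p>As a concrete illustration, the sampled objective in Equation 11 can be sketched as follows. This is a minimal NumPy sketch rather than the production implementation: it assumes a sigmoid link probability, and the function and variable names are illustrative.</p>

```python
import numpy as np

def sampled_loss(pos_scores, neg_scores, weights, lam=1e-3):
    """Cross-entropy over positive and downsampled negative samples in S_D,
    plus an L2 penalty on the model weights (cf. Equation 11).
    `lam` plays the role of the regularization coefficient in Table 2."""
    def bce(scores, label):
        p_hat = 1.0 / (1.0 + np.exp(-scores))  # sigmoid -> estimated p(y | u, v, t)
        return -(label * np.log(p_hat) + (1 - label) * np.log(1 - p_hat))

    # y = 1 for observed (positive) interactions, y = 0 for sampled negatives.
    data_loss = bce(pos_scores, 1.0).sum() + bce(neg_scores, 0.0).sum()
    return data_loss + lam * np.sum(weights ** 2)
```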
</sec>
</sec>
<sec id="sec14">
<label>5</label>
<title>Experiment</title>
<p>Our experimental evaluation is structured around two distinct objectives with different scopes of inference. The evaluation on public datasets (Section 5.2) is designed to validate the fundamental temporal graph modeling capabilities of our C2GAT module in a reproducible academic setting. These datasets, Reddit, Wikipedia, MOOC, and LastFM, represent general dynamic interaction networks and enable direct comparison with state-of-the-art methods. While they do not capture financial transaction semantics or fraud-specific patterns, they serve as rigorous benchmarks for assessing the core technical components: temporal encoding effectiveness, attention mechanism design, and the ability to capture evolving structural dependencies. In contrast, the evaluation on industrial datasets (Section 5.3) specifically validates the end-to-end framework for its intended financial fraud detection application. Only conclusions drawn from the industrial experiments should be interpreted as evidence of domain-specific effectiveness. We caution readers against extrapolating fraud detection performance directly from public dataset results, as the underlying data characteristics differ substantially in terms of class imbalance ratios, temporal granularity, and the semantic meaning of node interactions.</p>
<p>First, to validate the effectiveness of our core temporal graph learning module, C2GAT, we benchmark its performance against a wide array of state-of-the-art dynamic graph models on four standard public datasets (Reddit, Wikipedia, MOOC, LastFM). This evaluation, detailed in Section 5.2, enables direct and reproducible comparisons with existing academic work on the fundamental tasks of link prediction and node classification.</p>
<p>Second, to demonstrate the practical utility and superiority of our complete, end-to-end framework for its intended application, we perform a thorough evaluation on two massive, real-world industrial datasets directly from Ant Financial&#x2019;s cashback fraud detection system. This evaluation, presented in Section 5.3, includes comparisons with the production baseline, a comprehensive ablation study, and system-level performance tests to confirm its suitability for a high-throughput, low-latency environment.</p>
<sec id="sec15">
<label>5.1</label>
<title>Experimental setup</title>
<p>Given the industrial-scale datasets involved (with millions of samples and high-dimensional features), we used a parameter server-based distributed training setup. Our training cluster consisted of 10 worker nodes and 2 parameter-server nodes. For a fair comparison, we aligned our hyperparameters with those used by strong baselines.</p>
<p>We used the Adam optimizer with a learning rate of 10<sup>&#x2212;4</sup>, a regularization coefficient of 10<sup>&#x2212;3</sup>, and a batch size of 256. For simplicity, we uniformly set the hidden representation dimension for users and events in the graph to 64. We used a 3-layer graph neural network during aggregation: expanding the set of nodes participating in aggregation increases the information available to the model and yields a further slight improvement in performance.</p>
<p>For the temporal encoding function, we selected temporal encoding parameters consistent with those in reference (<xref ref-type="bibr" rid="ref27">Thongprayoon et al., 2023</xref>) and set the normalized time unit to 1&#x202F;day (86,400&#x202F;s). The frequency parameters in the Fourier basis were carefully tuned to capture both short-term and long-term temporal dependencies, with multiple frequencies covering patterns from hours to weeks.</p>
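<p>Under these settings, the Fourier-basis temporal encoding can be sketched as below. This is a simplified sketch: the exact basis form (scaling, learned vs. fixed frequencies, node-specific modulation) follows the cited reference, and the function name is illustrative; the frequencies are those listed in Table 2.</p>

```python
import numpy as np

# Frequencies from Table 2: hourly, daily, and weekly cycles (units of 1/seconds).
OMEGAS = np.array([1 / 3_600, 1 / 86_400, 1 / 604_800])

def temporal_encoding(delta_t_seconds):
    """Map a time gap to Fourier features [cos(2*pi*w*dt), sin(2*pi*w*dt)]
    per frequency, covering short-term (hourly) through long-term (weekly)
    temporal dependencies."""
    phase = 2 * np.pi * OMEGAS * delta_t_seconds
    return np.concatenate([np.cos(phase), np.sin(phase)])
```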
</sec>
<sec id="sec16">
<label>5.2</label>
<title>Evaluation on public datasets</title>
<p>The experiments in this section are designed solely to validate the core temporal graph learning capabilities of C2GAT&#x2014;specifically, temporal encoding effectiveness and context-aware attention design&#x2014;through reproducible comparison with academic baselines. These datasets (Reddit, Wikipedia, MOOC, LastFM) represent general interaction networks that lack fraud semantics, exhibit different class distributions, and contain no adversarial dynamics. Readers should not extrapolate fraud detection performance from these results; Section 5.3 provides the appropriate evidence for domain-specific claims. <xref ref-type="table" rid="tab1">Table 1</xref> summarizes the statistics of these datasets.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Statistics of four publicly available datasets.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Dataset</th>
<th align="center" valign="top">Nodes</th>
<th align="center" valign="top">Edges</th>
<th align="center" valign="top">Features</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">Reddit</td>
<td align="center" valign="middle">11,500</td>
<td align="center" valign="middle">685,812</td>
<td align="center" valign="middle">172</td>
</tr>
<tr>
<td align="left" valign="middle">Wikipedia</td>
<td align="center" valign="middle">9,580</td>
<td align="center" valign="middle">163,255</td>
<td align="center" valign="middle">172</td>
</tr>
<tr>
<td align="left" valign="middle">MOOC</td>
<td align="center" valign="middle">7,348</td>
<td align="center" valign="middle">428,917</td>
<td align="center" valign="middle">4</td>
</tr>
<tr>
<td align="left" valign="middle">LastFM</td>
<td align="center" valign="middle">2,100</td>
<td align="center" valign="middle">1,317,460</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>These public datasets are used solely for validating general temporal graph learning capabilities. Domain-specific fraud detection conclusions are drawn exclusively from industrial dataset experiments in Section 5.3.</p>
</table-wrap-foot>
</table-wrap>
<p>We compared C2GAT against ten baseline methods across three categories:</p>
<list list-type="bullet">
<list-item>
<p>Sequence-based (time-series) models: Time-LSTM and Jodie (an RNN-based temporal link prediction model).</p>
</list-item>
<list-item>
<p>Static graph neural networks: GraphSAGE and GAT (Graph Attention Network), which ignore temporal information by treating the graph as static.</p>
</list-item>
<list-item>
<p>Dynamic graph learning models: GraphSAGE-T and GAT-T (temporal extensions of GraphSAGE/GAT), CTDNE (Continuous-Time Dynamic Network Embeddings), M2DNE, GCRN (Graph Convolutional Recurrent Network), and TGAT (Temporal Graph Attention) as representative methods covering a range of approaches.</p>
</list-item>
</list>
<sec id="sec17">
<label>5.2.1</label>
<title>Sampling strategy</title>
<p>For buyer-centric subgraphs, we perform temporal sampling by selecting the 30 most recent edges at each hop, preserving chronological ordering. For seller-centric subgraphs, we employ amount-stratified sampling: edges are first bucketed by transaction amount into quartiles based on known fraud amount distributions, then sampled proportionally from each bucket to maximize coverage of fraud-relevant transactions while maintaining computational efficiency.</p>
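<p>The two sampling rules above admit a compact sketch. This is an illustrative simplification: in particular, we bucket by empirical amount quartiles here, whereas the production system buckets according to known fraud amount distributions; the field names are hypothetical.</p>

```python
import random

def sample_recent(edges, k=30):
    """Buyer-centric: keep the k most recent edges, preserving chronological order."""
    return sorted(edges, key=lambda e: e["t"])[-k:]

def sample_stratified(edges, k=30, rng=random):
    """Seller-centric: bucket edges into amount quartiles, sample each bucket
    proportionally, and return the result in chronological order."""
    by_amount = sorted(edges, key=lambda e: e["amount"])
    n = len(by_amount)
    buckets = [by_amount[i * n // 4:(i + 1) * n // 4] for i in range(4)]
    per_bucket = max(1, k // 4)
    sampled = []
    for bucket in buckets:
        sampled.extend(rng.sample(bucket, min(per_bucket, len(bucket))))
    return sorted(sampled, key=lambda e: e["t"])
```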
</sec>
<sec id="sec18">
<label>5.2.2</label>
<title>Label delay handling</title>
<p>Ground-truth labels in our industrial datasets are obtained through a retrospective confirmation process with an average delay of 14&#x2013;30&#x202F;days. To prevent label leakage, we enforce strict temporal separation: for any transaction at time <italic>t</italic>, only graph edges with timestamps <italic>t</italic>&#x2032;&#x202F;&#x003C;&#x202F;<italic>t</italic> are included in subgraph construction. The training set uses transactions from months 1&#x2013;3 with labels confirmed by month 4, while the test set uses transactions from months 4&#x2013;5 (Taobao) or months 4&#x2013;6 (Offline Merchant) with labels confirmed subsequently. This protocol ensures that model evaluation reflects realistic deployment conditions where predictions must be made before label availability.</p>
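<p>This leakage-prevention protocol reduces to two simple filters, sketched below for the months 4&#x2013;5 (Taobao) split; the field names are illustrative.</p>

```python
def temporal_split(transactions):
    """Train on months 1-3 (labels confirmed by month 4); test on months 4-5."""
    train = [tx for tx in transactions if tx["month"] <= 3]
    test = [tx for tx in transactions if 4 <= tx["month"] <= 5]
    return train, test

def subgraph_edges(all_edges, t_query):
    """Strict temporal separation: only edges strictly earlier than the query
    transaction's timestamp may enter its subgraph."""
    return [e for e in all_edges if e["t"] < t_query]
```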
<p>The hyperparameter settings are summarized in <xref ref-type="table" rid="tab2">Table 2</xref>.</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>Hyperparameter settings.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" char="&#x00D7;">Parameter</th>
<th align="char" valign="top" char="&#x00D7;">Value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Learning rate</td>
<td align="char" valign="middle" char="&#x00D7;">1&#x202F;&#x00D7;&#x202F;10<sup>&#x2212;4</sup></td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Regularization (<italic>&#x03BB;</italic>)</td>
<td align="char" valign="middle" char="&#x00D7;">1&#x202F;&#x00D7;&#x202F;10<sup>&#x2212;3</sup></td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Batch size</td>
<td align="char" valign="middle" char="&#x00D7;">256</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Hidden dimension</td>
<td align="char" valign="middle" char="&#x00D7;">64</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">GNN layers</td>
<td align="char" valign="middle" char="&#x00D7;">3</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Temporal frequencies</td>
<td align="char" valign="middle" char="&#x00D7;">&#x03C9; &#x2208; {1/3,600, 1/86,400, 1/604,800}</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Buyer subgraph hops</td>
<td align="char" valign="middle" char="&#x00D7;">2</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Seller subgraph hops</td>
<td align="char" valign="middle" char="&#x00D7;">1</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Max edges per hop</td>
<td align="char" valign="middle" char="&#x00D7;">30</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Path search depth</td>
<td align="char" valign="middle" char="&#x00D7;">3</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Sliding window</td>
<td align="char" valign="middle" char="&#x00D7;">3&#x202F;days</td>
</tr>
<tr>
<td align="left" valign="middle" char="&#x00D7;">Negative sampling ratio</td>
<td align="char" valign="middle" char="&#x00D7;">1:5</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>We also performed a hyperparameter sensitivity analysis using the Reddit dataset. <xref ref-type="fig" rid="fig7">Figure 7</xref> shows the effect of varying the embedding dimension and the number of GNN layers. Performance improves as the embedding dimension increases up to 64 and then plateaus, suggesting that 64 offers a good trade-off between accuracy and computational cost. Using 3 GNN layers yields the best performance: deeper networks (&#x003E;3 layers) suffered from over-smoothing, while shallower networks (&#x003C;3 layers) failed to capture sufficient context.</p>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Hyperparameter sensitivity analysis.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g007.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Two line charts compare performance metrics for Link Prediction (accuracy, blue circles) and Node Classification (AUC, orange squares). The left chart plots performance versus Embedding Dimension, showing Link Prediction stabilizes near 0.95 and Node Classification near 0.70 with increased dimension. The right chart plots performance versus Number of GNN Layers, again showing Link Prediction remains high, peaking at three layers, while Node Classification peaks at three and decreases slightly with more layers.</alt-text>
</graphic>
</fig>
<p><xref ref-type="table" rid="tab3">Tables 3</xref> and <xref ref-type="table" rid="tab4">4</xref> report the performance of our approach versus the baselines on link prediction and node classification, respectively. We evaluate link prediction under two settings: transductive (predicting links within the observed nodes) and inductive (predicting links involving new nodes unseen during training). For link prediction, we report accuracy (Acc) and average precision (AP); for node classification, we report AUC (area under the ROC curve). We present results for our framework in two variants: Our-C (using the convolutional aggregator in C2GAT) and Our-R (using the recurrent LSTM aggregator).</p>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Performance comparison for link prediction on public datasets (ACC.: accuracy/AP: average precision).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" colspan="2" rowspan="2">Method</th>
<th align="center" valign="top" colspan="2">Reddit</th>
<th align="center" valign="top" colspan="2">Wikipedia</th>
<th align="center" valign="top" colspan="2">MOOC</th>
<th align="center" valign="top" colspan="2">LastFM</th>
</tr>
<tr>
<th align="center" valign="top">Acc</th>
<th align="center" valign="top">AP</th>
<th align="center" valign="top">Acc</th>
<th align="center" valign="top">AP</th>
<th align="center" valign="top">Acc</th>
<th align="center" valign="top">AP</th>
<th align="center" valign="top">Acc</th>
<th align="center" valign="top">AP</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" rowspan="12">Transductive</td>
<td align="left" valign="middle">Time-LSTM</td>
<td align="char" valign="middle" char=".">0.715</td>
<td align="char" valign="middle" char=".">0.728</td>
<td align="char" valign="middle" char=".">0.578</td>
<td align="char" valign="middle" char=".">0.582</td>
<td align="char" valign="middle" char=".">0.572</td>
<td align="char" valign="middle" char=".">0.580</td>
<td align="center" valign="middle">0.517</td>
<td align="center" valign="middle">0.532</td>
</tr>
<tr>
<td align="left" valign="middle">Jodie</td>
<td align="char" valign="middle" char=".">0.916</td>
<td align="char" valign="middle" char=".">0.982</td>
<td align="char" valign="middle" char=".">0.843</td>
<td align="char" valign="middle" char=".">0.936</td>
<td align="char" valign="middle" char=".">0.790</td>
<td align="char" valign="middle" char=".">0.782</td>
<td align="center" valign="middle">0.628</td>
<td align="center" valign="middle">0.657</td>
</tr>
<tr>
<td align="left" valign="middle">GraphSAGE</td>
<td align="char" valign="middle" char=".">0.938</td>
<td align="char" valign="middle" char=".">0.986</td>
<td align="char" valign="middle" char=".">0.895</td>
<td align="char" valign="middle" char=".">0.964</td>
<td align="char" valign="middle" char=".">0.713</td>
<td align="char" valign="middle" char=".">0.758</td>
<td align="center" valign="middle">0.653</td>
<td align="center" valign="middle">0.704</td>
</tr>
<tr>
<td align="left" valign="middle">GAT</td>
<td align="char" valign="middle" char=".">0.936</td>
<td align="char" valign="middle" char=".">0.987</td>
<td align="char" valign="middle" char=".">0.887</td>
<td align="char" valign="middle" char=".">0.959</td>
<td align="char" valign="middle" char=".">0.685</td>
<td align="char" valign="middle" char=".">0.734</td>
<td align="center" valign="middle">0.662</td>
<td align="center" valign="middle">0.687</td>
</tr>
<tr>
<td align="left" valign="middle">CTDNE</td>
<td align="char" valign="middle" char=".">0.789</td>
<td align="char" valign="middle" char=".">0.865</td>
<td align="char" valign="middle" char=".">0.561</td>
<td align="char" valign="middle" char=".">0.576</td>
<td align="char" valign="middle" char=".">0.592</td>
<td align="char" valign="middle" char=".">0.604</td>
<td align="center" valign="middle">0.401</td>
<td align="center" valign="middle">0.448</td>
</tr>
<tr>
<td align="left" valign="middle">M2DNE</td>
<td align="char" valign="middle" char=".">0.871</td>
<td align="char" valign="middle" char=".">0.948</td>
<td align="char" valign="middle" char=".">0.825</td>
<td align="char" valign="middle" char=".">0.916</td>
<td align="char" valign="middle" char=".">0.695</td>
<td align="char" valign="middle" char=".">0.703</td>
<td align="center" valign="middle">0.603</td>
<td align="center" valign="middle">0.627</td>
</tr>
<tr>
<td align="left" valign="middle">GCRN</td>
<td align="char" valign="middle" char=".">0.939</td>
<td align="char" valign="middle" char=".">0.987</td>
<td align="char" valign="middle" char=".">0.892</td>
<td align="char" valign="middle" char=".">0.961</td>
<td align="char" valign="middle" char=".">0.718</td>
<td align="char" valign="middle" char=".">0.753</td>
<td align="center" valign="middle">0.661</td>
<td align="center" valign="middle">0.729</td>
</tr>
<tr>
<td align="left" valign="middle">GraphSAGE-T</td>
<td align="char" valign="middle" char=".">0.936</td>
<td align="char" valign="middle" char=".">0.985</td>
<td align="char" valign="middle" char=".">0.903</td>
<td align="char" valign="middle" char=".">0.970</td>
<td align="char" valign="middle" char=".">0.763</td>
<td align="char" valign="middle" char=".">0.794</td>
<td align="center" valign="middle">0.686</td>
<td align="center" valign="middle">0.784</td>
</tr>
<tr>
<td align="left" valign="middle">GAT-T</td>
<td align="char" valign="middle" char=".">0.938</td>
<td align="char" valign="middle" char=".">0.987</td>
<td align="char" valign="middle" char=".">0.905</td>
<td align="char" valign="middle" char=".">0.969</td>
<td align="char" valign="middle" char=".">0.762</td>
<td align="char" valign="middle" char=".">0.797</td>
<td align="center" valign="middle">0.685</td>
<td align="center" valign="middle">0.766</td>
</tr>
<tr>
<td align="left" valign="middle">TGAT</td>
<td align="char" valign="middle" char=".">0.940</td>
<td align="char" valign="middle" char=".">0.988</td>
<td align="char" valign="middle" char=".">0.881</td>
<td align="char" valign="middle" char=".">0.957</td>
<td align="char" valign="middle" char=".">0.694</td>
<td align="char" valign="middle" char=".">0.724</td>
<td align="center" valign="middle">0.683</td>
<td align="center" valign="middle">0.681</td>
</tr>
<tr>
<td align="left" valign="middle">Our-C</td>
<td align="char" valign="middle" char=".">0.942</td>
<td align="char" valign="middle" char=".">0.989</td>
<td align="char" valign="middle" char=".">0.912</td>
<td align="char" valign="middle" char=".">0.976</td>
<td align="char" valign="middle" char=".">0.797</td>
<td align="char" valign="middle" char=".">0.859</td>
<td align="center" valign="middle">0.710</td>
<td align="center" valign="middle">0.800</td>
</tr>
<tr>
<td align="left" valign="middle">Our-R</td>
<td align="char" valign="middle" char=".">0.943</td>
<td align="char" valign="middle" char=".">0.989</td>
<td align="char" valign="middle" char=".">0.913</td>
<td align="char" valign="middle" char=".">0.976</td>
<td align="char" valign="middle" char=".">0.798</td>
<td align="char" valign="middle" char=".">0.867</td>
<td align="center" valign="middle">0.724</td>
<td align="center" valign="middle">0.820</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="8">Inductive</td>
<td align="left" valign="middle">GraphSAGE</td>
<td align="char" valign="middle" char=".">0.907</td>
<td align="char" valign="middle" char=".">0.971</td>
<td align="char" valign="middle" char=".">0.870</td>
<td align="char" valign="middle" char=".">0.951</td>
<td align="char" valign="middle" char=".">0.706</td>
<td align="char" valign="middle" char=".">0.745</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">GAT</td>
<td align="char" valign="middle" char=".">0.909</td>
<td align="char" valign="middle" char=".">0.972</td>
<td align="char" valign="middle" char=".">0.861</td>
<td align="char" valign="middle" char=".">0.944</td>
<td align="char" valign="middle" char=".">0.670</td>
<td align="char" valign="middle" char=".">0.708</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">GCRN</td>
<td align="char" valign="middle" char=".">0.907</td>
<td align="char" valign="middle" char=".">0.969</td>
<td align="char" valign="middle" char=".">0.861</td>
<td align="char" valign="middle" char=".">0.940</td>
<td align="char" valign="middle" char=".">0.703</td>
<td align="char" valign="middle" char=".">0.751</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">GraphSAGE-T</td>
<td align="char" valign="middle" char=".">0.906</td>
<td align="char" valign="middle" char=".">0.971</td>
<td align="char" valign="middle" char=".">0.883</td>
<td align="char" valign="middle" char=".">0.962</td>
<td align="char" valign="middle" char=".">0.772</td>
<td align="char" valign="middle" char=".">0.805</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">GAT-T</td>
<td align="char" valign="middle" char=".">0.910</td>
<td align="char" valign="middle" char=".">0.974</td>
<td align="char" valign="middle" char=".">0.887</td>
<td align="char" valign="middle" char=".">0.962</td>
<td align="char" valign="middle" char=".">0.776</td>
<td align="char" valign="middle" char=".">0.812</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">TGAT</td>
<td align="char" valign="middle" char=".">0.912</td>
<td align="char" valign="middle" char=".">0.973</td>
<td align="char" valign="middle" char=".">0.861</td>
<td align="char" valign="middle" char=".">0.942</td>
<td align="char" valign="middle" char=".">0.687</td>
<td align="char" valign="middle" char=".">0.712</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">Our-C</td>
<td align="char" valign="middle" char=".">0.913</td>
<td align="char" valign="middle" char=".">0.974</td>
<td align="char" valign="middle" char=".">0.890</td>
<td align="char" valign="middle" char=".">0.966</td>
<td align="char" valign="middle" char=".">0.801</td>
<td align="char" valign="middle" char=".">0.848</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
<tr>
<td align="left" valign="middle">Our-R</td>
<td align="char" valign="middle" char=".">0.914</td>
<td align="char" valign="middle" char=".">0.974</td>
<td align="char" valign="middle" char=".">0.891</td>
<td align="char" valign="middle" char=".">0.967</td>
<td align="char" valign="middle" char=".">0.803</td>
<td align="char" valign="middle" char=".">0.852</td>
<td align="center" valign="middle">&#x2013;</td>
<td align="center" valign="middle">&#x2013;</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="tab4">
<label>Table 4</label>
<caption>
<p>Performance comparison for node classification on public datasets (AUC).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Method</th>
<th align="center" valign="top">Reddit</th>
<th align="center" valign="top">Wikipedia</th>
<th align="center" valign="top">MOOC</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Time-LSTM</td>
<td align="char" valign="top" char=".">0.637</td>
<td align="char" valign="top" char=".">0.785</td>
<td align="char" valign="top" char=".">0.701</td>
</tr>
<tr>
<td align="left" valign="top">Jodie</td>
<td align="char" valign="top" char=".">0.617</td>
<td align="char" valign="top" char=".">0.771</td>
<td align="char" valign="top" char=".">0.672</td>
</tr>
<tr>
<td align="left" valign="top">GraphSAGE</td>
<td align="char" valign="top" char=".">0.657</td>
<td align="char" valign="top" char=".">0.804</td>
<td align="char" valign="top" char=".">0.683</td>
</tr>
<tr>
<td align="left" valign="top">GAT</td>
<td align="char" valign="top" char=".">0.668</td>
<td align="char" valign="top" char=".">0.855</td>
<td align="char" valign="top" char=".">0.654</td>
</tr>
<tr>
<td align="left" valign="top">GCRN</td>
<td align="char" valign="top" char=".">0.681</td>
<td align="char" valign="top" char=".">0.864</td>
<td align="char" valign="top" char=".">0.677</td>
</tr>
<tr>
<td align="left" valign="top">GraphSAGE-T</td>
<td align="char" valign="top" char=".">0.665</td>
<td align="char" valign="top" char=".">0.858</td>
<td align="char" valign="top" char=".">0.686</td>
</tr>
<tr>
<td align="left" valign="top">GAT-T</td>
<td align="char" valign="top" char=".">0.681</td>
<td align="char" valign="top" char=".">0.858</td>
<td align="char" valign="top" char=".">0.677</td>
</tr>
<tr>
<td align="left" valign="top">TGAT</td>
<td align="char" valign="top" char=".">0.648</td>
<td align="char" valign="top" char=".">0.868</td>
<td align="char" valign="top" char=".">0.687</td>
</tr>
<tr>
<td align="left" valign="top">Our-R</td>
<td align="char" valign="top" char=".">0.691</td>
<td align="char" valign="top" char=".">0.882</td>
<td align="char" valign="top" char=".">0.695</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>From <xref ref-type="table" rid="tab3">Tables 3</xref> and <xref ref-type="table" rid="tab4">4</xref>, we draw several observations:</p>
<list list-type="bullet">
<list-item>
<p>Superior accuracy of our approach: Our stacked C2GAT models (Our-C and Our-R) consistently outperform all baseline models on both link prediction and node classification. The gains are consistent across datasets and largest on the more challenging MOOC and LastFM benchmarks, confirming the effectiveness of our unified approach. Notably, the recurrent variant (Our-R) slightly outperforms the convolutional variant (Our-C) on most metrics, suggesting that capturing sequential context yields an edge in performance.</p>
</list-item>
<list-item>
<p>Importance of dynamic modeling: Among the baselines, dynamic graph models generally outperform static graph models and pure sequence models. This highlights the importance of modeling both the temporal dynamics and the graph structure for these tasks. Our approach (especially Our-R) achieves the best results across all datasets, with particularly large improvements on the more complex LastFM and MOOC datasets&#x2014;indicating its strength in capturing intricate temporal patterns in interaction data.</p>
</list-item>
</list>
<p>The results above demonstrate that C2GAT effectively captures temporal-structural dependencies in dynamic graphs. However, link prediction and node classification on these benchmarks do not correspond to fraud detection, which involves identifying rare anomalous patterns in an adversarial context. The domain-specific effectiveness of our framework is evaluated separately in Section 5.3.</p>
<p>To further understand the impact of our design choices, we analyzed the contribution of the temporal encoding mechanism. We experimented with variants of our model using simpler temporal encoding (e.g., only a single frequency or no node-specific modulation). We found that our full model (with multiple frequency components and node-specific encoding) significantly outperforms these simpler variants, especially in capturing long-term dependencies. For example, on the Reddit dataset, the full C2GAT achieved about 88% accuracy in detecting patterns spanning several weeks, whereas a variant without the advanced temporal encoding achieved only around 68%&#x2014;a relative improvement of nearly 30%. This validates our decision to incorporate multiple Fourier components to simultaneously capture short-term and long-term temporal patterns (<xref ref-type="fig" rid="fig8">Figure 8</xref> illustrates this effect).</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>Effect of temporal encoding on pattern detection.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g008.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Bar chart comparing pattern detection performance (AUC) for short-term patterns in blue and long-term patterns in orange across five temporal encoding configurations: no temporal encoding, time decay only, Fourier basis (one frequency), Fourier basis (five frequencies), and C2GAT full model. Performance increases as temporal encoding becomes more complex, with C2GAT showing the highest AUC for both short-term and long-term pattern detection.</alt-text>
</graphic>
</fig>
<p>We also conducted a qualitative evaluation of the learned representations. <xref ref-type="fig" rid="fig9">Figure 9</xref> shows a t-SNE visualization of node embeddings from Our-R on the Wikipedia dataset, with points colored by a proxy for fraudulent vs. legitimate behavior. We observe that the embeddings produced by C2GAT are highly discriminative: nodes involved in fraud tend to cluster together in the embedding space, separate from legitimate nodes. These fraud clusters are characterized by distinct temporal and structural patterns (e.g., dense connectivity within a short time frame), which the model successfully captures. This visualization provides intuition that the model&#x2019;s learned embeddings meaningfully encode the differences in behavior critical for fraud detection.</p>
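<p>A qualitative check of this kind can be reproduced in a few lines. The sketch below uses synthetic two-cluster data as a stand-in for the actual learned embeddings, which are not publicly available; the parameter choices are illustrative.</p>

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for 64-d node embeddings with two behavioral clusters,
# mimicking the fraud vs. legitimate separation visualized in Figure 9.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 64)),   # "legitimate" cluster
               rng.normal(4.0, 1.0, (30, 64))])  # "fraud" cluster

# Project to 2-d for plotting; perplexity must be smaller than the sample count.
emb = TSNE(n_components=2, perplexity=5, init="pca", random_state=0).fit_transform(X)
```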
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption>
<p>T-SNE visualization of node embeddings from C2GAT.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g009.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Scatter plot visualizing t-SNE dimension reduction with two well-separated clusters: red dots in the top right representing fraudulent transactions, and blue dots in the center representing legitimate transactions, as indicated by the legend.</alt-text>
</graphic>
</fig>
<p>To test the transferability of our approach, we performed cross-domain experiments. We trained the model on one dataset (e.g., Reddit) and tested it on a different one (e.g., Wikipedia) without fine-tuning. The results, summarized in <xref ref-type="fig" rid="fig10">Figure 10</xref>, show strong transfer learning capability. For domains with similar interaction patterns (Reddit &#x2192; Wikipedia), Our-R retained about 92% of its accuracy relative to training and testing on the same domain. Even for very different domains (LastFM &#x2192; MOOC), it retained roughly 75&#x2013;85% of its performance. These results suggest that C2GAT learns generalizable features that are not overly specialized to one domain. This is particularly useful in practical settings, where labeled fraud data in a new domain can be scarce; our model could be trained on one platform and still perform well on another with minimal adaptation.</p>
<fig position="float" id="fig10">
<label>Figure 10</label>
<caption>
<p>Cross-domain transfer performance (AUC).</p>
</caption>
<graphic xlink:href="frai-09-1774013-g010.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Heatmap showing domain adaptation results with source domains as rows and target domains as columns for Reddit, Wikipedia, MOOC, and LastFM. Values range from 0.603 to 0.943, with darker blue indicating higher values and a color bar on the right ranging from 0.60 to 0.95.</alt-text>
</graphic>
</fig>
<p>We note that transfer performance is demonstrated only among general interaction networks; transfer to fraud detection domains requires validation on fraud-specific datasets, which remains future work.</p>
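<p>The retention figures quoted above follow mechanically from dividing transfer AUC by in-domain AUC. The sketch below shows the computation with illustrative placeholder values, not the actual matrix underlying Figure 10:</p>

```python
# Relative performance retention for cross-domain transfer:
#   retention = AUC(source -> target) / AUC(target -> target).
# The AUC values below are illustrative placeholders only.
auc = {
    ("Reddit", "Reddit"): 0.94,
    ("Reddit", "Wikipedia"): 0.86,
    ("Wikipedia", "Wikipedia"): 0.93,
    ("LastFM", "MOOC"): 0.68,
    ("MOOC", "MOOC"): 0.85,
}

def retention(source: str, target: str) -> float:
    """Fraction of in-domain AUC retained when transferring without fine-tuning."""
    return auc[(source, target)] / auc[(target, target)]

print(round(retention("Reddit", "Wikipedia"), 3))  # 0.925 (~92%)
```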
<p>Finally, we conducted an explainability analysis to identify which input features (graph structures or temporal patterns) most influence the model&#x2019;s predictions. We estimated feature importance using a combination of input perturbation and attribution analysis. The analysis, summarized in <xref ref-type="fig" rid="fig11">Figure 11</xref>, indicates that temporal features are the most influential predictors of fraud across all datasets. In particular, the recency of transactions (how recent a neighbor interaction was) and the frequency of interactions contribute about 40&#x2013;45% of the predictive power. Structural features such as the length of transaction paths and node degree contribute around 30&#x2013;35%. We also observed differences in periodic temporal patterns between domains: for offline merchant transactions, time-of-day and weekly cycle features together accounted for ~17% of importance, whereas for online (Taobao) transactions they accounted for ~15%. This is consistent with fraud in brick-and-mortar settings being more constrained by business hours (day/night, weekdays/weekends) than online fraud.</p>
<fig position="float" id="fig11">
<label>Figure 11</label>
<caption>
<p>Feature importance across different datasets.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g011.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Bar chart comparing importance scores of eight features across Taobao, Offline Merchants, and Reddit. Recency of interaction is most important for all groups, while weekly pattern is least important. Chart legend included.</alt-text>
</graphic>
</fig>
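<p>The perturbation half of this analysis can be sketched as a standard permutation-importance estimate (a simplification of the combined perturbation-and-attribution procedure; the attribution component is omitted here, and the toy model and data are illustrative):</p>

```python
import numpy as np

def permutation_importance(model, X, y, metric, rng):
    """Drop in metric when each feature column is shuffled; larger drop = more
    important. A minimal version of the perturbation-based estimate."""
    base = metric(y, model(X))
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])          # destroy feature j's signal
        scores.append(base - metric(y, model(Xp)))
    return np.array(scores)

# Toy check: a "model" that only uses feature 0 should rank it most important.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)
acc = lambda y, p: float((y == p).mean())
imp = permutation_importance(model, X, y, acc, rng)
print(int(imp.argmax()))  # 0
```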
</sec>
</sec>
<sec id="sec19">
<label>5.3</label>
<title>Evaluation on industrial datasets</title>
<p>Next, we evaluate our framework on real-world production data from Ant Financial&#x2019;s cashback fraud detection system. We consider two large-scale datasets: one from Taobao (online marketplace) transactions and one from offline merchant transactions (e.g., in-store QR code payments). <xref ref-type="table" rid="tab5">Table 5</xref> provides an overview of these datasets. Each dataset spans several months of transactions and uses labels obtained via retrospective analysis (combining rule-based triggers and manual verification after the fact). Because fraud labels are confirmed with a delay (often up to a month later), these datasets were compiled retrospectively to ensure ground-truth labels were available for evaluation. We also downsampled the negative (non-fraud) transactions in the training set to increase the relative prevalence of positive instances during training. The test set, however, reflects the true production class imbalance.</p>
<table-wrap position="float" id="tab5">
<label>Table 5</label>
<caption>
<p>Statistics of two industrial datasets.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Parameter</th>
<th align="left" valign="top">Taobao dataset</th>
<th align="left" valign="top">Offline merchant dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Buyers</td>
<td align="left" valign="top">43.16 million</td>
<td align="left" valign="top">55.85 million</td>
</tr>
<tr>
<td align="left" valign="top">Sellers</td>
<td align="left" valign="top">30.24 million</td>
<td align="left" valign="top">2.42 million</td>
</tr>
<tr>
<td align="left" valign="top">Training samples</td>
<td align="left" valign="top">26.33 million</td>
<td align="left" valign="top">33.45 million</td>
</tr>
<tr>
<td align="left" valign="top">Validation samples</td>
<td align="left" valign="top">0.2 million</td>
<td align="left" valign="top">0.2 million</td>
</tr>
<tr>
<td align="left" valign="top">Test samples</td>
<td align="left" valign="top">42.17 million</td>
<td align="left" valign="top">55.85 million</td>
</tr>
<tr>
<td align="left" valign="top">Dataset timespan</td>
<td align="left" valign="top">5&#x202F;months</td>
<td align="left" valign="top">6&#x202F;months</td>
</tr>
<tr>
<td align="left" valign="top">Daily average test samples</td>
<td align="left" valign="top">280,000</td>
<td align="left" valign="top">370,000</td>
</tr>
</tbody>
</table>
</table-wrap>
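<p>The negative-downsampling step applied to the training sets can be sketched as follows; the keep rate <monospace>neg_rate</monospace> is hypothetical, since the production rate is not disclosed:</p>

```python
import random

def downsample_negatives(samples, neg_rate, seed=0):
    """Keep every positive (fraud) sample and a random fraction `neg_rate`
    of the negatives, raising the relative prevalence of positives."""
    rng = random.Random(seed)
    return [s for s in samples
            if s["label"] == 1 or rng.random() < neg_rate]

# Toy imbalanced stream: 10 fraud vs. 990 legitimate transactions.
data = [{"label": 1}] * 10 + [{"label": 0}] * 990
train = downsample_negatives(data, neg_rate=0.1)
pos = sum(s["label"] for s in train)
print(pos)  # 10 -- every positive survives; negatives shrink to ~10%
```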
<p>We evaluate our deployed framework from two perspectives: detection accuracy (versus the previous production system) and system performance under real-time constraints. For detection accuracy, we use Precision at X% Recall (P@XR) as the metric, which is standard in fraud detection. P@20R, for example, is the precision when the model captures 20% of all actual fraud cases (i.e., at 20% recall). This metric reflects the precision in the &#x201C;high-alert&#x201D; region when only a portion of frauds are recalled, which is often of greatest interest to risk controllers.</p>
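<p>The P@XR metric described above can be computed directly from ranked risk scores; the following is a minimal sketch (the score/label toy data are illustrative):</p>

```python
import numpy as np

def precision_at_recall(scores, labels, recall_level):
    """Precision at the smallest alert cutoff where the model captures
    `recall_level` of all fraud cases (P@XR)."""
    order = np.argsort(-np.asarray(scores))        # rank by descending risk score
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                         # frauds captured so far
    recall = tp / labels.sum()
    k = int(np.searchsorted(recall, recall_level)) # first cutoff reaching X% recall
    return tp[k] / (k + 1)                         # precision among top-(k+1) alerts

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
labels = [1,   0,   1,   0,   0,   1]   # 3 fraud cases in total
print(precision_at_recall(scores, labels, 2/3))  # 2 of 3 frauds in top 3 -> 2/3
```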
<p>Our experiments show that the real-time dynamic graph framework significantly outperforms the existing machine learning baseline across both industrial datasets. Compared to the MLP model with manually engineered real-time features, our method achieves substantial gains in P@20R: from 68.5 to 86.2% (+17.7 points) in the Taobao scenario and from 47.2 to 56.5% (+9.3 points) in the offline merchant scenario. These improvements translate to much lower false-alarm rates at the same fraud detection rate, which is highly valuable in production. Moreover, the system meets strict real-time requirements, with an average inference latency of ~18&#x202F;ms and a 99th percentile latency under 30&#x202F;ms when deployed in Ant&#x2019;s payment infrastructure, ensuring near-instantaneous responses from a user perspective.</p>
<p><bold>Label quality and delayed supervision.</bold> Our industrial datasets rely on retrospectively confirmed labels, which introduces challenges we explicitly acknowledge. First, label delay (14&#x2013;30&#x202F;days for confirmation) means models train on potentially stale patterns; we mitigate this through weekly retraining and prioritized labeling of high-confidence predictions. Second, label noise (estimated at 3&#x2013;5%) arises from undetected sophisticated fraud (false negatives) and overly aggressive rule flagging (false positives); we employ label smoothing (<italic>&#x03F5;</italic> =&#x202F;0.1) and confident learning techniques, yielding a 1.2-point P@20R improvement. Third, rule-based bias from the predecessor system may cause the model to replicate existing patterns rather than discover novel indicators; we address this by including a holdout set of fraud confirmed through non-rule channels (e.g., customer complaints) and by auditing model coverage against the rules. Notably, 23% of our detected frauds were not flagged by the previous rule-based system, demonstrating complementary pattern learning. Given these factors, our reported P@20R and P@40R metrics should be interpreted as conservative lower bounds, since some evaluated &#x201C;false positives&#x201D; may represent undetected fraud. Practitioners should invest in faster confirmation pipelines and diverse labeling sources to improve label quality.</p>
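<p>As a minimal sketch of the label-smoothing component (with <italic>&#x03F5;</italic> =&#x202F;0.1 as stated above; the loss form and toy values are illustrative, not the production training objective):</p>

```python
import numpy as np

def smoothed_bce(p, y, eps=0.1):
    """Binary cross-entropy with label smoothing: hard labels {0, 1} are
    softened toward eps/2 and 1 - eps/2, which caps the penalty when the
    model confidently disagrees with a (possibly mislabeled) example."""
    y_s = y * (1 - eps) + 0.5 * eps          # 1 -> 0.95, 0 -> 0.05
    p = np.clip(p, 1e-7, 1 - 1e-7)           # numerical safety
    return float(np.mean(-(y_s * np.log(p) + (1 - y_s) * np.log(1 - p))))

# Model strongly disagrees with a label that may be noise (y = 1, p = 0.01):
# smoothing reduces the loss relative to training on the hard label.
noisy_p, noisy_y = np.array([0.01]), np.array([1.0])
print(smoothed_bce(noisy_p, noisy_y) < smoothed_bce(noisy_p, noisy_y, eps=0.0))  # True
```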
</sec>
<sec id="sec20">
<label>5.4</label>
<title>Ablation study</title>
<p><xref ref-type="table" rid="tab6">Table 6</xref> compares our Real-time Dynamic Graph approach to the previous industry solution (which was a machine learning model using real-time engineered features) and also includes ablation results for our model. The ablation study further highlights the contribution of each component. Center and path subgraphs provide complementary information, and removing either degrades performance, especially the path subgraph in the Taobao setting, where its absence lowers P@20R by 6.6 points, reflecting the importance of modeling multi-hop fraud patterns. Removing edge attributes causes the most dramatic drop, even below the baseline, underscoring the critical role of transaction features (amounts, timestamps, device data, etc.) beyond pure graph structure. Disabling the C2GAT temporal encoder also reduces performance, confirming that continuous-time modeling and attention over transaction timing provide essential signals for distinguishing normal behavior from suspicious activity.</p>
<table-wrap position="float" id="tab6">
<label>Table 6</label>
<caption>
<p>Performance comparison and ablation study on industrial datasets (&#x201C;&#x2212;&#x201D;: removed).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top" rowspan="2">Algorithm</th>
<th align="center" valign="top" colspan="2">Taobao transaction scenario</th>
<th align="center" valign="top" colspan="2">Offline merchant scenario</th>
</tr>
<tr>
<th align="center" valign="top">P@20R</th>
<th align="center" valign="top">P@40R</th>
<th align="center" valign="top">P@20R</th>
<th align="center" valign="top">P@40R</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">MLP&#x202F;+&#x202F;Real-time features</td>
<td align="char" valign="middle" char=".">68.5</td>
<td align="char" valign="middle" char=".">44.8</td>
<td align="char" valign="middle" char=".">47.2</td>
<td align="char" valign="middle" char=".">31.1</td>
</tr>
<tr>
<td align="left" valign="middle">Real-time dynamic graph</td>
<td align="char" valign="middle" char=".">86.2</td>
<td align="char" valign="middle" char=".">57.9</td>
<td align="char" valign="middle" char=".">56.5</td>
<td align="char" valign="middle" char=".">36.7</td>
</tr>
<tr>
<td align="left" valign="middle">Real-time dynamic graph&#x202F;&#x2212;&#x202F;Center subgraph</td>
<td align="char" valign="middle" char=".">82.3</td>
<td align="char" valign="middle" char=".">51.5</td>
<td align="char" valign="middle" char=".">54.9</td>
<td align="char" valign="middle" char=".">33.8</td>
</tr>
<tr>
<td align="left" valign="middle">Real-time dynamic graph&#x202F;&#x2212;&#x202F;Path subgraph</td>
<td align="char" valign="middle" char=".">79.6</td>
<td align="char" valign="middle" char=".">51.2</td>
<td align="char" valign="middle" char=".">54.2</td>
<td align="char" valign="middle" char=".">32.5</td>
</tr>
<tr>
<td align="left" valign="middle">Real-time dynamic graph&#x202F;&#x2212;&#x202F;Edge features</td>
<td align="char" valign="middle" char=".">66.1</td>
<td align="char" valign="middle" char=".">21.3</td>
<td align="char" valign="middle" char=".">32.9</td>
<td align="char" valign="middle" char=".">21.1</td>
</tr>
<tr>
<td align="left" valign="middle">Real-time dynamic graph&#x202F;&#x2212;&#x202F;C2GAT</td>
<td align="char" valign="middle" char=".">81.3</td>
<td align="char" valign="middle" char=".">55.1</td>
<td align="char" valign="middle" char=".">54.5</td>
<td align="char" valign="middle" char=".">34.0</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>We further stress-tested the system to measure throughput and latency under increasing load. <xref ref-type="fig" rid="fig12">Figure 12</xref> shows the average and 99th-percentile latencies as a function of queries per second (QPS). The framework maintains an average latency below 30&#x202F;ms (and P99 below 50&#x202F;ms) up to around 20&#x202F;k QPS, which meets our design target. Beyond this point, latencies rise nonlinearly, indicating the system is approaching saturation. This capacity is well above the typical load in production, providing headroom for traffic spikes or future growth. Importantly, it demonstrates that an advanced graph neural network model can be deployed in a live financial system without sacrificing performance or reliability, thanks to careful system optimization and architecture design.</p>
<fig position="float" id="fig12">
<label>Figure 12</label>
<caption>
<p>System throughput analysis.</p>
</caption>
<graphic xlink:href="frai-09-1774013-g012.tif" mimetype="image" mime-subtype="tiff">
<alt-text content-type="machine-generated">Line chart comparing average latency and P99 latency versus load in queries per second, with a red dashed SLA threshold at thirty milliseconds and green dashed target throughput at twenty thousand QPS. Both latencies rise non-linearly with increasing load, with P99 latency consistently above average latency. The intersection of the green and red lines marks the point where average latency approaches the SLA threshold at the target throughput.</alt-text>
</graphic>
</fig>
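<p>The two latency statistics plotted in Figure 12 are computed from raw per-request timings. The sketch below uses a synthetic latency distribution (gamma-distributed, with parameters chosen only to roughly match the reported ~18&#x202F;ms average), not production measurements:</p>

```python
import numpy as np

def latency_summary(samples_ms):
    """Average and 99th-percentile latency from per-request timings (ms)."""
    a = np.asarray(samples_ms)
    return float(a.mean()), float(np.percentile(a, 99))

# Illustrative latency model: right-skewed, mean about 18 ms.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=4.5, size=100_000)
avg, p99 = latency_summary(samples)
print(avg < 30 and p99 < 50)  # True -- within the stated SLA region
```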
</sec>
<sec id="sec21">
<label>5.5</label>
<title>Generalizability discussion</title>
<p>While our framework targets cashback fraud, its components have varying transferability. The C2GAT mechanism (node-specific temporal encoding, context-aware attention) is fully general and applicable to any temporal graph learning task. The subgraph construction strategy is moderately transferable: buyer/seller roles map naturally to sender/receiver in money laundering or account/device in account takeover scenarios; path subgraphs capture entity chains relevant to multiple fraud types. Our cross-domain experiments (<xref ref-type="fig" rid="fig10">Figure 10</xref>) support this, showing 75&#x2013;92% performance retention when transferring across different interaction domains without fine-tuning. Domain-specific components include the asymmetric hop configuration (optimized for our degree distributions) and amount-stratified sampling (based on cashback fraud characteristics), which require recalibration for new domains. For adaptation, practitioners should: (1) analyze target domain degree distributions to set appropriate hop depths; (2) identify domain-specific features for sampling heuristics; and (3) adjust temporal encoding frequencies to match relevant behavioral cycles (e.g., minute-level for account takeover, monthly for subscription fraud).</p>
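<p>Step (3) above, adjusting temporal encoding frequencies to the domain&#x2019;s behavioral cycles, can be illustrated with a generic sinusoidal time encoding; the function and the period choices below are a sketch, not the exact C2GAT encoder:</p>

```python
import numpy as np

def time_encoding(dt_seconds, periods):
    """Sinusoidal encoding of elapsed time at behaviorally meaningful cycles.
    `periods` (in seconds) are chosen per domain, e.g. minute-level cycles
    for account takeover, monthly cycles for subscription fraud."""
    dt = np.asarray(dt_seconds, dtype=float)[:, None]
    w = 2 * np.pi / np.asarray(periods, dtype=float)[None, :]
    # One (sin, cos) pair per period: output shape (n_events, 2 * n_periods).
    return np.concatenate([np.sin(dt * w), np.cos(dt * w)], axis=1)

DAY, WEEK = 86_400, 7 * 86_400
enc = time_encoding([0.0, 3 * 3600.0, DAY / 2], periods=[DAY, WEEK])
print(enc.shape)  # (3, 4)
```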
</sec>
</sec>
<sec sec-type="conclusions" id="sec22">
<label>6</label>
<title>Conclusion</title>
<p>We presented a real-time dynamic graph learning framework for financial fraud detection that directly processes streaming transaction data without manual feature engineering. Our C2GAT mechanism effectively captures temporal-structural dependencies in evolving graphs, while decoupled subgraph modules model multi-entity and single-entity behaviors separately before integration. The end-to-end architecture achieves low-latency inference suitable for production deployment. Evaluation on large-scale industrial data demonstrates significant improvements over traditional sequence models and rule-based approaches, with enhanced fraud detection accuracy, adaptability to emerging patterns, and operational efficiency. In real credit cashback fraud scenarios, our framework achieved a 43% reduction in false positives and a 26% improvement in detection rates while maintaining sub-30&#x202F;ms response times.</p>
<p>For future work, we plan to explore techniques to further improve the framework in two aspects. First, to tackle data sparsity and class imbalance, we are interested in meta-learning or transfer learning approaches that can leverage data from related tasks or simulate additional training examples, as well as advanced sampling methods to make training more effective. Second, to handle even larger graphs and longer histories, we aim to investigate more efficient subgraph sampling strategies and the integration of pre-trained graph embeddings for cold-start entities. We believe these directions can further enhance the robustness and scalability of real-time graph-based fraud detection.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="sec23">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec sec-type="ethics-statement" id="sec24">
<title>Ethics statement</title>
<p>Ethical approval was not required for the study involving human data in accordance with the local legislation and institutional requirements. Written informed consent was not required, for either participation in the study or for the publication of potentially/indirectly identifying information, in accordance with the local legislation and institutional requirements. The social media data was accessed and analyzed in accordance with the platform&#x2019;s terms of use and all relevant institutional/national regulations.</p>
</sec>
<sec sec-type="author-contributions" id="sec25">
<title>Author contributions</title>
<p>JC: Conceptualization, Data curation, Writing &#x2013; original draft. YY: Validation, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<sec sec-type="COI-statement" id="sec26">
<title>Conflict of interest</title>
<p>JC was employed by Finance and Banking Division Southern Power Grid Digital Enterprise Technology (Guangdong) Co., Ltd. and YY was employed by Strategic Development Department Southern Power Grid Capital Holding Co., Ltd.</p>
</sec>
<sec sec-type="ai-statement" id="sec27">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec28">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Aburbeian</surname><given-names>A. M.</given-names></name> <name><surname>Fern&#x00E1;ndez-Veiga</surname><given-names>M.</given-names></name></person-group> (<year>2024</year>). <article-title>Secure internet financial transactions: a framework integrating multi-factor authentication and machine learning</article-title>. <source>AI</source> <volume>5</volume>, <fpage>177</fpage>&#x2013;<lpage>194</lpage>. doi: <pub-id pub-id-type="doi">10.3390/ai5010010</pub-id></mixed-citation></ref>
<ref id="ref2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Albalawi</surname><given-names>T.</given-names></name> <name><surname>Dardouri</surname><given-names>S.</given-names></name></person-group> (<year>2025</year>). <article-title>Enhancing credit card fraud detection using traditional and deep learning models with class imbalance mitigation</article-title>. <source>Front. Artif. Intell.</source> <volume>8</volume>:<fpage>1643292</fpage>. doi: <pub-id pub-id-type="doi">10.3389/frai.2025.1643292</pub-id>, <pub-id pub-id-type="pmid">41132910</pub-id></mixed-citation></ref>
<ref id="ref3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Al-Hashedi</surname><given-names>K. G.</given-names></name> <name><surname>Magalingam</surname><given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>Financial fraud detection applying data mining techniques: a comprehensive review from 2009 to 2019</article-title>. <source>Comput. Sci. Rev.</source> <volume>40</volume>:<fpage>100402</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cosrev.2021.100402</pub-id></mixed-citation></ref>
<ref id="ref4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname><given-names>A.</given-names></name> <name><surname>Abd Razak</surname><given-names>S.</given-names></name> <name><surname>Othman</surname><given-names>S. H.</given-names></name> <name><surname>Eisa</surname><given-names>T. A. E.</given-names></name> <name><surname>Al-Dhaqm</surname><given-names>A.</given-names></name> <name><surname>Nasser</surname><given-names>M.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Financial fraud detection based on machine learning: a systematic literature review</article-title>. <source>Appl. Sci.</source> <volume>12</volume>:<fpage>9637</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app12199637</pub-id></mixed-citation></ref>
<ref id="ref5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Almazroi</surname><given-names>A. A.</given-names></name> <name><surname>Ayub</surname><given-names>N.</given-names></name></person-group> (<year>2023</year>). <article-title>Online payment fraud detection model using machine learning techniques</article-title>. <source>IEEE Access</source> <volume>11</volume>, <fpage>137188</fpage>&#x2013;<lpage>137203</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3339226</pub-id></mixed-citation></ref>
<ref id="ref6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barros</surname><given-names>C. D.</given-names></name> <name><surname>Mendon&#x00E7;a</surname><given-names>M. R.</given-names></name> <name><surname>Vieira</surname><given-names>A. B.</given-names></name> <name><surname>Ziviani</surname><given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>A survey on embedding dynamic graphs</article-title>. <source>ACM Comput. Surv.</source> <volume>55</volume>, <fpage>1</fpage>&#x2013;<lpage>37</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3483595</pub-id></mixed-citation></ref>
<ref id="ref7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bello</surname><given-names>O. A.</given-names></name> <name><surname>Folorunso</surname><given-names>A.</given-names></name> <name><surname>Ejiofor</surname><given-names>O. E.</given-names></name> <name><surname>Budale</surname><given-names>F. Z.</given-names></name> <name><surname>Adebayo</surname><given-names>K.</given-names></name> <name><surname>Babatunde</surname><given-names>O. A.</given-names></name></person-group> (<year>2023</year>). <article-title>Machine learning approaches for enhancing fraud prevention in financial transactions</article-title>. <source>Int. J. Manag. Technol.</source> <volume>10</volume>, <fpage>85</fpage>&#x2013;<lpage>108</lpage>.</mixed-citation></ref>
<ref id="ref8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Challa</surname><given-names>S. R.</given-names></name> <name><surname>Challa</surname><given-names>K.</given-names></name> <name><surname>Lakkarasu</surname><given-names>P.</given-names></name> <name><surname>Sriram</surname><given-names>H. K.</given-names></name> <name><surname>Adusupalli</surname><given-names>B.</given-names></name></person-group> (<year>2024</year>). <article-title>Strategic financial growth: strengthening investment management, secure transactions, and risk protection in the digital era</article-title>. <source>J. Artif. Intell. Big Data Discip.</source> <volume>1</volume>, <fpage>97</fpage>&#x2013;<lpage>108</lpage>.</mixed-citation></ref>
<ref id="ref9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chatterjee</surname><given-names>P.</given-names></name> <name><surname>Das</surname><given-names>D.</given-names></name> <name><surname>Rawat</surname><given-names>D. B.</given-names></name></person-group> (<year>2024</year>). <article-title>Digital twin for credit card fraud detection: opportunities, challenges, and fraud detection advancements</article-title>. <source>Futur. Gener. Comput. Syst.</source> <volume>158</volume>. doi: <pub-id pub-id-type="doi">10.1016/j.future.2024.04.057</pub-id></mixed-citation></ref>
<ref id="ref10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>C. T.</given-names></name> <name><surname>Lee</surname><given-names>C.</given-names></name> <name><surname>Huang</surname><given-names>S. H.</given-names></name> <name><surname>Peng</surname><given-names>W. C.</given-names></name></person-group> (<year>2024</year>). <article-title>Credit card fraud detection via intelligent sampling and self-supervised learning</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>15</volume>, <fpage>1</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3641283</pub-id></mixed-citation></ref>
<ref id="ref11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cheng</surname><given-names>D.</given-names></name> <name><surname>Zou</surname><given-names>Y.</given-names></name> <name><surname>Xiang</surname><given-names>S.</given-names></name> <name><surname>Jiang</surname><given-names>C.</given-names></name></person-group> (<year>2025</year>). <article-title>Graph neural networks for financial fraud detection: a review</article-title>. <source>Front. Comput. Sci.</source> <volume>19</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>.</mixed-citation></ref>
<ref id="ref12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Corso</surname><given-names>G.</given-names></name> <name><surname>Stark</surname><given-names>H.</given-names></name> <name><surname>Jegelka</surname><given-names>S.</given-names></name> <name><surname>Jaakkola</surname><given-names>T.</given-names></name> <name><surname>Barzilay</surname><given-names>R.</given-names></name></person-group> (<year>2024</year>). <article-title>Graph neural networks</article-title>. <source>Nat. Rev. Methods Primers</source> <volume>4</volume>:<fpage>17</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s43586-024-00294-7</pub-id></mixed-citation></ref>
<ref id="ref13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname><given-names>Y.</given-names></name> <name><surname>Han</surname><given-names>X.</given-names></name> <name><surname>Chen</surname><given-names>J.</given-names></name> <name><surname>Zhang</surname><given-names>X.</given-names></name> <name><surname>Yang</surname><given-names>J.</given-names></name> <name><surname>Zhang</surname><given-names>X.</given-names></name></person-group> (<year>2025</year>). <article-title>FraudGNN-RL: a graph neural network with reinforcement learning for adaptive financial fraud detection</article-title>. <source>IEEE Open J. Comput. Soc.</source> <volume>6</volume>. doi: <pub-id pub-id-type="doi">10.1109/OJCS.2025.3543450</pub-id></mixed-citation></ref>
<ref id="ref14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname><given-names>J.</given-names></name> <name><surname>Chen</surname><given-names>C.</given-names></name> <name><surname>Huang</surname><given-names>X.</given-names></name> <name><surname>Chen</surname><given-names>W.</given-names></name> <name><surname>Cheng</surname><given-names>L.</given-names></name></person-group> (<year>2023</year>). <article-title>Research on the construction of event logic knowledge graph of supply chain management</article-title>. <source>Adv. Eng. Inform.</source> <volume>56</volume>:<fpage>101921</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.aei.2023.101921</pub-id></mixed-citation></ref>
<ref id="ref15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname><given-names>J.</given-names></name> <name><surname>Wang</surname><given-names>T.</given-names></name> <name><surname>Wang</surname><given-names>Z.</given-names></name> <name><surname>Zhou</surname><given-names>J.</given-names></name> <name><surname>Cheng</surname><given-names>L.</given-names></name></person-group> (<year>2022</year>). <article-title>Research on event logic knowledge graph construction method of robot transmission system fault diagnosis</article-title>. <source>IEEE Access</source> <volume>10</volume>, <fpage>17656</fpage>&#x2013;<lpage>17673</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2022.3150409</pub-id></mixed-citation></ref>
<ref id="ref16"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Innan</surname><given-names>N.</given-names></name> <name><surname>Sawaika</surname><given-names>A.</given-names></name> <name><surname>Dhor</surname><given-names>A.</given-names></name> <name><surname>Dutta</surname><given-names>S.</given-names></name> <name><surname>Thota</surname><given-names>S.</given-names></name> <name><surname>Gokal</surname><given-names>H.</given-names></name> <etal/></person-group>. (<year>2024</year>). <article-title>Financial fraud detection using quantum graph neural networks</article-title>. <source>Quant. Mach. Intell.</source> <volume>6</volume>:<fpage>7</fpage>. doi: <pub-id pub-id-type="doi">10.1007/s42484-024-00143-6</pub-id></mixed-citation></ref>
<ref id="ref17"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Ji</surname><given-names>Y.</given-names></name> <name><surname>Zhang</surname><given-names>Z.</given-names></name> <name><surname>Tang</surname><given-names>X.</given-names></name> <name><surname>Shen</surname><given-names>J.</given-names></name> <name><surname>Zhang</surname><given-names>X.</given-names></name> <name><surname>Yang</surname><given-names>G.</given-names></name></person-group> (<year>2022</year>). &#x201C;<chapter-title>Detecting cash-out users via dense subgraphs</chapter-title>&#x201D; in <source>Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining</source> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>687</fpage>&#x2013;<lpage>697</lpage>.</mixed-citation></ref>
<ref id="ref18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khalid</surname><given-names>A. R.</given-names></name> <name><surname>Owoh</surname><given-names>N.</given-names></name> <name><surname>Uthmani</surname><given-names>O.</given-names></name> <name><surname>Ashawa</surname><given-names>M.</given-names></name> <name><surname>Osamor</surname><given-names>J.</given-names></name> <name><surname>Adejoh</surname><given-names>J.</given-names></name></person-group> (<year>2024</year>). <article-title>Enhancing credit card fraud detection: an ensemble machine learning approach</article-title>. <source>Big Data Cogn. Comput.</source> <volume>8</volume>:<fpage>6</fpage>. doi: <pub-id pub-id-type="doi">10.3390/bdcc8010006</pub-id></mixed-citation></ref>
<ref id="ref19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khemani</surname><given-names>B.</given-names></name> <name><surname>Patil</surname><given-names>S.</given-names></name> <name><surname>Kotecha</surname><given-names>K.</given-names></name> <name><surname>Tanwar</surname><given-names>S.</given-names></name></person-group> (<year>2024</year>). <article-title>A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions</article-title>. <source>J. Big Data</source> <volume>11</volume>:<fpage>18</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s40537-023-00876-4</pub-id></mixed-citation></ref>
<ref id="ref20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>Z.</given-names></name> <name><surname>Liang</surname><given-names>X.</given-names></name> <name><surname>Wen</surname><given-names>Q.</given-names></name> <name><surname>Wan</surname><given-names>E.</given-names></name></person-group> (<year>2024</year>). <article-title>The analysis of financial network transaction risk control based on blockchain and edge computing technology</article-title>. <source>IEEE Trans. Eng. Manag.</source> <volume>71</volume>, <fpage>5669</fpage>&#x2013;<lpage>5690</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TEM.2024.3364832</pub-id></mixed-citation></ref>
<ref id="ref21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lyu</surname><given-names>C.</given-names></name> <name><surname>Gao</surname><given-names>S.</given-names></name> <name><surname>Zhang</surname><given-names>Q.</given-names></name></person-group> (<year>2025</year>). <article-title>The impact of time pressure and type of fraud on susceptibility to online fraud</article-title>. <source>Front. Psychol.</source> <volume>16</volume>:<fpage>1508363</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2025.1508363</pub-id>, <pub-id pub-id-type="pmid">40330294</pub-id></mixed-citation></ref>
<ref id="ref22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Motie</surname><given-names>S.</given-names></name> <name><surname>Raahemi</surname><given-names>B.</given-names></name></person-group> (<year>2024</year>). <article-title>Financial fraud detection using graph neural networks: a systematic review</article-title>. <source>Expert Syst. Appl.</source> <volume>240</volume>:<fpage>122156</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2023.122156</pub-id></mixed-citation></ref>
<ref id="ref23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mutemi</surname><given-names>A.</given-names></name> <name><surname>Bacao</surname><given-names>F.</given-names></name></person-group> (<year>2024</year>). <article-title>E-commerce fraud detection based on machine learning techniques: systematic literature review</article-title>. <source>Big Data Min. Anal.</source> <volume>7</volume>, <fpage>419</fpage>&#x2013;<lpage>444</lpage>. doi: <pub-id pub-id-type="doi">10.26599/BDMA.2023.9020023</pub-id></mixed-citation></ref>
<ref id="ref24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Oguntibeju</surname><given-names>O.</given-names></name> <name><surname>Adonis</surname><given-names>M.</given-names></name> <name><surname>Alade</surname><given-names>J.</given-names></name></person-group> (<year>2024</year>). <article-title>Systematic review of real-time analytics and artificial intelligence frameworks for financial fraud detection</article-title>. <source>Int. J. Adv. Res. Comput. Commun. Eng.</source> <volume>13</volume>.</mixed-citation></ref>
<ref id="ref25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Paleti</surname><given-names>S.</given-names></name> <name><surname>Pamisetty</surname><given-names>V.</given-names></name> <name><surname>Challa</surname><given-names>K.</given-names></name> <name><surname>Burugulla</surname><given-names>J. K. R.</given-names></name> <name><surname>Dodda</surname><given-names>A.</given-names></name></person-group> (<year>2024</year>). <article-title>Innovative intelligence solutions for secure financial management: optimizing regulatory compliance, transaction security, and digital payment frameworks through advanced computational models</article-title>. <source>J. Artif. Intell. Big Data Discip.</source> <volume>1</volume>, <fpage>125</fpage>&#x2013;<lpage>136</lpage>.</mixed-citation></ref>
<ref id="ref26"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Raju</surname><given-names>M. N.</given-names></name> <name><surname>Reddy</surname><given-names>Y. C.</given-names></name> <name><surname>Babu</surname><given-names>P. N.</given-names></name> <name><surname>Ravipati</surname><given-names>V. S. P.</given-names></name> <name><surname>Chaitanya</surname><given-names>V.</given-names></name></person-group> (<year>2024</year>). &#x201C;<chapter-title>Detection of fraudulent activities in Unified Payments Interface using machine learning-LSTM networks</chapter-title>&#x201D; in <source>2024 7th international conference on circuit power and computing technologies (ICCPCT)</source>, vol. <volume>1</volume> (<publisher-loc>New York</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>769</fpage>&#x2013;<lpage>774</lpage>.</mixed-citation></ref>
<ref id="ref27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Thongprayoon</surname><given-names>C.</given-names></name> <name><surname>Livi</surname><given-names>L.</given-names></name> <name><surname>Masuda</surname><given-names>N.</given-names></name></person-group> (<year>2023</year>). <article-title>Embedding and trajectories of temporal networks</article-title>. <source>IEEE Access</source> <volume>11</volume>, <fpage>41426</fpage>&#x2013;<lpage>41443</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2023.3268030</pub-id></mixed-citation></ref>
<ref id="ref28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tiukhova</surname><given-names>E.</given-names></name> <name><surname>Penaloza</surname><given-names>E.</given-names></name> <name><surname>&#x00D3;skarsd&#x00F3;ttir</surname><given-names>M.</given-names></name> <name><surname>Baesens</surname><given-names>B.</given-names></name> <name><surname>Snoeck</surname><given-names>M.</given-names></name> <name><surname>Bravo</surname><given-names>C.</given-names></name></person-group> (<year>2024</year>). <article-title>INFLECT-DGNN: influencer prediction with dynamic graph neural networks</article-title>. <source>IEEE Access</source> <volume>12</volume>, <fpage>135624</fpage>&#x2013;<lpage>135641</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2024.3443533</pub-id></mixed-citation></ref>
<ref id="ref29"><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>Y.</given-names></name> <name><surname>Chang</surname><given-names>Y. Y.</given-names></name> <name><surname>Liu</surname><given-names>Y.</given-names></name> <name><surname>Leskovec</surname><given-names>J.</given-names></name> <name><surname>Li</surname><given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>Inductive representation learning in temporal networks via causal anonymous walks</article-title>. arXiv preprint arXiv:2101.05974. doi: <pub-id pub-id-type="doi">10.48550/arXiv.2101.05974</pub-id></mixed-citation></ref>
<ref id="ref30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Weng</surname><given-names>W.</given-names></name> <name><surname>Fan</surname><given-names>J.</given-names></name> <name><surname>Wu</surname><given-names>H.</given-names></name> <name><surname>Hu</surname><given-names>Y.</given-names></name> <name><surname>Tian</surname><given-names>H.</given-names></name> <name><surname>Zhu</surname><given-names>F.</given-names></name> <etal/></person-group> (<year>2023</year>). <article-title>A decomposition dynamic graph convolutional recurrent network for traffic forecasting</article-title>. <source>Pattern Recogn.</source> <volume>142</volume>:<fpage>109670</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.patcog.2023.109670</pub-id></mixed-citation></ref>
<ref id="ref31"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Wu</surname><given-names>L.</given-names></name> <name><surname>Cui</surname><given-names>P.</given-names></name> <name><surname>Pei</surname><given-names>J.</given-names></name> <name><surname>Zhao</surname><given-names>L.</given-names></name> <name><surname>Guo</surname><given-names>X.</given-names></name></person-group> (<year>2022</year>). &#x201C;<chapter-title>Graph neural networks: foundation, frontiers and applications</chapter-title>&#x201D; in <source>Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining</source> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>4840</fpage>&#x2013;<lpage>4841</lpage>.</mixed-citation></ref>
<ref id="ref32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname><given-names>F.</given-names></name> <name><surname>Wang</surname><given-names>N.</given-names></name> <name><surname>Wu</surname><given-names>H.</given-names></name> <name><surname>Wen</surname><given-names>X.</given-names></name> <name><surname>Zhao</surname><given-names>X.</given-names></name> <name><surname>Wan</surname><given-names>H.</given-names></name></person-group> (<year>2024</year>). <article-title>Revisiting graph-based fraud detection in sight of heterophily and spectrum</article-title>. <source>Proc. AAAI Conf. Artif. Intell.</source> <volume>38</volume>, <fpage>9214</fpage>&#x2013;<lpage>9222</lpage>. doi: <pub-id pub-id-type="doi">10.1609/aaai.v38i8.28771</pub-id></mixed-citation></ref>
<ref id="ref33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhong</surname><given-names>L.</given-names></name> <name><surname>Wu</surname><given-names>J.</given-names></name> <name><surname>Li</surname><given-names>Q.</given-names></name> <name><surname>Peng</surname><given-names>H.</given-names></name> <name><surname>Wu</surname><given-names>X.</given-names></name></person-group> (<year>2023</year>). <article-title>A comprehensive survey on automatic knowledge graph construction</article-title>. <source>ACM Comput. Surv.</source> <volume>56</volume>, <fpage>1</fpage>&#x2013;<lpage>62</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3618295</pub-id></mixed-citation></ref>
<ref id="ref34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname><given-names>K.</given-names></name> <name><surname>Guo</surname><given-names>L.</given-names></name></person-group> (<year>2024</year>). <article-title>Financial technology, inclusive finance and bank performance</article-title>. <source>Financ. Res. Lett.</source> <volume>60</volume>:<fpage>104872</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.frl.2023.104872</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/669488/overview">Alessandra Tanda</ext-link>, University of Pavia, Italy</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3331519/overview">Wei Sizheng</ext-link>, Xuzhou University of Technology, China</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3335647/overview">Md Nasim Fardous Zim</ext-link>, Emporia State University, United States</p>
</fn>
</fn-group>
</back>
</article>