Transformer-Based Anomaly Detection for First-Party Fraud Patterns Across Transaction Graphs
Abstract
First-party fraud, also known as friendly fraud, is a growing problem in digital commerce. It occurs when a legitimate cardholder abuses the chargeback process to obtain a refund for goods or services already received. Because of the delay between the transaction and the filing of a fraudulent dispute, detection at the point of transaction is ineffective. Rule-based systems and machine learning models that rely on fixed features and do not compare accounts against one another cannot capture the complex behaviors and relationships involved in more sophisticated friendly fraud. These transactions can instead be modeled as heterogeneous temporal graphs and analyzed with graph neural networks and transformer-based attention architectures. Heterogeneous message-passing layers, i.e., graph attention layers combined with multi-head temporal self-attention layers, surface cross-account, cross-device, cross-time, and cross-dimension interaction signals of fraud that standard detection misses. Fusing structural and sequential representations improves detection efficacy across opportunistic, habitual, and organized fraud typologies, while cost-sensitive threshold calibration and human-in-the-loop review allocation enable production deployment without an excessive false-positive burden.
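The fusion of structural and sequential representations described above can be sketched in miniature. The following is an illustrative numpy sketch, not the paper's implementation: a single-head GAT-style layer attends over a hypothetical device-sharing graph between accounts, a single-head temporal self-attention layer summarizes each account's transaction sequence, and the two embeddings are concatenated into one fused representation per account. All shapes, weight matrices, and the toy adjacency are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(h, adj, W, a):
    """Single-head GAT-style layer: each node attends over its graph neighbors.
    h: (N, F) account features; adj: (N, N) adjacency with self-loops."""
    z = h @ W                                   # project to (N, D)
    n = z.shape[0]
    logits = np.full((n, n), -1e9)              # non-edges masked out
    for i in range(n):
        for j in range(n):
            if adj[i, j] > 0:
                s = np.concatenate([z[i], z[j]]) @ a
                logits[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    alpha = softmax(logits, axis=1)             # per-node attention weights
    return alpha @ z                            # (N, D) structural embeddings

def temporal_self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over one account's transaction sequence x: (T, F)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])      # scaled dot-product attention
    return softmax(scores, axis=1) @ v          # (T, D) sequential embeddings

# Toy setup: 4 accounts linked by shared devices, 6 transactions each.
N, F, D, T = 4, 8, 4, 6
adj = np.array([[1, 1, 0, 0],                   # hypothetical device-sharing graph
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
h = rng.normal(size=(N, F))                     # account-level features
seqs = rng.normal(size=(N, T, F))               # per-account transaction sequences

W = rng.normal(size=(F, D))
a = rng.normal(size=(2 * D,))
Wq, Wk, Wv = (rng.normal(size=(F, D)) for _ in range(3))

structural = graph_attention(h, adj, W, a)                        # (N, D)
sequential = np.stack([temporal_self_attention(s, Wq, Wk, Wv).mean(axis=0)
                       for s in seqs])                            # (N, D), mean-pooled over time
fused = np.concatenate([structural, sequential], axis=1)          # (N, 2D) fused representation
```

In a full model the fused vector would feed a classification head; multi-head variants repeat each attention computation with independent projections and concatenate the results.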
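Cost-sensitive threshold calibration can likewise be sketched. The idea is to sweep candidate score thresholds on a labeled validation set and pick the one minimizing expected cost, where a false positive costs one analyst review and a false negative costs the chargeback loss. The function name, cost values, and toy data below are illustrative assumptions, not figures from the article.

```python
import numpy as np

def calibrate_threshold(scores, labels, fp_cost, fn_cost):
    """Pick the decision threshold minimizing expected cost:
    fp_cost per legitimate account sent to manual review,
    fn_cost per missed fraudulent account (chargeback loss)."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    best_t, best_cost = 0.0, np.inf
    for t in np.unique(scores):                 # candidate thresholds
        flagged = scores >= t
        fp = np.sum(flagged & (labels == 0))    # reviews spent on legit accounts
        fn = np.sum(~flagged & (labels == 1))   # frauds that slip through
        cost = fp * fp_cost + fn * fn_cost
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Toy validation set (hypothetical): chargeback losses dominate review
# costs, so calibration pushes the threshold down and tolerates some
# false positives rather than miss a fraudster.
scores = np.array([0.10, 0.20, 0.35, 0.80, 0.90])
labels = np.array([0,    1,    0,    1,    1])
t, cost = calibrate_threshold(scores, labels, fp_cost=5.0, fn_cost=100.0)
```

With these costs the sweep selects the threshold 0.20: flagging the legitimate 0.35-score account costs one cheap review, whereas any higher threshold misses the 0.20-score fraudster and incurs the far larger chargeback loss. The human-in-the-loop review budget then bounds how low the threshold can usefully go.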