AI-FA 2026 : SIGKDD 2026 1st Workshop on AI for Fraud and Abuse

Call For Papers

Fraud and abuse are ubiquitous in modern digital ecosystems, manifesting across e-commerce, social media, cloud computing, and telecommunications. While fraud typically centers on financial deception and theft, "abuse" represents a broader, context-dependent set of challenges that vary significantly by field, ranging from the exploitation of cloud infrastructure and advertising click fraud to behavioral toxicity on social platforms and the systemic manipulation of search or ranking algorithms. Historically, the research and industrial communities have addressed these challenges piecemeal, treating "credit card fraud," "social media misinformation," and "account takeovers" as isolated domain problems. Today, however, generative AI toolchains span multiple domains in an unprecedented manner, enabling automated, cross-platform attacks that blur traditional boundaries. Threat actors now leverage common generative frameworks to create synthetic identities, realistic phishing campaigns, and coordinated botnets that impact diverse sectors simultaneously. This workshop aims to apply a holistic lens to the detection and prevention of such fraud and abuse, moving beyond domain-specific silos to identify universal patterns and scalable AI-driven defenses.

The workshop is designed for a cross-disciplinary audience of researchers and practitioners:

● Computer Science Researchers: Experts in anomaly detection, adversarial machine learning, graph neural networks (GNNs), and Large Language Model (LLM) security.
● Industry Trust & Safety Professionals: Teams from big tech, retail, gaming, and fintech who manage platform integrity and user safety.
● Data Scientists and ML Engineers: Those building real-time detection systems that must balance high-precision filtering with low-latency requirements.
● Ethics and Policy Researchers: Individuals studying the societal impact of AI-driven abuse and the regulatory frameworks (e.g., the AI Act, KYC) governing automated defense.