3D-Sec 2025: Deepfake, Deception, and Disinformation Security (3D-Sec) Workshop co-located with CCS '25
Link: https://sites.google.com/view/3d-sec2025
Call For Papers
Event Details
Event Name: 3D-Sec: Deepfake, Deception, and Disinformation Security Workshop
Event Website: https://sites.google.com/view/3d-sec2025
Submission Deadline: June 20, 2025
Event Date: October 13, 2025
Location: Taipei, Taiwan

Topics Covered

This workshop aims to bring together researchers, practitioners, and policymakers to explore the security implications of AI misuse, with a focus on detection, attribution, forensic analysis, and mitigation strategies. We invite original research contributions on topics including, but not limited to:

Deepfake Security & Synthetic Media in Cyber Threats
❖ Deepfake Generation and Detection for Cybersecurity Applications
❖ Deepfake Forensics and Adversarial Robustness of Detectors
❖ Cyber Threat Modelling for AI-Generated Media Manipulation
❖ Deepfake Phishing and Impersonation Attacks
❖ Automated Video/Audio Spoofing for Fraud and Cybercrime
❖ Defensive AI Techniques for Detecting Synthetic Media in Cyberattacks

AI-Driven Disinformation & Fake News in Cybersecurity
❖ AI-Generated Propaganda and Cybersecurity Risks
❖ LLMs in Cyber Warfare, Automated Fake News, and Disinformation Amplification
❖ Computational Approaches to Detecting Manipulated Narratives
❖ Security Frameworks for Detecting and Mitigating AI-Generated Disinformation
❖ Network Analysis of AI-Driven Disinformation Campaigns in Cyberattacks
❖ Cybercrime and Legal Aspects of AI-Generated Disinformation

AI-Powered Deception & Social Engineering Attacks
❖ Adversarial AI for Social Engineering, Scams, and Automated Fraud
❖ Deepfake-Enhanced Phishing and Business Email Compromise (BEC) Attacks
❖ AI-Powered Deception for Cybersecurity Red Teaming
❖ Metrics for Assessing Deception and Manipulation in Cyber Operations
❖ AI-Driven Disinformation in Cyber-Espionage and Nation-State Attacks
❖ Countermeasures and Detection Strategies for Adversarial Deception

Generative AI & LLM Threats to Cybersecurity
❖ LLM-Powered Phishing, Impersonation, and Fraud Detection
❖ Prompt Injection Attacks and Adversarial Manipulation of LLMs
❖ Automated Misinformation Campaigns Using LLM-Generated Narratives
❖ Security Risks of AI-Generated Social Engineering and Disinformation Bots
❖ LLM-Based Malware, Code Obfuscation, and Automated Cyberattacks
❖ Digital Provenance and Watermarking for LLM-Generated Content Verification

Security & Countermeasures for AI-Generated Threats
❖ Adversarial Attacks on Deepfake and LLM-Based Security Systems
❖ AI-Based Threat Intelligence for Detecting AI-Generated Cyberattacks
❖ Watermarking and Content Provenance Verification for AI-Generated Media
❖ Human-AI Collaboration in Detecting AI-Generated Threats in SOCs
❖ Forensic Techniques for Attribution of Synthetic Media in Cybersecurity Incidents
❖ Robust Authentication and Identity Verification against AI-Generated Attacks

Organisers

Mario Fritz (CISPA, Germany)
Bimal Viswanath (Virginia Tech, USA)
Simon S. Woo (Sungkyunkwan University, South Korea)
Shahroz Tariq (CSIRO's Data61, Australia)
Kristen Moore (CSIRO's Data61, Australia)
Sharif Abuadbba (CSIRO's Data61, Australia)
Tim Walita (CISPA Helmholtz Center for Information Security, Germany)

Additional Details

Organiser Contact: shahroz.tariq@data61.csiro.au
Submission Link: https://3d-sec25.hotcrp.com/