ANUBIS 2026: Assessment with New methodologies, Unified Benchmarks, and environments, of Intrusion detection and response Systems
Link: https://superviz.inria.fr/anubis26/

Call For Papers
We are pleased to invite you to submit your work to ANUBIS: the 2nd International Workshop on Assessment with New methodologies, Unified Benchmarks, and environments, of Intrusion detection and response Systems.
It will take place in Rome, Italy, in September 2026, co-located with the 31st European Symposium on Research in Computer Security (ESORICS 2026). ANUBIS is supported by France 2030 through the “Superviz” project. Please find our Call for Papers below.

======================================

In the face of the enormous volume of publications in the field of intrusion detection and response, coupled with the lack of rigorous evaluation methodologies for these (increasingly AI-based) methods, reproducibility is close to impossible. To remedy this issue, ANUBIS offers researchers from different domains and communities the opportunity to present and discuss their evaluation methodology practices. Evaluation is a fundamentally transverse topic, and multidisciplinary expertise covering cybersecurity goals, technical domain constraints, and machine learning components is necessary to achieve fair, explainable, and trustworthy evaluation. We are therefore looking for submissions that deal with methods, tools, and techniques to evaluate security measures that aim to protect (computer) systems against intrusions. We welcome original papers from researchers and practitioners with various backgrounds, such as security and privacy (incl. code audit or penetration testing), formal methods, experimental platforms (incl. digital twins), machine learning, and data mining.

Topics of Interest
==================
- Threat data collection software and methods
- Evaluation of current and new security datasets
- Privacy-preserving dataset collection
- AI for synthetic data generation (legitimate, malicious, and mixed workloads)
- Data representation for security
- Methodologies, benchmarks, metrics, formal methods, and tools for evaluating datasets or security tools
- Evaluation in dynamic environments and concept drift analysis
- Platforms, learning environments, digital twins, and software for reproducible experiments
- Evaluation of AI approaches for intrusion detection and response, such as reinforcement learning and federated learning

Submission Guidelines
=====================
The workshop accepts original research work and work-in-progress, not substantially overlapping with previous publications or concurrent submissions, as either:
- research papers: at most 16 pages (using 10-point font), excluding the bibliography and well-marked appendices, or
- position and work-in-progress papers: at most 8 pages (using 10-point font), excluding the bibliography and well-marked appendices.

Submitted papers must follow the LNCS template from the time they are submitted. ANUBIS follows a double-blind review process, and all papers that are not desk-rejected will be reviewed by two to three experts.

Submissions must be uploaded to the ANUBIS track on the EasyChair website: https://easychair.org/conferences/?conf=esorics2026

Important dates
===============
- Submission deadline: June 12, 2026, AoE
- Notification to authors: July 31, 2026
- Camera-ready version: August 28, 2026, AoE

For future updates, check our website: https://superviz.inria.fr/anubis26/