NEMESIS 2018 : 1st Workshop on Recent Advances in Adversarial Machine Learning - ECML/PKDD
Link: http://research.ibm.com/labs/ireland/nemesis2018/
==== Call for Papers (Nemesis'18) ====
1st Workshop on Recent Advances in Adversarial Machine Learning
Co-located with ECML/PKDD 2018

Date: September 10, 2018
Venue: Croke Park, Dublin, Ireland
Site: http://research.ibm.com/labs/ireland/nemesis2018

** SUBMISSION DEADLINE: JULY 2, 2018 **

MOTIVATION
==========
There is an exploding body of literature on adversarial machine learning; however, several key questions remain unanswered:

* What is the reason for the existence of adversarial examples, and why do they transfer between different machine learning models?
* How can the space of adversarial examples be characterized, in particular relative to the data manifold and learned representations of the data?
* Are there provable limitations to the robustness guarantees that adversarial defences can provide, in particular against white-box attacks or adaptive adversaries?
* How strong is the adversarial threat for data modes other than images, e.g., text or speech?
* How can defences be designed to address threats from combinations of poisoning and evasion attacks?

TOPICS OF INTEREST
==================
The workshop solicits contributions on topics including (but not limited to):

* Theory of adversarial machine learning
  - Space of adversarial examples
  - Transferability
  - Learning theory
  - Data privacy
  - Metrics of adversarial robustness
* Adversarial attacks
  - Data poisoning
  - Evasion
  - Model theft
  - Attacks for different data modes, in particular text / natural language understanding
  - Attacks by adaptive adversaries
* Adversarial defences
  - Data poisoning
  - Evasion
  - Model theft
  - Model hardening
  - Input data preprocessing
  - Robust model architectures
  - Defences against adaptive adversaries
* Applications and demonstrations
  - Real-world examples and use cases of adversarial threats and defences against them

SUBMISSION FORMAT
=================
The workshop invites two types of submissions: full research papers and extended abstracts.
Accepted full research contributions will be published by Springer in the workshop’s proceedings. Extended abstracts are meant to cover preliminary research ideas and results. Submissions will be evaluated on the basis of significance, originality, technical quality and clarity. Only work that has not been previously published will be considered.

Papers must be written in English and formatted according to the Springer LNCS guidelines. Full research papers must be up to ten pages long (excluding references); extended abstracts must be up to six pages long (excluding references). To be considered, papers must be submitted before the deadline (see Important Dates section). Electronic submissions will be handled via EasyChair. Submissions should include the authors’ names and affiliations, as the review process is single-blind. For each accepted paper, at least one author must attend the workshop and present the paper.

ORGANIZERS
==========
Workshop chair:
* Mathieu Sinn, IBM Research

Program committee chairs:
* Ian Molloy, IBM Research
* Irina Nicolae, IBM Research

Program committee:
* Naveed Akhtar, University of Western Australia
* Pin-Yu Chen, IBM Research
* David Evans, University of Virginia
* Alhussein Fawzi, DeepMind
* Kathrin Grosse, Saarland University
* Tianyu Gu, Uber ATG
* Jan Hendrik Metzen, Bosch Center for AI
* Luis Munoz-Gonzalez, Imperial College London
* Florian Tramer, Stanford University
* Xiangyu Zhang, Purdue University