AI Security & Privacy 2025 : First International Workshop on Artificial Intelligence Security and Privacy
Link: https://sites.google.com/view/aisp2025/home
Call For Papers
Many products and services that utilize AI technology have become pervasive worldwide, and AI decision-making now affects people's lives and many industries. As humans are gradually removed from autonomous decision-making by AI, there is a growing need to treat AI security and privacy as design principles. This workshop aims to explore new ideas and deepen research on AI security and privacy, including malfunctions, attacks, defenses, tracking, and analysis. The workshop is led by JSAI's SIG-Sec. Internationally, existing workshops such as AAAI/SafeAI and IJCAI/AIsafety are quite active; however, there is no corresponding international venue in Japan yet. This workshop serves to fill that gap and to make Japanese and Asian research on AI security and privacy more active.
Topics of interest include the following related to AI Security and Privacy:
- Adversarial learning
- Federated learning
- Machine unlearning
- AI approaches to trust and reputation
- AI misuse (e.g., misinformation, deepfakes)
- Machine learning and computer security
- Privacy-enhancing technologies, anonymity, and censorship (e.g., differential privacy in AI)

More broadly, the workshop is interested in all AI aspects of computer security and privacy, and includes the following LLM-related hot topics:
- Secure large AI systems and models
- Privacy and security vulnerabilities of large AI systems and models
- Copyright of AI

It also covers any aspect of AI safety, including but not limited to:
- Safety in AI-based system architectures
- Detection and mitigation of AI safety risks
- Avoiding negative side effects in AI-based systems
- Regulating AI-based systems: safety standards and certification
- Evaluation platforms for AI safety
- AI safety education and awareness
- Safety and ethical issues of generative AI

We welcome and encourage the submission of high-quality, original papers that are not simultaneously submitted for publication elsewhere. All submissions will be blind-refereed and must therefore be anonymous, with no author names, affiliations, acknowledgments, or obvious self-references. Papers should be written in English, submitted as a PDF formatted according to the Springer LNCS style (available from https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines), and must not exceed 14 pages including figures, references, etc. If you use a Word file, please follow the formatting instructions, then convert it to PDF and submit it at the paper submission page: https://easychair.org/conferences/?conf=aisecurityprivacy202
If a paper is accepted, at least one author of the paper must register for the workshop through this page.
Without fulfilling this condition, the paper will not be included in the proceedings.