posted by user: grupocole || 230 views || tracked by 1 users: [display]

LLMSEC 2025 : LLMSEC

FacebookTwitterLinkedInGoogle

Link: https://sig.llmsecurity.net/workshop/
 
When Aug 1, 2025 - Aug 1, 2025
Where Vienna, Austria
Submission Deadline Apr 15, 2025
Notification Due May 17, 2025
Final Version Due Jun 16, 2025
Categories    NLP   computational linguistics   artificial intelligene
 

Call For Papers


LLMSEC 2025

URL: https://sig.llmsecurity.net/workshop/
Direct submission deadline: April 15, 2025


LLMSEC is an academic event publishing & presenting work on adversarially-induced failure modes of large language models, the conditions that lead to them, and their mitigations.

Date: Aug 1, 2025
Location: Vienna, Austria

Co-located with ACL 2025 as a workshop

Scope
Large Language Models accept a variety of inputs and produce a variety of outputs. It is possible to find inputs that lead to LLM outputs that model creators, owners, or users do not want. Defining and enumerating this space is an open task. We describe LLM security as the field of investigating how models that process text can, by an adversary, be made to behave in unintended and harmful ways. %The field covers both weaknesses and vulnerabilities.

Research at LLMSEC includes the entire life cycle of LLMs, from training data through fine-tuning and alignment over to inference-time. It also covers deployment context of LLMs, including risk assessment, release decisions, and use of LLMs in agent-based systems.

Event scope is LLM attacks, LLM defence, and the contextualisation of LLM security. LLM attacks are anything that causes LLMs to behave in an unexpected/unintended manner usable by an adversary. In the LLM life cycle, this includes techniques like data poisoning and other model supply chain attacks, as well as the adversarial inputs that yield insecure outputs. Topics include:

Adversarial attacks on LLMs
Automated and adaptive LLM attacks
Data poisoning
Data extraction from trained models
Defining LLM vulnerabilities
Detection of adversarial LLM inputs
Ethical aspects of LLM security
Legal impacts and debates related to model security
LLM Denial-of-service
LLM security measurement
LLM supply chain attacks
Model input/output guardrails
Model inversion
Model policy
Multi-modal and cross-model models (e.g. vision&text-to-text, text-to-speech, speech-to-text)
Organising model exploits
Organising model failure modes
Practical tools for exploiting LLMs
Privacy breaches mediated by LLM
Privilege escalation and lateral movement mediated by LLMs
Prompt injection
Proofs-of-concept of LLM exploits
Red teaming of LLMs
Retrieval Augmented Generation security
Secure LLM use and deployment


Keynotes

1. Johannes Bjerva, Aalborg University (Denmark). Prof. Bjerva’s research is characterised by an interdisciplinary perspective on NLP, with a focus on the potential for impact in society. His main contributions to my field are to incorporate linguistic information into NLP, including large language models (LLMs), and to improve the state of resource-poor languages. Recent research focuses on embedding inversion and attacks on multi-modal models.

2. Erick Galinkin, NVIDIA Corporation (USA). Erick Galinkin is a Research Scientist at NVIDIA working on the security assessment and protection of large language models. Previously, he led the AI research team at Rapid7 and has extensive experience working in the cybersecurity space. He is an alumnus of Johns Hopkins University and holds degrees in applied mathematics and computer science. Outside of his work, Erick is a lifelong student, currently at Drexel University and is renowned for his ability to be around equestrians.

3. TBA

Submission formats

Submissions must be anonymised & de-identified following ACL policy, and in the ACL template.

Long & Short papers

We invite both short and long papers; short papers with a 4 page limit, long papers with an 8 page limit, with references, ethics statements, & other compulsory sections not subjected to this limit.

Qualitative work

As a relatively new field, still engaged in sense-making of the context of this research, we particularly welcome rigorous qualitative work, and work that provides novel information about LLMSEC practice and context.

War stories

Following cybersecurity tradition, LLMSEC also welcomes “war stories”, that is, accounts of security investigations or operations that are informative to broader audiences. These are intended to connect researchers and practitioners; LLM security is highly interdisciplinary and we have a lot to share with each other.

War story submissions need not provide novel quantitative empirical results, but should be illuminating and helpful to the workshop audience. They may be up to four pages, with references, appendices, and compulsory sections excluded from the limit

Submission link

Submit via softconf: https://softconf.com/acl2025/llmsec2025/

Important Dates

Direct submission deadline: April 15, 2025
Notification of acceptance: May 17, 2025
Camera-ready paper deadline: June 16, 2025
Pre-recorded video due: July 5, 2025
Workshop dates: July 31st / August 1st 2025
TZ: Anywhere on earth

Organisation

Leon Derczynski. Principal Scientist in LLM Security at NVIDIA Corporation, Associate Professor in NLP at ITU University of Copenhagen, President of ACL SIGSEC. https://www.linkedin.com/in/leon-derczynski/

Jekaterina Novikova. Science Lead at the AI Risk and Vulnerability Alliance (ARVA), Expert Advisor of ACL SIGSEC. https://jeknov.github.io/

Muhao Chen. Assistant Professor of Computer Science at Uuniversity of California, Davis, Secretary of ACL SIGSEC. Prof Chen has considerable organisational and service experience, including SAC and AC at NAACL, ACL, EMNLP, and AAAI, and co-chairing workshops at NAACL 2022 and AKBC 2022. https://muhaochen.github.io/

Related Resources

IEEE-Ei/Scopus-ITCC 2025   2025 5th International Conference on Information Technology and Cloud Computing (ITCC 2025)-EI Compendex
RANLP 2025   Recent Advances in Natural Language Processing
BIOM 2025   International Conference on Big Data, IoT and Machine Learning
KONVENS 2025   Conference on Natural Language Processing
IJCSITCE 2025   The International Journal of Computational Science, Information Technology and Control Engineering
IJANS 2025   International Journal on AdHoc Networking Systems
IJRAP 2025   International Journal of Recent advances in Physics
DEPLING 2023   International Conference on Dependency Linguistics
TSD 2025   Twenty-eighth International Conference on Text, Speech and Dialogue
MathSJ 2025   Applied Mathematics and Sciences: An International Journal