LLMCS 2024: The International Workshop on Large Language Models for Cybersecurity
Link: https://fllm2024.fllm-conference.org/Workshops/LLMCS2024/

Call For Papers
Recent advancements in Generative Artificial Intelligence have revolutionized the landscape of content creation and have significantly changed how content is conceptualized, developed, and delivered across various industries. Large Language Models (LLMs), such as BERT, T5, ChatGPT, GPT-4, Falcon 180B, and Codex, have influenced most disciplines of science and technology that support content generation in diverse applications, including cybersecurity. In cybersecurity, LLMs are a dual-purpose tool. On the one hand, they enable malicious actors to identify vulnerabilities and enhance attack strategies; on the other hand, they empower security teams to fortify defenses, identify threats, and streamline risk management and operational processes. Despite the anticipated widespread adoption of LLMs, our understanding of their full impact on cybersecurity remains incomplete. There is a critical need to comprehensively assess how they contribute to the discovery of vulnerabilities, the development of new attack tactics and techniques, the creation of complex malware patterns, the identification of potential threats, and the mitigation of risks through automated vulnerability remediation.
We invite the submission of original papers on all topics related to LLMs and cybersecurity, with special interest in, but not limited to:
- LLM-empowered defensive strategies
- Offensive approaches using LLMs
- LLMs and cybercrime laws
- Impact of chatbot software/apps (BERT, T5, ChatGPT, GPT-4, Falcon 180B, ...) on cybersecurity education
- LLMs for creating cybersecurity policies
- Security of LLM-generated code
- LLM-driven threat modeling
- LLMs for solving offensive security challenges such as Capture the Flag
- Reliability issues of using LLMs in the cybersecurity context
- LLMs for generation and analysis of Cyber Threat Intelligence (CTI)
- Privacy issues of LLMs and privacy-preserving LLMs
- Generating adversarial machine learning examples using LLMs
- LLM-driven threat prevention
- LLM-based cybersecurity awareness frameworks