HCAI 2026: Frontiers in Computer Science: Human-Centered AI in Cyber Security, Privacy, and Trust for Critical Infrastructures
Link: https://www.frontiersin.org/research-topics/79765/human-centered-ai-in-cyber-security-privacy-and-trust-for-critical-infrastructures
Call For Papers
Background
The intersection of Artificial Intelligence (AI) and human-centered design is reshaping strategies to protect critical infrastructures such as energy grids, transportation networks, healthcare systems, and financial services. As sophisticated cyber threats escalate rapidly, current approaches rely heavily on automation and detection systems, but these can be limited by issues of transparency, explainability, and human oversight. Recent research has demonstrated the benefits of combining AI technologies with human insight to establish resilience, manage privacy, and strengthen trustworthiness. However, the dynamic, high-stakes environments characteristic of critical infrastructures present unique challenges for effective human-AI collaboration. These challenges arise from stringent operational constraints, safety-critical decision making, and the convergence of Information Technology (IT) and Operational Technology (OT), where failures may have cascading physical and societal impacts. Consequently, critical infrastructure environments require human-centered AI approaches that ensure meaningful human oversight, contextual awareness, and robust human-AI collaboration. Gaps remain in designing systems that facilitate meaningful human control, promote ethical outcomes, and maintain reliable situational awareness in changing operational contexts.

This Research Topic aims to advance knowledge of how human-centered AI can be developed and operationalized for cyber security, privacy, and trust in critical infrastructures. The central objective is to explore novel frameworks, mechanisms, and system architectures that support effective human-AI collaboration, moving beyond automation-centric models to emphasize augmented judgment, adaptive teaming, and trust-building. Contributors are encouraged to investigate questions such as: How can AI-driven tools enhance human capability to prevent, detect, and respond to cyber incidents?
What design principles ensure accountability and establish confidence in AI-supported decisions? Can adaptive privacy-preserving solutions operate transparently and ethically at scale? The goal is to identify actionable insights and pioneering models that can inform the ethical integration of human-centered AI within operational cyber security landscapes.

The scope of this Research Topic covers the design, evaluation, and deployment of collaborative human-AI systems for cyber security, privacy, and trust in critical infrastructure, with recognition of the boundaries between fully autonomous, semi-autonomous, and human-in-the-loop solutions. While the focus is on interdisciplinary, empirical, and theoretical studies directly addressing human-AI collaboration in operational contexts, submissions must also demonstrate clear relevance to the practical realities and ethical considerations of critical infrastructure defense.

To gather further insights in this rapidly evolving field, we welcome articles addressing, but not limited to, the following themes as they apply to critical infrastructure sectors:
• AI-enhanced threat detection and mitigation with human collaboration
• Human-centric AI systems for real-time cyber defense
• Designing AI for collaborative threat response in critical infrastructure
• AI in cyber resilience, privacy preservation, and human-AI interaction
• Collaborative human-AI systems for incident response, recovery, and trust building
• Human-centered AI design for Security Operations Centers (SOCs)
• Ethics, accountability, transparency, and trust in human-AI cyber security teams
• Human-AI collaboration for privacy-aware prevention, detection, and response to cyber incidents
• Explainable and trustworthy AI for cyber security and privacy-critical applications
• Adaptive privacy monitoring and human-in-the-loop privacy protection systems

We invite submissions of original research articles, empirical studies, experimental findings, theoretical analyses, and innovative systems demonstrations relevant to human-centered AI for cyber security, privacy, and trust in critical infrastructures.

Topic Editors
Sabarathinam Chockalingam (Institute for Energy Technology (IFE), Halden, Norway)
Aida Omerovic (SINTEF, Oslo, Norway)
Mohan Baruwal Chhetri (Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australia)
Sanjay Misra (Institute for Energy Technology (IFE), Halden, Norway)
Prosper A. Kwei-Narh (Institute for Energy Technology (IFE), Halden, Norway)

For any questions or further information, please contact Sabarathinam.Chockalingam@ife.no.