posted by user: matteo_camilli

JSS SI: AI testing and analysis 2024 : [JSS - Elsevier] Special Issue on Automated Testing and Analysis for Dependable AI-enabled Software and Systems


Link: https://www.sciencedirect.com/journal/journal-of-systems-and-software/about/call-for-papers?fbclid=IwAR3PgrP2T65w7ZY2GPSJ3RXVAPxRZQWB2XDcNuUPW6d-16sMGI-74M5V9vA#automated-testing-a
 
When N/A
Where N/A
Submission Deadline Aug 31, 2024
Notification Due Oct 31, 2024
Categories    SE4AI   automated testing   ai testing   dependability
 

Call For Papers

====================
Journal of Systems and Software (JSS),
Special Issue on
** Automated Testing and Analysis for Dependable AI-enabled Software and Systems **
====================

** Guest editors **

Matteo Camilli, Politecnico di Milano, Italy

Michael Felderer, German Aerospace Center (DLR) and University of Cologne, Cologne, Germany

Alessandro Marchetto, University of Trento, Italy

Andrea Stocco, Technical University of Munich (TUM) and fortiss GmbH, Germany

** Special Issues Editors **

Laurence Duchien and Raffaela Mirandola

** Editors in Chief **

Paris Avgeriou and David Shepherd


** Special issue information **

Advances in Artificial Intelligence (AI) and its integration into various domains have led to AI-enabled software and systems that offer unprecedented capabilities. Technologies ranging from computer vision to natural language processing, and from speech recognition to recommender systems, enhance modern software and systems with the aim of providing innovative services as well as rich, customized user experiences. These technologies are also changing software and systems engineering methods and tools, especially quality assurance methods, which require deep restructuring due to the inherent differences between AI and traditional software.

AI-enabled software and systems are often large-scale, data-driven, and more complex than traditional software and systems. They are typically heterogeneous, autonomous, and probabilistic in nature, and their internal mechanics often lack transparency. Furthermore, they are typically optimized and trained for specific tasks and, as such, may fail to generalize their knowledge to new situations that often emerge in dynamic environments. These systems strongly demand safety, trustworthiness, security, and other dependability properties. High-quality data and AI components must be safely integrated, verified, maintained, and evolved. Indeed, the potential impact of a failure or a service interruption cannot be tolerated in business-critical applications (e.g., chatbots and virtual assistants, facial recognition for authentication and security, industrial robots) or safety-critical applications (e.g., autonomous drones, collaborative robots, self-driving cars and autonomous vehicles for transportation).

The scientific community is therefore studying new cost-effective verification and validation techniques tailored to these systems. In particular, automated testing and analysis is a very active area that has produced notable advances toward realizing the promise of dependable AI-enabled software and systems.

This special issue welcomes contributions regarding approaches, techniques, tools, and experience reports about adopting, creating, and improving automated testing and analysis of AI-enabled software and systems with a special focus on dependability aspects, such as reliability, safety, security, resilience, scalability, usability, trustworthiness, and compliance to standards.

Topics of interest include, but are not limited to:

Verification and validation techniques and tools for AI-enabled software and systems.
Automated testing and analysis approaches, techniques, and tools for AI-enabled software and systems.
Fuzzing and search-based testing for AI-enabled software and systems.
Metamorphic testing for AI-enabled software and systems.
Techniques and tools to assess the dependability of AI-enabled software and systems, such as reliability, safety, security, resilience, scalability, usability, trustworthiness, and compliance with standards in critical domains.
Fault and vulnerability detection, prediction, and localization techniques and tools for AI-enabled software and systems.
Automated testing and analysis to improve the explainability of AI-enabled software and systems.
Program analysis techniques for AI-enabled software and systems.
Regression testing and continuous integration for AI components.
Automated testing and analysis of generative AI, such as Large Language Models (LLMs), chatbots, and text-to-image AI systems.
Verification and validation techniques and tools for specific domains, such as healthcare, telecommunication, cloud computing, mobile, big data, automotive, industrial manufacturing, robotics, cyber-physical systems, Internet of Things, education, social networks, and context-aware software systems.
Empirical studies, applications, and case studies in verification and validation of AI-enabled software and systems.
Experience reports and best practices in adopting, creating, and improving testing and analysis of AI-enabled software and systems.
Future trends in AI testing and analysis, such as the integration of AI technologies in test case generation and validation of AI-enabled software and systems.

** Important dates (tentative) **

Submission Open Date: January 1, 2024
Manuscript Submission Deadline: August 31, 2024
Notification to authors (first round): October 31, 2024
Submission of revised papers (second round): January 31, 2025
Completion of the review and revision process (final notification): February 28, 2025


** Manuscript submission information **

The call for this special issue is an open call. All submitted papers will undergo a rigorous peer-review process and should adhere to the general principles of Journal of Systems and Software articles. Submissions must be prepared according to the Guide for Authors. Submitted papers must be original and must not have been previously published or be under consideration for publication elsewhere. If a paper has already been presented at a conference, it should contain at least 30% new material before being submitted to this issue. Authors must provide any previously published material relevant to their submission and describe the additions made. Although the call is open, some papers will be invited for this special issue. The special issue does not publish survey articles, systematic reviews, or mapping studies.

All manuscripts and any supplementary material should be submitted through the Elsevier Editorial System. Follow the submission instructions given on this site. During the submission process, select the article type "VSI:AI-testing-and-analysis" from the "Choose Article Type" pull-down menu.

Submissions will be processed and enter the review process as soon as they are received, without waiting for the submission deadline.

Related Resources

ISSTA 2025   The ACM SIGSOFT International Symposium on Software Testing and Analysis
AI in Evidence Synthesis 2025   AI in Evidence Synthesis (Cochrane Evidence Synthesis and Methods)
IDA 2025   Intelligent Data Analysis
SEAS 2025   14th International Conference on Software Engineering and Applications
AASDS 2024   Special Issue on Applications and Analysis of Statistics and Data Science
FPC 2025   Foresight Practitioner Conference 2025
ICPAMI 2025   2025 2nd International Conference on Pattern Analysis and Machine Intelligence
Topical collection Springer 2025   CFP: Sense-Making and Collective Virtues among AI Innovators. Aligning Shared Concepts and Common Goals
VALID 2025   The Seventeenth International Conference on Advances in System Testing and Validation Lifecycle
AISyS 2025   The Second International Conference on AI-based Systems and Services