
IEEE AITest 2024: The 6th IEEE International Conference on Artificial Intelligence Testing


Link: https://ieeeaitest.com/
 
When: Jul 15, 2024 - Jul 18, 2024
Where: Shanghai, China
Submission Deadline: Apr 1, 2024
 

Call For Papers

Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks
such as monitoring, forecasting, recommending, predicting, and statistical reporting. They are
deployed in a variety of systems, including driverless vehicles, robot-controlled warehouses,
financial forecasting applications, and security enforcement, and they are increasingly integrated
with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things (IoT), mobile
computing, smart cities, smart homes, intelligent healthcare, and many more.

Despite this dramatic progress, the quality assurance of existing AI application development
processes remains far from satisfactory, and the demand for demonstrable levels of confidence in
such systems is growing. Software testing is a fundamental, effective, and well-recognized quality
assurance method that has proven cost-effective in ensuring the reliability of many complex
software systems. However, adapting software testing to the peculiarities of AI applications
remains largely unexplored and requires extensive research. Conversely, the availability of AI
technologies provides an exciting opportunity to improve existing software testing processes:
recent years have shown that machine learning, data mining, knowledge representation, constraint
optimization, planning, scheduling, multi-agent systems, etc. have real potential to positively
impact software testing.

Recent years have seen a rapid growth of interest both in testing AI applications and in applying
AI techniques to software testing. This conference provides an international forum for researchers
and practitioners to exchange novel research results, articulate the problems and challenges
arising in practice, deepen our understanding of the subject area with new theories, methodologies,
techniques, process models, impacts, etc., and improve practice with new tools and resources.
Topics of Interest:
The conference invites papers reporting original research on AI testing, as well as reports of best
practices in industry and of challenges in practice and research. Topics of interest include (but are
not limited to) the following:
• Testing AI applications
• Methodologies for testing, verification, and validation of AI applications
o Process models for testing AI applications, including quality assurance activities and
procedures
o Quality models of AI applications and quality attributes of AI applications, such as
correctness, reliability, safety, security, accuracy, precision, comprehensibility,
explainability, etc.
o The whole lifecycle of AI applications, including analysis, design, development,
deployment, operation, and evolution
o Quality evaluation and validation of the datasets that are used for building the AI
applications
• Techniques for testing AI applications
o Test case design, test data generation, test prioritization, test reduction, etc.
o Metrics and measurements of the adequacy of testing AI applications
• Testing of Large Language Models (LLMs)
• Test oracles for checking the correctness of AI applications on test cases
• Tools and environments for automated and semi-automated testing of AI applications,
covering various testing activities and the management of testing resources
• Specific concerns of software testing with various specific types of AI technologies and AI
applications
• Applications of AI techniques to software testing
• Machine learning applications to software testing, such as test case generation, test
effectiveness prediction and optimization, test adequacy improvement, and test cost reduction
• Constraint Programming for test case generation and test suite reduction
• Constraint Scheduling and Optimization for test case prioritization and test execution
scheduling
• Crowdsourcing and swarm intelligence in software testing
• Genetic algorithms, search-based techniques, and heuristics for test optimization
• Data quality evaluation for AI applications
• Automatic data validation tools
• Quality assurance for unstructured training data
• Large-scale unstructured data quality certification
• Techniques for testing deep learning, reinforcement learning, and graph learning
• Testing of distributed AI applications and software
• Responsible AI testing
Important Dates:
• Early submission deadline: March 8, 2024 (first-round review)
• Final submission deadline: April 1, 2024 (second-round review)
• Author notification: June 1, 2024
• Final paper submission (camera-ready) and conference registration: June 15, 2024
• Conference dates: July 15-18, 2024
