AITest 2021 : The IEEE Third International Conference On Artificial Intelligence Testing
Link: http://www.ieeeaitests.com/
Call For Papers | |||||||||||||||
The IEEE Third International Conference on Artificial Intelligence Testing (AITest 2021)
23rd-26th August 2021, Virtual Conference
Organized by Oxford University, UK
Conference Site: http://ieeeaitests.com
Paper Submission Site: https://easychair.org/conferences/?conf=aitest2021

The IEEE Third International Conference On Artificial Intelligence Testing (AITest 2021) is an international conference that provides a platform for researchers, practitioners, and students to present research results and exchange ideas on how to test applications empowered by Artificial Intelligence (AI) and how to empower software testing methodology and techniques with AI.

AI technologies are widely used in computer applications to perform tasks such as monitoring, forecasting, recommending, prediction, and statistical reporting. They are deployed in a variety of systems, including driverless vehicles, robot-controlled warehouses, financial forecasting applications, and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. Despite this dramatic progress, the quality assurance of existing AI application development processes is still far from satisfactory, and the demand for demonstrable levels of confidence in such systems is growing.

Software testing is a fundamental, effective, and well-recognized quality assurance method that has proven cost-effective in ensuring the reliability of many complex software systems. However, the adaptation of software testing to the peculiarities of AI applications remains largely unexplored and requires extensive research. On the other hand, the availability of AI technologies provides an exciting opportunity to improve existing software testing processes: recent years have shown that machine learning, data mining, knowledge representation, constraint optimization, planning, scheduling, multi-agent systems, etc. have real potential to positively impact software testing.

Recent years have seen a rapid growth of interest in testing AI applications as well as in the application of AI techniques to software testing. This conference provides an international forum for researchers and practitioners to exchange novel research results, to articulate problems and challenges arising from practice, to deepen our understanding of the subject area with new theories, methodologies, techniques, process models, etc., and to improve practice with new tools and resources.

TOPICS OF INTEREST
A. Testing AI applications
+ Methodologies for testing, verification, and validation of AI applications
++ Process models for testing AI applications and for quality assurance activities and procedures
++ Quality models of AI applications and quality attributes of AI applications, such as correctness, reliability, safety, security, accuracy, precision, comprehensibility, explainability, etc.
++ The whole lifecycle of AI applications, including analysis, design, development, deployment, operation, and evolution
+ Techniques for testing AI applications
++ Test case design, test data generation, test prioritization, test reduction, etc.
++ Metrics and measurements of the adequacy of testing AI applications
++ Test oracles for checking the correctness of AI applications on test cases
+ Tools and environments for automated and semi-automated testing of AI applications, covering various testing activities and the management of testing resources
+ Specific concerns of software testing for particular types of AI technologies and AI applications

B. Applications of AI techniques to software testing
+ Machine learning applications to software testing, such as test case generation, test effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.
+ Constraint programming for test case generation and test suite reduction
+ Constraint scheduling and optimization for test case prioritization and test execution scheduling
+ Multi-agent systems for testing and test services
+ Crowdsourcing and swarm intelligence in software testing
+ Genetic algorithms, search-based techniques, and heuristics for the optimization of testing
+ Knowledge-based and expert systems for software testing

C. Data quality checking for AI applications
+ Quality assurance for unstructured training data
+ Automatic data validation tools
+ Large-scale unstructured data quality certification

TYPES OF CONTRIBUTIONS

A. Regular Papers (8 pages) and Short Papers (2 pages)
Regular papers describe original and significant work or report on case studies and empirical research. Short papers describe late-breaking research results or work in progress with timely and innovative ideas.

B. AI Testing in Practice Papers (8 pages)
Papers in this track provide a forum for networking, exchanging ideas, and reporting innovative or experimental practices, addressing software engineering research that directly impacts the practice of software testing for AI.

C. Tool Demo Papers (4 pages)
The tool demo track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets in the context of software testing for AI.

FORMAT

All papers must be submitted electronically in PDF format using the IEEE Computer Society Proceedings format (two columns, single-spaced, 10pt font). Submitted papers must not have been accepted for publication elsewhere and must not be under submission to another conference or journal. Each paper will be reviewed by at least three members of the Program Committee, using a single-blind reviewing procedure. At least one author of each accepted paper must register for the conference and confirm that she/he will present the paper.

The submission site is AITest 2021 at EasyChair: https://easychair.org/conferences/?conf=aitest2021

Program Committee Chairs
W.K. Chan, City University of Hong Kong, China
Gordon Fraser, University of Passau, Germany

General Executive Chair
Hong Zhu, Oxford Brookes University, UK

General Chairs
Franz Wotawa, Graz University of Technology, Austria
Jerry Gao, San Jose State University, USA
Marc Roper, University of Strathclyde, UK