IEEE AITEST 2022: THE 4TH IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING
Link: http://ieeetests.com/?p=19
Call For Papers | |||||||||||||||
Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks such as monitoring, forecasting, recommendation, prediction, and statistical reporting. They are deployed in a wide variety of systems, including driverless vehicles, robot-controlled warehouses, financial forecasting applications, and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. In spite of this dramatic progress, the quality assurance of existing AI application development processes is still far from satisfactory, and the demand for demonstrable levels of confidence in such systems is growing. Software testing is a fundamental, effective and recognized quality assurance method that has proven cost-effective in ensuring the reliability of many complex software systems. However, the adaptation of software testing to the peculiarities of AI applications remains largely unexplored and calls for extensive research.

At the same time, AI technologies offer an exciting opportunity to improve existing software testing processes: recent years have shown that machine learning, data mining, knowledge representation, constraint optimization, planning, scheduling, multi-agent systems, etc. have real potential to improve software testing. Interest has thus grown rapidly both in testing AI applications and in applying AI techniques to software testing. This conference provides an international forum for researchers and practitioners to exchange novel research results, to articulate the problems and challenges arising from practice, to deepen our understanding of the subject area with new theories, methodologies, techniques, process models, etc., and to improve practice with new tools and resources.

Topics Of Interest

The conference invites papers reporting original research on AI testing, best practices in industry, and challenges encountered in practice and research. Topics of interest include (but are not limited to) the following:

Testing AI applications
- Methodologies for testing, verification and validation of AI applications
- Process models for testing AI applications, and quality assurance activities and procedures
- Quality models of AI applications and their quality attributes, such as correctness, reliability, safety, security, accuracy, precision, comprehensibility, explainability, etc.
- The whole lifecycle of AI applications, including analysis, design, development, deployment, operation and evolution
- Quality evaluation and validation of the datasets used to build AI applications

Techniques for testing AI applications
- Test case design, test data generation, test prioritization, test reduction, etc.
- Metrics and measurements of the adequacy of testing AI applications
- Test oracles for checking the correctness of AI applications on test cases (an illustrative sketch follows this list)
- Tools and environments for automated and semi-automated software testing
- AI applications for various testing activities and management of testing resources
- Specific concerns of software testing for particular types of AI technologies and AI applications

Applications of AI techniques to software testing
- Machine learning applied to software testing, such as test case generation, test effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.
- Constraint programming for test case generation and test suite reduction
- Constraint scheduling and optimization for test case prioritization and test execution scheduling
- Crowdsourcing and swarm intelligence in software testing
- Genetic algorithms, search-based techniques and heuristics for the optimization of testing

Data quality evaluation for AI applications
- Automatic data validation tools
- Quality assurance for unstructured training data
- Large-scale unstructured data quality certification
- Techniques for testing deep neural network learning, reinforcement learning and graph learning
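
Several of the topics above, most visibly the test oracle problem, stem from the fact that for many AI applications the expected output of a given test case is unknown. Metamorphic testing is one widely studied way around this: instead of checking a single output against an expected value, a test checks a relation that must hold between the outputs of related inputs. Below is a minimal, self-contained Python sketch; the nearest-centroid classifier and the scale-invariance relation are hypothetical choices made purely for illustration and are not part of this call.

    # Illustrative sketch only: a metamorphic test oracle for a toy classifier.
    # The model (nearest centroid by cosine similarity) and the metamorphic
    # relation (scaling an input by k > 0 must not change the predicted class)
    # are assumptions made for this example, not artifacts of AITEST.
    import numpy as np

    def predict(x, centroids):
        """Return the index of the centroid most similar to x (cosine similarity)."""
        sims = centroids @ x / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(x))
        return int(np.argmax(sims))

    def test_scale_invariance(n_trials=1000, seed=0):
        """Metamorphic relation: predict(k * x) == predict(x) for any k > 0.
        No ground-truth label is needed; the relation itself is the oracle."""
        rng = np.random.default_rng(seed)
        centroids = rng.normal(size=(5, 8))   # 5 classes, 8 features
        for _ in range(n_trials):
            x = rng.normal(size=8)            # randomly generated test input
            k = rng.uniform(0.1, 10.0)        # random positive scale factor
            assert predict(k * x, centroids) == predict(x, centroids)

    test_scale_invariance()
    print("metamorphic relation held on all generated inputs")

Because the relation supplies the pass/fail criterion, test inputs can be generated freely, which is why metamorphic relations pair naturally with the automated test data generation topics listed above.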

General Chairs
Hong Zhu, Oxford Brookes University, UK
Franz Wotawa, Graz University of Technology, Austria

Program Chairs
Junhua Ding, University of North Texas, USA
Oum-El-Kheir Aktouf, Université Grenoble Alpes, France

PC Members
Rob Alexander – University of York, United Kingdom
Sebastien Bardin – CEA LIST, France
Christian Berger – University of Gothenburg, Sweden
Christof J. Budnik – Siemens Corporate Technology, United States
Yan Cai – State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China
Andrea Ceccarelli – University of Firenze, Italy
Jaganmohan Chandrasekaran – Virginia Tech Research Center, United States
Lin Chen – Nanjing University, China
Zhenbang Chen – National University of Defense Technology, Changsha, China
Zhenyu Chen – Nanjing University, China
Stanislav Chren – Masaryk University, Faculty of Informatics, Czechia
Chuanqi Tao – College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Emilia Cioroaica – Fraunhofer, Germany
Claudio De La Riva – Universidad de Oviedo, Spain
Anurag Dwarakanath – Accenture Technology Labs, India
Kerstin Eder – University of Bristol, United Kingdom
Chunrong Fang – Software Institute of Nanjing University, China
Fuyuki Ishikawa – National Institute of Informatics, Japan
Hyeran Jeon – University of California Merced, United States
Shunhui Ji – Hohai University, China
Bo Jiang – Beihang University, China
Mingyue Jiang – Zhejiang Sci-Tech University, China
Foutse Khomh – DGIGL, École Polytechnique de Montréal, Canada
Nadjib Lazaar – UM2-LIRMM, France
Yu Lei – University of Texas at Arlington, United States
J. Jenny Li – Kean University, United States
Francesca Lonetti – CNR-ISTI, Italy
Dusica Marijan – Simula, Norway
Kevin Moran – College of William & Mary, United States
Ernest Pobee – City University of Hong Kong, Hong Kong
Andrea Polini – University of Camerino, Italy
Ju Qian – Nanjing University of Aeronautics and Astronautics, China
Guodong Rong – Meta Platforms, United States
Marc Roper – University of Strathclyde, United Kingdom
Chang-Ai Sun – University of Science and Technology Beijing, China
Sahar Tahvili – Ericsson, Sweden
Tatsuhiro Tsuchiya – Osaka University, Japan
Javier Tuya – Universidad de Oviedo, Spain
Mark Utting – The University of Queensland, Australia
Neil Walkinshaw – The University of Sheffield, United Kingdom
Ziyuan Wang – Nanjing University of Posts and Telecommunications, China
Zhi Quan Zhou – University of Wollongong, Australia
Tao Zhang – Northwestern Polytechnical University, China

CISOSE General Chairs
Jerry Gao, San Jose State University, USA
Paul Townend, Umeå University, Sweden

CISOSE Steering Committee
Jerry Gao, San Jose State University, USA
Guido Wirtz, University of Bamberg, Germany
Huaimin Wang, NUDT, China
Jie Xu, University of Leeds, UK
Wei-Tek Tsai, Arizona State University, USA
Axel Küpper, TU Berlin, Germany
Hong Zhu, Oxford Brookes University, UK
Longbing Cao, University of Technology Sydney, Australia
Cristian Borcea, New Jersey Institute of Technology, USA
Hiroyuki Sato, University of Tokyo, Japan