
IEEE AITest 2023 : The 5th IEEE International Conference on Artificial Intelligence Testing


Link: https://ieeeaitest.com/
 
When Jul 17, 2023 - Jul 20, 2023
Where Athens, Greece
Submission Deadline Apr 7, 2023
Notification Due Apr 7, 2023
Final Version Due Apr 7, 2023
Categories: software engineering, software testing, artificial intelligence
 

Call For Papers

Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks such as monitoring, forecasting, recommendation, prediction, and statistical reporting. They are deployed in a variety of systems, including driverless vehicles, robot-controlled warehouses, financial forecasting applications, and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics, robotics, the Internet of Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. In spite of this dramatic progress, the quality assurance of existing AI application development processes is still far from satisfactory, and the demand for demonstrable levels of confidence in such systems is growing.

Software testing is a fundamental, effective and recognized quality assurance method that has proven cost-effective in ensuring the reliability of many complex software systems. However, the adaptation of software testing to the peculiarities of AI applications remains largely unexplored and requires extensive research. On the other hand, the availability of AI technologies provides an exciting opportunity to improve existing software testing processes, and recent years have shown that machine learning, data mining, knowledge representation, constraint optimization, planning, scheduling, multi-agent systems, etc. have real potential to positively impact software testing.

Recent years have seen a rapid growth of interest in testing AI applications as well as in applying AI techniques to software testing. This conference provides an international forum for researchers and practitioners to exchange novel research results, to articulate the problems and challenges arising in practice, to deepen our understanding of the subject area with new theories, methodologies, techniques, process models, etc., and to improve practice with new tools and resources.

Topics Of Interest

The conference invites papers reporting original research on AI testing, as well as reports of best practices in industry and of the challenges faced in practice and research. Topics of interest include (but are not limited to) the following:

Testing AI applications
- Methodologies for testing, verification and validation of AI applications
- Process models for testing AI applications and quality assurance activities and procedures
- Quality models of AI applications and quality attributes of AI applications, such as correctness, reliability, safety, security, accuracy, precision, comprehensibility, explainability, etc.
- The whole lifecycle of AI applications, including analysis, design, development, deployment, operation and evolution
- Quality evaluation and validation of the datasets used for building AI applications

Techniques for testing AI applications
- Test case design, test data generation, test prioritization, test reduction, etc.
- Metrics and measurements of the adequacy of testing AI applications
- Test oracles for checking the correctness of AI applications on test cases (a minimal illustration follows this list)
- Tools and environments for automated and semi-automated testing of AI applications, covering various testing activities and the management of testing resources
- Specific concerns of software testing for particular types of AI technologies and AI applications

Applications of AI techniques to software testing
- Machine learning applications to software testing, such as test case generation, test effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.
- Constraint programming for test case generation and test suite reduction
- Constraint scheduling and optimization for test case prioritization and test execution scheduling
- Crowdsourcing and swarm intelligence in software testing
- Genetic algorithms, search-based techniques and heuristics for the optimization of testing

Data quality evaluation for AI applications
- Automatic data validation tools
- Quality assurance for unstructured training data
- Large-scale unstructured data quality certification
- Techniques for testing deep neural network learning, reinforcement learning and graph learning
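
As a purely illustrative aside for readers new to the test-oracle topic above: one common way to check an AI application without ground-truth labels is a metamorphic-style oracle that verifies prediction stability under a semantics-preserving change to the inputs. The minimal Python sketch below shows the idea; the dataset, model, noise scale and agreement threshold are assumptions chosen for illustration only and are not prescribed by the conference.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def metamorphic_oracle(model, X, noise_scale=1e-6, min_agreement=0.99):
    # Metamorphic relation: predictions should be (almost) unchanged when a
    # negligible amount of noise is added to the inputs. No expected outputs
    # are consulted, which side-steps the test oracle problem.
    rng = np.random.default_rng(0)
    y_base = model.predict(X)
    y_perturbed = model.predict(X + rng.normal(0.0, noise_scale, X.shape))
    agreement = float(np.mean(y_base == y_perturbed))
    return agreement >= min_agreement, agreement

if __name__ == "__main__":
    # A small classifier stands in for the AI application under test
    # (an assumption made for this example).
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    passed, agreement = metamorphic_oracle(model, X)
    print(f"metamorphic oracle passed: {passed} (agreement = {agreement:.3f})")

A disagreement rate above the chosen threshold flags a potential robustness defect even though no correct outputs were specified for the test cases.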

TYPES OF CONTRIBUTIONS
A. Regular Papers (8 Pages) And Short Papers (2 Pages)
Regular papers in this track describe original and significant work or
report on case studies and empirical research; short papers describe
late-breaking research results or work in progress with timely and
innovative ideas.


B. AI Testing in Practice Papers (8 Pages)
Papers in this track provide a forum for networking, exchanging ideas,
and presenting innovative or experimental practices, addressing software
engineering research that has a direct impact on the practice of
software testing for AI.


C. Tool Demo Papers (4 Pages)
The tool demo track provides a forum to present and demonstrate
innovative tools and/or new benchmarking datasets in the context of
software testing for AI.


FORMAT
All papers must be submitted electronically in PDF format using the
IEEE Computer Society Proceedings format (two columns, single-spaced,
10pt font). Papers must not have been accepted for publication elsewhere,
nor be under submission to another conference or journal. Each paper will be
reviewed by at least three members of the Program Committee, using a
single-blind reviewing procedure. At least one author of the accepted
paper must register for the conference and confirm that she/he will
present the paper in person. The submission site is AITest 2021 at
EasyChair: https://easychair.org/conferences/?conf=aitest2021


General Chairs
Hong Zhu, Oxford Brookes University, UK
Franz Wotawa, Graz University of Technology, Austria

Program Chairs
Junhua Ding, University of North Texas, USA
Oum-El-Kheir Aktouf, Université Grenoble Alpes, France

PC members
Rob Alexander – University of York, United Kingdom
Sebastien Bardin – CEA LIST, France
Christian Berger – University of Gothenburg, Sweden
Christof J. Budnik – Siemens Corporate Technology, United States
Yan Cai – State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China
Andrea Ceccarelli – University of Firenze, Italy
Jaganmohan Chandrasekaran – Virginia Tech Research Center, United States
Lin Chen – Nanjing University, China
Zhenbang Chen – National University of Defense Technology, Changsha, China
Zhenyu Chen – Nanjing University, China
Stanislav Chren – Masaryk University, Faculty of Informatics, Czechia
Emilia Cioroaica – Fraunhofer, Germany
Claudio De La Riva – Universidad de Oviedo, Spain
Anurag Dwarakanath – Accenture Technology Labs, India
Kerstin Eder – University of Bristol, United Kingdom
Chunrong Fang – Software Institute of Nanjing University, China
Fuyuki Ishikawa – National Institute of Informatics, Japan
Hyeran Jeon – University of California Merced, United States
Shunhui Ji – Hohai University, China
Bo Jiang – Beihang University, China
Mingyue Jiang – Zhejiang Sci-Tech University, China
Foutse Khomh – DGIGL, École Polytechnique de Montréal, Canada
Nadjib Lazaar – UM2-LIRMM, France
Yu Lei – University of Texas at Arlington, United States
J. Jenny Li – Kean University, United States
Francesca Lonetti – CNR-ISTI, Italy
Dusica Marijan – Simula, Norway
Kevin Moran – College of William & Mary, United States
Ernest Pobee – City University of Hong Kong, Hong Kong
Andrea Polini – University of Camerino, Italy
Ju Qian – Nanjing University of Aeronautics and Astronautics, China
Guodong Rong – Meta Platforms, United States
Marc Roper – University of Strathclyde, United Kingdom
Chang-Ai Sun – University of Science and Technology Beijing, China
Sahar Tahvili – Ericsson, Sweden
Tatsuhiro Tsuchiya – Osaka University, Japan
Javier Tuya – Universidad de Oviedo, Spain
Mark Utting – The University of Queensland, Australia
Neil Walkinshaw – The University of Sheffield, United Kingdom
Ziyuan Wang – Nanjing University of Posts and Telecommunications, China
Zhi Quan Zhou – University of Wollongong, Australia
Tao Zhang – Northwest Polytechnical University, China

CISOSE General Chairs
Jerry Gao, San Jose State University, USA
Paul Townend, Umeå University, Sweden

CISOSE Steering Committee
Jerry Gao, San Jose State University, USA
Guido Wirtz, University of Bamberg, Germany
Huaimin Wang, NUDT, China
Jie Xu, University of Leeds, UK
Wei-Tek Tsai, Arizona State University, USA
Axel Kupper, TU Berlin, Germany
Hong Zhu, Oxford Brookes University, UK
Longbin Cao, University of Technology Sydney, Australia
Cristian Borcea, New Jersey Institute of Technology, USA
Sato Hiroyuki, University of Tokyo, Japan
