
MET 2016: 1st International Workshop on Metamorphic Testing


When May 16, 2016 - May 16, 2016
Where Austin, Texas
Submission Deadline Jan 22, 2016
Notification Due Feb 19, 2016
Final Version Due Feb 26, 2016
Categories    software engineering   software testing   metamorphic testing

Call For Papers

*************** MET 2016 Call for Papers ******************

The 1st International Workshop on Metamorphic Testing (MET 2016) -

in conjunction with the 38th International Conference on Software Engineering (ICSE), Austin, TX, May 14-22, 2016


Paper submissions due: January 22, 2016
Notification to authors: February 19, 2016
Camera ready copies due: February 26, 2016
Workshop: May 16, 2016


Metamorphic testing (MT) is a testing technique that exploits relationships among the inputs and outputs of multiple executions of the program under test, known as metamorphic relations (MRs). MT has proven highly effective for testing programs that face the oracle problem, where the correctness of individual outputs is difficult to determine. Since the introduction of MT in 1998, interest in this testing methodology has grown immensely, with numerous applications in domains such as machine learning, bioinformatics, computer graphics, simulation, search engines, decision support, cloud computing, databases, and compilers.
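To make the idea concrete, here is a minimal sketch of a metamorphic test in Python. It assumes a hypothetical numerical program under test (stood in for here by `math.sin` so the sketch is runnable) and the well-known MR sin(x) = sin(π − x): even without knowing the correct value of sin(x), a source execution and a follow-up execution on the transformed input must agree.

```python
import math

def program_under_test(x):
    # Stand-in for a program facing the oracle problem; math.sin is
    # used only so this sketch is self-contained and runnable.
    return math.sin(x)

def metamorphic_test(x, tol=1e-9):
    """Check the MR sin(x) = sin(pi - x).

    The source execution runs on x; the follow-up execution runs on the
    transformed input pi - x. The MR says the two outputs must be equal
    (up to floating-point tolerance), regardless of what the correct
    value of sin(x) actually is.
    """
    source_output = program_under_test(x)
    followup_output = program_under_test(math.pi - x)
    return abs(source_output - followup_output) <= tol

# The MR must hold for any source input, so source test cases can be
# chosen freely (or generated randomly) with no expected output needed.
assert all(metamorphic_test(x) for x in [0.0, 0.5, 1.3, 2.7, -4.2])
```

A violation of the MR reveals a fault without ever consulting an oracle for individual outputs; the topics below (MR construction, source test case generation, MR prioritization) all concern variations on this pattern.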

The First International Workshop on Metamorphic Testing (MET 2016) will bring together researchers and practitioners from academia and industry to discuss research results and experiences in MT. The ultimate goal of MET is to provide a platform for the discussion of novel ideas, new perspectives, new applications, and the state of research related to or inspired by MT.


The topics of interest include, but are not limited to:

- Guidelines and techniques for the construction of MRs or MT test cases.
- Prioritization and minimization of MRs or MT test cases.
- Quality assessment mechanisms for MRs or MT test cases (e.g., metrics).
- Automated generation of likely MRs.
- Combination of MRs.
- Generation of source test cases.
- Formal methods involving MRs.
- Case studies and applications.
- Tools.
- Surveys.
- Empirical studies.
- Integration/comparison with other techniques.
- Novel applications, perspectives, or theories inspired by MT, which can be beyond conventional software testing topics.


Authors are invited to submit original, previously unpublished research papers. Papers should be written in English, strictly following the ICSE 2016 formatting and submission instructions.

The following types of submissions are accepted:

- Full research papers with a maximum length of 7 pages, including references and appendices.
- Short papers with a maximum length of 4 pages, including references and appendices.

Papers must be submitted in PDF format via the electronic submission system, which is available at [TBD]

Submitted papers will be evaluated by at least three members of an international program committee according to their rigor, significance, originality, technical quality, and exposition.

At least one author of each accepted paper must register for and participate in the workshop. Registration is subject to the terms, conditions, and procedures of the main ICSE conference, to be found at its website.

Accepted papers will be published in the ACM digital library.


Prof. T.Y. Chen, Swinburne University of Technology, Australia


Upulee Kanewala, Montana State University, USA
Laura L. Pullum, Oak Ridge National Laboratory, USA
Sergio Segura, University of Seville, Spain
Dave Towey, The University of Nottingham Ningbo China, China
Zhi Quan (George) Zhou, University of Wollongong, Australia

*PROGRAM COMMITTEE (To be completed)*

James Bieman, Colorado State University, USA
Giovanni Denaro, University of Milano Bicocca, Italy
Phyllis Frankl, Polytechnic Institute of New York University, USA
Arnaud Gotlieb, Simula Research Laboratory, Norway
Mark Harman, University College London, UK
Robert M. Hierons, Brunel University London, UK
Gail Kaiser, Columbia University, USA
F.-C. Kuo, Swinburne University of Technology, Australia
Mikael Lindvall, Fraunhofer Center for Experimental Software Engineering, USA
Huai Liu, RMIT University, Australia
Chris Murphy, University of Pennsylvania, USA
Alberto Núñez, Universidad Complutense de Madrid, Spain
T. H. Tse, The University of Hong Kong, Hong Kong
Xiaoyuan Xie, Wuhan University, China


*CONTACT*

Upulee Kanewala. Email:
