SBST 2016 : 9th International Workshop on Search-Based Software Testing
Link: http://cse.sc.edu/~ggay/sbst2016

Call For Papers:
9th International Workshop on Search-Based Software Testing (SBST 2016)
May 16-17, 2016 – Austin, TX, USA (co-located with ICSE 2016)

http://searchbasedsoftwaretesting.org
https://cse.sc.edu/~ggay/sbst2016/
https://twitter.com/sbstworkshop

About the Workshop:

Search-Based Software Testing (SBST) is the application of optimizing search techniques (for example, Genetic Algorithms) to solve problems in software testing. SBST is used to generate test data, prioritize test cases, minimize test suites, optimize software test oracles, reduce human oracle cost, verify software models, test service-oriented architectures, construct test suites for interaction testing, and validate real-time properties, among many other applications.

The objectives of this workshop are to bring together researchers and industrial practitioners from both SBST and the wider software engineering community to collaborate, share experience, provide directions for future research, and encourage the use of search techniques in novel aspects of software testing, in combination with other aspects of the software engineering lifecycle.

Submission Instructions:

Researchers and practitioners are invited to submit:

- Full papers (maximum of 7 pages) on original empirical or theoretical research in SBST, practical experience of using SBST, or SBST tools.
- Short papers (maximum of 4 pages) that describe novel techniques, ideas, and positions that have yet to be fully developed, or that discuss the importance of a recently published SBST result by another author in setting a direction for the SBST community and/or the potential applicability (or not) of that result in an industrial context.
- Position papers (maximum of 2 pages) that analyze trends in SBST and raise issues of importance. Position papers are intended to seed discussion and debate at the workshop, and will be reviewed with respect to relevance and their ability to spark discussion.
- Tool competition reports (maximum of 4 pages). We invite researchers, students, and tool developers to design innovative new approaches to software test generation. More information is available on our website.

In all cases, papers should address a problem in the software testing/verification/validation domain, or combine elements of those domains with other concerns in the software engineering lifecycle. Examples of problems in this domain include (but are not limited to) generating test data, prioritizing test cases, constructing test oracles, minimizing test suites, verifying software models, testing service-oriented architectures, constructing test suites for interaction testing, and validating real-time properties.

The solution should apply a metaheuristic search strategy such as (but not limited to) random search, local search (e.g., hill climbing, simulated annealing, and tabu search), evolutionary algorithms (e.g., genetic algorithms, evolution strategies, and genetic programming), ant colony optimization, or particle swarm optimization.
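For readers new to the area, the following is a minimal, hypothetical sketch of the flavor of technique the workshop solicits: a hill-climbing local search with random restarts that minimizes a branch-distance fitness function to generate an input covering a hard-to-reach branch. The function under test, the fitness definition, and all identifiers are illustrative assumptions, not prescribed by the workshop.

    import random

    def function_under_test(x):
        # Hypothetical code under test with a hard-to-reach branch.
        if x * x - 40 * x + 351 == 0:  # true only for x == 13 or x == 27
            return "target branch"
        return "other branch"

    def branch_distance(x):
        # Fitness: how far input x is from satisfying the branch predicate.
        # Zero means the target branch is covered; smaller is better.
        return abs(x * x - 40 * x + 351)

    def hill_climb(max_restarts=50, max_steps=500):
        # Local search with random restarts: step to a neighboring input
        # whenever it reduces the branch distance; restart at local optima.
        for _ in range(max_restarts):
            x = random.randint(-100, 100)
            for _ in range(max_steps):
                if branch_distance(x) == 0:
                    return x  # covering input found
                best = min((x - 1, x + 1), key=branch_distance)
                if branch_distance(best) >= branch_distance(x):
                    break  # local optimum: give up and restart
                x = best
        return None

    if __name__ == "__main__":
        solution = hill_climb()
        if solution is not None:
            print("Covering input:", solution, "->", function_under_test(solution))

The design choice worth noting is the fitness function: rather than treating branch coverage as a binary outcome, the branch distance gives the search a gradient toward the target, which is what distinguishes search-based test generation from pure random testing.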
Papers must follow the ICSE 2016 formatting guidelines and should be submitted electronically to: https://easychair.org/conferences/?conf=sbst2016

Important Dates:

- Submission deadline: January 22, 2016
- Competition report deadline: February 18, 2016
- Author notification: February 19, 2016
- Camera-ready version due: February 26, 2016

Events:

The 2016 workshop will feature several events of interest:

- Keynotes from Claire Le Goues (Carnegie Mellon University) and Tim Menzies (North Carolina State University).
- A panel where leading experts will discuss non-standard algorithms and fitness functions that may expand the efficacy and applicability of SBST.
- A tutorial by Gordon Fraser (University of Sheffield) on the EvoSuite test generation framework, covering how to use it, how to integrate it into other tools, and how to extend it.
- The 4th round of our tool competition, in which unit test generation tools will be benchmarked on coverage, fault detection, and efficiency.

Workshop Organizers:

- Gregory Gay, PC Co-Chair (University of South Carolina, USA)
- Justyna Petke, PC Co-Chair (University College London, UK)
- Tanja Vos, Competition Chair (Universidad Politecnica de Valencia, Spain)

Program Committee:

- Wasif Afzal (Malardalen University, Sweden)
- Rob Alexander (University of York, UK)
- Giuliano Antoniol (Ecole Polytechnique de Montréal, Canada)
- Andrea Arcuri (Scienta, Norway)
- Earl Barr (University College London, UK)
- Mariano Ceccato (Fondazione Bruno Kessler, Trento, Italy)
- Francisco Chicano (University of Málaga, Spain)
- Massimiliano Di Penta (University of Sannio, Italy)
- Robert Feldt (Blekinge Institute of Technology, Sweden)
- Erik Fredericks (Oakland University, USA)
- Juan Pablo Galeotti (University of Buenos Aires, Argentina)
- Mark Harman (University College London, UK)
- Gregory Kapfhammer (Allegheny College, USA)
- Zheng Li (Beijing University of Chemical Technology, China)
- Phil McMinn (University of Sheffield, UK)
- Changhai Nie (Nanjing University, China)
- Annibale Panichella (Delft University of Technology, Netherlands)
- Simon Poulding (Blekinge Institute of Technology, Sweden)
- Marc Roper (University of Strathclyde, UK)
- Federica Sarro (University College London, UK)
- David White (University of Glasgow, UK)

Steering Committee:

- Phil McMinn (University of Sheffield, UK), Chair
- Myra Cohen (University of Nebraska at Lincoln, USA)
- Andrea Arcuri (Scienta, Norway)
- John Clark (University of York, UK)
- Wasif Afzal (Malardalen University, Sweden)
- Simon Poulding (Blekinge Institute of Technology, Sweden)
- Tanja Vos (Universidad Politecnica de Valencia, Spain)
- Mark Harman (University College London, UK)
- Gregory Gay (University of South Carolina, USA)
- Giuliano Antoniol (Polytechnique Montreal, Canada)