
ATSE 2014 : 5th International Workshop Automating Test Case Design, Selection and Evaluation


Link: http://fedcsis.org/atse
 
When Sep 7, 2014 - Sep 10, 2014
Where Warsaw, Poland
Submission Deadline Apr 23, 2014
Notification Due May 19, 2014
Final Version Due Jun 17, 2014
Categories    computer science   testing
 

Call For Papers

FINAL CALL FOR PAPERS
=======================================

5th International Workshop Automating Test Case Design, Selection and Evaluation (ATSE'14)

Warsaw, Poland, September 7-10, 2014

WWW: http://fedcsis.org/atse
E-mail: atse2014@fedcsis.org

We would like to cordially invite you to consider contributing a paper to ATSE 2014 - held as a part of the Federated Conference on Computer Science and Information Systems (FedCSIS 2014).

FedCSIS is an annual international multi-conference organized by the Polish Information Processing Society (PTI) in technical cooperation with the IEEE Region 8, IEEE Computer Society, IEEE Poland Section Computer Society Chapter, IEEE Poland (Gdansk) Section Computer Society Chapter, ACM Special Interest Group on Applied Computing, ACM Lodz Chapter, International Federation for Information Processing, European Alliance for Innovation, Informatics Europe, IEEE-CIS Poland Section Chapter, Asociación de Técnicos de Informática, Committee of Computer Science of the Polish Academy of Sciences, Polish Society for Business Informatics, Polish Chamber of Information Technology and Telecommunications, Polish Chamber of Commerce for High Technology and Eastern Cluster ICT Poland, Mazovia Cluster ICT.

Trends such as globalisation, standardisation and shorter lifecycles place great demands on the flexibility of the software industry. In order to compete and cooperate on an international scale, a constantly decreasing time to market and an increasing level of quality are essential. Software and systems testing is currently the most important and most widely used quality assurance technique applied in industry. However, the complexity of software systems, and hence of their development, is increasing. Systems get bigger, connect large numbers of components that interact in many different ways on the Future Internet, and have constantly changing requirements of different types (functionality, dependability, real-time, etc.). Consequently, the development of cost-effective and high-quality systems raises new challenges that cannot be met with traditional testing approaches alone. New techniques for the systematisation and automation of testing are required.
Even though many test automation tools are currently available to aid test planning and control as well as test case execution and monitoring, all these tools share a similarly passive philosophy towards test case design, selection of test data and test evaluation. They leave these crucial, time-consuming and demanding activities to the human tester. This is not without reason: test case design and test evaluation are difficult to automate with the techniques available in current industrial practice. The domain of possible inputs (potential test cases), even for a trivial program, is typically too large to be explored exhaustively. Consequently, one of the major challenges in test case design is selecting test cases that are effective at finding flaws without requiring an excessive number of tests to be carried out. This is the problem that this workshop aims to attack.
This workshop will provide researchers and practitioners with a forum for exchanging ideas, experiences, understandings of the problems, visions for the future, and promising solutions to the problems of automated test case generation, selection and evaluation. The workshop will also provide a platform for researchers and developers of testing tools to work together to identify the problems in the theory and practice of software test automation, and to set an agenda and lay the foundation for future development.

TOPICS
=======================================

Topics include (but are not limited to):
* Techniques and tools for automating test case design:
- model-based,
- combinatorial-based,
- optimization-based,
- etc.
* Evaluation of testing techniques and tools on real systems, not only toy problems.
* Benchmarks for evaluating software testing techniques.
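To illustrate the kind of automation the first topic targets, the sketch below generates a small pairwise (2-way combinatorial) test suite with a naive greedy heuristic. It is purely illustrative: the parameter names and the greedy strategy are the author's own assumptions, not techniques prescribed by this call, and production tools use far more sophisticated covering-array algorithms.

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily build a small test suite covering every pair of parameter values.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of dicts, one per test case.
    Illustrative only: a naive greedy heuristic, not an optimal covering array.
    """
    names = sorted(params)
    # Every value pair that must appear in some test, keyed by parameter pair.
    uncovered = {
        (a, b): {(va, vb) for va in params[a] for vb in params[b]}
        for a, b in combinations(names, 2)
    }
    suite = []
    while any(uncovered.values()):
        best, best_gain = None, -1
        # Scan all candidate test cases; keep the one covering most new pairs.
        for values in product(*(params[n] for n in names)):
            case = dict(zip(names, values))
            gain = sum(
                (case[a], case[b]) in uncovered[(a, b)]
                for a, b in combinations(names, 2)
            )
            if gain > best_gain:
                best, best_gain = case, gain
        for a, b in combinations(names, 2):
            uncovered[(a, b)].discard((best[a], best[b]))
        suite.append(best)
    return suite

# Hypothetical configuration space: 2 * 3 * 2 = 12 exhaustive tests.
params = {"os": ["linux", "windows"],
          "browser": ["ff", "chrome", "edge"],
          "net": ["wifi", "lan"]}
suite = pairwise_tests(params)
print(len(suite), "pairwise tests instead of", 2 * 3 * 2, "exhaustive tests")
```

The exhaustive-search inner loop makes this only suitable for toy configuration spaces, but it demonstrates the core trade-off the workshop addresses: covering all interactions of a given strength with far fewer tests than the full Cartesian product.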

TYPE OF SUBMISSIONS
=======================================

We expect the following types of submissions:
* Research in progress, including research results at an early stage.
* Experience reports on the use of testing techniques and tools.
- Positive experiences should present techniques and tools that work and the situations in which they work.
- Negative experiences should be used to highlight new research challenges.
* Surveys, case studies and comparative studies that investigate pros, cons and complementarities of existing tools.
* Vision papers stating where research in the field should be heading.
* Tool and technique demonstrations.

PAPER SUBMISSION AND PUBLICATION
=======================================

Papers should be submitted by April 23, 2014 (strict deadline). Preprints will be published on a USB memory stick provided to the FedCSIS participants. Only papers presented during the conference will be submitted to the IEEE for inclusion in the Xplore Digital Library. Furthermore, the proceedings, published in a volume with ISBN and ISSN numbers, will be posted on the conference website. Moreover, most events' organizers arrange publication in quality journals, edited volumes, etc., and may invite authors of selected papers to submit extended and revised versions for post-conference publication (information can be found at the websites of the individual events).

IMPORTANT DATES
=======================================

- Paper submission: April 23, 2014 (strict deadline)
- Position paper submission: May 12, 2014
- Acceptance decision: May 19, 2014
- Final version of paper submission: June 17, 2014
- Final deadline for discounted fee: July 31, 2014
- Conference dates: September 7-10, 2014

EVENT CHAIRS
=======================================

- Eldh, Sigrid, Ericsson & Karlstad University, Sweden
- Prasetya, Wishnu, University of Utrecht, Netherlands
- Vos, Tanja, Universidad Politecnica de Valencia, Spain

PROGRAM COMMITTEE (confirmed so far)
=======================================

- Afzal, Wasif, Mälardalens Högskola
- Aho, Pekka, VTT
- Bagnato, Alessandra, Softeam, France
- Condori, Nelly, Universidad Politecnica de Valencia, Spain
- Datar, Advaita
- Escalona, Maria Jose, Universidad de Sevilla, Spain
- Harman, Mark
- Jia, Yue
- Marchetto, Alessandro, Centro Ricerche Fiat - CRF, Italy
- Marin, Beatriz, Universidad Diego Portales, Chile
- Memon, Atif, University of Maryland, United States
- Noack, Thomas
- Polo, Macario, Universidad de Castilla-La Mancha, Spain
- Roper, Marc, University of Strathclyde, United Kingdom
- Shehory, Onn, IBM, Israel
- Sundmark, Daniel, Swedish Institute of Computer Science
- Tonella, Paolo, Fondazione Bruno Kessler, Italy
- Tuya, Javier, Universidad de Oviedo, Spain
