Workshop on Testing of Configurable and Multi-variant Systems


The workshop will be held in conjunction with ICST 2020 in Porto, Portugal, Mon 23 - Fri 27 March 2020

Submission link: https://easychair.org/conferences/?conf=tocams2020


ABSTRACT REGISTRATION DEADLINE: DECEMBER 20, 2019

SUBMISSION DEADLINE: JANUARY 3, 2020

NOTIFICATION: JANUARY 20, 2020

CAMERA-READY PAPERS DUE: FEBRUARY 3, 2020


ToCaMS 2020, the first ICST Workshop on Testing of Configurable and Multi-variant Systems, will focus on methods and tools for the automated generation and execution of tests for software-based systems that are highly configurable and customizable. As more and more software-based products and services become available in many different variants, new challenges arise for software quality assurance processes. In this workshop, both foundational and practical testing problems will be discussed, and possible solutions from academic and industrial perspectives will be presented.

Submission Guidelines

All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:

  • Research papers describing original research, new results, methods, and tools in testing of variable and configurable systems. A page limit of 12 pages applies to research papers.

  • Experience papers describing case studies, applications, experiences, and best practices in testing of variable and configurable systems. Experience papers should be 4-8 pages long.

All accepted papers will be published in the IEEE ICSTW proceedings.

List of Topics

Due to increasing market diversification and customer demand, more and more software-based products and services are customizable or are designed in the form of many different variants. This brings about new challenges for software quality assurance processes: How should variability be modelled to ensure that all features are tested? Is it better to test selected variants on a concrete level, or can the generic software and baseline be tested abstractly? Can knowledge-based AI techniques be used to identify and prioritize test cases? How can the quality of a generic test suite be assessed? What are appropriate coverage criteria for configurable modules? If it is impossible to test all possible variants, which products and test cases should be selected for test execution? Can security-testing methods be lifted to an abstract level?
In this workshop, these and related questions will be discussed from both a practical and a foundational viewpoint. The workshop should be of interest to researchers from the software testing community, as well as to testing experts from industry who want to learn about the latest methods and tools in the field. Besides new theoretical results, we also welcome problem statements, case studies, experience reports, tool presentations, and survey papers.
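One common answer to the test-selection question above, when testing all variants is infeasible, is combinatorial (e.g. pairwise) coverage of configuration options. As a minimal illustrative sketch (the option names and values are invented for this example, not drawn from any particular system), the following Python snippet checks whether a small set of configurations covers every pair of option values:

```python
from itertools import combinations, product

# Hypothetical configuration options of a variable system (illustrative only).
options = {
    "os": ["linux", "windows"],
    "db": ["sqlite", "postgres"],
    "tls": ["on", "off"],
}

def uncovered_pairs(configs, options):
    """Return the value pairs (across two different options) not covered by configs."""
    names = list(options)
    required = set()
    for a, b in combinations(names, 2):
        for va, vb in product(options[a], options[b]):
            required.add(((a, va), (b, vb)))
    covered = set()
    for cfg in configs:
        for a, b in combinations(names, 2):
            covered.add(((a, cfg[a]), (b, cfg[b])))
    return required - covered

# Exhaustive testing would need 2*2*2 = 8 configurations;
# these four already achieve full pairwise coverage.
configs = [
    {"os": "linux",   "db": "sqlite",   "tls": "on"},
    {"os": "linux",   "db": "postgres", "tls": "off"},
    {"os": "windows", "db": "sqlite",   "tls": "off"},
    {"os": "windows", "db": "postgres", "tls": "on"},
]
print(len(uncovered_pairs(configs, options)))  # → 0: every pair of option values is covered
```

The gap between 8 exhaustive and 4 pairwise configurations grows rapidly with the number of options, which is precisely why test-selection and coverage criteria for configurable systems are open research topics at this workshop.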

Following is a non-exhaustive list of topics to be discussed at the workshop:

  • Test modelling,

  • test generation,

  • test prioritization,

  • test selection,

  • test execution,

  • test evaluation, and

  • test assessment

for variable and configurable systems.

Program Committee

Ina Schaefer (Technische Universität Braunschweig)

Jonathan Bowen (London South Bank University)

Ana Rosa Cavalli (Institut Mines-Telecom/Telecom SudParis)

Kim Guldstrand Larsen (Aalborg University)

Alexandre Petrenko (CRIM)

Malte Lochau (TU Darmstadt)

Timo Kehrer (Humboldt-Universität zu Berlin)

Jan Tretmans (TNO - Embedded Systems Innovation)

Jens Grabowski (Georg-August-University of Goettingen)

Organizing Committee

Jeremy Bradbury

Peter Kruse

Mehrdad Saadatmand

Holger Schlingloff

All questions about submissions should be emailed to holger[dot]schlingloff[at]fokus[dot]fraunhofer[dot]de