Software testing is an important, yet often neglected, part of the software development lifecycle.

One cause of the limited attention paid to software testing in industry is the limited attention it receives in higher education curricula, which tend to focus on more glamorous topics such as software development and requirements engineering. One possible solution is to provide teachers with tools that students can use independently to gain a good understanding of software testing, while classroom time remains available for other topics.

Model-Driven Engineering (MDE) provides a nice sandbox for building such a tool. The MERODE approach enables novice students to develop low-code applications by merely modelling the domain of a software system: from the created domain models, the MERODE code generator automatically produces a working application. However, even assuming the code generator produces correct code from the model, there is no guarantee that the model correctly reflects the requirements of the software system. Hence, even in low-code approaches, testing is required to assess the validity of the models and the ensuing generated software.
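To illustrate the low-code idea in miniature (this is not MERODE's actual generator, whose templates and target platform are not described here), a sketch of turning a domain model described as data into source code might look like:

```python
# Hypothetical sketch of model-to-code generation: a domain model described
# as plain data, turned into Python class source. (MERODE's real generator
# produces full applications, not single classes.)

domain_model = {
    "Order": {"attributes": ["order_id", "customer", "total"]},
}

def generate_class(name, spec):
    """Render one entity type of the domain model as a Python class."""
    params = ", ".join(spec["attributes"])
    lines = [f"class {name}:", f"    def __init__(self, {params}):"]
    lines += [f"        self.{a} = {a}" for a in spec["attributes"]]
    return "\n".join(lines)

source = generate_class("Order", domain_model["Order"])
print(source)
```

The point of the sketch is that the generated code is only as good as the model: if the model omits an attribute the requirements demand, the generated class will be correct code for the wrong system.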

A newly introduced tool, named TESt CoverAge Visualization (TesCaV), provides support for testing the models developed with the MERODE approach. TesCaV is a model-based testing tool, i.e. a tool that automatically generates test cases from a given software model. The test cases TesCaV generates are based on well-established coverage criteria from research on model-based testing. However, in contrast to most model-based testing tools, TesCaV does not automatically run the generated test cases; it only generates them. These generated test cases are then used to give the user feedback on how much of the model is covered by the tests the user has executed manually against the generated code. This hands-on approach helps users gain an understanding of how to do software testing.
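TesCaV's concrete coverage criteria are not detailed above, but the general idea behind model-based test generation can be sketched with a hypothetical object lifecycle modelled as a finite state machine, using all-transitions coverage as the criterion:

```python
# Sketch of model-based test generation: derive test cases from a state
# machine and report which transitions a user's manual tests covered.
# (Hypothetical Order lifecycle; TesCaV works on MERODE models and
# supports richer criteria than the single one shown here.)

from collections import deque

# (state, event) -> next state
transitions = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("created", "cancel"): "cancelled",
}

def generate_test_cases(transitions, start="created"):
    """One event sequence per transition (all-transitions coverage)."""
    # Shortest event path to each reachable state, found by BFS.
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, event), dst in transitions.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [event]
                queue.append(dst)
    # Each test case reaches a source state, then fires the event.
    return [paths[src] + [event] for (src, event) in transitions]

def coverage(transitions, executed, start="created"):
    """Fraction of model transitions exercised by the executed sequences."""
    covered = set()
    for seq in executed:
        state = start
        for event in seq:
            covered.add((state, event))
            state = transitions[(state, event)]
    return len(covered) / len(transitions)

print(generate_test_cases(transitions))   # [['pay'], ['pay', 'ship'], ['cancel']]
print(coverage(transitions, [["pay", "ship"]]))  # 2 of 3 transitions covered
```

In this sketch the generated cases are not executed automatically; as with TesCaV, they serve as a yardstick against which the user's own manual test runs are measured.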

Prior research has already shown that TesCaV has high user acceptance, meaning that people are likely to use it. Recent research from KU Leuven has also shown that using TesCaV increases the coverage achieved by users' test cases, even after TesCaV's support is no longer provided within the MERODE approach. Moreover, users' prior knowledge of software testing and software development has been shown to have little impact on this improvement in test coverage.

Ongoing research is investigating the most adequate means of giving users feedback about which test cases have, and which have not yet, been covered by their manual testing.

Read more in the following paper: Felix Cammaerts, Charlotte Verbruggen and Monique Snoeck, Investigating the effectiveness of model-based testing on testing skill acquisition, Proceedings of the PoEM 2022 Conference, London, UK.

