Testing software is essential, yet it is often done poorly, resulting in defective and unreliable software applications. This has a noticeable impact on society, the economy and innovation, since software underpins the digital solutions and tools that address climate change, the green economy, demography, digitalisation, artificial intelligence, and the recent COVID-19 pandemic. The reason that testing is not done well stems from a skills mismatch between what industry needs, the learning needs of students, and the way testing is currently taught at Higher Education (HE) and Vocational Education and Training (VET) institutes. Testing demands multiple cognitive resources from students, which makes it a challenge to learn and to teach [1]. The goal of this project is to identify and design seamless teaching materials for testing that are aligned with industry needs and that also take into account the learning needs and characteristics of students. This can only be achieved by a project in which HEs, VETs and companies cooperate to share knowledge and to develop new educational approaches that take the broader socio-economic environment into account, including research into sensemaking and cognitive models of software testing.

WHY? Testing is important but not done well

People and society depend more and more on software quality. Software increasingly shapes our daily lives, as our social and business activities are digitised, and software failures have an ever greater impact. The Software Fail Watch report [3] identified 548 recorded software failures impacting 4.4 billion people and $1.1 trillion in assets, and this only scratches the surface: there are far more software bugs in the world than we will likely ever know about. Another study [2] found that, for the year 2020 in the US alone, the total Cost of Poor Software Quality (CPSQ) was $2.08 trillion. The cost of poor quality is clearly getting out of hand, and some voices already speak of a coming software apocalypse [4].

Testing is, at the moment, the most important and most widely used quality assurance technique in the software industry. The complexity of software, and hence of its development, is increasing. Modern systems grow larger and more complex as they connect large numbers of components that interact in many different ways and must satisfy constantly changing requirements of different kinds (functionality, dependability, security, etc.). Data processing that impacts all aspects of our lives is increasingly distributed over clouds and devices, which raises new concerns such as availability, security, and privacy. Consequently, testing too is becoming ever more important and more complex.

To keep up with rising quality requirements, companies need to systematise and automate testing throughout the software and system life cycle.
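As a minimal illustration of what such automated testing looks like in practice, the sketch below shows a small unit test written in Python with the standard unittest module; the discount function and its expected behaviour are invented for this example and are not taken from any of the cited studies. Tests of this kind can be run automatically on every change as part of a continuous-integration pipeline.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Illustrative production code: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Automated checks that a CI pipeline can execute on every commit."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```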

Despite the pressing need for good testing practices, there is a lack of testing culture and awareness amongst practitioners in companies [5] and students in academia [6], [7]. In many cases, programmers do not test even though they understand the value of testing in theory [6], [8]. Practitioners put off writing effective test cases because they are under pressure to deliver the product as quickly as possible, which ultimately results in low-quality software. Students do not test because testing has not been sufficiently integrated into computer science curricula, and hence they can still get away with skipping it [9].


Furthermore, knowledge transfer between projects rarely happens, so challenges related to test case design, scripting, test execution, reporting, management, automation, or even a lack of general testing knowledge must be faced anew every time people start a new project, as evidenced by the industry testing challenges presented in [5]. In addition, since testing involves a demanding cognitive process and the quality of the designed test cases depends on people's domain knowledge and testing expertise [10], companies also need strategies to transfer this knowledge within their teams.

WHAT is needed? The problem should be tackled at the root: Education

We see growing interest in the topic of testing in education, for instance in the systematic literature reviews specifically focused on testing education [6], [7], and in workshops dedicated entirely to testing in education (https://testedworkshop.github.io/).

Academia and industry constantly stress the importance of software-testing techniques for improving software quality and reducing development and maintenance costs. Yet in many cases, novice software engineers report that they do not know whether they are well prepared for those tasks.

In academia, several efforts have been made to include testing in curricula and to teach software testing techniques properly, so that students are better prepared for industry. Nevertheless, teaching, and learning, testing techniques is not an easy task. Software testing is a complex intellectual activity based on analysis, reasoning, decision making, abstraction and collaboration. It demands multiple cognitive resources from students, which makes it a challenge to teach [1].

An extensive mapping review of 273 scientific works published since 2000 on the inclusion of testing in academia appeared in 2019 [6]. This work states that including testing in curricula is not straightforward, and it presents nine topics that have been researched to improve testing in academia: the inclusion of testing in the curricula as a separate topic or jointly with programming courses; the use of teaching methods that integrate testing when teaching programming; the creation of course material about testing; the creation of programming assignments that include testing practices; the definition of declarative programming processes for novices that include testing; the use of supporting tools for testing; quality assessment of students' code by testing it; students' attitude towards software testing; and assessment of students' knowledge about testing.
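To give a concrete flavour of one of these topics, a programming assignment that includes testing practices, the hypothetical sketch below shows the kind of exercise an instructor might hand out: students receive a short specification and are asked to write test cases covering each partition, its boundaries and invalid input, before or alongside their implementation. Python is used here only for illustration; the grading function and its thresholds are invented and not taken from the cited mapping study.

```python
import unittest


def letter_grade(score: int) -> str:
    """Implementation the student writes against the given specification.

    Illustrative specification: 0-59 -> 'F', 60-69 -> 'D', 70-79 -> 'C',
    80-89 -> 'B', 90-100 -> 'A'; anything outside 0-100 is invalid.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"


class LetterGradeTest(unittest.TestCase):
    """Test cases the student is asked to design from the specification."""

    def test_partitions(self):
        # One representative value per equivalence class.
        self.assertEqual(letter_grade(95), "A")
        self.assertEqual(letter_grade(75), "C")
        self.assertEqual(letter_grade(30), "F")

    def test_boundaries(self):
        # Values on and just below the grade boundaries.
        self.assertEqual(letter_grade(90), "A")
        self.assertEqual(letter_grade(89), "B")
        self.assertEqual(letter_grade(60), "D")
        self.assertEqual(letter_grade(59), "F")

    def test_invalid_scores_are_rejected(self):
        for bad in (-1, 101):
            with self.assertRaises(ValueError):
                letter_grade(bad)


if __name__ == "__main__":
    unittest.main()
```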

Even though the inclusion of testing in academia has evident benefits, such as improved student programming performance, timely feedback for students, objective assessment, and a better understanding of the programming process, there are still drawbacks. Existing initiatives treat testing as an isolated topic, do not take into account the training needs of industry or of students, and are positioned too late in the curriculum. This leads to a disconnection and a gap between theory and practice, less interest from students and hence a negative attitude towards testing, students who are not confident of their testing skills because their testing performance is not measured, and, consequently, an absence of testing activities by students and by the practitioners they become. Moreover, course staff feel that including separate testing topics considerably increases their workload.

Read more in the following paper: Beatriz Marín, Tanja E. J. Vos, Ana C. R. Paiva, Anna Rita Fasolino, Monique Snoeck: ENACTEST – European Innovation Alliance for Testing Education. RCIS Workshops 2022. https://ceur-ws.org/Vol-3144/RP-paper5.pdf


Stay tuned for the latest updates on the project.

References

[1] E. Enoiu, G. Tukseferi, R. Feldt, Towards a model of testers’ cognitive processes: Software testing as a problem solving approach, in: 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), IEEE, 2020, pp. 272–279.
[2] The cost of poor software quality in the US: A 2020 report, 2020. URL: https://www.it-cisq.org/pdf/CPSQ-2020-report.pdf.
[3] The software fail watch, 2018. URL: https://www.tricentis.com/blog/software-fail-watch-q2-2018/.
[4] J. Somers, The coming software apocalypse: A small group of programmers wants to change how we code—before catastrophe strikes, The Atlantic, 2017.
[5] V. Garousi, M. Felderer, M. Kuhrmann, K. Herkiloğlu, S. Eldh, Exploring the industry’s challenges in software testing: An empirical study, Journal of Software: Evolution and Process 32 (2020) e2251.
[6] L. P. Scatalon, J. C. Carver, R. E. Garcia, E. F. Barbosa, Software testing in introductory programming courses: A systematic mapping study, in: Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 2019, pp. 421–427.
[7] V. Garousi, A. Rainer, P. Lauvås Jr, A. Arcuri, Software-testing education: A systematic literature mapping, Journal of Systems and Software 165 (2020) 110570.
[8] A. Afzal, C. Le Goues, M. Hilton, C. S. Timperley, A study on challenges of testing robotic systems, in: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), IEEE, 2020, pp. 96–107.
[9] T. E. J. Vos, Zoeken naar fouten: op weg naar een nieuwe manier om software te testen, 2017.
[10] K. Juhnke, M. Tichy, F. Houdek, Challenges concerning test case specifications in automotive software testing: assessment of frequency and criticality, Software Quality Journal 29 (2021) 39–100.
