Over the last decade, the popularity of mobile computing has grown steadily, reaching impressive numbers: billions of smartphone users and millions of apps currently available in existing app markets.

Like any other software product, mobile apps need to be tested adequately. In such a competitive market, a reliable app has a much higher chance of being well received by users and thus of being profitable. Testing mobile apps, however, presents some specific issues. Since most of the interaction with an app takes place through its Graphical User Interface (GUI), extensive GUI-level testing is fundamental, i.e., exercising the Application Under Test (AUT) by sending input events to its GUI. Moreover, the fragmentation of mobile systems and devices makes it necessary to repeat the same tests on a large number of different devices and execution environments.

GUI testing of mobile apps can be carried out either manually or with automated approaches. In the manual approach, test cases are first designed by hand, then implemented (possibly as scripts), and finally executed. In contrast, automated GUI testing can be performed by means of Automated Input Generation (AIG) tools, which aim at stressing the GUI of an app by automatically generating sequences of input events. A variety of AIG tools have been developed by both the academic and industrial communities to ease verification efforts and achieve different degrees of automation in app testing, especially at the GUI level.

Exploratory Testing (ET) is a type of manual testing that is still widely used and appreciated in the software industry. ET is a creative, experience-based approach in which test design, execution, and learning are parallel activities, and the results of executed tests are immediately used to design further tests. Recently, on the basis of an industrial field study, it was concluded that exploratory testing can be effective even when performed by testers with limited experience.

ET can be supported by Capture and Replay (C&R) tools, which automatically generate test scripts from sequences of user interactions with the AUT, thus “capturing” (recording) real usage scenarios without requiring advanced testing or programming skills. These recorded scripts can then be replayed multiple times on different devices to reduce the overall testing effort. Several techniques enabling the capture and replay of such input sequences for Android apps have been proposed by the research community, as well as by industrial practitioners.
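To give an idea of what a captured script can look like, here is a hedged sketch in the style of the tests produced by Android's Espresso Test Recorder. The activity name, view identifiers, and input values are made up for illustration; an actual recorded script would reflect the specific AUT being exercised.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

// Sketch of a recorded usage scenario replayed as an Espresso test.
// MainActivity and the R.id.* identifiers are hypothetical.
@RunWith(AndroidJUnit4.class)
public class RecordedScenarioTest {

    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void addItemScenario() {
        // Recorded steps: type a value, confirm, and check the result is shown.
        onView(withId(R.id.item_name)).perform(typeText("Milk"), closeSoftKeyboard());
        onView(withId(R.id.add_button)).perform(click());
        onView(withText("Milk")).check(matches(isDisplayed()));
    }
}
```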

Both AIG-based and C&R-based testing represent practical options for app GUI testing, since they do not require specific testing skills and can be applied even in time-constrained testing processes. From a practical perspective, however, a Software Project Manager may struggle to decide which of these GUI testing strategies to adopt for a specific app under development. More experimental data are needed to better understand the differences among the available techniques.

To help project managers make a more informed decision when choosing an approach for testing mobile apps, a recent study compared different ways of executing Exploratory Testing of mobile apps against several AIG-based techniques, in terms of the effectiveness of the generated test cases.

The study involved twenty Computer Engineering Master's students of an Italian university, enrolled in the Advanced Software Engineering course, who were asked to perform Exploratory Testing with a C&R tool on four publicly available Android apps. The same apps were also tested automatically by three AIG tools: Android Ripper, Sapienz, and the Robo tool freely offered by Google.

The authors of the study performed both quantitative comparisons of the achieved code/branch coverage and detailed qualitative comparisons, focusing on portions of code that had not been covered by the tools, or had been covered only by a few students. Results showed that, on average, the students’ tests were more effective, in terms of code coverage, than those of the AIG tools, but not in a decisive way. This motivated the authors to conduct a further investigation with the students, giving them more time and full access to the apps’ source code to improve the coverage of their C&R tests. The results of this second experiment showed remarkable improvements in the effectiveness of the students’ tests, which consistently outperformed the considered AIG tools.

This analysis allowed the authors to identify potential strengths and limitations of both approaches.

Read more about this study in the paper:

Di Martino, S., Fasolino, A. R., Starace, L. L. L., Tramontana, P. Comparing the effectiveness of capture and replay against automatic input generation for Android graphical user interface testing. Softw. Test. Verif. Reliab. 2021; 31:e1754. https://doi.org/10.1002/stvr.1754


STAY TUNED FOR THE LATEST UPDATES ON THE PROJECT.
