The tech industry and software development are areas in which progress is often measured in terms of speed and novelty, thriving sectors in which the competition to reach the market with a finished product can get fierce. Is it, then, simply a matter of getting there first?
Is survival of the fittest only for the fastest?
Test automation, continuous integration, pipelines… the need for speed has increased exponentially over the last decade. Applying Darwin’s theory, it seems quite simple: only the fittest (in this case the fastest) will survive. Traditional, manual testers are like a rhinoceros… in danger of extinction.
Since the introduction of test automation and, more recently, DevOps, manual testers have had to deal with many technologies that aim to automate activities across the software development lifecycle. Terms like continuous integration, release trains, test-driven development, Gherkin and build servers have become the new normal.
What does this mean for the position of the manual tester? Will you still have a job in 1, 3 or 5 years? If we reduce manual testing to executing pre-scripted cases, chances are that our job will be eaten by test automation.
Automation is not always “hallelujah”
However, in the complex IT landscape we live in, there are still plenty of cases where test automation is either not recommended (due to high cost & maintenance) or technically not possible. Examples of automation shortcomings are automation fake news, the automation paradox and automation blindness.
Automation fake news is about the benefits promised to customers and management by tool vendors and test automators: think of unattended testing during the night or weekend, or a fully integrated chain launched at the click of a button. It would not be the first project where a so-called unattended test simply stops after 5 minutes, waiting for a click on a popup message. Other classic pitfalls in this area are screen savers, or an automatic restart of the machine once Windows updates are installed overnight.
In addition, there is codeless/low-code automation, which seems to contradict the fact that automation has adopted many best practices and design patterns from the development world. Every automator should be familiar with the concept of function libraries to increase reusability across scripts and reduce maintenance effort. A well-known example of such a design pattern is the page object model, where there is a clean separation between test code and page-specific code such as locators and layout. It offers a single, central repository of the services and operations offered by a page, rather than having these services scattered throughout the tests.
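As a minimal sketch, assuming a Selenium-like driver API (find_element / send_keys / click), the page object model could look like this; a stub driver stands in for a real browser so the separation of concerns is visible on its own, and the locators are made up for illustration:

```python
# Minimal page object model sketch. A stub driver replaces a real browser.

class StubDriver:
    """Pretends to be a web driver; records the actions performed."""
    def __init__(self):
        self.actions = []

    def find_element(self, locator):
        driver = self
        class Element:
            def send_keys(self, text):
                driver.actions.append(f"type {text!r} into {locator}")
            def click(self):
                driver.actions.append(f"click {locator}")
        return Element()


class LoginPage:
    """Page object: locators and page operations live here, not in tests."""
    USERNAME = "#username"     # hypothetical CSS locators
    PASSWORD = "#password"
    SUBMIT = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Tests call this single service instead of repeating locators.
        self.driver.find_element(self.USERNAME).send_keys(user)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.SUBMIT).click()


driver = StubDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.actions)
```

When the login page changes, only `LoginPage` has to be updated; the tests themselves stay untouched.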
The automation paradox comes into play especially in an Agile context. The main idea of Agile is to provide fast feedback and release fast, yet it is a challenge for automators to keep up with the pace of the developers during a sprint. It is not even always possible to automate everything within the same sprint. Moreover, as the battery of automated cases keeps growing, it takes more and more time to execute them in each sprint. Does this not contradict the objective of Agile? It becomes a real bottleneck in projects where the last sprint is used as a “hardening sprint”, in which all the different parts are typically brought together and connected to support a form of integration/E2E test. This presumes that all automation has been completed and can support (parts of) those tests.
Automation blindness is about the tunnel vision of test automators who ignore questions like “Which data combinations are relevant to automate?” and “Do we really need to execute all data combinations?”. Especially with a data-driven approach, it is tempting to put all combinations in our Excel or CSV file, because each one is just another row of data. It can help to ask yourself whether you would execute the test for that row if you were running it manually. If the answer is no, leave that row out.
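To make the explosion concrete, here is an illustrative sketch: the full cartesian product of some (entirely made-up) test parameters grows fast, while a risk-based selection keeps only the rows a manual tester would actually bother to execute:

```python
# Full cartesian product of test parameters vs. a risk-based selection.
from itertools import product

browsers = ["chrome", "firefox", "edge"]
locales = ["en", "fr", "nl", "de"]
payment_methods = ["card", "transfer", "voucher"]

all_rows = list(product(browsers, locales, payment_methods))
print(len(all_rows))  # 3 * 4 * 3 = 36 combinations

# "Would I run this row manually?" encoded as a filter: every browser for
# the flagship locale, but only the default browser for the other locales.
relevant = [row for row in all_rows
            if row[1] == "en" or row[0] == "chrome"]
print(len(relevant))  # 18 rows survive the filter
```

The exact filter is a judgment call per project; the point is that the filter exists at all, rather than the data file silently absorbing every combination.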
Furthermore, automation reports are not always analysed in depth: “Why do we get a failure?” “Is this an application failure, or a failure introduced by our own automation code?” Following on from the remark above about design patterns and development best practices, we should treat test automation as a full development project. In theory, before running any automated script against our application, the framework itself should be subject to a manual or automated test.
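Testing the framework itself can be as simple as unit-testing its helpers. A hypothetical locator-building helper, with a few checks that catch framework bugs before they surface as false failures in application test reports:

```python
# A hypothetical framework helper, with its own unit checks.

def by_test_id(test_id: str) -> str:
    """Build a CSS selector from a data-testid attribute value."""
    if not test_id or any(c in test_id for c in "\"'"):
        raise ValueError(f"invalid test id: {test_id!r}")
    return f'[data-testid="{test_id}"]'

# Unit tests for the test code itself.
assert by_test_id("login-button") == '[data-testid="login-button"]'
try:
    by_test_id('bad"id')
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for a quote in the id")
print("framework self-checks passed")
```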
Given these shortcomings, test automation cannot be the one and only way of evaluating the quality of software. After all, test automators should stay loyal to the main principle of being critical of their own work. Dare to ask yourself whether you are automating the right things in the right way.
Survival of the fittest is not only for the fastest
Yet manual testing has evolved and needs to adapt to this changing world. For manual testers, this requires new and/or adapted skills, such as:
1. Transform into a no-sheep tester
Testers tend to act like bleating sheep when they do not have proper and complete requirements, when delivered builds are unstable, or when the test data has not been delivered as required.
In such cases, manual testers should turn this imperfection to their advantage by showing creativity and applying techniques like exploratory testing. Besides exploratory testing, you can also set up session-based testing to involve your (business) testers when less formality is required.
Sanity and smoke testing are the tools to validate the readiness and maturity of a build or environment. Instead of blindly trying to run through your complete battery of tests, just do a few basic checks or key scenarios first. If those do not pass, dare to stop and go back to the drawing board. And yes, this requires a lot of courage from testers, as they tend to cave in under pressure from stakeholders.
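A smoke gate can be sketched as a handful of key checks that are run first, stopping the whole run as soon as one fails instead of burning time on the full battery. The checks below are placeholders for real build/environment probes:

```python
# Smoke gate sketch: run key checks first, stop at the first failure.

def service_is_up():      # e.g. ping a health endpoint
    return True

def login_works():        # e.g. a single scripted login
    return True

def test_data_loaded():   # e.g. count rows in a reference table
    return False          # simulate a broken environment

SMOKE_CHECKS = [service_is_up, login_works, test_data_loaded]

def smoke_gate(checks):
    """Return the name of the first failing check, or None if all pass."""
    for check in checks:
        if not check():
            return check.__name__
    return None

failed = smoke_gate(SMOKE_CHECKS)
if failed:
    print(f"smoke gate failed at {failed}: stop and go back to the drawing board")
else:
    print("smoke gate passed: proceed with the full test battery")
```

The gate gives you a concrete, defensible reason to stop: not “I feel the build is bad”, but “check X failed”.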
If you do not have your test data, you can try to create it yourself (so-called selfish data generation). By doing this, you already test the system indirectly. Of course, this is not always possible, because systems/platforms (e.g. SAP) may require the availability of master data and parameters that we cannot create as testers.
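A minimal sketch of selfish data generation (all field names are hypothetical): generate synthetic records with a recognizable test marker, then feed them through the system’s own create path, which indirectly exercises that path before the real test even starts:

```python
# Selfish data generation: synthetic records with a recognizable marker.
import random
import string

def random_customer():
    """Generate a synthetic customer record; the marker eases later cleanup."""
    suffix = "".join(random.choices(string.ascii_lowercase, k=6))
    return {
        "name": f"test-customer-{suffix}",
        "email": f"test-{suffix}@example.org",
        "balance": round(random.uniform(0, 1000), 2),
    }

customers = [random_customer() for _ in range(3)]
for customer in customers:
    print(customer["name"], customer["email"])

# Note: master data (e.g. SAP parameters) usually cannot be generated this
# way and still has to be provided by the system owners.
```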
2. CRUD the crap
Just like the English slang expression ‘cut the crap’, testers should provide information to the business in a no-nonsense way. No bullshit, and above all, don’t overwhelm your stakeholders with tons of hard figures. Have you asked what they are actually interested in? In the end, stakeholders want to know whether you can Create data, retrieve and update it (Read & Update), and remove it (Delete) from the system.
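Those CRUD questions can be written down as one end-to-end check. In this sketch, an in-memory dict stands in for the real system so the flow is self-contained; the record store and its methods are invented for illustration:

```python
# One end-to-end CRUD check against a hypothetical record store.

class RecordStore:
    def __init__(self):
        self._records = {}
        self._next_id = 1

    def create(self, data):
        rid = self._next_id
        self._next_id += 1
        self._records[rid] = dict(data)
        return rid

    def read(self, rid):
        return self._records.get(rid)

    def update(self, rid, **changes):
        self._records[rid].update(changes)

    def delete(self, rid):
        del self._records[rid]

def crud_check(store):
    """Create, Read, Update, Delete one record: the answer stakeholders want."""
    rid = store.create({"name": "ticket", "status": "new"})
    assert store.read(rid)["status"] == "new"        # Read after Create
    store.update(rid, status="closed")
    assert store.read(rid)["status"] == "closed"     # Read after Update
    store.delete(rid)
    assert store.read(rid) is None                   # gone after Delete
    return "CRUD check passed"

print(crud_check(RecordStore()))
```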
This also means that we should evolve towards more emotional reporting, showing the status with colours and happiness-index curves that help stakeholders decide when good is good enough (the so-called “cut the crap” point).
Defining the “cut the crap” point depends on your context and will differ according to the risks you are willing to take and the required quality level. For a one-shot ticketing website, you can take more risks and spend less time on testing. That only holds, of course, if you are certain that all tickets will be sold anyway, no matter how bad the performance is or how much people complain about your website.
For a bank, however (pushing its customers towards online banking apps & platforms), it is a totally different story. Banks cannot afford crappy, buggy software. As a result, the cut-the-crap point will sit higher on the curve once the costs, risks and coverage of testing are taken into account.
3. Get a fika
Fika is the Swedish tradition of building fixed moments into the morning and afternoon to put your work aside and socialise with colleagues about anything but work. Manual testers should take part in such fikas, because you indirectly pick up a lot of crucial information that helps you read the mindset of your stakeholders, detect impediments and come up with improvements.
It is a running gag, but coffee machines and smoking areas are the most dangerous, yet most interesting, zones for testers. This is often where the real decisions are taken and opinions are voiced. Where people stay silent in the meeting room, once they are out, they vent their frustrations and gripes. Circling around as a tester can indirectly give you a wealth of insights and information, which you in turn can use to adapt the testing strategy and/or remove roadblocks from the project and testing road.
4. Think glocally – and no, this is not a typo :-)
Manual testers can play a crucial role in complex application landscapes & architectures. More than other profiles, they can assess the dependencies between systems, interfaces, etc., especially with regard to the preparation of test data for E2E tests and time travel.
Let us elaborate on time travel, where all the systems & interfaces are synchronised to the same point in time. Although this seems straightforward and obvious, the preparation of such a travel is very complex. You need a keen understanding of the holistic landscape in which the systems and interfaces run. Once you start the time travel, you can normally only move forward (regardless of the length of the intervals and jumps you make).
So, if we miss an action in one of the systems (e.g. creating a policy for a customer), we risk jeopardising a whole batch of test cases. Whereas with single-system testing a restore of databases/interfaces may still be feasible, reverting your time-travel environment is out of the question unless you go for a full restore of every component.
In such complex exercises, guaranteeing data consistency between all the applications & components involved is an ideal way to show your added value as a manual tester.
This lets us conclude that survival of the fittest is definitely not about the fastest, but about the smartest. Both test automation and manual testing should be approached from a smart, efficient perspective and be complementary.