Automation in testing - a pragmatic approach

Neil Studd

Software Development Engineer in Test

Here at comparethemarket, we take a pragmatic approach to quality, looking to infuse best-in-class testing practices throughout our teams whilst still ensuring we are able to deliver frequent high-standard releases. We don't constrain ourselves to particular tools, nor impose a single way of working: each product team is empowered to discover what works best for them, and to share stories of successes (and failures) with their colleagues. In the spirit of this, I wanted to share a few insights into how my particular team operates.

There are plenty of lively debates online about the differences between manual and automated testing, such as Bach and Bolton's "Testing vs Checking" distinction. Much of the discussion around terminology, and the division of roles and responsibilities, can be tiresome and counterproductive: I tend to prefer what Richard Bradshaw refers to as "Automation in testing". To summarise: our scripted tools don't have to form a fully autonomous solution, but they can automate the pain points of our day-to-day jobs, making it easier to deliver valuable information which humans can use to make decisions.

EXAMPLE 1: DATA COMPARISON TOOLING

I work within a team which provides data transformation capabilities to all of our products. In other words, when a customer requests an insurance quote, we have to forward this request to many different insurance providers, with each provider wanting to receive that data in a different format.
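To give a flavour of what that means in practice, here's a minimal, hypothetical sketch in Python. The field names, providers and banding rules below are invented purely for illustration - they aren't taken from our real mappings:

```python
# Hypothetical sketch only: field names, providers and banding rules are invented.

quote_request = {
    "driver_dob": "1985-06-14",
    "vehicle_reg": "AB12 CDE",
    "annual_mileage": 8000,
}

def map_for_provider_a(quote: dict) -> dict:
    # Provider A wants the date of birth split out and the mileage banded.
    year, month, day = quote["driver_dob"].split("-")
    return {
        "DriverBirthYear": year,
        "DriverBirthMonth": month,
        "DriverBirthDay": day,
        "Registration": quote["vehicle_reg"].replace(" ", ""),
        "MileageBand": "UNDER_10K" if quote["annual_mileage"] < 10000 else "OVER_10K",
    }

def map_for_provider_b(quote: dict) -> dict:
    # Provider B accepts the same values largely as-is, but with its own field names.
    return {
        "dob": quote["driver_dob"],
        "reg_no": quote["vehicle_reg"],
        "mileage": quote["annual_mileage"],
    }
```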

Checking whether we're sending the correct data, or whether recent changes have adversely affected the data that we send, has always been an inherently manual task. There's no intrinsic reason why a provider should want to receive value X rather than value Y, so we have to cross-reference the provider's mapping documentation to verify whether X is correct. It's not a task which traditionally lends itself to automation, but our dev/test team has worked together to automate some of the most troublesome portions of it.

[Screenshot: the data comparison tool, highlighting differences between two mapping versions]

The tool contains hundreds of representative data samples, covering every possible value of every attribute in our data structure, and performs an automated comparison between two versions of the mappings. The differences it surfaces might indicate a problem, or they might not - that's where we still need a human eye. (For instance, in the screenshot above, the new address in green is actually more accurate than the old address in red.)

Importantly, we've enabled our automation to take care of the time-consuming, unskilled parts of the activity (checking whether two documents are identical) and we give this information to human reviewers so that they can focus their time on the more valuable, skilled part of the activity (reviewing whether the differences are a problem).
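As a rough sketch of the underlying idea (not our actual tool), a comparison like this can be expressed in a few lines of Python, using the standard library's difflib to surface the differences for a human to review. The sample payloads are hypothetical:

```python
import difflib
import json

def compare_mappings(old_output: dict, new_output: dict) -> list[str]:
    """Return a human-readable diff between two versions of a mapped payload.

    The tool's job ends at surfacing the differences; deciding whether a
    difference is actually a problem is left to a human reviewer.
    """
    old_lines = json.dumps(old_output, indent=2, sort_keys=True).splitlines()
    new_lines = json.dumps(new_output, indent=2, sort_keys=True).splitlines()
    return list(difflib.unified_diff(
        old_lines, new_lines, fromfile="old mapping", tofile="new mapping", lineterm=""
    ))

# Example: an address that differs between the two mapping versions.
old = {"Address": "1 High St", "Postcode": "PE1 1AB"}
new = {"Address": "1 High Street", "Postcode": "PE1 1AB"}
for line in compare_mappings(old, new):
    print(line)
```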

EXAMPLE 2: DATA VALIDATION AS A SERVICE

When our data transformations are working in the wild, they're also exercising a vast array of custom validations. These checks allow us to verify the integrity of data all the way through our code: from when it arrives as an input, through being transformed by our mappings, to being passed onwards to insurance providers. These checks aren't something we have to run explicitly - they're happening all the time, as data flows through the system.

What sort of checks are we performing? Well, here are a few different types:

* Input data doesn't match the expected format. For instance, suppose a mandatory field is missing from the data: this might indicate that the field is now optional, or it might be an error on the website. Either way, the error gives us information which prompts a conversation with the team.

* Invalid action within the mapping. For example, if our mapping tries to multiply two values together, but one of the values is sometimes non-numeric, we throw an error about the invalid calculation, prompting us to examine the logic within that mapping.

* Problems sending the output to an insurer. If we get error codes in our request or our response, this might indicate that there was a problem in the data from our side, or it could be a problem at their end - but again, it's a prompt to have that conversation.

This means that our data testing isn't an activity which has an explicit start and end point. It's happening automatically, in the background, all of the time.
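To make that concrete, here's a heavily simplified, hypothetical sketch of the three types of check described above. The function names, field names and error type are illustrative only:

```python
class DataValidationError(Exception):
    """Hypothetical error type for checks that fire as data flows through the system."""

MANDATORY_FIELDS = ("driver_dob", "vehicle_reg", "annual_mileage")

def validate_input(quote: dict) -> None:
    # Check 1: input data doesn't match the expected format (a mandatory field is missing).
    for field in MANDATORY_FIELDS:
        if field not in quote:
            raise DataValidationError(f"Mandatory field missing from input: {field}")

def apply_mapping(quote: dict) -> dict:
    # Check 2: invalid action within the mapping (e.g. multiplying a non-numeric value).
    mileage = quote["annual_mileage"]
    if not isinstance(mileage, (int, float)):
        raise DataValidationError(f"Cannot multiply non-numeric mileage value: {mileage!r}")
    return {"estimated_miles_over_term": mileage * 3}  # purely illustrative calculation

def check_insurer_response(status_code: int) -> None:
    # Check 3: problems sending the output to an insurer (an error code in the response).
    if status_code >= 400:
        raise DataValidationError(f"Insurer returned error code {status_code}")
```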

EXAMPLE 3: MONITORING AND METRICS - FREE TESTING!

All of the errors that I mentioned above are written to our log files and surfaced to us in Splunk, an invaluable web-based tool for monitoring and analysing logs. Its query engine allows particular messages to be excluded, so that (for instance) we can hide known issues, enabling us to quickly identify new problems that we weren't previously aware of.
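In Splunk itself this filtering is done with its query language; as a toy illustration of the same idea in Python, the sketch below filters a batch of log messages against a list of known issues (the messages and patterns are invented):

```python
# The patterns and messages below are invented for illustration.
KNOWN_ISSUE_PATTERNS = (
    "Mandatory field missing from input: driver_occupation",
    "Insurer returned error code 503",
)

def new_problems(log_messages: list[str]) -> list[str]:
    """Return only the log messages that don't match a known issue."""
    return [
        message for message in log_messages
        if not any(pattern in message for pattern in KNOWN_ISSUE_PATTERNS)
    ]
```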

With all of these checks happening automatically in the background, we've opened up interesting new avenues for discovering issues. For example, when a new insurance provider is added to our list of insurers, that provider will often run their own pre-release tests through our website, to check whether we've covered the edge-cases in their particular mappings. But because they submit these automated insurance quotes through our website, their requests are also sent to every other insurance provider - so effectively, they're subjecting every provider to their edge-cases! This has allowed us to identify several scenarios that other providers weren't handling correctly.

These metrics also cascade into our alerting system, so that we can receive notifications via Slack or by telephone if thresholds are breached. These alerts are so finely tuned that we'll often identify a problem with one of our providers' websites (for example, downtime on their endpoint) before they've identified it themselves! Staying ahead like this allows us to avoid surprises, and means we don't have to operate as a purely reactive force.
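Purely as an illustration of the thresholding idea (the threshold value, event shape and helper name are all hypothetical):

```python
from collections import Counter

# Hypothetical threshold for a single monitoring window.
ERRORS_PER_PROVIDER_THRESHOLD = 50

def providers_breaching_threshold(error_events: list[dict]) -> list[str]:
    """Given error events shaped like {"provider": "ProviderA"}, return the
    providers whose error count in the current window exceeds the threshold."""
    counts = Counter(event["provider"] for event in error_events)
    return [
        provider for provider, count in counts.items()
        if count > ERRORS_PER_PROVIDER_THRESHOLD
    ]

# In a real pipeline, any provider returned here would be fanned out to a
# Slack or on-call telephone notification by the alerting system.
```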

CONCLUSION

In an environment where fully combinatorial testing of data inputs is impractical (if not impossible), we've been working hard to find ways to mitigate risk and to challenge our existing ways of working. Our testers are leaders who influence the direction we take, but these initiatives are driven by the entire team - everybody is responsible for quality!

The approach I've outlined above was chosen because it fitted the needs of this particular team at this point in time, and it's certainly not an all-purpose solution. For example, my team deals almost exclusively with RESTful web services, which bring their own set of challenges, whereas other teams have to focus more on things like cross-browser rendering issues. Stay tuned to our blog to hear more about how those other teams face up to their challenges!

If you're intrigued by our way of testing, and you'd like to be a part of it, we're currently hiring for new Software Development Engineers in Test! Click here for more details.
