For some sci-fi aficionados, the term “Baseline Testing” will evoke memories of the stunning and often disturbing movie “Blade Runner 2049”. There it refers to some rather scary questioning designed to check whether the replicant ‘K’ had picked up any undesirable human characteristics that would need to be fixed. Thankfully, fiction.
Closer to the here and now, the term baseline testing is often used when considering application performance against a benchmark, but I rarely hear it talked about in functional testing terms, and I think many are missing a valuable and simple trick.
Firstly, most software changes are made to existing applications, and that is what I am talking about now. I am not talking about the scenario where software is being developed from scratch, as that needs a totally different approach to testing. The sad thing is that all too often people get stuck in the mindset of the latter and try to apply the techniques needed there to the former. That just leads to an overly complex and unreliable test architecture. People weave themselves into knots about how best to find issues and perform automated checks. Indeed, the discussion of what is an issue, a bug, a problem, a check or a test exacerbates the complexity of the situation and detracts from the down-to-earth need to make sure the software works before exposing it to the users.
That simple objective leads on to considering the definition of “works”.
In the world of most installed, up-and-running applications, there is a very close approximation to that definition: how it was before it was changed – it worked then! Agreed, there were reasons for the change, which might include enhancements and fixes, but most of the time how it was before was not so bad.
This enables us to look at the changes that have been made against this working baseline. And here’s the simplicity of it: it is very easy for the right automation to determine what is different between now and then. Certainly, we can include specific assertions for the changes we know we have made and validate these; that’s the easy part. The part that troubles testers most is trying to figure out whether something else got broken along the way, not knowing where to look or what to look for. How do you codify that?
The pragmatic answer is that you don’t; you just get your automation to tell you what is different. If that’s not easy, you don’t have the right tool. We know about the differences we expect, so we can account for these, and if you are informed of any others that are not expected, you can determine their significance. In this case automation is not testing or checking, it is just informing. But that’s what you need to know, isn’t it? That’s the value of the baseline.
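The idea of accounting for expected differences and flagging everything else can be sketched in a few lines. This is a hypothetical illustration, not a real tool: the field names and captured values are invented, and real baseline-testing products capture screens, API responses and database rows rather than simple dictionaries.

```python
def diff_against_baseline(current, baseline, expected_changes=frozenset()):
    """Return (expected, unexpected) differences between the new run and
    the recorded baseline. 'expected_changes' names the fields we planned
    to change; anything else that differs needs a human look."""
    differences = {
        key: (baseline.get(key), current.get(key))
        for key in set(baseline) | set(current)
        if baseline.get(key) != current.get(key)
    }
    expected = {k: v for k, v in differences.items() if k in expected_changes}
    unexpected = {k: v for k, v in differences.items() if k not in expected_changes}
    return expected, unexpected

# Output of one scenario captured from the last known-good release
# (hypothetical data).
baseline = {"status": "PENDING", "total": 42.00, "currency": "GBP"}
# The same scenario run after our change.
current = {"status": "SHIPPED", "total": 42.50, "currency": "GBP"}

expected, unexpected = diff_against_baseline(
    current, baseline, expected_changes={"status"}
)
print("expected:", expected)      # the change we made on purpose
print("unexpected:", unexpected)  # did something get broken along the way?
```

Here the automation is not judging pass or fail; it simply reports that "status" changed as planned and that "total" also changed, which nobody asked for. Deciding the significance of that second difference is the tester's job.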
There is another win from this approach: when you reach the point where there are no unexpected differences between the new version and the baseline, you have a new definition of ‘it works’. You don’t have to fix any scripts, as they already match the new definition, so your script maintenance is done ready for the next release. More on that here.
This even works when you are changing something for the first time. I used to teach developers doing unit testing to test the programs and functions they were about to change before they changed them. I got them to create a test case which exposed how things worked now: what other objects they referenced, what data was passed, how the database was updated, and whatever else was relevant to those components. So, before they made any changes, they had an automated test which defined the baseline and how it worked. Any change immediately became easier because the impact, good or bad, was exposed.
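This pre-change practice is what is often called a characterization test. A minimal sketch of the idea, with an entirely hypothetical legacy function standing in for the code about to be changed:

```python
# A function we are about to change (hypothetical legacy code).
def apply_discount(total, customer_type):
    if customer_type == "trade":
        return round(total * 0.90, 2)   # trade customers get 10% off
    return total

# Baseline tests written BEFORE touching the code: they pin down how it
# behaves today, not how we think it should behave.
def test_baseline_behaviour():
    assert apply_discount(100.00, "trade") == 90.00
    assert apply_discount(100.00, "retail") == 100.00
    # Even the edge cases are recorded, so any change to them is visible.
    assert apply_discount(0.00, "trade") == 0.00

test_baseline_behaviour()
print("baseline captured")
```

Once these assertions exist, any modification to `apply_discount` that alters its current behaviour, intentionally or not, shows up immediately as a failing baseline test.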
Thankfully, this version of Baseline Testing is a far less scary and more practical use of robotics than the one suggested in Blade Runner 2049! I must say I loved the haunting music in the original Sir Ridley Scott version.
George Wilson is SVP/Operations Director at Original Software