We were initially going to title this blog “Will AI impact software testing job security?” but on reflection that just sounded a bit too dramatic. And, let’s be honest, there’s already too much drama and hype around AI as it is. Whether you believe AI is going to save our planet or wipe out humanity, you’ve probably wondered at some point whether your job could be done by AI – so if you’re involved in software testing and are worrying about the prospect, let us put your minds at ease by discussing why we think your jobs are safe.
Many tools already use AI
It’s worth pointing out up front that plenty of software testing tools – including the Original Software Platform – already use AI. At Original Software, we use AI for object recognition in our test automation capabilities. That means the AI reviews the application under test and automatically recognizes text fields, buttons, images, text, and more (it’s how our test automation solution is able to spot the difference between versions of your software and show you every change that’s happened).
That AI, however, isn’t what some people are calling “advanced AI”. Advanced AI includes generative AI tools like ChatGPT and DALL-E, as well as other tools that make use of more advanced neural networks to be more “intelligent”.
Last year’s predictions
We talked last year about what we think AI will do in testing. To directly quote ourselves from a year ago:
- We may see a rise in tools that require absolutely no configuration. You simply show them your applications, and they automatically work out what those applications are, how they work, devise automated tests, and run them.
- We could see tools that can review a user’s test results from a manual testing phase, such as UAT, and turn those into easy-to-understand feedback, which can be sent to your dev team.
- We may also see a step-change in code-free test script creation, where users can simply describe the test they want, and the system automatically creates it (we did actually trial something like this a few years back, but it relied on drop-down menus to create the right level of specificity and was a bit ahead of its time, so it didn’t quite take off).
One year on, we still see AI as having the potential to help with software testing – but that potential hasn’t been realized yet. And since we’re still a little way off AI being able to meaningfully help with testing, it’s safe to say that we’re even further away from the prospect of AI being able to replace a human in the testing process.
Why can’t AI help more?
From our own experiments with advanced AI, and from what we hear from our customers, we see that the biggest challenge is the level of nuance and complexity involved in devising and running test cases.
For instance, we ran an experiment where we showed ChatGPT the HTML for a user login page and asked it to create test cases for that page. ChatGPT came up with a number of test cases, such as logging in with a correct username, an incorrect one, correct and incorrect passwords, and attempting SQL injection. However, the test cases were all essentially unit tests: checks that a single component behaves correctly under a variety of conditions. ChatGPT was incapable of devising a test case that spanned multiple applications, or even different components of the same application, without a lot of guidance and detail – at which point the benefit of using AI was lost, as the test cases could have been written faster by hand.
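To make that concrete, here is a sketch of the kind of unit-style test cases ChatGPT produced, written in pytest form. The `login()` function is a hypothetical stand-in for the page under test – it is not from our experiment, just an illustration of the scope those generated cases covered.

```python
# Hypothetical stand-in for the login component under test.
VALID_USERS = {"alice": "s3cret!"}

def login(username: str, password: str) -> bool:
    """Return True only for a known username with its matching password."""
    return VALID_USERS.get(username) == password

# The generated cases all exercised this one component in isolation:
def test_valid_credentials():
    assert login("alice", "s3cret!") is True

def test_wrong_password():
    assert login("alice", "wrong") is False

def test_unknown_username():
    assert login("mallory", "s3cret!") is False

def test_sql_injection_attempt():
    # An injection string should be treated as plain, invalid input.
    assert login("alice' OR '1'='1", "x") is False
```

Every test here pokes at one function under different inputs – useful, but nothing like an end-to-end scenario that flows across several components or applications.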
Obviously, AI doesn’t start and end with ChatGPT. Organizations can build (and are building) customized AI tools for software testing. But there are three things to consider here:
- Building your own advanced AI tool is a very expensive undertaking, requiring large amounts of power and deep expertise. Most organizations are unlikely to be able to afford this.
- Even if you use an AI tool developed by a third party, who is going to be the lead for that tool in your organization? Chances are, it’s you – so your job is likely secure.
- Current AI tools are focused on improving the ease with which humans can conduct testing, and the effectiveness of their testing – not cutting them out of the process entirely.
Prompt engineering – the new testing skill?
That last point is really the crux of our message here – and it’s a message you see in every discussion of AI. Although AI is designed to make lives easier and to be accessible to anyone, using it well is a skill you can learn. In that sense, it’s just like searching the web – anyone can do it, but doing it well is a definite skill (if you’re the one who can find things on the web when others can’t, you have the skill; if you’re the only one who can’t find the news article everyone at the office is talking about, you don’t!).
Part of the joy of advanced AI is its ability to be tailored to specific tasks and contexts – one organization’s testing AI might not look the same as another’s. That means that being able to get the best results out of your organization’s AI tools is going to be an in-demand skill, as is teaching others in your organization how to use it if they need to.
The technical term for this art is prompt engineering – and we reckon it will become a vital skill in software testing (and likely most industries), just as the ability to use Microsoft Office did in the 1990s.
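What does that skill look like in practice? Below is an illustrative pair of prompts for a test-generation AI – both invented for this post, not taken from any real tool. The difference between them is exactly the guidance-and-detail gap we described in our ChatGPT experiment.

```python
# Illustrative only: two ways of asking an AI tool for test cases.

# A vague prompt leaves the AI guessing at scope and output format.
vague_prompt = "Write tests for my login page."

# A skilled prompt supplies the artifact, the coverage we want,
# and the shape the answer should take.
specific_prompt = (
    "Here is the HTML for our login page: [paste HTML here]. "
    "Generate test cases covering valid and invalid usernames and "
    "passwords, empty fields, and SQL injection attempts. "
    "Return each case as: name, steps, expected result."
)
```

The prompts themselves are trivial strings; the skill is knowing what context and constraints to put in them.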
And what better place to end this blog than thinking about what that world looks like? As the software test manager in an AI-enabled world, you’ll likely find that you can react far faster to new testing requirements, because you can create new test cases far more quickly and plan your test cycle with minimal effort. Broken test scripts will either be identified faster for fixing, or fixed automatically, keeping everything on track. At every turn, AI has the potential to make your job more efficient and less stressful, and to help you produce higher-quality work. Not bad!