
by Louise Bellamy on 10th October 2024

Busting testing myths #2: “AI is about to transform software testing”

Senior executives are used to people trying to turn their heads with fancy new technology. It can sometimes feel like you’re caught in a maelstrom of technology that promises (or threatens) to revolutionize your workflows, optimize your resources, digitize your transformation and transform your digitization.

You’ll be relieved to know that there’s none of that happening here today. 

In today’s blog post we’re talking about why much of the talk around AI, and how it can help with software testing, may be overhyped. We’ll also be talking about what AI – and specifically, the Original Software platform – can actually do for you instead (hint: it’s the same stuff it’s been doing for years).

What are people saying about AI and software testing? 

We know that much of the chatter in the industry at the moment is about how AI can be used to generate test data and test cases. Given that 45% of the customers we spoke to told us that building new test cases is their most time-consuming task, those are tempting carrots to dangle in front of prospects.

But we say: hold your horses. 

Why AI won’t be building your new test cases any time soon 

The example test cases we’ve seen (and tried to build ourselves) using AI all have one thing in common: they’re basic. AI can be used with reasonable success to create test cases for a login page, for instance. It knows to try entering a correct username and password, to leave one or both fields blank, or to enter incorrect data – it can even attempt SQL injection.
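
To make “basic” concrete, here is roughly the shape of the login suite an AI tends to produce – a minimal, hypothetical pytest sketch (the attempt_login function is a stand-in, not anything from our platform):

```python
# A sketch of the kind of "basic" login test cases AI tends to produce.
# attempt_login is a hypothetical stand-in for your real login call.
import pytest

VALID_USER, VALID_PASS = "alice", "s3cret"

def attempt_login(username: str, password: str) -> bool:
    """Hypothetical stub so the example runs; replace with a real call."""
    return username == VALID_USER and password == VALID_PASS

@pytest.mark.parametrize(
    "username, password, expected",
    [
        (VALID_USER, VALID_PASS, True),     # happy path
        ("", VALID_PASS, False),            # blank username
        (VALID_USER, "", False),            # blank password
        ("", "", False),                    # both blank
        (VALID_USER, "wrong", False),       # incorrect password
        ("' OR '1'='1", "anything", False)  # naive SQL injection attempt
    ],
)
def test_login(username, password, expected):
    assert attempt_login(username, password) is expected
```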

Fine. But now consider the sorts of things you need to test – let’s say, submitting a sales order on Infor M3. That process is to a login screen as the Mona Lisa is to a child’s crayon drawing. The complexity is just orders of magnitude larger. 

In our experience, there’s just no way any AI – even a proprietary and sandboxed version created by a testing company – will be able to capture all the complexities and quirks of your system and process. How will it know whether a sales order should trigger manufacturing orders or requisition orders, for instance? What will it do with the seven custom fields you made for your business (including the one that Infor later created a native field for, but which you continue to use because the pain of remapping everything is too much)? 

AI just isn’t the right tool for building test cases for complex ERP systems.

Read part #1 of our myth-busting series: “We can automate UAT”

Using AI to generate test data is largely a waste of effort 

Sounds like a bold statement, right? But our logic is sound: 

  1. Because of the complexity of your systems, getting AI to generate test data that will sufficiently test them is likely to be very time-consuming, or completely impossible. 
  2. To do it successfully, you’d likely need to hand a copy of your production data over to your testing company. Even if their system is sandboxed and secured with the latest and greatest security, that’s still a greater security risk than not sharing your data at all. 
  3. Tools already exist that allow you to clone and completely anonymize your data for testing purposes (see the sketch after this list). With that in mind, sending your data to a third party so they can create a version of it using AI seems like a lot of effort. 
  4. Unless you’re starting your testing from scratch, it’s highly likely that you already have test data that works for your environment. For the reasons above, the effort required to use AI to improve on that dataset is likely greater than it would take to create the data yourself – or pull it from production and anonymize it. 
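
As a rough illustration of point 3, here’s a minimal sketch of the clone-and-anonymize approach using the open-source Faker library. The record structure and field names are hypothetical, and purpose-built masking tools do far more than this (referential integrity, format-preserving masking, and so on):

```python
# Minimal sketch: anonymize a cloned slice of production data for testing.
# The "customers" structure and field names here are hypothetical.
from faker import Faker

fake = Faker()
Faker.seed(42)  # deterministic output so the test data is reproducible

production_clone = [
    {"customer_id": 1001, "name": "Real Customer Ltd", "email": "buyer@realcustomer.com", "credit_limit": 50000},
    {"customer_id": 1002, "name": "Another Real Co",   "email": "ap@anotherreal.co",      "credit_limit": 12000},
]

def anonymize(record: dict) -> dict:
    """Replace identifying fields with fake values; keep the shape and the
    business-relevant numbers (IDs, credit limits) intact for testing."""
    return {
        **record,
        "name": fake.company(),
        "email": fake.company_email(),
    }

test_data = [anonymize(r) for r in production_clone]
print(test_data)
```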

So what can AI do in software testing? 

The guiding principle we’ve used for deploying AI in the Original Software Platform is relatively simple: can AI do the task both better and faster than a human can? If so, let the AI do it. If not, then let the human do it. 

In our platform, one of the most talked-about uses of AI is in our regression testing capabilities. Regression testing, as we’ve explained in other blogs, can be thought of as a big game of spot the difference. What has changed since the last version of the software? Have buttons moved, has text changed, and so on? Humans can spot those differences – but it takes time, and they often miss some of the changes. Our AI-powered object recognition software, however, can spot every change in milliseconds. It does the job better and faster than a human can, so it gets the job. 
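
To picture the spot-the-difference idea, here’s a minimal sketch using plain pixel comparison with the Pillow library. This is only an illustration of the concept – our object recognition works at the level of UI objects rather than raw pixels, which is what lets it ignore harmless rendering noise and pinpoint what actually changed. The screenshot filenames are placeholders:

```python
# Minimal "spot the difference" between two screenshots using Pillow.
# baseline.png and latest.png are hypothetical screenshot files.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
latest = Image.open("latest.png").convert("RGB")

diff = ImageChops.difference(baseline, latest)
bbox = diff.getbbox()  # bounding box of all changed pixels, or None

if bbox is None:
    print("No visual changes detected.")
else:
    print(f"Changes detected in region {bbox}")
    diff.crop(bbox).save("changed_region.png")  # save the changed area for review
```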

That object recognition pops up in other places, too. It’s the reason our test automation is much harder to break than other solutions. Ours doesn’t rely on a rigid sequence of scripted steps to progress a test; instead, it uses AI to spot the object it’s supposed to test next (for instance, finding the text box to type a username into). 
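
For contrast, here’s a hypothetical snippet showing why conventional scripted automation breaks so easily. The first locator depends on the page’s exact structure, so any layout change kills the step; the second expresses intent instead. (This is Playwright-style code used purely to illustrate the general problem – it is not how our object recognition works under the hood.)

```python
# Hypothetical login step written two ways with Playwright (Python).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto("https://example.com/login")  # placeholder URL

    # Brittle: tied to the page's exact DOM structure. A redesign,
    # an extra wrapper div, or a reordered form breaks this step.
    page.locator("#login-form > div:nth-child(2) > input").fill("alice")

    # More resilient: tied to what the user sees and intends.
    page.get_by_label("Username").fill("alice")
    page.get_by_role("button", name="Log in").click()
```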

That same technology also enables our platform to create annotations of manual tests that users can easily understand. When a user performs an action, the AI recognizes what is being interacted with and creates the annotation automatically. That job would take a human tester a lot of time to do and the results would be highly variable depending on the tester; AI gets the job done faster and in a more uniform fashion. 

Go beyond the hype 

We all know that chasing the latest shiny technology is a risky game. Will it transform your operations, or will it come back to bite you when it flops? If you’re looking for ways to improve how you do software testing, our advice is: stick with what works. Which would be our platform. 

As we’ve said, it uses AI where appropriate to save time and deliver better results than humans can. But it does so much more than that, too. It gives you a single environment to capture, manage, and automate your testing, professionalizing your testing process and making it easier to scale. 

That includes: 

  • Tools to make manual testing pain-free for testers, with standardized feedback that developers can understand and action, and the ability to turn test results into user guides for onboarding. 
  • Powerful test automation and full regression capabilities, including easy baseline creation. 
  • A full management dashboard showing test managers progress on every test, identifying tests that are behind schedule, and facilitating communication between testers, managers, and developers. 

There’s lots to discover about our platform, so if you’d like to know more, click below to get in touch with us. 


Ready to talk testing?

We’re ready to show you how we can help reduce your business risk and test faster than ever.

Talk to us!