Any good IT leader knows that when it comes to technology trends, having one eye on what’s coming down the road can make all the difference between rolling with the punches and getting knocked to the floor. Here at Original Software, we, too, have one eye beadily fixed on the future. In this blog, I’m sharing a few of our predictions for software testing – hopefully without getting sucked into the vortex of meaningless discussion around ChatGPT. Let’s get into it.
Prediction 1: UAT and integration testing will reign
Most of the big ERP providers are heavily promoting their cloud platforms – and we’ve talked in other blogs about the consequences that has for testing. Thinking over the longer term, though, the move to the cloud has two consequences that are particularly interesting:
- As updates become more frequent, they (generally) become smaller and functionally more stable.
- The relative inflexibility of cloud platforms will cause application proliferation as organizations try to create the functionality they need.
The bottom line of these two points is that there will be less internal software development effort. Cloud updates will require almost no back-end implementation or configuration (which is, after all, part of the point of the cloud). This trend will likely play out with third-party applications, too – as the number of integrations needed grows, integration platforms such as Zapier are likely to be deployed to reduce the technical effort of connecting applications to your core software.
Because of this, your testing efforts will become even more focused on UAT. There will be some regression testing to do, of course, but most of that will likely be automated (as it can be right now), so it won’t take up much time. But every update will still need to be tested by users to ensure business processes work as they should. As for those integrations, every time an application is updated, you’ll need to test every application that integrates with it to make sure everything still functions properly.
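To make that concrete, here’s a minimal sketch of the kind of automated check you might run over your integrations whenever one application in the chain is updated. Everything in it – the application names, the health-check URLs – is a hypothetical placeholder rather than a reference to any real product or to our own tooling.

```python
# Minimal sketch: after an update to one application, run a quick smoke check
# against every application that integrates with it. All names and URLs below
# are hypothetical placeholders.
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical map of an updated app to the apps that consume its data.
INTEGRATIONS = {
    "erp-core": ["crm-sync", "warehouse-portal", "invoicing-service"],
}

# Hypothetical health-check endpoints for each integrated application.
HEALTH_CHECKS = {
    "crm-sync": "https://example.internal/crm-sync/health",
    "warehouse-portal": "https://example.internal/warehouse/health",
    "invoicing-service": "https://example.internal/invoicing/health",
}

def smoke_check(updated_app: str) -> bool:
    """Ping every app that integrates with the updated one and report failures."""
    all_ok = True
    for app in INTEGRATIONS.get(updated_app, []):
        try:
            with urlopen(HEALTH_CHECKS[app], timeout=5) as response:
                ok = response.status == 200
        except URLError:
            ok = False
        print(f"{app}: {'OK' if ok else 'FAILED'}")
        all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    smoke_check("erp-core")
```

A real pipeline would obviously go beyond pinging health endpoints, but the shape is the same: the updated application sits at the centre, and every dependent application gets at least a basic automated check before users are asked to re-test business processes by hand.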
Given that, at present, many organizations find UAT one of the most painful and time-consuming elements of testing, it might be time for IT teams to urgently review how they get UAT done – especially if a cloud migration has already happened or is on the horizon.
Prediction 2: move away from public cloud?
Perhaps I’m running contrary to popular opinion here, but I’m not sure how long the dominance of the cloud will last – at least, the public cloud. I say this because, over the years, I’ve noticed a pattern of centralization and localization in IT. In the early days of computing, an organization might have had one computer terminal that everyone shared for various tasks. Then, as personal computing became practical and popular, compute was localized as everyone was given a work computer. Then, as network infrastructure matured, organizations started centralizing their data repositories. At present, compute is being centralized even further in the cloud. I can’t help but expect a shift back towards localization at some point in the future. You can already see hints of it in disciplines such as edge computing, which aim to move compute nearer to users (as opposed to keeping it in a large centralized data center), and products like Azure Stack, which can be used to create a private cloud environment.
For those involved in software testing, this prediction is something to keep an eye on for now rather than something to act upon. If private clouds suddenly increase in popularity, then you’ll need to make sure you have the right skills in-house to manage the tech stack, but that may happen ten years from now (or never – I’m not clairvoyant, after all!). It may mean that application development moves back in-house to a certain extent, changing the makeup of your testing and therefore requiring robust test management to keep everything on track.
I do believe that, even if we move from centralized back to localized IT infrastructure, it will be with all the benefits that cloud computing has given us – particularly microservices-based applications, elasticity, and so forth – so it’s not like we’ll be going back to the age of CD-ROMs.
Prediction 3: yes, alright, AI will probably be a big thing
I held off as long as I could, but it’s plain that the potential offered by tools like ChatGPT is considerable – so let’s consider it.
I think it’s important to note that much of what gets called AI right now isn’t true AI: it’s highly sophisticated algorithms that don’t genuinely learn unless a user trains them. What makes ChatGPT so exciting is that it genuinely does learn from its interactions and refines its responses as it does. True self-learning AI, I believe, could have a load of applications in testing:
- We may see a rise in tools that require absolutely no configuration. You simply show them your applications, and they automatically work out what those applications are and how they work, devise automated tests, and run them.
- We could see tools that review a user’s test results from a manual testing phase, such as UAT, and turn those into easy-to-understand feedback that can be sent to your dev team.
- We may also see a step-change in code-free test script creation, where users simply describe the test they want and the system automatically creates it (we did trial something like this a few years back, but it relied on drop-down menus to create the right level of specificity and was a bit ahead of its time, so it didn’t quite take off). A toy sketch of the general idea follows this list.
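To show the general shape of that last idea, here’s a deliberately simple, rule-based toy – no AI involved – that turns a plain-language description into structured test steps. Every keyword, pattern, and step name in it is invented for illustration; a real code-free tool would need far more sophisticated language understanding.

```python
# Toy sketch only: a rule-based parser that maps a plain-language test
# description to structured steps. Everything here is invented for
# illustration; it is not how any real code-free testing tool works.
import re

# Hypothetical mapping from plain-English phrasing to test actions.
ACTION_PATTERNS = [
    (re.compile(r"open the (?P<target>.+?) page", re.I), "navigate"),
    (re.compile(r"enter '(?P<value>.+?)' in (?P<target>.+)", re.I), "type"),
    (re.compile(r"click (?P<target>.+)", re.I), "click"),
    (re.compile(r"expect (?P<target>.+)", re.I), "assert_visible"),
]

def parse_description(description: str) -> list[dict]:
    """Split a description on commas/'then' and map each clause to a step."""
    clauses = re.split(r",| then ", description)
    steps = []
    for clause in (c.strip() for c in clauses if c.strip()):
        for pattern, action in ACTION_PATTERNS:
            match = pattern.search(clause)
            if match:
                steps.append({"action": action, **match.groupdict()})
                break
    return steps

if __name__ == "__main__":
    text = "Open the orders page, enter 'ACME' in customer, click save, expect a confirmation message"
    for step in parse_description(text):
        print(step)
```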
Some of the building blocks for these things are already there. Plenty of companies have been putting serious effort into improving object recognition in automated testing (i.e., getting their tools to recognize which parts of an application are important to test). That’s a key foundation of a tool that could automatically create tests just from seeing an application. In fact, here at Original Software, we’ve been refining our object recognition algorithms for years, to the point that our scripts are now self-healing – able to continue functioning even if forms and buttons move or change. And, as I mentioned, we’re always experimenting with new ways to make the process smarter and easier. There are also tools out there that will automatically create variants of your test script to cover a broader range of test scenarios – but none of them are directed by a self-learning AI.
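For a sense of what self-healing means in general terms (this is a generic sketch, not our actual algorithm), imagine a script that recorded several attributes for each element it interacts with and falls back to the others when its primary locator no longer matches:

```python
# Minimal illustration of a self-healing locator: if the attribute a test
# originally recorded (e.g. an element id) has changed, fall back to other
# recorded attributes such as the label or element type. This is a generic
# sketch, not any vendor's actual implementation.

# A snapshot of the application's current screen, modelled as simple dicts.
current_screen = [
    {"id": "btn_submit_v2", "label": "Save order", "type": "button"},
    {"id": "txt_customer", "label": "Customer name", "type": "textbox"},
]

# Attributes recorded when the test script was first created.
recorded_element = {"id": "btn_submit", "label": "Save order", "type": "button"}

def find_element(screen, recorded):
    """Try the recorded id first, then 'heal' by matching label and type."""
    for element in screen:
        if element["id"] == recorded["id"]:
            return element, "matched by id"
    for element in screen:
        if element["label"] == recorded["label"] and element["type"] == recorded["type"]:
            return element, "healed: matched by label and type"
    return None, "not found"

element, how = find_element(current_screen, recorded_element)
print(how, "->", element)
```

In practice, the fallback attributes would be captured automatically when the script is first recorded rather than written out by hand, but the principle is the same: the more the tool understands about the objects on screen, the less a cosmetic change breaks your tests.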
We also have to remember that this generation of AI is still in its early stages. I’ve used ChatGPT to help me write code and have noticed that, on occasion, it’s given me two different sets of code from the same prompt. Both worked, and both did the same thing, but they were different – and that could cause problems further down the line if an organization were to adopt AI-driven code generation at scale.
So, again, it’s a bit of a wait-and-see prediction. And it’s important to state that, in my opinion, AI won’t ever be able to replace the human element of testing, no matter how many pictures, novels or plays ChatGPT writes. In testing, you’re not looking to replicate human creativity; you’re ensuring your software stands up to the randomness of human behavior. Will your applications load quickly enough that even someone in a hurry will be satisfied? Do the buttons look nice? Does the system make users feel empowered and not add to their stress? Those questions (for now, at least) are still far too complex for an AI to answer with any degree of reliability.
BONUS prediction: a convergence of process testing and optimization?
This is a bit more in the realms of pure speculation, but it’s a topic I think is worthy of discussion. If testing tools become sufficiently advanced that they can understand your software instantly and devise tests to ensure it performs as it should, how much longer will it be before they start suggesting process improvements? For instance, what if your tool recognized that you were entering the same data multiple times and suggested alterations to the process to eliminate the repeat data entry? Or suggested changes to reduce the number of clicks required to complete a process? A rough sketch of that first kind of check follows the list below.
Changes like that could have enormous impacts on:
- The time it takes people to do their work, helping improve productivity or freeing people up for other work.
- The number of errors in data inputs, improving the quality of the data your organization works with and further improving turnaround times and business outcomes.
- The customer journey, if you apply these tools to customer-facing processes. Doing so could improve customer satisfaction as processes get easier and simpler.
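Purely as an illustration of the sort of check I mean (all screen and field names below are made up), a tool that has recorded a user’s journey through a process could flag repeated data entry along these lines:

```python
# Illustrative only: scan a recorded sequence of user actions and flag fields
# where the same value is typed more than once, suggesting the process could
# pre-fill or carry the value forward. All screen and field names are made up.
from collections import defaultdict

recorded_actions = [
    {"screen": "New order", "action": "type", "field": "customer_id", "value": "C-1042"},
    {"screen": "New order", "action": "click", "field": "next"},
    {"screen": "Delivery details", "action": "type", "field": "customer_id", "value": "C-1042"},
    {"screen": "Delivery details", "action": "type", "field": "postcode", "value": "RG21 4EB"},
    {"screen": "Billing", "action": "type", "field": "customer_id", "value": "C-1042"},
]

def find_repeat_entry(actions):
    """Group typed values by field and report any entered on multiple screens."""
    entries = defaultdict(list)
    for step in actions:
        if step["action"] == "type":
            entries[(step["field"], step["value"])].append(step["screen"])
    suggestions = []
    for (field, value), screens in entries.items():
        if len(screens) > 1:
            suggestions.append(
                f"'{field}' is re-entered on {len(screens)} screens "
                f"({', '.join(screens)}) - consider carrying it forward automatically."
            )
    return suggestions

for suggestion in find_repeat_entry(recorded_actions):
    print(suggestion)
```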
It might seem a long way away – and there are organizational challenges that would need to be overcome to get the insights from a testing tool into the hands of the people responsible for process optimization – but, I hope you’ll agree, it’s not that far-fetched.
Back to the present
It’s exciting to think about what’s coming and the opportunities it could bring your organization. Technological advances in testing tools have the potential to make it far easier to create and conduct tests, speed up test cycles and reduce the burden on business users at critical stages such as UAT. This will be handy, as it’s highly likely that UAT will become the main type of testing your business conducts.
Whatever happens in the future, it’s certainly true right now that you need to think critically about your testing capabilities. Are they ready for a cloud-native and UAT-heavy testing regime, where updates are released monthly and affect multiple systems? If you’re not confident in your current tools and processes, then we should talk. Original Software has led the way in test automation, test management, and UAT for 25 years now. In that time, we’ve created built-for-purpose tools and a platform to make test creation, execution, and management as painless and effective as possible – and we’ll continue to keep them fit for purpose, whatever the future of testing holds. If you’d like to talk about how we can help you, click below to get in touch.