How might your IBM i testing look if you didn’t have to rely on large datasets to maintain referential integrity? You could test faster, since you wouldn’t be processing enormous datasets. You’d also spend less time afterwards hunting through the results to check whether a test succeeded. You might even have room for more test environments, further speeding up testing or increasing test coverage.
The TestBench Data Extraction module automatically creates subsets of your production data that preserve referential integrity.
You can then test those subsets with complete confidence that you aren’t missing a trick.
TestBench automatically understands how your database is structured and builds a data model that maintains referential integrity.
Select data based on specific field values to create subsets of data. You can even select data based on related values in other files.
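To picture what selection with referential integrity means, here is a generic sketch in Python with SQLite standing in for the database. All table and field names are invented for illustration; TestBench drives this from the data model it builds, not from hand-written SQL.

```python
import sqlite3

# Hypothetical schema: a customers file and a related orders file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
INSERT INTO customers VALUES (1, 'EU'), (2, 'US'), (3, 'EU');
INSERT INTO orders VALUES (10, 1, 50.0), (11, 2, 75.0), (12, 3, 20.0);
""")

# Subset the parent file on a field value...
subset_customers = conn.execute(
    "SELECT id, region FROM customers WHERE region = ?", ("EU",)
).fetchall()

# ...then pull only the child rows that reference the selected parents,
# so every foreign key in the subset still resolves.
ids = [row[0] for row in subset_customers]
placeholders = ",".join("?" * len(ids))
subset_orders = conn.execute(
    f"SELECT id, customer_id, total FROM orders"
    f" WHERE customer_id IN ({placeholders})",
    ids,
).fetchall()

print(subset_customers)  # [(1, 'EU'), (3, 'EU')]
print(subset_orders)     # [(10, 1, 50.0), (12, 3, 20.0)]
```

The same pattern also covers selecting by related values in other files: the filter sits on one file, and the join decides what comes along from the rest.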
Easily refine your data subset to include all the data scenarios you want to test.
Create logical files, triggers, and constraints to make your data subset truly representative of your database.
Replace or add data to your dataset to enhance your test data environment.
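The points above boil down to making the extracted copy behave like production. A minimal sketch of that idea, again with invented names and SQLite in place of the real database:

```python
import sqlite3

# Hypothetical test copy of an extracted subset. The check constraint is
# defined with the table; the trigger is re-created on the copy so the
# same audit behaviour fires as in production.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     total REAL CHECK (total > 0));
CREATE TABLE orders_audit (order_id INTEGER, note TEXT);
CREATE TRIGGER trg_orders_audit AFTER INSERT ON orders
BEGIN
    INSERT INTO orders_audit VALUES (NEW.id, 'inserted');
END;
""")

# Inserting into the test copy now exercises the trigger, exactly as a
# test against the full database would.
conn.execute("INSERT INTO orders VALUES (1, 9.99)")
audit = conn.execute("SELECT * FROM orders_audit").fetchall()
print(audit)  # [(1, 'inserted')]
```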
TestBench can extract data from IASPs as well as local or remote IBM i servers, so you can test with confidence no matter where data is stored.
By making changes to your data as you extract it, you can add context to your test data to ensure it’s as useful as possible for your test.
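One common example of changing data on the way out is shifting dates into the current period so that date-sensitive logic (ageing, expiry, billing cycles) actually fires during the test. A generic sketch of that transform, with invented field names:

```python
from datetime import date

# Hypothetical extracted rows whose order dates are a year old.
rows = [
    {"id": 10, "order_date": date(2023, 1, 15)},
    {"id": 12, "order_date": date(2023, 3, 2)},
]

def shift_year(d: date, years: int = 1) -> date:
    """Move a date forward by whole years during extraction."""
    return d.replace(year=d.year + years)

extracted = [{**r, "order_date": shift_year(r["order_date"])} for r in rows]
print([r["order_date"].isoformat() for r in extracted])
# ['2024-01-15', '2024-03-02']
```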
TestBench is your first port of call if you need to test anything based on IBM i.
TestBench has powerful capabilities, from vertical data masking to full data reset. But it’s also been designed to be simple and intuitive for users, so you can get on with your job.
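Vertical (column-level) masking replaces sensitive values in a field while keeping the shape of the data intact. A generic illustration of the idea, not TestBench’s actual masking engine, using a deterministic hash so the same input always masks to the same output across files:

```python
import hashlib

def mask_email(email: str) -> str:
    """Replace the local part with a deterministic token, keeping the
    domain so the masked value still looks and behaves like an email."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"

row = {"name": "J Smith", "email": "j.smith@example.com"}
masked = {**row, "email": mask_email(row["email"])}
print(masked["email"])
```

Determinism matters here: if the same email appears in several files, it masks to the same token everywhere, so joins across the masked subset still work.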
TestBench works seamlessly with our automated, code-free testing tool, TestDrive, so you can automate as much testing as possible to further speed up the job.
TestBench can fit snugly into your DevOps pipelines, interfacing directly with your existing tools so you can bring testing rigor to your development workflows.