Making sure your data is protected.
Ok, I’ll admit it, I’m quite old. I’ve been around a while and I’ve seen a few changes along the way.
I was thinking back to when I first started writing business applications on IBM midrange, and how we considered every bit and byte of storage, optimizing the database design for capacity and for the performance of data retrieval. You didn’t go writing long, complex SQL statements or slapping Logical Files (Views) over your data without careful design and consideration.
You read through your code and considered all the possible scenarios; you didn’t just lob it at the compiler in a half-baked state, you compiled it when you expected it to work. Once in a while you would achieve that heady milestone of ‘First Time Final’. One compile & perfection.
Sure, things are different now. Skills, tools, methods, timescales and complexity are all different. But one thing remains the same: if the data is wrong in the database, the whole thing will be wrong.
In other words: Data Rules. It is also the name of one of the really interesting features of TestBench for IBM i, a feature that accumulates knowledge about what the data should consist of under different circumstances and applies these ‘Data Rules’ every time you run a test, to make sure that every insert or update is compliant. And because these rules reflect the cumulative knowledge of the whole team, past and present, it is rare for something bad to escape. If you know that everything hitting the database is accurate and sound, you have gained a solid platform for the rest of the application layers. The Data Rules won’t let those other layers make mistakes in your precious data; it is protected.
The output shows you each and every change that has violated a Data Rule, even if the data has subsequently changed again and perhaps hidden the error from the glare of blunt tools like SQL.
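To make that idea concrete, here is a minimal sketch of the concept in Python. To be clear: TestBench’s actual implementation is its own, and every name below (`DataRule`, `RuleCheckedTable`, `upsert`) is hypothetical, invented purely for illustration. The point it demonstrates is the one above: every insert or update is checked against the rules, and a violation is recorded permanently, so a later “correcting” update cannot hide the fact that bad data once hit the table.

```python
# Illustrative sketch only: hypothetical names, not TestBench's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataRule:
    name: str
    check: Callable[[dict], bool]  # returns True when the row is compliant

@dataclass
class RuleCheckedTable:
    rules: list
    rows: dict = field(default_factory=dict)
    violations: list = field(default_factory=list)

    def upsert(self, key, row):
        # Apply every rule to the incoming change and log any violation.
        for rule in self.rules:
            if not rule.check(row):
                self.violations.append((key, rule.name, dict(row)))
        self.rows[key] = row  # the change still lands; the log remembers

# A negative balance slips in, then is "fixed" by a later update.
table = RuleCheckedTable(rules=[
    DataRule("balance-non-negative", lambda r: r["balance"] >= 0),
])
table.upsert(1001, {"balance": -50})  # violates the rule; logged
table.upsert(1001, {"balance": 200})  # final state now looks fine to SQL
print(table.rows[1001]["balance"])    # 200 - nothing visibly wrong
print(table.violations)               # but the violation is still on record
```

Querying the final state with SQL would show a perfectly healthy balance of 200; only the violation log reveals that a non-compliant value passed through, which is exactly the kind of hidden error the Data Rules output surfaces.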
Things have moved on and, thankfully, I’m not allowed to touch the code anymore, but some things remain the same and always will. The Data Rules! Ok?