Requirements From the QA Chair

by Darren, February 19, 2009
...well, I don't really have a chair on this blog. Think of it as more of a stool with a fancy leather seat on it. There was a very interesting article waaaay back in Aug. 2007 that tried to answer the question of "Why IT Projects Fail". There are a couple of points that I'll be talking about today, but here is the quick list under the Technical and Requirements section of the article:
* Lack of user involvement (resulting in expectation issues)
* Product owner unclear or consistently not available
* Scope creep; lack of adequate change control
* Poor or no requirements definition; incomplete or changing requirements
* Wrong or inappropriate technology choices
* Unfamiliar or changing technologies; lack of required technical skills
* Integration problems during implementation
* Poor or insufficient testing before go-live
* Lack of QA for key deliverables
* Long and unpredictable bug fixing phase at end of project
Ouch....

"Poor or no requirements definition; incomplete or changing requirements." This couldn't be more true. From a QA/testing point of view, strong requirements are where we hang our hats. Clear, solid requirements must be in place to avoid the all-too-common scenario of a designer saying, "well... it's supposed to do that." If you have no requirements, the designer is absolutely justified in saying that, because the requirement is now whatever was implemented in the code. The tester cannot fail a test or raise a problem report for any errors or concerns that the implementation might bring up. So what you have now is testing to what was implemented instead of what was required... two totally different things.

Changing requirements are another bane of a tester's existence. Sure, there is churn in a project, but usually that churn is a result of issues found after the testing phase, and sometimes it's larger than you would like. There is, however, a difference between churn based on requirements you defined before testing started and churn based on requirements that change during your testing cycle. Having requirements change during testing means one thing to a tester: my test coverage is now inadequate. Project management should hate it even more, because they now have an undefined quantity of work to do to get that coverage back. ...not good. Not good at all.

I'm a strong believer in requirements-driven testing. Having that level of traceability in your test cases gets rid of the time wasted on the "well... it's supposed to do that" back-and-forth that plagues projects that don't use it. You solve quite a few project issues if time is taken up front to figure out what your product should be doing in the first place... and you make your testers happy. Happy testers bring donuts... or so I hear.

Darren
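P.S. For anyone who wants a concrete picture of what I mean by traceability, here's a rough sketch using pytest with a custom marker. The requirement IDs, the toy invoice function, and the "requirement" marker name are all made up for illustration; the point is simply that every test case names the requirement it covers, so a failed test points back at a requirement rather than at someone's memory of what the code was supposed to do.

```python
import pytest

# Toy implementation standing in for the real product code (hypothetical).
def calculate_invoice_total(subtotal: float, tax_rate: float = 0.05) -> float:
    """Return the invoice total including tax, rounded to cents."""
    return round(subtotal * (1 + tax_rate), 2)

# Each test carries the ID of the requirement it verifies. Register the custom
# "requirement" marker in pytest.ini (markers = requirement) to avoid the
# unknown-mark warning in recent pytest versions.
@pytest.mark.requirement("REQ-042")  # "Invoice totals shall include 5% sales tax"
def test_invoice_total_includes_tax():
    assert calculate_invoice_total(subtotal=100.00) == 105.00

@pytest.mark.requirement("REQ-043")  # "A zero subtotal shall produce a zero total"
def test_zero_subtotal_invoice():
    assert calculate_invoice_total(subtotal=0.00) == 0.00
```

Because the requirement IDs live right on the test cases, you can grep the markers to build a quick coverage matrix against the requirements document, and when a requirement changes mid-cycle you can see exactly which tests just became inadequate.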