Sunday, October 04, 2009

Feeling guilty about testing .....

Software testing is one of those fuzzy things where the theory in the books differs completely from how testing is done in practice. Unfortunately, the poor practitioners all too often end up feeling guilty for not testing their products properly. Compounding this problem is the fact that there are a whole lot of techniques positioned as 'best practice', 'will find the most bugs' and so on -- unfortunately they do no such thing, except add to the guilt that the testing is poor. 

Why is it so messy? Do we really need to test software as per the books? 
The key observation from my experience is "normal developers test (execute) code to discover behaviour" -- they explore the program to check whether it broadly matches the expected behaviour. Further, developers work with requirements that are incomplete, potentially inconsistent and, sadly, vague. Developers to some extent guess what is expected: they fill in the gaps using similar software systems, common sense, gut feel or experience as reference points. This guesswork is unavoidable, unless the person providing the requirements and the developer implementing them are both 'perfect beings'. 

Back to the question at hand -- so how does one go about testing properly? 

Rather than directly answering it, I would like to take a detour to make my point. Let's say you downloaded some new 'browser' software. Your intention is to browse the web, check e-mail, Facebook, Twitter and so on. How do you go about testing the viability of this product for your needs? Do you start by writing down all these tasks, defining the expected behaviour and then proceeding to validate? People explore tools and systems -- and if they are not too painful, they get used. So, what is the most effective way to test a product? The simplest and easiest method is to 'use it like the end-user would' -- and not feel guilty that you are not doing enough. 

I must add some limitations/variations: 
1. If you have a well-defined set of mathematical functions -- these can be tested (more) formally and quite rigorously (a sketch follows this list). 
2. If you have a workflow (or) a set of rules that can be expressed mathematically (some graph, logic rules) -- a testing approach that matches inputs to expected outputs will work (see the second sketch below). 
3. Safety-critical systems -- typically quite a lot of effort goes into the requirements to make sure that the fuzzy aspects are reduced as far as possible. 
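
To make point 1 concrete, here is a minimal sketch in Python -- the integer square-root function is a hypothetical stand-in for any well-defined mathematical function. The expected behaviour is precise enough to check mechanically, with no guesswork about what 'correct' means:

import random
import unittest


def isqrt(n: int) -> int:
    """Hypothetical example: floor of the integer square root, via Newton's method."""
    if n < 0:
        raise ValueError("n must be non-negative")
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x


class IsqrtProperties(unittest.TestCase):
    def test_defining_property_holds_on_random_inputs(self):
        # For a mathematical function the spec is exact:
        # isqrt(n) is the unique r with r*r <= n < (r+1)*(r+1).
        rng = random.Random(0)
        for _ in range(1000):
            n = rng.randrange(0, 10**12)
            r = isqrt(n)
            self.assertLessEqual(r * r, n)
            self.assertGreater((r + 1) * (r + 1), n)


if __name__ == "__main__":
    unittest.main()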

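And for point 2, a sketch of the input-to-output style of testing -- the shipping rule below is made up purely for illustration; in practice the rows would come straight from the workflow or rule specification:

import unittest


def shipping_rule(weight_kg: float, express: bool) -> str:
    """Hypothetical rule: pick a shipping method from weight and urgency."""
    if express:
        return "courier"
    if weight_kg <= 2:
        return "letter_post"
    if weight_kg <= 20:
        return "parcel_post"
    return "freight"


class ShippingRuleTable(unittest.TestCase):
    # Each row pairs an input with the expected output.
    CASES = [
        ((0.5, False), "letter_post"),
        ((0.5, True), "courier"),
        ((2.0, False), "letter_post"),   # boundary
        ((2.1, False), "parcel_post"),
        ((20.0, False), "parcel_post"),  # boundary
        ((25.0, False), "freight"),
    ]

    def test_inputs_map_to_expected_outputs(self):
        for (weight, express), expected in self.CASES:
            self.assertEqual(shipping_rule(weight, express), expected)


if __name__ == "__main__":
    unittest.main()

Each row is traceable back to the specification, which is what makes this kind of automation worth the effort.
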
In cases like the above, test coverage also comes in very handy. Effort can be put into automation and formal testing, since it is actually likely to work. Things like compilers, parsers, business rule engines, workflow engines, chess-playing software, data structures, well-defined algorithms and so on all fall into the above categories. The rest of the time ... "use it to test it". -- rv
