Does Testing Add Value?

One question I am often asked is “Does testing add value?” Many people who ask this question are development or Q/A managers who are trying to figure out how testing fits within an agile environment. My simple answer is: “No, it does not.”

Now, before you go off half-cocked, I am not saying we should ship a product that is held together by dental floss. I am definitely not saying we should keep throwing in more and more features while leaving less time and fewer resources for maximizing quality. I mean quite the contrary.

When I say testing has no value, I mean the time it takes to test a product doesn’t add value to the end customer. Value, by this definition, is “something that changes the form, feature or function of the product in a way the customer is willing to pay for.” Customers don’t want to pay for testing. They want to pay for a high-quality product. To illustrate, let’s look at two different ways to develop a product:

1. Big bang approach: Spend a lot of time building while ignoring defects along the way, because they will be addressed in the ‘Testing’ phase. Defects pile up, grow more and more complex and, worse, get more and more buried. The developers then need to spend most of their time just finding the defects before they can fix them. That is a lot of time and expense that has to be built into the cost of the product, and it doesn’t even account for the inefficiency of bouncing defects back and forth between the Q/A and development groups. Customers pay for this necessary step, but it adds no value to them. Much of the time and cost of finding and fixing these defects could have been avoided.

2. Alternatively, a savvy organization will adopt practices that build integrity into the product and will test as it goes, in small batches. In fact, it will develop requirements AS test cases. The developers build to those test cases, which are objectively measurable and testable as you go; in essence, testing drives development rather than the other way around. This, of course, is the philosophy of test-driven development (a minimal sketch follows this list). The idea is to build small, high-quality slices of the product and integrate them as soon as possible. Part of this continuous integration involves testing the whole as you go. If defects appear, they will most likely have been caused by the latest addition, and if that latest piece is small, the defect can be found and resolved easily.
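
To make the idea concrete, here is a minimal sketch of a requirement captured as a test before the code that satisfies it exists. The requirement, the discount() function, and the 10% rule are hypothetical examples used only for illustration.

```python
# A minimal sketch of test-driven development: the requirement is written as a
# test first, and the code below exists only to satisfy it. The requirement,
# discount(), and the 10% rule are hypothetical, not from this post.

import unittest


def discount(order_total):
    """Hypothetical rule: orders of $100 or more get 10% off."""
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total


class DiscountRequirement(unittest.TestCase):
    """The requirement, expressed as objectively measurable test cases."""

    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertEqual(discount(100), 90.0)
        self.assertEqual(discount(250), 225.0)

    def test_smaller_orders_pay_full_price(self):
        self.assertEqual(discount(99.99), 99.99)


if __name__ == "__main__":
    unittest.main()
```

Because each slice is this small, a failing test points almost immediately at the change that broke it.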

A further step is to avoid creating defects in the first place by using best practices and sound quality principles during development. Another tool is to borrow the concept of a ‘WIP Cap’ from Lean: if the number of open defects in our database reaches a set limit during development (usually five or fewer), we all stop, fix them and move on. It is much more efficient to fix them now than later, when the product is more complex.
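
One way to keep such a cap from being quietly ignored is to check it automatically, for example as a step in the build pipeline. The sketch below assumes a hypothetical fetch_open_defect_count() standing in for a query against whatever issue tracker the team uses; the stop-the-line rule itself is just a comparison against the cap.

```python
# A rough sketch of a Lean-style WIP cap on open defects, run as a gate in a
# build pipeline. fetch_open_defect_count() is a hypothetical stand-in for a
# query against your own issue tracker; the cap of 5 comes from the post.

import sys

DEFECT_WIP_CAP = 5  # once this many defects are open, everyone stops and fixes them


def fetch_open_defect_count() -> int:
    """Hypothetical stand-in: replace with a query to your issue tracker."""
    return 3  # hard-coded demo value so the sketch runs on its own


def main() -> int:
    open_defects = fetch_open_defect_count()
    if open_defects >= DEFECT_WIP_CAP:
        print(f"{open_defects} open defects (cap is {DEFECT_WIP_CAP}): "
              "stop feature work and fix them first.")
        return 1  # non-zero exit fails the pipeline step
    print(f"{open_defects} open defects, under the cap of {DEFECT_WIP_CAP}.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```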

The second approach exposes problems in hours and days, not months later when independently built components are finally integrated. It produces a higher-quality product in dramatically less time, and therefore at lower cost and with faster time to market, because teams don’t have to find and fix all the hidden defects that were created by ignoring them along the way. This is more valuable to the customer because their money goes mostly toward creating features, not toward finding hidden defects. Of course, even this approach will not be 100% bug-free, but industry metrics have shown defects dropping by as much as 80% over a development life cycle.

So is testing valuable to the customer? Still no. Is it necessary? Absolutely. However, by using good quality practices to build integrity in, letting testing drive development, integrating in small slices, and testing as you go, the time and effort spent on testing will drop drastically, creating a much more efficient development group.
