Testing has been a standard part of software development for as long as software has been developed. Business Intelligence (BI), however, has been slower to adopt testing as an integrated part of development in BI platforms such as IBM Cognos. Let's explore why BI has been slower to adopt testing practices and the consequences of NOT testing.
Why organizations do not test BI...
- Time constraints. BI projects are under constant pressure to deliver faster, and what some organizations may not realize is that the easiest phase to cut is testing.
- Budget constraints. The thinking is that testing is too expensive and that the organization can't afford a dedicated testing team.
- Faster is better. This is not necessarily an "agile" approach and may only get you to the wrong place quicker.
- The "just do it right the first time" mentality. This naive approach assumes that the presence of quality control should reduce the need for testing.
- Lack of ownership. This is similar to the previous bullet. The thinking is that “our users will test it.” This approach can lead to unhappy users and lots of support tickets.
- Lack of tools. The misconception that they don't have the right technology for testing.
- Lack of understanding of testing. For example,
  - Testing should evaluate the accuracy and validity of data, data consistency, timeliness of data, performance of delivery, and ease of use of the delivery mechanism.
  - Testing during a BI project may include regression testing, unit testing, smoke testing, integration testing, user acceptance testing, ad hoc testing, stress/scalability testing, and system performance testing.
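As a concrete illustration of the first point above, a data-accuracy and data-consistency check can be as simple as comparing totals from the source system against totals from the BI report's backing query. This is a minimal sketch with hypothetical, hardcoded regional revenue figures; in practice each dictionary would be populated by a database query.

```python
# Hypothetical regional revenue totals, hardcoded so the sketch is
# self-contained; in a real test each dict would be loaded by a query
# against the source system and the BI report, respectively.
source_totals = {"East": 120_500.00, "West": 98_250.00}
report_totals = {"East": 120_500.00, "West": 98_250.00}

def check_consistency(source, report):
    """Data consistency: the report must cover exactly the source's regions."""
    return set(source) == set(report)

def check_accuracy(source, report, tolerance=0.01):
    """Data accuracy: each regional total must match the source within tolerance."""
    return all(abs(report[r] - v) <= tolerance for r, v in source.items())

print(check_consistency(source_totals, report_totals))  # True
print(check_accuracy(source_totals, report_totals))     # True
```

Checks like these are cheap to automate and can run on every data load, catching corruption or lineage breaks long before users notice.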
What are the costs of NOT testing BI?
- Inefficient designs. Poor architecture may go undiscovered if testing is skipped. Design issues can affect usability, performance, and re-use, as well as maintenance and upkeep.
- Data integrity issues. Data corruption or data lineage challenges can lead to lack of trust in the numbers.
- Data validation issues. Decisions made on bad data may be devastating to the business. There’s nothing worse than trying to manage by metrics that are based on incorrect information.
- Decreased user adoption. If the numbers aren’t right, or if the application is not user-friendly, users just aren’t going to use your shiny new enterprise BI software.
- Increased costs due to lack of standardization.
- Increased costs to repair defects in later stages of the BI development life cycle. Any issues discovered beyond the requirements phase will cost exponentially more than if discovered earlier.
Now that we've laid out why organizations might not be testing and the pitfalls that occur when you do not test BI, let's look at some studies on testing in software development.
Studies show testing saves $$!
One study of 139 North American companies, ranging in size from 250 to 10,000 employees, reported annual debugging costs of $5.2M to $22M. This cost range reflects organizations that do not have automated unit testing in place. Separately, research by IBM and Microsoft found that with automated unit testing in place, the number of defects can be reduced by between 62% and 91%. This means that dollars spent on debugging could be reduced from the $5.2M to $22M range to the $0.5M to $8.4M range. That's a huge savings!
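The arithmetic behind those figures is easy to verify: apply the best-case defect reduction (91%) to the low end of the cost range and the worst-case reduction (62%) to the high end. A quick sketch, using only the numbers cited above:

```python
# Reproducing the article's arithmetic: annual debugging costs of
# $5.2M-$22M, reduced by 62%-91% when automated unit testing is in place.
low_cost, high_cost = 5.2, 22.0           # $M per year, from the study
best_reduction, worst_reduction = 0.91, 0.62

new_low = low_cost * (1 - best_reduction)     # best case on the low end
new_high = high_cost * (1 - worst_reduction)  # worst case on the high end

print(f"${new_low:.1f}M to ${new_high:.1f}M")  # $0.5M to $8.4M
```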
Costs to fix errors quickly escalate.
A paper on successful software development tactics demonstrates that most errors are made early in the development cycle, and that the longer you wait to detect and correct them, the more it costs to fix them. So it doesn't take a rocket scientist to draw the obvious conclusion: the sooner errors are discovered and fixed, the better. Speaking of rocket science, it just so happens that NASA published a paper on just that, "Error Cost Escalation Through the Project Life Cycle."
It is intuitive that the costs to fix errors increase as the development life cycle progresses. The NASA study was performed to determine just how quickly the relative cost of fixing errors grows when they are discovered later. The study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in the paper presume development of a hardware/software system with project characteristics similar to those of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. This study is representative of other research that has been done.
From the nearby chart, research from TRW, IBM, GTE, Bell Labs, TDC, and others shows that if the cost of fixing an error discovered during the requirements phase is defined as 1 unit, the cost to fix that error if found during the design phase is 2 units; at the code and debug phase, 3 units; at the unit test and integrate phase, 5 units; at the systems test phase, 20 units; and once the system is in the operation phase, the relative cost to correct the error has risen to 98 units, nearly 100 times the cost of correcting it in the requirements phase.
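Those multipliers can be turned into concrete dollars. The sketch below uses the phase factors cited above and a hypothetical $1,000 requirements-phase fix cost (an assumed figure, chosen only for illustration) to show how quickly the same defect gets more expensive:

```python
# Relative cost multipliers from the chart, with the requirements
# phase normalized to 1 unit.
cost_multiplier = {
    "requirements": 1,
    "design": 2,
    "code and debug": 3,
    "unit test and integrate": 5,
    "systems test": 20,
    "operation": 98,
}

# Hypothetical example: a defect that costs $1,000 to fix during
# requirements costs roughly this much at each later phase.
for phase, factor in cost_multiplier.items():
    print(f"{phase:>24}: ${1_000 * factor:,}")
```

The last line of output makes the point starkly: the same $1,000 defect costs $98,000 to fix once the system is in operation.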
The bottom line is that it is much more costly to repair defects if they’re not caught early.
Significant research has been conducted that demonstrates the value of early and continuous testing in software development. We in the BI community can learn from our friends in software development. Even though most formal research has focused on software development, similar conclusions can be drawn about BI development. The value of testing is indisputable, but many organizations have been slow to adopt formal testing of their BI environment and integrate testing into their BI development processes. The costs of not testing are real. The risks associated with not testing are real.