The MITs method measures the performance of the test effort so that test methods, assumptions, and inventories can be adjusted and improved for future efforts. A performance metric based on the percentage of errors found during the test cycle is used to evaluate the effectiveness of the test coverage.
How MITs Works
The process works by answering the following questions:
- What do we think we know about this project?
- How big is the test effort?
- If we can't test everything, what should we test?
- How long will the effort take?
- How much will it cost? (How much can we get?)
- How do I identify the tests to run?
- Are we on schedule? Have we tested enough?
- How successful was the test effort? Was the test coverage adequate? Was the test effort adequate?
Answers
- We find out by stating (publishing) the test inventory. The inventory lists all the requirements and specifications that we know about and includes our assumptions. In a RAD (rapid application development) effort, we often start with only our assumptions, because there may not be any formal requirements or specifications; you can start by writing down what you think the system is supposed to do. Even in projects with formal specifications, there are still assumptions, for example, that testing the system on three particular operating systems will be adequate. If we do not publish our assumptions, we are deprived of a valuable opportunity to have incorrect assumptions corrected.
- How many tests are there? We find out by enumerating everything there is to test. This is not a count of the things we plan to test; it is a count of all the tests that can be identified. This begins the expansion of the test inventory.
- The most important things, of course! We use ranking criteria to prioritize the tests, and then MITs risk analysis to determine the most important test set from the inventory (see the ranking sketch after this list).
- Once the test set has been identified, fill out the MITs sizing worksheet to size and estimate the effort. The completed worksheet forms the basis for the test agreement.
- Negotiate with management for the resources required to conduct the test effort. Using the worksheet, you can calculate how many tests, testers, machines, and so on will be required to fit the test effort into the desired timeline (a rough sizing calculation is sketched after this list). Use the worksheet to understand and explain resource and test-coverage trade-offs in order to meet a scheduled delivery date.
- Use the MITs analysis and the test inventory to pick the most important areas first, and then perform path and data analysis to determine the most important tests to run in that area. Once you have determined the most important tests for each inventory item, recheck your inventory and the sizing worksheet to make sure your schedule is still viable. Renegotiate if necessary. Start running tests and develop new tests as necessary. Add your new tests to the inventory.
- Use S-curves to track test progress and help determine the end of the test effort. An S-curve is the plot of a logistic function: slow initial change, then rapid change, then slow change again, producing an "S"-shaped line when graphed. The S-curve is often used to describe how new technology is adopted: people are slow to begin using it, growth is rapid once it is accepted, and growth slows again when the market is saturated and/or newer technology comes along. (A sketch of an S-curve progress check appears after this list.)
- Use the performance metric to answer these questions and to improve future test efforts. The historical record of what was accomplished last time is the best starting point for improvement this time. If the effort was conducted in a methodical, reproducible way, the chances of duplicating and improving it are good. (A sketch of the metric calculation also follows this list.)
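The ranking step boils down to simple arithmetic over the inventory. The sketch below is a minimal illustration, not the MITs worksheet itself: the criteria names (severity, likelihood, cost), the equal-weight average, and the cutoff value are assumptions chosen for the example; substitute whatever ranking criteria your project actually uses.

```python
# Minimal sketch of ranking an inventory to find the most important tests.
# Criteria names, weights, and cutoff are illustrative assumptions only.

inventory = [
    # (inventory item, severity of failure 1-5, likelihood of use 1-5, cost to fix late 1-5)
    ("Login and authentication", 5, 5, 4),
    ("Monthly report export",    3, 2, 3),
    ("Password reset e-mail",    4, 3, 3),
    ("Color theme preferences",  1, 4, 1),
]

def rank_index(severity, likelihood, cost):
    """Average the ranking criteria into one priority score (higher = test first)."""
    return (severity + likelihood + cost) / 3

ranked = sorted(inventory, key=lambda item: rank_index(*item[1:]), reverse=True)

cutoff = 3.0  # hypothetical threshold: items below this are candidates to drop if time is short
for name, severity, likelihood, cost in ranked:
    score = rank_index(severity, likelihood, cost)
    marker = "MUST TEST" if score >= cutoff else "negotiable"
    print(f"{score:4.2f}  {marker:10}  {name}")
```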
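The sizing worksheet works out to arithmetic like the following. This is a simplified sketch with invented figures (tests selected, minutes per test, rerun allowance, productive hours per day); the real worksheet carries more line items, but the trade-off logic is the same: fewer testers or fewer days means a longer schedule or less coverage.

```python
# Simplified sizing sketch: every figure below is invented for illustration.
planned_tests    = 480   # tests selected from the inventory by MITs ranking
minutes_per_test = 15    # average time to set up, run, and log one test
rerun_factor     = 1.5   # allowance for reruns after bug fixes (assumption)
productive_hours = 5     # productive test hours per tester per day (assumption)

total_hours = planned_tests * minutes_per_test * rerun_factor / 60

for testers in (2, 3, 4):
    days = total_hours / (testers * productive_hours)
    print(f"{testers} testers -> about {days:.0f} working days")

# The same formula run backwards shows how many tests fit into a fixed window,
# i.e., how much coverage must be negotiated away to hit the delivery date.
affordable_tests = 10 * 3 * productive_hours * 60 / (minutes_per_test * rerun_factor)
print(f"Tests that fit in 10 days with 3 testers: {affordable_tests:.0f}")
```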
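Plotting cumulative tests completed against time produces the S-curve. The sketch below simply evaluates the logistic function described above so that a planned curve can be compared with actual progress; the parameter values (total tests, midpoint day, steepness) and the recorded daily counts are illustrative assumptions.

```python
import math

def planned_s_curve(day, total_tests=480, midpoint_day=10, steepness=0.6):
    """Logistic function: slow start, rapid middle, slow finish.
    All parameter values here are illustrative assumptions."""
    return total_tests / (1 + math.exp(-steepness * (day - midpoint_day)))

# Cumulative tests actually completed at the end of each day (made-up data).
actual = [2, 3, 5, 9, 16, 28, 45, 75, 120, 180]

for day, done in enumerate(actual, start=1):
    planned = planned_s_curve(day)
    print(f"Day {day:2}: planned {planned:4.0f}, actual {done:3}, gap {done - planned:+.0f}")
```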
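The performance metric named at the start of this section, the percentage of total errors found during the test cycle, is also simple arithmetic once post-release defects are counted. The sketch below assumes the counts come from a bug-tracking system; the numbers are invented.

```python
# Percentage of total known errors found during the test cycle (invented counts).
found_during_test = 184   # defects logged by the test effort
found_after_ship  = 16    # defects reported in the field after release

total_defects = found_during_test + found_after_ship
effectiveness = 100 * found_during_test / total_defects
print(f"Test effectiveness: {effectiveness:.1f}% of known defects found before release")
```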
In the ideal scenario, you do all of these things, because all of these steps are necessary if you plan to do the very best test effort possible. The next thing to recognize is that the real scenario is rarely ideal. The good news is that this method is flexible, even agile. Any steps you perform will add to the value of the test effort; if you don't do them all, there is no penalty or detriment to your effort. Finally, the steps are listed in the order that will give you the best return on your investment. This order and the relative importance of the steps differ for different types of development projects.
Different environments have different needs, and these needs mandate different priorities in the test approach.