9.8.08

Certification Process Checklists

Checklist for Testing (mark each item Pass or Fail)

Are test plans and procedures created and reviewed?

Are test results recorded?

Are defects or problems recorded, assigned, and tracked to closure?

Is there an adequate test process to ensure that areas impacted by changes are retested?

Checklist for Measurement (mark each item Pass or Fail)

Is the software validated (tested) as a complete system in an environment as similar as possible to the final operating environment? Is this done prior to delivery and acceptance?

If field-testing is required, are the responsibilities of the supplier and the purchaser defined? Is the user environment restored following the test?

Are product metrics collected and used to manage the testing effort?

Are product defects measured and reported?

Is corrective action taken if metric levels exceed established target levels?

Are improvement goals established in terms of metrics?

Are process metrics collected to measure the effectiveness of the testing process in terms of schedule and in terms of fault prevention and detection?

Checklist for Tools/Techniques (mark each item Pass or Fail)

Are tools and techniques used to help make testing and management processes more effective?

Are the tools and techniques in use reviewed as required and improved upon?

Checklist for Training (mark each item Pass or Fail)

Are training needs identified according to a procedure?

Is training conducted for all personnel performing work related to quality?

Are personnel who are performing specific tasks qualified on the basis of appropriate education, training, and/or experience?

Are records kept of personnel training and experience?

Checklist for Documentation (mark each item Pass or Fail)

Are test plans, requirements, and other documents revision controlled?

Do procedures exist to control document approval and issue?

Are changes to controlled documents reviewed and approved?

Are current versions of test documents identifiable by a master list or document control procedures?

Checklist for Configuration Management (mark each item Pass or Fail)

Is there a Configuration Management (CM) system that identifies and tracks versions of the software under test, software components, build status, and changes? Does the system control simultaneous updates?

Does the configuration management plan include a list of responsibilities, CM activities, CM tools and techniques, and timing of when items are brought under CM control?

Is there a mechanism and procedure that enables software, hardware, and files to be uniquely identified throughout the entire software development lifecycle?

Is there a documented mechanism to identify, record, review, and authorize changes to software items under configuration management? Is this process always followed?

Are affected personnel notified of software changes?

Is the status of software items and change requests reported?

Defect Removal Efficiency (DRE)


A more powerful metric for test effectiveness (and the one that we recommend) can be created using both of the defect metrics discussed above: defects found during testing and defects found during production. What we really want to know is, "How many bugs did we find out of the set of bugs that we could have found?" This measure is called Defect Removal Efficiency (DRE).
The later we discover a bug, the greater harm it does and the more it costs to fix.
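
A minimal sketch of the DRE calculation described above, assuming the two raw defect counts are available from the bug-tracking system; the function name and example numbers are invented for illustration:

    # Minimal sketch of Defect Removal Efficiency (DRE).
    # Inputs: defects found by the test team before release, and
    # defects reported from production afterward.
    def defect_removal_efficiency(found_in_test: int, found_in_production: int) -> float:
        """Return DRE as the percentage of all known defects caught before release."""
        total = found_in_test + found_in_production
        if total == 0:
            return 100.0  # nothing found anywhere; treat as fully effective
        return 100.0 * found_in_test / total

    # Example: 180 defects found during testing, 20 reported from production -> 90.0
    print(defect_removal_efficiency(180, 20))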

Incident Description

The author of the incident report should include enough information so that the readers of the report will be able to understand and replicate the incident. Sometimes, the test case reference alone will be sufficient, but in other instances, information about the setup, environment, and other variables is useful.

Section headings and their descriptions:
4.1 Inputs: Describes the inputs actually used (e.g., files, keystrokes, etc.).
4.2 Expected Results: This comes from the test case that was running when the incident was discovered.
4.3 Actual Results: Actual results are recorded here.
4.4 Anomalies: How the actual results differ from the expected results. Also record other data if it appears to be significant, such as unusually light or heavy volume on the system, the fact that it is the last day of the month, etc.
4.5 Date and Time: The date and time of the occurrence of the incident.
4.6 Procedure Step: The step in which the incident occurred. This is particularly important if you use long, complex test procedures.
4.7 Environment: The environment that was used (e.g., system test environment or acceptance test environment, customer 'A' test environment, beta site, etc.)
4.8 Attempts to Repeat: How many attempts were made to repeat the test?
4.9 Testers: Who ran the test?
4.10 Observers: Who else has knowledge of the situation?
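
A minimal sketch of how the template's sections 4.1-4.10 could be captured as a record in a test-management tool; the class and field names are assumptions for illustration, not part of any standard:

    # Illustrative record mirroring sections 4.1-4.10 of the incident template above.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class IncidentReport:
        inputs: str                 # 4.1  files, keystrokes, etc. actually used
        expected_results: str       # 4.2  taken from the test case being run
        actual_results: str         # 4.3  what actually happened
        anomalies: str              # 4.4  how actual differed from expected, plus context
        occurred_at: datetime       # 4.5  date and time of the incident
        procedure_step: str         # 4.6  step in the test procedure
        environment: str            # 4.7  e.g. "system test", "beta site"
        attempts_to_repeat: int     # 4.8  how many times the test was rerun
        testers: List[str] = field(default_factory=list)    # 4.9  who ran the test
        observers: List[str] = field(default_factory=list)  # 4.10 who else knows about it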

Example of Minor, Major, and Critical Defects
Minor: Misspelled word on the screen.
Major: System degraded, but a workaround is available.
Critical: System crashes.

Repetitive Tasks


Repetitive tasks, such as regression tests, are prime candidates for automation because they're typically executed many times. Smoke, load, and performance tests are other examples of repetitive tasks that are suitable for automation.
If the application being tested is unstable or changing rapidly, automating the test scripts may be difficult.
Regression tests are tests that are run after changes (corrections and additions) are made to the software to ensure that the rest of the system still works correctly.
Timing is everything. Trying to implement a major tool or automation effort in the midst of the biggest software release of all time is not a good strategy.
A test set is a group of test cases that cover a feature or system.
As a rule of thumb, we normally recommend that the regression test set (or at least the smoke test) be run in its entirety early on to flag areas that are obviously problematic.
No matter how good you and your colleagues are at designing test cases, you'll always think of new test cases to write when you begin test execution.
Obviously, the results of each test case must be recorded. If the testing is automated, the tool will record both the input and the results. If the tests are manual, the results can be recorded right on the test case document. In some instances, it may be adequate to merely indicate whether the test case passed or failed. Failed test cases will also result in an incident report being generated. Often, it may be useful to capture screens, copies of reports, or some other output stream.

Black-Box vs. White-Box


Black-box testing or behavioral testing is testing based upon the requirements and, just as the name implies, the system is treated as a "black box." That is, the internal workings of the system are unknown. In black-box testing the system is given a stimulus (input) and if the result (output) is what was expected, then the test passes. No consideration is given to how the process was completed.
In white-box testing, an input must still produce the correct result in order to pass, but now we're also concerned with whether or not the process worked correctly. White-box testing is also called structural testing because it's based upon the object's structure.
The creation and execution of tests is best done by the people who understand the environment associated with that level of test.
Creating automated test scripts can often take more expertise and time than creating manual tests. Some test groups use the strategy of creating all tests manually, and then automating the ones that will be repeated many times. In some organizations, this automation may even be done by an entirely separate group. If you're working in an environment where it takes longer to write an automated script than a manual one, you should determine how much time is saved in the execution of the automated scripts. Then, you can use this estimate to predict how many times each script will have to be executed to make it worthwhile to automate. This rule of thumb will help you decide which scripts to automate. Unless there is very little cost in automating the script (perhaps using capture-replay, but don't forget the learning curve), it's almost always more efficient to execute the test manually if it's intended to be run only once.
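
A minimal sketch of that break-even reasoning; the hour figures and function name below are invented for illustration, not measured values:

    # Rough break-even estimate: how many executions before automating a script pays off?
    def runs_to_break_even(automate_cost_hrs: float,
                           manual_run_hrs: float,
                           automated_run_hrs: float) -> float:
        """Number of executions at which automation becomes cheaper than manual runs."""
        savings_per_run = manual_run_hrs - automated_run_hrs
        if savings_per_run <= 0:
            return float("inf")  # automation never pays off
        return automate_cost_hrs / savings_per_run

    # Example: 8 hours to automate, 1 hour per manual run, 0.1 hours per automated run.
    print(runs_to_break_even(8, 1.0, 0.1))  # -> about 8.9, so worthwhile from ~9 runs on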

Levels of Test Planning

Test planning SHOULD be separated from test design.

In addition to the Master Test Plan, it is often necessary to create detailed or level-specific test plans. On a larger or more complex project, it's often worthwhile to create an Acceptance Test Plan, System Test Plan, Integration Test Plan, Unit Test Plan, and other test plans, depending on the scope of your project. Smaller projects (smaller in scope, number of participants, and organizations involved) may find that they only need one test plan, which will cover all levels of test. Deciding the number and scope of test plans required should be one of the first strategy decisions made in test planning. As the complexity of a testing activity increases, the criticality of having a good Master Test Plan increases exponentially.

If your test plan is too long, it may be necessary to create a number of plans of reduced scope built around subsystems or functionality.

It's important that an organization have a template for its test plans. If a template doesn't meet your particular requirements, you should feel free to customize it as necessary.

What you test is more important than how much you test.

Regression testing is retesting previously tested features to ensure that a change or bug fix has not introduced new problems.

Confirmation testing is rerunning tests that revealed a bug to ensure that the bug was fully and actually fixed.

To be most effective, test planning must start at the beginning and proceed in parallel with software development. General project information is used to develop the master test plan, while more specific software information is used to develop the detailed test plans. This approach will target testing to the most effective areas, while supporting and enhancing your development process. When fully implemented, test planning will provide a mechanism to identify improvements in all aspects of the system and development process.

Preventive Testing

Preventive testing uses the philosophy that testing can actually improve the quality of the software being tested if it occurs early enough in the lifecycle. Specifically, preventive testing requires the creation of test cases to validate the requirements before the code is written.

Testware is any document or product created as part of the testing effort (e.g., test cases, test plans, etc.). Testware is to testing what software is to development.

An added benefit of creating the test cases before the code is that the test cases themselves help document the software.

Software Test Documentation Template for Test Documents


1. Test Plan: Used for the master test plan and level-specific test plans.

2. Test Design Specification: Used at each test level to specify the test set architecture and coverage traces.

3. Test Case Specification: Used as needed to describe test cases or automated scripts.

4. Test Procedure Specification: Used to specify the steps for executing a set of test cases.

5. Test Log: Used as needed to record the execution of test procedures.

6. Test Incident Report: Used to describe anomalies that occur during testing or in production. These anomalies may be in the requirements, design, code, documentation, or the test cases themselves. Incidents may later be classified as defects or enhancements.

7. Test Summary Report: Used to report completion of testing at a level or a major test objective within a level.

Roles and Responsibilities

Manager: Communicate, plan, and coordinate.

Analyst: Plan, inventory, design, and evaluate.

Technician: Implement, execute, and check.

Reviewer: Examine and evaluate.

Test planning is one of the keys to successful software testing, yet it's frequently omitted due to time constraints, lack of training, or cultural bias.

Test planning CAN'T be separated from project planning.

All important test planning issues are also important project planning issues.

5.8.08

Risk Analysis

To mitigate the risk of some event causing damage, you must first estimate the probability that the event will occur. This probability has to be translated into some quantity, usually a percentage-for example, "There is a 50 percent chance that this will happen." Next, you need to determine the severity of such a failure. Severity is usually measured in terms of cost, such as dollars, or in loss of life. If the severity is minor, then even a high probability of occurrence may still be judged to pose only a trivial problem.

The probability that a thing will or won't occur can be calculated under certain circumstances-especially if you can answer a question like, "What was the outcome last time?" or "Do we know if the platform can really do what the maker claims it can?" If you can't provide a good measured answer to these questions up front, then you will need a strategy for dealing with the events that will occur later in the process. If the probability and severity cannot be measured, then they must be estimated. MITs risk analysis provides a formal method for both estimating up front and dealing with events as they unfold. In this chapter, we look at this formal approach to establishing risk and prioritizing the items on the test inventory.

MITs risk analysis uses both quantitative and qualitative analysis to establish a numeric value for risk based on a number of specific criteria. In the early planning phases of a test effort, this risk number is used to focus test resources to size the test effort. As the inventory evolves, the risk ranking plays an important part in actual test selection and optimal test coverage determination.
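
A minimal sketch of the quantitative side of such an analysis, assuming a simple probability-times-severity-weight index rather than the actual MITs criteria; the item names, probabilities, and weights below are invented for illustration:

    # Illustrative risk ranking for test inventory items (not the actual MITs formula).
    # Each item carries a probability of failure (0.0-1.0) and a severity rank (1 = worst).
    inventory = [
        {"item": "Login/authentication", "probability": 0.30, "severity": 1},
        {"item": "Monthly report layout", "probability": 0.70, "severity": 4},
        {"item": "Payment processing",    "probability": 0.20, "severity": 1},
    ]

    def risk_index(probability: float, severity: int) -> float:
        """Higher value = higher risk: likely failures with severe consequences rank first."""
        severity_weight = {1: 10, 2: 5, 3: 2, 4: 1}[severity]
        return probability * severity_weight

    # Sort the inventory so the riskiest items get test resources first.
    for entry in sorted(inventory,
                        key=lambda e: risk_index(e["probability"], e["severity"]),
                        reverse=True):
        print(entry["item"], risk_index(entry["probability"], entry["severity"]))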

Metrics to Measure Performance: Was It Worth It?

Test Effectiveness: How Good Were the Tests?

Test Coverage: How Much of It Was Tested?

The Bug Fix Rate: How Many of the Bugs That Were Found Were Fixed?

The Number of Bugs Found

For this metric, there are two main categories: (1) bugs found before the product ships or goes live and (2) bugs found after-or, alternatively, those bugs found by testers and those bugs found by customers. As already noted, this is a very weak measure until you bring it into perspective using other measures, such as the severity of the bugs found.

Bug Type Classification

First of all, bugs are bugs; the name is applied to a huge variety of "things." Types of bugs can range from a nuisance misunderstanding of the interface, to coding errors, to database errors, to systemic failures, and so on.

Like severity, bug classifications, or bug types, are usually defined by a local set of rules. These are further modified by factors like reproducibility and fixability.

In a connected system, some types of bugs are system "failures," as opposed to, say, a coding error. For example, the following bugs are caused by missing or broken connections:

· Network outages.

· Communications failures.

· In mobile computing, individual units that are constantly connecting and disconnecting.

· Integration errors.

· Missing or malfunctioning components.

· Timing and synchronization errors.

These bugs are actually system failures. These types of failure can, and probably will, recur in production. Therefore, the tests that found them during the test effort are very valuable in the production environment. This type of bug is important in the test effectiveness metric.

Severity of Bugs

Severity is a fundamental measure of a bug or a failure. Many ranking schemes exist for defining severity. Because there is no set standard for establishing bug severity, the magnitude of the severity of a bug is often open to debate. The table below shows the definition of the severity metrics and the ranking criteria used in this documentation.
Severity Metrics and Ranking Criteria
Severity 1 Errors: Program ceases meaningful operation.
Severity 2 Errors: Severe function error, but the application can continue.
Severity 3 Errors: Unexpected result or inconsistent operation.
Severity 4 Errors: Design or suggestion.

Bugs

Many people claim that finding bugs is the main purpose of testing. Even though they are fairly discrete events, bugs are often debated because there is no absolute standard in place for measuring them.
Sample Units: Severity, quantity, type, duration, distribution, and cost to find and fix. Note: Bug distribution and the cost to find and fix are derived metrics.
Like tests, bugs also have attributes as discussed in the following articles.

4.8.08

Fundamental Metrics for Software Testing

The following are some typical software testing questions that require measurement to answer:

· How big is it?

o How long will it take to test it?

o How much will it cost to test it?

· What about the bugs?

o How bad were they? What type were they?

o What kind of bugs were found?

o How many of the bugs that were found were fixed?

o How many new bugs did the users find?

· How much of it has to be tested?

· Will it be ready on time?

· How good were the tests?

· How much did it cost to test it?

· Was the test effort adequate? Was it worth it?

· How did it perform?

Good answers to these questions require measurement. If testers don't have good answers to these questions, it is not because there are no applicable metrics; it's because they are not measuring.

Tests

There is no invariant, precise, internationally accepted standard unit that measures the size of a test, but that should not stop us from benefiting from identifying and counting tests. There are many types of tests, and they all need to be counted if the test effort is going to be measured. Techniques for defining, estimating, and tracking the various types of test units are presented in the next several chapters.

Tests have attributes such as quantity, size, importance or priority, and type.

Sample Units (listed simplest to most complex):

· A keystroke or mouse action

· An SQL query

· A single transaction

· A complete function path traversal through the system

· A function-dependent data set

How to Succeed with MITs

A couple of factors will influence which methods and metrics are the right ones for you to start with and which ones are the most useful to you. In fact, you most probably use some of these methods already. The first factor is the ease of implementation. Some of these methods and metrics are much easier to implement and to show a good return on the investment than others. Another factor is the development method that is being used in the project you are approaching.

Plan-driven (heavyweight) development efforts use the same MITs methods as Agile (lightweight) development efforts, but their goals and expectations are different, so the priorities placed on the individual MITs steps are very different.

Methods That Are Most Useful and Easiest to Implement

The following lists show the methods that have been identified as most useful. They are listed according to respondents' perceptions of their ease of implementation.

EASIEST TO IMPLEMENT

· Bug tracking and bug-tracking metrics

· The test inventory and test coverage metrics

· Planning, path analysis, and data analysis

· MITs ranking and ranking criteria (risk analysis)

· The test estimation worksheet

· Test performance metrics

MORE DIFFICULT TO IMPLEMENT

· S-curves

· Test rerun automation

· Automated test plan generation

The Most Important Tests (MITs) Method

The MITs method measures the performance of the test effort so that test methods, assumptions, and inventories can be adjusted and improved for future efforts. A performance metric based on the percentage of errors found during the test cycle is used to evaluate the effectiveness of the test coverage.


How MITs Works

The process works by answering the following questions:

  1. What do we think we know about this project?
  2. How big is the test effort?
  3. If we can't test everything, what should we test?
  4. How long will the effort take?
  5. How much will it cost? (How much can we get?)
  6. How do I identify the tests to run?
  7. Are we on schedule? Have we tested enough?
  8. How successful was the test effort? Was the test coverage adequate? Was the test effort adequate?

Answers

  1. We find out by stating (publishing) the test inventory. The inventory contains the list of all the requirements and specifications that we know about and includes our assumptions. In a RAD (Rapid Application Development) effort, we often start with only our assumptions, because there may not be any formal requirements or specifications. You can start by writing down what you think the system is supposed to do. Even in projects with formal specifications, there are still assumptions-for example, that testing the system on three particular operating systems will be adequate. If we do not publish our assumptions, we are deprived of a valuable opportunity to have incorrect assumptions corrected.
  2. How many tests are there? We find out by enumerating everything there is to test. This is not a count of the things we plan to test; it is a count of all the tests that can be identified. This begins the expansion of the test inventory.
  3. The most important things, of course! We use ranking criteria to prioritize the tests, and then we use MITs risk analysis to determine the most important test set from the inventory.
  4. Once the test set has been identified, fill out the MITs sizing worksheet to size and estimate the effort. The completed worksheet forms the basis for the test agreement.
  5. Negotiate with management for the resources required to conduct the test effort. Using the worksheet, you can calculate how many tests, testers, machines, and so on will be required to fit the test effort into the desired time line. Use the worksheet to understand and explain resource and test coverage trade-offs in order to meet a scheduled delivery date.
  6. Use the MITs analysis and the test inventory to pick the most important areas first, and then perform path and data analysis to determine the most important tests to run in that area. Once you have determined the most important tests for each inventory item, recheck your inventory and the sizing worksheet to make sure your schedule is still viable. Renegotiate if necessary. Start running tests and develop new tests as necessary. Add your new tests to the inventory.
  7. Use S-curves to track test progress and help determine the end of the test effort. An S-curve is a logistic function that depicts an initial slow change, followed by a rapid change, and then a slow change again, producing an "S"-shaped line when graphed. The S-curve is often used to describe how new technology is adopted: initially people are slow to begin using the technology, but once it is accepted, growth is rapid; finally, growth slows when the market is saturated and/or newer technology comes along. (A minimal sketch of such a curve appears after this list.)
  8. Use the performance metric to answer these questions and to improve future test efforts. The historical record of what was accomplished last time is the best starting point for improvement this time. If the effort was conducted in a methodical, reproducible way, the chances of duplicating and improving it are good.
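
A minimal sketch of the logistic shape mentioned in step 7; the total-defect count, growth rate, and midpoint below are invented parameters to compare actual progress against, not measured data:

    # Illustrative S-curve (logistic model) of cumulative defects found over time.
    import math

    def s_curve(week: float, total: float = 200.0, rate: float = 0.8,
                midpoint: float = 6.0) -> float:
        """Expected cumulative defects found by a given week under a logistic model."""
        return total / (1.0 + math.exp(-rate * (week - midpoint)))

    # Print the modeled curve for a 12-week effort; plotting actual weekly counts against
    # it shows whether testing follows the expected slow-fast-slow shape and when it flattens.
    for week in range(1, 13):
        print(f"week {week:2d}: {s_curve(week):6.1f} expected cumulative defects")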

In the ideal scenario, you do all of these things, because all of these steps are necessary if you plan to do the very best test effort possible. The next thing to recognize is that the real scenario is rarely "ideal." The good news is that this method is flexible, even agile: any steps you perform will add to the value of the test effort, and if you don't do them all, there is no penalty or detriment to your effort. Finally, the steps are listed in the order that will give you the best return on your investment. This order and the relative importance of the steps differ for different types of development projects.

Different environments have different needs, and these needs mandate different priorities in the test approach.

How to Determine a Factor of Safety

Factors of safety should be determined based on the error in the previous estimation and then adjusted as needed. Even if a process does not use measurements to arrive at an estimate, a factor of safety can be established for future similar estimates. For example, a test effort was estimated to require 14 weeks. In reality, 21 weeks were required to complete the test effort. The estimate was low by a factor of:

21/14 = 1.5

When the next test estimation effort takes place, if the same or similar methods are used to make the estimate, even if it is based on an I-feel-lucky guess, multiply the new estimate by a factor of 1.5, and you will get an estimate that has been adjusted to be in keeping with reality.
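
A minimal sketch of this adjustment, using the 14-week and 21-week figures from the example above; the function name and the new raw estimate are invented for illustration:

    # Factor of safety derived from the previous effort, applied to the next raw estimate.
    def factor_of_safety(estimated_weeks: float, actual_weeks: float) -> float:
        """Ratio by which the previous estimate fell short of (or exceeded) reality."""
        return actual_weeks / estimated_weeks

    previous_factor = factor_of_safety(estimated_weeks=14, actual_weeks=21)  # -> 1.5
    new_raw_estimate_weeks = 10          # the next "I-feel-lucky" guess (invented)
    adjusted_estimate = new_raw_estimate_weeks * previous_factor             # -> 15.0 weeks
    print(previous_factor, adjusted_estimate)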

Note

It does not matter how a factor of safety is determined; using one improves estimates.

Always Get a Second Opinion

No good reason exists for working without a safety net. Inspections and formal reviews are the most productive ways we know of today to remove defects. Part of the reason is that inspectors bring an outside perspective to the process; the rest is simple human factors. Having someone check your work is very important.

Note

Two points for engineers in software to keep in mind:

1. You need to pick the right tool, method, or approach for the job at hand.

2. You need to remain flexible and plan for change.

Testers do not have to be experts in a system to test it well. They must be trained testers, using a systematic approach, sound reasoning, and measurements to make their case. They must also be well informed about the project and have good channels of communication to the experts. People skills are also a definite plus.

Like it or not, good testing also includes exploration. Good testers are the ones that dig in and explore the system. They want to know how it works. The ones that don't dig in aren't going to get the job done, and the ones that think they know how it works are no good either, because they are closed and biased.

The Engineering Approach

Types of Assumptions

The following are some examples of typical assumptions for a software test effort.

Assumption: Scope and Type of Testing

· The test effort will conduct system, integration, and function testing.

· All unit testing will be conducted by development.

Assumption: Environments That Will Be Tested

· The environments defined in the requirements are the only environments that the test effort will be responsible for verifying and validating.

Environment State(s)

· All operating system software will be installed before testing is begun.

System Behavior

· The system is stable.

· The system will behave in the same way it has in the past.

System Requirements and Specifications

· The system requirements and specifications are complete and up-to-date.

Test Environment Availability

· The test environment will accurately model the real-world environment.

· The test environment will be available at all times for the duration of the test effort.

Bug Fixes

· Bugs found during testing will be fixed within published turnaround times according to priority.

Approaches to Managing Software Testing

Methods and metrics must provide demonstrable value to have credibility and worth.

· The best way is to provide that manager with high-quality information that he or she can use to make decisions that have a positive effect on the bottom line.

To do that, you will have to develop an approach to testing that complements (and can succeed with) the development methodology being used. You will have to measure and keep track of what you measure, and finally, you will have to convert your measurement data into information that management finds truly valuable.

· Another way that you can demonstrate the value of methods and metrics is to measure the cost benefit of not using the methods and metrics and compare that to the cost benefit of using them.

For example, in a case study of an early-1990s shrink-wrap RAD project, a group of trained testers using the methods and metrics in this book found bugs at the rate of two to three per hour, while untrained testers, who were not using formal methods and metrics, found three bugs per day in the same applications. Further, over 90 percent of the bugs reported by the trained testers were fixed, while half of the bugs reported by the untrained testers were returned by developers as unreproducible.

· The best reason for using methods and metrics is that companies using them have a competitive advantage. They get a much better job done for less.

Competent journeyman testers (the ones using methods and metrics) can successfully test any development project, regardless of the development methodology being employed, as long as they have sufficient resources and management support to do so. Developing or tailoring the test approach to suit the needs of the project is a critical part of acquiring those resources and that support.

What is required is a balance between creativity and method. This is where engineers come into the picture.

3.8.08

The quality process - traditional vs. new

Traditional quality assurance principles are not a good fit for today's software projects. Further, the traditional processes by which we ensure quality in software systems are cumbersome and inflexible. Even more important, the traditional tools used by quality assurance are not able to keep up with the pace of software development today. Testers constrained to follow these outmoded practices using these cumbersome tools are doomed to failure.

The quality process must be reinvented to fit the real needs of the development process. The process by which we ensure quality in a product must be improved. A company needs to write its own quality goals and create a process for ensuring that they are met. The process needs to be flexible, and it needs to take advantage of the tools that exist today.

Several new technologies exist that can be used to support quality assurance principles, such as collaboration, which allows all involved parties to contribute and communicate as the design and implementation evolve. These technologies can also help make quality assurance faster and more efficient by replacing traditional paper documentation, distribution, and review processes with instantly available single-source electronic documents, and by supplementing written descriptions with drawings.

Replacing tradition requires a culture change. And, people must change their way of working to include new tools. Changing a culture is difficult. Historically, successful cultures simply absorb new invading cultures, adopt the new ideas that work, and get on with life. Cultures that resist the invasion must spend resources to do so, and so become distracted from the main business at hand.

Software quality is a combination of reliability, timeliness to market, price/cost, and feature richness. The test effort must exist in balance with these other factors. Testers need tools-that is, methods and metrics that can keep up with development-and testers need the knowledge to use those tools.

Traditionally, testing has been a tool to measure the quality of the product. Today, testing needs to be able to do more than just measure; it needs to be able to add to the value of the product.

2.8.08

Improving the Quality Process - Improving Documentation Techniques (3)


Improving the Way Systems Are Described: Replacing Words with Pictures
It can take anywhere from 15 to 30 pages of commentary to adequately describe a process that a flow graphic of this type captures in a single page. This type of flow mapping is far more efficient than describing the process in a written commentary, and it is also far more maintainable. This type of flow can also be used to generate the tests required to validate the system during the test effort.

Visualization techniques create systems that are self-documenting.

Improving the Quality Process - Improving Documentation Techniques (2)

Improving the Way Documents Are Created, Reviewed, and Maintained

We can greatly improve the creation, review, and approval process for documents if they are (1) kept in a single-source repository and (2) reviewed by the entire team with all comments being collected and merged automatically into a single document. Thousands of hours of quality assurance process time can be saved by using a collaborative environment with these capabilities.

In a project managed using a traditional process of distributing design documents via email and paper, collecting comments and then rolling them all back into a single new version takes far more time than doing the same work through a collaborative Web site. Such a site takes a couple of hours to create and another one to two days to write instructions for the reviewers and train the team to use it. One document specialist can be assigned as the permanent support role on the site to answer questions and reset passwords.

The whole concept of having to call in old documents and redistribute new copies in a project of any size is so wasteful that an automated Web-based system can usually pay for itself in the first revision cycle.

Improving the Quality Process - Improving Documentation Techniques (1)

Automating Record Keeping

We certainly need to keep records, and we need to write down our plans, but we can't spend time doing it. The records must be generated automatically as a part of our development and quality process. The records that tell us who, what, where, when, and how should not require special effort to create, and they should be maintained automatically every time something is changed.

The most important (read: cost-effective) test automation has not been preparing automated test scripts. It has been automating the documentation process (via the inventory) and test management, by instituting online forms and a single-source repository for all test documentation and process tracking.

A project Web site proved to be such a useful tool that the company kept it in service for years after the product was shipped; the support groups used it to manage customer issues, internal training, and upgrades to the product.

This automation of online document repositories for test plans, scripts, scheduling, bug reporting, shared information, task lists, and discussions has been so successful that it has taken on a life of its own as a set of project management tools that enable instant collaboration amongst team members, no matter where they are located.

Improving the Quality Process - Picking the Correct Components for Quality in Your Environment

The following are some definitions of the fundamental components that should be the goals of quality assurance.
• The definition of quality is customer satisfaction.
• The system for achieving quality is constant refinement.
• The measure of quality is profit.
• The target goal of the quality process is a hit every time.
Note: Quality can be quantified most effectively by measuring customer satisfaction.
The formula for achieving these goals is:
• Be first to market with the product.
• Ask the right price.
• Get the right features in it-the required stuff and some flashy stuff that will really please the users.
• Keep the unacceptable bugs to an absolute minimum.
Corollary: Make sure your bugs are less expensive and less irritating than your competitor's bugs.

Improving the Quality Process - Introduction

To improve a quality process, you need to examine your technology environment (hardware, networks, protocols, standards) and your market, and develop definitions for quality that suit them. First of all, quality is only achieved when you have balance-that is, the right proportions of the correct ingredients.

Note: Quality is getting the right balance between timeliness, price, features, reliability, and support to achieve customer satisfaction.

Traditional tools used by quality assurance and software testers

· Records. Documentation that keeps track of events, answering the questions when, where, who, how, and why.

· Documents. Standards, quality plan, test plan, process statements, policy.

· Activities. Reviews, change management, version control, testing.

What the Auditors Want to Know from the Testers

When testing a product the auditors want to know:

1. What does the software or system do?

2. What are you going to do to prove that it works?

3. What are your test results? Did it work under the required environment? Or, did you have to tweak it?

· To test means to compare an actual result to a standard. If there is no standard to compare against, there can be no test.

· Obviously, there is great room for improvement in the software testing environment. Testing is often insufficient and frequently nonexistent. But valuable software testing can take place, even in the constraints (and seeming chaos) of the present market, and the test effort can and should add value and quality to the product.

Quality Assurance: All those planned and systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.

· The best combination-formal design inspections, formal quality assurance, and formal testing-removes 77%-95% of defects.

· Quality is not a thing; it is the measure of a thing. Quality is a metric. The thing that quality measures is excellence. How much excellence does a thing possess? Excellence is the fact or condition of excelling; of superiority; surpassing goodness or merit.

1. The definition of quality is "conformance with requirements."

2. The system for achieving quality is "prevention, not cure."

3. The measure of success is "the cost of quality."

4. The target goal of the quality process is "Zero defects-get it right the first time."

· Overplanning and underplanning the product are two of the main failings in software development efforts today.

· A development process that does not allow sufficient time for design, test, and fix cycles will fail to produce the right product.

Software Testing Fundamentals-Methods and Metrics

metric = a measure.

metric system = a set or system of measures.

· The only type of metric used regularly has to do with counting bugs and ranking them by severity. Only a small percentage of respondents measure the bug find rate or the bug fix rate. No other metrics are widely used in development or testing, even among the best-educated and seemingly most competent testers. It can also be inferred from these results that the companies these testers work for do not have a tradition of measuring their software development or test processes.

· Inspection or structured analysis = some documented structured or systematic method of analyzing the test needs of a system.

· Automated test tools are in use today, but test automation is also rated as the most difficult test technique to implement and maintain in the test effort.

· Within months, every major hardware and software vendor had a support presence on the Web. The bug fix process became far more efficient because it was no longer necessary to ship fixes to everyone who purchased the product-only those who noticed the problem came looking for a solution. Thanks to the Internet, the cost of distributing a bug fix fell to almost nothing as more and more users downloaded fixes from the Web. The customer support Web site provided a single source of information and updates for customers and customer service, and the time required to make a fix available to users shrank to insignificance. The cost of implementing these support Web sites was very small and the savings were huge; customer satisfaction and profit margins went up. It has become common practice to distribute bug-fix releases (put the patches and fixes on the Web site) within a few weeks of the initial release-after the market has been captured. Consequently, reliability metrics are not currently considered crucial to the commercial success of the product.

· The way to develop a good cost-benefit statement, and add real credibility to software testing, is to use formal methods and good metrics.

· Regardless of the cause, once a software maker has decided to use formal methods, it must address the question of which formal methods and metrics to adopt. Once methods or a course toward methods has been determined, everyone must be educated in the new methods. Moving an established culture from an informal method of doing something to a formal method of doing the same thing takes time, determination, and a good cost-benefit ratio. It amounts to a cultural change, and introducing culture changes is risky business. Once the new methods are established, it still takes a continuing commitment from management to keep them alive and in use.