10.11.16

User Acceptance Testing

What is User Acceptance Testing?

User acceptance testing is a testing methodology in which the clients/end users are involved in testing the product to validate it against their requirements. It is performed at the client's location or at the developer's site.
For industries such as medicine or aviation, contract and regulatory compliance testing and operational acceptance testing are also carried out as part of user acceptance testing.
UAT is context dependent: the UAT plans are prepared based on the requirements, and it is not mandatory to execute every kind of user acceptance test. The tests are often coordinated by, and contributed to by, the testing team.

User Acceptance Testing - In SDLC

The following diagram explains where user acceptance testing fits in the software development life cycle:
[Diagram: User acceptance testing in the test life cycle]
The acceptance test cases are executed against the test data or using an acceptance test script, and the results are then compared with the expected ones.

Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:
  • Functional Correctness and Completeness
  • Data Integrity
  • Data Conversion
  • Usability
  • Performance
  • Timeliness
  • Confidentiality and Availability
  • Installability and Upgradability
  • Scalability
  • Documentation

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. First, the basic tests are executed, and if the results are satisfactory, the more complex scenarios are executed.
The acceptance test plan has the following attributes:
  • Introduction
  • Acceptance Test Category
  • Operating Environment
  • Test case ID
  • Test Title
  • Test Objective
  • Test Procedure
  • Test Schedule
  • Resources
The acceptance test activities are designed to reach one of the following conclusions:
  1. Accept the system as delivered
  2. Accept the system after the requested modifications have been made
  3. Do not accept the system

Acceptance Test Report - Attributes

The Acceptance test Report has the following attributes:
  • Report Identifier
  • Summary of Results
  • Variations
  • Recommendations
  • Summary of To-Do List
  • Approval Decision

Web Application Testing



What is Web Application Testing?
Web application testing is a software testing technique adopted specifically to test applications hosted on the web, in which the application's interfaces and other functionality are tested.
Web Application Testing - Techniques:
1. Functionality Testing - The following are some of the checks performed, though the list is not limited to these:
  • Verify that there are no dead pages or invalid redirects (a small automated check of this is sketched after this list).
  • Check all the validations on each field.
  • Use wrong inputs to perform negative testing.
  • Verify the workflow of the system.
  • Verify the data integrity.
2. Usability testing - Performed to verify how easy the application is to use.
  • Test the navigation and controls.
  • Check the content.
  • Check for user intuitiveness.
3. Interface testing - Performed to verify the interfaces and the data flow from one system to another.
4. Compatibility testing- Compatibility testing is performed based on the context of the application.
  • Browser compatibility
  • Operating system compatibility
  • Compatibility with various devices such as notebooks, mobiles, etc.
5. Performance testing - Performed to verify the server response time and throughput under various load conditions.
  • Load testing - This is the simplest form of performance testing, conducted to understand the behaviour of the system under a specific load. Load testing measures the response of important business-critical transactions, and the load on the database, application server, etc. is also monitored.
  • Stress testing - Performed to find the upper capacity limit of the system and to determine how the system performs if the current load goes well above the expected maximum.
  • Soak testing - Also known as endurance testing, soak testing is performed to determine the system parameters under a continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.
  • Spike testing - Performed by suddenly increasing the number of users by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.
6. Security testing - Performed to verify whether the application is secure on the web, since data theft and unauthorized access are common issues. The following are some of the vulnerability categories checked when verifying the security level of the system:
  • Injection
  • Broken Authentication and Session Management
  • Cross-Site Scripting (XSS)
  • Insecure Direct Object References
  • Security Misconfiguration
  • Sensitive Data Exposure
  • Missing Function Level Access Control
  • Cross-Site Request Forgery (CSRF)
  • Using Components with Known Vulnerabilities
  • Unvalidated Redirects and Forwards
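As a concrete illustration of the first functionality check above (no dead pages or invalid redirects), here is a minimal sketch in Python. It assumes the third-party requests library is available; the URLs in the usage comment are hypothetical.

    # Minimal sketch of a dead-page / bad-redirect check (assumes the 'requests' library).
    import requests

    def find_broken_links(urls):
        """Return (url, problem) pairs for pages that fail to load cleanly."""
        broken = []
        for url in urls:
            try:
                response = requests.get(url, allow_redirects=True, timeout=10)
                # Any 4xx/5xx status after following redirects is treated as a dead page.
                if response.status_code >= 400:
                    broken.append((url, response.status_code))
            except requests.RequestException as error:
                broken.append((url, str(error)))
        return broken

    # Hypothetical usage:
    # print(find_broken_links(["https://example.com/", "https://example.com/checkout"]))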

15.11.13

How to write a good test case?

Writing test cases is one of the major and most important activities a tester performs during the testing cycle. The approach to writing good test cases is to identify, define, and analyse the requirements.
When you begin writing test cases, there are a few steps you need to follow to ensure that you are writing good ones:


1. Identify the purpose of testing. The first step is to define the purpose of testing, which means understanding the requirements to be tested.
When you start writing test cases for a software module, you must understand its features and the user requirements.


2. The second step is to define how to perform the testing. This includes defining test scenarios. To write good test scenarios you should be well versed with the functional requirements. You need to know how the software is used, covering its various operations.


3. Identify non-functional requirements. The third step is to understand the other aspects of the software related to non-functional requirements, such as hardware requirements, the operating system, security aspects, and other prerequisites such as data files or test data preparation. Testing of non-functional requirements is very important. For example, if the software requires a user to fill in a form, proper time-out logic should be defined by the developer so that the session does not time out while the form is being submitted once the user has filled in all the required information. At the same time, for the same scenario, the tester should also ensure that the user is logged off after a defined idle period so that the security of the application is not breached.


4. The fourth step is to define a framework of test cases. The framework should cover the UI, functionality, fault tolerance, compatibility, and performance categories. Each category should be defined in accordance with the logic of the software application.


5. The next step is to become familiar with the modular principle. It is easy to analyse the relevance of the individual software modules in the specified application; however, it is very important to understand the coupling between the modules and to test their mutual influence.
The test cases should be designed to cover the influence of each module on the other modules of the application. For example, in online shopping software, while testing the shopping cart and order checkout you also need to consider inventory management and validate that the same quantity of the purchased product is deducted from the stores. Similarly, while testing returns, we need to test their effect on the financial part of the application along with the store inventory.
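To make the idea of testing mutual influence concrete, here is a minimal sketch in pytest style. The in-memory inventory, cart, and SKU are hypothetical stand-ins for the real modules; the point is only that the assertion spans two modules (checkout and inventory).

    # Hypothetical in-memory stand-ins for the modules under test (illustration only).
    inventory = {"SKU-123": 10}
    cart = []

    def add_to_cart(sku, quantity):
        cart.append((sku, quantity))

    def checkout():
        # Checkout influences the inventory module by deducting purchased quantities.
        for sku, quantity in cart:
            inventory[sku] -= quantity
        cart.clear()

    def get_stock(sku):
        return inventory[sku]

    def test_checkout_reduces_inventory():
        initial_stock = get_stock("SKU-123")
        add_to_cart("SKU-123", quantity=2)
        checkout()
        # Cross-module check: the purchased quantity must be deducted from the store.
        assert get_stock("SKU-123") == initial_stock - 2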


Structuring of Test Cases
Now you have all the required information to begin writing test cases, so let us talk about the structure of a test case. Requirements of the software are mapped to test scenarios, which are further elaborated into test cases. For each test scenario, we define test cases, and in each test case we define test steps. A test step is the smallest entity in a test case: it specifies the action to be performed and the expected result.


The format of a test case comprises:
1. Test Case ID (a unique number that helps in identifying a specific test case)
2. Module to be tested (usually we provide the Requirement ID to maintain traceability between the test case and the requirements)
3. Test Data (the variables and values needed for the test case)
4. Test Steps (the steps to be executed)
5. Expected Results (how the application should behave after performing the stated test steps)
6. Actual Results (the actual output the tester gets after performing the steps)
7. Result (pass or fail, after comparing the expected and actual results)
8. Comments (a screenshot or any other relevant information to help the developer debug the code)
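As an illustration, here is a hypothetical test case written against this format; the application, IDs, and data below are invented purely for the example.

  Test Case ID: TC_LOGIN_001
  Module to be tested: Login (Requirement ID: REQ-AUTH-01)
  Test Data: username = "testuser", password = "Valid@123"
  Test Steps: 1. Open the login page. 2. Enter the username and password. 3. Click "Sign in".
  Expected Results: The user is taken to the home page and a welcome message is displayed.
  Actual Results: (recorded during execution)
  Result: Pass or Fail, after comparing the expected and actual results.
  Comments: Attach a screenshot if a step fails.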
During testing you will mark your results against each step, and the defect report will reference the related test case ID that failed during execution. This helps a tester relate back to the requirements and understand the business scenario that needs to be fixed in the code.
For writing test cases you can use a simple xls file, or you can select from the wide variety of tools already available. There are commercial tools such as Quality Center and Test Director, which are available under a paid license, and open-source tools such as Bugzilla.


Test cases can be written in great detail, with a large number of steps, or they can be kept relatively simple. I personally do not agree with the approach of including a large number of steps in a single test case.


Here are my thoughts on how a tester can write effective test cases:
1. Self-explanatory and specific – test cases should have sufficient detail so that even a new tester can execute them without any help. All the prerequisites required to execute a specific test should be mentioned in the test itself. Further, each test case should clearly specify the purpose and scope of its steps.
2. Valid and concise – test cases should contain all the steps needed to test the expected behaviour, and nothing more. If a single test case contains too many steps, the tester may lose focus and aim.
3. Traceable – test cases should cover all the requirements of the software, and every test case should be mapped to a Requirement ID. This helps ensure that testing provides 100% coverage of the requirements, and it also helps in impact analysis.
4. Maintainable – as requirements change, the tester should be able to easily maintain the suite of test cases. It should reflect the changes in the software, and the steps should be modified accordingly.
5. Positive and negative coverage – test cases should test boundary values, equivalence classes, and normal and abnormal conditions (see the sketch after this list). Apart from testing for expected results, the negative coverage helps in testing failure conditions and error handling.
6. Coverage of the usability aspect – test cases should include testing of the UI from the point of view of ease of use. The overall layout and colours should be tested against a style guide, if one is defined for the application under test, or against the signed-off mock-up designs. Basic English punctuation, spelling, and drop-down list categorization (such as dependent pick lists) should be covered.
7. Test data – there should be diversity in the data used in test cases, such as valid data, legitimate invalid data (to test boundary values), and illegal or abnormal data (to test error handling and recovery).
8. Non-functional aspects – the test cases should cover scenarios for basic performance testing of the application, such as multi-user operation and capacity tests. They should cover security aspects such as user permissions and the logging mechanism, and, for web applications, browser support.
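A minimal sketch of points 5 and 7 in pytest style. The validation rule (an age field accepting 18 to 60 inclusive) and the function name are assumptions made purely for illustration.

    import pytest

    # Hypothetical validation rule standing in for the code under test:
    # the age field accepts values from 18 to 60 inclusive.
    def is_valid_age(age):
        return 18 <= age <= 60

    # Boundary values plus representatives of the valid and invalid equivalence classes.
    @pytest.mark.parametrize("age, expected", [
        (17, False),   # just below the lower boundary
        (18, True),    # lower boundary
        (35, True),    # representative of the valid class
        (60, True),    # upper boundary
        (61, False),   # just above the upper boundary
        (-5, False),   # abnormal / illegal data
    ])
    def test_age_validation(age, expected):
        assert is_valid_age(age) == expected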


To summarize, the test cases should first cover all the functional requirements, and then we should also include test cases related to the non-functional requirements, as they are equally important for the proper functioning of the software.

9.8.08

Certification Process Checklists

Checklist for Testing (Pass / Fail)
  • Are test plans and procedures created and reviewed?
  • Are test results recorded?
  • Are defects or problems recorded, assigned, and tracked to closure?
  • Is there an adequate test process to ensure that areas impacted by changes are retested?

Checklist for Measurement (Pass / Fail)
  • Is the software validated (tested) as a complete system in an environment as similar as possible to the final operating environment? Is this done prior to delivery and acceptance?
  • If field-testing is required, are the responsibilities of the supplier and the purchaser defined? Is the user environment restored following the test?
  • Are product metrics collected and used to manage the testing effort?
  • Are product defects measured and reported?
  • Is corrective action taken if metric levels exceed established target levels?
  • Are improvement goals established in terms of metrics?
  • Are process metrics collected to measure the effectiveness of the testing process in terms of schedule and in terms of fault prevention and detection?

Checklist for Tools / Techniques (Pass / Fail)
  • Are tools and techniques used to help make testing and management processes more effective?
  • Are the tools and techniques in use reviewed, as required, and improved upon?

Checklist for Training (Pass / Fail)
  • Are training needs identified according to a procedure?
  • Is training conducted for all personnel performing work related to quality?
  • Are personnel who are performing specific tasks qualified on the basis of appropriate education, training, and/or experience?
  • Are records kept of personnel training and experience?

Checklist for Documentation (Pass / Fail)
  • Are test plans, requirements, and other documents revision controlled?
  • Do procedures exist to control document approval and issue?
  • Are changes to controlled documents reviewed and approved?
  • Are current versions of test documents identifiable by a master list or document control procedures?

Checklist for Configuration Management (Pass / Fail)
  • Is there a Configuration Management (CM) system that identifies and tracks versions of the software under test, software components, build status, and changes? Does the system control simultaneous updates?
  • Does the configuration management plan include a list of responsibilities, CM activities, CM tools and techniques, and timing of when items are brought under CM control?
  • Is there a mechanism and procedure that enables software, hardware, and files to be uniquely identified throughout the entire software development lifecycle?
  • Is there a documented mechanism to identify, record, review, and authorize changes to software items under configuration management? Is this process always followed?
  • Are affected personnel notified of software changes?
  • Is the status of software items and change requests reported?

Defect Removal Efficiency (DRE)


A more powerful metric for test effectiveness (and the one that we recommend) can be created using both of the defect metrics discussed above: defects found during testing and defects found during production. What we really want to know is, "How many bugs did we find out of the set of bugs that we could have found?" This measure is called Defect Removal Efficiency (DRE).
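A commonly used formulation of this ratio, consistent with the two defect counts mentioned above, is:

    DRE = defects found during testing / (defects found during testing + defects found during production) x 100%

For example, if 90 defects are found during testing and 10 more escape into production, DRE = 90 / (90 + 10) = 90%.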
The later we discover a bug, the greater harm it does and the more it costs to fix.

Incident Description

The author of the incident report should include enough information so that the readers of the report will be able to understand and replicate the incident. Sometimes, the test case reference alone will be sufficient, but in other instances, information about the setup, environment, and other variables is useful.

Section headings and descriptions:
4.1 Inputs: Describes the inputs actually used (e.g., files, keystrokes, etc.).
4.2 Expected Results: This comes from the test case that was running when the incident was discovered.
4.3 Actual Results: Actual results are recorded here.
4.4 Anomalies: How the actual results differ from the expected results. Also record other data if it appears to be significant, such as unusually light or heavy volume on the system, the fact that it is the last day of the month, etc.
4.5 Date and Time: The date and time of the occurrence of the incident.
4.6 Procedure Step: The step in which the incident occurred. This is particularly important if you use long, complex test procedures.
4.7 Environment: The environment that was used (e.g., the system test environment, the acceptance test environment, customer 'A' test environment, a beta site, etc.).
4.8 Attempts to Repeat: How many attempts were made to repeat the test?
4.9 Testers: Who ran the test?
4.10 Observers: Who else has knowledge of the situation?
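A hypothetical, abbreviated example of how these sections might be filled in (all details are invented purely to illustrate the structure):

  4.1 Inputs: Order form submitted with quantity = 0.
  4.2 Expected Results: Validation message stating that the quantity must be at least 1.
  4.3 Actual Results: Server error page (HTTP 500).
  4.4 Anomalies: Occurred only under unusually heavy end-of-month load.
  4.5 Date and Time: Last working day of the month, 14:32.
  4.6 Procedure Step: Step 7 of the order-entry procedure.
  4.7 Environment: System test environment.
  4.8 Attempts to Repeat: 3 (reproduced each time).
  4.9 Testers: The tester who executed the order-entry procedure.
  4.10 Observers: The development lead.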

Example of Minor, Major, and Critical Defects
Minor: Misspelled word on the screen.
Major: System degraded, but a workaround is available.
Critical: System crashes.

Repetitive Tasks


Repetitive tasks, such as regression tests, are prime candidates for automation because they're typically executed many times. Smoke, load, and performance tests are other examples of repetitive tasks that are suitable for automation.
If the application being tested is unstable or changing rapidly, automating the test scripts may be difficult.
Regression tests are tests that are run after changes (corrections and editions) are made to the software to ensure that the rest of the system still works correctly.
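A minimal sketch of what an automated regression check might look like in pytest style; the add and divide functions below are hypothetical stand-ins for the code under test, and the point is only that the same checks can be rerun unchanged after every change.

    import pytest

    # Hypothetical functions standing in for the code under test (illustration only).
    def add(a, b):
        return a + b

    def divide(a, b):
        return a / b

    # Regression checks: rerun after every change to confirm existing behaviour still holds.
    def test_addition_still_correct():
        assert add(2, 3) == 5

    def test_division_by_zero_still_raises():
        with pytest.raises(ZeroDivisionError):
            divide(1, 0)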
Timing is everything. Trying to implement a major tool or automation effort in the midst of the biggest software release of all time is not a good strategy.
A test set is a group of test cases that cover a feature or system.
As a rule of thumb, we normally recommend that the regression test set (or at least the smoke test) be run in its entirety early on to flag areas that are obviously problematic.
No matter how good you and your colleagues are at designing test cases, you'll always think of new test cases to write when you begin test execution.
Obviously, the results of each test case must be recorded. If the testing is automated, the tool will record both the input and the results. If the tests are manual, the results can be recorded right on the test case document. In some instances, it may be adequate to merely indicate whether the test case passed or failed. Failed test cases will also result in an incident report being generated. Often, it may be useful to capture screens, copies of reports, or some other output stream.