Thursday, February 23, 2012

Nageshwar rao testing notes A-Z - Manual testing and Quick Test Professional Tutorials


Software Engineering
Software: - Computer software (or simply software) is the programs that enable a computer to perform a specific task, as opposed to the physical components of the system (hardware). This includes application software such as a word processor, which enables a user to perform a task, and system software such as an operating system, which enables other software to run properly, by interfacing with hardware and with other software.
The term "software" was first used in this sense by John W. Tukey in 1957. In computer science and software engineering, computer software is all computer programs. The concept of reading different sequences of instructions into the memory of a device to control computations was invented by Charles Babbage as part of his difference engine. The theory that is the basis for most modern software was first proposed by Alan Turing
A product: Computer software has become a driving force. It is the engine that drives business decision making. It serves as the basis for modern scientific investigation and engineering problem solving. It is a key factor that differentiates modern products and services. It is embedded in systems of all kinds: transportation, medical, telecommunications, military, industrial processes, entertainment, office products, etc. It will become the driver for new advances in everything from elementary education to genetic engineering.

Software Characteristics
Software is a logical rather than a physical system element. Therefore software has characteristics that are considerably different from those of hardware.

Software product characteristics

  • Successful software...
     
  • ...provides the required functionality
     
  • ...is usable by real (i.e. naive) users
     
  • ...is predictable, reliable and dependable
     
  • ...functions efficiently
     
  • ...has a "life-time" (measured in years)
     
  • ...provides an appropriate user interface
     
  • ...is accompanied by complete documentation
     
  • ...may have different configurations
     
  • ...can be "easily" maintained.
Software Applications
Application software is a defined subclass of computer software that employs the capabilities of a computer directly to a task that the user wishes to perform. This should be contrasted with system software which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. The term application refers to both the application software and its implementation.
Examples include system software, real-time software, business software, engineering and scientific software, embedded software, personal computer software, web-based software and artificial intelligence software.

Software Engineering: A layered technology
Software Engineering (SE) is the discipline of designing, creating, and maintaining software by applying technologies and practices from computer science, project management, engineering, application domains and other fields.
The term software engineering was popularized after 1968, during the 1968 NATO Software Engineering Conference (held in Garmisch, Germany) by its chairman F.L. Bauer, and has been in widespread use since.
The term software engineering has been commonly used with a variety of distinct meanings:
  • As the informal contemporary term for the broad range of activities that was formerly called programming and systems analysis;
  • As the broad term for all aspects of the practice of computer programming, as opposed to the theory of computer programming, which is called computer science
  • As the term embodying the advocacy of a specific approach to computer programming, one that urges that it be treated as an engineering discipline rather than an art or a craft, and advocates the codification of recommended practices in the form of software engineering methodologies.
  • Software engineering is "(1) the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, that is, the application of engineering to software," and "(2) the study of approaches as in (1)." – IEEE Standard 610.12.

Process, Methods and Tools
Software engineering is a layered technology.
Any engineering approach (including software engineering) must rest on an organizational commitment to quality. Total quality management and similar philosophies foster a continuous process improvement culture, and this culture ultimately leads to the development of increasingly mature approaches to software engineering. The bedrock that supports software engineering is a quality focus.
The foundation for software engineering is the process layer. The software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software. Process defines the framework for a set of key process areas (KPAs) that must be established for effective delivery of software engineering technology. The key process areas form the basis for management control of software projects and establish the context in which technical methods are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured, and change is properly managed.
Software engineering methods provide the technical how-to's for building software. Methods encompass a broad array of tasks that include requirements analysis, design, program construction, testing and support. Software engineering methods rely on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive techniques.
Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering, is established. CASE combines software, hardware and a software engineering database (a repository containing important information about analysis, design, program construction and testing) to create a software engineering environment analogous to CAD/CAE (computer-aided design/engineering) for hardware.

Software Process models
To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods and tools layers. This strategy is often referred to as a process model or a software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required.

All software development can be characterized as a problem solving loop in which four distinct stages are encountered: status quo, problem definition, technical development and solution integration.

1. Status Quo:- It represents the current state of affairs.

 2. Problem Definition:- It identifies the specific problem to be solved.

 3. Technical Development:- It solves the problem through the application of some technology.

 4. Solution Integration:- It delivers the results (e.g. documents, programs, data, new business functions, new products) to those who requested the solution in the first place.

Software Development Life Cycle
SDLC is an acronym for Software Development Life Cycle. A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.


Software Development Lifecycle Models

The process used to create a software product from its initial conception to its public release is known as the software development lifecycle model.
 
There are many different methods that can be used for developing software, and no model is necessarily the best for a particular project. There are different types of models.

The Software Life Cycle

• Feasibility Study and Problem Analysis
– What exactly is this system supposed to do?
– Determine and spell out the details of the problem.

• Design
– How will the system solve the problem?

• Coding
– Translating the design into the actual system.

• Testing
– Does the system solve the problem?
– Have the requirements been satisfied?
– Does the system work properly in all situations?

• Maintenance
– Bug fixes

 
All of the stages, from start to finish, that take place when developing new software.

·         The software life cycle is a description of the events that occur between the birth and death of a software project inclusively.

·         Defines the concrete strategy to engineer some software artifact

·         SDLC is separated into phases (steps, stages)

·         SDLC also determines the order of the phases, and the criteria for transitioning from phase to phase

1.1 Feasibility Study:

The systems analyst conducts an initial study of the problem and asks whether the solution is:
• Technologically possible?
• Economically possible?
• Legally possible?
• Operationally possible?
• Possible within the scheduled time scale?

The feasibility report

•  Application areas to be considered, e.g. stock control, purchasing, accounts etc.
•  System investigations for each application
•  Cost estimates
•  System requirements
•  Timescale for implementation
•  Expected benefits

Systems Analysis:

System analysis and design is the process of investigating a business with a view to determining how best to manage the various procedures and information processing tasks that it involves.

  The Systems Analyst
Performs the investigation and might recommend the use of a computer to improve the efficiency of the information system being investigated.

  Systems Analysis
  The intention is to determine how well a business copes with its current information processing needs and whether it is possible to improve the procedures in order to make it more efficient or profitable.

The System Analysis Report

•  BRS (Business Requirement Document)
•  FRS (Functional Requirement Document) or Functional Specifications
•  Use Cases (user action and system response)
[These 3 are the base documents for writing Test Cases]
•  Documenting the results
   – Systems flow charts
   – Data flow diagrams
   – Organization charts
   – Report

Note:   FRS contains input, output and process, but no fixed format.
            Use Cases contain user action and system response in a fixed format.

Systems Design:

Planning the structure of the information system to be implemented.
Systems analysis determines what the system should do, and design determines how it should be done.

System Design Report

·         Design document that consists of the Architectural Design, Database Design and Interface Design

Coding:
 
Coding Report
·         All the programs, functions and reports related to coding.

Testing:

 What Is Software Testing?

IEEE Terminology: An examination of the behavior of a program by executing it on sample data sets.




Testing is executing a program with the intention of finding defects.

Testing is executing a program with the intent of finding errors, faults and failures.
A fault is a condition that causes the software to fail to perform its required function.
An error refers to the difference between the actual output and the expected output.
A failure is the inability of a system or component to perform a required function according to its specification.
Failure is an event; a fault is a state of the software, caused by an error.
Why Software Testing?

To discover defects.
To avoid users detecting problems.
To prove that the software has no faults.
To learn about the reliability of the software.
To ensure that the product works as the user expected.
To stay in business.
To avoid being sued by customers.
To detect defects early, which helps in reducing the cost of fixing them.

Cost of Defect Repair

Phase             % Cost
Requirements      0
Design            10
Coding            20
Testing           50
Customer Site     100


How exactly testing is different from QA/QC
Testing is often confused with the processes of quality control and quality assurance. Testing is the process of creating, implementing and evaluating tests.

Testing measures software quality.

Testing can find faults; when they are removed, software quality is improved.
QC is the process of Inspections, Walkthroughs and Reviews.

QA involves monitoring and improving the entire SDLC process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

Do we need an approach for testing?
Yes, we definitely need an approach for testing.
To overcome the following problems, we need a formal approach to testing.
Incomplete functional coverage: Completeness of testing is a difficult task for the testing team without a formal approach. The team will not be in a position to announce the percentage of testing completed.
No risk management -- there is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.
Too little emphasis on user tasks -- because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Inefficient over the long term -- quality assurance involves a range of tasks. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests. Great testing requires good test setup and preparation, and success with the kind of plan-less approach described here may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.
Test Standards

External Standards – Familiarity with and adoption of industry test standards from external organizations.

Internal Standards – Development and enforcement of the test standards that testers must meet.

IEEE

·         Institute of Electrical and Electronics Engineers
·         Founded in 1884
·         Have an entire set of standards devoted to Software
·         Testers should be familiar with all the standards mentioned in IEEE.


IEEE STANDARDS: That a Tester should be aware of

1.  610.12-1990            IEEE Standard Glossary of Software Engineering Terminology

2.  730-1998               IEEE Standard for Software Quality Assurance Plans

3.  828-1998               IEEE Standard for Software Configuration Management Plans

4.  829-1998               IEEE Standard for Software Test Documentation

5.  830-1998               IEEE Recommended Practice for Software Requirements Specifications

6.  1008-1987 (R1993)      IEEE Standard for Software Unit Testing (ANSI)

7.  1012-1998              IEEE Standard for Software Verification and Validation

8.  1012a-1998             IEEE Standard for Software Verification and Validation – Supplement to 1012-1998: Content Map to IEEE/EIA 12207.1

9.  1016-1998              IEEE Recommended Practice for Software Design Descriptions

10. 1028-1997              IEEE Standard for Software Reviews

11. 1044-1993              IEEE Standard Classification for Software Anomalies

12. 1045-1992              IEEE Standard for Software Productivity Metrics (ANSI)

13. 1058-1998              IEEE Standard for Software Project Management Plans

14. 1058.1-1987            IEEE Standard for Software Project Management Plans

15. 1061-1998              IEEE Standard for a Software Quality Metrics Methodology

Other Standards:

·         ISO – International Organization for Standardization

·         SPICE – Software Process Improvement and Capability Determination

·         NIST – National Institute of Standards and Technology

·         DoD – Department of Defense



Types of Testing:
Black Box Testing
Black box testing is also called Functionality Testing. In this testing the user is asked to test the correctness of the functionality with the help of inputs and outputs. The user does not require knowledge of the software code.
Approach:

Equivalence Class:

• For each piece of the specification, generate one or more equivalence classes
• Label the classes as "Valid" or "Invalid"
• Generate one test case for each invalid equivalence class
• Generate a test case that covers as many valid equivalence classes as possible

Boundary Value Analysis
•   Generate test cases for the boundary values.
•   Minimum Value, Minimum Value + 1, Minimum Value - 1
•   Maximum Value, Maximum Value + 1, Maximum Value - 1

Error Guessing
– Generating test cases against the specification.
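
The following is a minimal Python sketch, not part of the original notes, of how these three techniques might be turned into concrete inputs; the age field and its 18–60 limits are assumptions chosen only for illustration.

# Derive candidate black-box test inputs for a bounded numeric field.
def derive_black_box_values(minimum: int, maximum: int) -> dict:
    """Return candidate test inputs for a numeric field limited to [minimum, maximum]."""
    return {
        # One representative per equivalence class.
        "valid_class": [(minimum + maximum) // 2],        # inside the valid range
        "invalid_classes": [minimum - 10, maximum + 10],  # below and above the range
        # Boundary value analysis: min, min+1, min-1, max, max+1, max-1.
        "boundaries": [minimum, minimum + 1, minimum - 1,
                       maximum, maximum + 1, maximum - 1],
        # Error guessing: values experience suggests often break validation.
        "error_guesses": [0, -1, None, ""],
    }

def accepts_age(value) -> bool:
    """Toy validator standing in for the application under test."""
    return isinstance(value, int) and 18 <= value <= 60

if __name__ == "__main__":
    for group, inputs in derive_black_box_values(18, 60).items():
        for candidate in inputs:
            print(f"{group:15} input={candidate!r:6} accepted={accepts_age(candidate)}")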

Advantage:
1. The tester does not need to know the internal logic or the programming language.
2. Testing is done from the viewpoint of the user.

Disadvantage:
1. Since there is no knowledge of the internal structure or logic, there could be errors or deliberate mischief on the part of a programmer, which may not be detectable with black box testing.
White Box Testing
White box testing is also called Structural Testing. The user does require knowledge of the software code.
Structure = 1 entry + 1 exit, with certain constraints, conditions and loops.
Why do white box testing when black box testing is used to test conformance to requirements?
Logic errors and incorrect assumptions are most likely to be made while coding "special cases". We need to ensure these execution paths are tested.
We may find that assumptions about execution paths were incorrect, and so design errors were made.
Typographical errors are random; they are just as likely to be on an obscure logical path as on a mainstream path.
Approach
Basis Path Testing:
Cyclomatic Complexity and the McCabe method
Structure Testing:
Condition Testing, Data Flow Testing and Loop Testing
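
A small Python sketch of the basis path idea may help; the control flow graph below is a hypothetical routine with one if/else and one loop, and V(G) = E - N + 2P gives the minimum number of independent paths to cover.

# McCabe's cyclomatic complexity from a control flow graph.
def cyclomatic_complexity(edges: list[tuple[str, str]], components: int = 1) -> int:
    """V(G) = E - N + 2P for E edges, N nodes and P connected components."""
    nodes = {node for edge in edges for node in edge}
    return len(edges) - len(nodes) + 2 * components

# Hypothetical graph: one if/else followed by one loop (node names are made up).
flow_graph = [
    ("start", "decision"),
    ("decision", "then_branch"),
    ("decision", "else_branch"),
    ("then_branch", "loop_check"),
    ("else_branch", "loop_check"),
    ("loop_check", "loop_body"),
    ("loop_body", "loop_check"),
    ("loop_check", "end"),
]

print(cyclomatic_complexity(flow_graph))  # 3 -> at least 3 independent paths to test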

Advantage:

It is thorough and focuses on the produced code. Since there is knowledge of the internal structure or logic, errors or deliberate mischief on the part of a programmer have a higher probability of being detected.

Disadvantages:

            1) It does not verify that the specifications are correct, i.e. it focuses only on the internal logic and does not verify the logic against the specification.

            2) There is no way to detect missing paths and data-sensitive errors.
Grey Box Testing.
Grey box testing is a newer term, which evolved due to the different behaviors of the system. It is a combination of both black box and white box testing. The tester should have knowledge of both the internals and the externals of the function.
Even though you probably don't have full knowledge of the internals of the product you test, a test strategy based partly on internals is a powerful idea. We call this gray box testing. The concept is simple: if you know something about how the product works on the inside, you can test it better from the outside. This is not to be confused with white box testing, which attempts to cover the internals of the product in detail. In gray box mode, you are testing from the outside of the product, just as you do with black box testing, but your testing choices are informed by your knowledge of how the underlying components operate and interact.
      Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep. Hung Nguyen's Testing Applications on the Web (2000) is a good example of gray box test strategy applied to the Web.


Installation & Maintenance:
                
Installation:
·         File conversion
·         System testing
·         System changeover
·         New system becomes operational
·         Staff training

Maintenance:
·         Corrective maintenance
·         Perfective maintenance
·         Adaptive maintenance

Table format of all the phases in SDLC:

PHASE      INPUT                  OUTPUT
Analysis   BRS                    SRS
Design     SRS                    Design Doc
Coding     Design Doc             .exe File / Application / Website
Testing    All the above Doc's    Defect Report


 Software Development Life Cycles

Life cycle: Entire duration of a project, from inception to termination

Different life cycle models:

Code-and-fix model:
- Earliest software development approach (1950s)
- Iterative, programmers' approach
-Two phases: 1. Coding 2. Fixing the code
No provision for:
- Project planning
- Analysis
- Design
- Testing
- Maintenance

Problems with code-and-fix model:
1. After several iterations, code became very poorly structured;
    subsequent fixes became very expensive
2. Even well-designed software often matched user requirements very poorly: it was
    rejected or needed to be redeveloped (expensively!)
3. Changes to code were expensive, because of poor testing and 
    maintenance practices
Solutions:
1. Design before coding
2. Requirements analysis before design
3. Separate testing and maintenance phases after coding

 Waterfall model:
- Also called the classic life cycle
- Introduced in 1956 to overcome limitations of code-and-fix model
- Very structured, organized approach, suitable for planning
Main phases:
1. Feasibility study
2. Analysis
3. Design (overall design & detailed design)
4. Coding
5. Testing (unit test, integration test, acceptance test)
6. Maintenance

- The waterfall model is a linear approach, and quite inflexible
- At each phase, feedback to previous phases is possible (but is
   discouraged in practice)
- It is still the most widespread model today

Problems with the Waterfall Model:
  • It doesn't happen (requirements are frozen)
  • Real projects tend not to follow a sequential flow
  • Activities are done opportunistically during all "phases"
  • Delivery only at the end (long wait)

 Prototyping model:
-Introduced to overcome shortcomings of waterfall model
- Suitable to overcome problem of requirements definition
- Prototyping builds an operational model of the planned
              system, which the customer can evaluate

Main phases:
1. Requirements gathering
2. Quick design
3. Build prototype
4. Customer evaluation of prototype
5. Refine prototype
    Iterate steps 4. and 5. to "tune" the prototype
6. Engineer product
 
Mostly, the prototype is discarded after step 5. and the actual system is built from scratch in step 6. (Throw-away prototyping)
Possible problems:
- The customer may object to the prototype being thrown away and may demand "a few changes" to make it work (results in poor software quality and maintainability)
- Inferior, temporary design solutions may become permanent after a while, when the developer has forgotten that they were only intended to be temporary (results in poor software quality)
 Incremental:
During the first one-month phase, the development team worked from static visual designs to code a prototype.  In focus group meetings, the team discussed users’ needs and the potential features of the product and then showed a demonstration of its prototype. The excellent feedback from these focus groups had a large impact on the quality of the product.
Main phases:
1. Define outline requirements
2. Assign requirements to increments
3. Design system architecture
4. Develop
5. Integrate
6. Validate

After the second group of focus groups, the feature set was frozen and the product definition complete. Implementation consisted of four-to-six-week cycles, with software delivered for beta use at the end of each cycle. The entire release took 10 months from definition to manufacturing release. Implementation lasted 4.5 months. The result was a world-class product that has won many awards and has been easy to support.

Spiral model:

- Objective: overcome problems of other models, while combining their advantages
- Key component: risk management (because traditional models often fail when risk is neglected)
- Development is done incrementally, in several cycles; cycle as often as necessary to finish

Main phases:
1. Determine objectives, alternatives for development, and constraints for 
    the portion of the whole system to be developed in the current cycle
2. Evaluate alternatives, considering objectives and constraints; identify
    and resolve risks
3. Develop the current cycle's part of the system, using evolutionary or
    conventional development methods (depending on remaining risks);
    perform validation at the end
4. Prepare plans for subsequent phases


Advantages of the spiral model:
- Most realistic approach for large systems development
- Allows identification and resolution of risks early in the development

Problems with the spiral model:
- Difficult to convince the customer that this approach is controllable
- Requires significant risk assessment expertise to succeed
- Not yet widely used: efficacy not yet proven

 Software Testing Life Cycle (STLC)

System Study:

Making documents of:
1.      Domain Knowledge: - Used to know about the client's business:
        Banking / Finance / Insurance / Real Estate / ERP / CRM / Others
        (ERP - Enterprise Resource Planning and CRM - Customer Relationship Management)
2.      Software: -
        Front End: (GUI) CB / JAVA / FORMS / Browser
        Process: the language in which we want to write programs
        Back End: a database like Oracle, SQL Server etc.
3.      Hardware: - Internet / Intranet / Servers, which you want to install.

4.      Functional Points: - Ten Lines Of Code (LOC) = 1 Functional Point.

5.      Number of Pages: - The documents which you want to prepare.

6.      Number of Resources: - Like programmers, designers and managers.

7.      Number of Days: - For actual completion of the project.

8.      Number of Modules: - That depends upon the project.

9.      Priority: - High / Medium / Low importance for modules

Scope/ Approach/ Estimation:

Scope: - What is to be tested
               What is not to be tested
Approach: - Testing Life Cycle
Estimation: - (Formula = LOC / FP / Resources)
  • 1000 LOC = 100 FP (10 LOC = 1 FP)
  • 100 FP x 3 = 300 (FP x 3 techniques = Test Cases). The 3 techniques are:

1. Equivalence Class
2. Boundary Value Analysis
3. Error Guessing
  • 30 TCs per day => 300/30 = 10 days to design test cases
  • Test Case Review => ½ of test case design (5 days)
  • Test Case Execution => 1½ of test case design (15 days)
  • Defect Handling => ½ of test case design (5 days)
  • Test Plan = 5 days (1 week)
  • Buffer Time = 25% of the estimation
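
The arithmetic above can be sketched in a few lines of Python; the numbers are the same illustrative figures used in these notes, not project data.

LOC = 1000
FUNCTION_POINTS = LOC / 10                    # 10 LOC = 1 FP       -> 100 FP
TEST_CASES = FUNCTION_POINTS * 3              # EC + BVA + guessing -> 300 TCs

design_days    = TEST_CASES / 30              # 30 test cases per day -> 10 days
review_days    = design_days * 0.5            # half of design        -> 5 days
execution_days = design_days * 1.5            # 1.5x design           -> 15 days
defect_days    = design_days * 0.5            # defect handling       -> 5 days
plan_days      = 5                            # test plan: one week

subtotal = design_days + review_days + execution_days + defect_days + plan_days
buffer   = subtotal * 0.25                    # 25% buffer time
print(f"Estimated effort: {subtotal + buffer:.1f} person-days")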

Test Plan Design:
      The Test Plan Design document helps in test execution. It contains:
  1. About the client and the company
  2. Reference documents (BRS, FRS, UI etc.)
  3. Scope (what is to be tested and what is not)
  4. Overview of the application
  5. Testing approach (testing strategy)
  6. For each type of testing:
    • Definition
    • Technique
    • Start criteria
    • Stop criteria
  7. Resources and their roles and responsibilities
  8. Defect definition
  9. Risk / Contingency / Mitigation plan
  10. Training required
  11. Schedules
  12. Deliverables

Test Cases Design:

What is a test case?
A test case is a description of what is to be tested, what data is to be given and what actions are to be done to check the actual result against the expected result.

What are the items of test case?

Test case items are:
Test Case Number
Pre-Condition
Description
Expected Result
Actual Result
Status (Pass/Fail)
Remarks.
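
As an illustration only (not a prescribed format), the fields listed above could be captured in a small data structure such as the following Python sketch; the example record is hypothetical and mirrors the Yahoo login case shown later.

from dataclasses import dataclass

@dataclass
class TestCase:
    tc_id: str            # unique test case number
    pre_condition: str    # condition that must hold before execution
    description: str      # what to test, what data to give, what action to do
    expected_result: str  # as per the FRS / use case
    actual_result: str = ""
    status: str = ""      # "Pass" or "Fail"
    remarks: str = ""

tc = TestCase(
    tc_id="Yahoo-001",
    pre_condition="Yahoo web page should be displayed",
    description="Enter user ID/password, click Submit, check the inbox is displayed",
    expected_result="System should display the mail box",
)
print(tc)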

Can test cases be reused?
Yes, test cases can be reused.

Test cases developed for functionality testing can be used for integration/system/regression testing and performance testing with few modifications.
What are the characteristics of good test case?
A good test case should have the following:
TC should start with “what you are testing”.
TC should be independent.
TC should not contain “If” statements.
TC should be uniform.
Eg. <Action Buttons>, “Links”…
Are there any issues to be considered?
Yes, there are a few issues:
All the TCs should be traceable.
There should not be too many duplicate test cases.
Outdated test cases should be cleared off.
All the test cases should be executable.






Test case format:
TC ID           : Unique test case number
Pre-Condition   : Condition to be satisfied
Description     : 1. What is to be tested  2. What data is to be provided  3. What action is to be done
Expected Result : As per FRS
Actual Result   : System response
Status          : Pass or Fail
Remarks         : If any

Example:
TC ID           : Yahoo-001
Pre-Condition   : The Yahoo web page should be displayed
Description     : 1. Enter User ID/PW  2. Click on Submit  3. Check that the inbox is displayed
Expected Result : System should display the mail box
Actual Result   : System response



Test Case Review:

Peer to peer Reviews
Team Lead Review
Team Manager Review


Review Process

       Review Format

Review-ID   : Unique ID
Origin      : Birth place – from where it starts
Description : Defect description
Status      : Open / Closed
Priority    : Major / Medium / Minor

Test Case Execution:
Execution and execution results play a vital role in testing. Each and every activity should have proof.

The following activities should be taken care of:
      1. Number of test cases executed.
      2. Number of defects found.
      3. Screenshots of successful and failed executions should be captured in a Word document.
      4. Time taken to execute.
      5. Time wasted due to the unavailability of the system.
Test Case Execution Process:

       Inputs
                 -Test Cases
                 -System Availability
                 -Data Availability
      Process
                 -Test it.
     Output
                 -Raise the Defects
                 -Take a screenshot & save it


Defect Handling
What is Defect?
In computer technology, a Defect is a coding error in a computer program. It is defined by saying that “A software error is present when the program does not do what its end user reasonably expects it to do”.
Who can report a Defect?
Anyone who is involved in the software development life cycle or who is using the software can report a defect. In most cases defects are reported by the testing team.
A short list of people expected to report bugs:
Testers / QA Engineers
Developers
Technical Support
End Users
Sales and Marketing Engineers
Defect Life Cycle
Defect Life Cycle helps in handling defects efficiently. This DLC will help the users to know the status of the defect.

Types of Defects
Cosmetic flaw
Data corruption
Data loss
Documentation issue
Incorrect operation
Installation problem
Missing feature
Slow performance
System crash
Unexpected behavior
Unfriendly behavior


How do you decide the severity of a defect?

Severity Level: High
Description: A defect occurred due to the inability of a key function to perform. This problem causes the system to hang or halt (crash), or the user is dropped out of the system. An immediate fix or workaround is needed from development so that testing can continue.
Response Time or Turn-around Time: The defect should be responded to within 24 hours and the situation should be resolved before test exit.

Severity Level: Medium
Description: A defect occurred which severely restricts the system, such as the inability to use a major function of the system. There is no acceptable workaround, but the problem does not inhibit the testing of other functions.
Response Time or Turn-around Time: A response or action plan should be provided within 3 working days and the situation should be resolved before test exit.

Severity Level: Low
Description: A defect occurred which places a minor restriction on a function that is not critical. There is an acceptable workaround for the defect.
Response Time or Turn-around Time: A response or action plan should be provided within 5 working days and the situation should be resolved before test exit.

Severity Level: Others
Description: An incident occurred which places no restrictions on any function of the system. There is no immediate impact on testing. A design issue or a requirement not definitively detailed in the project. The fix dates are subject to negotiation.
Response Time or Turn-around Time: An action plan should be provided for the next release or a future enhancement.




Defect Severity VS Defect Priority

The general rule is that the order of fixing defects depends on severity: all high-severity defects should be fixed first.
This may not be the same in all cases; sometimes, even though the severity of a bug is high, it may not be taken as high priority.
At the same time, a low-severity bug may be considered high priority.
Defect Tracking Sheet

Defect No   : Unique number
Description : Description of the bug
Origin      : Birth place of the bug
Severity    : Critical / Major / Medium / Minor / Cosmetic
Priority    : High / Medium / Low
Status      : Submitted / Accepted / Fixed / Rejected / Postponed / Closed

Defect Tracking Tools
Bug Tracker -- BSL Proprietary Tools
Rational Clear Quest
Test Director

Gap Analysis:
                
1. BRS Vs SRS
                 BRS01 – SRS01
                                -SRS02
                                -SRS03
2. SRS Vs TC
                 SRS01 – TC01
                              - TC02
                              - TC03
3. TC Vs Defects
                  TC01 – Defects01
                             – Defects02
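
A minimal Python sketch (with assumed IDs) of these traceability chains shows how gaps, such as an SRS item with no test cases, can be spotted.

# Hypothetical traceability data: BRS -> SRS -> test cases -> defects.
brs_to_srs = {"BRS01": ["SRS01", "SRS02", "SRS03"]}
srs_to_tc = {"SRS01": ["TC01", "TC02", "TC03"], "SRS02": [], "SRS03": ["TC04"]}
tc_to_defects = {"TC01": ["DEF01", "DEF02"], "TC02": [], "TC03": [], "TC04": []}

# An SRS entry with no test cases is a coverage gap.
gaps = [srs for srs, tcs in srs_to_tc.items() if not tcs]
print("SRS items with no test cases:", gaps)   # -> ['SRS02']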
 Deliverables:
 Testing Phases – The V Model
Verification → Static system – Are we building the product right?

Validation → Dynamic system – Are we building the right product?
 
Levels of Testing

 Unit Testing:
In unit testing the user is supposed to check each and every micro function. All field-level validations are expected to be tested at this stage.
In most cases the developer will do this.
Approach:

Equivalence Class:

• For each piece of the specification, generate one or more equivalence classes
• Label the classes as "Valid" or "Invalid"
• Generate one test case for each invalid equivalence class
• Generate a test case that covers as many valid equivalence classes as possible

Boundary Value Analysis
•   Generate test cases for the boundary values.
•   Minimum Value, Minimum Value + 1, Minimum Value - 1
•   Maximum Value, Maximum Value + 1, Maximum Value - 1

Error Guessing
– Generating test cases against the specification

4.2 Integration Testing:
The primary objective of integration testing is to discover errors in the interfaces between Modules/Sub-Systems (Host & Client Interfaces).

Approach:
Top-Down Approach

The integration process is performed in a series of five steps:
  1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
  2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
  3. Tests are conducted as each module is integrated.
  4. On completion of each set of tests, another stub is replaced with the real module.
  5. Regression testing may be conducted to ensure that new errors have not been introduced.

Bottom-Up Approach.

A bottom-up integration strategy may be implemented with the following steps:

  1. Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.
  2. A driver (control program for testing) is written to coordinate test case input and output.
  3. The cluster is tested.
  4. Drivers are removed and clusters are combined upward in the program structure
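
The driver/stub idea behind both approaches can be sketched in Python as follows; the module names are hypothetical and only illustrate how a stub stands in for an unfinished module while a driver feeds test input to a low-level cluster.

def discount_service_stub(order_total: float) -> float:
    """Stub for a subordinate module that is not integrated yet: returns a canned answer."""
    return 0.0

def calculate_invoice(order_total: float, discount_service=discount_service_stub) -> float:
    """Main control module under test; the real discount module replaces the stub later."""
    return order_total - discount_service(order_total)

def driver_for_invoice_cluster():
    """Driver coordinating test input and output for the cluster."""
    for total in (0.0, 100.0, 250.0):
        print(total, "->", calculate_invoice(total))

if __name__ == "__main__":
    driver_for_invoice_cluster()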

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:

  1. Addresses several software requirements.
  2. Has a high level of control (resides relatively high in the program structure).
  3. Is complex and error-prone.
  4. Has definite performance requirements.

 System Testing:
The primary objective of system testing is to discover errors when the system is tested as a whole. System testing is also called End-to-End Testing. The user is expected to test from login to logout, covering various business functionalities.

Approach: IDO Model

Identifying the End-End/Business Life Cycles.
Design the test and data.
Optimize the End-End/Business Life Cycles.




Acceptance Testing:

 The primary objective of acceptance testing is to get the acceptance from the client. Client will be using the system against the business requirements.

Pre-user acceptance testing will be conducted to ascertain the stability and to check whether the complete functionality of the system is checked during system testing. After the first round of system testing, test engineers will go through the test cases (Test Scripts) sent by the users. They will ascertain whether a particular condition (functionality) is covered and the test case number will be entered against each condition. If a particular condition test case sent by the user is not covered because of the changes in the requirement, that particular test case will be documented (refer the tabular format) and the existing behavior of the system will be mentioned in the remarks column. 
When a particular condition is not covered, a new test case is prepared along with the test data and executed to ensure the system is working accordingly. If there are any test cases which are not covered during system testing and there is no supporting document for a particular test case, it is marked as an invalid test case. After the mapping, the whole document will be sent back to the user.
Approach: BE
  • Building a team with real-time users, functional users and developers.
  • Execution of business Test Cases.

When should we start writing Test Cases/ Testing
V Model is the most suitable way to start writing Test Cases and conduct Testing.

SDLC Phase (document at Requirements Freeze)   Test Cases Prepared        Testing Conducted on the Build
Business Requirements Docs                     Acceptance Test Cases      Acceptance Testing
Software Requirements Docs                     System Test Cases          System Testing
Design Requirements Docs                       Integration Test Cases     Integration Testing
Code                                           Unit Test Cases            Unit Testing

Testing Methods – FURRPSC Model
Functionality Testing:
Objective:
  1. Test against system requirements.
  2. To confirm all the requirements are covered.
Approach:

Equivalence Class

 
Boundary Value Analysis
Error Guessing.

5.2 Usability Testing:
To test the ease of use and user-friendliness of the system.
 Approach:
Qualitative & Quantitative
Qualitative Approach:

  1. Each and every function should be available from all the pages of the site.
  2. The user should be able to submit each and every request within 4-5 actions.
  3. A confirmation message should be displayed for each and every submit.

Quantitative Approach:
A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking.
These generic test cases should be given to 10 different people, who are asked to execute the system and mark the pass/fail status.
The average of the 10 different people should be considered as the final result.
Example: Some people may feel the system is more user-friendly if the Submit button is on the left side of the screen. At the same time, others may feel it is better if the Submit button is placed on the right side.

Classification of Checking:
Clarity of communication.
Accessibility
Consistency
Navigation
Design & Maintenance
Visual Representation.

Reliability Testing:

Reliability is a property which defines how well the software meets its requirements.
The objective is to find the mean time between failures (the time available under a specific load pattern) and the mean time to recovery.

Approach: RRT (Rational Real Time) for continuous hours of operation.
More than 85% stability is a must.
Reliability testing helps you to confirm that:
Business logic performs as expected
Active buttons are really active
Correct menu options are available
Hyperlinks are reliable

(Why Load Runner is used for reliability testing)
Virtual users can be created using Load Runner. Load scenarios, which are a mix of business processes and the number of virtual users, will run on each load server. The user can quickly compose multi-user test scenarios using Load Runner's Controller. The Controller provides an interactive environment in which the user can manage and drive the load test scenario, as well as create a repeatable and consistent load. Load Runner's graphical interface helps to organize and control scenarios during load test setup and execution.

Regression Testing:
The objective is to check that new functionality has been incorporated correctly without breaking the existing functionality.
RAD – In the case of Rapid Application Development, regression testing plays a vital role, as the total development happens in bits and pieces.
The term "regression testing" can be applied in two ways. First, when a code problem has been fixed, a regression test re-runs the tests that verify the defect is in fact fixed: "Imagine finding an error, fixing it, and repeating the test that exposed the problem in the first place. This is a regression test." Second, regression testing is the counterpart of integration testing: when new code is added to existing code, regression testing verifies that the existing code continues to work correctly, whereas integration testing verifies that the new code works as expected. Regression testing describes the process of testing new code to verify that it hasn't broken any old code.
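
A minimal sketch of the first sense, using a hypothetical rounding defect, might look like this in Python: the test that exposed the defect is kept and re-run together with tests that guard the existing behaviour.

def apply_discount(price: float, percent: float) -> float:
    """Function under test (illustrative only)."""
    return round(price * (1 - percent / 100), 2)

def test_defect_1234_rounding_fixed():
    # Re-runs the exact scenario that exposed the (now fixed) hypothetical defect.
    assert apply_discount(19.99, 10) == 17.99

def test_existing_behaviour_unchanged():
    # Guards old functionality while new code is added.
    assert apply_discount(100.0, 0) == 100.0

if __name__ == "__main__":
    test_defect_1234_rounding_fixed()
    test_existing_behaviour_unchanged()
    print("regression suite passed")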

Approach: Automation tools

Performance Testing:

The primary objective of performance testing is to demonstrate that the system functions as per specifications within a given response time on a production-sized database.
Objectives:
Assessing the system capacity for growth.
Identifying weak points in the architecture
Detect obscure bugs in software
Tuning the system
Verify resilience & reliability

Performance Parameters:
Request-response time
Transactions per second
Turnaround time
Page download time
Throughput
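
These parameters could be computed from raw tool output roughly as in the following sketch; the timings and byte counts are made-up sample values, not measurements.

request_durations = [0.42, 0.51, 0.38, 1.20, 0.47, 0.55]   # seconds per request (sample data)
test_window = 3.0                                           # seconds of measurement
bytes_transferred = 1_500_000

avg_response_time = sum(request_durations) / len(request_durations)
transactions_per_second = len(request_durations) / test_window
throughput_kb_per_s = bytes_transferred / 1024 / test_window

print(f"avg response time: {avg_response_time:.2f} s")
print(f"transactions/sec : {transactions_per_second:.2f}")
print(f"throughput       : {throughput_kb_per_s:.0f} KB/s")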

Approach: Usage of Automation Tools

Classification of Performance Testing:
Load Test
Volume Test
Stress Test

Load Testing
Approach: Load Profile

Volume Testing
Approach: Data Profile

Stress Testing
Approach: RCQE Approach

Repeatedly working on the same functionality
Critical Query Execution (Join Queries)
To emulate peak load.

Load Vs Stress:  
With a simple scenario (functional query), N number of users working on it will not put stress on the server.
A complex scenario, even with a smaller number of users, will stress the server.

 Scalability Testing:
The objective is to find the maximum number of users the system can handle.

Classification:
Network Scalability
Server Scalability
Application Scalability
Approach: Performance Tools

 Compatibility Testing:

Compatibility testing provides a basic understanding of how a product will perform over a wide range of hardware, software & network configuration and to isolate the specific problems.
Approach: ET Approach

Environment Selection.

Test Bed Creation




Selection of environment

There are many types of operating systems, browsers and JVMs used by a wide range of audiences around the world. Compatibility testing for all possible combinations is exhaustive; hence, optimizing the combination of environments is very critical.

Many times the customer may give the environment details for compatibility testing. In case they are not given, the following strategy may be adopted for selecting the environment.

·         By understanding the end users.

List the possible end users for the proposed software application to be tested. Analyze their requirement of environment on the basis of the previous experience (Region wise or type of the application). Select the possible combination of operating system & browser from this input.

·         Importance of selecting both old browser and new browsers

Many end users use the default browser, which is part of the operating system, and may not upgrade to new versions, whereas some end users may tend to go for the latest versions of the browsers. Hence importance should be given to both old and new versions of the browsers for compatibility testing.

·      Selection of the Operating System
Microsoft operating systems have a wide range of users compared to other operating systems. However, many also use Macintosh and Unix operating systems. The compatibility of the application with different operating systems is very important. The details of operating systems versus browsers supported are given in Table-3 of section 2.0.
 Test Bed Creation

The following steps are adopted for the creation of a test bed for different versions of browsers and the Microsoft operating systems. This procedure is not applicable to Macintosh and Unix operating systems.

When the user wants to check compatibility on different Microsoft operating systems and different versions of browsers, the following steps help to reduce time and cost.

1)      Partition of the hard disk.
2)      Creation of the base image



Partition of the hard disk

Partitioning helps in installing more than one operating system on a single hard disk. A hard disk has two kinds of partition, namely the primary partition and the extended partition. The first sector of the hard disk contains a partition table. This partition table has room to describe four partitions; these are called primary partitions. One of these primary partitions can point to a chain of additional partitions. Each partition in this chain is called a logical partition, and one partition is visible at a time.

Using Partition Magic software, the primary partition of the hard disk can be configured into a maximum of four parts.

       Following are the steps involved while partitioning:

a)      Create one primary partition of the required size.
b)      Make it active.
c)      Load the particular operating system.
d)      Using Partition Magic, hide that partition.
e)      After installing each operating system, steps a, b, c and d are repeated.

 With this, the primary partition of the hard disk can be configured with WinNT, Win95,
 Win98 and Win2K respectively.

2)      Creation of the base image
The base image is a clone of the hard disk. It is possible to create the base image of all four Microsoft operating systems along with a lower version of IE and Office 97.

 In the case of Internet Explorer, it is not possible to change from a higher version to a lower version. With the help of the base image it is possible to rewrite the hard disk with the required operating system, which contains the lower version of IE. Norton Ghost software helps to take the base image of the partitioned hard disk along with the required operating system.
In the case of Netscape Navigator there is no problem changing from a higher version to a lower version and vice versa.
 Following is a comparison of the time required for installing operating systems with and without Norton Ghost.

Without using Norton Ghost:
1) Win95, IE & Office 97: 60 minutes.
2) Win98, IE & Office 97: 70 minutes.
3) Win2K, IE & Office 97: 70 minutes.
4) WinNT, IE & Office 97: 50 minutes.

With Norton Ghost:
1) It takes 7 minutes to write one operating system with IE & Office 97.
2) It takes 18 minutes to write the base image of all four operating systems with lower versions of Internet Explorer.

Performance Life Cycle

What is Performance Testing:

The primary objective of performance testing is to demonstrate that the system functions as per specifications within a given response time on a production-sized database.

 Why Performance Testing:

-To assess the system capacity for growth
The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.
-To identify weak points in the architecture
The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can be fixed or replaced.
-To detect obscure bugs in software
Tests executed for extended periods can cause failures caused by memory leaks and reveal obscure contention problems or conflicts.
-To tune the system
Repeat runs of tests can be performed to verify that tuning activities are having the desired effect – improving performance.
-To verify resilience & reliability
Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure required service levels are likely to be met.

 

Performance-Tests:

Used to test each part of the web application to find out which parts of the website are slow and how we can make them faster.

 

 Load-Tests:

This type of test is done to test the website using the load that the customer expects to have on his site. This is something like a “real world test” of the website.
First we have to define the maximum request times we want the customers to experience; this is done from the business and usability point of view, not from a technical point of view. At this point we need to calculate the impact of a slow website on the company's sales and support costs.
Then we have to calculate the anticipated load and load pattern for the website (Refer Annexure I for details on load calculation) which we then simulate using the Tool.
At the end we compare the test results with the requests times we wanted to achieve.

 

Stress-Tests:

They simulate brute-force attacks with excessive load on the web server. In the real world, situations like this can be created by a massive spike of users – far above the normal usage – e.g. caused by a large referrer (imagine the website being mentioned on national TV…).
The goals of stress tests are to learn under what load the server generates errors, whether it will come back online after such a massive spike at all or crash, and when it will come back online.

When should we start Performance Testing:

It is even a good idea to start performance testing before a line of code is written at all! Testing the base technology (network, load balancer, application, database and web servers) for the expected load levels early can save a lot of money if you discover at this point that your hardware is too slow. The first stress tests can also be a good idea at this point.
The cost of correcting a performance problem rises steeply from the start of development until the website goes into production, and can be unbelievably high for a website that is already online.
As soon as several web pages are working, the first load tests should be conducted, and from there on they should be part of the regular testing routine each day or week, or for each build of the software.

 Popular tools used to conduct Performance Testing:
Some of the popular industry standard tools used to conduct performance test are 
  • LoadRunner from Mercury Interactive
  • AstraLoad from Mercury Interactive
  • Silk Performer from Segue
  • Rational Suite Test Studio from Rational
  • Rational Site Load from Rational
  • OpenSTA from Cyrano
  • Webload from Radview
  • RSW eSuite from Empirix
  • MS Stress tool from Microsoft

Performance Test Process:
This is a general process for performance testing. It can be customized according to the project's needs. A few more process steps can be added to the existing process, but deleting any of the steps may result in an incomplete process. If the client is using a particular tool, one can follow the respective process demonstrated by the tool.

General Process Steps:

Setting up of the test environment

This involves the installation of the tool, the agents, directory structure creation for the storage of the scripts and results, and installation of additional software if essential to collect the server statistics (like an SNMP agent). It is also essential to ensure the correctness of the environment by implementing a dry run.
      Record & playback in the stand by mode
The scripts are generated using the script generator and played back to ensure that there are no errors in the script.
      Enhancement of the script to support multiple users
The variables, like logins and user inputs, should be parameterized to simulate the live environment. This is also essential since in some applications no two users can log in with the same ID.
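
A rough Python sketch of this parameterisation idea (the file name and fields are assumptions) reads one login per virtual user from a data file instead of replaying a single recorded login.

import csv

def load_virtual_users(path: str = "vusers.csv"):
    """Read one login per virtual user from a CSV with 'username,password' rows (hypothetical file)."""
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle))

def login_script(user: dict) -> None:
    # Placeholder for the recorded login steps, now driven by parameter data.
    print(f"virtual user logging in as {user['username']}")

if __name__ == "__main__":
    for user in load_virtual_users():
        login_script(user)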
    
     Configuration of the scenarios
Scenarios should be configured to run the scripts on different agents, schedule the scenarios, distribute the users onto different scripts, collect the data related to database etc.

  • Hosts
The next important step in the testing approach is to run the virtual users on different host machines to reduce the load on the client machine by sharing the resources of the other machines.
  • Users
The number of users who need to be activated during the execution of the scenario.
  • Scenarios
A scenario might comprise either a single script or multiple scripts. The main intention of creating a scenario is to simulate load on the server similar to the live/production environment.
  • Ramping
In the live environment not all the users log in to the application simultaneously. At this stage we can simulate virtual users similar to the live environment by deciding –
1.      How many users should be activated at a particular point of time as a batch?
2.      What should be the time interval between every batch of users?
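
The ramping decision can be sketched as simple arithmetic; the batch size, interval and user count below are assumed values for illustration.

total_users = 100
batch_size = 10
interval_seconds = 30

for batch in range(total_users // batch_size):
    start_time = batch * interval_seconds
    print(f"t={start_time:4}s  start users {batch * batch_size + 1}-{(batch + 1) * batch_size}")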

Execution for fixed users and reporting the status to the developers
The script should initially be executed for one user and the results/inputs should be verified to check whether the server response time for a transaction is less than or equal to the acceptable limit (benchmark).
If the results are found adequate, the execution should be continued for different sets of users. At the end of every execution the results should be analysed.

If a stage is reached where the time taken for the server to respond to a transaction is above the acceptable limit, then the inputs should be given to the developers.
Re-execution of the scenarios after the developers fine-tune the code
After the fine-tuning, the scenarios should be re-executed for the specific set of users for which the response was inadequate. If found satisfactory, the execution should be continued up to the decided load.
Final report
At the end of the performance testing, final report should be generated which should comprise of the following –
      • Introduction – about the application.
      • Objectives – set / specified in the test plan.
      • Approach – summary of the steps followed in conducting the test
      • Analysis & Results – is a brief explanation about the results and the analysis of the report.
      • Conclusion – the report should be concluded by stating whether the objectives set before the test were met or not.

Life Cycle of Automation

What is Automation?

A software program that is used to test another software program is referred to as "automated software testing".

Why Automation

It avoids the errors that humans make when they get tired after multiple repetitions, and the test program won’t skip any test by mistake.

Each future test cycle will take less time and require less human intervention.

Required for regression testing.

 Benefits of Test Automation:
Allows more testing to happen
Tightens/strengthens the test cycle
Testing is consistent and repeatable
Useful when new patches are released
Makes configuration testing easier
The test battery can be continuously improved.

 False Benefits:
Fewer tests will be needed
Testing will be easier if it is automated
It will compensate for poor design
No more manual testing will be needed.

 What are the different tools available in the market?
Rational Robot
WinRunner
Silk Test
QA Run
WebFT

Testing Limitations

  • We can only test against system requirements
·         May not detect errors in the requirements.
·         Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.

  • Exhaustive (total) testing is impossible in the present scenario.

  • Time and budget constraints normally require very careful planning of the testing effort.
·         Compromise between thoroughness and budget.
·         Test results are used to make business decisions for release dates.

Test Stop Criteria:

Minimum number of test cases successfully executed.
Uncover a minimum number of defects (e.g., 16 defects per 1000 statements)
Statement coverage
Testing uneconomical
Reliability model

 Tester Responsibilities

Follow the test plans, scripts etc. as documented.
Report faults objectively and factually
Check tests are correct before reporting s/w faults
Assess risk objectively
Prioritize what you report
Communicate the truth.

 How to Prioritize Tests:

We can’t test everything. There is never enough time to do all the testing you would like, so what testing should you do?
Prioritize tests, so that whenever you stop testing, you have done the best testing possible in the time available.
Tips
Possible ranking criteria (all risk-based):
Test where a failure would be most severe.
Test where failures would be most visible.
Take the customer's help in understanding what is most important to them.
What is most critical to the customer's business.
Areas changed most often.
Areas with the most problems in the past.
The most complex or technically critical areas.
 How can we improve the efficiency in testing?
Recent years have seen a lot of outsourcing in the testing area, so it is the right time to think about and create processes that improve the efficiency of testing projects. The best team results in the most efficient deliverables. The team should contain roughly 55% hard-core test engineers, 30% domain-knowledge engineers and 15% technology engineers.
How did we arrive at these figures? Past projects have shown that 50-60 percent of test cases are written on the basis of testing techniques, 28-33% of test cases cover domain-oriented business rules, and 15-18% are technology-oriented test cases.


Glossary: QA & Software Testing
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.

Affinity Diagram:
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user.

Accessibility Testing:
Verifying that a product is accessible to people with disabilities.

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.

Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.
Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers for those components that have not yet been developed to call the low-level components for test.

Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).

Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
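For example, for a field that accepts values from 1 to 100, a boundary-value selection might look like this (the range and values are purely illustrative):

    # Hypothetical boundary value analysis for an input field accepting 1..100.
    minimum, maximum = 1, 100
    boundary_values = [
        minimum - 1,  # just outside the lower boundary (error value)
        minimum,      # minimum
        minimum + 1,  # just inside the lower boundary
        50,           # typical value
        maximum - 1,  # just inside the upper boundary
        maximum,      # maximum
        maximum + 1,  # just outside the upper boundary (error value)
    ]
    print(boundary_values)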

Brainstorming:
A group process for generating creative and diverse ideas.
Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.

Branch Testing:
Testing wherein all branches in the program source code are tested at
least once.

Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail.
Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough:
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored to analyze the programmer's logic and assumptions.


Coding: The generation of source code.
Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing; since “white boxes” are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Client: The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

Customer (end user): The individual or organization, internal or external to the producing organization that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
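A common way to compute it is V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components of the control-flow graph (a small worked sketch; the graph sizes below are made up):

    # Hypothetical control-flow graph: 9 edges, 8 nodes, 1 connected component.
    edges, nodes, components = 9, 8, 1
    cyclomatic_complexity = edges - nodes + 2 * components
    print(cyclomatic_complexity)  # 3 -> three linearly independent paths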

Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on
data values at various points of executing the source program.

Defect:
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether
in the statement of requirements or not.

Defect Analysis:
Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).

Desk Checking:
A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing:
Testing in which the action of a test case is parameterized by
externally defined data values, maintained as a file or spreadsheet.
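A minimal sketch of the idea, with the test data kept in an external CSV file (the file name, columns and add() function are assumptions made for illustration):

    # Hypothetical data-driven test: the same test logic runs once per row
    # of externally maintained data (testdata.csv with columns a, b, expected).
    import csv

    def add(a, b):  # stand-in for the function under test
        return a + b

    with open("testdata.csv", newline="") as f:
        for row in csv.DictReader(f):
            result = add(int(row["a"]), int(row["b"]))
            assert result == int(row["expected"]), row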

Debugging:
The process of finding and removing the causes of software failures.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it.
Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with
selected test data.

Error:
1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer program.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing:
Testing based on the knowledge of the types of errors made
in the past that are likely for the system under test.

Fault:
A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical
failures.

Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.

Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.


Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing: Another term for failure-directed testing.

Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

Hybrid Testing: A combination of top-down testing combined with bottom-up testing of prioritized or available components.

Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path: Program statement sequence that can never be executed.

Inputs: Products, services, or information needed from suppliers to make a process work.

Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

Integration: The process of combining software components or hardware components, or both, into an overall system.

Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.

Installation Testing: Confirms that the application under test installs (and, where applicable, uninstalls) correctly in the target environment and functions as expected after installation.

IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

Localization Testing: This term refers to testing software that has been adapted (localized) for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.
Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

Mean: A value derived by adding several quantities and dividing the sum by the number of those quantities.

Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
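As a small illustration, a mutant might change a single operator in the program; a thorough test set produces a different result for the mutant and thereby "kills" it (a sketch; the functions below are hypothetical):

    # Original program and a mutant where '+' has been changed to '-'.
    def add(a, b):
        return a + b

    def add_mutant(a, b):  # hypothetical mutant: '+' replaced by '-'
        return a - b

    # A test case with b == 0 cannot kill this mutant...
    print(add(5, 0) == add_mutant(5, 0))   # True  -> mutant survives
    # ...but a test case with b != 0 kills it.
    print(add(5, 3) == add_mutant(5, 3))   # False -> mutant killed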

Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".

Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

Operational Testing: Testing performed by the end user on software in its normal operating environment.

Outputs: Products, services, or information supplied to meet end user needs.

Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.

Peer Reviews: A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.

Policy: Managerial desires and intents concerning either process (intended objectives) or
products (desired attributes).

Problem: Any deviation from defined standards. Same as defect.

Procedure: The step-by-step method followed to ensure that standards are met.

Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.
Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

Path Testing: Testing wherein all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing:
Testing aimed at showing software works. Also known as "test to pass".

Productivity:
The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

Proof Checker: A program that checks formal proofs of program properties for logical correctness.

Prototyping:
Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to “quality means meeting requirements.”
NOTE: Operationally, the word quality refers to products.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle:
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

Quality Control (QC): The process by which product quality is compared with applicable standards; and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function, that is, the performance of these tasks is the responsibility of the people working within the process.

Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.

Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability: The probability of failure-free operation for a specified period.

Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

Review:
A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

Run Chart:
A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.

Scatter Plot (correlation diagram):
A graph designed to show whether there is a relationship between two changing factors.

Semantics:
1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use.
2) The relationships between symbols and their meanings.

Software Characteristic:
An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

Software Feature:
A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

Software Tool:
A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

Standards:
The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Standardize:
Procedures are implemented to ensure that the output of a process is maintained at a desired level.

Statement Coverage Testing:
A test method satisfying coverage criteria that requires each statement be executed at least once.
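For example, for the small function below a single test with a negative input executes every statement, even though the path where the condition is false is never taken (an illustrative sketch, not from any standard):

    # Statement coverage sketch: one test executes every statement.
    def absolute(x):
        if x < 0:      # statement 1
            x = -x     # statement 2
        return x       # statement 3

    assert absolute(-5) == 5   # covers statements 1, 2 and 3
    # Note: the x >= 0 path is never taken, so branch coverage is not complete.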

Statement of Requirements:
The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

Static Testing:
Verification performed without executing the system’s code. Also called static analysis.

Statistical Process Control:
The use of statistical techniques and tools to measure an ongoing process for change or stability.

Structural Coverage:
This requires that each pair of module invocations be executed at least once.

Stub:
A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.
Syntax: 1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use.
2) The structure of expressions in a language, and
3) the rules governing the structure of the language.

Sanity Testing:
Brief test of major functional elements of a piece of software to determine if it is basically operational.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

Security Testing:
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing:
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing:
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing:
A set of activities conducted with the intent of finding errors in software.

Static Analysis:
Analysis of a program carried out without executing the program.

Static Analyzer:
A tool that carries out static analysis.

Static Testing:
Analysis of a program carried out without executing the program.

Storage Testing:
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing:
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

System:
A collection of people, machines, and methods organized to accomplish a set of specified functions.

System Simulation:
Another name for prototyping.

Technical Review:
A review that refers to content of the technical material being reviewed.

Test Bed:
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

Test Executive:
Another term for test harness.

Test Harness:
A software tool that enables the testing of software components that links test capabilities to perform specific tests, accept program inputs, simulate missing components, compare actual outputs with expected outputs to determine correctness, and report discrepancies.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors.
The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development:
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
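A very small illustration of the rhythm (a sketch; the function and test names are hypothetical):

    # Step 1: write a failing unit test first.
    def test_multiply():
        assert multiply(3, 4) == 12

    # Step 2: write just enough production code to make the test pass.
    def multiply(a, b):
        return a * b

    # Step 3: run the tests again and keep them passing.
    test_multiply()
    print("all tests pass")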

Test Driver:
A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment:
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design:
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Plan:
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure:
A document providing detailed instructions for the execution of one or more test cases.

Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification:
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools:
Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management:
A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

Test Plan:
A formal or informal plan to be followed to assure the controlled testing of the product under test.

Unit Testing:
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.

Usability Testing:
Testing the ease with which users can learn and use a product.

Use Case:
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
V-Diagram (model): A diagram that visualizes the order of testing activities and their corresponding phases of development.

Verification:
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Volume Testing:
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Validation: The process of evaluating software to determine compliance with specified requirements.
Walkthrough:
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White-box Testing:
Testing approaches that examine the program structure and derive test data from the program logic.
Acceptance Testing:
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.

Affinity Diagram:
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user.

Accessibility Testing:
Verifying a product is accessible to the people having disabilities.

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.

Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.

Bottom-up Testing: An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test.

Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).

Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, mini-mum, just inside/outside boundaries, typical values, and error values.

Brainstorming:
A group process for generating creative and diverse ideas.
Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.

Branch Testing:
Testing wherein all branches in the program source code are tested at
least once.

Breadth Testing:
A test suite that exercises the full functionality of a product but does not test features in detail.
Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.
Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough:
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored,
to analyze the programmer's logic and assumptions.

Coding:
The generation of source code.
Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing; since “white boxes” are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Client: The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

Customer (end user): The individual or organization, internal or external to the producing organization that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.

Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on
data values at various points of executing the source program.

Defect:
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether
in the statement of requirements or not.

Defect Analysis:
Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).

Desk Checking:
A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing:
Testing in which the action of a test case is parameterized by
externally defined data values, maintained as a file or spreadsheet.

Debugging:
The process of finding and removing the causes of software failures.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it.
Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with
selected test data.

Error:
1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing:
Testing based on the knowledge of the types of errors made
in the past that are likely for the system under test.

Fault:
A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical
failures.

Fault-based Testing:
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.

Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.

Gorilla Testing: Testing one particular module, functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

Heuristics Testing: Another term for failure-directed testing.

Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

Hybrid Testing: A combination of top-down testing combined with bottom-up testing of prioritized or available components.

Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path: Program statement sequence that can never be executed.

Inputs: Products, services, or information needed from suppliers to make a process work.

Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant

components: product (document) improvement and process improvement (document production and inspection).

Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

Integration: The process of combining software components or hardware components, or both, into an overall system.
Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.

Installation Testing: Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

Localization Testing: This term refers to making software specifically designed for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.

Mean: A value derived by adding several qualities and dividing the sum by the number of these quantities.

Measurement: 1) The act or process of measuring. A figure, extent, or amount obtained by measuring.

Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

Monkey Testing: Testing a system or an Application on the fly, i.e. just few tests here and there to ensure the system or an application does not crash out.
Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.

Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for deter-mining the operational effectiveness and suitability of a system prior to deployment.

Operational Testing: Testing performed by the end user on software in its normal operating environment.

Outputs: Products, services, or information supplied to meet end user needs.

Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.

Peer Reviews: A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.

Policy: Managerial desires and intents concerning either process (intended objectives) or
products (desired attributes).

Problem: Any deviation from defined standards. Same as defect.

Procedure: The step-by-step method followed to ensure that standards are met.

Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

Product: The output of a process; the work product. There are three useful classes of

products: manufactured products (standard and custom), administrative/ information
products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.
Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

Path Testing: Testing wherein all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate large number of users. Also know as "Load Testing".

Positive Testing:
Testing aimed at showing software works. Also known as "test to pass".

Productivity:
The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

Proof Checker:
A program that checks formal proofs of program properties for logical correctness.

Prototyping:
Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

Qualification Testing:
Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to "quality means meets requirements."
NOTE: Operationally, the word quality refers to products.


Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle:
A group of individuals with related interests who meet at regular intervals to consider problems or other matters related to the quality of the outputs of a process and to the correction of problems or the improvement of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization regarding quality, as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

Quality Control (QC): The process by which product quality is compared with applicable standards; and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function, that is, the performance of these tasks is the responsibility of the people working within the process.

Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
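
A minimal sketch of the idea, assuming a trivial function under test and using a fixed random seed so the run is repeatable (both are assumptions for illustration, not part of any standard tool):

import random

# Hypothetical function under test.
def absolute(x):
    return x if x >= 0 else -x

# Random testing: draw a subset of the input domain at random and check a
# property that must hold for every input (result is never negative and
# matches Python's built-in abs()).
random.seed(1)                      # fixed seed so the run is repeatable
for _ in range(1000):
    x = random.randint(-10**6, 10**6)
    assert absolute(x) >= 0
    assert absolute(x) == abs(x)
print("1000 random inputs checked")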

Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Reliability: The probability of failure-free operation for a specified period.

Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

Review:
A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

Run Chart:
A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.

Software Characteristic:
An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

Software Feature:
A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

Software Tool:
A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

Standards:
The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Standardize:
To implement procedures that ensure the output of a process is maintained at a desired level.

Statement Coverage Testing:
A test method satisfying coverage criteria that requires each statement be executed at least once.
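
For instance, in the hypothetical function below, no single test executes every statement; the two tests together achieve statement coverage:

# Hypothetical function whose two return statements sit on different branches.
def safe_divide(a, b):
    if b == 0:
        return None          # statement only reached when b == 0
    return a / b             # statement only reached when b != 0

# Statement coverage needs both tests: each executes a statement the
# other one skips.
assert safe_divide(10, 2) == 5
assert safe_divide(10, 0) is None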

Statement of Requirements:
The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

Static Testing:
Verification performed without executing the system’s code. Also called static analysis.

Statistical Process Control:
The use of statistical techniques and tools to measure an ongoing process for change or stability.

Structural Coverage:
A coverage criterion requiring that each pair of module invocations be executed at least once.

Stub:
A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
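
A minimal sketch, assuming a top-level report component is tested before the database layer it depends on has been written; the stub stands in for the missing component by returning canned data (all names are hypothetical):

# Component under test: formats a report from whatever data source it is given.
def build_report(fetch_totals):
    totals = fetch_totals()
    return "Total sales: %d" % sum(totals)

# Stub: minimally simulates the not-yet-integrated database component by
# returning canned data, so the top-level component can be tested now.
def fetch_totals_stub():
    return [10, 20, 30]

assert build_report(fetch_totals_stub) == "Total sales: 60"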

Syntax:
1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use;
2) the structure of expressions in a language; and
3) the rules governing the structure of the language.
Sanity Testing:
Brief test of major functional elements of a piece of software to determine if it is basically operational.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

Security Testing:
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing:
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
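
A minimal sketch of a smoke test for a hypothetical application: each major function is called once, and any exception means the build is not worth testing further (the functions below are stand-ins invented for illustration):

# Quick-and-dirty smoke test of a hypothetical application.
def start_app():      return "started"
def open_record(i):   return {"id": i}
def save_record(rec): return True

def smoke_test():
    checks = {
        "start": lambda: start_app(),
        "open":  lambda: open_record(1),
        "save":  lambda: save_record({"id": 1}),
    }
    for name, call in checks.items():
        call()                       # any exception fails the smoke test immediately
        print("smoke check passed:", name)

smoke_test()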

Soak Testing:
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for the software.

Software Testing:
A set of activities conducted with the intent of finding errors in software.

Static Analysis:
Analysis of a program carried out without executing the program.

Static Analyzer:
A tool that carries out static analysis.

Static Testing:
Analysis of a program carried out without executing the program.

Storage Testing:
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Structural Testing:
Testing based on an analysis of internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

System:
A collection of people, machines, and methods organized to accomplish a set of specified functions.

System Simulation:
Another name for prototyping.

Technical Review:
A review focused on the content of the technical material being reviewed.

Test Bed:
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.

Test Development:
The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

Test Executive:
Another term for test harness.

Test Harness:
A software tool that enables the testing of software components: it links test capabilities to perform specific tests, accepts program inputs, simulates missing components, compares actual outputs with expected outputs to determine correctness, and reports discrepancies.
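
A minimal sketch of such a harness (the component and test data are hypothetical): it drives the component with each input, compares actual and expected output, and reports discrepancies:

# Minimal test-harness sketch: drives the component under test, compares
# actual output with expected output, and reports discrepancies.
def component_under_test(x):          # hypothetical component
    return x * 2

def run_harness(cases):
    failures = 0
    for case_id, given, expected in cases:
        actual = component_under_test(given)
        if actual == expected:
            print("PASS", case_id)
        else:
            failures += 1
            print("FAIL %s: expected %r, got %r" % (case_id, expected, actual))
    return failures

cases = [("TC-1", 2, 4), ("TC-2", 0, 0), ("TC-3", -3, -6)]
assert run_harness(cases) == 0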

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:
1) The process of exercising software to verify that it satisfies specified requirements and to detect errors.
2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
3) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Case: Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A test case will consist of information such as the requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc. It is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
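
As an illustration only, a test case for a hypothetical login feature can be written down as structured data holding the elements listed above (all identifiers and field values are invented):

# A test case expressed as structured data; the fields mirror the elements
# named in the definition above (hypothetical login example).
test_case = {
    "id": "TC_LOGIN_001",
    "requirement": "REQ-AUTH-01",
    "preconditions": ["User account 'demo' exists", "Application is reachable"],
    "steps": [
        "Open the login page",
        "Enter user name 'demo' and a valid password",
        "Click the Login button",
    ],
    "inputs": {"username": "demo", "password": "<valid password>"},
    "expected_result": "User is taken to the home page",
    "environment": "Test server, supported browser",
}
print(test_case["id"], "->", test_case["expected_result"])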

Test Driven Development:
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often producing roughly as many lines of test code as production code.
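
A minimal sketch of the cycle, using Python's built-in unittest module and a hypothetical slugify() function: the tests are written first and the production code is kept to the minimum that makes them pass:

import unittest

# Tests written first (they would fail until the function below exists and
# satisfies them).
class TestSlugify(unittest.TestCase):          # hypothetical example
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("hello world"), "hello-world")

    def test_result_is_lower_case(self):
        self.assertEqual(slugify("Hello"), "hello")

# Minimal production code written only to make the tests above pass.
def slugify(text):
    return text.lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()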

Test Driver:
A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment:
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test First Design:
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Plan:
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure:
A document providing detailed instructions for the execution of one or more test cases.

Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification:
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.

Test Suite:
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools:
Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing:
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing:
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability Matrix:
A document showing the relationship between Test Requirements and Test Cases.
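
Reduced to its essentials, a traceability matrix can be sketched as a simple mapping from (hypothetical) test requirements to the test cases that cover them; an empty entry immediately exposes an uncovered requirement:

# Traceability matrix reduced to a mapping of requirement IDs to the test
# cases that cover them (all IDs are invented for illustration).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # not yet covered by any test case
}

uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements without test cases:", uncovered)
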
Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

Test Plan:
A formal or informal plan to be followed to assure the controlled testing of the product under test.

Unit Testing:
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
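
A minimal sketch using Python's built-in unittest module and a hypothetical word_count() unit (the smallest independently testable piece in this example):

import unittest

# Hypothetical unit under test.
def word_count(text):
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("unit testing in isolation"), 4)

    def test_extra_whitespace_is_ignored(self):
        self.assertEqual(word_count("  two   words  "), 2)

if __name__ == "__main__":
    unittest.main()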

Usability Testing:
Testing the ease with which users can learn and use a product.

Use Case:
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
V-Diagram (model): A diagram that visualizes the order of testing activities and their corresponding phases of development.

Verification:
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Validation:
The process of evaluating software to determine compliance with specified requirements.

Walkthrough:
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White-box Testing:
Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear-box, glass-box, or open-box testing. White-box testing determines whether program-code structure and logic are faulty. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see if the program diverges from its intended goal. White-box testing does not account for errors caused by omission, and all visible code must also be readable.
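
A small hypothetical example of the white-box approach: the test data is chosen by reading the code and targeting the internal branch condition it contains:

# White-box example: test data is derived from the code's internal logic.
# Reading the implementation shows a branch at exactly 3 failed attempts,
# so the tests target values on either side of that internal condition.
def lock_account(failed_attempts):
    if failed_attempts >= 3:     # internal rule visible only in the code
        return "locked"
    return "active"

assert lock_account(2) == "active"   # exercises the False branch
assert lock_account(3) == "locked"   # exercises the True branch
assert lock_account(4) == "locked"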

Workflow Testing:
Scripted end-to-end testing that duplicates the specific workflows expected to be used by the end user.

'Software Quality Assurance'
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.