Measurement in software testing
A software metric is a measure of some property of a piece of software or its specifications.
- A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
- A quality metric is a quantitative measurement of the degree to which an item possesses a given quality attribute.
Metrics are one of the most important responsibilities of the Test Team. They allow a deeper understanding of the application's performance and behaviour, and fine-tuning of the application can be guided only by what is measured. In a typical QA process, there are many metrics that provide this information.
The following can be regarded as the fundamental metrics:
• Functional or Test Coverage Metrics.
• Software Release Metrics.
• Software Maturity Metrics.
• Reliability Metrics.
– Mean Time To First Failure (MTTFF).
– Mean Time Between Failures (MTBF).
– Mean Time To Repair (MTTR).
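As an illustration only (the incident data and function names below are assumptions, not part of the original material), here is a minimal Python sketch that derives all three reliability figures from a list of failure/repair timestamps:

```python
# Minimal sketch, assuming each incident is recorded as a
# (failure_time_h, repair_done_h) pair in hours since release.
incidents = [(120.0, 124.0), (310.0, 311.5), (480.0, 486.0)]

def mttff(incidents):
    """Mean Time To First Failure: time of the earliest observed failure."""
    return min(fail for fail, _ in incidents)

def mttr(incidents):
    """Mean Time To Repair: average repair duration per incident."""
    return sum(done - fail for fail, done in incidents) / len(incidents)

def mtbf(incidents, observed_hours):
    """Mean Time Between Failures: total uptime divided by failure count."""
    downtime = sum(done - fail for fail, done in incidents)
    return (observed_hours - downtime) / len(incidents)

print(mttff(incidents))                  # 120.0 hours
print(round(mttr(incidents), 2))         # 3.83 hours
print(round(mtbf(incidents, 720.0), 2))  # 236.17 hours
```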
Someone has rightly said that if something cannot be measured, it cannot be managed or improved. There is immense value in measurement, but you should always make sure that you get some value out of any measurement you are doing. You should be able to answer the following questions before starting or continuing any measurement programme:
- What is the purpose of this measurement programme?
- What data items are you collecting, and how are you reporting them?
- What is the correlation between the data and the conclusions drawn from it?
- What value are you getting out of this programme?
Any measurement programme can be divided into two parts. The first part is the collection of data; the second is the analysis of that data to extract insights that can support decision making. Information collected during a measurement programme can help in:
- Finding the relation between data points,
- Correlating cause and effect,
- Providing input for future planning.
Normally, any metrics programme involves certain steps that are repeated over a period of time. It starts with identifying what to measure. Once the purpose is known, data can be collected and converted into metrics. Based on the analysis of these metrics, appropriate action can be taken, and if necessary the metrics can be refined and the measurement goals adjusted.
The data presented by the testing team, together with their opinion, normally decides whether a product will go to market or not. It therefore becomes very important for test teams to present data and opinions in such a way that the data is meaningful to everyone and decisions can be taken based on it.
Every testing project should be measured against its schedule and the quality requirements for its release. There are many charts and metrics that we can use to track progress and measure the quality requirements of the release. We will discuss here some of these charts and the value they add to our product.
Some of the charts / metrics that could be used in your decision-making process are:
- Defect finding rate
- Defect fixing rate
- Defect cause distribution chart
- Closed defect distribution
- Functional coverage
- Platform coverage
When we can measure what we are speaking about and express it in numbers, we know something about it; but when we cannot measure it, when we cannot express it in numbers, our knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but we have scarcely, in our thoughts, advanced to the stage of science. (after Lord Kelvin)
Why do we need Metrics?
“We cannot improve what we cannot measure.”
“We cannot control what we cannot measure.”
Test metrics help us to:
- Take decisions for the next phase of activities
- Provide evidence for claims and predictions
- Understand the type of improvement required
- Take decisions on process or technology changes
II) Types of metrics
Base Metrics (Direct Measure)
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.
Ex: # of Test Cases, # of Test Cases Executed
Calculated Metrics (Indirect Measure)
Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).
Ex: % Complete, % Test Coverage
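To make the relationship between the two types concrete, here is a minimal Python sketch (the counts are illustrative assumptions) that derives calculated metrics from base metrics:

```python
# Base metrics: raw counts a Test Analyst gathers (illustrative numbers).
planned  = 200   # of Test Cases
executed = 150   # of Test Cases Executed
passed   = 130   # of Test Cases Passed
blocked  = 10    # of Test Cases Blocked

# Calculated metrics: derived from the base counts above.
pct_complete = executed / planned * 100    # 75.0
pct_passed   = passed / executed * 100     # ~86.7
pct_blocked  = blocked / planned * 100     # 5.0

print(f"% Complete: {pct_complete:.1f}")
print(f"% Test Cases Passed: {pct_passed:.1f}")
print(f"% Test Cases Blocked: {pct_blocked:.1f}")
```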
Base Metrics & Test Phases
- # of Test Cases (Test Development Phase)
- # of Test Cases Executed (Test Execution Phase)
- # of Test Cases Passed (Test Execution Phase)
- # of Test Cases Failed (Test Execution Phase)
- # of Test Cases Under Investigation (Test Development Phase)
- # of Test Cases Blocked (Test Development/Execution Phase)
- # of Test Cases Re-executed (Regression Phase)
- # of First Run Failures (Test Execution Phase)
- Total Executions (Test Reporting Phase)
- Total Passes (Test Reporting Phase)
- Total Failures (Test Reporting Phase)
- Test Case Execution Time (Test Reporting Phase)
- Test Execution Time (Test Reporting Phase)
Calculated Metrics & Phases
The metrics below are produced in the Test Reporting phase or the post-test analysis phase:
- % Complete
- % Defects Corrected
- % Test Coverage
- % Rework
- % Test Cases Passed
- % Test Effectiveness
- % Test Cases Blocked
- % Test Efficiency
- 1st Run Fail Rate
- Defect Discovery Rate
- Overall Fail Rate
III) Crucial Web Based Testing Metrics
Test Plan coverage on Functionality
Total number of requirements v/s the number of requirements covered through test scripts.
- (Number of requirements covered / Total number of requirements) * 100
Define requirements at the time of effort estimation.
Example: the total number of requirements estimated is 46; requirements tested, 39; blocked, 7. The coverage is therefore (39 / 46) * 100 ≈ 84.8%.
Note: define requirements clearly at the project level.
Test Case defect density
Total number of defects found in test scripts v/s the number of test scripts developed and executed.
- (Number of defective test scripts / Total test scripts executed) * 100
Example: total test scripts developed 1360; executed 1280; passed 1065; failed 215.
So, the test case defect density is
(215 / 1280) * 100 = 16.8%
This 16.8% value can also be called the test case efficiency %, since it depends on the number of test cases that uncovered defects.
Defect Slippage Ratio
Number of defects slipped (reported from production) v/s number of defects reported during execution.
- [Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)] * 100
Example: customer-filed defects are 21; total defects found while testing are 267; invalid (withdrawn) defects are 17.
So, the slippage ratio is
[21 / (267 - 17)] * 100 = 8.4%
Requirement Volatility
Number of requirements agreed v/s number of requirements changed.
- (Number of Requirements Added + Deleted + Modified) * 100 / Number of Original Requirements
- Ensure that the requirements are normalized or defined properly while estimating
Example: the VSS 1.3 release initially had 67 requirements; later, 7 new requirements were added, 3 of the initial requirements were removed, and 11 were modified.
So, the requirement volatility is
(7 + 3 + 11) * 100 / 67 = 31.34%
This means almost a third of the requirements changed after they were initially identified.
Review Efficiency
Review efficiency is a metric that offers insight into the quality of both reviews and testing.
Some organizations also refer to this as “static testing” efficiency and aim to find a minimum of 30% of defects in static testing.
Review Efficiency = (Total number of defects found by reviews / Total number of project defects) * 100
Example: a project found a total of 269 defects in its various reviews, all of which were fixed; the test team then reported 476 valid defects.
So, the review efficiency is [269 / (269 + 476)] * 100 = 36.1%
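All of the web-based testing metrics above reduce to simple ratios. As a minimal sketch, the following Python snippet recomputes each of them from the numbers used in the worked examples:

```python
def pct(part, whole):
    """Return part/whole as a percentage, rounded to one decimal place."""
    return round(part / whole * 100, 1)

# Numbers reused from the worked examples above.
coverage   = pct(39, 46)           # test plan coverage: 84.8
density    = pct(215, 1280)        # test case defect density: 16.8
slippage   = pct(21, 267 - 17)     # defect slippage ratio: 8.4
volatility = pct(7 + 3 + 11, 67)   # requirement volatility: 31.3
review_eff = pct(269, 269 + 476)   # review efficiency: 36.1

print(coverage, density, slippage, volatility, review_eff)
```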
Efficiency and Effectiveness of Processes
- Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
- Efficiency: Doing the thing right. It concerns the resources used to render the service.
Metrics for Software Testing
• Defect Removal Effectiveness
DRE = (Defects removed during the development phase / Defects latent in the product) * 100
Defects latent in the product = Defects removed during the development phase + Defects found later by the user
• Efficiency of Testing Process (define size in KLOC, FP, or requirements)
Testing Efficiency = Size of software tested / Resources used
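For instance (a hypothetical illustration, not from the original material): if 90 defects are removed during development and users later find 10 more, then defects latent in the product = 90 + 10 = 100, and DRE = (90 / 100) * 100 = 90%.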