Testability

Testability, a property applying to an empirical hypothesis, involves two components:

  • The logical property that is variously described as contingency, defeasibility, or falsifiability, which means that counterexamples to the hypothesis are logically possible.
  • The practical feasibility of observing a reproducible series of such counterexamples if they do exist.

In short, a hypothesis is testable if there is some real prospect of deciding whether it is true or false of actual experience. Upon this property of its constituent hypotheses rests the ability to decide whether a theory can be supported or falsified by the data of actual experience. Even when hypotheses are tested, initial results may be inconclusive.

Software Testability

Software testability is the degree to which a software artifact (i.e., a software system, software module, or requirements or design document) supports testing in a given test context. If the testability of the software artifact is high, then finding faults in the system (if it has any) by means of testing is easier.

Testability is not an intrinsic property of a software artifact and cannot be measured directly (as, for example, software size can). Instead, testability is an extrinsic property that results from the interdependency of the software to be tested and the test goals, test methods used, and test resources (i.e., the test context).

A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may prevent parts of the software, or of the software requirements, from being tested at all.

To link testability with the difficulty of finding potential faults in a system (if they exist) by testing it, a relevant measure is the number of test cases needed to form a complete test suite, i.e., a test suite such that, after all its test cases are applied to the system, the collected outputs unambiguously determine whether the system is correct according to some specification. If this size is small, testability is high. Based on this measure, a testability hierarchy has been proposed.
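
As an illustration, consider a pure function over a tiny, finite input domain: a complete test suite is then small enough to write out in full, so testability is maximal. The following Python sketch (the function and names are hypothetical, chosen purely for illustration) pairs every possible input with its expected output.

```python
# A highly testable component: a pure function over a tiny, finite
# input domain. The complete test suite below pairs every possible
# input with its expected output, so a passing run decides
# unambiguously that the function conforms to this specification.

def majority(a: bool, b: bool, c: bool) -> bool:
    """Return True if at least two of the three inputs are True."""
    return (a and b) or (b and c) or (a and c)

# The complete test suite: all 2**3 = 8 inputs and expected outputs.
COMPLETE_SUITE = [
    ((False, False, False), False),
    ((False, False, True),  False),
    ((False, True,  False), False),
    ((False, True,  True),  True),
    ((True,  False, False), False),
    ((True,  False, True),  True),
    ((True,  True,  False), True),
    ((True,  True,  True),  True),
]

if __name__ == "__main__":
    for inputs, expected in COMPLETE_SUITE:
        assert majority(*inputs) == expected, f"failed on {inputs}"
    print("complete test suite passed")
```

For realistic systems the input domain is far too large to enumerate, which is why the size of a complete test suite is a useful yardstick for testability rather than a practical testing strategy.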

The effort and effectiveness of software tests depend on numerous factors, including:

  • Properties of the software requirements
  • Properties of the software itself (such as size, complexity and testability)
  • Properties of the test methods used
  • Properties of the development and testing processes
  • Qualification and motivation of the persons involved in the test process

The testability of software components (modules, classes) is determined by factors such as:

  • Controllability: The degree to which it is possible to control the state of the component under test (CUT) as required for testing.
  • Observability: The degree to which it is possible to observe (intermediate and final) test results.
  • Isolateability: The degree to which the component under test can be tested in isolation.
  • Separation of concerns: The degree to which the component under test has a single, well-defined responsibility.
  • Understandability: The degree to which the component under test is documented or self-explaining.
  • Automatability: The degree to which it is possible to automate testing of the component under test.
  • Heterogeneity: The degree to which the use of diverse technologies requires diverse test methods and tools to be used in parallel.
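
Several of these factors can be illustrated in code. The following Python sketch (the class and all names are hypothetical) shows a component designed for controllability and isolateability via dependency injection, and for observability by exposing its intermediate reading:

```python
# Hypothetical sketch: a component designed for testability.
# Controllability and isolateability come from injecting the sensor
# dependency (a stub replaces real hardware in tests); observability
# comes from exposing the intermediate reading instead of hiding it.

from typing import Callable, Optional

class TemperatureMonitor:
    def __init__(self, read_sensor: Callable[[], float]) -> None:
        self._read_sensor = read_sensor             # injected -> controllable
        self.last_reading: Optional[float] = None   # exposed -> observable

    def is_overheating(self, limit: float = 90.0) -> bool:
        self.last_reading = self._read_sensor()
        return self.last_reading > limit

if __name__ == "__main__":
    # The stub forces the CUT into a desired state, isolating it from
    # real hardware and making the test deterministic.
    monitor = TemperatureMonitor(read_sensor=lambda: 95.0)
    assert monitor.is_overheating() is True
    assert monitor.last_reading == 95.0   # intermediate result is observable
    print("component tested in isolation")
```

Dependency injection is the common mechanism here: because the collaborator is passed in rather than hard-wired, the test can both control the component's inputs and isolate it from its environment.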

Fault Detection

A “fault” is another word for a problem. A “root cause” fault is a fundamental, underlying problem that may lead to other problems and observable symptoms, although it may not be directly observable itself. A root cause is also generally associated with procedures for repair.

A “fault” or “problem” does not have to be the result of a complete failure of a piece of equipment, or even involve specific hardware. For instance, a problem might be defined as non-optimal operation or off-spec product. In a process plant, root causes of non-optimal operation might be hardware failures, but problems might also be caused by poor choice of operating targets, poor feedstock quality, poor controller tuning, partial loss of catalyst activity, buildup of coke, low steam system pressure, sensor calibration errors, or human error. A fault may be considered a binary variable (“OK” vs. “failed”), or it may have a numerical “extent”, such as the size of a leak or a measure of inefficiency.

Fault detection is recognizing that a problem has occurred, even if you don’t yet know the root cause. Faults may be detected by a variety of quantitative or qualitative means. This includes many of the multivariable, model-based approaches discussed later. It also includes simple, traditional techniques for single variables, such as alarms based on high, low, or deviation limits for process variables or rates of change; Statistical Process Control (SPC) measures; and summary alarms generated by packaged subsystems.
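
As a minimal sketch of such traditional single-variable techniques, the following Python code applies high/low limit alarms and a rate-of-change check to a series of samples (the limits are assumed values, not taken from any particular plant):

```python
# Minimal sketch of traditional single-variable fault detection:
# high/low limit alarms and a rate-of-change check. The limits are
# assumed values chosen purely for illustration.

HIGH_LIMIT = 80.0   # high alarm limit
LOW_LIMIT = 20.0    # low alarm limit
MAX_RATE = 5.0      # maximum allowed change between consecutive samples

def detect_faults(samples):
    """Yield (sample index, alarm message) for each detected condition."""
    prev = None
    for i, value in enumerate(samples):
        if value > HIGH_LIMIT:
            yield i, f"high alarm: {value}"
        elif value < LOW_LIMIT:
            yield i, f"low alarm: {value}"
        if prev is not None and abs(value - prev) > MAX_RATE:
            yield i, f"rate-of-change alarm: {prev} -> {value}"
        prev = value

if __name__ == "__main__":
    readings = [50.0, 52.0, 60.0, 85.0, 15.0]
    for index, message in detect_faults(readings):
        print(f"sample {index}: {message}")
```

Note that such checks only recognize that something is wrong; they say nothing about the root cause, which is the job of fault diagnosis.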

Fault diagnosis is pinpointing one or more root causes of a problem, to the point where corrective action can be taken. It is also referred to as “fault isolation”, a term used especially to emphasize the distinction from fault detection, since in common, casual usage “fault diagnosis” often includes fault detection as well.
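
One simple way to picture fault isolation is rule-based matching of observed symptoms against symptom patterns associated with candidate root causes. The following Python sketch (the rules and symptom names are hypothetical) narrows a set of observations down to the root causes that could explain them:

```python
# Hypothetical sketch of rule-based fault isolation: each candidate
# root cause is associated with the set of symptoms it produces, and
# a cause is reported when all of its symptoms have been observed.

RULES = {
    "low steam system pressure":  {"low_output", "temp_drop"},
    "sensor calibration error":   {"reading_drift"},
    "loss of catalyst activity":  {"low_output", "off_spec_product"},
}

def isolate(observed):
    """Return the root causes whose symptom sets the observations cover."""
    return [cause for cause, symptoms in RULES.items()
            if symptoms <= observed]

if __name__ == "__main__":
    candidates = isolate({"low_output", "off_spec_product", "temp_drop"})
    # Both matching causes remain candidates; further tests would be
    # needed to isolate a single root cause.
    print(candidates)  # ['low steam system pressure', 'loss of catalyst activity']
```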

Fault detection capability is a measure of the faults detected by the fault detection system (usually built-in test) compared to the total number of system faults. Fault isolation capability is a measure of the percent of time the failure can be isolated to a given number of replaceable (repairable) components. Fault isolation can be accomplished by a diagnostic analysis, built-in test, or using external test equipment. False alarm rate is a measure of the rate at which the system declares the detection of a failure when no failure has occurred.
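
These three measures are simple ratios, as the following Python sketch shows (the counts are assumed purely for illustration):

```python
# The three measures from the text as simple ratios; the counts in the
# example run below are assumed values, not real system data.

def fault_detection_capability(detected: int, total_faults: int) -> float:
    """Fraction of all system faults caught by the detection system."""
    return detected / total_faults

def fault_isolation_capability(isolated: int, detected: int) -> float:
    """Fraction of detected failures isolated to a given number of
    replaceable (repairable) components."""
    return isolated / detected

def false_alarm_rate(false_alarms: int, operating_hours: float) -> float:
    """Failures declared with no actual failure, per operating hour."""
    return false_alarms / operating_hours

if __name__ == "__main__":
    print(f"detection capability: {fault_detection_capability(92, 100):.0%}")
    print(f"isolation capability: {fault_isolation_capability(85, 92):.1%}")
    print(f"false alarms/hour:    {false_alarm_rate(3, 1000):.4f}")
```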
