Saturday, September 25, 2010

An introduction to Control Flow Testing – A Black Box Testing Technique

Behavioral control-flow testing is a fundamental model of black-box testing, and the control-flow graph is the basic model used in test design.

Behavioral control-flow testing is a fundamental technique that is applicable to the majority of software programs and is quite effective for them. It is best suited to comparatively small programs, or to small segments of larger programs.

The Technique of Test Design & Execution

Test design begins by creating a behavioral control-flow graph model from requirements documents such as specifications. The list notation is generally more convenient than a graphical representation, although drawing small graphs can aid model design.

Test design and execution consists of the following steps:

Step 1: Examine the requirements and validate: Examine the requirements and analyze them for operationally satisfactory completeness and self-consistency. Confirm that the specification correctly reflects the requirements, and correct the specification if it doesn't.


Step 2: Rewrite the specification: Rewrite the specification in pseudo-code as a sequence of short sentences. The use of a semiformal language like pseudo-code helps to ensure that things are stated unambiguously. Although this looks like programming, it is not programming - it is modeling. We can use the linked-list notation because it is easier.

We need to pay special attention to predicates. Break up compound predicates into equivalent sequences of simple predicates. Watch for selector nodes and document them as simple lists. Remove any "ANDs" that are not part of predicates - break the sentence in half instead.
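As a sketch of what breaking up a compound predicate looks like, consider the made-up specification sentence "IF age >= 18 AND resident THEN grant access" (the condition and names are purely illustrative, not from any real requirement):

```python
# Illustrative spec sentence: "IF age >= 18 AND resident THEN grant access".
# The compound predicate hides two decision nodes; the model replaces it
# with an equivalent sequence of simple predicates.

def compound(age, resident):
    # a single node with a second condition hidden inside it
    return age >= 18 and resident

def as_simple_sequence(age, resident):
    # node 1: simple predicate on age
    if age >= 18:
        # node 2: simple predicate on residency
        if resident:
            return True
    return False

# Both formulations agree on every input, but only the second exposes
# each decision as its own node, with its own links to cover.
for age in (17, 18, 30):
    for resident in (True, False):
        assert compound(age, resident) == as_simple_sequence(age, resident)
print("equivalent")
```

The split version adds nodes, but that is the point: the extra links are exactly the behavior a covering set of test paths must exercise.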


Step 3: Number the sentences uniquely. These numbers will later serve as the node names.


Step 4: Build the model. We can program our model in an actual programming language and use the programmed model as an aid to test design.
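A programmed model can be as small as a table of numbered nodes and a routine that walks it. The sketch below assumes an invented two-predicate specification (the node numbers, predicates on an input x, and "accept"/"reject" labels are all illustrative):

```python
# Minimal executable control-flow model. Each entry maps a node number
# to (label, true-branch successor, false-branch / unconditional successor).
MODEL = {
    1: ("start", None, 2),
    2: ("p1: x > 0", 3, 5),     # predicate node: true -> 3, false -> 5
    3: ("p2: x < 10", 4, 5),    # predicate node: true -> 4, false -> 5
    4: ("accept", None, 6),
    5: ("reject", None, 6),
    6: ("end", None, None),
}

PREDICATES = {2: lambda x: x > 0, 3: lambda x: x < 10}

def trace(x):
    """Return the node path the model traverses for input x."""
    node, path = 1, []
    while node is not None:
        path.append(node)
        _label, true_next, other_next = MODEL[node]
        if node in PREDICATES:
            node = true_next if PREDICATES[node](x) else other_next
        else:
            node = other_next  # unconditional edge
    return path

print(trace(5))    # path through both true branches: [1, 2, 3, 4, 6]
print(trace(-1))   # p1 false: [1, 2, 5, 6]
```

Because the model runs, candidate test inputs can be checked against it immediately, which is the "aid to test design" the step describes.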

A few tips for effective modeling:


a) Compound predicates should be avoided in the model and spelled out (e.g., replaced by equivalent graphs) so as not to hide essential complexity.

b) Use a truth table instead of a graph to model compound predicates with more than three component predicates.
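To illustrate tip (b), here is a truth table for a made-up compound predicate with four components, "A and (B or not C) and D" (the condition itself is purely illustrative). With this many components, enumerating all sixteen rows is easier to review than drawing the equivalent graph:

```python
from itertools import product

# Hypothetical four-component compound predicate for the truth table.
def outcome(a, b, c, d):
    return a and (b or not c) and d

print(" A     B     C     D   | outcome")
for a, b, c, d in product([False, True], repeat=4):
    print(f"{a!s:5} {b!s:5} {c!s:5} {d!s:5} | {outcome(a, b, c, d)}")
```

Each row of the table is a candidate truth-value combination; the rows where the outcome flips identify the component predicates that the tests must control independently.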

c) Segment the model into pieces that start and end with a single node and note which predicates are correlated with which in all other segments.

d) Build the test paths as combinations of paths in the segments, eliminating unachievable paths as we go.

e) Use contradictions between correlated predicates to rule out combinations wholesale.

Step 5: Verify the model, since a tester's work is as bug-prone as a programmer's.

Step 6: Select the test paths.

A few tips for effective path selection:

a) Pick enough paths through the model to assure 100 percent link coverage. Don't worry about having too many tests.

b) Start by picking the obvious paths that relate directly to the requirements and see if we can achieve the coverage that way.

c) Augment these tests by as many paths as needed to guarantee 100 percent link coverage.
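The bookkeeping behind tips (a) through (c) can be sketched as a small coverage calculation. The graph below is invented for the example; the idea is simply to start from the obvious requirements path and add paths until every link is hit:

```python
# Illustrative graph: the set of links (edges) the tests must cover.
EDGES = {(1, 2), (2, 3), (2, 5), (3, 4), (3, 5), (4, 6), (5, 6)}

def link_coverage(paths):
    """Fraction of the graph's links exercised by the given node paths."""
    hit = {(a, b) for path in paths for a, b in zip(path, path[1:])}
    return len(hit & EDGES) / len(EDGES)

paths = [[1, 2, 3, 4, 6]]                     # the obvious requirements path
print(round(link_coverage(paths), 2))         # partial coverage: 0.57

paths += [[1, 2, 5, 6], [1, 2, 3, 5, 6]]      # augment until every link is hit
print(link_coverage(paths))                   # 1.0
```

In practice the graph comes from the model built in Step 4, but the accounting is the same: paths are added until the coverage fraction reaches 1.0.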


Step 7: Sensitize the test paths. Paths are sensitized by interpreting the predicates along each path in terms of input values. That is, select input values that would cause the software to do the equivalent of traversing our selected paths if there were no bugs.

The interpreted predicates yield a set of conditions (equations or inequalities) such that any solution to that set will cause the selected path to be traversed. If sensitization is not obvious, check the work for a specification or model bug before investing time in equation solving.
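As a sketch, suppose the interpreted predicates along a chosen path reduce to the inequalities x > 0 and x < 10 (purely illustrative). Any x satisfying their conjunction traverses the path, so sensitization amounts to finding one solution:

```python
# Interpreted predicates along the selected path, as a conjunction of
# constraints on the input x (the inequalities are illustrative).
path_constraints = [lambda x: x > 0, lambda x: x < 10]

def sensitize(candidates):
    """Return the first candidate input satisfying every path constraint."""
    for x in candidates:
        if all(c(x) for c in path_constraints):
            return x
    return None  # no solution found: suspect an unachievable path

print(sensitize([-3, 0, 42, 7]))  # first candidate satisfying both: 7
```

A return value of None is itself informative: if no input satisfies the conjunction, the path may be unachievable, which is exactly the model-checking hint the step gives.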

Step 8: Predict and record the expected outcome for each test.

Step 9: Define the validation criterion for each test.

Step 10: Run the tests.

Step 11: Confirm the outcomes.

Step 12: Confirm the path.

Assumptions about bugs targeted by Control Flow Testing:


1) The majority of bugs result in control-flow errors or misbehavior, and can therefore be uncovered by control-flow testing.

2) Bugs directly affect the control-flow predicates, or the control flow itself may be incorrect.

Pros & Cons of Control Flow Testing:

1) These days we use structured programming languages, which reduce control-flow bugs dramatically. In older applications built with assembly language, COBOL, and the like, such control-flow bugs were quite common.

2) Control-flow testing is not the best technique for computational bugs: bugs that have no impact on the control flow may not be detected by it. We can use data-flow testing and domain testing to unearth such bugs.

3) We won't be able to detect a missing requirement unless our model includes it even though it escaped the programmer's attention.


4) We won't be able to detect unwanted features that happened to get included in the model but were not present in the requirements.


5) If the programmers have already done thorough unit testing, control-flow testing is unlikely to detect many new bugs.

6) If the same person has written both the program and the test model, there is little chance of detecting missing features and paths. If someone else designs the control-flow tests, more effort must be invested, but paths and features missing from the program are more likely to be found.

7) Software is rarely correct merely by coincidence, but such a coincidence would defeat the control-flow testing technique unless we also verified all intermediate calculations and predicate values.


Automation of Control Flow Testing Process:


As of now, commercial tools directly supporting behavioral control-flow testing are not available, but many tools support structural control-flow testing. We can use those tools by programming our models in a supported programming language, such as C, Pascal, or Basic.

If we have created a properly detailed graph model, we have already done most of the work required to express the semiformal model as a program.

It should be borne in mind that programming a model is definitely not the same as programming the real thing. The major difference is that we don't have to be concerned with all the real-life stuff: database access, I/O, operating-system interfaces, environmental issues, and the rest of the places where real bugs are born.

The model program can omit many details: it doesn't have to run on the target platform, it doesn't have to be efficient, and, most important of all, it doesn't have to be integrated with the rest of the software.

What, then, is the use of this model, when running it is not at all the same as running tests on the actual program, and how should we debug our tests? The model is used as a tool to help design a covering set of tests, to help pick and sensitize paths, and to act as an oracle for the real software. If we can create a running model, we can also apply commercial test tools to it, which makes our job much easier.
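The oracle role can be sketched in a few lines. Both functions below are hypothetical stand-ins: `model` is the programmed behavioral model and `system_under_test` represents the real implementation; in a real project the latter would be the actual software:

```python
# Hypothetical behavioral model: the expected outcome per the specification.
def model(x):
    return "accept" if 0 < x < 10 else "reject"

# Stand-in for the real implementation being tested.
def system_under_test(x):
    return "accept" if 0 < x < 10 else "reject"

for x in (-1, 5, 20):
    expected = model(x)               # the oracle predicts the outcome
    actual = system_under_test(x)     # run the test against the SUT
    assert actual == expected, f"divergence at input {x}"
print("all outcomes match the model")
```

Any divergence between the two flags either a bug in the software or a bug in the model, and Step 5 reminds us that both are equally likely.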
