
Fault-Based Testing
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 1

Summary: Black-box
and grey-box testing
• Black-box testing techniques:
– functional testing
– random testing
– combinatorial testing
– stress testing – next slides
– model-based testing
• Grey-box testing techniques (require access to the source code, but not an understanding of the specifics of the implementation):
– fault-based testing (error seeding, fault injection, mutation)
(c) 2007 Mauro Pezzè & Michal Young Ch 14, slide 2

Stress Testing
• A type of non-functional testing.
• Involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
• A form of software testing that is used to determine the stability of a given system.
• Puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behaviour under normal circumstances.
• The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
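The "testing beyond normal operational capacity" idea can be sketched in a few lines. This is a minimal illustration only; the target function `handle_request` and the load figures are invented stand-ins for a real system under test:

```python
# Minimal stress-test sketch: push well beyond normal load and compare
# timings, checking the system still responds rather than crashing.
import time

def handle_request(payload):
    return sum(payload)  # invented stand-in for the system under test

def stress(n_requests, payload_size):
    """Drive the target with n_requests and return the elapsed time."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request(list(range(payload_size)))
    return time.perf_counter() - start

normal = stress(1_000, 100)
heavy = stress(10_000, 100)  # 10x the normal load
print(f"normal load: {normal:.3f}s, 10x load: {heavy:.3f}s")
```

A real stress harness would also vary memory and disk pressure and watch for crashes or conflicting outputs, not just timing.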
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 3

Example of Dependability Qualities
• Correctness, reliability: let traffic pass according to correct pattern and central scheduling
• Robustness, safety: Provide degraded function when possible; never signal conflicting greens.
• Blinking red / blinking yellow is better than no lights; no lights is better than conflicting greens
(c) 2007 Mauro Pezzè & Michal Young
Ch 4, slide 4

Relation between Dependability Qualities
(Venn diagram relating the regions Reliable, Correct, Safe, and Robust)
• reliable but not correct: failures occur rarely
• robust but not safe: catastrophic failures can occur
• correct but not safe or robust: the specification is inadequate
• safe but not correct: annoying failures can occur
• good systems: correct, reliable, safe, and robust
(c) 2007 Mauro Pezzè & Michal Young Ch 4, slide 5

Stress Testing – part of system testing
• Often requires extensive simulation of the execution environment
– With systematic variation: What happens when we push the parameters? What if the number of users or requests is 10 times more, or 1000 times more?
• Often requires more resources (human and machine) than typical test cases
– Separate from regular feature tests
– Run less often, with more manual control
– Diagnose deviations from expectation
• Which may include difficult debugging of latent faults!
(c) 2007 Mauro Pezzè & Michal Young Ch 22, slide 6

Back to fault-based testing: Learning objectives
• Understand the basic ideas of fault-based testing
– How knowledge of a fault model can be used to create useful tests and judge the quality of test cases
– Understand the rationale of fault-based testing well enough to distinguish between valid and invalid uses
• Understand mutation testing as one application of fault-based testing principles
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 7

Fault-based testing
• Testing based on common software faults
• Examples:
– division by zero
– buffer overflow
– memory management mistakes
– array boundaries
• Fault-based testing: test cases need to distinguish the program under test from programs with faults
• Exhaustiveness of the test suite is measured by artificially seeding faults in the program
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 8

Fault-based testing
• In fault-based testing strategies, we do not directly consider the artifact being tested when assessing test adequacy; we only take the test set into account. Fault-based techniques aim at finding a test set with a high ability to detect faults
– a high ability to distinguish faulty programs from the correct one
• We will discuss two fault-based testing techniques:
– error seeding
– mutation testing
(c) 2007 Mauro Pezzè & Michal Young
Ch 16, slide 9

Error seeding and mutation testing
Error seeding and mutation testing are both error-oriented techniques and are generally applicable to all levels of testing.
Error Seeding Technique:
• No mutants are present; the source code itself is tested.
• Errors are introduced into the code directly.
• Test cases which detect the seeded errors are used for testing.
• It is a less efficient error-testing technique.
• It requires less time.
Mutation Technique:
• Mutants are developed for testing.
• Mutants are run and compared against the original program to find the errors introduced.
• Special techniques (mutation operators) are used to introduce the errors.
• Test cases which kill mutants are used for testing.
• It is more efficient than error seeding.
• It is more time consuming.
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 10

Mutation analysis – explanation
Photo credit: (c) KaCey97007 on Flickr, Creative Commons license
• Suppose we have a big bowl of marbles. How can we estimate how many?
– I don't want to count every marble individually
– I have a bag of 100 other marbles of the same size, but a different color
– What if I mix them?
(c) 2007 Mauro Pezzè & Michal Young
Ch 16, slide 11

Estimating marbles
• I mix 100 black marbles into the bowl
– Stir well …
• I draw out 100 marbles at random
• 20 of them are black
• How many marbles were in the bowl to begin with?
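Working the arithmetic: the 100 seeded black marbles make up 20/100 = 1/5 of the sample, so they should make up about 1/5 of the bowl; the bowl then holds about 500 marbles, roughly 400 of which were there to begin with. A minimal sketch of this capture–recapture style estimate (the function name is mine, for illustration):

```python
def estimate_original_count(seeded, sample_size, seeded_in_sample):
    """Capture-recapture estimate: seeded marbles mix like the rest,
    so their fraction in the sample estimates their fraction in the bowl."""
    if seeded_in_sample == 0:
        raise ValueError("no seeded marbles drawn; cannot estimate")
    total = seeded * sample_size / seeded_in_sample  # bowl size incl. seeded
    return total - seeded  # marbles originally in the bowl

# 100 black marbles mixed in; 20 of the 100 drawn are black
print(estimate_original_count(100, 100, 20))  # → 400.0
```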
(c) 2007 Mauro Pezzè & Michal Young
Ch 16, slide 12

Estimating Test Suite Quality
• Now, instead of a bowl of marbles, I have a program with bugs
• I add 100 new bugs
• Assume they are exactly like real bugs in every way
• I make 100 copies of my program, each with one of my 100 new bugs
• I run my test suite on the programs with seeded bugs …
– … and the tests reveal 20 of the bugs
– (the other 80 program copies do not fail)
• What can I infer about my test suite?
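The same proportional reasoning gives the classic error-seeding estimate: if the suite reveals 20% of the seeded bugs, assume it reveals roughly 20% of the real bugs too. A sketch, where the count of real bugs found (6) is a hypothetical figure, not from the slides:

```python
def estimate_total_real_bugs(seeded, seeded_found, real_found):
    """Error-seeding estimator: assuming the suite reveals the same
    fraction of real bugs as of seeded bugs,
    total real bugs ~= real_found / (seeded_found / seeded)."""
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# 20 of 100 seeded bugs revealed; suppose the suite also revealed 6 real bugs
print(estimate_total_real_bugs(100, 20, 6))  # → 30.0
```

The estimate is only as good as the assumption that seeded bugs behave like real ones, which is exactly what the next slide examines.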
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 13

Basic Assumptions for error seeding
• We'd like to judge the effectiveness of a test suite in finding real faults by measuring how well it finds seeded fake faults.
• Valid to the extent that the seeded bugs are representative of real bugs
– Not necessarily identical (e.g., black marbles are not identical to clear marbles); but the differences should not affect the selection
• E.g., if I mix metal ball bearings into the marbles and pull them out with a magnet, I don't learn anything about how many marbles were in the bowl
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 14

Mutation testing
• A mutant is a copy of a program with a mutation
• A mutation is a syntactic change (a seeded bug)
– Example: change (i < 0) to (i <= 0)
• Run the test suite on all the mutant programs
• A mutant is killed if it fails on at least one test case
• A mutant is distinguished from the original program by the test suite if its behaviour differs from the behaviour of the original program on at least one test
• If many mutants are killed, infer that the test suite is also effective at finding real bugs
• Coverage (or score): percentage of killed mutants out of the total number of mutants
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 15

Assumptions about types of software errors
• Competent programmer hypothesis:
– Programs are nearly correct
• Real faults are small variations from the correct program
• => Mutants are reasonable models of real buggy programs
• Coupling effect hypothesis:
– Tests that find simple faults also find more complex faults (= complex faults are products of simple faults)
• Even if mutants are not perfect representatives of real faults, a test suite that kills mutants is good at finding real faults too
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 16
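The kill criterion can be made concrete with a toy example built around the (i < 0) → (i <= 0) mutation mentioned earlier. This is a hand-written sketch, not a real mutation tool; the functions and the tiny test suite are invented for illustration:

```python
def is_negative(i):            # original program
    return i < 0

def is_negative_mutant(i):     # mutant: (i < 0) changed to (i <= 0)
    return i <= 0

tests = [(-1, True), (1, False)]   # weak suite: never probes i == 0
killed = any(is_negative_mutant(i) != expected for i, expected in tests)
print(killed)  # → False: the mutant survives, exposing the weak suite

tests.append((0, False))           # add the boundary case
killed = any(is_negative_mutant(i) != expected for i, expected in tests)
print(killed)  # → True: the mutant is killed
```

The surviving mutant points at exactly the boundary value the suite forgot to exercise.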

Mutation Operators
• Syntactic change from legal program to legal program
• Specific to each programming language. C++ mutations don't work for Java, Java mutations don't work for Python
• Examples:
– crp: constant for constant replacement
• for instance: from (x < 5) to (x < 12)
• select from constants found somewhere in the program text
– ror: relational operator replacement
• for instance: from (x <= 5) to (x < 5)
– vie: variable initialization elimination
• change int x = 5; to int x;
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 17

Examples of mutations
Original Program:
If (x > y)
Print “Hello”
Else
Print “Hi”
Mutant Program:
If (x)
Print “Hello”
Else
Print “Hi”
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 18
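Operators like ror are applied mechanically to the program's syntax tree. The fragment below is a minimal sketch of one ror mutation using Python's ast module; real mutation tools (e.g. PIT for Java, mutmut for Python) generate many such mutants systematically:

```python
# One ror-style mutation operator: rewrite every '<' into '<='.
import ast

class LtToLe(ast.NodeTransformer):
    """Relational operator replacement: '<' becomes '<='."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.LtE() if isinstance(op, ast.Lt) else op
                    for op in node.ops]
        return node

src = "def is_negative(i):\n    return i < 0\n"
mutant_src = ast.unparse(LtToLe().visit(ast.parse(src)))
print(mutant_src)  # the mutant's comparison now reads: i <= 0
```

Because the change is purely syntactic, the mutant is still a legal program, as the definition of a mutation operator requires.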

Mutation operations
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 19

Live Mutants
• Scenario:
– We create 100 mutants from our program
– We run our test suite on all 100 mutants, plus the original program
– The original program passes all tests
– 94 mutant programs are killed (fail at least one test)
– 6 mutants remain alive
• What can we learn from the living mutants?
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 20

How mutants survive
• A mutant may be equivalent to the original program
– Maybe changing (x < 0) to (x <= 0) didn't change the output at all. The seeded "fault" is not really a fault, or it is not observable.
• Determining whether a mutant is equivalent may be easy or hard; in the worst case it is undecidable
• Or the test suite could be inadequate
– If the mutant could have been killed, but was not, it indicates a weakness in the test suite
– But adding a test case for just this mutant is a bad idea. We care about the real bugs, not the fakes!
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 21

Variations on Mutation
• Weak mutation
• Statistical mutation
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 22

Weak mutation
• Problem: There are lots of mutants. Running each test case to completion on every mutant is expensive
• The number of mutants grows with the square of program size
• Approach:
– Execute a meta-mutant (with many seeded faults) together with the original program
– Mark a seeded fault as "killed" as soon as a difference in intermediate state is found
• Without waiting for program completion
• Restart with a new mutant selection after each "kill"
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 23

Statistical Mutation
• Problem: There are lots of mutants. Running each test case on every mutant is expensive
• It's just too expensive to create N^2 mutants for a program of N lines (even if we don't run each test case separately to completion)
• Approach: Just create a random sample of mutants
– May be just as good for assessing a test suite
• Provided we don't design test cases to kill particular mutants (which would be like selectively picking out black marbles anyway)
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 24

Fault Injection
Fault injection is a software testing technique that introduces faults into the code to improve coverage; it is usually combined with stress testing to assess the robustness of the developed software.
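In its simplest form, fault injection switches in a deliberately faulty variant behind a build flag. A minimal sketch (the flag name and the function are invented for illustration), where the surrounding robustness check verifies that the caller degrades gracefully rather than crashing:

```python
INJECT_FAULT = True  # build-time switch selecting the faulty variant

def average(total, count):
    """Mean over `count` items; the injected fault removes the guard
    against count == 0."""
    if not INJECT_FAULT and count == 0:
        return 0.0
    return total / count

# Robustness check: with the fault present, the caller should observe a
# controlled degraded result, not an unhandled crash.
try:
    result = average(10, 0)
except ZeroDivisionError:
    result = float("nan")  # degraded but controlled behaviour
print(result)  # → nan
```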
Fault injection Methods:
Compile-Time Injections
• A fault injection technique where the source code is modified to inject simulated faults into a system.
Run-Time Injections
• Makes use of a software trigger to inject a fault into a software system during run time.
• The trigger can be of two types: time-based triggers and interrupt-based triggers.
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 26

In real life – fault-based testing in semiconductor manufacturing
• Fault-based testing is widely used in semiconductor manufacturing
– With good fault models of typical manufacturing faults: "stuck-at-1", "stuck-at-0" etc. for a transistor
– But fault-based testing for design errors is more challenging (as in software)
• Mutation testing is not widely used in industry
– Except for mission-critical software such as avionics
• Some use of fault models to design test cases is important and widely practiced
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 27

Summary
• If bugs were marbles ...
– We could get some nice black marbles to judge the quality of test suites
• Since bugs aren't marbles ...
– Mutation testing rests on some troubling assumptions about seeded faults, which may not be statistically representative of real faults
• Nonetheless ...
– A model of typical or important faults is invaluable information for designing and assessing test suites
(c) 2007 Mauro Pezzè & Michal Young Ch 16, slide 28

Home reading
• Chapter 16 of the book Software Testing and Analysis, by Mauro Pezzè and Michal Young
– Fault-based testing
(c) 2007 Mauro Pezzè & Michal Young Ch 1, slide 29