Software Construction & Design 1
The University of Sydney Page 1
Agile Software
Development Practices
SOF2412 / COMP9412
Software Quality Assurance:
Software Testing
Dr. Basem Suleiman
School of Information Technologies
Agenda
– Software Quality Assurance
– Software Testing
– Why, what and how?
– Testing levels and techniques
– Test case design
– Code (Test) Coverage
– Unit Testing Framework
– JUnit
– Code Coverage Tools
Software Quality Assurance
Software Quality Assurance
– Software quality
– Satisfying end users' needs: correct behaviour, easy to use, does not crash, etc.
– Easy for developers to debug and enhance
– Software Quality Assurance
– Ensuring software under development has high quality, and creating processes and standards in the organization that lead to high-quality software
– Software quality is often determined through testing
Juran and Gryna 1998
Why Software Testing?
Nissan Recall – Airbag Defect*
– What happened?
– The front passenger airbag may not deploy in an accident
– ~3.53 million vehicles of various models (2013–2017) recalled
– Why did it happen?
– The software that activates airbag deployment improperly classified an occupied passenger seat as empty in the event of an accident
– A software sensitivity calibration issue caused by a combination of factors (high engine vibration and changing seat status)
http://www.reuters.com/article/us-autos-nissan-recall/nissan-to-recall-3-53-million-vehicles-air-bags-may-not-deploy-idUSKCN0XQ2A8
Therac-25 Overdose*
– What happened?
– Therac-25 radiation therapy machine
– Patients were exposed to radiation overdoses (100 times more than intended) – 3 lives lost!!
– Why did it happen?
– A nonstandard sequence of keystrokes was entered within 8 seconds
– The operator could override a warning message with an error code (“MALFUNCTION” followed by a number from 1 to 64) that was not explained in the user manual
– Absence of independent software code review
– ‘Big Bang Testing’: software and hardware integration was never tested until assembled in the hospital
*https://en.wikipedia.org/wiki/Therac-25#Problem_description
Software Failure – Ariane 5 Disaster
What happened?
• Large European rocket – 10 years of development, ~$7 billion
• An unhandled software exception resulted from a data conversion from a 64-bit floating point to a 16-bit signed integer
• The backup processor, running the same software, failed immediately afterwards
• The rocket exploded 37 seconds after lift-off
Why did it happen?
• Design error, incorrect analysis of changing requirements, inadequate validation and verification, testing and reviews, and ineffective development processes and management
http://iansommerville.com/software-engineering-book/files/2014/07/Bashar-Ariane5.pdf
https://en.wikipedia.org/wiki/List_of_failed_and_overbudget_custom_software_projects
Examples of Software Failures

Project | Duration | Cost | Failure/Status
e-borders (UK Advanced Passenger Information System Programme) | 2007–2014 | £412m (expected), £742m (actual) | Permanent failure – cancelled after a series of delays
Pust Siebel – Swedish Police case management | 2011–2014 | $53m (actual) | Permanent failure – scrapped due to poor functioning; inefficient in work environments
US Federal Government Health Care Exchange web application | 2013–ongoing | $93.7m (expected), $1.5bn (actual) | Ongoing problems – too slow, poor performance, people get stuck in the application process (frustrated users)
Australian Taxation Office's Standard Business Reporting | 2010–ongoing | ~$1bn (to date), ongoing | Significant spending on contracting fees (IBM & Fujitsu), significant scope creep and confused objectives
Software Testing – Costs
– Software development and maintenance costs
– The total cost of inadequate software testing to the US economy was estimated at $59.5bn (NIST study, 2002*)
– One-third of that cost could be eliminated by improved software testing
– Need to develop functional, robust and reliable software systems
– Human/social factor – society depends on software in every aspect of life
• Critical software systems – medical devices, flight control, traffic control
– Meet user needs and solve their problems
– Small software errors can lead to disasters
* https://www.nist.gov/sites/default/files/documents/director/planning/report02-3.pdf
Software Testing – Costs
Capers Jones, Applied software measurement (2nd ed.): assuring productivity and quality, (1997), McGraw-Hill
What is Software Testing?
Software Testing
– Software process to
– Demonstrate that software meets its requirements
– Find incorrect or undesired behaviour caused by defects/bugs
• E.g., system crashes, incorrect computations, unnecessary interactions and data corruption
– Different system properties
– Functional: performs all expected functions properly
– Non-functional: security, performance, usability
Testing Objectives
“Program testing can be used to show the presence of bugs, but never
to show their absence” – Edsger W. Dijkstra
Testing Objectives
– Objectives should be stated precisely and quantitatively so that the test process can be measured and controlled
– Complete testing is never feasible
– So many test cases are possible – exhaustive testing is prohibitively expensive!
– A risk-driven (risk management) strategy is used to increase our confidence
– How much testing is enough?
– Select test cases sufficient for a specific purpose (test adequacy criteria)
– Coverage criteria and graph theory are used to analyse test effectiveness
Tests Modelling
– Testing is modelled as input test data and output test results
– Tests that cause defects/problems (defect testing)
– Tests that lead to expected correct behaviour (validation testing)
Who Does Testing?
– Developers test their own code
– Developers in a team test one another’s code
– Many methodologies also have the specialist role of tester
– Can help by reducing ego
– Testers often have a different personality type from coders
– Real users, doing real work
Testing takes creativity
– To develop an effective test, one must have:
– Detailed understanding of the system
– Application and solution domain knowledge
– Knowledge of testing techniques
– Testing is done best by independent testers
– We often develop a mental attitude that the program should behave in a certain way, when in fact it does not
– Programmers often stick to the data set that makes the program work
– A program often does not work when tried by somebody else
When is Testing happening?
Waterfall Software Development
– Test whether the system works according to the requirements
Agile Software Development
• Testing is at the heart of agile practices
• Continuous integration
• Daily unit testing
https://www.spritecloud.com/wp-content/uploads/2011/06/waterfall.png
https://blog.capterra.com/wp-content/uploads/2016/01/agile-methodology-720×617.png
Software Testing Process
Ian Sommerville. 2016. Software Engineering (10th ed.). Addison-Wesley, USA.
Software Testing Process
– Design, execute and manage test plans and activities
– Select and prepare suitable test cases (selection criteria)
– Select suitable test techniques
– Execute and analyse test plans (study and observe test output)
– Root cause analysis and problem solving
– Trade-off analysis (schedule, resources, test coverage or adequacy)
– Test effectiveness and efficiency
– Available resources, schedule, and the knowledge and skills of the people involved
– Software design and development practices (“software testability”)
• Defensive programming: writing programs in such a way that validation and debugging are facilitated, e.g., using assertions
Types of Defects in Software
– Syntax error
– Picked up by the IDE or, at the latest, in the build process
– Not found by testing
– Runtime error
– Crash during execution
– Logic error
– Does not crash, but the output is not what the spec asks it to be
– Timing error
– Does not deliver the computational result on time
Software Testing Levels
Testing Levels
Testing level | Description
Unit / Functional Testing | The process of verifying the functionality of software components (functional units, subprograms) independently from the whole system
Integration Testing | The process of verifying interactions/communications among software components. Incremental integration testing vs. “Big Bang” testing
System Testing | The process of verifying the functionality and behaviour of the entire software system, including security, performance, reliability, and external interfaces to other applications
Acceptance Testing | The process of verifying that the desired acceptance criteria are met in the system (functional and non-functional) from the user's point of view
The University of Sydney Page 27
Integration Testing
– The process of verifying that interactions/communications among software components behave according to their specifications
– Incremental integration testing vs. “Big Bang” testing
– Independently developed (and tested) units may not behave correctly when they interact with each other
– Activate the corresponding components and run high-level tests
The University of Sydney Page 28
Acceptance Testing Process
Ian Sommerville. 2016. Software Engineering (10th ed.). Addison-Wesley, USA.
The University of Sydney Page 29
Regression Testing
– Verifies that software behaviour has not been changed by incremental changes to the software
– Modern software development processes are iterative/incremental
– Changes may be introduced which affect the validity of previous tests
– Regression testing verifies that
– Previously tested functionality still works as expected
– No new bugs have been introduced
The University of Sydney Page 30
Software Testing
Techniques
The University of Sydney Page 31
Principal Testing Techniques
Black-box Testing
– No knowledge of the code internals required
– Carried out by software testers
– Acceptance and system testing (higher levels)
White-box Testing
– Requires an understanding of the software code
– Carried out by software developers
– Unit and integration testing (lower levels)
(Diagram: test case input → executable software code → test case output)
The University of Sydney Page 32
Black Box Testing – Example
– Tests are planned without knowledge of the code
– Based only on the specification or design
– E.g., given a function f(x, y) that computes sign(x + y)
(Diagram: inputs x and y → f(x, y) → output sign(x + y))
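To make the example concrete, here is a plausible implementation of f together with the partitions a black-box tester might choose (negative sums, zero, positive sums, and extreme boundary values). The implementation body is an assumption; a black-box tester would see only the specification.

```java
// Hypothetical implementation of the slide's f(x, y) = sign(x + y).
// A black-box tester never sees this body; test cases are chosen
// purely from the specification's input partitions.
public class SignExample {

    // Returns -1, 0 or 1 according to the sign of x + y.
    public static int f(int x, int y) {
        long sum = (long) x + y; // widen to long to avoid int overflow
        return Long.signum(sum);
    }
}
```

Representative black-box cases then pick one value per partition, e.g., f(3, 4) for a positive sum, f(-3, -4) for a negative sum, the boundary f(2, -2) where the sum is exactly zero, and an extreme value such as f(Integer.MAX_VALUE, 1).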
The University of Sydney Page 33
Test-Driven Development (TDD)
– A particular aspect of many (not all) agile methodologies
– Write tests before writing code
– And indeed, only write code when it is needed to pass tests!
Ian Sommerville. 2016. Software Engineering (10th ed.). Addison-Wesley, USA.
The University of Sydney Page 34
Test Case Design
The University of Sydney Page 35
Choosing Test Cases – Techniques
– Partition testing (equivalence partitioning)
– Identify groups of inputs with common characteristics
– For each partition, choose tests on the boundaries and close to the midpoint
– Guideline-based testing
– Use testing guidelines based on previous experience of the kinds of errors often made
– Understanding developers' thinking
The University of Sydney Page 36
Equivalence Partitioning
– Different groups of inputs with common characteristics
– E.g., positive numbers, negative numbers
– The program behaves in a comparable way for all members of a group
– Choose test cases from each of the partitions
– Boundary cases
– Select elements from the edges of the equivalence class
The University of Sydney Page 37
Choosing Test Cases – Exercise
– For a class method that returns the number of days in a given month and year, apply equivalence partitioning to define appropriate test cases.
The University of Sydney Page 38
Choosing Test Cases – Solution Sample

Equivalence Class | Value for month | Value for year
Months with 31 days, non-leap years | 7 (July) | 1901
Months with 31 days, leap years | 7 (July) | 1904
Months with 30 days, non-leap years | 6 (June) | 1901
Months with 30 days, leap years | 6 (June) | 1904
Months with 28 or 29 days, non-leap years | 2 (February) | 1901
Months with 28 or 29 days, leap years | 2 (February) | 1904
Leap years divisible by 400 | 2 (February) | 2000
Non-leap years divisible by 100 | 2 (February) | 1900
Non-positive invalid month | 0 | 1291
Positive invalid month | 13 | 1315
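These partitions can be exercised against a hypothetical daysInMonth(month, year) method. The slide's original method is shown only as an image, so the class name, signature and error handling below are assumptions.

```java
// Hypothetical code under test for the equivalence-partitioning exercise.
public class CalendarUtil {

    // Gregorian leap-year rule: divisible by 4, except centuries
    // that are not divisible by 400.
    public static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    // Returns the number of days in the given month (1-12) of the given year.
    public static int daysInMonth(int month, int year) {
        if (month < 1 || month > 12) {
            throw new IllegalArgumentException("invalid month: " + month);
        }
        switch (month) {
            case 4: case 6: case 9: case 11:
                return 30;
            case 2:
                return isLeapYear(year) ? 29 : 28;
            default:
                return 31;
        }
    }
}
```

One test per row of the table then suffices, e.g., daysInMonth(7, 1901) gives 31, daysInMonth(2, 2000) gives 29, daysInMonth(2, 1900) gives 28, and the invalid months 0 and 13 should raise an exception.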
The University of Sydney Page 39
Code (Test) Coverage
The University of Sydney Page 40
Code (Test) Coverage
– The extent to which source code has been executed by a set of tests
– Usually measured as a percentage, e.g., 70% coverage
– Different criteria to measure coverage
– E.g., method, statement, loop
Coverage Criteria

Coverage criteria | Description
Method | How many of the methods are called during the tests
Statement | How many statements are exercised during the tests
Branch | How many of the branches have been exercised during the tests
Condition | Has each separate condition within each branch been evaluated to both true and false
Condition/decision | Requires both decision and condition coverage to be satisfied
Loop | Each loop executed zero times, once, and more than once
Coverage Target
– What coverage should one aim for?
– Software criticality determines the coverage level
– Extremely high coverage for safety-critical (dependable) software
– Government/standardization organizations
– E.g., the European Cooperation for Space Standardization (ECSS-E-ST-40C) requires 100% statement and decision coverage for 2 of its 4 criticality levels
Tools for Agile Development
– Version Control
– Build Automation
– Automated Testing
Unit Testing
JUnit
Unit Testing – Terminology
– Code under test
– Unit test
– Code written by a developer that executes a specific functionality in the code under test and asserts a certain behaviour/state
– E.g., a method or class
– External dependencies are removed (mocks can be used)
– Test fixture
– The context for testing
• Usually a shared set of testing data
• Methods to set up that data
Test Frameworks
– Software that allows test cases to be described in a standard form and run automatically
– Tests are managed and run automatically, and frequently
– Easy-to-understand reports
– Big green or red signal
Unit Testing Frameworks for Java
– JUnit
– TestNG
– Jtest (commercial)
– Many others …
https://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#Java
Unit Testing Frameworks – JUnit
– An open-source framework for writing and running tests in Java
– Uses annotations to identify methods that specify a test
– Can be integrated with Eclipse and with build automation tools (e.g., Ant, Maven, Gradle)
https://github.com/junit-team/junit4
JUnit – Constructs
– JUnit test (test class)
– A test method contained in a class which is only used for testing (called a test class)
– Test suite
– Contains several test classes which will all be executed in the specified order
– Test annotations
– Define/denote test methods (e.g., @Test, @Before)
– Such methods execute the code under test
– Assertion methods (assert)
– Check an expected result against the actual result
– A variety of methods is available
– Provide meaningful messages in assert statements
JUnit – Annotations

JUnit 4* | Description
import org.junit.* | Import statement for using the following annotations
@Test | Identifies a method as a test method
@Before | Executed before each test to prepare the test environment (e.g., read input data, initialize the class)
@After | Executed after each test to clean up the test environment (e.g., delete temporary data, restore defaults) and save memory
@BeforeClass | Executed once, before the start of all tests, to perform time-intensive activities, e.g., connecting to a database
@AfterClass | Executed once, after all tests have finished, to perform clean-up activities, e.g., disconnecting from a database

*See the JUnit 5 annotations and compare them: https://junit.org/junit5/docs/current/user-guide/#writing-tests-annotations
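A sketch of how these lifecycle annotations fit together in one JUnit 4 test class (class and method names are illustrative, not from the slides):

```java
import java.util.ArrayList;
import java.util.List;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class LifecycleExampleTest {

    private List<String> data;

    @BeforeClass
    public static void setUpOnce() {
        // runs once before all tests, e.g., connect to a database
    }

    @Before
    public void setUp() {
        // runs before each test: prepare a fresh fixture
        data = new ArrayList<>();
    }

    @Test
    public void fixtureStartsEmpty() {
        assertTrue(data.isEmpty());
    }

    @After
    public void tearDown() {
        // runs after each test: clean up the fixture
        data = null;
    }

    @AfterClass
    public static void tearDownOnce() {
        // runs once after all tests, e.g., disconnect from the database
    }
}
```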
JUnit Test – Example
A MyClassTests class tests the multiply(int, int) method of MyClass.
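The slide's code appears only as screenshots; here is a minimal reconstruction, assuming multiply(int, int) simply returns the product of its arguments:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical reconstruction of the slide's code under test.
class MyClass {
    public int multiply(int a, int b) {
        return a * b;
    }
}

// The test class exercises multiply(int, int) and asserts the expected result.
public class MyClassTests {

    @Test
    public void multiplyReturnsProduct() {
        MyClass tester = new MyClass();
        assertEquals("10 x 5 must be 50", 50, tester.multiply(10, 5));
    }
}
```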
JUnit – Assertions
– The Assert class provides static methods to test for certain conditions
– An assertion method compares the actual value returned by a test to the expected value
– Allows you to specify the expected and actual results and an error message
– Throws an AssertionError if the comparison fails
JUnit – Methods to Assert Test Results*

Method / statement | Description
assertTrue([message,] condition) | Checks that the boolean condition is true.
assertFalse([message,] condition) | Checks that the boolean condition is false.
assertEquals([message,] expected, actual) | Tests that two values are the same. Note: for arrays the reference is checked, not the content of the arrays.
assertEquals([message,] expected, actual, delta) | Tests that float or double values are equal within a given delta.
assertNull([message,] object) | Checks that the object is null.
assertNotNull([message,] object) | Checks that the object is not null.

Note: in JUnit 4, the optional message is the first parameter.
*More assertions in JUnit 5: https://junit.org/junit5/docs/current/user-guide/#writing-tests-assertions
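A short test exercising the assertion variants from the table (the values are illustrative):

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class AssertionExamplesTest {

    @Test
    public void assertionVariants() {
        assertTrue("condition should hold", 2 + 2 == 4);
        assertFalse(1 > 2);
        assertEquals("values must match", 42, 40 + 2);
        assertEquals(0.3, 0.1 + 0.2, 1e-9); // delta form for floating point
        assertNotNull("an object was expected", new Object());
        assertNull(null);
    }
}
```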
JUnit – Static Import
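The slide's code is an image; the idea is that with a static import the Assert methods can be called without the class prefix. A minimal sketch:

```java
// With a static import, assertEquals can be called directly
// instead of writing Assert.assertEquals(...).
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class StaticImportExampleTest {

    @Test
    public void addition() {
        assertEquals(4, 2 + 2); // no Assert. prefix needed
    }
}
```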
JUnit – Executing Tests
– From the command line
– The org.junit.runner.JUnitCore class allows one or several test classes to be run (runClasses())
– An org.junit.runner.Result object maintains the test results
– Test automation
– Build tools (e.g., Maven or Gradle) along with a continuous integration server (e.g., Jenkins) can be configured to automate test execution
– Essential for regular daily tests (agile development)
JUnit – Executing Tests from the Command Line
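The slide's terminal screenshot is missing; a sketch of a typical invocation is shown below. The jar names and versions are assumptions; adjust them to the jars on your machine.

```shell
# compile the test class with JUnit and Hamcrest on the classpath
javac -cp .:junit-4.12.jar:hamcrest-core-1.3.jar MyClassTests.java

# run it with the JUnitCore console runner
java -cp .:junit-4.12.jar:hamcrest-core-1.3.jar org.junit.runner.JUnitCore MyClassTests
```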
JUnit – Test Suites
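The slide's code is an image; a compact sketch of a suite that bundles two test classes and runs them together, in the listed order (the nested test classes are illustrative stand-ins for real ones):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import static org.junit.Assert.assertEquals;

// The suite runner executes every class listed in @Suite.SuiteClasses.
@RunWith(Suite.class)
@Suite.SuiteClasses({ AllTests.FirstTests.class, AllTests.SecondTests.class })
public class AllTests {

    public static class FirstTests {
        @Test
        public void addition() {
            assertEquals(2, 1 + 1);
        }
    }

    public static class SecondTests {
        @Test
        public void multiplication() {
            assertEquals(4, 2 * 2);
        }
    }
}
```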
JUnit – Test Execution Order
– JUnit assumes that all test methods can be executed in an arbitrary order
– Good test code should not depend on other tests and should be well defined
– You can control the order, but relying on it leads to problems (poor test practice)
– By default, JUnit 4.11 uses a deterministic order (MethodSorters.DEFAULT)
– @FixMethodOrder changes the test execution order (not a recommended practice)
– @FixMethodOrder(MethodSorters.JVM)
– @FixMethodOrder(MethodSorters.NAME_ASCENDING)
https://junit.org/junit4/javadoc/4.12/org/junit/FixMethodOrder.html
JUnit – Parameterized Test Example
JUnit – Parameterized Test
– A class that contains a test method which is executed with the different parameters provided
– Marked with the @RunWith(Parameterized.class) annotation
– The test class must contain a static method annotated with @Parameters
– This method generates and returns a collection of arrays; each item in this collection is used as a parameter for the test method
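A sketch of a parameterized test; the squaring example is an illustration, not from the slides. Each array in the @Parameters collection supplies one (input, expected) pair to the constructor, and the test method runs once per pair.

```java
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class SquareTest {

    // Each Object[] becomes one run of the test method.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1, 1 }, { 2, 4 }, { 3, 9 } });
    }

    private final int input;
    private final int expected;

    // The runner injects each parameter pair through the constructor.
    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void square() {
        assertEquals(expected, input * input);
    }
}
```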
JUnit – Verifying Exceptions
– Verifying that code behaves as expected in exceptional situations is important
– The @Test annotation has an optional parameter “expected” that takes subclasses of Throwable as values
– Example: verify that ArrayList throws IndexOutOfBoundsException
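The ArrayList example can be written as follows; the test passes only if the body throws the expected exception type:

```java
import java.util.ArrayList;
import org.junit.Test;

public class ExceptionTest {

    // Fails unless the body throws IndexOutOfBoundsException.
    @Test(expected = IndexOutOfBoundsException.class)
    public void emptyListAccessThrows() {
        new ArrayList<String>().get(0);
    }
}
```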
JUnit – Verifying Test Timeout Behaviour
– To automatically fail tests that ‘run away’ or take too long:
– Use the timeout parameter on @Test
– Causes the test method to fail if it runs longer than the specified timeout
– The timeout is specified in milliseconds in @Test
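A minimal sketch (the 1000 ms limit and the loop are arbitrary illustrations):

```java
import org.junit.Test;

public class TimeoutTest {

    // Fails automatically if the method runs longer than 1000 ms.
    @Test(timeout = 1000)
    public void finishesQuickly() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i; // trivial work, well within the limit
        }
    }
}
```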
JUnit – Rules
– A way to add or redefine the behaviour of each test method in a test class
– E.g., specify the exception message you expect during the execution of the test code
– Annotate fields with @Rule
– JUnit already implements some useful base rules
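As one example of a built-in rule, TemporaryFolder creates a scratch folder before each test and deletes it afterwards (the file name is illustrative):

```java
import java.io.File;
import java.io.IOException;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import static org.junit.Assert.assertTrue;

public class TemporaryFolderRuleTest {

    // The rule creates a fresh folder per test and cleans it up afterwards.
    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void createsAndUsesTempFile() throws IOException {
        File f = folder.newFile("sample.txt");
        assertTrue(f.exists());
    }
}
```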
JUnit – Rules

Rule | Description
TemporaryFolder | Creates files and folders that are deleted when the test finishes
ErrorCollector | Lets the execution of a test continue after the first problem is found
ExpectedException | Allows in-test specification of expected exception types and messages
Timeout | Applies the same timeout to all test methods in a class
ExternalResource | Base class for rules that set up an external resource before a test (a file, socket, database connection)
RuleChain | Allows ordering of TestRules

See the full list and code examples of JUnit rules: https://github.com/junit-team/junit4/wiki/Rules
JUnit – ErrorCollector Rule Example
– Allows execution of a test to continue after the first problem is found
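The slide's code is an image; a sketch of the rule in use (the checked values are illustrative). Both checks execute even if an earlier one fails, and all failures are reported together at the end of the test.

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;
import static org.hamcrest.CoreMatchers.equalTo;

public class ErrorCollectorTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void reportsAllMismatchesAtOnce() {
        // Unlike plain assertions, a failing checkThat does not stop the test.
        collector.checkThat(2 + 2, equalTo(4));
        collector.checkThat("junit".length(), equalTo(5));
    }
}
```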
JUnit – Eclipse Support
– Create JUnit tests via wizards or write them manually
– The Eclipse IDE also supports executing tests interactively
– Run As → JUnit Test starts JUnit and executes all test methods in the selected class
– Extracting the failed tests and stack traces
– Creating test suites
Test Automation – JUnit with Gradle
– To use JUnit in your Gradle build, add a testCompile dependency to your build file
– Gradle adds the test task to the build and needs only the appropriate JUnit JAR on the classpath to fully activate test execution
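A hypothetical build.gradle fragment in the Gradle 4.x-era syntax the slides assume (newer Gradle versions replace testCompile with testImplementation):

```groovy
// Minimal sketch of a build file with JUnit as a test dependency.
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.12' // testImplementation in newer Gradle
}
```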
JUnit with Gradle – Parallel Tests
– maxParallelForks: the maximum number of simultaneous test JVMs spawned
– forkEvery: causes a test-running JVM to close and be replaced by a brand-new one after the specified number of test classes have run under that instance
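A hypothetical fragment showing the two settings described above (the values 4 and 100 are arbitrary choices):

```groovy
test {
    // maximum number of test JVMs spawned at the same time
    maxParallelForks = 4
    // replace each test JVM with a fresh one after 100 test classes
    forkEvery = 100
}
```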
Code Coverage Tools
Tools for Code Coverage in Java
– There are many tools/plug-ins for code coverage in Java
– Example: EclEmma*
– EclEmma is a code coverage plug-in for Eclipse
– It provides rich features for code coverage analysis in the Eclipse IDE
– EclEmma is based on the JaCoCo code coverage library
• JaCoCo is a free code coverage library for Java, created by the EclEmma team
https://www.eclemma.org/
EclEmma – Counters
– EclEmma supports different types of counters, summarized in the code coverage overview
– Bytecode instructions, branches, lines, methods, types and cyclomatic complexity
– You should understand each counter and how it is measured
– Counters are based on JaCoCo – see the JaCoCo documentation for detailed counter definitions
https://www.eclemma.org/
http://www.jacoco.org/jacoco/trunk/doc/counters.html
EclEmma Coverage View
https://www.eclemma.org/
The Coverage view shows all analyzed Java elements within the common Java hierarchy. Individual columns contain the numbers for the active session, always summarizing the child elements of the respective Java element.
EclEmma Coverage – Source Code Annotations
https://www.eclemma.org/
Source line colour code:
• green for fully covered lines
• yellow for partly covered lines (some instructions or branches missed)
• red for lines that have not been executed at all
Diamond colour code:
• green for fully covered branches
• yellow for partly covered branches
• red when no branches in the particular line have been executed
References
– Armando Fox and David Patterson. 2015. Engineering Software as a Service: An Agile Approach Using Cloud Computing (1st edition). Strawberry Canyon LLC
– Ian Sommerville. 2016. Software Engineering, Global Edition (10th edition). Pearson, England
– Tim Berglund and Matthew McCullough. 2011. Building and Testing with Gradle (1st ed.). O'Reilly Media, Inc.
– Vogella GmbH. Unit Testing with JUnit – Tutorial (version 4.3, 21.06.2016). http://www.vogella.com/tutorials/JUnit/article.html
– JUnit 4, Project Documentation. https://junit.org/junit4/
Tutorial: Testing with JUnit
Next week's lecture: Continuous Integration / Continuous Delivery (CI/CD)