

ISYS1087/5 Software Testing
Chapter 2
Fundamentals of Testing
A/Prof I. Balbin

Resources reminder
Students, please use the Canvas discussion forums to ask any questions. This is encouraged! Almost all material in these slides is drawn directly from:
1. Foundations of Software Testing: ISTQB Certification, Black, van Veenendaal and Graham, Cengage Learning
2. Foundations of Software Testing, Mathur, Pearson
3. Software Testing and Analysis, Pezze and Young, Wiley
4. Software Testing: A Craftsman's Approach, Jorgensen, CRC
5. Software Testing, Singh, Cambridge
6. Software Testing Foundations, Spillner, Linz and Schaefer, Rocky Nook
7. Software Testing, Patton, SAMS
8. Effective Software Testing, Dustin, Addison-Wesley

Lecture Session: Chapter 2
Week 3

Chapter 2 Aims
• We explain the role of testing in the entire life cycle of a software system.
• We look at the test levels and test types that are used during development.
• Each software development project should be planned and executed using a life cycle model chosen in advance.
• We consider non-traditional models for software development.
➡Agile SDLC: the testing focus changes, but many of the principles remain.
➡Each model implies certain views on software testing.
• The ISTQB Foundation once focussed mainly on the V model. The new syllabus includes testing in Agile fundamentals.

Verification
• Make sure the product meets the specification.
• Specification?
– Describes what the product does.
vs. Validation
• Make sure the product meets the requirements.
• Requirement?
– What the user needs the product to do.

Verification vs. Validation

Example: Elevator Response
Unverifiable (but validatable) specification:
if a user presses a request button at floor i, an available elevator must arrive at floor i soon …
Verifiable specification:
if a user presses a request button at floor i, an available elevator must arrive at floor i within 30 seconds …

Manual or Automatic Testing?
What style of testing should be used?

When does testing take place?
• The test levels on the right branch of the V model should be interpreted as levels of test execution.
• Test preparation (test planning, test analysis and design) starts earlier and is performed in parallel with the development phases on the left branch.
• The development phases aren't actually shown here.
• The V model is used for small to medium-sized projects where requirements are clearly defined and relatively fixed.
• It is estimated that roughly 40% of development still follows the V model, while some 60% is now Agile.

Weaknesses of the V Model
• Very rigid and among the least flexible models.
• Software is developed during the implementation phase, so no early prototypes of the software are produced.
• If any changes happen midway, then the test documents along with the requirement documents have to be updated.
• What about strengths? Testing?

Incremental and Iterative Development
• Smaller chunks. Each chunk can be quasi-sequential.
• Incremental: build a chunk at a time.
➡after some chunks, can pre-release to the customer
➡can't assess till the “end”
• Iterative: rough -> refine -> refine … -> refine.
➡customer can assess a partly working system
➡early feedback loops: implications for testing
• Standard SDLC: we test validation, not verification.
➡Although sequential, there can be some overlapping.
• Agile: we test closer to verification.
➡moving target

Concertina V model for Agile

Characteristics of incremental and iterative development
• planning and analysis done in parallel
• test execution may need to overlap
➡more difficult with incremental. Why?
• Testing is tied to an iteration
➡May need stubs and harnesses
• Incremental:
➡Test Completion Report biased towards the end of the project
➡Test Plan biased towards the beginning of the project
➡These don't align “cleanly” with iterative
• scope of test processes is volatile
• more regression testing
• defects can fall outside the envelope of an increment
• Can mean less thorough testing. Why formalise tests on small chunks?

Rational Unified Process (RUP)
Idea -> Concept -> Build/Test Bigger Chunk -> Release -> back to Build/Test
▪ Chunks are bigger than in Scrum but smaller than in the V model

Scrum
• Agile management approach
• Says nothing about the software development technique (e.g., test-first programming)
• No guidance about how testing is to be done
• Small sprints (days/weeks/months)
➡a sprint delivers a feature = user story(ies) => UAT on the feature
➡User Story (US): Dev -> Test -> Release
➡3 C's: Card/Conversation/Confirmation
• Confirmation through the tester, who should ask open questions and propose ways to test and confirm the acceptance criteria
➡UAT can = US or be > US
• Unit and possibly system tests
• Dev team includes testers
• Product Owner = Owner/Client
• Scrum Master = facilitator between the PO and the dev team
• Velocity is a measure of the amount of work a team can tackle during a single sprint and is the key metric in Scrum. Velocity is calculated at the end of the sprint by totalling the points for all fully completed user stories.
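For example (illustrative numbers only): if a sprint fully completes user stories estimated at 3, 5, and 8 points but leaves a 13-point story unfinished, the velocity for that sprint is 3 + 5 + 8 = 16; the unfinished story contributes nothing until the sprint in which it is fully completed.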

eXtreme Programming (XP)
An Agile approach introduced by Kent Beck, described by five core values; each value flows into the others as a value system.
1.Simplicity: We will do what is needed and asked for, but no more. This will maximise the value created for the investment made to date. We will take small simple steps to our goal and mitigate failures as they happen. We will create something we are proud of and maintain it long term for reasonable costs.
2.Communication: Everyone is part of the team and we communicate face to face daily. We will work together on everything from requirements to code. We will create the best solution to our problem that we can together.
3.Feedback: We will take every iteration commitment seriously by delivering working software. We demonstrate our software early and often then listen carefully and make any changes needed. We will talk about the project and adapt our process to it, not the other way around.
4.Respect: Everyone gives and feels the respect they deserve as a valued team member. Everyone contributes value even if it’s simply enthusiasm. Developers respect the expertise of the customers and vice versa. Management respects our right to accept responsibility and receive authority over our own work.
5.Courage: We will tell the truth about progress and estimates. We don’t document excuses for failure because we plan to succeed. We don’t fear anything because no one ever works alone. We will adapt to changes whenever they happen.

Sample Kanban Board

Kanban
• Japanese word which means “signal card”
• Eg Trello or inside Jira
• Visualisation of value chain to be managed
• Cards can span sprints
• Groups of cards can be called a “swim lane”
• Each column is a “station” i.e. related activities such as development or testing
• Work in progress limit: the amount of parallel active tasks is limited and controlled by the maximum number of tickets allowed for a station and/or globally for the board.
• Lead Time: Time for a unit of work to travel from work start to ship time. Kanban tries to minimise the average lead times for complete value streams.

Kanban vs Scrum
• Release: Scrum releases at the end of each sprint; Kanban releases continuously, or at the team's discretion.
• Visualisation of tasks: Scrum uses task boards; Kanban uses Kanban boards.
• Roles: Scrum defines Product Owner, Scrum Master, and dev team; Kanban defines no roles.
• Time-boxing (iterations or sprints): mandatory in Scrum (a shippable product at the end of each sprint); optional in Kanban (deliverables may be released item by item).
• Key metric: velocity in Scrum; lead time in Kanban.

Challenges to testing in Agile
• Testers from V-model projects, used to proper documentation as part of the test basis, now find less formal, “moving” documents.
• Devs are doing component testing. But if Product Owners are the overlords of acceptance testing, that opens up existing and future problems. Testers know about end-to-end testing and system testing; they are in a “strange” position within sprints as their activity is somewhat buried.
• The lack of documentation implies testers should become more like requirements engineering coaches.
• The time-boxing of sprints doesn't capture cognate testing needs across the system and its integration. Testers need to be bold.
• Testers in Agile need to be more adept at automation and keep an expert eye on regression testing needs. They can't be static.
• When I talk to experienced testers about testing in Agile, they roll their eyes somewhat …

Testing terms
• A component test is what we also call a unit test. Depending on the programming language, it may also be called a module test or a class test.
• The items that we test are known as test objects. These could be a function, a script, or another software component (e.g., a database).
• Back to our VSR subsystem DreamCar, where we calculate the price of the car. The specification may have looked like this:

The starting point is baseprice minus discount, where baseprice is the general basic price of the vehicle and discount is the discount on this price granted by the dealer.
A specialprice for a special model and the price for extra equipment items (extraprice) shall be added.
If three or more extra equipment items are chosen (extras), there is a discount of 10% on these particular items. If five or more extra equipment items are chosen, this discount is increased to 15%.
The discount by the dealer applies only to the baseprice, whereas the discount on extra items applies to the extra items only. The two discounts cannot be combined.
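A worked example with invented figures: for a baseprice of 20,000 with a 5% dealer discount, a specialprice of 3,000, and three extras totalling 2,000 (so the 10% extras discount applies), the total is 20,000 × 0.95 + 3,000 + 2,000 × 0.90 = 19,000 + 3,000 + 1,800 = 23,800.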

Implementation in C
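The code listing itself has not survived in this text version of the slides. The sketch below reconstructs it along the lines of the well-known textbook example this lecture draws on (Spillner et al., resource 6); note that, like the textbook version, it deliberately carries a seeded defect for later discussion: because extras >= 3 is tested first, the extras >= 5 branch (15%) can never be reached.

    /* Reconstruction of calculate_price() from the specification above.
       Deliberately mirrors the textbook version, including its seeded
       defect: the extras >= 5 branch is unreachable, so the 15% discount
       is never granted. */
    double calculate_price(double baseprice, double specialprice,
                           double extraprice, int extras, double discount)
    {
        double addon_discount;   /* discount on extra items, in percent */
        double result;

        if (extras >= 3)
            addon_discount = 10;
        else if (extras >= 5)    /* unreachable: seeded defect */
            addon_discount = 15;
        else
            addon_discount = 0;

        /* the robustness-related line discussed later in this chapter */
        if (discount > addon_discount)
            addon_discount = discount;

        result = baseprice  / 100.0 * (100 - discount)
               + specialprice
               + extraprice / 100.0 * (100 - addon_discount);

        return result;
    }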

Testing tasks
Write the component test driver (module = test_calculate_price).
▪ A tester would need to know the language and understand the interface, so this is generally performed by the developer. (DevOps may help. How?)
▪ A developer doing only their own unit/module tests is not great.
Note: component testing is often confused with debugging. Debugging is not testing.
Debugging is finding and removing the cause of failures; testing is the systematic approach for finding failures.

Test Driver (by Developer)
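The driver listing is likewise missing here; a minimal sketch of what the developer's driver might look like follows (expected values worked out by hand from the specification; link it against the calculate_price() sketch above). The second test case is derived purely from the specification and exposes the seeded defect.

    #include <stdio.h>
    #include <math.h>

    double calculate_price(double baseprice, double specialprice,
                           double extraprice, int extras, double discount);

    /* Compare actual and expected price within a small tolerance and log. */
    static int check(const char *id, double actual, double expected)
    {
        int ok = fabs(actual - expected) < 0.01;
        printf("%s: %s (got %.2f, expected %.2f)\n",
               id, ok ? "PASS" : "FAIL", actual, expected);
        return ok;
    }

    int test_calculate_price(void)
    {
        int ok = 1;
        /* 3 extras: 10% off extras -> 10000 + 1000 * 0.90 = 10900 */
        ok &= check("TC1", calculate_price(10000, 0, 1000, 3, 0), 10900.00);
        /* 5 extras: spec says 15% off extras -> 10000 + 850 = 10850.
           Fails against the seeded defect, which grants only 10%. */
        ok &= check("TC2", calculate_price(10000, 0, 1000, 5, 0), 10850.00);
        return ok;
    }

    int main(void) { return test_calculate_price() ? 0 : 1; }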

Test Objectives
• to check that the entire functionality of the test object works correctly and completely as required by its specification (functional testing … later)
• functionality means the input/output behaviour of the test object
• to check the correctness and completeness of the implementation, the component is tested with a series of test cases, where each test case covers a particular input/output combination (partial functionality)
• testing for robustness is another important aspect. It is performed in the same way as functional testing.
➡That test focuses on items not allowed or forgotten in the specification.
➡Such test cases are also called negative tests. The component's reaction should be an appropriate exception handling.

For example
Often more than 50% of the program code deals with exception handling.
▪ Robustness has its cost.
Component testing should also check non-functional characteristics, like efficiency, maintainability, and portability (DevOps).

Efficiency may be explicit, as in, for example, embedded software or software with real-time constraints.
Maintainability is how easy the code is to change. This will include: code structure, modularity, quality of the comments in the code, adherence to standards, understandability, and currency of the documentation.
What can we say about the code for calculate_price()?
In terms of test strategy, using a debugger is clearly a useful approach.

    if (discount > addon_discount)
        addon_discount = discount;

is a line of code which can be used to test robustness, though it is important to construct test cases without referring to the code structure.
Test-driven development is very popular: first write the tests, then write the code, and iterate till the code passes the test(s).

Integration Testing
• A precondition for integration testing is that the test objects subjected to it (i.e., components) have already been tested. Defects should, if possible, already have been corrected
• Developers, testers, or special integration teams then compose groups of these components to form larger structural units and subsystems
• The goal of the integration test is to expose faults in the interfaces and in the interaction between integrated components

Example
CarConfig is an element (class) with methods calculate_price() and check_config() (and more).
check_config() interacts with a database and presents the options through a GUI.

We can see that although the user chose Alloy rims, it wasn't included in the price; so although the calculations are correct, some data wasn't passed.

.. continued
Integration of the single components into the subsystem DreamCar is just the beginning of the integration test in the VSR project.
The other subsystems of the VSR (see previous overheads) must also be integrated, and the subsystems must be connected to each other.
➡DreamCar has to be connected to the subsystem ContractBase
➡ContractBase is connected to the subsystems:
• JustInTime (order management),
• NoRisk (vehicle insurance), and
• EasyFinance (financing).
In one of the last steps of integration, VSR is connected to the external mainframe in the IT centre of the enterprise (the system environment).

Integration Levels
• There may be several integration levels for test objects of different sizes.
• Component integration tests will test the interfaces between internal components or between internal subsystems.
• System integration tests focus on testing interfaces between different systems and between hardware and software.
➡For example, if business processes are implemented as a workflow through several interfacing systems and problems occur, it may be very expensive and challenging to find the defect in a special component or interface.
• The most important test objects of integration testing are internal interfaces between components.

Test Environment
• Test drivers are needed in the integration test. They send test data to the test objects, and they receive and log the results.
➡monitors are software tools that watch and log the data traffic between components
• Ideally, the drivers for components are written in a way that they are as generic as can be and can be reused.
• If drivers are poorly architected, then wasteful effort ensues in massaging them for integration testing of the larger components.

Test Objectives
• Clearly these are to reveal interface problems as well as conflicts between integrated parts
➡Eg. interface formats may not be compatible with each other because some files are missing or because the developers have split the system into completely different components than specified
• Harder to find problems are those due to the execution of connected parts. These are found dynamically only. Some examples follow:

Examples
• A component transmits syntactically incorrect or no data. The receiving component cannot operate or crashes
➡could occur because of functional fault in a component, incompatible interface formats, protocol faults.
• Communication works but involved components interpret the received data differently (functional fault of a component, contradicting or misinterpreted specifications).
• Data is transmitted correctly but at the wrong time, or it is late (timing problem), or the intervals between the transmissions are too short (throughput, load, or capacity problem).

“Skipping” component testing?
It is not a good idea to skip component testing and advance straight to integration testing. Most failures that occur in a test designed like this are caused by functional faults within the individual components.
Because there is no suitable access to the individual component, some failures cannot be provoked and many faults, therefore, cannot be found.
▪ If a failure occurs in the test, it can be difficult or impossible to locate its origin and to isolate its cause.
The cost of trying to save effort by cutting the component test is finding fewer of the existing faults and experiencing more difficulty in diagnosis.

Combining a component test with a subsequent integration test is more effective and efficient.

Integration test strategies
• Efficiency is the relation between the cost of testing (the cost of test personnel and tools, etc.) and the benefit of testing (number and severity of the problems found) in a certain test level
• Components are completed in an unpredictable order. Testers should not sit idle until, say, the topmost/first integration is possible.
• An obvious ad hoc strategy to quickly solve this problem is to integrate the components in the order in which they are ready

VSR Example
• the central subsystem ContractBase is delayed because the work on it costs much more than originally expected.
• The project manager decides to start the tests with the available components DreamCar and NoRisk. These do not have a common interface, but they exchange data through ContractBase.
• To calculate the price of the insurance, NoRisk needs to know which type of vehicle was chosen because this determines the price and other parameters of the insurance. As a temporary replacement for ContractBase, a stub is programmed
• The stub makes it possible to put in different relevant data about the customer. NoRisk calculates the insurance price from the data and shows it in a window so it can be checked. The price is also saved in a test log.
➡The stub serves as a temporary replacement for the still missing subsystem ContractBase
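A minimal illustration in C of what such a stub might look like; all names here (vehicle_data, contractbase_get_vehicle, log_insurance_price) are invented stand-ins, not the real ContractBase interface.

    #include <stdio.h>

    /* Invented stand-in for the data ContractBase would supply. */
    typedef struct {
        const char *model;
        double      list_price;
        int         driver_age;
    } vehicle_data;

    /* Stub replacing ContractBase: instead of querying the real subsystem,
       it hands NoRisk canned, tester-chosen data. */
    vehicle_data contractbase_get_vehicle(void)
    {
        vehicle_data v = { "Sedan GT", 32000.00, 25 };
        printf("[stub] supplying vehicle '%s' to NoRisk\n", v.model);
        return v;
    }

    /* Called from the test to record the price NoRisk calculated. */
    void log_insurance_price(double price)
    {
        printf("[stub] insurance price: %.2f\n", price);
    }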

Competing plans and schedule constraints
The system architecture determines how many and which components the entire system consists of and in which way they depend on each other.
The project plan determines at what time during the course of the project the parts of the system are developed and when they should be ready for testing.

The test plan determines which aspects of the system shall be tested, how intensively, and at which test level this has to happen.
The test manager should be consulted when determining the order of implementation.

Test Manager Strategies
The test manager has to design an optimal integration strategy for the project.
Because the integration strategy depends on delivery dates, the test manager should consult the project manager during project planning.
The order of component implementation should be suitable for integration testing

Stubs, drivers and harnesses
You have a component. It may not have been written yet, or it may be too buggy at the minute. You replace it with a fake piece of software which emulates what that component should do (if it worked properly). This allows you to continue testing.
▪ This replacement component is a stub.
You have a component which is responsible for the command-and-control section of the software suite. It may not yet be written or may be too buggy. Alternatively, you might need one to look at the interaction of a subset of software components.
▪ The software you use is a driver.
Recall that a test harness is nothing but a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behaviour and outputs. It has two main parts:
▪ the test execution engine and
▪ the test script repository.
You will gain experience with these in the labs.
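As a toy illustration of those two parts (all names invented): in the C sketch below, the tests[] table plays the role of the script repository, and the loop that runs each entry and tallies the outcomes is the execution engine.

    #include <stdio.h>

    typedef int (*test_fn)(void);             /* a test returns 1 on pass */

    static int test_price_base(void)   { return 1; }  /* placeholder */
    static int test_price_extras(void) { return 1; }  /* placeholder */

    /* "Test script repository": the registered test cases. */
    static struct { const char *name; test_fn run; } tests[] = {
        { "price_base",   test_price_base   },
        { "price_extras", test_price_extras },
    };

    /* "Test execution engine": run every test, monitor and report results. */
    int main(void)
    {
        int failed = 0;
        for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
            int ok = tests[i].run();
            printf("%-12s %s\n", tests[i].name, ok ? "PASS" : "FAIL");
            failed += !ok;
        }
        return failed;
    }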

APIs
An application programming interface (API) is a specification intended to be used as an interface by software components to communicate with each other.
An API may include specifications for routines, data structures, object classes, and variables. An API specification can take many forms, including an International Standard such as POSIX or vendor documentation (wikipedia)
What has this to do with testers? Surely it’s only something the code monkeys need to worry about?


• Instead of testing by means of standard user inputs and outputs, you use software to send calls to the API, get the output, and log the system's response (a minimal sketch follows this list).
• It should also log timing and any other relevant metrics (unless they are more easily captured by system debugging tools), along with the line of test code that was running when an API error occurred.
➡ If the test code sets a memory buffer that is larger than that required by the API, you can then look at the contents of the buffer for improper overwriting on the part of the API.
• Most obviously, it tests for problems arising during everyday use. If a typical use of an API call produces an error under ordinary conditions, that tells you that there's a serious problem somewhere.
• High-stress test conditions provide you with important tests of the API, which, like all software, must either function under difficult conditions or fail gracefully, predictably, and according to specifications.
• APIs need to be documented, and the documentation needs to be accurate, complete, and usable. In the case of an API, the target audience for the documentation consists of developers, and the documentation must allow them to make full use of the API; it should be tested as part of the API
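A compile-able toy of the first point above, in C; everything here (api_request, vsr_quote_price, the dummy body) is invented for illustration and is not a real VSR or vendor API.

    #include <stdio.h>
    #include <time.h>

    /* Invented request/response types for a hypothetical pricing API. */
    typedef struct { int model_id; int extras; } api_request;
    typedef struct { int status; double price; } api_response;

    /* Dummy stand-in so the sketch compiles; a real test would call
       across the published API boundary instead. */
    static api_response vsr_quote_price(const api_request *req)
    {
        api_response r = { 0, 25000.0 + 500.0 * req->extras };
        return r;
    }

    int main(void)
    {
        api_request req = { 7, 3 };
        clock_t t0 = clock();
        api_response resp = vsr_quote_price(&req);  /* send the API call */
        double ms = 1000.0 * (clock() - t0) / CLOCKS_PER_SEC;

        /* Log the response and the timing, as suggested above. */
        printf("status=%d price=%.2f elapsed=%.3f ms\n",
               resp.status, resp.price, ms);
        return resp.status;
    }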

Top down integration testing
The test starts with the top-level component that calls other components but is not called itself.
Stubs replace all subordinate components.
Successively, integration proceeds with lower-level components. The higher level that has already been tested serves as test driver.
Advantage: Test drivers are not needed, or only simple ones are required, because higher-level components that have already been tested serve as the main part of the test environment.
Disadvantage: Stubs must replace lower-level components not yet integrated. This can be very costly

Bottom-up integration testing
The test starts with the elementary (bottom) system components that do not call further components, except for functions of the operating system.
Larger subsystems are assembled (upwards) from the tested components and then tested.
Advantage: No stubs are needed.
Disadvantage: Test drivers must simulate higher-level components.

Ad Hoc integration testing
The components are being integrated in the (casual) order in which they are finished.
Advantage: This saves time because every component is integrated as early as possible into its environment.
Disadvantage: Stubs as well as test drivers are required.

Backbone integration
A skeleton or backbone is built and components are gradually integrated into it.
Advantage: Components can be integrated in any order.
Disadvantage: A likely labour-intensive skeleton or backbone is required.

In practice, neither top-down nor bottom-up integration is used in a pure form; usually a more or less individualised mix of the previously mentioned integration strategies is applied.
Special integration strategies can be followed for object-oriented, distributed, and real-time systems.

Big Bang integration testing
Big bang integration means waiting until all software elements are developed and then throwing everything together in one step.
The time leading up to the big bang is lost time that could have been spent testing.
All the failures will occur at the same time.
It will be difficult or impossible to get the system to run at all.
It will be very difficult and time-consuming to localise and correct defects. If your test manager employs this approach, they are likely deficient: run away!

System Testing
In the lower test levels, the testing was done against technical specifications, i.e., from the technical perspective of the software producer.
The system test looks at the system from the perspective of the customer and the future user.
The testers validate whether the requirements are completely and appropriately implemented.
Many functions and system characteristics result from the interaction of all system components and are visible only when the entire system is present and can be observed

VSR example
The main purpose of the VSR-System is to make ordering a car as easy as possible. While ordering a car, the user uses all the components of the VSR-System:
➡the car is configured (DreamCar),
➡financing and insurance are calculated (EasyFinance, NoRisk),
➡the order is transmitted to production (JustInTime), and
➡the contracts are archived (ContractBase).
The system fulfils its purpose only when all these system functions and all the components collaborate correctly. The system test determines whether this is the case in a production-similar environment.
The system test also checks system and user documentation, like system manuals, user manuals, training material, etc.
Testing configuration settings as well as optimising the system configuration during load and performance testing must often be covered.

System Test Objects
It is more and more important to check the quality of data in systems that use a database or large amounts of data. This should be included in the system test.
The data itself will then be new test objects. It must be assured that it is consistent, complete, and up-to-date.
▪ For example, if a system finds and displays train connections, the station list and schedule data must be correct.
One mistake commonly made to save costs and effort: instead of the system being tested in a separate environment, the system test is executed in the customer’s operational environment. This is detrimental. Why?

System Test Objectives
It is the goal of the system test to validate whether the complete system meets the specified functional and nonfunctional requirements
Failures from incorrect, incomplete, or inconsistent implementation of requirements should be detected.
Even undocumented or forgotten requirements should be identified
In (too) many projects, the requirements are incompletely written down or not written down at all. The problem this poses for testers is that it is unclear how the system is supposed to behave. This makes it hard to find defects.
▪ This leads to testers having to gather information about desired behaviours. No good at all.
▪ I believe this has also given impetus to the use of Agile development.

Acceptance Test
We now transition from testing by the software producer to testing by the product consumer.
The acceptance test may be the only test that the customers are actually involved in or that they can understand.
▪ The customer may even be responsible for this test!
How much acceptance testing should be done is dependent on the product risk.

Early acceptance testing?
Acceptance tests may also be executed as a part of lower test levels or be distributed over several test levels:
➡A commercial off-the-shelf (COTS) product can be checked for acceptance during its integration or installation.
➡Usability of a component can be acceptance tested during its component test.
➡Acceptance of new functionality can be checked on prototypes before system testing.

Test basis for acceptance testing
The test basis for acceptance testing can be any document describing the system from the user or customer viewpoint, such as:
➡user or system requirements,
➡use cases,
➡business processes,
➡risk analyses,
➡user process descriptions,
➡forms,
➡reports,
➡and laws and regulations,
as well as descriptions of maintenance and system administration rules and processes.

Types of acceptance testing
There are four typical forms of acceptance testing:
• Contract acceptance testing
• User acceptance testing
• Operational acceptance testing
• Field testing (alpha and beta testing)

Contract Acceptance Testing
If customer-specific software was developed, the customer will perform contract acceptance testing (in cooperation with the vendor).
Based on the results, the customer considers whether the software system is free of (major) deficiencies and whether the service defined by the development contract has been accomplished and is acceptable.
The test criteria are the acceptance criteria determined in the development contract. Therefore, these criteria must be stated as unambiguously as possible. Additionally, conformance to any governmental, legal, or safety regulations must be addressed.
In contrast to system testing, acceptance testing is run in the customer's actual operational environment.

Different user groups
In the VSR example, the responsible customer is a car manufacturer. But the car manufacturer's shops will use the system. Employees and customers who want to purchase cars will be the system's end users. Some clerks in the company's headquarters will work with the system, e.g., to update price lists in the system.
If major user acceptance problems are detected during acceptance testing, it is often too late to implement more than cosmetic countermeasures.
▪ To prevent disasters, it is advisable to let a number of representatives from the group of future users examine prototypes of the system early
This is also the Agile manifesto approach where acceptance is early and constant due to prototype polishing.

Operational Acceptance Testing
Operational (acceptance) testing assures the acceptance of the system by the system administrators.
It may include testing of backup/restore cycles (including restoration of copied data), disaster recovery, user management, and checks of security vulnerabilities
DevOps

Field acceptance testing
If the software is supposed to run in many different operational environments, it is very expensive or even impossible for the software producer to create a test environment for each of them during system testing.
▪ In such cases, the software producer may choose to execute a field test after the system test.
The field test serves to identify influences from users' environments that are not entirely known or specified and to eliminate them if necessary.
▪ This is often done by releasing an alpha or beta test to clients in different environments, so they can feed back.
A more recent term is dogfood test. It refers to a kind of internal field testing where the product is distributed to and used by internal users in the company that developed the software: “if you make dogfood, try it yourself first.”

Maintenance testing
What happens if and when:
1. The system is run under new operating conditions that were not predictable and not planned.
2. The customers express new wishes.
3. Functions are necessary for rarely occurring special cases that were not anticipated.
4. Crashes that happen rarely or only after a very long run time are reported. These are often caused by external influences.

In the VSR System
These four points are manifest as:
1. A few dealers use the system on an unsupported platform with an old version of the operating system. Sometimes the host access causes system crashes.
2. Users would therefore like to save equipment configurations and be able to retrieve them after a change.
3. Some of the seldom-occurring insurance prices cannot be calculated at all because the corresponding calculation wasn't implemented in the insurance component.
4. Sometimes, even after more than 15 minutes, a car order is not yet confirmed by the server. The system cuts the connection after 15 minutes to avoid having unused connections remain open.
➡The customers are angry about this because they waste a lot of time waiting in vain for confirmation of the purchase order.
➡The dealer then has to repeat inputting the order and then has to mail the confirmation to the customer.

Further Development on VSR?
1. New communication software will be installed on the host in the car manufacturer’s computing centre; therefore, the VSR communication module must be adapted.
2. Certain system extensions that could not be finished in release 1 will now be delivered in release 2
3. The installation base shall be extended to the EU dealer network. Therefore, specific adaptations necessary for each country must be integrated and all the manuals and the user interface must be translated (localisation testing)
After each release, the project effectively starts over, running through all the project phases. This approach is called iterative software development.

Different types of testing
The following types of testing can be distinguished:
• Functional testing
• Nonfunctional testing
• Testing of software structure
• Testing related to changes

Functional Testing
The following shows a part of the formal requirements specification (FRS) concerning price calculation for VSR:
R 100: The user can choose a vehicle model from the current model list for configuration.
R 101: For a chosen model, the deliverable extra equipment items are indicated. The user can choose the desired individual equipment from this list.
R 102: The total price of the chosen configuration is continuously calculated from current price lists and displayed.


Functional requirements specify the behaviour of the system; they describe what the system must be able to do.
Implementation of these requirements is a precondition for the system to be applicable at all.
▪ Characteristics of functionality, according to ISO 9126, are suitability, accuracy, interoperability, and security
Once these are assured, we need to write functional test cases. They are best entered into a requirements management system.
▪ Templates for writing an SRS are available in IEEE 830. Check it out online.
▪ Test cases are about verifying and validating the input/output.
▪ This can be done “blind” without knowing the code, of course.

VSR example
Here are some tests based on Requirement 102, which is:
The total price of the chosen configuration is continuously calculated from current price lists and displayed.
102.1 A vehicle model is chosen; its base price according to the sales manual is displayed.
102.2 A special equipment item is selected; the price of this accessory is added.
102.3 A special equipment item is deselected; the price falls accordingly.
102.4 Three special equipment items are selected; the discount comes into effect as defined in the specification.

Testing a business process
Requirements-based functional testing as shown is mainly used in system testing and other higher levels of testing.
If a software system's purpose is to automate or support a business process for the customer, business-process-based testing or use-case-based testing are similarly suitable testing methods (Chapter 5).
Requirements-based testing focuses on single system functions (e.g., the transmission of a purchase order).
Business-process-based testing, however, focuses on the whole process consisting of many steps (e.g., the sales conversation, consisting of configuring a car, agreeing on the purchase contract, and the transmission of the purchase order). This means a sequence of several tests.

Example
You’re a developer on a team responsible for the company accounting system, implemented in Rails. A business person asks you to implement a reminder system to remind clients of their pending invoices. You sit with that business person and start defining behaviours.
it "adds a reminder date when an invoice is created"
it "sends an email to the invoice's account's primary contact after the reminder date has passed"
it "marks that the user has read the email in the invoice"

The phrasing is in business language, not the system's internal implementation language. You don't see or care that an invoice belongs_to an account, because nobody outside the dev team cares about that. The developer might write a test method:

it "adds a reminder date when an invoice is created" do
  current_invoice = create :invoice
  current_invoice.reminder_date.should == 20.days.from_now
end

Nonfunctional testing
For users, what is often most important for ultimate acceptance is how easily they can use the system. This depends on how easy it is to work with the system, whether it reacts quickly enough, and whether it returns easily understood information.
Nonfunctional requirements do not describe the functions; they describe the attributes of the functional behaviour or the attributes of the system as a whole, i.e., “how well” or with what quality the (partial) system should work.
Characteristics are, according to ISO 9126,
▪ reliability,
▪ usability, and
▪ efficiency.
For the new syllabus ISO/IEC 25010:2011, we include
▪ compatibility
▪ security
▪ See https://www.iso.org/obp/ui/#iso:std:iso-iec:25010:ed-1:v1:en

Typical nonfunctional tests
• Load
• Performance
• Volume (e.g., large files)
• Stress
• Security or penetration
• Stability/reliability
• Robustness
• Compatibility/data conversion
• Configuration
• Usability ([ISO 9241], [ISO 9126])
• Documentation
• Maintainability

Behavioural Testing
The business person specifies behaviours they want to see.
1. The developer asks questions based on their understanding of the system, while also writing down additional behaviours needed from a development perspective.
2. Ideally, both parties can refer to the list of current system behaviours to see if this new feature will break existing features.

Good practice: Representatives of the (later) system test personnel should participate in early requirement reviews and make sure that every nonfunctional requirement (as well as each functional one) can be measured and is testable
Example: The VSR-System is designed for use on a market-leading operating system. It is obvious that recommended or usual user interface conventions are followed for the “look and feel” of the VSR GUI. The DreamCar GUI (see last overhead) violates these conventions in several aspects.
▪ Even if no particular requirement is specified, such deviations from “matter of fact requirements” can and must be seen as faults or defects.

Testing Change
Tests must show that earlier faults have really been repaired (this is retesting). Additionally, there is the risk of unwanted side effects.
Repeating other tests in order to find them is called regression testing.
A regression test is a new test of a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made (uncovering masked defects).
Regression testing may be performed at all test levels and applies to functional, nonfunctional, and structural tests.
Test cases to be used in regression testing must be well documented and reusable. Therefore, they are strong candidates for test automation.
We can't test everything. How extensive should regression testing be?

How extensive?
1. Rerunning of all the tests that have detected failures whose reasons (the defects) have been fixed in the new software release (defect retest, confirmation testing)
2. Testing of all program parts that were changed or corrected (testing of altered functionality)
3. Testing of all program parts or elements that were newly integrated (testing of new functionality)
4. Testing of the whole system (complete regression test)

Picking and Choosing regression tests
• Repeating only the high-priority tests according to the test plan
• In the functional test, omitting certain variations (special cases)
• Restricting the tests to certain configurations only (e.g., testing of the English product version only, testing of only one operating system version)
• Restricting the test to certain subsystems or test levels