Test Management
Test Planning
Test Strategy
Entry and Exit Criteria
Test Execution Schedule
Factors influencing the Test Effort
Test Estimation Techniques
Test Monitoring and Control
Metrics Used in Testing
Purpose, Content and Audiences of Test Reports
Configuration Management
Test Execution Schedule
The schedule for executing the test suites should be based on a number of factors:
the priorities of the tests (from risk analysis),
technical or logical dependencies between tests or test suites, and
the type of tests.
Test Execution Example- 1
The following diagram shows the logical dependencies between a set of seven requirements, where a dependency is shown by an arrow. For example, “R1 -> R3” means that R3 depends on R1.
Which one of the following options structures the test execution schedule according to the requirement dependencies?
a) R1 → R3 → R1 → R2 → R5 → R6 → R4 → R7.
b) R1 → R3 → R2 → R5 → R2 → R6 → R4 → R7.
c) R1 → R3 → R2 → R5 → R6 → R4 → R7.
d) R1 → R2 → R5 → R6 → R3 → R4 → R7.
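The dependency diagram itself is not reproduced in these notes. The sketch below therefore uses an assumed, illustrative dependency set and applies Kahn's topological-sort algorithm, a standard way to turn such dependencies into an execution order that never runs a requirement before its prerequisites.

```python
from collections import deque

# Hypothetical dependencies, "A -> B" meaning B depends on A.
# The original diagram is not reproduced here, so this set is illustrative only.
deps = [("R1", "R3"), ("R1", "R2"), ("R2", "R5"),
        ("R5", "R6"), ("R3", "R4"), ("R6", "R4"), ("R4", "R7")]
nodes = {n for edge in deps for n in edge}

# Kahn's algorithm: repeatedly schedule a requirement whose
# prerequisites have all been scheduled already.
indegree = {n: 0 for n in nodes}
successors = {n: [] for n in nodes}
for before, after in deps:
    successors[before].append(after)
    indegree[after] += 1

ready = deque(sorted(n for n in nodes if indegree[n] == 0))
order = []
while ready:
    current = ready.popleft()
    order.append(current)
    for nxt in successors[current]:
        indegree[nxt] -= 1
        if indegree[nxt] == 0:
            ready.append(nxt)

print(" -> ".join(order))  # one valid dependency-respecting schedule
```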
Test Execution Example- 2
You are testing a Customer Relationship Management (CRM) system and you have prepared the following test cases:
TC1
Precondition: CRM database contains at least two client records.
Steps: Clear the whole CRM client database by removing all records.
Expected result: The database is empty.
TC2
Precondition: No preconditions.
Steps: Create a new client record that does not exist in the database.
Expected result: The record is correctly added to the database.
TC3
Precondition: Database contains at least one client record.
Steps: Try to create a new client record that is already present in the database.
Expected result: The system does not allow the record to be duplicated.
At the beginning, the CRM database is empty. The execution of each test lasts 5 minutes. You want to execute all three test cases, but in the shortest possible time. What is a reasonable test execution schedule in this situation?
(A) TC2, TC3, TC1
(B) TC2, TC2, TC3, TC1
(C) TC2, TC2, TC1, TC2, TC3
(D) TC2, TC1, TC3
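The options can also be checked mechanically. Below is a minimal sketch that models the database as a record count and simulates each candidate schedule, assuming the effects described in the test cases above (TC2 always adds a distinct record; a rejected duplicate leaves the count unchanged).

```python
# Model the CRM database as a count of client records and simulate
# each candidate schedule, checking preconditions before each run.
def run(schedule):
    records = 0          # the database starts empty
    minutes = 0
    for tc in schedule:
        minutes += 5     # each execution lasts 5 minutes
        if tc == "TC1":      # clear database; needs >= 2 records
            if records < 2:
                return f"fails at TC1 after {minutes} min (needs 2 records)"
            records = 0
        elif tc == "TC2":    # add a new, non-existing record; no precondition
            records += 1
        elif tc == "TC3":    # attempt a duplicate; needs >= 1 record
            if records < 1:
                return f"fails at TC3 after {minutes} min (needs 1 record)"
            # the duplicate is rejected, so the count is unchanged
    return f"all preconditions met in {minutes} min"

for schedule in (["TC2", "TC3", "TC1"],
                 ["TC2", "TC2", "TC3", "TC1"],
                 ["TC2", "TC2", "TC1", "TC2", "TC3"],
                 ["TC2", "TC1", "TC3"]):
    print(schedule, "->", run(schedule))
```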
Factors Influencing the Test Effort
Estimating
what testing will involve and
what it will cost
Factors that affect the test effort
Product characteristics
Development process characteristics
People characteristics
Test results
Product Characteristics
Risks associated with the product: a high-risk product whose effort is not estimated correctly can suffer badly.
The quality of the test basis (a system requirement, a technical specification, the code itself, or a business process description): it should be well written.
The size of the product: larger products make the project harder to predict and manage.
The requirements for quality characteristics (usability, reliability, etc.): these are time consuming and expensive to test.
The complexity of the problem domain: for example avionics, where strict security rules and regulations apply.
Geographical dispersion of the team, especially across different time zones.
The required level of detail for test documentation.
Requirements for legal and regulatory compliance.
Development Process Characteristics
The lifecycle and development process in use have an impact on how the test effort is spent.
The stability and maturity of the organisation: mature organisations do requirements analysis, design, etc. better.
The development lifecycle model in use: the V-model is more fragile when there is a late change, while agile can carry a high regression-testing cost.
The test approach, e.g. whether testing starts with the project or later: the right kind of approach is needed.
The tools used, which can effectively reduce the time spent on repeated tasks, plus other development tools for debugging etc.
The test process: everyone on the team should know it well, and testers should be trained to perform their activities and tasks in an optimal way.
Time pressure: intelligent planning and re-planning throughout the process is a hallmark of a mature process.
Factors Affecting Test Effort (continued)
People Characteristics
The skills and experience of the people involved.
Team cohesion and leadership.
Test Results
The number and severity of the defects found.
The amount of rework required.
Estimation
It is a management activity which approximates
how long a task will take,
what resources the task will need, and
how much human effort it will take to complete the task.
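One common way to produce such an approximation is three-point estimation; the sketch below is illustrative only (the technique and the figures are assumptions, not taken from these slides).

```python
# Three-point (PERT) estimate: E = (optimistic + 4 * most_likely + pessimistic) / 6
# The figures below are made up for illustration.
def three_point(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

effort = three_point(optimistic=8, most_likely=12, pessimistic=22)  # person-hours
print(f"Expected effort: {effort:.1f} person-hours")  # 13.0
```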
Test Monitoring and Control
Test Monitoring is a test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to the stakeholders.
Test Control is a test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
Test Monitoring and Control
Test Monitoring is about gathering data and information about test activities.
The purpose of test monitoring is to give feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget.
Test Control is using that information to guide or control the remaining testing.
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions are:
Making decisions based on information from test monitoring.
Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
Change the test schedule due to availability of a test environment.
Set an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.
The purpose of metrics in testing, as in monitoring, is to:
Give feedback to the test team and test manager on how testing is going, allowing opportunities to guide and improve the testing and the project.
How? By collecting time and cost data about progress versus the planned schedule and budget.
Provide the team with visibility about test results and the quality of the test object.
Measure the status of testing, test coverage, and test items against the exit criteria to determine whether the test work is done, and to assess the effectiveness of the test activities with respect to the objectives.
Gather data for use in estimating future test efforts, including assessing the adequacy of the test approach.
Common Metrics for test progress monitoring
Percentage of work done in test case preparation (or percentage of planned test cases prepared).
Percentage of work done in test environment preparation.
Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
Test coverage of requirements, risks or code.
Subjective confidence of testers in the product.
Dates of test milestones.
Testing costs, including the cost compared to the benefit of finding the next defect or of running the next test.
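As a rough illustration of how a few of these metrics can be derived from raw execution counts, here is a minimal sketch; the counts mirror Test Report 1 later in this section, and the variable names are assumptions.

```python
# Illustrative progress metrics from raw counts (names are assumptions).
planned, prepared = 56, 54
executed = {"passed": 34, "failed": 10, "blocked": 2, "other": 8}

total_run = sum(executed.values())
print(f"Preparation done: {prepared / planned:.0%}")          # 96%
print(f"Executed: {total_run} of {prepared} prepared cases")
print(f"Pass rate: {executed['passed'] / total_run:.0%}")     # 63%
print(f"Fail rate: {executed['failed'] / total_run:.0%}")     # 19%
```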
Test Report
Test Report is a document which contains
A summary of test activities and final test results
An assessment of how well the testing was performed
Based on the test report, the stakeholders can
Evaluate the quality of the tested product
Decide on the software release. For example, if the test report shows that many defects remain in the product, the stakeholders can delay the release until all the defects are fixed.
The Purpose of Test Reports
Test reporting is concerned with summarizing information about the testing endeavour, including:
What happened during a period of testing, such as dates when exit criteria were met.
Analysed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in tested software.
The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE 829).
Content of a Test Report
A summary of testing performed.
Information about what occurred during the period the report covers.
Deviations from plan, with regard to schedule or effort of test activities.
Relevant metrics, such as the number of test cases executed, the numbers of test cases passed and failed, and the pass and fail percentages.
Factors that have blocked or continue to block progress.
Reusable test work products produced.
The status of the testing and of product (test object) quality with respect to the exit criteria or definition of done.
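To make the list above concrete, here is a minimal sketch of a summary-report record carrying some of these fields; the structure and field names are assumptions for illustration, not the IEEE 829 outline itself.

```python
from dataclasses import dataclass

# Illustrative summary-report record; field names are assumptions,
# chosen to mirror the content list above rather than IEEE 829.
@dataclass
class TestSummaryReport:
    period: str
    cases_executed: int
    cases_passed: int
    cases_failed: int
    deviations_from_plan: str
    blocking_factors: str
    exit_criteria_met: bool

    def pass_percentage(self) -> float:
        return 100 * self.cases_passed / self.cases_executed

report = TestSummaryReport("Sprint 12", 54, 34, 10,
                           "2 days behind schedule", "test env outage",
                           exit_criteria_met=False)
print(f"Pass percentage: {report.pass_percentage():.1f}%")  # 63.0%
```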
Consider the following two test reports:
Test Report 1
Module: browseCatalog
# test cases planned vs. implemented: 56/54
# test cases executed (pass/fail/blocking/other): 54 (34/10/2/8)
Decision coverage (required): 56% (50%)
Condition coverage (required): 77% (70%)
Blocking test cases id: TC004-34, TC-004-50
Test Report 2
Module          Test Status   Requirements Coverage (%)
logIn           PASS          100
browseCatalog   PASS          80
pay             FAIL          100
finalise        PASS          100
What is the best audience for these reports?
(A) Test report 1 is better for a client, and test report 2 is better for a test automation engineer.
(B) Test report 1 is better for the IT director, and test report 2 is better for a developer.
(C) Test report 1 is better for a client, and test report 2 is better for a tester.
(D) Test report 1 is better for a tester, and test report 2 is better for a client or high-level management.
Configuration Management
Configuration Management is the process of establishing and maintaining consistency of a product's performance and its functional and physical attributes with its requirements, design, and functionality throughout its life.
In Configuration Management we make sure that these items are managed carefully over the entire project and product life cycle. It allows software testers to manage their testware and test outputs using the same configuration management mechanisms.
Configuration Management is a change control process. It helps in managing and controlling the versions of software and hardware configurations. It is used primarily when software requirements change.
Configuration Management for testing may involve ensuring that:
All test items of the test object, testware, and work products are uniquely identified, version controlled, tracked for changes, and related to each other, so that it is clear what is being tested.
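As a minimal illustration of unique identification and version tracking for testware, the sketch below uses a simple in-memory registry; the structure, IDs, and field names are assumptions for illustration (a real project would use a version control or configuration management tool).

```python
# Illustrative testware registry: each item gets a unique ID, a version,
# and a link to the item it relates to (what is being tested).
registry = {
    "TC-004-34": {"version": "1.2", "tests": "REQ-017", "status": "blocked"},
    "TC-004-50": {"version": "1.0", "tests": "REQ-021", "status": "blocked"},
}

def bump_version(item_id, new_version, registry):
    """Record a new version of a test item, keeping the change traceable."""
    old = registry[item_id]["version"]
    registry[item_id]["version"] = new_version
    print(f"{item_id}: {old} -> {new_version}")

bump_version("TC-004-34", "1.3", registry)
```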
Risks and Testing
Defect Management
Definition of Risk
Risk is a factor that could result in future negative consequences; i.e., we predict that there is a possibility of a negative or undesirable outcome.
Risk Levels
The likelihood of a risk becoming an outcome is one factor to consider when thinking about the level of risk associated with its possible negative consequences. The risk level is the qualitative or quantitative measure of a risk, defined by impact and likelihood.
PRODUCT AND PROJECT RISKS
Project Risks
Factors relating to the way the work is carried out, i.e. the test project.
A risk that impacts project success.
Different categories:
Project issues
Organisational issues
Political issues
Technical issues
Supplier issues
Product Risks
Factors relating to what is produced by the work, i.e. the thing we are testing.
A risk impacting the quality of the product.
Software not performing its intended functions according to the specifications, or to user, customer, and/or stakeholder expectations.
Inadequate response time.
Project Risk Examples
Project Issues
Delays in delivery, task completion or satisfaction of exit criteria or definition of done.
Inaccurate estimates, reallocation of funds, general cost cutting, etc.
Organisational Issues
Skill, training, and staff shortages.
Personnel issues; users or subject matter experts may be unavailable.
Political Issues
Testers may not communicate their needs and/or the test results adequately.
Developers and/or testers may fail to follow up on information found in testing and reviews.
Supplier Issues
A third party may fail to deliver a necessary product or service, or go bankrupt.
Contractual issues may cause problems for the project.
Project Risk Examples – Technical Issues
Requirements or user stories may not be clear enough or well enough defined.
The requirements may not be met, given existing constraints.
The test environment may not be ready on time.
Data conversion, migration planning and their tool support may be late.
Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configurations, test data and test cases.
Poor defect management and similar problems may result in accumulated defects and other technical debt.
Risk-based Testing and Product Quality
Risk Management
Risk-based Testing
Risk Analysis
Assigning a Risk Level
Mitigation Options
Risk Management
Dealing with risks within an organisation is known as risk management and testing is one way to manage some aspects of risks.
For any risk, you have four typical options:
Mitigate: take steps in advance to reduce the likelihood of the risk and the impact of the risk.
Contingency: have a plan in place to reduce the impact should the risk become an outcome.
Transfer: convince some other member of the team or a project stakeholder to reduce the likelihood or accept the impact of the risk.
Ignore: do nothing about the risk, which is usually the smart option only when there is little that can be done or when the likelihood and impact are low.
Risk Management Activities
Risk Identification
Consult stakeholders
Prepare a draft register of risks
Determine Priority
Of the risks.
Assign probability and consequence scores.
Implement Actions
To mitigate the likelihood or impact of those risks, or both.
Make Contingency Plans
To deal with risks if they do happen.
Risk Analysis
(Re-evaluate on a regular basis.) What can go wrong?
Discuss the risks identified.
Risk-based Testing (RBT)
https://www.guru99.com/risk-based-testing.html
Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
Risk-based testing involves both mitigation and contingency.
It involves measuring how well we are doing at finding and removing defects in critical areas.
Risk Analysis
Close reading of the requirements specifications, user stories, design specifications, user documentation, and other items.
Brainstorming with many of the project stakeholders.
A sequence of one-to-one or small-group sessions with the business and technology experts in the company.
A team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.
How to Perform Risk Analysis
You can start with a simple question: "What should we worry about?"
Look for specific risks in particular product risk categories:
functionality, localisation, usability, reliability, performance, and supportability.
Standard checklists exist for specific types of risks.
Review the tests that failed and the bugs you found in a previous release.
Early analyses are usually educated guesses, so make sure you have a plan to reassess and adjust the risks at regular intervals.
Assigning a Risk Level
After identifying the risks, the stakeholders should review the list to assign the likelihood of problems and the impact of problems associated with each one.
You can do this with all the stakeholders at once, or you can have the business people determine impact and the technical people determine likelihood, and then merge the determinations.
Typical scales are:
high / medium / low, or
very high / high / medium / low / very low (recommended).
Mitigation Options
With a risk priority number, we can now decide on the various risk mitigation options available to us.
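As a hedged sketch of this step, the snippet below maps the five-point scale above to the numbers 1 to 5, derives a risk priority number as likelihood times impact, and orders the risks for mitigation. The example risks are paraphrased from earlier slides, and the scoring scheme is one common convention, not the only one.

```python
# Five-point scale (very low .. very high) mapped to 1..5; the risk
# priority number here is likelihood * impact. Scores are illustrative.
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

risks = [
    ("test environment not ready on time", "high", "medium"),
    ("inadequate response time", "medium", "very high"),
    ("third-party supplier goes bankrupt", "very low", "very high"),
]

prioritised = sorted(
    ((name, SCALE[likelihood] * SCALE[impact])
     for name, likelihood, impact in risks),
    key=lambda pair: pair[1], reverse=True)

for name, rpn in prioritised:
    print(f"RPN {rpn:2d}: {name}")  # mitigate the highest RPN first
```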
Concluding Thoughts on Risks
The results of product risk analysis are used:
To determine the test techniques to be used.
To determine the particular levels and types of testing to be performed.
To determine the extent of testing to be carried out for the different levels and types of testing.
To prioritize testing in order to find the most critical defects as early as possible.
To determine whether any activities in addition to testing could be employed to reduce risk, such as providing training in design and testing to inexperienced developers.
What are defect reports for?
When running a test, you may get actual results that vary from the expected results. Is that a bad thing?
No, because one of the major goals of testing is to find problems.
Commonly, these are called incidents, bugs, defects, problems, or issues.
We log these defects so that we have a record of what we observed and can follow it up and track what is done to correct it, whether or not it turns out to be a problem in the work product we are testing or something else. This is defect management.
Prevent the Defect
Early Detection
Minimize the impact
Resolution of the Defect
Process improvement
Defect Management
What are the defect reports for?
Objectives for defect reports?
How to write a good defect report?
What goes in a defect report?
What happens to the defect reports after you file them?
How can we document and manage the defects that occur during testing?
The process of recognizing and recording defects, classifying them, investigating them, taking action to resolve them, and disposing of them when resolved.
Defect Management
Incident: an incident is an unplanned interruption. When the operational status of an activity turns from working to failed and causes the system to behave in an unplanned manner, it is an incident.
A problem can cause more than one incident, and incidents are to be resolved, preferably as soon as possible.
Defect Detection Percentage
Metric that compares field defects with test defects.
Useful metric of the effectiveness of the test process.
DDP = defects(testers) / (defects(testers) + defects(field))
where field defects are defects that happen after deployment.
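A minimal sketch of the calculation, with illustrative counts:

```python
# Defect Detection Percentage: the share of all known defects that the
# test team found before release. The counts below are illustrative.
def ddp(test_defects: int, field_defects: int) -> float:
    return 100 * test_defects / (test_defects + field_defects)

print(f"DDP = {ddp(test_defects=90, field_defects=10):.0f}%")  # 90%
```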
Objectives for defect reports
A defect report contains a description of the issue found and classification of that issue.
Helps to bring clarity to the goals. Typical objectives are:
To provide developers, managers and others with detailed information about the behaviour observed, that is the defect.
To support test managers in the analysis of trends in aggregate defect data, for understanding and reporting the overall level of system quality.
To enable defect reports to be analysed over a project, and even across projects, to give information and ideas that can lead to development and test process improvements.
How to write good defect reports?
Technical document
Use careful attentive approach to running your tests
Isolate the defect
Think outside the box
Choice of words
What goes into the defect report?
Name of tester who detected the defect
Tester role, like developer, business analyst, technical support analyst, etc.
Testing type that caught the defect – like regression testing, etc.
Problem summary
Problem description
Testing steps to recreate the observed failure, including expected and actual results.