
Testing Assignment A2
Due Apr 29 by 23:59 Points 20
Background
SOFT3202 / COMP9202 Testing Assignment A2
40 Hectare Forest Ltd must have been happy with your earlier work, and now they have asked you to extend the test suite you created for TGR to ensure it can handle the many different connections TGR has with other Hecfor ERP services.
Note: The development and testing processes you will be using here are for assessment purposes – they do not resemble normal industry practice. The API design has also been modified from best practices in order to support assessment, and has been pared down to just the API you need to test.
Your assignment will be automatically marked by a script. This places strict requirements on your classes and filenames, as well as on what you are able to assume in your code. You will be provided with a package structure that you must follow, along with the API. Your marks will be drawn entirely from the marking script – failure to follow these instructions will lead to automatic loss of marks (up to 100% depending on script output).
For the final submission, you must use the information contained in the 40HF System API V2 documents to complete the following tasks:
PRELIMINARY CORRECTIONS
PRODUCT MODULE
You must fix the issues you encountered in your preliminary submission to ensure your Product module and test suite are API compliant.
This module and test suite will not be otherwise extended in this submission.
Note that while ProductFactory is now deprecated in favour of TGRTestFactory, you should still include your corrected implementation and tests for it.

TGR
You must fix the issues you encountered in your preliminary submission to ensure your TGR test suite is API compliant.
This test suite will need to be significantly extended in this submission.
You do not need to implement this module.
A2 EXTENSIONS
In your preliminary submission you have written tests that break one of the most important rules of micro/unit testing (that’s ok, your client told you to). This lowest level of testing works on the (greatly simplified) basis that the system under test does not involve participation by any dependencies. We will fix this in this submission, as well as extend the test suite to cover some more complex features.
You will first modify your TGR test suite wherever you used the setProductProvider method, injecting a test double Product module instead of a real one. You will modify these tests so that instead of testing that the final state is correct (it likely won’t be), you test that the correct calls to the Product module are made, using the mock test double feature provided by Mockito.
Note: tests in the TGRFacadeImplTest class should follow collaboration/unit test design requirements: you can treat your efforts with the Product module implementation and test suite as equivalent to a successfully passing contract test.
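The shift from state-based to interaction-based testing described above can be sketched in plain Java. The `ProductProvider` and `Facade` types below are hypothetical stand-ins, not part of the 40HF API; in the assignment itself you would create the double with Mockito’s `mock()` and check the call with `verify()` rather than hand-rolling a recorder as done here:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator interface -- stands in for the real Product module.
interface ProductProvider {
    void createProduct(String name);
}

// Hypothetical system under test: it forwards work to its injected provider.
class Facade {
    private ProductProvider provider;
    void setProductProvider(ProductProvider p) { this.provider = p; }
    void addProduct(String name) { provider.createProduct(name); }
}

public class InteractionTestSketch {
    public static void main(String[] args) {
        // Hand-rolled recording double; Mockito's mock()/verify() automates this.
        List<String> calls = new ArrayList<>();
        ProductProvider testDouble = name -> calls.add("createProduct(" + name + ")");

        Facade facade = new Facade();
        facade.setProductProvider(testDouble); // inject the double, not a real module
        facade.addProduct("Pine Log");

        // Interaction assertion: verify the correct call was made,
        // rather than inspecting the (unreliable) final state.
        if (!calls.equals(List.of("createProduct(Pine Log)"))) {
            throw new AssertionError("expected one createProduct call, got " + calls);
        }
        System.out.println("interaction verified: " + calls);
    }
}
```

The key design point is that the assertion targets the conversation between the facade and its dependency, so the test stays valid even though the real Product module is absent.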
TGR
The TGR module will be extended to leverage several new dependencies:
Authentication service
Reporting service
Print service
Fax service
Email service
You must extend your collaboration test suite to test each of the TGR methods which involve these dependencies (through their injection methods, e.g. setAuthenticationProvider). You should use mock test doubles when testing these methods, as with Product.
Several new TGRFacade methods have been added to the API to leverage these new services. Several existing methods have been changed (particularly to do with Auth).
Full System / Customer / QA testing
Hecfor wants some key features of the TGR system included in a test the helpdesk team can run to ‘sanity check’ the entire system. Target the ‘TGRTestFactory’ API (you will not be provided with the class itself) for the dependency construction and injection you need to create full system tests for the following tasks in the TGRSystemQA class:
Log in to the system, create a product, assign that product ‘REJECTED’ status, log out. Observed by:
getAllProducts() returns a List of size 1 with list.get(0).approvalFinalised() being true and
getApprovalStatus() returning “REJECTED”.
Log in to the system, create a product, add storage for that product with a non-zero count, a fax number, and an email address, set that storage to require print and email reporting only, create a report for that storage, log out.
Observed by:
getLastReportedCount for the storage matches the output count of the created report
A report is printed (detect this with standard out output: “Printing that report!”)
An email report is sent (detect this with standard out output: “Emailing that report!”)
No fax report is sent (i.e. “Faxing that report!” must not appear on standard out)
Remember – this is a full system test; do not mock anything. Your unit test (*Test.java) files will not be used to check this system – you must catch all bugs with TGRSystemQA.
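Observing console output like “Printing that report!” in a Java test is usually done by swapping System.out for a buffer before exercising the system and inspecting the buffer afterwards. The `createReport` method below is a fake stand-in for the real TGRFacade behaviour (which is not shown in this document); the capture-and-restore pattern is the part you would reuse:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StdoutCheckSketch {
    // Fake stand-in: the real system would print these lines when a report is created.
    static void createReport() {
        System.out.println("Printing that report!");
        System.out.println("Emailing that report!");
    }

    public static void main(String[] args) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer));  // redirect standard out into the buffer
        try {
            createReport();
        } finally {
            System.setOut(original);             // always restore, even on failure
        }
        String out = buffer.toString();
        // Positive checks: the print and email reports happened.
        if (!out.contains("Printing that report!")) throw new AssertionError("no print report");
        if (!out.contains("Emailing that report!")) throw new AssertionError("no email report");
        // Negative check: no fax report was sent.
        if (out.contains("Faxing that report!")) throw new AssertionError("unexpected fax report");
        System.out.println("stdout checks passed");
    }
}
```

Restoring the original stream in a finally block (or a JUnit teardown method) matters: a test that fails mid-capture would otherwise silence the output of every test after it.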
Submission Requirements
You will be submitting your code as a GitHub repository. This repository must be created under your unikey account on the https://github.sydney.edu.au platform, with the repository name exactly matching: SCD2_2021_A2
You must create this repository as a PRIVATE repository. Make sure it is private, otherwise you could get in trouble for academic dishonesty!
You must add the following unikeys as ‘collaborators’ so both the marking script and the teaching team can access your work. Do not add any other collaborators, and make sure you get the spelling/numbers correct to avoid releasing your code to somebody else:
jbur2821
mmcg5982
bsch0132
ttho6664
Your repository should match the file structure of the provided ‘skeleton’ gradle project (keep the package and directory structure exactly as it is given to you). You can add other files if you like to help you while you work (for example, an implementation of TGRFacadeImpl), but only the following files will be assessed:
TGRFacadeImplTest.java
TGRSystemQA.java
ProductImpl.java
ProductImplTest.java
ProductFactory.java
ProductFactoryTest.java
ProductListImpl.java
ProductListImplTest.java
You can see a public repository example of this submission structure at https://github.sydney.edu.au/JBUR2821/SCD2_2021_A2
Marking Mechanism
The marking for this assignment is done by a script. This script will run each night at any time after midnight, based on your last pushed commit to your repository’s master/main branch. Below is a simplified description of the process the marking script will follow so you can better understand the feedback it gives you. Feedback will be available through this Canvas assignment. Note that some feedback will be hidden until the due date!
First, it checks to see if it has access to a correctly named repository for your unikey. If it does not, it terminates.
If it has access to a repository, it will clone the repository, and retrieve the latest pushed commit you have made to the master/main branch (most likely HEAD). Don’t do anything like deleting or renaming the master/main branch, but working on other branches is perfectly fine. The script will only look at master/main though.
Once it has the latest commit, it will parse the directory tree to see if it looks like it should (i.e. it will look for the 8 assessable files in the directories they should be in). If they are not there, it terminates.
If it has found the files, it will move them into the test harness. Your other files are ignored (your assessable code cannot rely on them!). This test harness includes:
An environment for testing your implemented code (i.e. my tests will be run on your code)
Multiple environments for testing your test cases (i.e. your tests will be run on my code)
One version in each category will have no bugs. If you reject this version as being bugged, the script will terminate. You MUST pass the working version in order to gain any marks at all.
Various numbers of versions in each category will have one bug each. You gain marks based on the number of bugged versions you reject as bugged. Most bugs will be ‘hidden’ until after the due date.
Each of these will be checked with the gradle command ‘gradle test’ – using the same build.gradle file you have been provided.
Your code may fail to compile. If this is the case the script will terminate. Your code MUST compile in order to gain any marks at all. This can occur separately depending on which files are being tested (that is, your implementation might compile and run, but a test file might fail).
Once all of the above completes successfully a mark will be calculated and the script will terminate.

Your feedback will include some of the following, depending on how far the script got:
If the script terminated prematurely, you will be given a message indicating when it terminated. Any errors generated (such as compile-time errors) will be included.
If this is a ‘before the due date’ marking run, and the script completed, you will receive the following:
A message indicating your code structure appears to be ok and your code compiled successfully
A message indicating how many tests your implemented code passed vs failed, including the JUnit report
A message indicating the number of ‘open’ bugs you have caught vs missed
If this is the ‘after the due date’ marking run, and the script completed, you will receive the following:
A message indicating your code structure appears to be ok and your code compiled successfully
A message indicating how many tests your implemented code passed vs failed, including the JUnit report
A message indicating how many open AND hidden bugs you have caught vs missed, including what those bugs were
A mark derived from the above based on the marking guide.
Resources
Hecfor has provided you with a test admin login (guaranteed to work in any Auth module): Username: “Jim Cummings” Password: “hunter2”.
40HF System API V2 (https://canvas.sydney.edu.au/courses/31635/files/16021187/download?download_frd=1) – this is the API you must target for your tests and implementation.
https://github.sydney.edu.au/JBUR2821/SCD2_2021_A2 – this contains a sample package structure; you must follow the package structure indicated (note this is not the submission structure). The included build.gradle file indicates the only externally imported libraries that will be available in the marking environment (no Apache Commons, Google Guava, or alternate test frameworks/versions).
Assessment Notes:
Your final submission will be assessed using a variety of automated tests. These tests are complex as you have been asked to write a sophisticated test suite with some very specific requirements. Ensure you read and follow these instructions carefully as automated testing is not a forgiving system!
You will be assessed on the following requirements (see marking guide for weighting details):
Product module implementation correct (passes assessment test suite, repeat of A1)
Product module test suite correct (detects good and bugged assessment implementations, repeat of A1)
TGRFacade test suite correct (detects good and bugged assessment implementations) including:
Collaboration tests using test doubles for all dependencies
Full system test leveraging TGRTestFactory for all dependencies
In the TGRFacade collaboration tests, you may not call on concrete versions of the interfaces TGRFacade is targeting. The assessment script will not even include these files in the src folder when it runs the tests – if your test classes even try to import them, they will fail to compile. You must use mocks and only mocks in the collaboration tests.
Remember to only target the public API to ensure your code still compiles when the assessment implementation is swapped in – no custom/private method calls.
Be careful to test outcomes, not processes. Be careful also to allow for any valid outcome: e.g. when calling TGRFacade.addAccount, either null or an Integer id input would be fine unless there is a specific reason to require one or the other.
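“Any valid outcome” means the leniency has to live in the assertion itself. As a sketch (the `addAccount` signature and its null-or-Integer contract here are assumptions for illustration, not taken from the 40HF API), an over-strict test would reject one of two equally valid implementations, while a lenient one accepts both:

```java
public class LenientOutcomeSketch {
    // Assumed stand-in for TGRFacade.addAccount: the (hypothetical) API permits
    // it to return either null or an Integer id.
    static Integer addAccount(String name, boolean implementationReturnsId) {
        return implementationReturnsId ? Integer.valueOf(42) : null;
    }

    static void lenientCheck(Integer id) {
        // Over-strict version:  if (id == null) throw ...  -- this would fail a
        // perfectly API-compliant implementation that chose to return null.
        // Lenient version: accept any result the API allows; only assert on
        // properties the API actually promises.
        System.out.println("accepted id: " + id);
    }

    public static void main(String[] args) {
        lenientCheck(addAccount("Jim Cummings", true));   // implementation A: returns 42
        lenientCheck(addAccount("Jim Cummings", false));  // implementation B: returns null
    }
}
```

The design point: every extra constraint your test adds beyond the API is a chance to wrongly reject the working version, which zeroes the whole section.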
Some important notes (mostly kept from A1):
Ensure you stick to the folder structure, package structure, and filenames required for this assignment – the marking script will not know the difference between a typo in the filename and a syntax error and will fail you either way! In particular do not reference methods not declared in the public API documents – the code that is swapped in will not implement any other methods and this will cause a compilation failure.
Pay attention to what classes you are supposed to test – you do not need to test any of the given interfaces, you will be testing concrete implementations of those interfaces based on the requirements the interface javadocs specify.
You will be testing the defensive programming elements of these modules as well as their actual operations – that is, whether they correctly identify and reject input that breaches their preconditions in the way their API says they will. However, YOUR implementation may not rely on the defensive programming of other elements (for example, the ProductListImpl class may not assume the ProductImpl class will handle rejecting input which breaches ProductListImpl preconditions; it needs to make its own checks). The implementation you are given to test may (and will) rely on correct outputs from its dependencies, and you can rely on this in your implementation also.
Something to make things easier: You may assume that the implementations you are given are entirely deterministic – there is no use of any pseudorandom functions, the system does not react to the system clock anywhere it doesn’t tell you it will in the API, it does not query the network, and it does not look at the current hardware. This is obviously NOT something you can assume when doing real testing! (be careful though – some Java in-built classes do not offer guarantees you might assume – for instance, the order of certain collections)
All postconditions should be considered to have an implicit ‘and no unrelated externally observable effects’ requirement. That is, for example, Product.setApprovalStatus does not explicitly say that this operation should not modify the output of Product.getName from what it was before setApprovalStatus was called, but this and all similar cases should be assumed. You do not need to test for breaches of this requirement (none of the bugs you need to catch are like this), but breaching it in your implementation may result in your implementations failing assessment testing.
Unless otherwise specified, this API does not make any guarantees of concrete implementing class – that is, where List is specified, ArrayList or LinkedList or a custom List would all be valid. Do not make more detailed assumptions of behaviour when testing.
In the Javadocs, preconditions are the combination of the ‘Precondition’, ‘Parameter’, and ‘Throws’ sections. Postconditions are the combination of the ‘Postcondition’ and ‘Return’ sections. Do not rely on the copied information in the concrete class javadocs (not all of the documentation gets copied) – refer to the interface specification.
You may find ‘best practice’ information that says you should not test simple methods like basic getters and setters – this is correct, it’s usually a waste of time. For the scope of this assignment however you should be testing everything, even the simplest methods.
Sanity note: If your tests’ passing means you can say with certainty that the API is adhered to, then you are 100% guaranteed to pick up all of the marking bugs. Each of them directly breaches something said in the API – however, a detailed and correct comprehension of this API will be required!
To give you an idea of just how much easier this makes things (and how hard real world testing is), there is no bug that only occurs if a Product object has an ID of 1337.
That also means you don’t need to test for some things that you normally should – such as integer over/underflows. Stick to the API. The test suite used for your implementation follows the same mindset.
Conversely, DON’T add things not required by the API – either in your implementation, or your test suite. e.g. if the API doesn’t say to throw an exception, then don’t (and don’t have your tests expect the implementation you are given to throw one either). There is at least 1 deliberate gotcha here where a well designed and consistent system would act differently – but we’re just here for verification, not validation.
Lists that do not guarantee order in the API should not be tested with a required order.
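One way to honour this rule is to compare list contents as multisets – same elements, same duplicate counts, any order – for example by sorting copies before comparing. The lists below are illustrative, not taken from the API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OrderInsensitiveSketch {
    // Compare two lists as multisets: same elements in any order, duplicates counted.
    // Sorting copies leaves the originals untouched.
    static boolean sameElements(List<String> a, List<String> b) {
        List<String> x = new ArrayList<>(a);
        List<String> y = new ArrayList<>(b);
        Collections.sort(x);
        Collections.sort(y);
        return x.equals(y);
    }

    public static void main(String[] args) {
        List<String> expected = Arrays.asList("Oak", "Pine", "Pine");
        List<String> actual = Arrays.asList("Pine", "Oak", "Pine"); // different order
        // A plain expected.equals(actual) is order-sensitive and would wrongly
        // reject this valid result; the multiset comparison accepts it.
        if (!sameElements(expected, actual)) throw new AssertionError("elements differ");
        System.out.println("order-insensitive match: " + sameElements(expected, actual));
    }
}
```

Where elements are not Comparable, counting occurrences in a Map serves the same purpose.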
Marking Guide
Note: For each bug section, marks are only available IF your test suite accepts the given working example. If your test suite marks the working example as bugged, the total score for that section will be 0%.
0% Does implementation & test suite compile, and is the working version passed? (must achieve this for any marks in the relevant sections)
2% Product module implementation & test suite corrected (note that this is all or nothing as these bugs have not changed since Assignment 1 – 1% for the implementation passing, 1% for the test suite catching all bugs)
15% TGRFacade collaboration testing (20 bugs to detect – 5 open, 15 closed)
3% TGR system test (6 bugs to detect – 2 open, 4 closed)

A Final Note On Difficulty
There are some important differences in the assessment of this assignment compared to Assignment 1, beyond the above listed extensions and Mockito. The ‘working version’ for Assignment 2 is no longer static – just like the ‘not a bug’ examples in Task 1, the code in the working version can and will change over the course of the assignment, though at each step the changes made will still match the API. This means that if your tests enforce things beyond the API, you might see your code pass the working version one day and fail it the next. There is also a change in the nature of the testing required – where all the bugs in Assignment 1 were limited to single methods, bugs in Assignment 2 often concern processes, that is, multiple related methods run in succession. Running a single method, even with all possible inputs, will not be enough to catch these bugs.
Academic honesty
While the University is aware that the vast majority of students and staff act ethically and honestly, it is opposed to and will not tolerate academic dishonesty or plagiarism and will treat all allegations of dishonesty seriously.
Further information on academic honesty, academic dishonesty, and the resources available to all students can be found on the academic integrity pages on the current students website: https://sydney.edu.au/students/academic-integrity.html
Further information on research integrity and ethics for postgraduate research students and students undertaking research-focussed coursework such as Honours and capstone research projects can also be found on the current students website: https://sydney.edu.au/students/research-integrity-ethics.html
Compliance statement
In submitting this work, I acknowledge I have understood the following:
I have read and understood the University of Sydney’s Academic Honesty in Coursework Policy 2015 (https://sydney.edu.au/policies/showdoc.aspx?recnum=PDOC2012/254&RendNum=0).
The work is substantially my own, and where any parts of this work are not my own I have indicated this by acknowledging the source of those parts of the work and enclosed any quoted text in quotation marks.
The work has not previously been submitted in part or in full for assessment in another unit unless I have been given permission by my unit of study coordinator to do so.
The work will be submitted to similarity detection software (Turnitin) and a copy of the work will be retained in Turnitin’s paper repository for future similarity checking. Note: work submitted by postgraduate research students for research purposes is not added to Turnitin’s paper repository.