
Homework 3
Start Assignment
Due: Thursday by 11:59pm. Points: 100. Submitting: a file upload. Available: May 3 at 11:59pm – May 19 at 11:59pm (16 days).
In this assignment, we will be building a Naïve Bayes classifier and an SVM model to predict productivity satisfaction on the given dataset (https://archive.ics.uci.edu/ml/datasets/Productivity+Prediction+of+Garment+Employees), the productivity of garment employees.


Here is the zip (https://canvas.ucdavis.edu/courses/666488/files/16365353/download?download_frd=1) that contains the downloaded data and the template.
Background
The Garment Industry is one of the key examples of the industrial globalization of this modern era. It is a highly labour-intensive industry with lots of manual processes. Satisfying the huge global demand for garment products is mostly dependent on the production and delivery performance of the employees in the garment manufacturing companies. So, it is highly desirable among the decision makers in the garments industry to track, analyse and predict the productivity performance of the working teams in their factories.
Dataset Attribute Information
1. date: Date in MM-DD-YYYY
2. day: Day of the Week
3. quarter: A portion of the month. A month was divided into four quarters
4. department: Associated department with the instance
5. team_no: Associated team number with the instance
6. no_of_workers: Number of workers in each team
7. no_of_style_change: Number of changes in the style of a particular product
8. targeted_productivity: Targeted productivity set by the Authority for each team for each day
9. smv: Standard Minute Value, i.e., the allocated time for a task
10. wip: Work in progress. Includes the number of unfinished items for products
11. over_time: Represents the amount of overtime by each team in minutes
12. incentive: Represents the amount of financial incentive (in BDT) that enables or motivates a particular course of action
13. idle_time: The amount of time when the production was interrupted due to several reasons
14. idle_men: The number of workers who were idle due to production interruption
15. actual_productivity: The actual % of productivity that was delivered by the workers. It ranges from 0-1.
Libraries that can be used: numpy, scipy, pandas, scikit-learn, cvxpy, imbalanced-learn
Any libraries used in the discussion materials are also allowed.
Other Notes
– Don’t worry about not being able to achieve high accuracy; it is neither the goal nor the grading standard of this assignment.
– If not specified, you are not required to do hyperparameter tuning, but feel free to do so if you’d like.
Exercise 1 – General Data Preprocessing (20 points)
Our dataset needs cleaning before building any models. Some of the cleaning tasks are common in general, but depending on what kind of model we are building, we sometimes have to do additional processing. These additional tasks will be mentioned in each of the remaining two exercises.
Note that we will be using this processed data from exercise 1 in each of the remaining two exercises.
For convenience, here are the attributes that we will treat as categorical: day, quarter, department, and team.
Drop the column date.
For each of the categorical attributes, print out all the unique elements.
For each of the categorical attributes, remap the duplicated items if you find typos or extra spaces among them.
For example, “a” and “a ” should be the same, so we need to update “a ” to be “a”.
Another example: “apple” and “appel” should be the same, so you should update “appel” to be “apple”.

Create another column named satisfied that records the productivity performance. The behavior is defined as follows. This is the dependent variable we’d like to classify in this assignment.
Return True or 1 if actual_productivity is equal to or greater than targeted_productivity.
Otherwise, return False or 0, which means the team fails to meet the expected performance.
Drop the columns actual_productivity and targeted_productivity.
Find and print out which columns/attributes have empty values, e.g., NA, NaN, null, None.
Fill the empty values with 0.
A minimal sketch of how these steps could fit together with pandas is shown below.
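The sketch below is one possible shape for Exercise 1, not the required solution. It assumes the CSV in the zip is named garments_worker_productivity.csv (adjust the path to yours), and it only strips whitespace to merge duplicates; any typo remapping beyond that depends on what your unique-value printout actually shows.

```python
import pandas as pd

# Assumed file name from the provided zip; adjust the path if yours differs.
df = pd.read_csv("garments_worker_productivity.csv")

categorical = ["day", "quarter", "department", "team"]

# Drop the column date.
df = df.drop(columns=["date"])

# Print out all the unique elements of each categorical attribute.
for col in categorical:
    print(col, df[col].unique())

# Remap duplicated items: stripping whitespace handles cases like "a " vs "a".
# Apply an explicit mapping for any typos your printout reveals.
for col in categorical:
    if df[col].dtype == object:
        df[col] = df[col].str.strip()

# Create the dependent variable satisfied, then drop its source columns.
df["satisfied"] = (df["actual_productivity"] >= df["targeted_productivity"]).astype(int)
df = df.drop(columns=["actual_productivity", "targeted_productivity"])

# Find and print columns that contain empty values, then fill them with 0.
print(df.columns[df.isna().any()].tolist())
df = df.fillna(0)
```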
Exercise 2 – Naïve Bayes Classifier (40 points in total)
Exercise 2.1 – Additional Data Preprocessing (10 points)
To build a Naïve Bayes Classifier, we need to further encode our categorical variables.
For each of the categorical attributes, encode the set of categories to be 0 ~ (n_classes – 1).
For example, [“paris”, “paris”, “tokyo”, “amsterdam”] should be encoded as [1, 1, 2, 0].
Note that the order does not really matter, i.e., [0, 0, 1, 2] also works. But you have to start with 0 in your encodings.
You can find information about this encoding in the discussion materials.
Split the data into training and testing sets with a ratio of 80:20.
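As a sketch, scikit-learn's LabelEncoder produces exactly this 0 to n_classes − 1 coding. This assumes the df and categorical list from Exercise 1; the random_state is an arbitrary choice for reproducibility.

```python
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

# Encode each categorical attribute as integer codes 0..(n_classes - 1).
for col in categorical:
    df[col] = LabelEncoder().fit_transform(df[col])

# 80:20 train/test split on features X and target y.
X = df.drop(columns=["satisfied"])
y = df["satisfied"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
```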
Exercise 2.2 – Naïve Bayes Classifier for Categorical Attributes (15 points)
Using the categorical attributes only, please build a Categorical Naïve Bayes classifier that predicts the column satisfied.

Report the testing result using classification_report.
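One possible shape for this step, assuming the X_train/X_test split and categorical list from Exercise 2.1:

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import classification_report

# Fit on the categorical columns only, then report on the held-out 20%.
cnb = CategoricalNB()
cnb.fit(X_train[categorical], y_train)
print(classification_report(y_test, cnb.predict(X_test[categorical])))
```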
Exercise 2.3 – Naïve Bayes Classifier for Numerical Attributes (15 points)
Using the numerical attributes only, please build a Gaussian Naïve Bayes classifier that predicts the column satisfied.
Report the testing result using classification_report .
Remember to scale your data. The scaling method is up to you.
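A sketch using StandardScaler as one possible scaling choice, with the numerical columns taken to be every feature that is not categorical:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

# Numerical attributes: every feature column that is not categorical.
numerical = [c for c in X_train.columns if c not in categorical]

# Fit the scaler on the training split only, then transform both splits.
scaler = StandardScaler()
Xn_train = scaler.fit_transform(X_train[numerical])
Xn_test = scaler.transform(X_test[numerical])

gnb = GaussianNB().fit(Xn_train, y_train)
print(classification_report(y_test, gnb.predict(Xn_test)))
```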
Exercise 3 – SVM Classifier (40 points in total)
Exercise 3.1 – Additional Data Preprocessing (10 points)
To build an SVM classifier, we need a different encoding for our categorical variables.
For each of the categorical attributes, encode them with one-hot encoding.
You can find information about this encoding in the discussion materials.
Split the data into training and testing sets with a ratio of 80:20.
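A sketch with pandas' get_dummies, assuming you start from the Exercise 1 dataframe (before the integer encoding of Exercise 2.1):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Expand each categorical attribute into 0/1 indicator columns.
df_ohe = pd.get_dummies(df, columns=categorical)

# 80:20 train/test split, as before.
X = df_ohe.drop(columns=["satisfied"])
y = df_ohe["satisfied"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
```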
Exercise 3.2 – SVM with Different Kernels (20 points)
Using all the attributes we have, please build an SVM that predicts the column satisfied. Specifically, please:
Build one SVM with linear kernel.

Build another SVM but with rbf kernel.
Report the testing results of both models using classification_report.
The kernel is the only required setting.
Other hyperparameter tuning is not required, but if you'd like to tune the models, make sure the hyperparameters are the same in these two SVMs. In other words, the only difference between the two SVMs should be the kernel setting.
Remember to scale your data. The scaling method is up to you.
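A sketch of the two-kernel comparison, again using StandardScaler as one scaling choice; everything except the kernel is left at scikit-learn defaults, so the two models differ only in the kernel:

```python
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

# Scale using statistics from the training split only.
scaler = StandardScaler()
Xs_train = scaler.fit_transform(X_train)
Xs_test = scaler.transform(X_test)

# Identical settings except for the kernel.
for kernel in ["linear", "rbf"]:
    svm = SVC(kernel=kernel).fit(Xs_train, y_train)
    print(f"--- kernel={kernel} ---")
    print(classification_report(y_test, svm.predict(Xs_test)))
```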
Exercise 3.3 – SVM with Over-sampling (10 points)
For the column satisfied in our training set, please print out the frequency of each class.
Oversample the training data.
For the column satisfied in the oversampled data, print out the frequency of each class again.
Re-build the 2 SVMs with the same settings you have in Exercise 3.2, but use the oversampled training data instead.
Do not forget to scale the data first. As always, the scaling method is up to you.
Report the testing result with classification_report.
You can use ANY method listed here (https://imbalanced-learn.org/stable/references/over_sampling.html), such as RandomOverSampler or SMOTE. You are definitely welcome to build your own oversampler.
Note that you do not have to over-sample your testing data.
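A sketch with imbalanced-learn's RandomOverSampler (SMOTE would slot in the same way); the testing split is left untouched:

```python
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

# Class frequencies before oversampling.
print("before:", Counter(y_train))

# Oversample the training data only.
ros = RandomOverSampler(random_state=0)
X_res, y_res = ros.fit_resample(X_train, y_train)
print("after:", Counter(y_res))

# Scale, then re-build the two SVMs from Exercise 3.2 on the resampled data.
scaler = StandardScaler()
Xs_res = scaler.fit_transform(X_res)
Xs_test = scaler.transform(X_test)

for kernel in ["linear", "rbf"]:
    svm = SVC(kernel=kernel).fit(Xs_res, y_res)
    print(f"--- kernel={kernel} ---")
    print(classification_report(y_test, svm.predict(Xs_test)))
```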
