
CISC520 Data Engineering and Mining
Homework 1

Part I:
For this part, you need to explore the bank data (bankdata_csv_all.csv), available on the LMS, and an accompanying description (bankdataDescription.doc) of the attributes and their values. The dataset contains attributes on each person’s demographics and banking information, used to determine whether they will want to obtain the new PEP (Personal Equity Plan).
Your goal is to perform Association Rule discovery on the dataset using R.
First, perform the preprocessing steps required for association rule mining: the id field needs to be removed, and a number of numeric fields need to be discretized or otherwise converted to nominal values.
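For illustration, a minimal R sketch of these preprocessing steps using the arules package is given below. The column names (id, age, income, children) and the binning choices are assumptions based on the data description, so adjust them to match the actual file.

# Sketch of one possible preprocessing approach (column names and bin
# counts are assumptions; check names(bank) against bankdataDescription.doc).
library(arules)

bank <- read.csv("bankdata_csv_all.csv", stringsAsFactors = TRUE)

# Drop the id field: it uniquely identifies rows and carries no pattern.
bank$id <- NULL

# Discretize numeric fields into nominal bins (3 equal-frequency bins is an
# illustrative choice, not a required setting).
bank$age    <- discretize(bank$age,    method = "frequency", breaks = 3)
bank$income <- discretize(bank$income, method = "frequency", breaks = 3)

# "children" takes only a few distinct values, so treat it as a factor.
bank$children <- as.factor(bank$children)

# Coerce the cleaned data frame to the transactions format used by arules.
bank_trans <- as(bank, "transactions")
summary(bank_trans)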
Next, set PEP as the right-hand side of the rules and see what rules are generated.
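A hedged sketch of this step follows; the item labels "pep=YES"/"pep=NO" assume the column is called pep with values YES and NO, and the support/confidence thresholds are illustrative starting points rather than required settings.

# Generate rules with PEP restricted to the right-hand side.
# Check itemLabels(bank_trans) for the real item labels before running this.
rules <- apriori(
  bank_trans,
  parameter  = list(supp = 0.05, conf = 0.8, minlen = 2),  # illustrative thresholds
  appearance = list(rhs = c("pep=YES", "pep=NO"), default = "lhs")
)
summary(rules)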
Select the top 5 most “interesting” rules and for each specify the following:
• Support, Confidence and Lift values
• An explanation of the pattern and why you believe it is interesting based on the business objectives of the company.
• Any recommendations based on the discovered rule that might help the company better understand the behavior of its customers or develop a business opportunity.
Note that the top 5 most interesting rules are most likely not simply the top 5 strongest rules. They are rules that, in addition to having high lift and confidence, also provide non-trivial, actionable knowledge based on the underlying business objectives.
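One practical way to shortlist candidates is to sort the strong rules by lift and read through them by hand, as in the sketch below (object names carry over from the earlier sketches; the measures are the standard arules quality columns).

# Support:    fraction of all customers who match both sides of the rule.
# Confidence: fraction of customers matching the left-hand side who also
#             match the right-hand side.
# Lift:       confidence divided by the overall frequency of the right-hand
#             side; lift > 1 means the rule beats random chance.
rules_by_lift <- sort(rules, by = "lift", decreasing = TRUE)
inspect(head(rules_by_lift, 20))   # browse candidates, then pick 5 by hand
head(quality(rules_by_lift), 20)[, c("support", "confidence", "lift")]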
To complete this assignment, write a short report describing your association rule mining process and the resulting 5 interesting rules, each with the three items of explanation and recommendation listed above. For at least one of the rules, discuss the support, confidence and lift values and how they are interpreted in this data set.
You should write your answers as if you are working for a client who knows little about data mining. Your report should give your client some insightful and reliable suggestions on what kinds of potential buyers your client should contact, and convince your client that your suggestions are reliable based on the evidence gathered from your experiment results.
In more detail, your answers should include:
• Description of preprocessing steps
• Description of the parameters and experiments used to obtain strong rules
• The top 5 most interesting rules, with the 3 items listed above for each rule.

Part II:
In this part of the homework, you are expected to apply a decision tree induction algorithm to solve a mystery in history: who wrote the disputed essays, Hamilton or Madison?
1. About the Federalist Papers
Quote from the Library of Congress http://www.loc.gov/rr/program/bib/ourdocs/federalist.html
The Federalist Papers were a series of eighty-five essays urging the citizens of New York to ratify the new United States Constitution. Written by Alexander Hamilton, James Madison, and John Jay, the essays originally appeared anonymously in New York newspapers in 1787 and 1788 under the pen name “Publius.” A bound edition of the essays was first published in 1788, but it was not until the 1818 edition published by the printer Jacob Gideon that the authors of each essay were identified by name. The Federalist Papers are considered one of the most important sources for interpreting and understanding the original intent of the Constitution.
2. About the disputed authorship
The original essays can be downloaded from the Library of Congress. http://thomas.loc.gov/home/histdox/fedpapers.html
In the author column, you will find 74 essays with identified authors: 51 essays written by Hamilton, 15 by Madison, 3 by Hamilton and Madison jointly, and 5 by Jay. The remaining 11 essays, however, are attributed to “Hamilton or Madison”. These are the famous essays with disputed authorship. Hamilton claimed the authorship in writing before he was killed in a duel; later, Madison also claimed authorship. Historians have long tried to determine which of the two was the real author.
3. Computational approach for authorship attribution
In the 1960s, statisticians Mosteller and Wallace analyzed the frequency distributions of common function words in the Federalist Papers and drew their conclusions. This is pioneering work on using mathematical approaches for authorship attribution.
Nowadays, authorship attribution has become a classic problem in the data mining field, with applications in forensics (e.g., deception detection) and information organization.
The Federalist Papers data set (fedPapers85.csv) is provided on the LMS. The features are a set of “function words”, for example, “upon”. The feature value is the percentage of the word’s occurrences in an essay. For example, for the essay “Hamilton_fed_31.txt”, if the function word “upon” appears 3 times and the total number of words in the essay is 1,000, the feature value is 3/1000 = 0.3%.
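For illustration only, that feature-value calculation looks like this in R (the numbers are the example from the text; the provided CSV already contains these percentages, so you do not need to recompute them):

word_count_upon <- 3      # occurrences of "upon" in the essay (example value)
total_words     <- 1000   # total number of words in the essay (example value)
feature_value   <- word_count_upon / total_words * 100
feature_value             # 0.3, i.e. 0.3%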
Organize your report using the following template:

Section 1: Data preparation
You will need to separate the original data set into training and testing data for the classification experiments. Describe which examples are in your training data and which are in your test data.
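A minimal R sketch of one possible split follows. The column name author and the label used for the disputed essays ("dispt" below) are assumptions, so inspect the file first; the 25% hold-out ratio is likewise only an illustrative choice.

# Sketch of a possible train/test split for fedPapers85.csv.
papers <- read.csv("fedPapers85.csv", stringsAsFactors = TRUE)
table(papers$author)   # check the actual author labels before going further

# Labeled essays by the two candidate authors form the modeling data.
labeled <- subset(papers, author %in% c("Hamilton", "Madison"))
labeled$author <- droplevels(labeled$author)

# Disputed essays are held out entirely; "dispt" is an assumed label.
disputed <- subset(papers, author == "dispt")

# Reserve about 25% of the labeled essays to estimate accuracy later.
set.seed(1)
test_idx  <- sample(nrow(labeled), size = round(0.25 * nrow(labeled)))
test_set  <- labeled[test_idx, ]
train_set <- labeled[-test_idx, ]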
Section 2: Build and tune decision tree models
First build a DT model using the default settings, and then tune the parameters to see if a better model can be generated. Compare these models using appropriate evaluation measures. Describe and compare the patterns learned by these models.
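A hedged sketch using the rpart package is given below; the package choice and the control values are illustrative assumptions (any decision tree package would follow the same pattern), and filename is assumed to be the essay identifier column, which should not be used as a predictor.

# Default tree vs. a tuned tree on the training essays.
library(rpart)
library(rpart.plot)

dt_default <- rpart(author ~ . - filename, data = train_set, method = "class")

dt_tuned <- rpart(
  author ~ . - filename, data = train_set, method = "class",
  control = rpart.control(minsplit = 5, cp = 0.001, maxdepth = 5)  # illustrative values
)

# Compare which function words each tree splits on.
rpart.plot(dt_default)
rpart.plot(dt_tuned)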
Section 3: Prediction
After building the classification model, apply it to the disputed papers to find out the authorship, and report the prediction accuracy of your models.
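A sketch of this step, reusing the object names assumed in the earlier sketches: because the true authors of the disputed essays are unknown, accuracy is estimated on the held-out labeled essays, and the fitted model is then applied to the disputed ones.

# Accuracy on the held-out labeled essays.
test_pred <- predict(dt_tuned, newdata = test_set, type = "class")
table(predicted = test_pred, actual = test_set$author)   # confusion matrix
mean(test_pred == test_set$author)                       # accuracy

# Predicted authorship of the disputed essays ("filename" is an assumed column).
disputed_pred <- predict(dt_tuned, newdata = disputed, type = "class")
data.frame(essay = disputed$filename, predicted_author = disputed_pred)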