```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE, comment = NA, message = FALSE, warning = FALSE)
```
PLEASE SUBMIT BOTH YOUR .RMD FILE AND THE KNITTED PDF FILE TO BLACKBOARD.
INSTRUCTIONS
- One line of code per question (Parts 1 and 2).
- R output is enough for an answer; you do not need to type the answer to each question as well.
- Do not enter numbers manually.
- Example: What percent of people like the color yellow?
  - Good: mean(favColor == 'Yellow') (this will remain correct if the data changes)
  - Bad: 6/15, after looking at the data and determining that 6 of the 15 had yellow as their favorite color
  - Bad: sum(favColor == 'Yellow')/15 (this will be incorrect if the data changes)
- No unnecessary or irrelevant output in your document. Keep it organized, relevant, and well formatted.
PART 1
stateData <- data.frame(state.x77, Region=state.region)
1. What is the dimension of this data set?
dim(stateData)
2. What variables does it contain?
names(stateData)
3. Rename the variables Life.Exp and HS.Grad to LifeExp and HSGrad.
names(stateData)[names(stateData) == 'Life.Exp'] <- 'LifeExp'
names(stateData)[names(stateData) == 'HS.Grad'] <- 'HSGrad'
4. What is the mean population size?
mean(stateData$Population)
5. What is the area of the United States?
sum(stateData$Area)
6. How many states are in the ‘West’ region?
sum(stateData$Region=='West')
7. Use the table() function to see how many states are in each region.
table(stateData$Region)
8. What percent of states are in the ‘Northeast’ region?
mean(stateData$Region == 'Northeast')
9. What is the total area of the ‘North Central’ region?
sum(stateData$Area[stateData$Region == 'North Central'])
10. Using tapply(), determine the total area of each region.
tapply(stateData$Area, stateData$Region, sum)
11. Which states have the lowest illiteracy rate?
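One possible approach, using the data frame's row names; min() keeps this correct under ties or if the data changes:
rownames(stateData)[stateData$Illiteracy == min(stateData$Illiteracy)]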
12. Which states in the South have above average income?
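A sketch, reading "above average" as above the mean income across all 50 states:
rownames(stateData)[stateData$Region == 'South' & stateData$Income > mean(stateData$Income)]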
13. Which states have an area of over 100,000 square miles, life expectancies greater than 70 years, and more than 50% high-school graduates?
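One way, using the variables renamed in question 3 (HSGrad is already measured in percent):
rownames(stateData)[stateData$Area > 100000 & stateData$LifeExp > 70 & stateData$HSGrad > 50]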
14. Which 3 states have life expectancies over 73 years or murder rates below 2 per 100,000?
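A sketch combining the two conditions with | (Murder is already a rate per 100,000):
rownames(stateData)[stateData$LifeExp > 73 | stateData$Murder < 2]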
PART 2
- Read in the Largest Companies by Revenue Wikipedia page using the htmltab package/function.
- Data can be found here - https://en.wikipedia.org/wiki/List_of_largest_companies_by_revenue
- Data contains information on the 50 largest companies by revenue.
- Convert the data into the format given below.
- Pay attention to variable types.
# Install the htmltab package once from the console with install.packages('htmltab');
# do not leave install.packages() in the RMD file.
# Data Cleaning in this block
library(htmltab)
myData = htmltab('https://en.wikipedia.org/wiki/List_of_largest_companies_by_revenue')
myData$`Revenue(USD millions)` = as.numeric(gsub('\\$|,', '', myData$`Revenue(USD millions)`))
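# The Employees column is likely also read in as text with commas (the column
# name is an assumption about the Wikipedia table; check names(myData) first).
myData$Employees = as.numeric(gsub(',', '', myData$Employees))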
# Leave the line below to display the structure of the data after cleaning.
str(myData)
Additional Questions:
1. What is the average revenue by industry?
tapply(myData$`Revenue(USD millions)`, myData$Industry, mean)
2. What proportion of the companies listed are in the Oil and Gas industry?
mean(myData$Industry == 'Oil and gas')
3. How many employees are employed by the 10 largest (by revenue) companies? Note that the data is already sorted high to low by revenue.
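Assuming the Employees column was converted to numeric in the cleaning chunk above, and relying on the rows already being sorted by revenue:
sum(head(myData$Employees, 10))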
4. Among these companies, what percent of total revenue does the financial industry capture?
sum(myData$`Revenue(USD millions)`[myData$Industry == 'Financials']) / sum(myData$`Revenue(USD millions)`)
5. What percent of oil and gas companies are based in the United States?
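A sketch; the Headquarters column name and its country values are assumptions about the Wikipedia table, and grepl() guards against extra annotation text in the cells:
mean(grepl('United States', myData$Headquarters[myData$Industry == 'Oil and gas']))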
PART 3
The data for Part 3 represents the Miami Dolphins schedule page from ESPN, located here - https://www.espn.com/nfl/team/schedule/_/name/mia. It looks a bit hectic when you read it in, but if you look at the page online you should see what is going on (preseason games are at the top; the regular season starts about midway down). You will extract and clean the regular season table.
- DON’T BE AFRAID OF TRIAL AND ERROR. You can always re-read in the dataset if you accidentally overwrite something.
- vs/@ in the Opponent variable corresponds to Home/Away (see the cleaning sketch in the chunk below).
I'm giving you a CSV file to read in, but if you are curious about pulling it directly from ESPN, here is the rvest (pronounced like "harvest") code I used:
```{r eval=FALSE, echo=TRUE}
# This code is just for reference; the data read-in is in the next chunk.
library(rvest)
url <- 'https://www.espn.com/nfl/team/schedule/_/name/mia'
page <- read_html(url)
data <- data.frame(html_table(page, fill = TRUE))
```
data = read.csv('https://douglas2.s3.amazonaws.com/data/dolphins.csv', stringsAsFactors=F)
# data cleaning code here
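# One possible cleaning sketch, not the only way. It assumes the raw ESPN table
# stacks preseason and regular season, that the regular-season block starts at
# the second header row whose first column is 'WK', and that the opponent column
# is named 'OPPONENT' after re-headering; check head(data) and adjust as needed.
start <- which(data[[1]] == 'WK')[2]          # second 'WK' header row = regular season
sched <- data[(start + 1):nrow(data), ]       # keep only the rows below that header
names(sched) <- as.character(data[start, ])   # promote the header row to column names
sched$Home <- ifelse(grepl('^vs', sched$OPPONENT), 'Home', 'Away')  # vs = home, @ = away
sched$OPPONENT <- sub('^(vs|@)', '', sched$OPPONENT)                # strip the prefix
data <- sched                                 # final, clean data displayed below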
# Leave these two lines below to display the final, clean data
str(data)
data