
Section 1: smoking
 

A study of patients with insulin-dependent diabetes was conducted to investigate the effects of cigarette smoking on renal and retinal complications. Before examining the results of the study, a researcher expects that the proportions of four different subgroups are as follows:
\(\begin{array}{lc} \mbox{Subgroup} & \mbox{Proportion} \\ \hline \mbox{Nonsmokers} & 0.50 \\ \mbox{Current Smokers} & 0.20 \\ \mbox{Tobacco Chewers} & 0.10 \\ \mbox{Ex-smokers} & 0.20 \\ \hline \end{array}\)
Of 100 randomly selected patients, there are 44 nonsmokers, 24 current smokers, 13 tobacco chewers and 19 ex-smokers. Should the researcher revise his estimates? Use a 0.01 level of significance.
> y_i = c(44, 24, 13, 19)

> p_i = c(0.5, 0.2, 0.1, 0.2)

> (n = sum(y_i))

[1] 100

> (e_i = n * p_i)

[1] 50 20 10 20

> sum((y_i - e_i)^2/e_i)

[1] 2.47

> qchisq(0.005, 1:6, lower.tail = FALSE)

[1] 7.879439 10.596635 12.838156 14.860259 16.749602 18.547584

> qchisq(0.01, 1:6, lower.tail = FALSE)

[1] 6.634897 9.210340 11.344867 13.276704 15.086272 16.811894

> qchisq(0.025, 1:6, lower.tail = FALSE)

[1] 5.023886 7.377759 9.348404 11.143287 12.832502 14.449375

> qchisq(0.05, 1:6, lower.tail = FALSE)

[1] 3.841459 5.991465 7.814728 9.487729 11.070498 12.591587
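A quick cross-check (not part of the original output): the goodness-of-fit calculation above can be reproduced with R's built-in test, assuming the objects y_i and p_i defined earlier.

# sketch only: chi-squared goodness-of-fit test against the hypothesised proportions
chisq.test(y_i, p = p_i)
# the observed statistic (2.47 on 4 - 1 = 3 df) is well below the 1% critical value
# qchisq(0.01, 3, lower.tail = FALSE) = 11.34 reported in the output above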
 
Question 1 (1 pt)
Which of the tests we have covered this semester is most appropriate in this scenario? Why?

Question 2 (2 pts)
Write down the appropriate null and alternative hypotheses.

Question 3 (2 pts)
What are the assumptions required for this test? Are they satisfied here?

Question 4 (1 pt)
What is the approximate distribution of the test statistic under the null hypothesis?

Question 5 (1 pt)
Write down an expression for the p-value.

Question 6 (2 pts)
What is your decision for the test and why?
 
Section 2: TV violence
 
A study of the amount of violence viewed on television as it relates to the age of the viewer yields the results shown in the accompanying table for 81 people.
\(\begin{array}{cccc} & & \mbox{Age} & \\ \mbox{Viewing} & \mbox{16-34} & \mbox{35-54} & \mbox{55 and over} \\ \hline \mbox{Low violence} & 8 & 12 & 21 \\ \mbox{High violence} & 18 & 15 & 7 \end{array}\)
> x = matrix(c(8, 18, 12, 15, 21, 7), ncol = 3)
> colnames(x) = c("16-34", "35-54", "54+")
> rownames(x) = c("Low violence", "High violence")
> x

16-34 35-54 54+
Low violence 8 12 21
High violence 18 15 7

> (n = sum(x))

[1] 81

> (xr = apply(x, 1, sum))

Low violence High violence
41 40

> (xc = apply(x, 2, sum))

16-34 35-54 54+
26 27 28

> (ex = xr %*% t(xc) / n)

16-34 35-54 54+
[1,] 13.16049 13.66667 14.17284
[2,] 12.83951 13.33333 13.82716

> sum((x – ex)^2 / ex)

[1] 11.16884

> qchisq(0.05, 1:6, lower.tail = FALSE)

[1] 3.841459 5.991465 7.814728 9.487729 11.070498 12.591587

> qt(0.05, 1:6)

[1] -6.313752 -2.919986 -2.353363 -2.131847 -2.015048 -1.943180

> qt(0.025, 1:6)

[1] -12.706205 -4.302653 -3.182446 -2.776445 -2.570582 -2.446912
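A quick cross-check (not part of the original output): the manual calculation above (test statistic 11.16884) can be reproduced with R's built-in test of independence, assuming the matrix x defined earlier; no continuity correction is applied for tables larger than 2 x 2.

# sketch only: chi-squared test of independence for the 2 x 3 table
chisq.test(x)
# the observed statistic (11.17 on (2 - 1)(3 - 1) = 2 df) exceeds the 5% critical
# value qchisq(0.05, 2, lower.tail = FALSE) = 5.99 reported in the output above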
 
Question 7 (2 pts)
Which test is most appropriate in this scenario? Why?

Question 8 (1 pt)
Write down the appropriate null and alternative hypotheses.

Question 9 (2 pts)
What are the assumptions required for this test? Are they satisfied here?

Question 10 (1 pt)
What is the approximate distribution of the test statistic under the null hypothesis?

Question 11 (1 pt)
Write down an expression for the p-value.

Question 12 (2 pts)
What is your decision for the test and why?
 
Section 3: Ozone
 
Data was recorded on WS (wind speeds), Temp (temperature), H (humidity), In (insolation) and O (ozone) for 30 days. R output is given below to help you answer the following questions.
> pollut = read_csv("https://raw.githubusercontent.com/DATA2002/data/master/pollut.txt")
> glimpse(pollut)

Observations: 30
Variables: 5
$ WS 50, 47, 57, 38, 52, 57, 53, 62, 52, 42, 47, 40, 42, 40, 48,…
$ Temp 77, 80, 75, 72, 71, 74, 78, 82, 82, 82, 82, 80, 81, 85, 82,…
$ H 67, 66, 77, 73, 75, 75, 64, 59, 60, 62, 59, 66, 68, 62, 70,…
$ In 78, 77, 73, 69, 78, 80, 75, 78, 75, 58, 76, 76, 71, 74, 73,…
$ O 15, 20, 13, 21, 12, 12, 12, 11, 12, 20, 11, 17, 20, 23, 17,…

> library(GGally)
> ggpairs(pollut) + theme_bw()

> pollut_lm = lm(O ~ ., pollut)
> summary(pollut_lm)

Call:
lm(formula = O ~ ., data = pollut)

Residuals:
Min 1Q Median 3Q Max
-6.5861 -1.0961 0.3512 1.7570 4.0712

Coefficients:

Estimate Std. Error t value Pr(>|t|)
(Intercept) -15.49370 13.50647 -1.147 0.26219
WS -0.44291 0.08678 -5.104 2.85e-05
Temp 0.56933 0.13977 4.073 0.00041
H 0.09292 0.06535 1.422 0.16743
In 0.02275 0.05067 0.449 0.65728

Residual standard error: 2.92 on 25 degrees of freedom
Multiple R-squared: 0.798, Adjusted R-squared: 0.7657
F-statistic: 24.69 on 4 and 25 DF, p-value: 2.279e-08

> pollut_step = step(pollut_lm, trace = FALSE)
> summary(pollut_step)

Call:
lm(formula = O ~ WS + Temp + H, data = pollut)

Residuals:
Min 1Q Median 3Q Max
-6.5887 -1.1686 0.1978 1.9004 4.1544

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -16.60697 13.07154 -1.270 0.215
WS -0.44620 0.08513 -5.241 1.78e-05
Temp 0.60190 0.11764 5.117 2.47e-05
H 0.09850 0.06316 1.559 0.131

Residual standard error: 2.874 on 26 degrees of freedom
Multiple R-squared: 0.7964, Adjusted R-squared: 0.7729
F-statistic: 33.89 on 3 and 26 DF, p-value: 3.904e-09

> newdata = data.frame(WS = 40, Temp = 80, H = 50)
> predict(pollut_step, newdata, interval = "confidence")

fit lwr upr
1 18.6218 16.70852 20.53509

> predict(pollut_step, newdata, interval = "prediction")

fit lwr upr
1 18.6218 12.41146 24.83215

> library(ggfortify)
> autoplot(pollut_step, which = 1:2) + theme_bw()
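As a sanity check (not part of the original output), the point estimate returned by predict() can be reproduced by hand from the coefficients reported in summary(pollut_step); the object b below is ours.

# sketch only: plug WS = 40, Temp = 80, H = 50 into the fitted stepwise model
b = c(-16.60697, -0.44620, 0.60190, 0.09850)   # (Intercept), WS, Temp, H
sum(b * c(1, 40, 80, 50))                      # about 18.62, matching the fit column above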

 
Question 13 (2 pts)
Does it look like any variables can be dropped from the full model? If you were doing backwards selection using a testing-down strategy, which would you drop first?

Question 14 (2 pts)
Write down the workflow for a formal hypothesis test to see if the coefficient for insolation is significantly different to zero. Make sure you state the null and alternative hypotheses, test statistic (and its distribution), p-value and conclusion.
[You can do this using plain text, no need to use the Canvas equation editor]

Question 15 (1 pt)
Write down the fitted model for the model selected by the backward stepwise procedure.
[You can do this using plain text, no need to use the Canvas equation editor]

Question 16 (2 pts)
State and check the linear regression assumptions for the model selected by the backward stepwise procedure.

Question 17 (1 pt)
What proportion of the variability of ozone is explained by the explanatory variables in the stepwise selected model?

Question 18 (2 pts)
Use the stepwise model to estimate the average ozone for days when WS = 40, Temp = 80 and H = 50. Is a confidence interval or a prediction interval most appropriate here? Write down the estimated interval you think is most appropriate.
 
Section 4: Flicker frequency
 
If a light is flickering, but at a very high frequency, it appears not to be flickering at all. Thus there exists a "critical flicker frequency" where the flickering changes from "detectable" to "not detectable", and this varies from person to person.
The critical flicker frequency and iris colour for 19 randomly sampled people were obtained as part of a study into the relationship between critical flicker frequency and eye colour.
We want to use a one-way ANOVA to test if there is a significant difference in the mean detectable flicker frequency between people with different eye colours.
> library(tidyverse)
> flicker = read_tsv("https://raw.githubusercontent.com/DATA2002/data/master/flicker.txt")
> glimpse(flicker)

Observations: 19
Variables: 2
$ Colour "Brown", "Brown", "Brown", "Brown", "Brown", "Brown", "B…
$ Flicker 26.8, 27.9, 23.7, 25.0, 26.3, 24.8, 25.7, 24.5, 26.4, 24…

> ggplot(flicker, aes(x = Colour, y = Flicker)) +
geom_boxplot() +
theme_classic() +
labs(y = "Critical flicker frequency", x = "Eye colour")

> flicker_anova = aov(Flicker ~ Colour, data = flicker)
> summary(flicker_anova)

Df Sum Sq Mean Sq F value Pr(>F)
Colour 2 23.00 11.499 4.802 0.0232
Residuals 16 38.31 2.394

> library(emmeans)
> flicker_emmeans = emmeans(flicker_anova, ~ Colour)
> contrast(flicker_emmeans, method = "pairwise", adjust = "bonferroni")

contrast estimate SE df t.ratio p.value
Blue – Brown 2.58 0.836 16 3.086 0.0212
Blue – Green 1.25 0.937 16 1.331 0.6060
Brown – Green -1.33 0.882 16 -1.511 0.4512

P value adjustment: bonferroni method for 3 tests

> library(ggfortify)
> autoplot(flicker_anova, which = c(1,2)) + theme_classic()
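A minimal sketch (not part of the original output) of the permutation test described in Question 24: shuffle the Colour labels, recompute the ANOVA F statistic each time, and compare the observed F value (4.802) to the permutation distribution. The object names, the number of permutations B and the seed below are ours.

# sketch only: permutation test for the one-way ANOVA
set.seed(1)
B = 10000
f_obs = summary(flicker_anova)[[1]]$`F value`[1]
f_perm = numeric(B)
for (i in 1:B) {
  shuffled = flicker %>% mutate(Colour = sample(Colour))   # permute the group labels
  f_perm[i] = summary(aov(Flicker ~ Colour, data = shuffled))[[1]]$`F value`[1]
}
mean(f_perm >= f_obs)   # permutation p-value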

 
 
Question 19 (1 pt)
Write out the appropriate null and alternative hypotheses. [Be sure to define all parameters used.]

Question 20 (2 pts)
What are the assumptions required for a one-way ANOVA? Are they satisfied in this case?

Question 21 (2 pts)
Write down the test statistic (with distribution), observed test statistic, p-value and an appropriate conclusion.

Question 22 (1 pt)
If appropriate, discuss the post hoc test results to identify which pairwise differences are significant. If not appropriate, give a brief justification as to why not.

Question 23 (2 pts)
Describe how to perform the Bonferroni correction in the context of post-hoc pairwise testing. Why is it needed?

Question 24 (2 pts)
Describe how you would perform a permutation test in this context.
 
Section 5: Weight gain
 
10 pigs were independently sampled and fed a specific diet. The weights of the 5 pigs on diet X and the 5 pigs on diet Y are:
Diet X: 12, 16, 16, 12, 10 and Diet Y: 30, 12, 24, 32, 24.
We want to test if there is a difference in weight between the two diets using the Wilcoxon rank-sum test.
> wdat = data.frame(
+ diet = rep(c("X","Y"), each = 5),
+ weight = c(12, 16, 16, 12, 10, 30, 12, 24, 32, 24)
+ ) %>%
+ mutate(ranks = rank(weight))
> wdat

diet weight ranks
1 X 12 3.0
2 X 16 5.5
3 X 16 5.5
4 X 12 3.0
5 X 10 1.0
6 Y 30 9.0
7 Y 12 3.0
8 Y 24 7.5
9 Y 32 10.0
10 Y 24 7.5

> wdat %>% group_by(diet) %>% summarise(sum(ranks))

diet sum(ranks)
1 X 18
2 Y 37

> nx = 5
> ny = 5
> N = nx + ny
> ew = nx*(N+1)/2
> varw = (sum(wdat$ranks^2) - N*(N+1)^2/4)*nx*ny/(N*(N-1))
> c(ew, varw)

[1] 27.50000 22.08333

> qnorm(c(0.9,0.95,0.975))

[1] 1.281552 1.644854 1.959964

> qt(c(0.9,0.95,0.975), 8)

[1] 1.396815 1.859548 2.306004
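A minimal sketch (not part of the original output) finishing the normal approximation: standardise the observed rank sum for diet X (w = 18 from the table above) using E(W) and Var(W) computed above. The object names below are ours.

# sketch only: standardised Wilcoxon rank-sum statistic and normal-approximation p-value
w_obs = 18
z = (w_obs - ew) / sqrt(varw)          # approximately N(0, 1) under H0, about -2.02
2 * pnorm(abs(z), lower.tail = FALSE)  # two-sided p-value, about 0.04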
 
 
Question 25 (1 pt)
Write out the null and alternative hypotheses.
[Be sure to define all parameters used.]

Question 26 (2 pts)
Calculate the Wilcoxon rank-sum test statistic and the standardised version of the test statistic.

Question 27 (1 pt)
At the level of significance α = 0.05, what is your conclusion?

Question 28 (3 pts)
What is a parametric test that could be used instead of the Wilcoxon rank-sum test? What is one advantage of using a Wilcoxon rank-sum test over a parametric test? What is one advantage of using a parametric test over the Wilcoxon rank-sum test?

Question 29 (3 pts)
Describe how you would calculate a 90% bootstrap confidence interval for the mean difference between the two diets.
 
Section 6: short answer questions
 
Question 30 (2 pts)
Describe the process of k-means clustering.
[100 words or less.]

Question 31 (2 pts)
What is the purpose of principal component analysis? How can we select the number of principal components we need to retain?
[100 words or less.]

Question 32 (2 pts)
Why is an ANOVA post-hoc t-test generally considered preferable to a standard two-sample t-test?
[100 words or less.]
 
Question 33 (4 pts)
You’re the senior manager at a management consulting company. A junior data analyst on your team has been tasked with building a prediction model for a binary outcome. When you ask them how their model performs they respond with:
“It’s awesome, bro! Best model ever. I used all available variables in a logistic regression model. The resubstitution accuracy was pretty much the same as the leave-one-out cross validation accuracy. So I’m done for the day. I’m gonna go play some fussball and grab a kombucha from the fridge, can I get you one bro?”
You remind the junior analyst for the 100th time that you’re not their “bro”. Internally you curse the HR department for hiring Commerce grads from UNSW.
In the text box below, provide some guidance to the junior analyst about their model selection and evaluation choices. Also suggest some alternative methods that they could use and briefly outline their advantages and disadvantages.
[150 words or less.]

Consider a study designed to evaluate the effect of meditation on students’ emotional state. Study participants were randomised to either the treatment group or a control group. The treated group were taught mindfulness techniques and asked to meditate for 20 minutes every day, while a control group made no change to their routine. At the end of the study, participants reported their emotional state.
The study found that out of 50 students in the treated group, 45 reported being calm, while 5 reported being stressed at the end of the study. Out of 200 students in the control group, 40 reported being stressed, while 160 reported being calm.
The results are presented in the table below:

            Calm  Stressed  Total
Meditation    45         5     50
Control      160        40    200
Some R output that may be helpful:
> qnorm(c(0.9,0.95,0.975)) %>% round(3)
[1] 1.282 1.645 1.960
> qchisq(c(0.9, 0.95, 0.975), 1) %>% round(3)
[1] 2.706 3.841 5.024
> qt(c(0.9, 0.95, 0.975), 3) %>% round(3)
[1] 1.638 2.353 3.182
Where calculations are required below, you can show your working using plain text equations, e.g. e^(5*6 + log(5))/9, or if you’re using R as a calculator, you can copy and paste your R code (you don’t need to use R though). You do not need to typeset the working using Canvas’ equation editor.
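A minimal sketch (not part of the original output) of the calculations behind Questions 3-5, using the counts given in the scenario above; the object names below are ours, and the standard error of 0.50 is the one given in Question 5.

# sketch only: relative risk, odds ratio and a 90% CI for the odds ratio
calm = c(45, 160)                        # meditation group, control group
stressed = c(5, 40)
n = calm + stressed                      # 50 and 200
(calm[1] / n[1]) / (calm[2] / n[2])      # relative risk of being calm, 1.125
(or = (calm[1] / stressed[1]) / (calm[2] / stressed[2]))   # odds ratio, 2.25
se_log_or = 0.50                         # standard error of the log odds ratio (given)
exp(log(or) + c(-1, 1) * qnorm(0.95) * se_log_or)          # 90% CI for the odds ratio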
Question 1 (1 pt)
Is this a retrospective or a prospective study? Explain why.

Question 2 (1 pt)
Is it appropriate to use a relative risk to quantify the relationship between the treatment (meditation or control) and outcome (calm or stressed) in this example? Why or why not?

Question 3 (2 pts)
Assuming that it is appropriate, calculate the relative risk of being calm in the meditation group relative to the control group and provide an interpretation.

Question 4 (2 pts)
Calculate the odds ratio and provide an interpretation.

Question 5 (2 pts)
Given that the standard error for the log odds-ratio is 0.50 (to 2 decimal places), calculate a 90% confidence interval for the odds-ratio.

Question 6 (1 pt)
Is there evidence to suggest that meditation helps students achieve a calm mental state at the 10% level of significance? Explain your answer with reference to the confidence interval found above.

Question 7 (1 pt)
Under the assumption of independence between treatment and outcome, calculate the number of stressed people we would expect to see in the meditation group.

 
Section 3: lecture views and final grades
There are 5 questions in this section
 
 
 
Edwards and Clinton (2019) conducted a study to explore the impact of lecture recording availability and usage on student attendance and attainment. Data was collected from a matched cohort before (n1 = 161) and after (n2 = 160) the introduction of lecture recordings for a compulsory second year undergraduate quantitative research methods course within a 3 year BSc degree in the UK. We will focus on the second cohort, after the introduction of lecture recordings.
A student was defined as having "viewed" a lecture recording if they watched more than 5% of it by the final exam. Lecture attendance was recorded from the fourth to the eleventh teaching weeks, about 30 minutes into each lecture, so that latecomers could be included. The reason for not taking attendance in the first few weeks was to not influence normal student behaviour. The outcome variable used to measure students' learning is their final grade. A two-way ANOVA has been performed with R output provided below.
> marks = read_csv("data/marks.csv") %>%
+   mutate(
+     Views = factor(Views, levels = c("No views","1-10 views","More than 10 views")),
+     Attendance = factor(Attendance, levels = c("Never","Up to 50%","More than 50%"))
+   )
> glimpse(marks)
Observations: 160
Variables: 4
$ year        2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016…
$ final_grade 76.4, 73.0, 63.9, 72.2, 50.5, 97.2, 65.8, 66.2, 95.3…
$ Views       More than 10 views, 1-10 views, No views, No views, …
$ Attendance  More than 50%, Up to 50%, Never, More than 50%, Neve…
> marks %>% ggplot() + aes(x = Views,  y = final_grade) +
+   geom_boxplot(coef = 10) + theme_classic() +
+   facet_wrap(~ Attendance, labeller = label_both) +
+   labs(x = "Lecture recording views", y = "Final grade") +
+   theme(axis.text.x = element_text(angle = 20, hjust = 1))

 
> marks_summary = marks %>%
+   group_by(Views, Attendance) %>%
+   summarise(mean = mean(final_grade), sd = sd(final_grade), n = n())
> marks_summary
# A tibble: 9 x 5
# Groups:   Views [3]
  Views              Attendance     mean    sd     n
                       
1 No views           Never          58.9  7.98    28
2 No views           Up to 50%      68.4 10.6     23
3 No views           More than 50%  69.9  5.87    15
4 1-10 views         Never          58.0  5.48    13
5 1-10 views         Up to 50%      71.7  9.84    20
6 1-10 views         More than 50%  79.8  8.96    13
7 More than 10 views Never          69.6  7.82     7
8 More than 10 views Up to 50%      68.0  6.89    24
9 More than 10 views More than 50%  76.5  5.59    17
> final_grade_aov = aov(final_grade ~ Views * Attendance, data = marks)
> summary(final_grade_aov)
                  Df Sum Sq Mean Sq F value   Pr(>F)
Views              2   1397   698.5   10.77 4.23e-05
Attendance         2   4457  2228.5   34.37 5.00e-13
Views:Attendance   4   1131   282.7    4.36   0.0023
Residuals        151   9791    64.8 
> p1 = marks_summary %>% ggplot() +
+   aes(x = Attendance, y = mean, group = Views, linetype = Views) +
+   geom_line() + theme_classic()
> p2 = marks_summary %>% ggplot() +
+   aes(x = Views, y = mean, group = Attendance, linetype = Attendance) +
+   geom_line() + theme_classic()
> gridExtra::grid.arrange(p1, p2, ncol = 1)

> final_grade_resids = tibble::tibble(
+   fitted_values = final_grade_aov$fitted.values,
+   residuals = final_grade_aov$residuals
+ )
> p3 = ggplot(final_grade_resids, aes(x = fitted_values, y = residuals)) +
+   geom_point(alpha = 0.5) + geom_hline(yintercept = 0) +
+   theme_classic() + labs(title = "Residual plot")
> p4 = ggplot(final_grade_resids, aes(sample = residuals)) +
+   geom_qq() + geom_qq_line() +
+   theme_classic() + labs(title = "Normal QQ plot")
> gridExtra::grid.arrange(p3, p4, ncol = 2)
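A minimal sketch (not part of the original output) of one way to check formally whether the interaction term is needed: fit the additive model and compare it to the full model with an F test. This comparison reproduces the Views:Attendance line of the ANOVA table above (F = 4.36 on 4 and 151 df, p = 0.0023); the object name below is ours.

# sketch only: compare the additive and interaction two-way ANOVA models
final_grade_additive = aov(final_grade ~ Views + Attendance, data = marks)
anova(final_grade_additive, final_grade_aov)   # F test for the interaction term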

 
 
Question 8 (2 pts)
Can you identify any obvious trends in the data? Justify your answer with reference to the box plots and/or summary statistics.

Question 9 (2 pts)
With reference to the ANOVA table provided, can the interaction effect be dropped from the model? Why or why not? [No need to write out a full hypothesis test.]

Question 10 (2 pts)
Comment on the interaction plot. Do your observations agree with the results from the ANOVA table?

Question 11 (2 pts)
Are the ANOVA assumptions satisfied in this case? List the assumptions and provide comments with reference to the output above.

Question 12 (2 pts)
If we extended this model to a more general linear model, write down two additional predictor variables we could obtain or request be measured on the students that might help significantly reduce the residual variance.
 
Section 4: exercise and blood pressure
There are 7 questions in this section
 
 
A study was performed to investigate the relationship between an exercise program and systolic blood pressure in patients with pre-hypertension. A total of 15 patients with pre-hypertension enrolled in the study, and their systolic blood pressures were measured. Each patient then participated in an exercise training program 3 times per week for 6 weeks. After 6 weeks, systolic blood pressures were again measured.
Make sure you carefully read through the R output below before attempting the following questions.
> library(tidyverse)
> sbp = data_frame(
+ patient = 1:15,
+ before = c(125,132,138,120,135,127,136,139,131,132,135,136,128,127,130
+ ),
+ after = c(118,134,130,124,105,130,130,132,123,128,126,140,135,126,132)
+ )
> sbp = sbp %>% mutate(
+ diff = before - after,
+ rank = rank(abs(diff))
+ )
> sbp
# A tibble: 15 x 5
patient before after diff rank

1 1 125 118 7 10
2 2 132 134 -2 2.5
3 3 138 130 8 12.5
4 4 120 124 -4 6
5 5 135 105 30 15
6 6 127 130 -3 4
7 7 136 130 6 8
8 8 139 132 7 10
9 9 131 123 8 12.5
10 10 132 128 4 6
11 11 135 126 9 14
12 12 136 140 -4 6
13 13 128 135 -7 10
14 14 127 126 1 1
15 15 130 132 -2 2.5
> t.test(sbp$before, sbp$after, paired = TRUE, conf.level = 0.9)
Paired t-test

data: sbp$before and sbp$after
t = 1.6641, df = 14, p-value = 0.1183
alternative hypothesis: true difference in means is not equal to 0
90 percent confidence interval:
-0.225767 7.959100
sample estimates:
mean of the differences
3.866667
> wilcox.test(sbp$before, sbp$after, paired = TRUE, correct = FALSE)$p.value
[1] 0.09885703
> B = 10000
> set.seed(2018)
> boot_means = vector("numeric", length = B)
> for(i in 1:B){
+ boot_means[i] = mean(sample(sbp$diff, replace = TRUE))
+ }
> quantile(boot_means, c(0.025, 0.05, 0.95, 0.975))
  2.5% 5% 95% 97.5%
-0.1333333 0.4000000 7.8000000 8.7333333
> p1 = ggplot(sbp, aes(sample = diff)) +
+ geom_qq() + geom_qq_line() + theme_classic()
> p2 = ggplot(data.frame(boot_means), aes(x = boot_means)) +
+ geom_histogram() + theme_classic()
> gridExtra::grid.arrange(p1,p2, ncol = 2)
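A minimal sketch (not part of the original output) of the permutation test asked about in Question 19: under the null hypothesis the before/after labels are exchangeable within each patient, so the sign of each difference is flipped at random and the test statistic recomputed. The object names, the number of permutations B and the seed below are ours.

# sketch only: sign-flipping permutation test for the paired differences
set.seed(2018)
B = 10000
t_obs = mean(sbp$diff) / (sd(sbp$diff) / sqrt(nrow(sbp)))   # matches t = 1.6641 above
t_perm = numeric(B)
for (i in 1:B) {
  flipped = sbp$diff * sample(c(-1, 1), nrow(sbp), replace = TRUE)
  t_perm[i] = mean(flipped) / (sd(flipped) / sqrt(length(flipped)))
}
mean(abs(t_perm) >= abs(t_obs))   # two-sided permutation p-value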

 
Question 13 (1 pt)
What is the most appropriate t-test to determine if there is a significant difference in systolic blood pressure before and after the exercise program? Why?

Question 14 (2 pts)
For the test you identified above:
· State the null and alternative hypotheses [be sure to define your parameter(s)].
· State the test statistic and its distribution under the null hypothesis.
· Write down the observed test statistic.
· Write down an expression for the p-value.
· Provide a conclusion using a 10% level of significance.
You can write your answers using plain text (e.g. H_0: theta, mu, beta_1, xbar, ybar, sqrt(n), s_x) in the same way you did for the first assignment; you do not need to use the Canvas equation editor.

Question 15 (1 pt)
Are the assumptions of the t-test satisfied? Justify your response with reference to the output above.

Question 16 (1 pt)
Calculate the Wilcoxon signed-rank test statistic.

Question 17 (1 pt)
Does the Wilcoxon signed-rank test give the same conclusion (at the 10% level of significance) as the t-test? Which do you trust more in this case?

Question 18 (2 pts)
Write down the 90% bootstrap confidence interval. Using the bootstrap confidence interval, do you reject or not reject the null hypothesis that the population mean difference is equal to zero?

Question 19 (2 pts)
Describe how you would perform a permutation test with this data.
[No need to provide R code, just describe the process in words.]
 
Section 5: diabetes prediction
There are 6 questions in this section
 
 
The Pima Indians of Arizona have the highest reported prevalence of diabetes of any population in the world. The National Institute of Diabetes and Digestive and Kidney Diseases collected data on a sample of individuals all of whom were female, at least 21 years old, and of Pima Indian heritage. The objective is to predict whether or not a patient has diabetes based on certain clinical measurements.
The dataset consists of several medical predictor variables and one outcome variable, y, which equals 1 if an individual is diabetic and 0 otherwise. Predictor variables include the number of pregnancies the patient has had (npreg), their BMI, insulin level (serum), age, triceps skin fold thickness (skin), diastolic blood pressure (bp), plasma glucose concentration (glu) and diabetes pedigree function (ped). Any rows with missing values have been removed from the data, leaving 392 individuals.
Read the R code and output below before answering the following questions.
> library(tidyverse)
> pima_raw = read_csv("data/pima.csv")
> pima_clean = pima_raw %>%
+ mutate_at(.vars = vars(bmi,bp,glu,serum,skin),
+ .funs = funs(ifelse(. == 0, NA, .))
+ )
> pima = pima_clean %>% drop_na()
> dim(pima)
[1] 392 9
> glimpse(pima)
Observations: 392
Variables: 9
$ npreg 1, 0, 3, 2, 1, 5, 0, 1, 1, 3, 11, 10, 1, 13, 3, 3, 4, 4, 3…
$ glu 89, 137, 78, 197, 189, 166, 118, 103, 115, 126, 143, 125, …
$ bp 66, 40, 50, 70, 60, 72, 84, 30, 70, 88, 94, 70, 66, 82, 76…
$ skin 23, 35, 32, 45, 23, 19, 47, 38, 30, 41, 33, 26, 15, 19, 36…
$ serum 94, 168, 88, 543, 846, 175, 230, 83, 96, 235, 146, 115, 14…
$ bmi 28.1, 43.1, 31.0, 30.5, 30.1, 25.8, 45.8, 43.3, 34.6, 39.3…
$ ped 0.167, 2.288, 0.248, 0.158, 0.398, 0.587, 0.551, 0.183, 0….
$ age 21, 33, 26, 53, 59, 51, 31, 33, 32, 27, 51, 41, 22, 57, 28…
$ y 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1…
> pima_m1 = glm(y ~ ., data = pima, family = binomial)
> summary(pima_m1)
Call:
glm(formula = y ~ ., family = binomial, data = pima)

Deviance Residuals:
Min 1Q Median 3Q Max
-2.7823 -0.6603 -0.3642 0.6409 2.5612

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.004e+01  1.218e+00  -8.246  < 2e-16
npreg        8.216e-02  5.543e-02   1.482  0.13825
glu          3.827e-02  5.768e-03   6.635 3.24e-11
bp          -1.420e-03  1.183e-02  -0.120  0.90446
skin         1.122e-02  1.708e-02   0.657  0.51128
serum       -8.253e-04  1.306e-03  -0.632  0.52757
bmi          7.054e-02  2.734e-02   2.580  0.00989
ped          1.141e+00  4.274e-01   2.669  0.00760
age          3.395e-02  1.838e-02   1.847  0.06474

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 498.10  on 391  degrees of freedom
Residual deviance: 344.02  on 383  degrees of freedom
AIC: 362.02

Number of Fisher Scoring iterations: 5

> pima_step = step(pima_m1, k = log(392), trace = FALSE)
> summary(pima_step)
Call:
glm(formula = y ~ glu + bmi + ped + age, family = binomial, data = pima)

Deviance Residuals:
Min 1Q Median 3Q Max
-2.8228 -0.6617 -0.3759 0.6702 2.5881

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.092018   1.080251  -9.342  < 2e-16
glu           0.036189   0.004982   7.264 3.76e-13
bmi           0.074449   0.020267   3.673 0.000239
ped           1.087129   0.419408   2.592 0.009541
age           0.053012   0.013439   3.945 8.00e-05

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 498.10  on 391  degrees of freedom
Residual deviance: 347.23  on 387  degrees of freedom
AIC: 357.23

Number of Fisher Scoring iterations: 5

> new_dat = data.frame(age = 40, glu = 122, bmi = 30, ped = 0.5)
> predict(pima_step, new_dat, type = "link")
  1
-0.7794583
> predict(pima_step, new_dat, type = "response")
  1
0.3144366
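A small note (not part of the original output): the two predict() calls above are linked by the logistic function, since the "response" scale prediction is the inverse logit of the "link" scale (log-odds) prediction. The object eta below is ours.

# sketch only: convert the log-odds prediction to a probability by hand
eta = -0.7794583           # linear predictor from predict(type = "link")
exp(eta) / (1 + exp(eta))  # 0.3144, matching predict(type = "response")
plogis(eta)                # same thing, using the built-in inverse logit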
> preds = pima_step %>% predict(type = "response") %>% round()
> truth = pima$y
> table(preds, truth)
  truth
preds 0 1
0 233 51
1 29 79
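A minimal sketch (not part of the original output) of the calculations behind Question 24, reading off the confusion matrix above (rows are predictions, columns are the truth, with 1 = diabetic); the object names below are ours.

# sketch only: accuracy, sensitivity and specificity from the confusion matrix
tp = 79; tn = 233; fp = 29; fn = 51
(tp + tn) / (tp + tn + fp + fn)  # accuracy, about 0.80
tp / (tp + fn)                   # sensitivity, about 0.61
tn / (tn + fp)                   # specificity, about 0.89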
 
Question 20 (2 pts)
A logistic regression for predicting diabetes status from all the explanatory variables is shown above, pima_m1. If we only look at the output from summary(pima_m1), are we justified in dropping both skin and serum from the model at a 10% level of significance? Why or why not?

Question 21 (1 pt)
A stepwise model selection procedure was performed in an attempt to find a more parsimonious model. Write down the estimated regression equation for the selected model.
[You can do this using plain text, no need to use Canvas equation editor]

Question 22 (1 pt)
Using the final stepwise model, what is the estimated probability of having diabetes for a 40 year old woman with glucose of 122, BMI of 30 and diabetes pedigree function of 0.5?

Question 23 (1 pt)
Using the final stepwise model, interpret the coefficient for age.

Question 24 (3 pts)
Using the stepwise model, what is the:
· accuracy
· sensitivity
· specificity
[Report your answer to 2 decimal places.]

Question 25 (2 pts)
What is an alternative to logistic regression that we could apply in this situation to predict diabetes status? Name one advantage that it has over logistic regression.
 
Section 6: short answer questions
There are 4 questions in this section
 
 
Question 26 (2 pts)
What is meant by the concept of the "power" of a test? How does the power change when the sample size increases?
[100 words or less.]

Question 27 (2 pts)
Describe how to perform the Bonferroni correction in the context of multiple testing. What kind of error rate does it seek to control?
[100 words or less.]

Question 28 (2 pts)
What is the difference between supervised learning and unsupervised learning? State two examples of each that we discussed in class.
[100 words or less.]

Question 29 (4 pts)
A junior data scientist on your team excitedly tells you that their new prediction model has an R^2 of 98%. They share their secret to success with you: before importing the data into R, they used Excel to sort each variable from smallest to largest because it gave them a better R^2. When you asked them to describe the evaluation method they used in more detail, they revealed that the model was trained and evaluated using the same dataset.
Explain why 98% is probably not a good estimate of the expected performance on new unseen data. Suggest a better performance evaluation method and give detailed explanations of your reasoning.
[150 words or less.]