Statistical Data Modelling
This assignment consists of two sections, each contributing 50% of the total marks for this assignment. You should attempt both sections. Please submit one PDF containing your solutions – it should be written up using word processing software (e.g. LaTeX, R Markdown, or Word). Handwritten solutions will be accepted where mathematical descriptions are required, but a professional word-processed submission is preferred.
You are expected to work independently – strict disciplinary action will be taken for any plagiarism. Late submissions will also be penalised. The data required for this assignment can be
loaded into R using the load() function. If you have any issues or questions, please email me
Section A – Exercises
In this section, you are required to answer a series of exercises based on the module. Note that the questions are organised in the order we covered the topics, not in order of difficulty. It is therefore advised that you read through the questions first and start working on those you feel most comfortable with. Solutions are expected to be concise, well structured and well presented. Commented R code (e.g. ‘model <- glm(...)’) and the outcomes/plots should form part of your solutions. Do not display too much raw R output (e.g. don’t display the full output of ‘summary(model)’), but edit this down to the essentials. Ensure you include justification for each step of your analyses, provide comments alongside your R code to explain what you are doing, and add appropriate titles and labelled axes to your plots. There are 60 marks in total for this section, with marks for each part question indicated.
Question 1
The data frame dengue contains weekly counts of dengue fever cases in Rio de Janeiro (yi), starting in the 36th week of 2012 and running to the 15th week of 2013. A scatter plot of cases versus time suggests a strong non-linear relationship:
[Figure: scatter plot of weekly dengue cases against time, in weeks.]
Consider the model
$$Y_i \sim \text{NegBin}(\mu_i, \theta), \; Y_i \text{ independent}, \qquad \log(\mu_i) = \beta_0 + \beta_1 x_i$$
where xi is time (in weeks). The goal is to capture the temporal structure of the disease outbreak in 2013. The Negative Binomial distribution with mean μ and dispersion parameter θ has pmf:
$$p(y_i; \mu_i, \theta) = \binom{y_i + \theta - 1}{y_i} \left( \frac{\theta}{\theta + \mu_i} \right)^{\theta} \left( \frac{\mu_i}{\theta + \mu_i} \right)^{y_i}$$
Note the R functions dnbinom and qnbinom call θ the size.
(a) [2 marks] Write down the log-likelihood l(β0, β1, θ; y, x) for this model. Show your working.
(b) [2 marks] Write an R function mylike() which evaluates the negative log-likelihood (i.e. −l(β0, β1, θ; y, x)) for any values of the three parameters.
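For illustration, a minimal sketch of such a function is given below (this is one possible approach, not the required solution). It assumes the data are stored in numeric vectors y (weekly counts) and x (week number), and that the three parameters are passed as a single vector, which is convenient for nlm(); dnbinom()'s size argument is θ and mu is the mean, matching the parameterisation above.

# Sketch of the negative log-likelihood, assuming par = c(beta0, beta1, theta)
mylike <- function(par, y, x) {
  beta0 <- par[1]; beta1 <- par[2]; theta <- par[3]
  mu <- exp(beta0 + beta1 * x)                          # log link
  -sum(dnbinom(y, size = theta, mu = mu, log = TRUE))   # negative log-likelihood
}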
(c) [5 marks] Use the R function nlm() in association with your function mylike() to numerically minimise
the negative log-likelihood. Provide some evidence of how you chose sensible starting values. Report the maximum likelihood estimates of the parameters and superimpose a plot of the associated mean relationship on a scatter plot of y versus x.
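One possible call is sketched below; the starting values here are only rough guesses (for example, β0 near the log of the mean count and β1 near zero), and your own answer should justify its choices.

# Illustrative minimisation; hessian = TRUE stores the Hessian for part (d)
fit <- nlm(mylike, p = c(log(mean(y)), 0, 1), y = y, x = x, hessian = TRUE)
fit$estimate   # maximum likelihood estimates of (beta0, beta1, theta)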
(d) [3 marks] Report the standard errors for β0 and β1, and use those to construct 95% confidence intervals.
(e) [3 marks] Test the hypothesis that β1 = 0 at the 5% significance level (not using the confidence interval)
and compute the associated p-value of the test.
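For parts (d) and (e), one standard approach approximates the observed information by the Hessian of the negative log-likelihood returned by nlm(). A sketch, assuming the object fit from the call above, is:

# Standard errors from the inverse Hessian (observed information)
est <- fit$estimate
se  <- sqrt(diag(solve(fit$hessian)))
ci_beta1 <- est[2] + c(-1, 1) * qnorm(0.975) * se[2]   # 95% CI for beta1
# Approximate Wald test of H0: beta1 = 0
z    <- est[2] / se[2]
pval <- 2 * pnorm(-abs(z))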
(f) [3 marks] Use plug-in prediction to construct and plot 95% prediction intervals. Looking at the estimated
mean and prediction intervals, comment on the suitability of this model to capture this data.
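A plug-in interval simply treats the MLEs as if they were the true parameter values. For example, continuing from the sketch above (est assumed available):

# Plug-in 95% prediction intervals from the fitted Negative Binomial
mu_hat  <- exp(est[1] + est[2] * x)
pi_low  <- qnbinom(0.025, size = est[3], mu = mu_hat)
pi_high <- qnbinom(0.975, size = est[3], mu = mu_hat)
plot(x, y, xlab = "Week", ylab = "Weekly dengue cases",
     main = "Fitted mean and plug-in 95% prediction intervals")
lines(x, mu_hat, col = "red")
lines(x, pi_low, lty = 2); lines(x, pi_high, lty = 2)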
Question 2
In this question, data y1, ..., yn will be simulated from a known model involving the Poisson distribution with a log link. You are then asked to fit both a Gaussian GLM with a log link to yi and a Poisson GLM with a log link to yi, and to comment on the differences in the results.
(a) [2 marks] In R, simulate values of an explanatory variable xi by sampling n = 200 observations from a uniform distribution between 0 and 1 (runif()). Now use rpois() to simulate corresponding response values yi from the following Poisson model
$$y_i \sim \text{Pois}(\lambda_i), \quad i = 1, 2, \ldots, 200, \qquad \log(\lambda_i) = \beta_0 + \beta_1 x_i$$
with β0 = 0.3 and β1 = 5. Note: to make sure that you get the same simulated values you may want to
use set.seed().
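A sketch of this simulation is given below; the seed value is arbitrary.

set.seed(42)                 # arbitrary seed, for reproducibility
n <- 200
x <- runif(n, 0, 1)          # explanatory variable
lambda <- exp(0.3 + 5 * x)   # log(lambda) = beta0 + beta1 * x
y <- rpois(n, lambda)        # simulated Poisson responses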
(b) [3 marks] In R, fit a Gaussian GLM with a log-link to yi versus xi using glm() (careful, yi may contain
zeros so add a small constant to each yi, e.g. 0.1). In addition, fit a Poisson GLM to yi with a log link. Compare parameter estimates and standard errors of the two models, making reference to the actual β0 and β1 values.
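The two fits might look like the sketch below (variable names as in part (a)); the constant 0.1 keeps the log link defined when y = 0.

y_shift   <- y + 0.1                                        # avoid log(0) issues
fit_gauss <- glm(y_shift ~ x, family = gaussian(link = "log"))
fit_pois  <- glm(y ~ x, family = poisson)                   # canonical log link
summary(fit_gauss)$coefficients                             # estimates and SEs
summary(fit_pois)$coefficients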
(c) [6 marks] Produce predictions of the mean trend of both models and superimpose them over the data with associated 95% prediction intervals. Furthermore, produce the residuals vs fitted values plot and residual QQ plot for each model. Comment on both sets of plots and state which is the preferred model.
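A sketch of the mean predictions and standard residual diagnostics is below; prediction intervals can be added using, for example, qpois() for the Poisson fit and a normal approximation for the Gaussian fit.

xg <- data.frame(x = seq(0, 1, length.out = 200))
mu_pois  <- predict(fit_pois,  newdata = xg, type = "response")
mu_gauss <- predict(fit_gauss, newdata = xg, type = "response")
plot(x, y, xlab = "x", ylab = "y", main = "Data with fitted mean trends")
lines(xg$x, mu_pois, col = "red"); lines(xg$x, mu_gauss, col = "blue")

par(mfrow = c(2, 2))
plot(fitted(fit_pois), residuals(fit_pois), main = "Poisson: residuals vs fitted")
qqnorm(residuals(fit_pois), main = "Poisson: residual QQ plot"); qqline(residuals(fit_pois))
plot(fitted(fit_gauss), residuals(fit_gauss), main = "Gaussian: residuals vs fitted")
qqnorm(residuals(fit_gauss), main = "Gaussian: residual QQ plot"); qqline(residuals(fit_gauss))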
Question 3
The dataframe carbonD contains monthly observations of CO2 concentrations from 1959 to 1997, measured at an observatory in Hawaii. The variables are: co2 (CO2 concentrations in parts per million), month (month of measurement), year (year of measurement) and timeStep (unique time variable). A scatter plot of co2 against timeStep suggests CO2 is increasing over time with a seasonal cycle.
(a) [3 marks] Write down (mathematically) a plausible GAM to describe this data set.
(b) [4 marks] Fit the suggested GAM ensuring the fit is good.
(c) [2 marks] Use the function predict() with argument type="terms" and plot estimates of any smooth functions in your model.
(d) [3 marks] Use the model to predict CO2 for the year 1998 and produce a plot of this along with 95% prediction intervals.
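Purely as an illustration of parts (b)-(d), one plausible structure (other choices are possible) uses a smooth long-term trend in timeStep and a cyclic seasonal smooth in month, fitted here with the mgcv package; the k values are a starting point and should be checked, and the timeStep values for 1998 assume the variable increments by one per month.

library(mgcv)
# Smooth trend plus cyclic (periodic) seasonal effect
fit_co2 <- gam(co2 ~ s(timeStep, k = 20) + s(month, bs = "cc", k = 12),
               data = carbonD)
gam.check(fit_co2)                              # check basis dimensions and residuals

# (c) Estimated smooth terms, one column per smooth
sm <- predict(fit_co2, type = "terms")

# (d) Predictions for the 12 months of 1998 with approximate 95% prediction intervals
newd  <- data.frame(month = 1:12, timeStep = max(carbonD$timeStep) + 1:12)
pr    <- predict(fit_co2, newdata = newd, se.fit = TRUE)
sig2  <- fit_co2$sig2                           # residual variance estimate
upper <- pr$fit + 1.96 * sqrt(pr$se.fit^2 + sig2)
lower <- pr$fit - 1.96 * sqrt(pr$se.fit^2 + sig2)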
Question 4
The dataframe penicillin contains data on an experiment that was conducted to compare 4 different treatments for the production of penicillin. The material used for producing the penicillin is quite variable and can only be made in 5 blends, so the data consist of 5 blends, with all 4 treatments applied to each blend. The variables are yield (amount of penicillin produced), treat (treatment used; A, B, C, and D) and blend (blend used; 1-5).
(a) [2 marks] State reasons why one might treat blend as a random effect and treat as a fixed effect.
(b) [2 marks] Fit a Normal GLMM with yield as the response, blend as a random effect and treat as a
fixed effect.
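A sketch of one way to fit this, using the lme4 package (nlme would work equally well), is:

library(lme4)
# Normal linear mixed model: fixed treatment effect, random blend intercept
fit_pen <- lmer(yield ~ treat + (1 | blend), data = penicillin)
summary(fit_pen)     # fixed effects, variance components and t-values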
(c) [3 marks] Comment on the significance of the fixed effects based on t-tests. State any assumptions you
are making.
(d) [6 marks] By fitting appropriate reduced models, test for the significance of both the fixed and the
random effects using likelihood ratio tests.
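A sketch of the comparisons is below; the models are refitted by maximum likelihood (REML = FALSE) so that the likelihoods are comparable.

fit_full  <- lmer(yield ~ treat + (1 | blend), data = penicillin, REML = FALSE)
fit_nofix <- lmer(yield ~ 1 + (1 | blend), data = penicillin, REML = FALSE)
fit_noran <- lm(yield ~ treat, data = penicillin)

anova(fit_full, fit_nofix)                               # LRT for the treat effect
lrt <- 2 * (logLik(fit_full) - logLik(fit_noran))        # LRT for the blend effect
pchisq(as.numeric(lrt), df = 1, lower.tail = FALSE)      # boundary issue: see part (e)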
(e) [6 marks] Comment on the validity of using these tests in mixed effects models, suggest an alternative
way of implementing these tests, and use it to compare with results in (d).
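One alternative is a parametric bootstrap. A sketch using the pbkrtest and RLRsim packages (assumed installed; other implementations exist) is:

library(pbkrtest)
PBmodcomp(fit_full, fit_nofix)        # bootstrap p-value for the fixed effects

library(RLRsim)
exactRLRT(lmer(yield ~ treat + (1 | blend), data = penicillin))  # random effect test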
Section B - Project
In this section, you are required to conduct an independent analysis using Generalized Additive Models (GAMs). You should write a report detailing your analyses, results and present a conclusion. Your report is expected to be concise, well structured and well presented. It should comprise at most two sides of text.
Figures, tables or R code are not included in this limit. You must use A4 paper and a font size of at least 11 points, and lines must be single spaced. No credit will be awarded for additional pages of text. Ensure all figures have appropriate titles, axes and captions. Commented R code (e.g. ‘model <- glm(...)’) and the outcomes/plots should not form part of your report but should be included as appendices.
There are 60 marks in total for this section, and a brief outline of the marking criteria for the report is given below:
• [10 marks] Understanding and exploration of both the problem and the data.
• [10 marks] Thoroughness and rigour, e.g. clear mathematical description of methods.
• [10 marks] Clear exposition of the steps you took in model fitting and exposition of a final model.
• [15 marks] Clear presentation and interpretation of results.
• [5 marks] Critical review of the analysis.
• [10 marks] Clarity and conciseness in writing and tidy presentation of R code and associated plots.
You are required to analyse the daily trends in Nitrogen Dioxide (NO2) from an air pollution monitor in Harlington, London. The monitor is situated just North of Heathrow Airport and records the daily average NO2 for twelve years between 1st January 2010 and 31st December 2021 along with some meteorological variables. A line plot of the daily average NO2 over that period can be seen below.
[Figure: line plot of daily average NO2 (in μg/m3) at Harlington, 2010-2021.]
The dataframe no2_data contains this information in the following variables:
1. site: Site name
2. date: Date of measurement (in yyyy-mm-dd format)
3. doy: Day of the year (1 - 1st January, 2 - 2nd January, 3 - 3rd January, etc.)
4. month: Month of the year (1 - January, 2 - February, 3 - March etc.)
5. year: Year (2010, 2011, 2012, etc.)
6. no2: Daily average NO2 (in micrograms per cubic metre, μg/m3)
7. air_temp: Daily average temperature (in degrees Celsius, °C)
8. ws: Daily average wind speed (in metres per second, m/s)
9. wd: Daily average wind direction (in degrees; 0° - wind blowing from the North, 90° - wind blowing from the East, 180° - wind blowing from the South, 270° - wind blowing from the West)
The aim is to use these data to build one model (using the GAM framework) and use it to answer the following questions:
• Do any of the meteorological variables (7-9 above) significantly affect the daily average NO2? If so, in what way?
• Is there a within year seasonal trend in NO2 concentrations? If so, when are concentrations typically at their highest/lowest?
• London has implemented a number of measures to reduce air pollution. Has there been a noticeable downward trend in NO2 concentrations between 2010 and 2021?
• The British government implemented a series of lockdowns in 2020 and 2021 as a response to the COVID-19 pandemic. Was there a sudden change in NO2 concentrations in 2020 and 2021 as a result of the pandemic? If so, update and use your model to estimate the decrease in NO2 concentrations.
• The World Health Organization (WHO) publishes guidelines for maximum daily average (25μg/m3) and annual average (10μg/m3) NO2 concentrations in order to protect human health.
– Do the outputs from your model indicate the daily limits were exceeded? If so, how many times? Is the number of days decreasing each year?
– Do the outputs from your model indicate the annual limits were likely exceeded? If so, when and by how much?
When building a model, make sure to perform all relevant model checks. Note that you can use the predict() function with type = "terms" to extract the individual predicted smooth functions in R.
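Purely as an illustration of this workflow (not a suggested final model), a GAM for these data and the extraction of its smooth terms might look like the sketch below, which assumes no2_data is loaded and uses the variable names listed above.

library(mgcv)
# Illustrative structure only: cyclic seasonal smooth in doy, long-term trend
# in year, and smooths of the meteorological variables (wind direction is also cyclic)
no2_gam <- gam(no2 ~ s(doy, bs = "cc") + s(year, k = 10) +
                 s(air_temp) + s(ws) + s(wd, bs = "cc"),
               data = no2_data)
term_fits <- predict(no2_gam, type = "terms")   # one column per smooth term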