
More Regression 1
Billie Anderson, Mark Newman
2020-09-28
Agenda
▪ Question from last class
• Last class we asked for an example of something with a yes/no outcome. Did your co-workers suggest more than one possible predictor?
▪ More Regression
▪ In-Class Examples
▪ Question for next class
Models for Polytomous Responses
Up until this point in the class, the main model under study has been logistic regression, in which the response variable \(Y\) is binary.
We are now going to examine a model in which the response variable \(Y\) is polytomous, meaning \(Y\) takes \(m > 2\) discrete values (multiple categories).
For example, in our previous arthritis data, a patient reported ‘none’, ‘some’, or ‘marked’ improvement in arthritis symptoms after being on an experimental treatment or a placebo.
There are different models based on whether the polytomous response variable is nominal or ordinal.
We are going to examine the case where the response variable is an ordinal polytomous variable. The appropriate model for this type of response is the proportional odds model. This model is appropriate only when the response variable, \(Y\), is an ordinal polytomous variable, and one additional assumption, described below, must hold before a proportional odds model can be applied.
Basic Idea of all the Polytomous Response Models
Remember, just as in logistic regression, you are modeling the predicted probability of response (the probability of being a \(0\) or a \(1\) in the case of logistic regression).
Some notation: let \(\pi_{ij}=\pi_j(\mathbf{x}_i)\) be the probability of response \(j\) for case or group \(i\), given the predictors \(\mathbf{x}_i\).
Since all probabilities must sum to \(1\), \(\sum_{j}\pi_{ij}=1\), only \(m-1\) of these probabilities are independent.
The basic idea is to construct a model for the polytomous response composed of \(m-1\) logit comparisons among the response categories, in a similar way to how factors are treated in the predictor variables.
Proportional Odds Model
For an ordered response variable, \(Y\), with categories \(j = 1, 2, \ldots, m\), the ordinal nature of the response can be modeled by forming logits based on the \(m-1\) cutpoints between adjacent (successive) categories.
The cumulative probabilities are represented as
\[Pr(Y \le j | x) = \pi_1(x) + \pi_2(x) + \cdots + \pi_j(x)\]
then, the cumulative logit for category \(j\) is defined as
\[L_j \equiv logit[Pr(Y \le j | x)] = log\frac{Pr(Y \le j | x)}{Pr(Y > j | x)} = log\frac{Pr(Y \le j | x)}{1-Pr(Y \le j | x)}\] for \(j = 1, 2, \ldots, m-1\).
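To make the definition concrete, here is a small sketch that computes the two cumulative logits directly for a hypothetical three-category response (the probabilities below are made up for illustration):

```r
# Hypothetical category probabilities pi_1, pi_2, pi_3 for a single case
pi <- c(None = 0.5, Some = 0.3, Marked = 0.2)

# Cumulative probabilities P(Y <= j) for j = 1, 2 (P(Y <= 3) is always 1)
cum_p <- cumsum(pi)[1:2]

# Cumulative logits L_j = log[ P(Y <= j) / (1 - P(Y <= j)) ]
# qlogis() is base R's logit function
L <- qlogis(cum_p)
L
```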
Proportional Odds Model (cont.)
In our previous arthritis example, the original response variable was “Improved” with ordered levels “None” < “Some” < “Marked”. In this case, the cumulative logits would have been \[L_1 = log\frac{\pi_1(x)}{\pi_2(x) + \pi_3(x)}\] \[L_2 = log\frac{\pi_1(x) + \pi_2(x)}{\pi_3(x)}\]
The proportional odds model constrains the slopes of the predictors (\(x_i\)) to be the same across all of the cumulative logits; only the intercepts differ. This is the additional assumption mentioned earlier.
Fitting the Proportional Odds Model
data <- vcd::Arthritis
str(data)
## 'data.frame': 84 obs. of 5 variables:
## $ ID : int 57 46 77 17 36 23 75 39 33 55 ...
## $ Treatment: Factor w/ 2 levels "Placebo","Treated": 2 2 2 2 2 2 2 2 2 2 ...
## $ Sex : Factor w/ 2 levels "Female","Male": 2 2 2 2 2 2 2 2 2 2 ...
## $ Age : int 27 29 30 32 46 58 59 59 63 63 ...
## $ Improved : Ord.factor w/ 3 levels "None"<"Some"<..: 2 1 1 3 3 3 1 3 1 1 ...
head(data)
## ID Treatment Sex Age Improved
## 1 57 Treated Male 27 Some
## 2 46 Treated Male 29 None
## 3 77 Treated Male 30 None
## 4 17 Treated Male 32 Marked
## 5 36 Treated Male 46 Marked
## 6 23 Treated Male 58 Marked
Fitting the Proportional Odds Model (cont.)
Before building a proportional odds model, check that the response variable is an ordinal factor.
levels(data$Improved)
## [1] "None" "Some" "Marked"
The model is built with very similar syntax to the logistic regression models, using polr() from the MASS package.
library(MASS)
fit <- polr(Improved ~ Sex + Treatment + Age, data = data)
Fitting the Proportional Odds Model (cont.)
summary(fit)
##
## Re-fitting to get Hessian
## Call:
## polr(formula = Improved ~ Sex + Treatment + Age, data = data)
##
## Coefficients:
## Value Std. Error t value
## SexMale -1.25168 0.54636 -2.291
## TreatmentTreated 1.74529 0.47589 3.667
## Age 0.03816 0.01842 2.072
##
## Intercepts:
## Value Std. Error t value
## None|Some 2.5319 1.0571 2.3952
## Some|Marked 3.4309 1.0912 3.1443
##
## Residual Deviance: 145.4579
## AIC: 155.4579
Fitting the Proportional Odds Model (cont.)
library(car)
Anova(fit)
## Analysis of Deviance Table (Type II tests)
##
## Response: Improved
## LR Chisq Df Pr(>Chisq)
## Sex 5.6880 1 0.0170812 *
## Treatment 14.7095 1 0.0001254 ***
## Age 4.5715 1 0.0325081 *
## —
## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ‘ 1
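Beyond the Wald t values and the Type II tests above, profile-likelihood confidence intervals for the odds ratios are often reported. A sketch of this extra step (not in the original slides; it assumes the MASS and vcd packages are installed):

```r
library(MASS)  # polr() and its confint() method

# Refit the model from the slides
data <- vcd::Arthritis
fit <- polr(Improved ~ Sex + Treatment + Age, data = data)

# Odds ratios with 95% profile-likelihood confidence intervals
exp(cbind(OR = coef(fit), confint(fit)))
```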
Fitting the Proportional Odds Model (cont.)
Specifying the model:
Recall our notation/specification of the model from earlier: \(P(Y \le j)\) is the cumulative probability of \(Y\) being less than or equal to a specific category \(j=1, 2, \ldots, m-1\).
The response variable \(Y\) (\(Improved\)) has categories None, Some, and Marked, which are coded in the data as \(1, 2, 3,\) respectively.
The odds of being less than or equal to a particular category can be defined as \(\frac{P(Y \le j)}{P(Y > j)}\) for \(j=1, 2, …, m-1\).
\(logit(P(Y \le j))=log(\frac{P(Y \le j)}{P(Y>j)})\)
Fitting the Proportional Odds Model (cont.)
Since \(Y\) has three ordered categories, two logits will be fit to these data.
First logit: \[logit[P(Y \le 1)]=logit[P(Y \le \text{None})]=L_1=log\frac{\pi_1(x)}{\pi_2(x)+\pi_3(x)}=logit(\text{None vs. [Some or Marked]})\]
Because polr() fits \(logit[P(Y \le j)] = \zeta_j - \beta^\top x\), the reported slope coefficients enter the cumulative logits with flipped signs: \[logit[P(Y \le \text{None})]=\] \[2.5319 + 1.25168 \cdot sexMale - 1.74529 \cdot Treatment - 0.03816 \cdot age\]
Second logit: \[logit[P(Y \le 2)]=logit[P(Y \le \text{Some})]=L_2=log\frac{\pi_1(x)+\pi_2(x)}{\pi_3(x)}=logit(\text{[None or Some] vs. Marked})\]
Again, because polr() fits \(logit[P(Y \le j)] = \zeta_j - \beta^\top x\): \[logit[P(Y \le \text{Some})]=\] \[3.4309 + 1.25168 \cdot sexMale - 1.74529 \cdot Treatment - 0.03816 \cdot age\]
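As a check on these equations, the fitted cumulative logits can be converted back into the three category probabilities with plogis(). The sketch below hard-codes the coefficients from summary(fit) and evaluates them for one hypothetical patient (a treated female aged 46); recall that polr() fits \(logit[P(Y \le j)] = \zeta_j - \beta^\top x\):

```r
# Slope coefficients and intercepts taken from summary(fit)
beta <- c(SexMale = -1.25168, TreatmentTreated = 1.74529, Age = 0.03816)
zeta <- c("None|Some" = 2.5319, "Some|Marked" = 3.4309)

# Hypothetical patient: treated female, age 46
x <- c(SexMale = 0, TreatmentTreated = 1, Age = 46)
eta <- sum(beta * x)

# polr() parameterization: logit[P(Y <= j)] = zeta_j - eta
cum_p <- plogis(zeta - eta)            # P(Y <= None), P(Y <= Some)

# Difference the cumulative probabilities to get category probabilities
probs <- c(None   = cum_p[[1]],
           Some   = cum_p[[2]] - cum_p[[1]],
           Marked = 1 - cum_p[[2]])
round(probs, 3)
```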
Interpreting the model
Use odds ratios.
exp(coef(fit))
## SexMale TreatmentTreated Age
## 0.286024 5.727559 1.038900
Sex variable:
First logit, \(P(Y \le 1)=P(Y \le \text{None})\): for male patients, the odds of reporting a more improved outcome ([Some or Marked] vs. None) are 0.29 times those of female patients.
Second logit, \(P(Y \le 2)=P(Y \le \text{Some})\): for male patients, the odds of reporting a markedly improved outcome (Marked vs. [None or Some]) are also 0.29 times those of female patients.
Interpreting the model
exp(coef(fit))
## SexMale TreatmentTreated Age
## 0.286024 5.727559 1.038900
Treatment variable:
First logit, \(P(Y \le 1)=P(Y \le \text{None})\): for treated patients, the odds of reporting a more improved outcome ([Some or Marked] vs. None) are 5.7 times those of patients receiving the placebo.
Second logit, \(P(Y \le 2)=P(Y \le \text{Some})\): for treated patients, the odds of reporting a markedly improved outcome (Marked vs. [None or Some]) are also 5.7 times those of patients receiving the placebo.
Interpreting the model (cont.)
exp(coef(fit))
## SexMale TreatmentTreated Age
## 0.286024 5.727559 1.038900
Age variable:
First logit, \(P(Y \le 1)=P(Y \le \text{None})\): for every one-unit increase in age, the odds of reporting a more improved outcome ([Some or Marked] vs. None) are multiplied by 1.04 (about a 4% increase).
Second logit, \(P(Y \le 2)=P(Y \le \text{Some})\): for every one-unit increase in age, the odds of reporting a markedly improved outcome (Marked vs. [None or Some]) are also multiplied by 1.04.
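The “proportional” in proportional odds means these odds ratios are the same at every cutpoint. The sketch below illustrates this for Treatment, using the coefficients from summary(fit) and an arbitrary hypothetical patient (female, age 50):

```r
beta_trt <- 1.74529                 # TreatmentTreated from summary(fit)
zeta <- c(2.5319, 3.4309)           # the two intercepts (cutpoints)

# Linear predictors for a female aged 50, placebo vs. treated
eta_placebo <- 0.03816 * 50
eta_treated <- eta_placebo + beta_trt

# Odds of Y > j (a more improved response) at each cutpoint,
# using polr()'s parameterization logit[P(Y <= j)] = zeta_j - eta
odds_gt <- function(eta) (1 - plogis(zeta - eta)) / plogis(zeta - eta)

# The ratio is identical at both cutpoints and equals exp(beta_trt)
odds_gt(eta_treated) / odds_gt(eta_placebo)
exp(beta_trt)
```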
Graphical Assessment of the Proportional Odds Using Full Model Plots
Remember, for a model with two or more explanatory variables, full-model plots display the fitted response surface for all predictors together, rather than stratified by conditioning variables.
fitp <- cbind(data, predict(fit, type="probs"))
str(fitp)
## 'data.frame': 84 obs. of 8 variables:
## $ ID : int 57 46 77 17 36 23 75 39 33 55 ...
## $ Treatment: Factor w/ 2 levels "Placebo","Treated": 2 2 2 2 2 2 2 2 2 2 ...
## $ Sex : Factor w/ 2 levels "Female","Male": 2 2 2 2 2 2 2 2 2 2 ...
## $ Age : int 27 29 30 32 46 58 59 59 63 63 ...
## $ Improved : Ord.factor w/ 3 levels "None"<"Some"<..: 2 1 1 3 3 3 1 3 1 1 ...
## $ None : num 0.733 0.717 0.71 0.694 0.57 ...
## $ Some : num 0.138 0.144 0.148 0.154 0.195 ...
## $ Marked : num 0.129 0.138 0.143 0.152 0.235 ...
head(fitp)
## ID Treatment Sex Age Improved None Some Marked
## 1 57 Treated Male 27 Some 0.7326185 0.1380586 0.1293229
## 2 46 Treated Male 29 None 0.7174048 0.1444325 0.1381627
## 3 77 Treated Male 30 None 0.7096042 0.1476259 0.1427699
## 4 17 Treated Male 32 Marked 0.6936286 0.1540035 0.1523679
## 5 36 Treated Male 46 Marked 0.5702499 0.1950359 0.2347142
## 6 23 Treated Male 58 Marked 0.4563432 0.2171302 0.3265266
Rearranging your data with R functions
To construct the full-model plots (probabilities on the y-axis), we need to take the data from above, which are in a wide format, and convert them to a long format. We will use melt() from the reshape2 package to perform this task.
▪ Going from WIDE to LONG
• melt() function
• In wide formats, rows are participants (individual observations) and columns are variables.
▪ Going from LONG to WIDE
• cast() function
• In the LONG format, rows are the multiple measurements of the participants (individual observations).
Rearranging your data with R functions - melt()
▪ MELT
• dataframe,
• id = c("column", "column") - constant variables you do not want to change. These will stay their own column but get repeated when necessary
• measured = c("column", "column") - variables you want to combine into one column
library(reshape2)
plotdata <- melt(
fitp,
id.vars=c("Sex", "Treatment", "Age", "Improved"),
measure.vars=c("None", "Some", "Marked"),
variable.name="Level",
value.name="Probability")
str(plotdata)
## 'data.frame': 252 obs. of 6 variables:
## $ Sex : Factor w/ 2 levels "Female","Male": 2 2 2 2 2 2 2 2 2 2 ...
## $ Treatment : Factor w/ 2 levels "Placebo","Treated": 2 2 2 2 2 2 2 2 2 2 ...
## $ Age : int 27 29 30 32 46 58 59 59 63 63 ...
## $ Improved : Ord.factor w/ 3 levels "None"<"Some"<..: 2 1 1 3 3 3 1 3 1 1 ...
## $ Level : Factor w/ 3 levels "None","Some",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Probability: num 0.733 0.717 0.71 0.694 0.57 ...
Rearranging your data with R functions - melt() (cont.)
head(fitp)
## ID Treatment Sex Age Improved None Some Marked
## 1 57 Treated Male 27 Some 0.7326185 0.1380586 0.1293229
## 2 46 Treated Male 29 None 0.7174048 0.1444325 0.1381627
## 3 77 Treated Male 30 None 0.7096042 0.1476259 0.1427699
## 4 17 Treated Male 32 Marked 0.6936286 0.1540035 0.1523679
## 5 36 Treated Male 46 Marked 0.5702499 0.1950359 0.2347142
## 6 23 Treated Male 58 Marked 0.4563432 0.2171302 0.3265266
head(plotdata)
## Sex Treatment Age Improved Level Probability
## 1 Male Treated 27 Some None 0.7326185
## 2 Male Treated 29 None None 0.7174048
## 3 Male Treated 30 None None 0.7096042
## 4 Male Treated 32 Marked None 0.6936286
## 5 Male Treated 46 Marked None 0.5702499
## 6 Male Treated 58 Marked None 0.4563432
Rearranging your data with R functions - melt() (cont.)
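For completeness, the reverse trip (LONG back to WIDE) uses dcast() from the same reshape2 package. A self-contained toy example with made-up values (not the arthritis data):

```r
library(reshape2)

# A small LONG data set: one row per (id, level) measurement
long <- data.frame(
  id    = c(1, 1, 2, 2),
  level = c("None", "Marked", "None", "Marked"),
  prob  = c(0.7, 0.3, 0.4, 0.6))

# Rows are identified by 'id'; each value of 'level' becomes a column
wide <- dcast(long, id ~ level, value.var = "prob")
wide
```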
library(ggplot2)
library(directlabels)
gg <- ggplot(
plotdata,
aes(x = Age, y = Probability, color = Level)) +
geom_line(size = 2.5) +
theme_bw() +
xlim(10, 80) +
geom_point(color = "black", size = 1.5) +
facet_grid(Sex ~ Treatment)
gg
[Full-model plot: predicted probabilities of each Improved level by Age, faceted by Sex and Treatment]
In-class Exercises
Happiness in my house
Question for next class
At your work, what is an example of something with a multi-level outcome?