BU.450.760 Technical Document T5.1 – pscore matching Prof.
Propensity score matching in R
In this document we implement a propensity score (pscore) matching analysis in R. The companion script is in S5.1. We illustrate the procedures using the dataset D5.1 on Instagram influencer disclosure (see C5.1). This dataset contains information on many Instagram posts by influencers, some of which were disclosed as sponsored content and others that were not. The key elements of the dataset are:
• Disclosure indicator (disclosure): whether or not the post was disclosed as sponsored content
• Post’s likes and influencer’s number of followers. These two variables define the key outcome of interest, engagement = 100*(likes/followers)
• Other post characteristics (Xs)
We seek to answer the following question: does disclosure lead to less engagement? If we answered this question simply by running a regression of engagement on the disclosure indicator, we might get an incorrect answer, because chances are influencers do not use disclosure at random. Indeed, they may disclose precisely those posts in which the potential disclosure penalty on engagement is least likely to be felt.
1. Preliminaries
In addition to loading the data, the preliminaries include loading packages, which play a particularly important role in this case. Package “tableone” will be used to assess balance; “MatchIt” contains the methods that create the matched sample; “lattice” contains other useful routines including the “histogram” command used to assess overlap.
The lines below declare the categorical variables as factors and define the outcome of interest, engagement (y).
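Since the companion script is not reproduced in this document, the lines below are a sketch of what these preliminaries plausibly look like; the data file name and the names of the categorical variables (picture_quality, category, sentiment) are assumptions.

    library(tableone)  # balance assessment
    library(MatchIt)   # matching routines
    library(lattice)   # includes the "histogram" command used to assess overlap

    ds <- read.csv("D5_1.csv")  # file name is an assumption

    # Declare categorical variables as factors (variable names are assumptions)
    ds$picture_quality <- as.factor(ds$picture_quality)
    ds$category <- as.factor(ds$category)
    ds$sentiment <- as.factor(ds$sentiment)

    # Define the outcome of interest: engagement
    ds$y <- 100 * (ds$likes / ds$followers)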
2. Pre-matching assessment of balancing
We use the command “CreateTableOne” to assess the pre-matching treatment/control balance of covariates. Notice that we are writing this command directly inside a “print()” instruction (otherwise the results would not be immediately shown). The “smd = TRUE” argument instructs “print” to also show standardized differences.

As for the arguments of “CreateTableOne”, notice that “strata” takes in the treatment indicator. Also notice that we have supplied a one-by-one list of the variables whose balance needs to be assessed. We left “y” and “likes” out of this list because they pertain to the outcome.
The output is shown below. The “0” column shows averages for non-treated observations and the “1” column for treated ones. Note that there are many more non-treated observations than treated ones (about 8 times more). The “SMD” column shows standardized differences, which suggest large imbalances in the number of hashtags (disclosed posts have more hashtags), picture quality (disclosed posts are much less likely to have poor or very poor quality, and much more likely to have excellent quality), category (disclosed posts have relatively more presence in the maternity and outdoors categories, relatively less in the others), and caption text sentiment (disclosed posts are much more likely to contain personal anecdotes). Overall, these results indicate substantial imbalance.
Lastly, note that for datasets with many X variables, supplying “CreateTableOne” a one-by-one list of balancing covariates can be impractical. An alternative in this case is to craft the list by instead excluding a select few, as sketched below:
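A sketch of both approaches, with the same assumed variable names as above:

    # One-by-one list of covariates whose balance needs to be assessed
    xvars <- c("followers", "hashtags", "picture_quality", "category", "sentiment")
    print(CreateTableOne(vars = xvars, strata = "disclosure", data = ds), smd = TRUE)

    # Alternative for datasets with many Xs: exclude a select few instead
    xvars <- setdiff(names(ds), c("disclosure", "y", "likes"))
    print(CreateTableOne(vars = xvars, strata = "disclosure", data = ds), smd = TRUE)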

3. Checking overlap
Recall that “overlap” is about the extent to which we can find in the data treated and untreated observations with a similar likelihood of having been treated (as reflected by their propensity score value). If, for example, we saw that at pscore = 0.9 there are many treated observations but almost no untreated ones, that would tell us that at this pscore level it will be very difficult to find matches for the treated observations. In other words, in this range of pscores, the treated and untreated subsamples have very poor overlap. Our analysis should concentrate on regions where the overlap levels do not make us worry that we will be unable to find satisfactory matches.
The lattice package’s “histogram” command makes it easy to check overlap graphically. To make this work, we need to first generate the pscores, which we do below in line 39. Notice that: (i) we are estimating the pscore model and predicting pscores in the same line, (ii) among the Xs of the pscore model, we have included followers but have excluded likes (likes is the key input for the outcome engagement).
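A sketch of this step (a logit pscore model, with the covariate names assumed above):

    # Estimate the pscore model and predict pscores in the same line
    ds$pscore <- glm(disclosure ~ followers + hashtags + picture_quality +
                       category + sentiment,
                     family = binomial(), data = ds)$fitted.values

    # Pscore histograms by treatment status, to inspect overlap
    histogram(~ pscore | factor(disclosure), data = ds, xlab = "Propensity score")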

Although the distributions for untreated observations (left) and treated ones (right) are similar, the latter has relatively more mass in the >= 0.3 range. For the untreated distribution, there is almost no mass in this range. This finding suggests that it will be very difficult to generate close matches there. Accordingly, our analysis below will exclude observations associated with pscores larger than 0.3.
4. Creation of the matched sample
We now turn to the essential step of the methodology—creating the matched sample. The command “matchit” performs this step (line 48). The first argument is the specification of the propensity score model, i.e., treatment indicator ~ Xs. The second argument instructs the routine to match each observation to the opposite-treatment observation with the most similar estimated propensity score.
In the line that follows, we simply request R to display the object that stores the result of the matching. We have called this object “matched”. The output shown below clarifies that this is 1:1 matching (each observation matched to one other) with no replacement. Overall, about 8,600 (out of the original 42,000) observations end up matched, split evenly between treated and non-treated, as we will see next.
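A sketch of the matching call, reusing the pscore specification assumed above:

    # 1:1 nearest-neighbor pscore matching, without replacement
    matched <- matchit(disclosure ~ followers + hashtags + picture_quality +
                         category + sentiment,
                       method = "nearest", data = ds)
    matched  # display the matching information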
While the object “matched” contains all the information needed to create the matched dataset, the matched dataset has not yet been created. We do this in line 58. In line 59 we incorporate the fact that overlap was poor for pscore > 0.3: we request that the matched dataset (ds_matched) only include matched observations with pscores below this value.
The commands in lines 60 and 61 obtain the dimensions of the matched dataset:
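A sketch of these steps; in the output of MatchIt’s match.data(), the estimated pscore is stored in the column “distance”:

    # Create the matched dataset from the "matched" object
    ds_matched <- match.data(matched)

    # Keep only matched observations in the good-overlap region
    ds_matched <- ds_matched[ds_matched$distance < 0.3, ]

    # Dimensions and treated/non-treated split of the matched dataset
    dim(ds_matched)
    table(ds_matched$disclosure)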

The above statistics show that the matched dataset contains 4,251 matched pairs of treated/non-treated observations.
5. Post-matching assessment of balancing
Line 68 reproduces our previous balancing analysis but now focusing on the matched sample. The output below shows that standardized differences are largely eliminated—the matching routine appears to be a reasonable approximation of the parallel worlds ideal.
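A sketch, reusing the covariate list xvars from before:

    # Re-assess covariate balance, now on the matched sample
    print(CreateTableOne(vars = xvars, strata = "disclosure", data = ds_matched),
          smd = TRUE)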
6. Treatment effect estimation
The ATE is estimated in its simplest form in line 76. For comparison, the analogous correlational estimate is computed in line 77 (the key difference is the sample used).
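A sketch of the two regressions:

    # Causal estimate: engagement on disclosure, matched sample
    summary(lm(y ~ disclosure, data = ds_matched))

    # Correlational estimate: same regression, full sample
    summary(lm(y ~ disclosure, data = ds))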

The difference between the two estimates, causal (line 76) and correlational (line 77), is large and even implies a different directional conclusion. Whereas the correlational estimate would suggest that disclosure leads to more engagement, the causal estimate implies the opposite: disclosure lowers engagement by about 0.12 percentage points. Both estimates are significant at the 99% confidence level.
In lines 80 and 82 we reproduce this analysis but adding controls to the regressions. Adding controls helps the correlational estimate a lot—it no longer implies a disclosure gain. For the causal estimate, adding controls matters little—the estimate is very similar to the one we obtained before. This is a sign that our matching was successful: we got quite close to the parallel worlds ideal, where average conditions even out between treatment and control groups. Note that, although the gap between the correlational and causal estimates closes, it does not vanish; i.e., the correlational estimate is still misleading. (Note: these outputs are not shown here.)
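A sketch of the two regressions with controls (the exact control set is an assumption):

    # Correlational estimate with controls, full sample
    summary(lm(y ~ disclosure + followers + hashtags + picture_quality +
                 category + sentiment, data = ds))

    # Causal estimate with controls, matched sample
    summary(lm(y ~ disclosure + followers + hashtags + picture_quality +
                 category + sentiment, data = ds_matched))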
Our last regression (line 89) asks whether the ATE may be different among “mega influencers”, i.e., influencers with one million or more followers. We find that, for these, the effect is about twice as large: disclosure lowers engagement by about 0.22 percentage points.
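One plausible implementation (the dummy name "mega" is an assumption) interacts disclosure with a mega-influencer indicator:

    # Mega influencer indicator: one million or more followers
    ds_matched$mega <- as.numeric(ds_matched$followers >= 1e6)

    # Allow the disclosure effect to differ for mega influencers
    summary(lm(y ~ disclosure * mega, data = ds_matched))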
