Stat-3503/8109 Problem set 1
problem set no. 1
learning objectives. compute likelihoods, both for a generic sample, i.e., (x1, …, xn), and for a specific sample, i.e., (2, 3, 6, 4, 8, 5, 6, 2, 3, 6, 5); write some short programs to generate fake data sets from a given model and plot the corresponding likelihoods.
problem 1. set-up: you are interested in studying the writing style of a popular Time Magazine contributor, FZ. you collect a simple random sample of his articles and count how many times he uses the word however in each of the articles in your sample, (x1,…,xn). In this set-up, xi is the number of times the word however appeared in the i-th article.


question 1.1. (10 points) define the population of interest, the population quantity of interest (the thing in the population you're interested in), and the sampling units.
question 1.2. (10 points) what are potentially useful estimands for studying writing style? (hint: you are interested in comparing FZ's writing style to that of other contributors.)
question 1.3. (10 points) model: let Xi denote the quantity that captures the number of times the word however appears in the i-th article. let's assume that the quantities X1, …, Xn are independent and identically distributed (IID) according to a Poisson distribution with unknown parameter λ,
p(Xi = xi | λ) = Poisson(xi | λ) for i = 1, …, n.
using the 2-by-2 table of what's variable/constant versus what's observed/unknown, declare the technical nature (random variable, latent variable, known constant, or unknown constant) of each of the quantities involved in the set-up/model above: X1, …, Xn, x1, …, xn, λ, and n.
question 1.4. (10 points) write the data generating process for the model above.
question 1.5. (10 points) define the likelihood L(λ) = p(· | ·) for this model as a function of p(· | ·).
question 1.6. (10 points) write the likelihood L(λ) for a generic sample of n articles, (x1, …, xn).
question 1.7. (10 points) write the log-likelihood l(λ) for a generic sample of n articles, (x1, …, xn).
question 1.8. (10 points) write the log-likelihood l(λ) for the following specific sample of 7 articles, (12, 4, 5, 3, 7, 5, 6) (you could use … to abbreviate it).

question 1.9. (10 points) plot the log-likelihood l(λ) (on a computer) for the same specific sample of 7 articles, (12, 4, 5, 3, 7, 5, 6). At approximately what value of λ does the maximum occur?
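for reference, here is a minimal R sketch of the kind of computation question 1.9 asks for; the variable names and the grid of λ values are arbitrary choices, not part of the assignment.

    # specific sample of 7 articles from questions 1.8-1.9
    x <- c(12, 4, 5, 3, 7, 5, 6)

    # Poisson log-likelihood of lambda: sum of the log pmf over the sample
    loglik <- function(lambda) sum(dpois(x, lambda, log = TRUE))

    # evaluate on a grid of lambda values and plot
    lambda_grid <- seq(0.1, 15, by = 0.1)
    ll <- sapply(lambda_grid, loglik)
    plot(lambda_grid, ll, type = "l",
         xlab = "lambda", ylab = "log-likelihood")

    lambda_grid[which.max(ll)]   # grid value where the maximum occurs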
question 1.10. (10 points) draw a graphical representation of this model, which explicitly shows the random quantities and the unknown constants only.
Extra credit mmmh … something is amiss. the articles FZ writes have different lengths. if we model the word occurrences in each article as IID Poisson random variables with rate λ, we are implicitly assuming that the articles have the same length. why? (10 points; extra credit) and if that is true, what is the implied common length? (10 points; extra credit)
problem 2. set-up: you collect another random sample of articles penned by FZ and count how many times he uses the word however in each of the articles in your sample, (x1, …, xn). you also count the length of each article in your sample, (y1, …, yn). In this set-up, xi is the number of times the word however appeared in the i-th article, as before, and yi is the total number of words in the i-th article.
question 2.1. (10 points) model: let Xi denote the quantity that captures the number of times the word however appears in the i-th article. let's assume that the quantities X1, …, Xn are independent and identically distributed (IID) according to a Poisson distribution with unknown parameter ν · yi/1000,
p(Xi = xi | yi, ν, 1000) = Poisson(xi | ν · yi/1000) for i = 1, …, n.
using the 2-by-2 table of what's variable/constant versus what's observed/unknown, declare the technical nature (random variable, latent variable, known constant, or unknown constant) of each of the quantities involved in the set-up/model above: X1, …, Xn, x1, …, xn, y1, …, yn, ν, and n.
question 2.2. (10 points) what is the interpretation of yi/1000 in this model? explain.
question 2.3. (10 points) what is the interpretation of ν in this model? explain.
question 2.4. (10 points) write the data generating process for the model above.
question 2.5. (10 points) define the likelihood L(ν) = p(· | ·) for this model as a function of p(· | ·).
question 2.6. (10 points) write the likelihood L(ν) for a generic sample of n articles, (x1, …, xn), and n article lengths, (y1, …, yn).

question 2.7. (10 points) write the log-likelihood l(ν) for a generic sample of n articles, (x1, …, xn), and n article lengths, (y1, …, yn).
question 2.8. (10 points) Simulate the number of occurrences of the word however for 5 articles using the data generating process. Assume ν = 10 and corresponding article lengths y = (1730, 947, 1830, 1210, 1100). Record the number of occurrences of however in each article.
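as a rough illustration of question 2.8, one possible R simulation is sketched below; the seed is an arbitrary choice, so your simulated counts will differ.

    # data generating process for problem 2: X_i ~ Poisson(nu * y_i / 1000)
    set.seed(1)                               # arbitrary seed
    nu <- 10
    y  <- c(1730, 947, 1830, 1210, 1100)      # article lengths from question 2.8
    x  <- rpois(length(y), lambda = nu * y / 1000)
    x                                         # the simulated counts of "however"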
question 2.9. (10 points) write the log-likelihood l(ν) for the specific sample of occurrences you generated in the previous question and their corresponding 5 article lengths (1730, 947, 1830, 1210, 1100) (you can use … to abbreviate).
question 2.10. (10 points) Plot the log-likelihood from the previous question (on a computer). Does the maximum occur near ν = 10?
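a minimal sketch of the plot in question 2.10, reusing the x and y simulated in the sketch above (the grid of ν values is an arbitrary choice):

    # log-likelihood of nu given the simulated counts x and the lengths y
    loglik_nu <- function(nu) sum(dpois(x, lambda = nu * y / 1000, log = TRUE))

    nu_grid <- seq(0.5, 25, by = 0.1)
    plot(nu_grid, sapply(nu_grid, loglik_nu), type = "l",
         xlab = "nu", ylab = "log-likelihood")
    abline(v = 10, lty = 2)   # the value of nu used to generate the data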
question 2.11. (10 points) draw a graphical representation of this model, which explicitly shows the random quantities and the unknown constants only.
OK, that was a more reasonable model. but FZ writes about different topics. our model is not capturing that. is FZ more prone to offering his own opinions when he writes about politics than when he writes about other topics? let's investigate.
problem 3. set-up: you collect a random sample of articles penned by FZ and count how many times he uses the word I in each of the articles in your sample, (x1, …, xn). In this set-up, xi is the number of times the word I appeared in the i-th article.
question 3.1. (10 points) model: let Xi denote the quantity that captures the number of times the word I appears in the i-th article. let Zi indicate whether the i-th article is about politics, denoted by Zi = 1, or not, denoted by Zi = 0. let's assume that the quantities X1, …, Xn are independent of one another conditionally on the corresponding values of Z1, …, Zn. let's assume that the quantities Z1, …, Zn are independent and identically distributed (IID) according to a Bernoulli distribution with parameter π,
p(Zi = zi | π) = Bernoulli(zi | π) for i = 1, …, n.
let's further assume that the number of occurrences of the word I in an article about politics follows a Poisson distribution with unknown parameter λ_Politics,
p(Xi = xi | Zi = 1, λ_Politics) = Poisson(xi | λ_Politics) for i = 1, …, n,
and that the number of occurrences of the word I in an article about any other topic follows a Binomial distribution with size 1000 and unknown parameter θ_Other,
p(Xi = xi | Zi = 0, 1000, θ_Other) = Binomial(xi | 1000, θ_Other) for i = 1, …, n.

using the 2-by-2 table of what's variable/constant versus what's observed/unknown, declare the technical nature (random variable, latent variable, known constant, or unknown constant) of each of the quantities involved in the set-up/model above: X1, …, Xn, x1, …, xn, Z1, …, Zn, z1, …, zn, π, λ_Politics, θ_Other, and n.
question 3.2. (10 points) write the data generating process for the model above.
question 3.3. (10 points) simulate 1000 values of Xi in R from the data generating process assuming π = 0.3, λ_Politics = 30, and θ_Other = 0.02. Plot the values of Xi | Zi = 1 and Xi | Zi = 0 as two histograms on the same plot. Color the histograms by the value of Zi so the two populations can be distinguished.
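one possible R sketch for question 3.3 is given below; the seed, colors, and variable names are arbitrary choices.

    set.seed(1)                               # arbitrary seed
    n <- 1000
    pi_politics <- 0.3
    lambda_politics <- 30
    theta_other <- 0.02

    # topic indicators Z_i ~ Bernoulli(pi)
    z <- rbinom(n, size = 1, prob = pi_politics)

    # counts X_i: Poisson for politics articles, Binomial otherwise
    x <- numeric(n)
    x[z == 1] <- rpois(sum(z == 1), lambda_politics)
    x[z == 0] <- rbinom(sum(z == 0), size = 1000, prob = theta_other)

    # overlay the two histograms, colored by the value of Z_i
    hist(x[z == 0], col = rgb(0, 0, 1, 0.5), breaks = 30, xlim = range(x),
         main = "simulated counts of the word I", xlab = "x")
    hist(x[z == 1], col = rgb(1, 0, 0, 0.5), breaks = 30, add = TRUE)
    legend("topright", legend = c("Z = 0 (other)", "Z = 1 (politics)"),
           fill = c(rgb(0, 0, 1, 0.5), rgb(1, 0, 0, 0.5)))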
question 3.4. (10 points) write the likelihood for 1 article, Li(λ_Politics, θ_Other) = p(Xi = xi | λ_Politics, θ_Other).
question 3.5. (10 points) write the likelihood L(λ_Politics, θ_Other) for a generic sample of n articles, (x1, …, xn).
question 3.6. (10 points) write the log-likelihood l(λ_Politics, θ_Other) for a generic sample of n articles, (x1, …, xn).
question 3.7. (10 points) write the log-likelihood l(λ_Politics, θ_Other) for the following specific sample of 8 articles (12, 4, 8, 3, 3, 10, 1, 9).
question 3.8. (10 points) draw a graphical representation of this model, which explicitly shows the random quantities and the unknown constants only.
Extra credit wait, but is it reasonable to assume that the rate λ is an unknown constant in all of our models? it seems like a stretch. (10 points; if you agree)
problem 4. This one is for 8109 ONLY!! set-up: let's go back to the simplest possible set-up for this exercise. you collect a random sample of articles penned by FZ and count how many times he uses the word and in each of the articles in your sample, (x1, …, xn). In this set-up, xi is the number of times the word and appeared in the i-th article, as before.
question 4.1. (10 points) model: let Xi denote the quantity that captures the number of times the word and appears in the i-th article. let's assume that the quantities X1, …, Xn are independent and identically distributed (IID) according to a Poisson distribution with unknown parameter Λ,
p(Xi = xi | Λ = λi) = Poisson(xi | λi) for i = 1, …, n.

in addition, let's assume that the rate Λ is distributed according to a Gamma distribution with unknown parameters α and θ,
f(Λ = λi | α, θ) = Gamma(λi | α, θ).
using the 2-by-2 table of what's variable/constant versus what's observed/unknown, declare the technical nature (random variable, latent variable, known constant, or unknown constant) of each of the quantities involved in the set-up/model above: X1, …, Xn, x1, …, xn, Λ, λ1, …, λn, α, θ, and n.
question 4.2. (10 points) write the data generating process for the model above.
question 4.3. (10 points) simulate 1000 values from the data generating process. Assume α = 10 and θ = 1. Compute the mean and variance of the Xi.
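a minimal sketch for question 4.3, assuming the Gamma distribution is parameterized by shape α and scale θ (adjust if your notes use the rate parameterization); the seed is arbitrary.

    set.seed(1)                               # arbitrary seed
    n <- 1000
    alpha <- 10
    theta <- 1                                # assuming theta is the scale parameter

    lambda <- rgamma(n, shape = alpha, scale = theta)  # article-specific rates
    x <- rpois(n, lambda)                              # counts given those rates

    mean(x)
    var(x)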
question 4.4. (10 points) simulate 1000 values assuming λi = 10 for all i (ignore the Gamma distribution). Compute the mean and variance of the Xi now. How do they compare to the mean and variance you calculated in question 4.3?
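and a corresponding sketch for question 4.4, fixing λi = 10 for every article:

    set.seed(1)                               # arbitrary seed
    x_fixed <- rpois(1000, lambda = 10)       # lambda_i = 10 for all i
    mean(x_fixed)
    var(x_fixed)   # compare with the mean and variance from question 4.3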
question 4.5. (10 points) write the likelihood for 1 article, Li(α, θ) = p(Xi = xi | α, θ).
question 4.6. (10 points) write the log-likelihood l(α, θ) for a generic sample of n articles, (x1, …, xn).
question 4.7. (10 points) write the log-likelihood l(α, θ) for the following specific sample of 8 articles (64, 61, 89, 55, 57, 76, 47, 55).
question 4.8. (10 points) draw a graphical representation of this model, which explicitly
shows the random quantities and the unknown constants only.
Extra credit do you recognize the very special probability mass function you just obtained for p(Xi = xi | α, θ) = Li(α, θ)? (10 points; extra credit) excellent! you just proved a useful result: Gamma mixture of Poisson is a … .
Generate samples from this distribution and verify graphically that the distribution looks the same as the one in question 4.3 (you must use the appropriate parameters you identified above). (10 points; extra credit)
