
Introduction to Statistics
Class 10, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Know the three overlapping “phases” of statistical practice.

2. Know what is meant by the term statistic.

2 Introduction to statistics

Statistics deals with data. Generally speaking, the goal of statistics is to make inferences
based on data. We can divide this process into three phases: collecting data, describing
data and analyzing data. This fits into the paradigm of the scientific method. We make
hypotheses about what’s true, collect data in experiments, describe the results, and then
infer from the results the strength of the evidence concerning our hypotheses.

2.1 Experimental design

The design of an experiment is crucial to making sure the collected data is useful. The
adage ‘garbage in, garbage out’ applies here. A poorly designed experiment will produce
poor quality data, from which it may be impossible to draw useful, valid inferences. To
quote R.A. Fisher, one of the founders of modern statistics:

To consult a statistician after an experiment is finished is often merely to ask
him to conduct a post-mortem examination. He can perhaps say what the
experiment died of.

2.2 Descriptive statistics

Raw data often takes the form of a massive list, array, or database of labels and numbers.
To make sense of the data, we can calculate summary statistics like the mean, median, and
interquartile range. We can also visualize the data using graphical devices like histograms,
scatterplots, and the empirical cdf. These methods are useful for both communicating and
exploring the data to gain insight into its structure, such as whether it might follow a
familiar probability distribution.

2.3 Inferential statistics

Ultimately we want to draw inferences about the world. Often this takes the form of
specifying a statistical model for the random process by which the data arises. For example,
suppose the data takes the form of a series of measurements whose error we believe follows
a normal distribution. (Note this is always an approximation since we know the error must


18.05 class 10, Introduction to Statistics, Spring 2014 2

have some bound while a normal distribution has range (−∞,∞).) We might then use the
data to provide evidence for or against this hypothesis. Our focus in 18.05 will be on how
to use data to draw inferences about model parameters. For example, assuming gestational
length follows a N(µ, σ) distribution, we’ll use the data of the gestational lengths of, say,
500 pregnancies to draw inferences about the values of the parameters µ and σ. Similarly,
we may model the result of a two-candidate election by a Bernoulli(p) distribution, and use
poll data to draw inferences about the value of p.

We can rarely make definitive statements about such parameters because the data itself
comes from a random process (such as choosing who to poll). Rather, our statistical evidence
will always involve probability statements. Unfortunately, the media and public at large
are wont to misunderstand the probabilistic meaning of statistical statements. In fact,
researchers themselves often commit the same errors. In this course, we will emphasize the
meaning of statistical statements alongside the methods which produce them.

Example 1. To study the effectiveness of new treatment for cancer, patients are recruited
and then divided into an experimental group and a control group. The experimental group
is given the new treatment and the control group receives the current standard of care.
Data collected from the patients might include demographic information, medical history,
initial state of cancer, progression of the cancer over time, treatment cost, and the effect of
the treatment on tumor size, remission rates, longevity, and quality of life. The data will
be used to make inferences about the effectiveness of the new treatment compared to the
current standard of care.

Notice that this study will go through all three phases described above. The experimental
design must specify the size of the study, who will be eligible to join, how the experimental
and control groups will be chosen, how the treatments will be administered, whether or
not the subjects or doctors know who is getting which treatment, and precisely what data
will be collected, among other things. Once the data is collected it must be described and
analyzed to determine whether it supports the hypothesis that the new treatment is more
(or less) effective than the current one(s), and by how much. These statistical conclusions
will be framed as precise statements involving probabilities.

As noted above, misinterpreting the exact meaning of statistical statements is a common
source of error which has led to tragedy on more than one occasion.

Example 2. In 1999 in Great Britain, Sally Clark was convicted of murdering her two
children after each child died weeks after birth (the first in 1996, the second in 1998).
Her conviction was largely based on a faulty use of statistics to rule out sudden infant
death syndrome. Though her conviction was overturned in 2003, she developed serious
psychiatric problems during and after her imprisonment and died of alcohol poisoning in
2007. See http://en.wikipedia.org/wiki/Sally_Clark

This TED talk discusses the Sally Clark case and other instances of poor statistical intuition.

2.4 What is a statistic?

We give a simple definition whose meaning is best elucidated by examples.

Definition. A statistic is anything that can be computed from the collected data.



Example 3. Consider the data of 1000 rolls of a die. All of the following are statistics:
the average of the 1000 rolls; the number of times a 6 was rolled; the sum of the squares
of the rolls minus the number of even rolls. It’s hard to imagine how we would use the
last example, but it is a statistic. On the other hand, the probability of rolling a 6 is not a
statistic, whether or not the die is truly fair. Rather this probability is a property of the die
(and the way we roll it) which we can estimate using the data. Such an estimate is given
by the statistic ‘proportion of the rolls that were 6’.
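The examples above can be mimicked in a few lines of Python. The rolls here are simulated (the seed is an arbitrary choice), so the numbers are illustrative only:

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
rolls = [random.randint(1, 6) for _ in range(1000)]  # simulated data

# Each of these is a statistic: a number computed from the collected data.
average = sum(rolls) / len(rolls)
count_sixes = sum(1 for r in rolls if r == 6)
prop_sixes = count_sixes / len(rolls)
```

Note that prop_sixes is a statistic that estimates the probability of rolling a 6, which is itself a property of the die and not a statistic.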

Example 4. Suppose we treat a group of cancer patients with a new procedure and collect
data on how long they survive post-treatment. From the data we can compute the average
survival time of patients in the group. We might employ this statistic as an estimate of the
average survival time for future cancer patients following the new procedure. The actual
survival is not a statistic.

Example 5. Suppose we ask 1000 residents whether or not they support the proposal to
legalize marijuana in Massachusetts. The proportion of the 1000 who support the proposal
is a statistic. The proportion of all Massachusetts residents who support the proposal is
not a statistic since we have not queried every single one (note the word “collected” in the
definition). Rather, we hope to draw a statistical conclusion about the state-wide proportion
based on the data of our random sample.

The following are two general types of statistics we will use in 18.05.

1. Point statistics: a single value computed from data, such as the sample average x̄n or
the sample standard deviation sn.

2. Interval statistics: an interval [a, b] computed from the data. This is really just a pair of
point statistics, and will often be presented in the form x̄ ± s.

3 Review of Bayes’ theorem

We cannot stress strongly enough how important Bayes’ theorem is to our view of inferential
statistics. Recall that Bayes’ theorem allows us to ‘invert’ conditional probabilities. That
is, if H and D are events, then Bayes’ theorem says

P(H|D) = P(D|H)P(H) / P(D).

In scientific experiments we start with a hypothesis and collect data to test the hypothesis.
We will often let H represent the event ‘our hypothesis is true’ and let D be the collected
data. In these words Bayes’ theorem says

P(hypothesis is true | data) = P(data | hypothesis is true) · P(hypothesis is true) / P(data)

The left-hand term is the probability our hypothesis is true given the data we collected.
This is precisely what we’d like to know. When all the probabilities on the right are known
exactly, we can compute the probability on the left exactly. This will be our focus next
week. Unfortunately, in practice we rarely know the exact values of all the terms on the


right. Statisticians have developed a number of ways to cope with this lack of knowledge
and still make useful inferences. We will be exploring these methods for the rest of the
course.

Example 6. Screening for a disease redux
Suppose a screening test for a disease has a 1% false positive rate and a 1% false negative
rate. Suppose also that the rate of the disease in the population is 0.002. Finally suppose
a randomly selected person tests positive. In the language of hypothesis and data we have:
Hypothesis: H = ‘the person has the disease’
Data: D = ‘the test was positive.’
What we want to know: P (H|D) = P (the person has the disease | a positive test)
In this example all the probabilities on the right are known so we can use Bayes’ theorem
to compute what we want to know.

P(hypothesis | data) = P(the person has the disease | a positive test)
= P(H|D)
= P(D|H)P(H) / P(D)
= (0.99 · 0.002) / (0.99 · 0.002 + 0.01 · 0.998)
≈ 0.166

Before the test we would have said the probability the person had the disease was 0.002.
After the test we see the probability is 0.166. That is, the positive test provides some
evidence that the person has the disease.
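The arithmetic in Example 6 is easy to reproduce; this Python sketch is a direct transcription of Bayes' theorem with the numbers from the example:

```python
# Screening test: 1% false positive rate, 1% false negative rate,
# disease prevalence 0.002.
prior = 0.002                  # P(H)
p_pos_given_disease = 0.99     # P(D|H) = 1 - false negative rate
p_pos_given_healthy = 0.01     # false positive rate

# Law of total probability gives P(D), then Bayes' theorem gives P(H|D).
p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_disease * prior / p_pos
print(round(posterior, 3))  # 0.166
```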

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Maximum Likelihood Estimates
Class 10, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to define the likelihood function for a parametric model given data.

2. Be able to compute the maximum likelihood estimate of unknown parameter(s).

2 Introduction

Suppose we know we have data consisting of values x1, . . . , xn drawn from an exponential
distribution. The question remains: which exponential distribution?!

We have casually referred to the exponential distribution or the binomial distribution or the
normal distribution. In fact the exponential distribution exp(λ) is not a single distribution
but rather a one-parameter family of distributions. Each value of λ defines a different
distribution in the family, with pdf fλ(x) = λ e^(−λx) on [0, ∞). Similarly, a binomial distribution
bin(n, p) is determined by the two parameters n and p, and a normal distribution N(µ, σ^2)
is determined by the two parameters µ and σ^2 (or equivalently, µ and σ). Parameterized
families of distributions are often called parametric distributions or parametric models.

We are often faced with the situation of having random data which we know (or believe)
is drawn from a parametric model, whose parameters we do not know. For example, in
an election between two candidates, polling data constitutes draws from a Bernoulli(p)
distribution with unknown parameter p. In this case we would like to use the data to
estimate the value of the parameter p, as the latter predicts the result of the election.
Similarly, assuming gestational length follows a normal distribution, we would like to use
the data of the gestational lengths from a random sample of pregnancies to draw inferences
about the values of the parameters µ and σ^2.

Our focus so far has been on computing the probability of data arising from a parametric
model with known parameters. Statistical inference flips this on its head: we will estimate
the probability of parameters given a parametric model and observed data drawn from it.
In the coming weeks we will see how parameter values are naturally viewed as hypotheses,
so we are in fact estimating the probability of various hypotheses given the data.

3 Maximum Likelihood Estimates

There are many methods for estimating unknown parameters from data. We will first
consider the maximum likelihood estimate (MLE), which answers the question:

For which parameter value does the observed data have the biggest probability?

The MLE is an example of a point estimate because it gives a single value for the unknown
parameter (later our estimates will involve intervals and probabilities). Two advantages of


18.05 class 10, Maximum Likelihood Estimates, Spring 2014 2

the MLE are that it is often easy to compute and that it agrees with our intuition in simple
examples. We will explain the MLE through a series of examples.

Example 1. A coin is flipped 100 times. Given that there were 55 heads, find the maximum
likelihood estimate for the probability p of heads on a single toss.

Before actually solving the problem, let’s establish some notation and terms.

We can think of counting the number of heads in 100 tosses as an experiment. For a given
value of p, the probability of getting 55 heads in this experiment is the binomial probability

P(55 heads) = (100 choose 55) p^55 (1 − p)^45.

The probability of getting 55 heads depends on the value of p, so let's include p by using
the notation of conditional probability:

P(55 heads | p) = (100 choose 55) p^55 (1 − p)^45.

You should read P (55 heads | p) as:
‘the probability of 55 heads given p,’

or more precisely as

‘the probability of 55 heads given that the probability of heads on a single toss is p.’

Here are some standard terms we will use as we do statistics.

• Experiment: Flip the coin 100 times and count the number of heads.

• Data: The data is the result of the experiment. In this case it is ‘55 heads’.

• Parameter(s) of interest: We are interested in the value of the unknown parameter p.

• Likelihood, or likelihood function: this is P (data | p). Note it is a function of both the
data and the parameter p. In this case the likelihood is

P(55 heads | p) = (100 choose 55) p^55 (1 − p)^45.

Notes: 1. The likelihood P (data | p) changes as the parameter of interest p changes.
2. Look carefully at the definition. One typical source of confusion is to mistake the likeli-
hood P (data | p) for P (p | data). We know from our earlier work with Bayes’ theorem that
P (data | p) and P (p |data) are usually very different.

Definition: Given data the maximum likelihood estimate (MLE) for the parameter p is
the value of p that maximizes the likelihood P (data | p). That is, the MLE is the value of
p for which the data is most likely.

answer: For the problem at hand, we saw above that the likelihood is

P(55 heads | p) = (100 choose 55) p^55 (1 − p)^45.


We’ll use the notation p̂ for the MLE. We use calculus to find it by taking the derivative of
the likelihood function and setting it to 0.

d/dp P(data | p) = (100 choose 55) (55 p^54 (1 − p)^45 − 45 p^55 (1 − p)^44) = 0.

Solving this for p we get

55 p^54 (1 − p)^45 = 45 p^55 (1 − p)^44
55 (1 − p) = 45 p
55 = 100 p,

so the MLE is p̂ = 0.55.

Note: 1. The MLE for p turned out to be exactly the fraction of heads we saw in our data.

2. The MLE is computed from the data. That is, it is a statistic.

3. Officially you should check that the critical point is indeed a maximum. You can do this
with the second derivative test.
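The calculus answer can also be sanity-checked numerically. This sketch simply evaluates the likelihood on a grid of candidate p values (the grid resolution of 0.001 is an arbitrary choice):

```python
import numpy as np
from math import comb

# Likelihood P(55 heads | p) = C(100, 55) p^55 (1 - p)^45 on a grid of p values.
p = np.linspace(0.001, 0.999, 999)
likelihood = comb(100, 55) * p**55 * (1 - p)**45

p_hat = p[np.argmax(likelihood)]  # numerical MLE, should be 0.55
```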

3.1 Log likelihood

It is often easier to work with the natural log of the likelihood function. For short this is
simply called the log likelihood. Since ln(x) is an increasing function, the maxima of the
likelihood and log likelihood coincide.

Example 2. Redo the previous example using log likelihood.

answer: We had the likelihood P(55 heads | p) = (100 choose 55) p^55 (1 − p)^45. Therefore
the log likelihood is

ln(P(55 heads | p)) = ln((100 choose 55)) + 55 ln(p) + 45 ln(1 − p).

Maximizing likelihood is the same as maximizing log likelihood. We check that calculus
gives us the same answer as before:

d/dp (log likelihood) = d/dp [ln((100 choose 55)) + 55 ln(p) + 45 ln(1 − p)] = 55/p − 45/(1 − p) = 0
⇒ 55(1 − p) = 45 p
⇒ p̂ = 0.55

3.2 Maximum likelihood for continuous distributions

For continuous distributions, we use the probability density function to define the likelihood.
We show this in a few examples. In the next section we explain how this is analogous to
what we did in the discrete case.


Example 3. Light bulbs
Suppose that the lifetime of Badger brand light bulbs is modeled by an exponential distri-
bution with (unknown) parameter λ. We test 5 bulbs and find they have lifetimes of 2, 3,
1, 3, and 4 years, respectively. What is the MLE for λ?

answer: We need to be careful with our notation. With five different values it is best to
use subscripts. Let Xi be the lifetime of the i-th bulb and let xi be the value Xi takes. Then
each Xi has pdf fXi(xi) = λ e^(−λxi). We assume the lifetimes of the bulbs are independent,
so the joint pdf is the product of the individual densities:

f(x1, x2, x3, x4, x5 | λ) = (λe^(−λx1))(λe^(−λx2))(λe^(−λx3))(λe^(−λx4))(λe^(−λx5)) = λ^5 e^(−λ(x1+x2+x3+x4+x5)).

Note that we write this as a conditional density, since it depends on λ. Viewing the data
as fixed and λ as variable, this density is the likelihood function. Our data had values

x1 = 2, x2 = 3, x3 = 1, x4 = 3, x5 = 4.

So the likelihood and log likelihood functions with this data are

f(2, 3, 1, 3, 4 | λ) = λ^5 e^(−13λ),   ln(f(2, 3, 1, 3, 4 | λ)) = 5 ln(λ) − 13λ.

Finally we use calculus to find the MLE:

d/dλ (log likelihood) = 5/λ − 13 = 0  ⇒  λ̂ = 5/13.

Note: 1. In this example we used an uppercase letter for a random variable and the
corresponding lowercase letter for the value it takes. This will be our usual practice.

2. The MLE for λ turned out to be the reciprocal of the sample mean x̄, so X ∼ exp(λ̂)
satisfies E(X) = x̄.
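Both the closed form λ̂ = n / Σxi = 5/13 and a grid-based check can be written in a few lines of Python (the grid bounds and resolution are arbitrary choices):

```python
import numpy as np

lifetimes = np.array([2.0, 3.0, 1.0, 3.0, 4.0])  # the five observed lifetimes

# Closed form from the calculus above: lambda-hat = n / sum(x) = 5/13.
lam_hat = len(lifetimes) / lifetimes.sum()

# Numerical check: maximize the log likelihood 5 ln(lam) - 13 lam on a grid.
lam = np.linspace(0.01, 2.0, 2000)
loglik = len(lifetimes) * np.log(lam) - lam * lifetimes.sum()
lam_hat_numeric = lam[np.argmax(loglik)]
```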

The following example illustrates how we can use the method of maximum likelihood to
estimate multiple parameters at once.

Example 4. Normal distributions
Suppose the data x1, x2, . . . , xn is drawn from a N(µ, σ^2) distribution, where µ and σ are
unknown. Find the maximum likelihood estimate for the pair (µ, σ^2).

answer: Let's be precise and phrase this in terms of random variables and densities. Let
uppercase X1, . . . , Xn be i.i.d. N(µ, σ^2) random variables, and let lowercase xi be the value
Xi takes. The density for each Xi is

fXi(xi) = (1/(√(2π) σ)) e^(−(xi − µ)^2 / (2σ^2)).

Since the Xi are independent their joint pdf is the product of the individual pdfs:

f(x1, . . . , xn | µ, σ) = (1/(√(2π) σ))^n e^(−Σ_{i=1}^n (xi − µ)^2 / (2σ^2)).

For the fixed data x1, . . . , xn, the likelihood and log likelihood are

f(x1, . . . , xn | µ, σ) = (1/(√(2π) σ))^n e^(−Σ_{i=1}^n (xi − µ)^2 / (2σ^2)),

ln(f(x1, . . . , xn | µ, σ)) = −n ln(√(2π)) − n ln(σ) − Σ_{i=1}^n (xi − µ)^2 / (2σ^2).


Since ln(f(x1, . . . , xn | µ, σ)) is a function of the two variables µ, σ we use partial derivatives
to find the MLE. The easy value to find is µ̂:

∂/∂µ ln(f(x1, . . . , xn | µ, σ)) = Σ_{i=1}^n (xi − µ)/σ^2 = 0  ⇒  Σ_{i=1}^n xi = nµ  ⇒  µ̂ = (Σ_{i=1}^n xi)/n = x̄.

To find σ̂ we differentiate the log likelihood and solve for σ:

∂/∂σ ln(f(x1, . . . , xn | µ, σ)) = −n/σ + Σ_{i=1}^n (xi − µ)^2/σ^3 = 0  ⇒  σ̂^2 = (Σ_{i=1}^n (xi − µ)^2)/n.

We already know µ̂ = x̄, so we use that as the value for µ in the formula for σ̂. We get the
maximum likelihood estimates

µ̂ = x̄ = the mean of the data,

σ̂^2 = (1/n) Σ_{i=1}^n (xi − µ̂)^2 = (1/n) Σ_{i=1}^n (xi − x̄)^2 = the variance of the data.
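As a quick check in Python (the sample values are made up for illustration; note that numpy's default np.var divides by n, matching the MLE rather than the unbiased n − 1 estimator):

```python
import numpy as np

x = np.array([9.2, 10.1, 9.8, 10.5, 9.4])  # hypothetical measurements

mu_hat = x.mean()                      # MLE for mu: the sample mean
sigma2_hat = ((x - mu_hat)**2).mean()  # MLE for sigma^2: divides by n, not n - 1
```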

Example 5. Uniform distributions
Suppose our data x1, . . . , xn are independently drawn from a uniform distribution U(a, b).
Find the MLE estimate for a and b.

answer: This example is different from the previous ones in that we won't use calculus to
find the MLE. The density for U(a, b) is 1/(b − a) on [a, b]. Therefore our likelihood function is

f(x1, . . . , xn | a, b) = (1/(b − a))^n if all the xi are in the interval [a, b], and 0 otherwise.

This is maximized by making b − a as small as possible. The only restriction is that the
interval [a, b] must include all the data. Thus the MLE for the pair (a, b) is

â = min(x1, . . . , xn),  b̂ = max(x1, . . . , xn).
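In code the MLE is just the sample minimum and maximum; the data values here are hypothetical:

```python
# Hypothetical sample assumed to come from some U(a, b).
data = [2.3, 4.1, 3.7, 2.9, 4.8]

a_hat = min(data)  # MLE for a
b_hat = max(data)  # MLE for b
```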

Example 6. Capture/recapture method

The capture/recapture method is a way to estimate the size of a population in the wild.
The method assumes that each animal in the population is equally likely to be captured by
a trap.

Suppose 10 animals are captured, tagged and released. A few months later, 20 animals are
captured, examined, and released. 4 of these 20 are found to be tagged. Estimate the size
of the wild population using the MLE for the probability that a wild animal is tagged.

answer: Our unknown parameter n is the number of animals in the wild. Our data is that
4 out of 20 recaptured animals were tagged (and that there are 10 tagged animals). The
likelihood function is

P(data | n animals) = (n − 10 choose 16)(10 choose 4) / (n choose 20).

(The numerator is the number of ways to choose 16 animals from among the n − 10 untagged
ones times the number of ways to choose 4 out of the 10 tagged animals. The denominator
is the number of ways to choose 20 animals from the entire population of n.) We can use
R to compute that the likelihood function is maximized when n = 50. This should make
some sense. It says our best estimate is that the fraction of all animals that are tagged is
10/50, which equals the fraction of recaptured animals which are tagged.
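The notes use R for this maximization; an equivalent Python sketch uses exact rational arithmetic to avoid rounding (the search range 26 to 200 is an arbitrary choice). A small wrinkle: the discrete likelihood is exactly tied at n = 49 and n = 50, since the ratio L(50)/L(49) works out to 1.

```python
from fractions import Fraction
from math import comb

def likelihood(n):
    # P(4 of the 20 recaptured are tagged | n animals in the wild), for n >= 26.
    return Fraction(comb(n - 10, 16) * comb(10, 4), comb(n, 20))

ns = range(26, 201)                 # plausible population sizes to search
peak = max(likelihood(n) for n in ns)
maximizers = [n for n in ns if likelihood(n) == peak]  # [49, 50]
```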

Example 7. Hardy-Weinberg. Suppose that a particular gene occurs as one of two
alleles (A and a), where allele A has frequency θ in the population. That is, a random copy
of the gene is A with probability θ and a with probability 1 − θ. Since a diploid genotype
consists of two genes, the probability of each genotype is given by:

genotype:    AA     Aa           aa
probability: θ^2    2θ(1 − θ)    (1 − θ)^2

Suppose we test a random sample of people and find that k1 are AA, k2 are Aa, and k3 are
aa. Find the MLE of θ.

answer: The likelihood function is given by

P(k1, k2, k3 | θ) = (k1+k2+k3 choose k1)(k2+k3 choose k2)(k3 choose k3) θ^(2k1) (2θ(1 − θ))^(k2) ((1 − θ)^2)^(k3).

So the log likelihood is given by

constant + 2k1 ln(θ) + k2 ln(θ) + k2 ln(1 − θ) + 2k3 ln(1 − θ).

We set the derivative equal to zero:

(2k1 + k2)/θ − (k2 + 2k3)/(1 − θ) = 0.

Solving for θ, we find the MLE is

θ̂ = (2k1 + k2) / (2k1 + 2k2 + 2k3),

which is simply the fraction of A alleles among all the genes in the sampled population.
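With hypothetical genotype counts, the MLE formula is one line of Python:

```python
# Hypothetical genotype counts: k1 people AA, k2 Aa, k3 aa.
k1, k2, k3 = 30, 50, 20

# MLE: fraction of A alleles among all 2(k1 + k2 + k3) sampled genes.
theta_hat = (2 * k1 + k2) / (2 * (k1 + k2 + k3))
```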

4 Why we use the density to find the MLE for continuous
distributions

The idea for the maximum likelihood estimate is to find the value of the parameter(s) for
which the data has the highest probability. In this section we'll see that maximizing the
density is really doing just that for continuous distributions. We will do this by considering
a smaller version of the light bulb example.

Example 8. Suppose we have two light bulbs whose lifetimes follow an exponential(λ)
distribution. Suppose also that we independently measure their lifetimes and get data
x1 = 2 years and x2 = 3 years. Find the value of λ that maximizes the probability of this
data.

answer: The main paradox to deal with is that for a continuous distribution the probability
of a single value, say x1 = 2, is zero. We resolve this paradox by remembering that a single


measurement really means a range of values, e.g. in this example we might check the light
bulb once a day. So the data x1 = 2 years really means x1 is somewhere in a range of 1 day
around 2 years.

If the range is small we call it dx1. The probability that X1 is in the range is approximated
by fX1(x1|λ) dx1. This is illustrated in the figure below. The data value x2 is treated in
exactly the same way.

[Figure: plots of the densities fX1(x1|λ) and fX2(x2|λ), each with a small interval dx1 or
dx2 marked around the data value; the shaded area fXi(xi|λ) dxi approximates the probability
that Xi lands in that range.]

The usual relationship between density and probability for small ranges.

Since the data is collected independently the joint probability is the product of the individual
probabilities. Stated carefully

P (X1 in range, X2 in range|λ) ≈ fX1(x1|λ) dx1 · fX2(x2|λ) dx2
Finally, using the values x1 = 2 and x2 = 3 and the formula for an exponential pdf we have

P(X1 in range, X2 in range | λ) ≈ λe^(−2λ) dx1 · λe^(−3λ) dx2 = λ^2 e^(−5λ) dx1 dx2.

Now that we have a genuine probability we can look for the value of λ that maximizes it.
Looking at the formula above we see that the factor dx1 dx2 will play no role in finding the
maximum. So for the MLE we drop it and simply call the density the likelihood:

likelihood = f(x1, x2 | λ) = λ^2 e^(−5λ).

The value of λ that maximizes this is found just as in the examples above: λ̂ = 2/5.
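A grid-based check of this maximization in Python (grid resolution arbitrary); calculus gives the same answer, since d/dλ (2 ln λ − 5λ) = 2/λ − 5 = 0 at λ = 2/5:

```python
import numpy as np

# Likelihood lam^2 e^(-5 lam) evaluated on a grid of candidate values.
lam = np.linspace(0.01, 2.0, 2000)
lik = lam**2 * np.exp(-5 * lam)

lam_hat = lam[np.argmax(lik)]  # should land close to 2/5
```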

5 Appendix: Properties of the MLE

For the interested reader, we note several nice features of the MLE. These are quite technical
and will not be on any exams.

The MLE behaves well under transformations. That is, if p̂ is the MLE for p and g is a
one-to-one function, then g(p̂) is the MLE for g(p). For example, if σ̂ is the MLE for the
standard deviation σ then (σ̂)2 is the MLE for the variance σ2.

Furthermore, the MLE is asymptotically unbiased and has asymptotically minimal variance.
To explain these notions, note that the MLE is itself a random variable since the data is
random and the MLE is computed from the data. Let x1, x2, . . . be an infinite sequence of
samples from a distribution with parameter p. Let p̂n be the MLE for p based on the data
x1, . . . , xn.

Asymptotically unbiased means that as the amount of data grows, the mean of the MLE
converges to p. In symbols: E(p̂n)→ p as n→∞. Of course, we would like the MLE to be


close to p with high probability, not just on average, so the smaller the variance of the MLE
the better. Asymptotically minimal variance means that as the amount of data grows, the
MLE has the minimal variance among all unbiased estimators of p. In symbols: for any
unbiased estimator p̃n and ε > 0 we have that Var(p̃n) + ε > Var(p̂n) as n → ∞.


Bayesian Updating with Discrete Priors
Class 11, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to apply Bayes’ theorem to compute probabilities.

2. Be able to define and identify the roles of prior probability, likelihood (Bayes term),
posterior probability, data and hypothesis in the application of Bayes' theorem.

3. Be able to use a Bayesian update table to compute posterior probabilities.

2 Review of Bayes’ theorem

Recall that Bayes’ theorem allows us to ‘invert’ conditional probabilities. If H and D are
events, then:

P(H | D) = P(D | H)P(H) / P(D).
Our view is that Bayes’ theorem forms the foundation for inferential statistics. We will
begin to justify this view today.

2.1 The base rate fallacy

When we first learned Bayes’ theorem we worked an example about screening tests showing
that P (D|H) can be very different from P (H|D). In the appendix we work a similar example.
If you are not comfortable with Bayes’ theorem you should read the example in the appendix
now.

3 Terminology and Bayes’ theorem in tabular form

We now use a coin tossing problem to introduce terminology and a tabular format for Bayes’
theorem. This will provide a simple, uncluttered example that shows our main points.

Example 1. There are three types of coins which have different probabilities of landing
heads when tossed.

• Type A coins are fair, with probability 0.5 of heads

• Type B coins are bent and have probability 0.6 of heads

• Type C coins are bent and have probability 0.9 of heads

Suppose I have a drawer containing 5 coins: 2 of type A, 2 of type B, and 1 of type C. I
reach into the drawer and pick a coin at random. Without showing you the coin I flip it
once and get heads. What is the probability it is type A? Type B? Type C?


18.05 class 11, Bayesian Updating with Discrete Priors, Spring 2014 2

answer: Let A, B, and C be the events that the chosen coin was type A, type B, and type
C. Let D be the event that the toss is heads. The problem asks us to find

P (A|D), P (B|D), P (C|D).
Before applying Bayes’ theorem, let’s introduce some terminology.

• Experiment: pick a coin from the drawer at random, flip it, and record the result.

• Data: the result of our experiment. In this case the event D = ‘heads’. We think of
D as data that provides evidence for or against each hypothesis.

• Hypotheses: we are testing three hypotheses: the coin is type A, B or C.

• Prior probability: the probability of each hypothesis prior to tossing the coin (collecting
data). Since the drawer has 2 coins of type A, 2 of type B and 1 of type C we have

P (A) = 0.4, P (B) = 0.4, P (C) = 0.2.

• Likelihood: (This is the same likelihood we used for the MLE.) The likelihood function
is P (D|H), i.e., the probability of the data assuming that the hypothesis is true. Most
often we will consider the data as fixed and let the hypothesis vary. For example,
P (D|A) = probability of heads if the coin is type A. In our case the likelihoods are

P (D|A) = 0.5, P (D|B) = 0.6, P (D|C) = 0.9.

The name likelihood is so well established in the literature that we have to teach
it to you. However in colloquial language likelihood and probability are synonyms.
This leads to the likelihood function often being confused with the probability of a
hypothesis. Because of this we’d prefer to use the name Bayes’ term. However since
we are stuck with ‘likelihood’ we will try to use it very carefully and in a way that
minimizes any confusion.

• Posterior probability: the probability of each hypothesis given the data from tossing
the coin (i.e., the probability posterior to collecting the data):

P (A|D), P (B|D), P (C|D).
These posterior probabilities are what the problem asks us to find.

We now use Bayes’ theorem to compute each of the posterior probabilities. We are going
to write this out in complete detail so we can pick out each of the parts. (Remember that
the data D is that the toss was heads.)
First we organize the probabilities into a tree:

[Figure: probability tree. The root branches to A, B, C with probabilities 0.4, 0.4, 0.2;
each coin then branches to H and T, with P(H) = 0.5 for type A, 0.6 for type B, and 0.9
for type C.]

Probability tree for choosing and tossing a coin.


Bayes' theorem says, e.g., P(A|D) = P(D|A)P(A)/P(D). The denominator P(D) is computed
using the law of total probability:

P(D) = P(D|A)P(A) + P(D|B)P(B) + P(D|C)P(C) = 0.5 · 0.4 + 0.6 · 0.4 + 0.9 · 0.2 = 0.62.

Now each of the three posterior probabilities can be computed:

P(A|D) = P(D|A)P(A)/P(D) = (0.5 · 0.4)/0.62 = 0.2/0.62

P(B|D) = P(D|B)P(B)/P(D) = (0.6 · 0.4)/0.62 = 0.24/0.62

P(C|D) = P(D|C)P(C)/P(D) = (0.9 · 0.2)/0.62 = 0.18/0.62

Notice that the total probability P (D) is the same in each of the denominators and that it
is the sum of the three numerators. We can organize all of this very neatly in a Bayesian
update table:

hypothesis   prior    likelihood   Bayes numerator    posterior
H            P(H)     P(D|H)       P(D|H)P(H)         P(H|D)
A            0.4      0.5          0.2                0.3226
B            0.4      0.6          0.24               0.3871
C            0.2      0.9          0.18               0.2903
total        1                     0.62               1

The Bayes numerator is the product of the prior and the likelihood. We see in each of the
Bayes' formula computations above that the posterior probability is obtained by dividing
the Bayes numerator by P(D) = 0.62. We also see that the law of total probability says
that P(D) is the sum of the entries in the Bayes numerator column.
Bayesian updating: The process of going from the prior probability P (H) to the pos-
terior P (H|D) is called Bayesian updating. Bayesian updating uses the data to alter our
understanding of the probability of each of the possible hypotheses.
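The whole update table can be reproduced in a few lines of Python. This is a sketch, not part of the original notes; exact fractions avoid rounding error:

```python
from fractions import Fraction

# Priors for the three coin types and their probabilities of heads
prior   = {'A': Fraction(2, 5), 'B': Fraction(2, 5), 'C': Fraction(1, 5)}
p_heads = {'A': Fraction(1, 2), 'B': Fraction(3, 5), 'C': Fraction(9, 10)}

# Bayes numerator = prior * likelihood; the data D is 'the toss was heads'
numerator = {h: prior[h] * p_heads[h] for h in prior}
p_data = sum(numerator.values())          # law of total probability: P(D)
posterior = {h: numerator[h] / p_data for h in prior}

print(p_data)                 # 31/50, i.e. 0.62
print(float(posterior['B']))  # ≈ 0.3871, the largest posterior
```

Normalizing the numerators by their sum is exactly the "divide by P(D)" step in the table.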

3.1 Important things to notice

1. There are two types of probabilities: Type one is the standard probability of data, e.g.
the probability of heads is p = 0.9. Type two is the probability of the hypotheses, e.g.
the probability the chosen coin is type A, B or C. This second type has prior (before
the data) and posterior (after the data) values.

2. The posterior (after the data) probabilities for each hypothesis are in the last column.
We see that coin B is now the most probable, though its probability has decreased from
a prior probability of 0.4 to a posterior probability of 0.39. Meanwhile, the probability
of type C has increased from 0.2 to 0.29.

3. The Bayes numerator column determines the posterior probability column. To compute
the latter, we simply rescaled the Bayes numerator so that it sums to 1.


4. If all we care about is finding the most likely hypothesis, the Bayes numerator works as
well as the normalized posterior.

5. The likelihood column does not sum to 1. The likelihood function is not a probability
function.

6. The posterior probability represents the outcome of a ‘tug-of-war’ between the likelihood
and the prior. When calculating the posterior, a large prior may be deflated by a small
likelihood, and a small prior may be inflated by a large likelihood.

7. The maximum likelihood estimate (MLE) for Example 1 is hypothesis C, with a likeli-
hood P (D|C) = 0.9. The MLE is useful, but you can see in this example that it is not
the entire story, since type B has the greatest posterior probability.

Terminology in hand, we can express Bayes’ theorem in various ways:

P(H|D) = P(D|H)P(H)/P(D)

P(hypothesis|data) = P(data|hypothesis)P(hypothesis)/P(data)

With the data fixed, the denominator P (D) just serves to normalize the total posterior prob-
ability to 1. So we can also express Bayes’ theorem as a statement about the proportionality
of two functions of H (i.e., of the last two columns of the table).

P (hypothesis|data) ∝ P (data|hypothesis)P (hypothesis)

This leads to the most elegant form of Bayes’ theorem in the context of Bayesian updating:

posterior ∝ likelihood× prior

3.2 Prior and posterior probability mass functions

Earlier in the course we saw that it is convenient to use random variables and probability
mass functions. To do this we had to assign values to events (head is 1 and tails is 0). We
will do the same thing in the context of Bayesian updating.

Our standard notations will be:

• θ is the value of the hypothesis.

• p(θ) is the prior probability mass function of the hypothesis.

• p(θ|D) is the posterior probability mass function of the hypothesis given the data.

• p(D|θ) is the likelihood function. (This is not a pmf!)

In Example 1 we can represent the three hypotheses A, B, and C by θ = 0.5, 0.6, 0.9. For
the data we’ll let x = 1 mean heads and x = 0 mean tails. Then the prior and posterior
probabilities in the table define the prior and posterior probability mass functions.


Hypothesis θ   prior pmf p(θ)           posterior pmf p(θ|x = 1)
A   0.5   P(A) = p(0.5) = 0.4   P(A|D) = p(0.5|x = 1) = 0.3226
B   0.6   P(B) = p(0.6) = 0.4   P(B|D) = p(0.6|x = 1) = 0.3871
C   0.9   P(C) = p(0.9) = 0.2   P(C|D) = p(0.9|x = 1) = 0.2903

Here are plots of the prior and posterior pmf’s from the example.

[Figure: two bar plots over θ = 0.5, 0.6, 0.9, with vertical scale marks at 0.2 and 0.4. Caption: Prior pmf p(θ) and posterior pmf p(θ|x = 1) for Example 1.]

If the data was different then the likelihood column in the Bayesian update table would be
different. We can plan for different data by building the entire likelihood table ahead of
time. In the coin example there are two possibilities for the data: the toss is heads or the
toss is tails. So the full likelihood table has two likelihood columns:

hypothesis likelihood p(x|θ)
θ p(x = 0|θ) p(x = 1|θ)

0.5 0.5 0.5
0.6 0.4 0.6
0.9 0.1 0.9
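In code, the full likelihood table is just the Bernoulli pmf evaluated at each hypothesis. A small sketch (the variable names are ours, not from the notes):

```python
# Hypotheses: theta = probability of heads; data: x = 1 heads, x = 0 tails
thetas = [0.5, 0.6, 0.9]

def likelihood(x, theta):
    # Bernoulli pmf: p(x = 1 | theta) = theta, p(x = 0 | theta) = 1 - theta
    return theta if x == 1 else 1 - theta

# Rows are hypotheses, columns are the two possible data values
table = {theta: {x: likelihood(x, theta) for x in (0, 1)} for theta in thetas}
print(table[0.6])   # {0: 0.4, 1: 0.6}
```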

Example 2. Using the notation p(θ), etc., redo Example 1 assuming the flip was tails.

answer: Since the data has changed, the likelihood column in the Bayesian update table is
now for x = 0. That is, we must take the p(x = 0|θ) column from the likelihood table.

Bayes
hypothesis prior likelihood numerator posterior

θ p(θ) p(x = 0 | θ) p(x = 0 | θ)p(θ) p(θ |x = 0)
0.5 0.4 0.5 0.2 0.5263
0.6 0.4 0.4 0.16 0.4211
0.9 0.2 0.1 0.02 0.0526

total 1 0.38 1

Now the probability of type A has increased from 0.4 to 0.5263, while the probability of
type C has decreased from 0.2 to only 0.0526. Here are the corresponding plots:


[Figure: bar plots of the prior pmf p(θ) and the posterior pmf p(θ|x = 0) over θ = 0.5, 0.6, 0.9, with vertical scale marks at 0.2 and 0.4.]

3.3 Food for thought.

Suppose that in Example 1 you didn't know how many coins of each type were in the
drawer. You picked one at random and got heads. How would you go about deciding which
hypothesis (coin type), if any, was most supported by the data?

4 Updating again and again

In life we are continually updating our beliefs with each new experience of the world. In
Bayesian inference, after updating the prior to the posterior, we can take more data and
update again! For the second update, the posterior from the first data becomes the prior
for the second data.

Example 3. Suppose you have picked a coin as in Example 1. You flip it once and get
heads. Then you flip the same coin and get heads again. What is the probability that the
coin was type A? Type B? Type C?

answer: As we update several times the table gets big, so we use a smaller font to fit it in:

Bayes Bayes
hypothesis prior likelihood 1 numerator 1 likelihood 2 numerator 2 posterior 2

θ p(θ) p(x1 = 1|θ) p(x1 = 1|θ)p(θ) p(x2 = 1|θ) p(x2 = 1|θ)p(x1 = 1|θ)p(θ) p(θ|x1 = 1, x2 = 1)
0.5 0.4 0.5 0.2 0.5 0.1 0.2463
0.6 0.4 0.6 0.24 0.6 0.144 0.3547
0.9 0.2 0.9 0.18 0.9 0.162 0.3990

total 1 0.406 1

Note that the second Bayes numerator is computed by multiplying the first Bayes numerator
and the second likelihood; since we are only interested in the final posterior, there is no
need to normalize until the last step. As shown in the last column and plot, after two heads
the type C hypothesis has finally taken the lead!
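The two-stage update is easy to script: apply the same one-toss update twice. (Here we normalize at every step; as noted above, normalizing only at the end gives the same final posterior.) A sketch with exact fractions:

```python
from fractions import Fraction

prior = {0.5: Fraction(2, 5), 0.6: Fraction(2, 5), 0.9: Fraction(1, 5)}
heads_prob = {0.5: Fraction(1, 2), 0.6: Fraction(3, 5), 0.9: Fraction(9, 10)}

def update(dist, x):
    """One Bayesian update for a single toss (x = 1 heads, x = 0 tails)."""
    numer = {t: p * (heads_prob[t] if x == 1 else 1 - heads_prob[t])
             for t, p in dist.items()}
    total = sum(numer.values())               # P(data) at this stage
    return {t: v / total for t, v in numer.items()}

posterior = update(update(prior, 1), 1)       # two heads in a row
print(float(posterior[0.9]))                  # ≈ 0.3990: type C now leads
```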


[Figure: bar plots over θ = 0.5, 0.6, 0.9 of the prior p(θ), the first posterior p(θ|x1 = 1), and the second posterior p(θ|x1 = 1, x2 = 1), with vertical scale marks at 0.2 and 0.4.]

5 Appendix: the base rate fallacy

Example 4. A screening test for a disease is both sensitive and specific. By that we mean
it is usually positive when testing a person with the disease and usually negative when
testing someone without the disease. Let’s assume the true positive rate is 99% and the
false positive rate is 2%. Suppose the prevalence of the disease in the general population is
0.5%. If a random person tests positive, what is the probability that they have the disease?

answer: As a review we first do the computation using trees. Next we will redo the
computation using tables.

Let's use the notation established above for hypotheses and data: let H+ be the hypothesis
(event) that the person has the disease and let H− be the hypothesis that they do not. Likewise,
let T+ and T− represent the data of a positive and negative screening test respectively. We
are asked to compute P(H+|T+).

We are given

P(T+|H+) = 0.99, P(T+|H−) = 0.02, P(H+) = 0.005.

From these we can compute the false negative and true negative rates:

P(T−|H+) = 0.01, P(T−|H−) = 0.98

All of these probabilities can be displayed quite nicely in a tree.

[Figure: tree whose first level branches to H+ and H− with probabilities 0.005 and 0.995; from each, the second level branches to T+ and T− with probabilities 0.99/0.01 and 0.02/0.98 respectively.]

Bayes' theorem yields

P(H+|T+) = P(T+|H+)P(H+)/P(T+) = 0.99 · 0.005/(0.99 · 0.005 + 0.02 · 0.995) = 0.19920 ≈ 20%

Now we redo this calculation using a Bayesian update table:


Bayes
hypothesis prior likelihood numerator posterior

H P (H) P (T+|H) P (T+|H)P (H) P (H|T+)
H+ 0.005 0.99 0.00495 0.19920
H− 0.995 0.02 0.01990 0.80080

total 1 NO SUM 0.02485 1

The table shows that the posterior probability P (H+|T+) that a person with a positive test
has the disease is about 20%. This is far less than the sensitivity of the test (99%) but
much higher than the prevalence of the disease in the general population (0.5%).
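The base-rate computation is three lines of arithmetic; a sketch, using the numbers from Example 4:

```python
p_disease = 0.005            # prevalence, P(H+)
sensitivity = 0.99           # true positive rate, P(T+|H+)
false_pos = 0.02             # false positive rate, P(T+|H-)

# Law of total probability for a positive test
p_pos = sensitivity * p_disease + false_pos * (1 - p_disease)
# Bayes' theorem
p_disease_given_pos = sensitivity * p_disease / p_pos
print(round(p_disease_given_pos, 4))   # 0.1992, i.e. about 20%
```

The small prior (0.005) is what drags the posterior far below the 99% sensitivity; this is the base rate fallacy in one formula.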

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Bayesian Updating: Probabilistic Prediction
Class 12, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to use the law of total probability to compute prior and posterior predictive
probabilities.

2 Introduction

In the previous class we looked at updating the probability of hypotheses based on data.
We can also use the data to update the probability of each possible outcome of a future
experiment. In this class we will look at how this is done.

2.1 Probabilistic prediction; words of estimative probability (WEP)

There are many ways to word predictions:

• Prediction: “It will rain tomorrow.”

• Prediction using words of estimative probability (WEP): “It is likely to rain tomor-
row.”

• Probabilistic prediction: “Tomorrow it will rain with probability 60% (and not rain
with probability 40%).”

Each type of wording is appropriate at different times.

In this class we are going to focus on probabilistic prediction and precise quantitative state-
ments. You can see http://en.wikipedia.org/wiki/Words_of_Estimative_Probability
for an interesting discussion about the appropriate use of words of estimative probability.
The article also contains a list of weasel words such as ‘might’, ‘cannot rule out’, ‘it’s
conceivable’ that should be avoided as almost certain to cause confusion.

There are many places where we want to make a probabilistic prediction. Examples are

• Medical treatment outcomes

• Weather forecasting

• Climate change

• Sports betting

• Elections

• . . .


18.05 class 12, Bayesian Updating: Probabilistic Prediction, Spring 2014 2

These are all situations where there is uncertainty about the outcome and we would like as
precise a description of what could happen as possible.

3 Predictive Probabilities

Probabilistic prediction simply means assigning a probability to each possible outcome of
an experiment.

Recall the coin example from the previous class notes: there are three types of coins which
are indistinguishable apart from their probability of landing heads when tossed.

• Type A coins are fair, with probability 0.5 of heads

• Type B coins have probability 0.6 of heads

• Type C coins have probability 0.9 of heads

You have a drawer containing 4 coins: 2 of type A, 1 of type B, and 1 of type C. You reach
into the drawer and pick a coin at random. We let A stand for the event ‘the chosen coin
is of type A’. Likewise for B and C.

3.1 Prior predictive probabilities

Before taking data we can compute the probability that our chosen coin will land heads (or
tails) if flipped. Let DH be the event that it lands heads and let DT be the event that it lands
tails. We can use the law of total probability to determine the probabilities of these events. Either
by drawing a tree or directly proceeding to the algebra, we get:

[Figure: tree whose first level ('Coin type') branches to A, B, C with probabilities 0.5, 0.25, 0.25; the second level ('Flip result') branches to DH, DT with probabilities 0.5/0.5, 0.6/0.4, and 0.9/0.1 respectively.]

P (DH) = P (DH |A)P (A) + P (DH |B)P (B) + P (DH |C)P (C)
= 0.5 · 0.5 + 0.6 · 0.25 + 0.9 · 0.25 = 0.625

P (DT ) = P (DT |A)P (A) + P (DT |B)P (B) + P (DT |C)P (C)
= 0.5 · 0.5 + 0.4 · 0.25 + 0.1 · 0.25 = 0.375

Definition: These probabilities give a (probabilistic) prediction of what will happen if the
coin is tossed. Because they are computed before we collect any data they are called prior
predictive probabilities.
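As a quick sketch (names are ours), the prior predictive probabilities are just prior-weighted averages:

```python
prior   = {'A': 0.5, 'B': 0.25, 'C': 0.25}   # 2 of the 4 coins are type A
p_heads = {'A': 0.5, 'B': 0.6, 'C': 0.9}

p_DH = sum(p_heads[h] * prior[h] for h in prior)         # P(D_H)
p_DT = sum((1 - p_heads[h]) * prior[h] for h in prior)   # P(D_T)
print(p_DH, p_DT)    # ≈ 0.625 and 0.375 (up to floating-point rounding)
```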

3.2 Posterior predictive probabilities

Suppose we flip the coin once and it lands heads. We now have data D, which we can use
to update the prior probabilities of our hypotheses to posterior probabilities. Last class we
learned to use a Bayes table to facilitate this computation:


Bayes
hypothesis prior likelihood numerator posterior

H P (H) P (D|H) P (D|H)P (H) P (H|D)
A 0.5 0.5 0.25 0.4
B 0.25 0.6 0.15 0.24
C 0.25 0.9 0.225 0.36

total 1 0.625 1

Having flipped the coin once and gotten heads, we can compute the probability that our
chosen coin will land heads (or tails) if flipped a second time. We proceed just as before, but
using the posterior probabilities P (A|D), P (B|D), P (C|D) in place of the prior probabilities
P (A), P (B), P (C).

[Figure: the same tree, but with first-level ('Coin type') weights replaced by the posterior probabilities 0.4, 0.24, 0.36 for A, B, C; the second level ('Flip result') branches to DH, DT with probabilities 0.5/0.5, 0.6/0.4, and 0.9/0.1 respectively.]

P (DH |D) = P (DH |A)P (A|D) + P (DH |B)P (B|D) + P (DH |C)P (C|D)
= 0.5 · 0.4 + 0.6 · 0.24 + 0.9 · 0.36 = 0.668

P (DT |D) = P (DT |A)P (A|D) + P (DT |B)P (B|D) + P (DT |C)P (C|D)
= 0.5 · 0.4 + 0.4 · 0.24 + 0.1 · 0.36 = 0.332

Definition: These probabilities give a (probabilistic) prediction of what will happen if the
coin is tossed again. Because they are computed after collecting data and updating the
prior to the posterior, they are called posterior predictive probabilities.

Note that heads on the first toss increases the probability of heads on the second toss.
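The posterior predictive computation is the same weighted average, now with posterior weights. A sketch:

```python
posterior = {'A': 0.4, 'B': 0.24, 'C': 0.36}   # after one observed head
p_heads   = {'A': 0.5, 'B': 0.6, 'C': 0.9}

# P(D_H | D): average the heads-probabilities with posterior weights
p_DH_given_D = sum(p_heads[h] * posterior[h] for h in posterior)
print(round(p_DH_given_D, 3))   # 0.668, up from the prior predictive 0.625
```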

3.3 Review

Here’s a succinct description of the preceding sections that may be helpful:

Each hypothesis gives a different probability of heads, so the total probability of heads is
a weighted average. For the prior predictive probability of heads, the weights are given by
the prior probabilities of the hypotheses. For the posterior predictive probability of heads,
the weights are given by the posterior probabilities of the hypotheses.

Remember: Prior and posterior probabilities are for hypotheses. Prior predictive and
posterior predictive probabilities are for data. To keep this straight, remember that the
latter predict future data.


Bayesian Updating: Odds
Class 12, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to convert between odds and probability.

2. Be able to update prior odds to posterior odds using Bayes factors.

3. Understand how Bayes factors measure the extent to which data provides evidence for
or against a hypothesis.

2 Odds

When comparing two events, it is common to phrase probability statements in terms of odds.

Definition: The odds of event E versus event E′ are the ratio of their probabilities P(E)/P(E′).
If unspecified, the second event is assumed to be the complement Ec. So the odds of E are:

O(E) = P(E)/P(Ec).

For example, O(rain) = 2 means that the probability of rain is twice the probability of no
rain (2/3 versus 1/3). We might say ‘the odds of rain are 2 to 1.’

Example. For a fair coin, O(heads) = (1/2)/(1/2) = 1. We might say the odds of heads are 1 to
1 or fifty-fifty.

Example. For a standard die, the odds of rolling a 4 are (1/6)/(5/6) = 1/5. We might say the odds
are '1 to 5 for' or '5 to 1 against' rolling a 4.

Example. The probability of a pair in a five card poker hand is 0.42257. So the odds of a
pair are 0.42257/(1-0.42257) = 0.73181.

We can go back and forth between probability and odds as follows.

Conversion formulas: if P(E) = p then O(E) = p/(1 − p). If O(E) = q then P(E) = q/(1 + q).

Notes:
1. The second formula simply solves q = p/(1 − p) for p.
2. Probabilities are between 0 and 1, while odds range from 0 to ∞.
3. The property P(Ec) = 1 − P(E) becomes O(Ec) = 1/O(E).

Example. Let F be the event that a five card poker hand is a full house. Then P(F) =
0.00145214, so O(F) = 0.0014521/(1 − 0.0014521) = 0.0014542.
The odds of not having a full house are O(Fc) = (1 − 0.0014521)/0.0014521 = 687 = 1/O(F).


18.05 class 12, Bayesian Updating: Odds, Spring 2014 2

4. If P (E) or O(E) is small then O(E) ≈ P (E). This follows from the conversion formulas.

Example. In the poker example where F = ‘full house’ we saw that P (F ) and O(F ) differ
only in the fourth significant digit.
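The conversion formulas are one-liners; a sketch (function names are ours):

```python
def prob_to_odds(p):
    """O(E) = p / (1 - p)."""
    return p / (1 - p)

def odds_to_prob(q):
    """Inverse conversion: P(E) = q / (1 + q)."""
    return q / (1 + q)

print(prob_to_odds(2/3))           # ≈ 2: 'the odds of rain are 2 to 1'
print(odds_to_prob(2))             # back to ≈ 2/3
print(prob_to_odds(0.00145214))    # ≈ 0.0014542: for small p, odds ≈ p
```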

3 Updating odds

3.1 Introduction

In Bayesian updating, we used the likelihood of data to update prior probabilities of hy-
potheses to posterior probabilities. In the language of odds, we will update prior odds to
posterior odds. One of our key points will be that the data can provide evidence supporting
or negating a hypothesis depending on whether its posterior odds are greater or less than
its prior odds.

3.2 Example: Marfan syndrome

Marfan syndrome is a genetic disease of connective tissue that occurs in 1 of every 15000
people. The main ocular features of Marfan syndrome include bilateral ectopia lentis (lens
dislocation), myopia and retinal detachment. About 70% of people with Marfan syndrome
have at least one of these ocular features; only 7% of people without Marfan syndrome do.
(We don’t guarantee the accuracy of these numbers, but they will work perfectly well for
our example.)

If a person has at least one of these ocular features, what are the odds that they have
Marfan syndrome?

answer: This is a standard Bayesian updating problem. Our hypotheses are:
M = ‘the person has Marfan syndrome’
M c = ‘the person does not have Marfan syndrome’
The data is:
F = ‘the person has at least one ocular feature’.

We are given the prior probability of M and the likelihoods of F given M or M c:

P (M) = 1/15000, P (F |M) = 0.7, P (F |M c) = 0.07.

As before, we can compute the posterior probabilities using a table:

Bayes
hypothesis prior likelihood numerator posterior

H P (H) P (F |H) P (F |H)P (H) P (H|F )
M 0.000067 0.7 0.0000467 0.00066
M c 0.999933 0.07 0.069995 0.99933

total 1 0.07004 1

First we find the prior odds:

O(M) = P(M)/P(Mc) = (1/15000)/(14999/15000) = 1/14999 ≈ 0.000067.


The posterior odds are given by the ratio of the posterior probabilities or the Bayes numer-
ators, since the normalizing factor will be the same in both numerator and denominator.

O(M|F) = P(M|F)/P(Mc|F) = P(F|M)P(M)/(P(F|Mc)P(Mc)) = 0.000667.

The posterior odds are a factor of 10 larger than the prior odds. In that sense, having an
ocular feature is strong evidence in favor of the hypothesis M . However, because the prior
odds are so small, it is still highly unlikely the person has Marfan syndrome.
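Numerically, the update is one multiplication: the prior odds times the likelihood ratio 0.7/0.07. A sketch:

```python
prior_odds = (1 / 15000) / (14999 / 15000)   # O(M) = 1/14999 ≈ 0.000067
likelihood_ratio = 0.7 / 0.07                # = 10
posterior_odds = likelihood_ratio * prior_odds
print(round(posterior_odds, 6))              # 0.000667
```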

4 Bayes factors and strength of evidence

The factor of 10 in the previous example is called a Bayes factor. The exact definition is
the following.

Definition: For a hypothesis H and data D, the Bayes factor is the ratio of the likelihoods:

Bayes factor = P(D|H)/P(D|Hc).

Let's see exactly where the Bayes factor arises in updating odds. We have

O(H|D) = P(H|D)/P(Hc|D)
       = [P(D|H)P(H)] / [P(D|Hc)P(Hc)]
       = [P(D|H)/P(D|Hc)] · [P(H)/P(Hc)]
       = [P(D|H)/P(D|Hc)] · O(H)

posterior odds = Bayes factor × prior odds

From this formula, we see that the Bayes factor (BF) tells us whether the data provides
evidence for or against the hypothesis.

• If BF > 1 then the posterior odds are greater than the prior odds. So the data
provides evidence for the hypothesis.

• If BF < 1 then the posterior odds are less than the prior odds. So the data provides evidence against the hypothesis.

• If BF = 1 then the prior and posterior odds are equal. So the data provides no evidence either way.

The following example is taken from the textbook Information Theory, Inference, and Learning Algorithms by David J. C. Mackay, who has this to say regarding trial evidence.

In my view, a jury's task should generally be to multiply together carefully evaluated likelihood ratios from each independent piece of admissible evidence with an equally carefully reasoned prior probability. This view is shared by many statisticians but learned British appeal judges recently disagreed and actually overturned the verdict of a trial because the jurors had been taught to use Bayes' theorem to handle complicated DNA evidence.

Example 1. Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type 'O' blood. The blood groups of the two traces are found to be of type 'O' (a common type in the local population, having frequency 60%) and type 'AB' (a rare type, with frequency 1%). Does this data (type 'O' and 'AB' blood were found at the scene) give evidence in favor of the proposition that Oliver was one of the two people present at the scene of the crime?

answer: There are two hypotheses:
S = 'Oliver and another unknown person were at the scene of the crime'
Sc = 'two unknown people were at the scene of the crime'
The data is:
D = 'type O and AB blood were found'

The Bayes factor for Oliver's presence is BFOliver = P(D|S)/P(D|Sc). We compute the numerator and denominator of this separately.

The data says that both type O and type AB blood were found. If Oliver was at the scene then 'type O' blood would be there. So P(D|S) is the probability that the other person had type AB blood. We are told this is 0.01, so P(D|S) = 0.01.
If Oliver was not at the scene then there were two random people, one with type O and one with type AB blood. The probability of this is 2 · 0.6 · 0.01. The factor of 2 is because there are two ways this can happen: the first person is type O and the second is type AB, or vice versa.*

Thus the Bayes factor for Oliver's presence is

BFOliver = P(D|S)/P(D|Sc) = 0.01/(2 · 0.6 · 0.01) = 0.83.

Since BFOliver < 1, the data provides (weak) evidence against Oliver being at the scene.

*We have assumed the blood types of the two people are independent. This is not precisely true, but for a large population it is close enough. The exact probability is 2 · (NO/N) · (NAB/(N − 1)), where NO is the number of people with type O blood, NAB the number with type AB blood, and N the size of the population. We have NO/N = 0.6. For large N we have N ≈ N − 1, so NAB/(N − 1) ≈ 0.01. This shows the probability is approximately 2 · 0.6 · 0.01 as claimed.

Example 2. Another suspect, Alberto, is found to have type 'AB' blood. Do the same data give evidence in favor of the proposition that Alberto was one of the two people present at the crime?

answer: Reusing the above notation with Alberto in place of Oliver we have:

BFAlberto = P(D|S)/P(D|Sc) = 0.6/(2 · 0.6 · 0.01) = 50.

Since BFAlberto ≫ 1, the data provides strong evidence in favor of Alberto being at the scene.

Notes:
1. In both examples, we have only computed the Bayes factor, not the posterior odds. To compute the latter, we would need to know the prior odds that Oliver (or Alberto) was at the scene based on other evidence.
2. Note that if 50% of the population had type O blood instead of 60%, then Oliver's Bayes factor would be 1 (neither for nor against). More generally, the break-even point for blood type evidence is when the proportion of the suspect's blood type in the general population equals the proportion of the suspect's blood type among those who left blood at the scene.
4.1 Updating again and again

Suppose we collect data in two stages, first D1, then D2. We have seen in our dice and coin examples that the final posterior can be computed all at once or in two stages, where we first update the prior using the likelihoods for D1 and then update the resulting posterior using the likelihoods for D2. The latter approach works whenever likelihoods multiply:

P(D1, D2|H) = P(D1|H)P(D2|H).

Since likelihoods are conditioned on hypotheses, we say that D1 and D2 are conditionally independent if the above equation holds for every hypothesis H.

Example. There are five dice in a drawer, with 4, 6, 8, 12, and 20 sides (these are the hypotheses). I pick a die at random and roll it twice. The first roll gives 7. The second roll gives 11. Are these results conditionally independent? Are they independent?

answer: These results are conditionally independent. For example, for the hypothesis of the 8-sided die we have:

P(7 on roll 1 | 8-sided die) = 1/8
P(11 on roll 2 | 8-sided die) = 0
P(7 on roll 1, 11 on roll 2 | 8-sided die) = 0

For the hypothesis of the 20-sided die we have:

P(7 on roll 1 | 20-sided die) = 1/20
P(11 on roll 2 | 20-sided die) = 1/20
P(7 on roll 1, 11 on roll 2 | 20-sided die) = (1/20)^2

However, the results of the rolls are not independent. That is:

P(7 on roll 1, 11 on roll 2) ≠ P(7 on roll 1)P(11 on roll 2).

Intuitively, this is because a 7 on roll 1 allows us to rule out the 4- and 6-sided dice, making an 11 on roll 2 more likely. Let's check this intuition by computing both sides precisely.
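Before working through the fractions by hand, here is an exact-arithmetic check of both sides (a sketch using Python's fractions module; the hand computation that follows arrives at the same values):

```python
from fractions import Fraction

sides = [4, 6, 8, 12, 20]
prior = {n: Fraction(1, 5) for n in sides}   # one die of each kind

def p_roll(k, n):
    # Probability of rolling k on a fair n-sided die
    return Fraction(1, n) if k <= n else Fraction(0)

p7  = sum(p_roll(7, n)  * prior[n] for n in sides)
p11 = sum(p_roll(11, n) * prior[n] for n in sides)
# Joint probability: the two rolls ARE conditionally independent given the die
p_joint = sum(p_roll(7, n) * p_roll(11, n) * prior[n] for n in sides)

print(p7, p11)        # 31/600 2/75
print(p_joint)        # 17/9000
print(p7 * p11)       # 31/22500 -- not equal, so the rolls are dependent
```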
On the righthand side we have:

P(7 on roll 1) = (1/5)(1/8) + (1/5)(1/12) + (1/5)(1/20) = 31/600

P(11 on roll 2) = (1/5)(1/12) + (1/5)(1/20) = 2/75

On the lefthand side we have:

P(7 on roll 1, 11 on roll 2) = P(11 on roll 2 | 7 on roll 1)P(7 on roll 1)
= ((10/31)(1/12) + (6/31)(1/20)) · (31/600) = (17/465) · (31/600) = 17/9000

Here 10/31 and 6/31 are the posterior probabilities of the 12- and 20-sided dice given a 7 on roll 1. We conclude that, without conditioning on hypotheses, the rolls are not independent.

Returning to the general setup, if D1 and D2 are conditionally independent for H and Hc then it makes sense to consider each Bayes factor independently:

BFi = P(Di|H)/P(Di|Hc).

The prior odds of H are O(H). The posterior odds after D1 are

O(H|D1) = BF1 · O(H),

and the posterior odds after D1 and D2 are

O(H|D1, D2) = BF2 · O(H|D1) = BF2 · BF1 · O(H).

We have the beautifully simple notion that updating with new data just amounts to multiplying the current posterior odds by the Bayes factor of the new data.

Example 3. Other symptoms of Marfan syndrome

Recall from the earlier example that the Bayes factor for at least one ocular feature (F) is

BFF = P(F|M)/P(F|Mc) = 0.7/0.07 = 10.

The wrist sign (W) is the ability to wrap one hand around your other wrist to cover your pinky nail with your thumb. Assume 10% of the population have the wrist sign, while 90% of people with Marfan's have it. Therefore the Bayes factor for the wrist sign is

BFW = P(W|M)/P(W|Mc) = 0.9/0.1 = 9.

We will assume that F and W are conditionally independent symptoms. That is, among people with Marfan syndrome, ocular features and the wrist sign are independent, and among people without Marfan syndrome, ocular features and the wrist sign are independent. Given this assumption, the posterior odds of Marfan syndrome for someone with both an ocular feature and the wrist sign are

O(M|F, W) = BFW · BFF · O(M) = 9 · 10 · (1/14999) ≈ 6/1000.

We can convert the posterior odds back to probability, but since the odds are so small the result is nearly the same:

P(M|F, W) ≈ 6/(1000 + 6) ≈ 0.596%.

So ocular features and the wrist sign are both strong evidence in favor of the hypothesis M, and taken together they are very strong evidence. Again, because the prior odds are so small, it is still unlikely that the person has Marfan syndrome, but at this point it might be worth undergoing further testing given the potentially fatal consequences of the disease (such as aortic aneurysm or dissection).

Note also that if a person has exactly one of the two symptoms, then the product of the Bayes factors is near 1 (either 9/10 or 10/9). So the two pieces of data essentially cancel each other out with regard to the evidence they provide for Marfan's syndrome.

5 Log odds

In practice, people often find it convenient to work with the natural log of the odds in place of odds. Naturally enough, these are called the log odds. The Bayesian update formula

O(H|D1, D2) = BF2 · BF1 · O(H)

becomes

ln(O(H|D1, D2)) = ln(BF2) + ln(BF1) + ln(O(H)).

We can interpret the above formula for the posterior log odds as the sum of the prior log odds and all the evidence ln(BFi) provided by the data. Note that by taking logs, evidence in favor (BFi > 1) is positive and evidence against (BFi < 1) is negative.

To avoid lengthier computations, we will work with odds rather than log odds in this course. Log odds are nice because sums are often more intuitive than products. Log odds also play a central role in logistic regression, an important statistical model related to linear regression.

Bayesian Updating with Continuous Priors
Class 13, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1.
Understand a parameterized family of distributions as representing a continuous range of hypotheses for the observed data.

2. Be able to state Bayes' theorem and the law of total probability for continuous densities.

3. Be able to apply Bayes' theorem to update a prior probability density function to a posterior pdf given data and a likelihood function.

4. Be able to interpret and compute posterior predictive probabilities.

2 Introduction

Up to now we have only done Bayesian updating when we had a finite number of hypotheses, e.g. our dice example had five hypotheses (4, 6, 8, 12 or 20 sides). Now we will study Bayesian updating when there is a continuous range of hypotheses. The Bayesian update process will be essentially the same as in the discrete case. As usual when moving from discrete to continuous, we will need to replace the probability mass function by a probability density function, and sums by integrals.

The first few sections of this note are devoted to working with pdfs. In particular we will cover the law of total probability and Bayes' theorem. We encourage you to focus on how these are essentially identical to the discrete versions. After that, we will apply Bayes' theorem and the law of total probability to Bayesian updating.

3 Examples with continuous ranges of hypotheses

Here are three standard examples with continuous ranges of hypotheses.

Example 1. Suppose you have a system that can succeed or fail with probability p. Then we can hypothesize that p is anywhere in the range [0, 1]. That is, we have a continuous range of hypotheses. We will often model this example with a 'bent' coin with unknown probability p of heads.

Example 2. The lifetime of a certain isotope is modeled by an exponential distribution exp(λ). In principle, the mean lifetime 1/λ can be any real number in (0, ∞).

Example 3. We are not restricted to a single parameter.
In principle, the parameters µ and σ of a normal distribution can be any real numbers in (−∞, ∞) and (0, ∞), respectively. If we model gestational length for single births by a normal distribution, then from millions of data points we know that µ is about 40 weeks and σ is about one week.

In all of these examples we modeled the random process giving rise to the data by a distribution with parameters, called a parametrized distribution. Every possible choice of the parameter(s) is a hypothesis, e.g. we can hypothesize that the probability of success in Example 1 is p = 0.7313. We have a continuous set of hypotheses because we could take any value between 0 and 1.

4 Notational conventions

4.1 Parametrized models

As in the examples above, our hypotheses often take the form 'a certain parameter has value θ'. We will often use the letter θ to stand for an arbitrary hypothesis. This will leave symbols like p, f, and x to take their usual meanings as pmf, pdf, and data. Also, rather than saying 'the hypothesis that the parameter of interest has value θ' we will simply say 'the hypothesis θ'.

4.2 Big and little letters

We have two parallel notations for outcomes and probability:

1. (Big letters) Event A, probability function P(A).
2. (Little letters) Value x, pmf p(x) or pdf f(x).

These notations are related by P(X = x) = p(x), where x is a value of the discrete random variable X and 'X = x' is the corresponding event.

We carry these notations over to the probabilities used in Bayesian updating.

1. (Big letters) From hypotheses H and data D we compute several associated probabilities: P(H), P(D), P(H|D), P(D|H). In the coin example we might have H = 'the chosen coin has probability 0.6 of heads', D = 'the flip was heads', and P(D|H) = 0.6.

2.
(Small letters) Hypothesis values θ and data values x both have probabilities or proba­ bility densities: p(θ) p(x) p(θ|x) p(x|θ) f(θ) f(x) f(θ|x) f(x|θ) In the coin example we might have θ = 0.6 and x = 1, so p(x|θ) = 0.6. We might also write p(x = 1|θ = 0.6) to emphasize the values of x and θ, but we will never just write p(1|0.6) because it is unclear which value is x and which is θ. Although we will still use both types of notation, from now on we will mostly use the small letter notation involving pmfs and pdfs. Hypotheses will usually be parameters represented by Greek letters (θ, λ, µ, σ, . . . ) while data values will usually be represented by English letters (x, xi, y, . . . ). 3 5 6 18.05 class 13, Bayesian Updating with Continuous Priors, Spring 2014 Quick review of pdf and probability Suppose X is a random variable with pdf f(x). Recall f(x) is a density; its units are probability/(units of x). x f(x) c d P (c ≤ X ≤ d) x f(x) x dx probability f(x)dx The probability that the value of X is in [c, d] is given by l d f(x) dx. c The probability that X is in an infinitesimal range dx around x is f(x) dx. In fact, the integral formula is just the ‘sum’ of these infinitesimal probabilities. We can visualize these probabilities by viewing the integral as area under the graph of f(x). In order to manipulate probabilities instead of densities in what follows, we will make frequent use of the notion that f(x) dx is the probability that X is in an infinitesimal range around x of width dx. Please make sure that you fully understand this notion. Continuous priors, discrete likelihoods In the Bayesian framework we have probabilities of hypotheses –called prior and posterior probabilities– and probabilities of data given a hypothesis –called likelihoods. In earlier classes both the hypotheses and the data had discrete ranges of values. We saw in the introduction that we might have a continuous range of hypotheses. 
The same is true for the data, but for today we will assume that our data can only take a discrete set of values. In this case, the likelihood of data x given hypothesis θ is written using a pmf: p(x|θ).

We will use the following coin example to explain these notions. We will carry this example through in each of the succeeding sections.

Example 4. Suppose we have a bent coin with unknown probability θ of heads. The value of θ is random and could be anywhere between 0 and 1. For this and the examples that follow we'll suppose that the value of θ follows a distribution with continuous prior probability density f(θ) = 2θ. We have a discrete likelihood because tossing a coin has only two outcomes, x = 1 for heads and x = 0 for tails:

p(x = 1|θ) = θ,   p(x = 0|θ) = 1 − θ.

Think: This can be tricky to wrap your mind around. We have a coin with an unknown probability θ of heads. The value of the parameter θ is itself random and has a prior pdf f(θ). It may help to see that the discrete examples we did in previous classes are similar. For example, we had a coin that might have probability of heads 0.5, 0.6, or 0.9. So, we called our hypotheses H0.5, H0.6, H0.9 and these had prior probabilities P(H0.5) etc. In other words, we had a coin with an unknown probability of heads, we had hypotheses about that probability and each of these hypotheses had a prior probability.

7 The law of total probability

The law of total probability for continuous probability distributions is essentially the same as for discrete distributions. We replace the prior pmf by a prior pdf and the sum by an integral. We start by reviewing the law for the discrete case.

Recall that for a discrete set of hypotheses H1, H2, . . . , Hn the law of total probability says

P(D) = Σ_{i=1}^n P(D|Hi) P(Hi).   (1)

This is the total prior probability of D because we used the prior probabilities P(Hi).

In the little letter notation, with θ1, θ2, . . . , θn for hypotheses and x for data, the law of total probability is written

p(x) = Σ_{i=1}^n p(x|θi) p(θi).   (2)

We also called this the prior predictive probability of the outcome x to distinguish it from the prior probability of the hypothesis θ.

Likewise, there is a law of total probability for continuous pdfs. We state it as a theorem using little letter notation.

Theorem. Law of total probability. Suppose we have a continuous parameter θ in the range [a, b], and discrete random data x. Assume θ is itself random with density f(θ) and that x and θ have likelihood p(x|θ). In this case, the total probability of x is given by the formula

p(x) = ∫_a^b p(x|θ) f(θ) dθ.   (3)

Proof. Our proof will be by analogy to the discrete version: the probability term p(x|θ)f(θ) dθ is perfectly analogous to the term p(x|θi)p(θi) in Equation 2 (or the term P(D|Hi)P(Hi) in Equation 1). Continuing the analogy: the sum in Equation 2 becomes the integral in Equation 3.

As in the discrete case, when we think of θ as a hypothesis explaining the probability of the data we call p(x) the prior predictive probability for x.

Example 5. (Law of total probability.) Continuing with Example 4. We have a bent coin with probability θ of heads. The value of θ is random with prior pdf f(θ) = 2θ on [0, 1]. Suppose I flip the coin once. What is the total probability of heads?

answer: In Example 4 we noted that the likelihoods are p(x = 1|θ) = θ and p(x = 0|θ) = 1 − θ. So the total probability of x = 1 is

p(x = 1) = ∫_0^1 p(x = 1|θ) f(θ) dθ = ∫_0^1 θ · 2θ dθ = ∫_0^1 2θ² dθ = 2/3.

Since the prior is weighted towards higher probabilities of heads, so is the total probability.

8 Bayes' theorem for continuous probability densities

The statement of Bayes' theorem for continuous pdfs is essentially identical to the statement for pmfs. We state it including dθ so we have genuine probabilities:

Theorem. Bayes' Theorem. Use the same assumptions as in the law of total probability, i.e. θ is a continuous parameter with pdf f(θ) and range [a, b]; x is random discrete data; together they have likelihood p(x|θ). With these assumptions:

f(θ|x) dθ = p(x|θ) f(θ) dθ / p(x) = p(x|θ) f(θ) dθ / ∫_a^b p(x|θ) f(θ) dθ.   (4)

Proof. Since this is a statement about probabilities it is just the usual statement of Bayes' theorem. This is important enough to warrant spelling it out in words. Let Θ be the random variable that produces the value θ. Consider the events

H = 'Θ is in an interval of width dθ around the value θ'   and   D = 'the value of the data is x'.

Then P(H) = f(θ) dθ, P(D) = p(x), and P(D|H) = p(x|θ). Now our usual form of Bayes' theorem becomes

f(θ|x) dθ = P(H|D) = P(D|H) P(H) / P(D) = p(x|θ) f(θ) dθ / p(x).

Looking at the first and last terms in this equation we see the new form of Bayes' theorem.

Finally, we firmly believe that it is more conducive to careful thinking about probability to keep the factor of dθ in the statement of Bayes' theorem. But because it appears in the numerator on both sides of Equation 4, many people drop the dθ and write Bayes' theorem in terms of densities as

f(θ|x) = p(x|θ) f(θ) / p(x) = p(x|θ) f(θ) / ∫_a^b p(x|θ) f(θ) dθ.

9 Bayesian updating with continuous priors

Now that we have Bayes' theorem and the law of total probability we can finally get to Bayesian updating. Before continuing with Example 4, we point out two features of the Bayesian updating table that appears in the next example:

1. The table for continuous priors is very simple: since we cannot have a row for each of an infinite number of hypotheses we'll have just one row which uses a variable to stand for all hypotheses θ.

2. By including dθ, all the entries in the table are probabilities and all our usual probability rules apply.

Example 6. (Bayesian updating.) Continuing Examples 4 and 5. We have a bent coin with unknown probability θ of heads. The value of θ is random with prior pdf f(θ) = 2θ.
Suppose we flip the coin once and get heads. Compute the posterior pdf for θ.

answer: We make an update table with the usual columns. Since this is our first example, the first row is the abstract version of Bayesian updating in general and the second row is Bayesian updating for this particular example.

hypothesis   prior              likelihood    Bayes numerator       posterior
θ            f(θ) dθ            p(x = 1|θ)    p(x = 1|θ) f(θ) dθ    f(θ|x = 1) dθ
θ            2θ dθ              θ             2θ² dθ                3θ² dθ
total        ∫_a^b f(θ) dθ = 1                p(x = 1) = ∫_0^1 2θ² dθ = 2/3    1

Therefore the posterior pdf (after seeing 1 heads) is f(θ|x) = 3θ².

We have a number of comments:

1. Since we used the prior probability f(θ) dθ, the hypothesis should have been: 'the unknown parameter is in an interval of width dθ around θ'. Even for us that is too much to write, so you will have to think it every time we write that the hypothesis is θ.

2. The posterior pdf for θ is found by removing the dθ from the posterior probability in the table: f(θ|x) = 3θ².

3. (i) As always, p(x) is the total probability. Since we have a continuous distribution, instead of a sum we compute an integral.
(ii) Notice that by including dθ in the table, it is clear what integral we need to compute to find the total probability p(x).

4. The table organizes the continuous version of Bayes' theorem. Namely, the posterior pdf is related to the prior pdf and likelihood function via:

f(θ|x) dθ = p(x|θ) f(θ) dθ / p(x) = p(x|θ) f(θ) dθ / ∫_a^b p(x|θ) f(θ) dθ.

Removing the dθ in the numerator of both sides we have the statement in terms of densities.

5. Regarding both sides as functions of θ, we can again express Bayes' theorem in the form:

f(θ|x) ∝ p(x|θ) · f(θ)
posterior ∝ likelihood × prior.

9.1 Flat priors

One important prior is called a flat or uniform prior. A flat prior assumes that every hypothesis is equally probable. For example, if θ has range [0, 1] then f(θ) = 1 is a flat prior.

Example 7. (Flat priors.) We have a bent coin with unknown probability θ of heads. Suppose we toss it once and get tails. Assume a flat prior and find the posterior pdf for θ.

answer: This is just Example 6 with a change of prior and likelihood.

hypothesis   prior              likelihood    Bayes numerator       posterior
θ            f(θ) dθ            p(x = 0|θ)    p(x = 0|θ) f(θ) dθ    f(θ|x = 0) dθ
θ            1 · dθ             1 − θ         (1 − θ) dθ            2(1 − θ) dθ
total        ∫_a^b f(θ) dθ = 1                p(x = 0) = ∫_0^1 (1 − θ) dθ = 1/2    1

9.2 Using the posterior pdf

Example 8. In the previous example the prior probability was flat. First show that this means that a priori the coin is equally likely to be biased towards heads or tails. Then, after observing one heads, what is the (posterior) probability that the coin is biased towards heads?

answer: Since the parameter θ is the probability the coin lands heads, the first part of the problem asks us to show P(θ > 0.5) = 0.5 and the second part asks for P(θ > 0.5 | x = 1).
These are easily computed from the prior and posterior pdfs respectively.

The prior probability that the coin is biased towards heads is

P(θ > 0.5) = ∫_{0.5}^1 f(θ) dθ = ∫_{0.5}^1 1 · dθ = θ |_{0.5}^1 = 1/2.

The probability of 1/2 means the coin is equally likely to be biased toward heads or tails.
The posterior probability that it's biased towards heads is

P(θ > 0.5 | x = 1) = ∫_{0.5}^1 f(θ|x = 1) dθ = ∫_{0.5}^1 2θ dθ = θ² |_{0.5}^1 = 3/4.

We see that observing one heads has increased the probability that the coin is biased towards
heads from 1/2 to 3/4.
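These two integrals are easy to check numerically. The sketch below (plain Python; the helper name integrate is ours, not from the notes) approximates each probability with a midpoint Riemann sum:

```python
def integrate(f, a, b, n=50_000):
    # Midpoint Riemann sum approximating the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

flat_prior = lambda t: 1.0    # prior pdf f(theta) = 1 on [0, 1]
posterior = lambda t: 2 * t   # posterior pdf f(theta | x = 1) = 2*theta

p_prior = integrate(flat_prior, 0.5, 1.0)   # P(theta > 0.5) = 1/2
p_post = integrate(posterior, 0.5, 1.0)     # P(theta > 0.5 | x = 1) = 3/4
print(p_prior, p_post)
```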

10 Predictive probabilities

Just as in the discrete case we are also interested in using the posterior probabilities of the
hypotheses to make predictions for what will happen next.

Example 9. (Prior and posterior prediction.) Continuing Examples 4, 5, 6: we have a
coin with unknown probability θ of heads and the value of θ has prior pdf f(θ) = 2θ. Find
the prior predictive probability of heads. Then suppose the first flip was heads and find the
posterior predictive probabilities of both heads and tails on the second flip.

answer: For notation let x1 be the result of the first flip and let x2 be the result of the
second flip. The prior predictive probability is exactly the total probability computed in
Examples 5 and 6.

p(x1 = 1) = ∫_0^1 p(x1 = 1|θ) f(θ) dθ = ∫_0^1 2θ² dθ = 2/3.


The posterior predictive probabilities are the total probabilities computed using the posterior pdf. From Example 6 we know the posterior pdf is f(θ|x1 = 1) = 3θ². Given θ, the flips are independent, so p(x2 = 1|θ, x1 = 1) = p(x2 = 1|θ) = θ. So the posterior predictive probabilities are

p(x2 = 1|x1 = 1) = ∫_0^1 p(x2 = 1|θ, x1 = 1) f(θ|x1 = 1) dθ = ∫_0^1 θ · 3θ² dθ = 3/4

p(x2 = 0|x1 = 1) = ∫_0^1 p(x2 = 0|θ, x1 = 1) f(θ|x1 = 1) dθ = ∫_0^1 (1 − θ) · 3θ² dθ = 1/4

(More simply, we could have computed p(x2 = 0|x1 = 1) = 1 − p(x2 = 1|x1 = 1) = 1/4.)
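A numerical sketch of these predictive computations (plain Python, midpoint Riemann sums; the names are ours, not from the notes):

```python
def integrate(f, a, b, n=50_000):
    # Midpoint Riemann sum approximating the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

prior = lambda t: 2 * t           # prior pdf f(theta) = 2*theta on [0, 1]
posterior = lambda t: 3 * t ** 2  # posterior pdf f(theta | x1 = 1) = 3*theta^2

# Prior predictive probability of heads on the first flip.
p_x1 = integrate(lambda t: t * prior(t), 0.0, 1.0)                  # 2/3

# Posterior predictive probabilities for the second flip.
p_x2_heads = integrate(lambda t: t * posterior(t), 0.0, 1.0)        # 3/4
p_x2_tails = integrate(lambda t: (1 - t) * posterior(t), 0.0, 1.0)  # 1/4
print(p_x1, p_x2_heads, p_x2_tails)
```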

11 From discrete to continuous Bayesian updating

To develop intuition for the transition from discrete to continuous Bayesian updating, we’ll
walk a familiar road from calculus. Namely we will:

(i) approximate the continuous range of hypotheses by a finite number.

(ii) create the discrete updating table for the finite number of hypotheses.

(iii) consider how the table changes as the number of hypotheses goes to infinity.

In this way, we will see the prior and posterior pmfs converge to the prior and posterior pdfs.

Example 10. To keep things concrete, we will work with the ‘bent’ coin with a flat prior
f(θ) = 1 from Example 7. Our goal is to go from discrete to continuous by increasing the number of hypotheses.

4 hypotheses. We slice [0, 1] into 4 equal intervals: [0, 1/4], [1/4, 1/2], [1/2, 3/4], [3/4, 1].

Each slice has width Δθ = 1/4. We put our 4 hypotheses θi at the centers of the four slices:

θ1: ‘θ = 1/8’, θ2: ‘θ = 3/8’, θ3: ‘θ = 5/8’, θ4: ‘θ = 7/8’.

The flat prior gives each hypothesis a probability of 1/4 = 1 · Δθ. We have the table:

hypothesis   prior   likelihood   Bayes num.        posterior
θ = 1/8      1/4     1/8          (1/4) × (1/8)     1/16
θ = 3/8      1/4     3/8          (1/4) × (3/8)     3/16
θ = 5/8      1/4     5/8          (1/4) × (5/8)     5/16
θ = 7/8      1/4     7/8          (1/4) × (7/8)     7/16
Total        1       –            Σ_{i=1}^n θi Δθ   1

Here are the density histograms of the prior and posterior pmf. The prior and posterior
pdfs from Example 7 are superimposed on the histograms in red.

[Density histograms of the prior (left) and posterior (right) pmfs, with bars at θ = 1/8, 3/8, 5/8, 7/8 and the pdfs superimposed in red.]

8 hypotheses. Next we slice [0,1] into 8 intervals each of width Δθ = 1/8 and use the
center of each slice for our 8 hypotheses θi.

θ1: ’θ = 1/16’, θ2: ’θ = 3/16’, θ3: ’θ = 5/16’, θ4: ’θ = 7/16’
θ5: ’θ = 9/16’, θ6: ’θ = 11/16’, θ7: ’θ = 13/16’, θ8: ’θ = 15/16’

The flat prior gives each hypothesis the probability 1/8 = 1 · Δθ. Here are the table and
density histograms.

hypothesis   prior   likelihood   Bayes num.         posterior
θ = 1/16     1/8     1/16         (1/8) × (1/16)     1/64
θ = 3/16     1/8     3/16         (1/8) × (3/16)     3/64
θ = 5/16     1/8     5/16         (1/8) × (5/16)     5/64
θ = 7/16     1/8     7/16         (1/8) × (7/16)     7/64
θ = 9/16     1/8     9/16         (1/8) × (9/16)     9/64
θ = 11/16    1/8     11/16        (1/8) × (11/16)    11/64
θ = 13/16    1/8     13/16        (1/8) × (13/16)    13/64
θ = 15/16    1/8     15/16        (1/8) × (15/16)    15/64
Total        1       –            Σ_{i=1}^n θi Δθ    1

[Density histograms of the prior (left) and posterior (right) pmfs, with bars at θ = 1/16, 3/16, . . . , 15/16 and the pdfs superimposed in red.]

20 hypotheses. Finally we slice [0,1] into 20 pieces. This is essentially identical to the
previous two cases. Let’s skip right to the density histograms.

[Density histograms of the prior (left) and posterior (right) pmfs for 20 hypotheses, with the pdfs superimposed in red.]

Looking at the sequence of plots we see how the prior and posterior density histograms
converge to the prior and posterior probability density functions.
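The discretization above is easy to reproduce. A small sketch (plain Python; discrete_update is our name) builds the update for n slices, using the flat prior and the likelihood p(x = 1|θ) = θ from the tables above:

```python
def discrete_update(n):
    # Slice [0, 1] into n equal pieces; flat prior, one heads observed.
    d = 1.0 / n
    centers = [(i + 0.5) * d for i in range(n)]
    prior = [1.0 * d] * n                 # flat prior gives each slice 1 * d
    numer = [t * p for t, p in zip(centers, prior)]   # likelihood is theta
    total = sum(numer)                    # approximates p(x = 1) = 1/2
    post = [v / total for v in numer]
    return centers, post, total

for n in (4, 8, 20):
    centers, post, total = discrete_update(n)
    d = 1.0 / n
    # Each posterior histogram bar has height post[i]/d, lying on the pdf 2*theta.
    print(n, total, post[-1] / d, 2 * centers[-1])
```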

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Notational conventions

Class 13, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to work with the various notations and terms we use to describe probabilities and likelihoods.

2 Introduction

We’ve introduced a number of different notations for probability, hypotheses and data. We
collect them here, to have them in one place.

3 Notation and terminology for data and hypotheses

The problem of labeling data and hypotheses is a tricky one. When we started the course
we talked about outcomes, e.g. heads or tails. Then when we introduced random variables
we gave outcomes numerical values, e.g. 1 for heads and 0 for tails. This allowed us to do
things like compute means and variances. We need to do something similar now. Recall
our notational conventions:

• Events are labeled with capital letters, e.g. A, B, C.

• A random variable is capital X and takes values small x.

• The connection between values and events: ‘X = x’ is the event that X takes the
value x.

• The probability of an event is capital P (A).

• A discrete random variable has a probability mass function p(x). The connection between P and p is that P(X = x) = p(x).

• A continuous random variable has a probability density function f(x). The connection between P and f is that P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

• For a continuous random variable X the probability that X is in an infinitesimal
interval of width dx around x is f(x) dx.

In the context of Bayesian updating we have similar conventions.

• We use capital letters, especially H, to indicate a hypothesis, e.g. H = ’the coin is
fair’.


• We use lower case letters, especially θ, to indicate the hypothesized value of a model
parameter, e.g. the probability the coin lands heads is θ = 0.5.

• We use upper case letters, especially D, when talking about data as events. For example, D = 'the sequence of tosses was HTH'.

• We use lower case letters, especially x, when talking about data as values. For example, the sequence of data was x1, x2, x3 = 1, 0, 1.

• When the set of hypotheses is discrete we can use the probability of individual hypotheses, e.g. p(θ). When the set is continuous we need to use the probability for an infinitesimal range of hypotheses, e.g. f(θ) dθ.

The following table summarizes this for discrete θ and continuous θ. In both cases we
are assuming a discrete set of possible outcomes (data) x. Tomorrow we will deal with a
continuous set of outcomes.

              hypothesis   prior     likelihood   Bayes numerator    posterior
              H            P(H)      P(D|H)       P(D|H)P(H)         P(H|D)
Discrete θ:   θ            p(θ)      p(x|θ)       p(x|θ)p(θ)         p(θ|x)
Continuous θ: θ            f(θ) dθ   p(x|θ)       p(x|θ)f(θ) dθ      f(θ|x) dθ

Remember the continuous hypothesis θ is really a shorthand for ‘the parameter θ is in an
interval of width dθ around θ’.


Beta Distributions
Class 14, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be familiar with the 2-parameter family of beta distributions and its normalization.

2. Be able to update a beta prior to a beta posterior in the case of a binomial likelihood.

2 Beta distribution

The beta distribution beta(a, b) is a two-parameter distribution with range [0, 1] and pdf

f(θ) = ((a + b − 1)! / ((a − 1)! (b − 1)!)) θ^(a−1) (1 − θ)^(b−1).

We have made an applet so you can explore the shape of the Beta distribution as you vary
the parameters:

http://mathlets.org/mathlets/beta-distribution/.

As you can see in the applet, the beta distribution may be defined for any real numbers
a > 0 and b > 0. In 18.05 we will stick to integers a and b, but you can get the full story
here: http://en.wikipedia.org/wiki/Beta_distribution

In the context of Bayesian updating, a and b are often called hyperparameters to distinguish
them from the unknown parameter θ representing our hypotheses. In a sense, a and b are
‘one level up’ from θ since they parameterize its pdf.

2.1 A simple but important observation!

If a pdf f(θ) has the form cθa−1(1 − θ)b−1 then f(θ) is a beta(a, b) distribution and the
normalizing constant must be

c = (a + b − 1)! / ((a − 1)! (b − 1)!).
This follows because the constant c must normalize the pdf to have total probability 1.
There is only one such constant and it is given in the formula for the beta distribution.

A similar observation holds for normal distributions, exponential distributions, and so on.
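One can sanity-check this observation numerically. The sketch below (plain Python; the function names are ours) plugs the claimed constant into c θ^(a−1)(1 − θ)^(b−1) and verifies the result integrates to 1 for a few integer choices of a and b:

```python
from math import factorial

def norm_const(a, b):
    # Claimed beta(a, b) normalizing constant: (a+b-1)! / ((a-1)!(b-1)!).
    return factorial(a + b - 1) / (factorial(a - 1) * factorial(b - 1))

def total_mass(a, b, n=100_000):
    # Midpoint Riemann sum of the normalized density over [0, 1].
    h = 1.0 / n
    c = norm_const(a, b)
    return sum(c * ((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
               for i in range(n)) * h

for a, b in [(1, 1), (2, 3), (9, 5)]:
    print(a, b, total_mass(a, b))   # each total is approximately 1
```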

2.2 Beta priors and posteriors for binomial random variables

Example 1. Suppose we have a bent coin with unknown probability θ of heads. We toss
it 12 times and get 8 heads and 4 tails. Starting with a flat prior, show that the posterior
pdf is a beta(9, 5) distribution.

1



answer: This is nearly identical to examples from the previous class. We’ll call the data
from all 12 tosses x1. In the following table we call the leading constant factor in the
posterior column c2. Our simple observation will tell us that it has to be the constant
factor from the beta pdf.

The data is 8 heads and 4 tails. Since this comes from a binomial(12, θ) distribution, the likelihood is p(x1|θ) = (12 choose 8) θ^8 (1 − θ)^4. Thus the Bayesian update table is

hypothesis   prior    likelihood                     Bayes numerator                   posterior
θ            1 · dθ   (12 choose 8) θ^8 (1 − θ)^4    (12 choose 8) θ^8 (1 − θ)^4 dθ    c2 θ^8 (1 − θ)^4 dθ
total        1                                       T = (12 choose 8) ∫_0^1 θ^8 (1 − θ)^4 dθ    1

Our simple observation above holds with a = 9 and b = 5. Therefore the posterior pdf

f(θ|x1) = c2 θ^8 (1 − θ)^4

follows a beta(9, 5) distribution and the normalizing constant c2 must be

c2 = 13! / (8! 4!).

Note: We explicitly included the binomial coefficient (12 choose 8) in the likelihood. We could just as easily have given it a name, say c1, and not bothered making its value explicit.

Example 2. Now suppose we toss the same coin again, getting n heads and m tails. Using
the posterior pdf of the previous example as our new prior pdf, show that the new posterior pdf is that of a beta(9 + n, 5 + m) distribution.

answer: It's all in the table. We'll call the data of these n + m additional tosses x2. This time we won't make the binomial coefficient explicit. Instead we'll just call it c3. Whenever we need a new label we will simply use c with a new subscript.

hyp.    prior                   likelihood          Bayes numerator                        posterior
θ       c2 θ^8 (1 − θ)^4 dθ     c3 θ^n (1 − θ)^m    c2 c3 θ^(n+8) (1 − θ)^(m+4) dθ         c4 θ^(n+8) (1 − θ)^(m+4) dθ
total   1                                           T = c2 c3 ∫_0^1 θ^(n+8) (1 − θ)^(m+4) dθ    1

Again our simple observation holds and therefore the posterior pdf

f(θ|x1, x2) = c4 θ^(n+8) (1 − θ)^(m+4)

follows a beta(n + 9, m + 5) distribution.

Note: Flat beta. The beta(1, 1) distribution is the same as the uniform distribution on
[0, 1], which we have also called the flat prior on θ. This follows by plugging a = 1 and
b = 1 into the definition of the beta distribution, giving f(θ) = 1.

18.05 class 14, Beta Distributions, Spring 2014 3

Summary: If the probability of heads is θ, the number of heads in n+m tosses follows a
binomial(n + m, θ) distribution. We have seen that if the prior on θ is a beta distribution
then so is the posterior; only the parameters a, b of the beta distribution change! We
summarize precisely how they change in a table. We assume the data is n heads in n+m
tosses.

hypothesis   data    prior                          likelihood           posterior
θ            x = n   beta(a, b)                     binomial(n + m, θ)   beta(a + n, b + m)
θ            x = n   c1 θ^(a−1) (1 − θ)^(b−1) dθ    c2 θ^n (1 − θ)^m     c3 θ^(a+n−1) (1 − θ)^(b+m−1) dθ
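The table's update rule can be checked directly. In the sketch below (plain Python; the names are ours) we do the numerical Bayesian update for a flat prior and 8 heads in 12 tosses, and compare the resulting density with the beta(9, 5) pdf predicted by the table:

```python
from math import comb, factorial

def beta_pdf(t, a, b):
    # Integer-parameter beta(a, b) density, as defined in these notes.
    c = factorial(a + b - 1) / (factorial(a - 1) * factorial(b - 1))
    return c * t ** (a - 1) * (1 - t) ** (b - 1)

heads, tails = 8, 4
likelihood = lambda t: comb(12, 8) * t ** heads * (1 - t) ** tails

# Direct update: normalize prior * likelihood numerically (flat prior = 1).
n = 100_000
h = 1.0 / n
p_x = sum(likelihood((i + 0.5) * h) for i in range(n)) * h

t0 = 0.6
direct = likelihood(t0) / p_x                     # posterior density at 0.6
conjugate = beta_pdf(t0, 1 + heads, 1 + tails)    # beta(9, 5) density at 0.6
print(direct, conjugate)                          # the two values agree
```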

2.3 Conjugate priors

In the literature you’ll see that the beta distribution is called a conjugate prior for the
binomial distribution. This means that if the likelihood function is binomial, then a beta
prior gives a beta posterior. In fact, the beta distribution is a conjugate prior for the
Bernoulli and geometric distributions as well.

We will soon see another important example: the normal distribution is its own conjugate
prior. In particular, if the likelihood function is normal with known variance, then a normal
prior gives a normal posterior.

Conjugate priors are useful because they reduce Bayesian updating to modifying the parameters of the prior distribution (so-called hyperparameters) rather than computing integrals.
We saw this for the beta distribution in the last table. For many more examples see:
http://en.wikipedia.org/wiki/Conjugate_prior_distribution



Continuous Data with Continuous Priors

Class 14, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to construct a Bayesian update table for continuous hypotheses and continuous
data.

2. Be able to recognize the pdf of a normal distribution and determine its mean and variance.

2 Introduction

We are now ready to do Bayesian updating when both the hypotheses and the data take
continuous values. The pattern is the same as what we’ve done before, so let’s first review
the previous two cases.

3 Previous cases

1. Discrete hypotheses, discrete data

Notation

• Hypotheses H

• Data x

• Prior P (H)

• Likelihood p(x |H)

• Posterior P (H | x).

Example 1. Suppose we have data x and three possible explanations (hypotheses) for the
data that we’ll call A, B, C. Suppose also that the data can take two possible values, -1
and 1.

In order to use the data to help estimate the probabilities of the different hypotheses we
need a prior pmf and a likelihood table. Assume the prior and likelihoods are given in
the following table. (For this example we are only concerned with the formal process of
Bayesian updating. So we just made up the prior and likelihoods.)


hypothesis prior
H P (H)
A 0.1
B 0.3
C 0.6

hypothesis likelihood p(x |H)
H x = −1 x = 1
A 0.2 0.8
B 0.5 0.5
C 0.7 0.3

Prior probabilities Likelihoods

Naturally, each entry in the likelihood table is a likelihood p(x |H). For instance, the 0.2 in row A and column x = −1 is the likelihood p(x = −1 | A).

Question: Suppose we run one trial and obtain the data x1 = 1. Use this to find the
posterior probabilities for the hypotheses.

answer: The data picks out one column from the likelihood table which we then use in our
Bayesian update table.

hypothesis   prior    likelihood     Bayes numerator   posterior
H            P(H)     p(x = 1 | H)   p(x | H)P(H)      P(H | x) = p(x | H)P(H)/p(x)
A            0.1      0.8            0.08              0.195
B            0.3      0.5            0.15              0.366
C            0.6      0.3            0.18              0.439
total        1                       p(x) = 0.41       1

To summarize: the prior probabilities of hypotheses and the likelihoods of data given hypothesis were given; the Bayes numerator is the product of the prior and likelihood; the
total probability p(x) is the sum of the probabilities in the Bayes numerator column; and
we divide by p(x) to normalize the Bayes numerator.
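This table arithmetic is a few lines of code. A sketch (plain Python; the variable names are ours):

```python
prior = {"A": 0.1, "B": 0.3, "C": 0.6}
likelihood = {"A": 0.8, "B": 0.5, "C": 0.3}       # p(x = 1 | H)

numerator = {H: prior[H] * likelihood[H] for H in prior}
p_x = sum(numerator.values())                     # total probability, ~0.41
posterior = {H: numerator[H] / p_x for H in prior}

print(p_x)
print(posterior)   # approximately A: 0.195, B: 0.366, C: 0.439
```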

2. Continuous hypotheses, discrete data

Now suppose that we have data x that can take a discrete set of values and a continuous
parameter θ that determines the distribution the data is drawn from.

Notation

• Hypotheses θ

• Data x

• Prior f(θ) dθ

• Likelihood p(x | θ)

• Posterior f(θ | x) dθ.

Note: Here we multiplied by dθ to express the prior and posterior as probabilities. As
densities, we have the prior pdf f(θ) and the posterior pdf f(θ | x).
Example 2. Assume that x ∼ Binomial(5, θ). So θ is in the range [0, 1] and the data x
can take six possible values, 0, 1, . . . , 5.


Since there is a continuous range of values we use a pdf to describe the prior on θ. Let’s
suppose the prior is f(θ) = 2θ. We can still make a likelihood table, though it only has one
row representing an arbitrary hypothesis θ.

hypothesis   likelihood p(x | θ)
θ            x = 0: (5 choose 0)(1 − θ)^5      x = 1: (5 choose 1) θ(1 − θ)^4    x = 2: (5 choose 2) θ²(1 − θ)³
             x = 3: (5 choose 3) θ³(1 − θ)²    x = 4: (5 choose 4) θ⁴(1 − θ)     x = 5: (5 choose 5) θ^5

Likelihoods

Question: Suppose we run one trial and obtain the data x1 = 2. Use this to find the
posterior pdf for the parameter (hypotheses) θ.

answer: As before, the data picks out one column from the likelihood table which we can
use in our Bayesian update table. Since we want to work with probabilities we write f(θ) dθ
and f(θ | x1) dθ for the pdfs.

hypothesis   prior     likelihood                  Bayes numerator                  posterior
θ            f(θ) dθ   p(x = 2 | θ)                p(x | θ)f(θ) dθ                  f(θ | x) dθ = p(x | θ)f(θ) dθ / p(x)
θ            2θ dθ     (5 choose 2) θ²(1 − θ)³     2 (5 choose 2) θ³(1 − θ)³ dθ     f(θ | x) dθ = (7!/(3! 3!)) θ³(1 − θ)³ dθ
total        1                                     p(x) = ∫_0^1 2 (5 choose 2) θ³(1 − θ)³ dθ = 2 (5 choose 2) (3! 3!/7!)    1

To summarize: the prior probabilities of hypotheses and the likelihoods of data given hypothesis were given; the Bayes numerator is the product of the prior and likelihood; the
total probability p(x) is the integral of the probabilities in the Bayes numerator column;
and we divide by p(x) to normalize the Bayes numerator.
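A numerical sketch of this update (plain Python; the names are ours), checking the posterior against the closed form (7!/(3! 3!)) θ³(1 − θ)³ = 140 θ³(1 − θ)³:

```python
from math import comb

prior = lambda t: 2 * t
likelihood = lambda t: comb(5, 2) * t ** 2 * (1 - t) ** 3   # p(x = 2 | theta)

# Total probability p(x = 2): midpoint Riemann sum of likelihood * prior.
n = 100_000
h = 1.0 / n
p_x = sum(likelihood((i + 0.5) * h) * prior((i + 0.5) * h)
          for i in range(n)) * h

t0 = 0.5
direct = likelihood(t0) * prior(t0) / p_x   # posterior density at theta = 0.5
closed = 140 * t0 ** 3 * (1 - t0) ** 3      # 7!/(3! 3!) = 140
print(p_x, direct, closed)                  # p_x is about 1/7
```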

4 Continuous hypotheses and continuous data

When both data and hypotheses are continuous, the only change to the previous example is
that the likelihood function uses a pdf f(x | θ) instead of a pmf p(x | θ). The general shape
of the Bayesian update table is the same.

Notation

• Hypotheses θ

• Data x

• Prior f(θ)dθ

18.05 class 14, Continuous Data with Continuous Priors, Spring 2014 4

• Likelihood f(x | θ) dx

• Posterior f(θ |x) dθ.

Simplifying the notation. In the previous cases we included dθ so that we were working with probabilities instead of densities. When both data and hypotheses are continuous we will need both dθ and dx. This makes things conceptually simpler, but notationally cumbersome. To simplify the notation we will allow ourselves to drop the dx in our tables. This is fine because the data x is fixed. We keep the dθ because the hypothesis θ is allowed to vary.

For comparison, we first show the general table in simplified notation followed immediately
afterward by the table showing the infinitesimals.

hypoth.   prior     likelihood   Bayes numerator            posterior
θ         f(θ) dθ   f(x | θ)     f(x | θ)f(θ) dθ            f(θ | x) dθ = f(x | θ)f(θ) dθ / f(x)
total     1                      f(x) = ∫ f(x | θ)f(θ) dθ   1

Bayesian update table without dx

hypoth.   prior     likelihood     Bayes numerator                     posterior
θ         f(θ) dθ   f(x | θ) dx    f(x | θ)f(θ) dθ dx                  f(θ | x) dθ = f(x | θ)f(θ) dθ dx / (f(x) dx) = f(x | θ)f(θ) dθ / f(x)
total     1                        f(x) dx = (∫ f(x | θ)f(θ) dθ) dx    1

Bayesian update table with dθ and dx

To summarize: the prior probabilities of hypotheses and the likelihoods of data given hypothesis were given; the Bayes numerator is the product of the prior and likelihood; the
total probability f(x) dx is the integral of the probabilities in the Bayes numerator column;
we divide by f(x) dx to normalize the Bayes numerator.
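The whole table can be wrapped in one generic routine. The sketch below (plain Python; posterior_on_grid is our name) normalizes prior × likelihood on a grid; as an assumed illustration, not an example from the notes, we take a flat prior on [0, 2] and exponential data f(x | θ) = θ e^(−θx) with observed x = 1:

```python
from math import exp

def posterior_on_grid(prior_pdf, likelihood, grid):
    # Bayes numerator on the grid, normalized by its integral f(x).
    h = grid[1] - grid[0]
    numer = [likelihood(t) * prior_pdf(t) for t in grid]
    fx = sum(numer) * h                   # total probability f(x)
    return [v / fx for v in numer], fx

n = 100_000
h = 2.0 / n
grid = [(i + 0.5) * h for i in range(n)]
post, fx = posterior_on_grid(lambda t: 0.5,                 # flat prior on [0, 2]
                             lambda t: t * exp(-t * 1.0),   # f(x = 1 | theta)
                             grid)
print(fx)                                 # (1 - 3e^-2)/2, about 0.297
print(sum(post) * h)                      # posterior integrates to 1
```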

5 Normal hypothesis, normal data

A standard example of continuous hypotheses and continuous data assumes that both the
data and prior follow normal distributions. The following example assumes that the variance
of the data is known.

Example 3. Suppose we have data x = 5 which was drawn from a normal distribution with unknown mean θ and standard deviation 1.

x ∼ N(θ, 1)

Suppose further that our prior distribution for θ is θ ∼ N(2, 1).
Let x represent an arbitrary data value.

(a) Make a Bayesian table with prior, likelihood, and Bayes numerator.

(b) Show that the posterior distribution for θ is normal as well.

(c) Find the mean and variance of the posterior distribution.

answer: As we did with the tables above, a good compromise on the notation is to include dθ but not dx. The reason for this is that the total probability is computed by integrating over θ, and the dθ reminds us of that.

Our prior pdf is

f(θ) = (1/√(2π)) e^(−(θ−2)²/2).

The likelihood function is

f(x = 5 | θ) = (1/√(2π)) e^(−(5−θ)²/2).

We know we are going to multiply the prior and the likelihood, so we carry out that algebra
first. In the very last step we simplify the constant factor into one constant we call c1.

prior · likelihood = (1/√(2π)) e^(−(θ−2)²/2) · (1/√(2π)) e^(−(5−θ)²/2)
                   = (1/2π) e^(−(2θ²−14θ+29)/2)
                   = (1/2π) e^(−(θ²−7θ+29/2))                (complete the square)
                   = (1/2π) e^(−((θ−7/2)²+9/4))
                   = (e^(−9/4)/2π) e^(−(θ−7/2)²)
                   = c1 e^(−(θ−7/2)²)

In the last step we replaced the complicated constant factor by the simpler expression c1.

hypothesis   prior                        likelihood                 Bayes numerator                    posterior
θ            f(θ) dθ                      f(x = 5 | θ)               f(x = 5 | θ)f(θ) dθ                f(θ | x = 5) dθ = f(x = 5 | θ)f(θ) dθ / f(x = 5)
θ            (1/√(2π)) e^(−(θ−2)²/2) dθ   (1/√(2π)) e^(−(5−θ)²/2)    c1 e^(−(θ−7/2)²) dθ                c2 e^(−(θ−7/2)²) dθ
total        1                                                       f(x = 5) = ∫ f(x = 5 | θ)f(θ) dθ   1


We can see by the form of the posterior pdf that it is a normal distribution. Because the
exponential for a normal distribution is e^(−(θ−µ)²/2σ²) we have mean µ = 7/2 and 2σ² = 1,
so variance σ² = 1/2.

We don’t need to bother computing the total probability; it is just used for normalization
and we already know the normalization constant 1/(σ√(2π)) for a normal distribution.

Here is the graph of the prior and the posterior pdf’s for this example. Note how the data
‘pulls’ the prior towards the data.

prior = blue; posterior = purple; data = red
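As a quick numerical sanity check of Example 3 (our own sketch, not part of the original notes), we can evaluate the prior and likelihood on a fine grid of θ values, normalize the Bayes numerator, and confirm the posterior mean 7/2 and variance 1/2:

```python
import numpy as np

# Example 3: prior theta ~ N(2, 1), data x = 5 with x | theta ~ N(theta, 1).
theta = np.linspace(-10, 15, 200001)        # fine grid over theta
dtheta = theta[1] - theta[0]

prior = np.exp(-(theta - 2)**2 / 2) / np.sqrt(2 * np.pi)
likelihood = np.exp(-(5 - theta)**2 / 2) / np.sqrt(2 * np.pi)

numerator = prior * likelihood                       # Bayes numerator
posterior = numerator / np.sum(numerator * dtheta)   # divide by f(x = 5)

post_mean = np.sum(theta * posterior * dtheta)
post_var = np.sum((theta - post_mean)**2 * posterior * dtheta)
print(post_mean, post_var)   # approximately 3.5 and 0.5
```

The grid normalization plays the role of the total-probability integral in the table; no closed-form integration is needed.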

Now we’ll repeat the previous example for general x. As you read it, mentally substituting
5 for x may help you follow the algebra.

Example 4. Suppose our data x is drawn from a normal distribution with unknown mean
θ and standard deviation 1.

x ∼ N(θ, 1)

answer: As before, we show the algebra used to simplify the Bayes numerator. The prior
pdf and likelihood function are

f(θ) = (1/√(2π)) e^(−(θ−2)²/2),    f(x | θ) = (1/√(2π)) e^(−(x−θ)²/2).

The Bayes numerator is the product of the prior and the likelihood:

prior · likelihood = (1/√(2π)) e^(−(θ−2)²/2) · (1/√(2π)) e^(−(x−θ)²/2)
                   = (1/2π) e^(−(2θ²−(4+2x)θ+4+x²)/2)
                   = (1/2π) e^(−(θ²−(2+x)θ+(4+x²)/2))                (complete the square)
                   = (1/2π) e^(−((θ−(1+x/2))²−(1+x/2)²+(4+x²)/2))
                   = c1 e^(−(θ−(1+x/2))²)

Just as in the previous example, in the last step we replaced all the constants, including
the exponentials that just involve x, by the simple constant c1.


Now the Bayesian update table becomes

hypothesis   prior                        likelihood                 Bayes numerator            posterior
θ            f(θ) dθ                      f(x | θ)                   f(x | θ)f(θ) dθ            f(θ | x) dθ = f(x | θ)f(θ) dθ / f(x)
θ            (1/√(2π)) e^(−(θ−2)²/2) dθ   (1/√(2π)) e^(−(x−θ)²/2)    c1 e^(−(θ−(1+x/2))²) dθ    c2 e^(−(θ−(1+x/2))²) dθ
total        1                                                       f(x) = ∫ f(x | θ)f(θ) dθ   1

As in the previous example we can see by the form of the posterior that it must be a normal
distribution with mean 1 + x/2 and variance 1/2. (Compare this with the case x = 5 in the
previous example.)

6 Predictive probabilities

Since the data x is continuous it has prior and posterior predictive pdfs. The prior predictive
pdf is the total probability density computed at the bottom of the Bayes numerator column:

f(x) = ∫ f(x | θ)f(θ) dθ,

where the integral is computed over the entire range of θ.

The posterior predictive pdf has the same form as the prior predictive pdf, except it uses
the posterior probabilities for θ:

f(x2 | x1) = ∫ f(x2 | θ, x1)f(θ | x1) dθ.

As usual, we assume x1 and x2 are conditionally independent. That is,

f(x2 | θ, x1) = f(x2 | θ).

In this case the formula for the posterior predictive pdf is a little simpler:

f(x2 | x1) = ∫ f(x2 | θ)f(θ | x1) dθ.
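For the normal-normal example above (θ ∼ N(2, 1) and x | θ ∼ N(θ, 1)) the prior predictive distribution can be found in closed form: x = θ + noise is a sum of independent normals, so x ∼ N(2, 2). The following check (our own sketch, not part of the notes) computes the prior predictive integral numerically and compares it against the N(2, 2) pdf:

```python
import numpy as np

# Prior predictive f(x) = integral of f(x|theta) f(theta) dtheta,
# with theta ~ N(2, 1) and x | theta ~ N(theta, 1).
theta = np.linspace(-12, 16, 100001)
dtheta = theta[1] - theta[0]
prior = np.exp(-(theta - 2)**2 / 2) / np.sqrt(2 * np.pi)

def predictive(x):
    like = np.exp(-(x - theta)**2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(like * prior * dtheta)   # Riemann sum over theta

x = 3.0
exact = np.exp(-(x - 2)**2 / 4) / np.sqrt(4 * np.pi)   # N(2, 2) pdf at x
print(predictive(x), exact)
```

The two numbers agree to high precision, confirming that integrating out θ "spreads" the predictive distribution: its variance (2) is the prior variance plus the data variance.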

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Conjugate priors: Beta and normal

Class 15, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Understand the benefits of conjugate priors.

2. Be able to update a beta prior given a Bernoulli, binomial, or geometric likelihood.

3. Understand and be able to use the formula for updating a normal prior given a normal
likelihood with known variance.

2 Introduction and definition

In this reading, we will elaborate on the notion of a conjugate prior for a likelihood function.
With a conjugate prior the posterior is of the same type, e.g. for binomial likelihood the beta
prior becomes a beta posterior. Conjugate priors are useful because they reduce Bayesian
updating to modifying the parameters of the prior distribution (so-called hyperparameters)
rather than computing integrals.

Our focus in 18.05 will be on two important examples of conjugate priors: beta and normal.
For a far more comprehensive list, see the tables herein:

http://en.wikipedia.org/wiki/Conjugate_prior_distribution

We now give a definition of conjugate prior. It is best understood through the examples in
the subsequent sections.

Definition. Suppose we have data with likelihood function f(x|θ) depending on a hypothesized
parameter. Also suppose the prior distribution for θ is one of a family of parametrized
distributions. If the posterior distribution for θ is in this family then we say the prior
is a conjugate prior for the likelihood.

3 Beta distribution

In this section, we will show that the beta distribution is a conjugate prior for binomial,
Bernoulli, and geometric likelihoods.

3.1 Binomial likelihood

We saw last time that the beta distribution is a conjugate prior for the binomial distribution.
This means that if the likelihood function is binomial and the prior distribution is beta then
the posterior is also beta.



More specifically, suppose that the likelihood follows a binomial(N, θ) distribution where N
is known and θ is the (unknown) parameter of interest. We also have that the data x from
one trial is an integer between 0 and N . Then for a beta prior we have the following table:

hypothesis   data   prior                     likelihood               posterior
θ            x      beta(a, b)                binomial(N, θ)           beta(a + x, b + N − x)
θ            x      c1 θ^(a−1) (1 − θ)^(b−1)  c2 θ^x (1 − θ)^(N−x)     c3 θ^(a+x−1) (1 − θ)^(b+N−x−1)

The table is simplified by writing the normalizing coefficients as c1, c2 and c3 respectively.
If needed, we can recover their values by recalling (or looking up) the normalizations of
the beta and binomial distributions:

c1 = (a + b − 1)! / ((a − 1)! (b − 1)!),    c2 = (N choose x) = N! / (x! (N − x)!),    c3 = (a + b + N − 1)! / ((a + x − 1)! (b + N − x − 1)!)
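The update in the table above can be sketched in a few lines of code (our own helper names, not from the notes): the whole Bayesian computation reduces to adding the counts to the hyperparameters, and the normalizing constants are just factorial ratios for integer a, b:

```python
from math import factorial

def beta_binomial_update(a, b, N, x):
    # beta(a, b) prior + x successes in binomial(N, theta) data
    # -> beta(a + x, b + N - x) posterior
    return a + x, b + N - x

def beta_constant(a, b):
    # normalizing constant c1 of beta(a, b), valid for integer a, b >= 1
    return factorial(a + b - 1) // (factorial(a - 1) * factorial(b - 1))

print(beta_binomial_update(5, 5, 5, 4))   # (9, 6)
print(beta_constant(2, 2))                # 6: beta(2,2) pdf is 6 theta (1 - theta)
```

Note that no integral is ever computed; this is exactly the benefit of conjugacy described in the introduction.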

3.2 Bernoulli likelihood

The beta distribution is a conjugate prior for the Bernoulli distribution. This is actually
a special case of the binomial distribution, since Bernoulli(θ) is the same as binomial(1,
θ). We do it separately because it is slightly simpler and of special importance. In the
table below, we show the updates corresponding to success (x = 1) and failure (x = 0) on
separate rows.

hypothesis   data    prior                     likelihood     posterior
θ            x       beta(a, b)                Bernoulli(θ)   beta(a + 1, b) or beta(a, b + 1)
θ            x = 1   c1 θ^(a−1) (1 − θ)^(b−1)  θ              c3 θ^a (1 − θ)^(b−1)
θ            x = 0   c1 θ^(a−1) (1 − θ)^(b−1)  1 − θ          c3 θ^(a−1) (1 − θ)^b

The constants c1 and c3 have the same formulas as in the previous (binomial likelihood)
case with N = 1.

3.3 Geometric likelihood

Recall that the geometric(θ) distribution describes the probability of x successes before
the first failure, where the probability of success on any single independent trial is θ. The
corresponding pmf is given by p(x) = θ^x (1 − θ).
Now suppose that we have a data point x, and our hypothesis θ is that x is drawn from a
geometric(θ) distribution. From the table we see that the beta distribution is a conjugate
prior for a geometric likelihood as well:

hypothesis   data   prior                     likelihood     posterior
θ            x      beta(a, b)                geometric(θ)   beta(a + x, b + 1)
θ            x      c1 θ^(a−1) (1 − θ)^(b−1)  θ^x (1 − θ)    c3 θ^(a+x−1) (1 − θ)^b

At first it may seem strange that the beta distribution is a conjugate prior for both the
binomial and geometric distributions. The key reason is that the binomial and geometric
likelihoods are proportional as functions of θ. Let’s illustrate this in a concrete example.

Example 1. While traveling through the Mushroom Kingdom, Mario and Luigi find some
rather unusual coins. They agree on a prior of f(θ) ∼ beta(5,5) for the probability of heads,


though they disagree on what experiment to run to investigate θ further.

a) Mario decides to flip a coin 5 times. He gets four heads in five flips.

b) Luigi decides to flip a coin until the first tails. He gets four heads before the first tail.

Show that Mario and Luigi will arrive at the same posterior on θ, and calculate this posterior.

answer: We will show that both Mario and Luigi find the posterior pdf for θ is a beta(9, 6)

distribution.

Mario’s table

hypothesis   data    prior              likelihood                 posterior
θ            x = 4   beta(5, 5)         binomial(5, θ)             ???
θ            x = 4   c1 θ^4 (1 − θ)^4   (5 choose 4) θ^4 (1 − θ)   c3 θ^8 (1 − θ)^5

Luigi’s table

hypothesis   data    prior              likelihood     posterior
θ            x = 4   beta(5, 5)         geometric(θ)   ???
θ            x = 4   c1 θ^4 (1 − θ)^4   θ^4 (1 − θ)    c3 θ^8 (1 − θ)^5

Since both Mario’s and Luigi’s posteriors have the form of a beta(9, 6) distribution, that is
what they both must be. The normalizing factor is the same in both cases because it is
determined by requiring the total probability to be 1.

4 Normal begets normal

We now turn to another important example: the normal distribution is its own conjugate
prior. In particular, if the likelihood function is normal with known variance, then a normal
prior gives a normal posterior. Now both the hypotheses and the data are continuous.

Suppose we have a measurement x ∼ N(θ, σ²) where the variance σ² is known. That is, the
mean θ is our unknown parameter of interest and we are given that the likelihood comes
from a normal distribution with variance σ². If we choose a normal prior pdf

f(θ) ∼ N(µ_prior, σ²_prior)

then the posterior pdf is also normal: f(θ|x) ∼ N(µ_post, σ²_post), where

µ_post/σ²_post = µ_prior/σ²_prior + x/σ²,    1/σ²_post = 1/σ²_prior + 1/σ²    (1)

The following form of these formulas is easier to read and shows that µ_post is a weighted
average between µ_prior and the data x.

a = 1/σ²_prior,    b = 1/σ²,    µ_post = (a µ_prior + b x)/(a + b),    σ²_post = 1/(a + b).    (2)

With these formulas in mind, we can express the update via the table:

hypothesis   data   prior                               likelihood                  posterior
θ            x      f(θ) ∼ N(µ_prior, σ²_prior)         f(x|θ) ∼ N(θ, σ²)           f(θ|x) ∼ N(µ_post, σ²_post)
θ            x      c1 exp(−(θ−µ_prior)²/2σ²_prior)     c2 exp(−(x−θ)²/2σ²)         c3 exp(−(θ−µ_post)²/2σ²_post)


We leave the proof of the general formulas to the problem set. It is an involved algebraic
manipulation which is essentially the same as the following numerical example.

Example 2. Suppose we have prior θ ∼ N(4, 8) and likelihood x ∼ N(θ, 5). Suppose also
that we have one measurement x1 = 3. Show that the posterior distribution is normal.

answer: We will show this by grinding through the algebra, which involves completing the
square.

prior: f(θ) = c1 e^(−(θ−4)²/16);    likelihood: f(x1|θ) = c2 e^(−(x1−θ)²/10) = c2 e^(−(3−θ)²/10)

We multiply the prior and likelihood to get the posterior:

f(θ|x1) = c3 e^(−(θ−4)²/16) e^(−(3−θ)²/10) = c3 exp(−(θ−4)²/16 − (3−θ)²/10)

We complete the square in the exponent:

−(θ−4)²/16 − (3−θ)²/10 = −(5(θ−4)² + 8(3−θ)²)/80
                       = −(13θ² − 88θ + 152)/80
                       = −(θ² − (88/13)θ + 152/13)/(80/13)
                       = −((θ − 44/13)² + 152/13 − (44/13)²)/(80/13).

Therefore the posterior is

f(θ|x1) = c3 e^(−((θ−44/13)² + 152/13 − (44/13)²)/(80/13)) = c4 e^(−(θ−44/13)²/(80/13)).

This has the form of the pdf for N(44/13, 40/13). QED

For practice we check this against the formulas (2):

µ_prior = 4, σ²_prior = 8, σ² = 5  ⇒  a = 1/8, b = 1/5.

Therefore

µ_post = (a µ_prior + b x)/(a + b) = 44/13 ≈ 3.38,
σ²_post = 1/(a + b) = 40/13 ≈ 3.08.
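The check above can be automated with exact rational arithmetic (our own sketch, using Python's `fractions` module; the helper name is ours):

```python
from fractions import Fraction

def normal_update(mu_prior, var_prior, var_like, x):
    # formulas (2): a = 1/sigma_prior^2, b = 1/sigma^2
    a = Fraction(1, var_prior)
    b = Fraction(1, var_like)
    mu_post = (a * mu_prior + b * x) / (a + b)
    var_post = 1 / (a + b)
    return mu_post, var_post

# Example 2: prior N(4, 8), likelihood variance 5, one data point x1 = 3.
mu, var = normal_update(4, 8, 5, 3)
print(mu, var)   # 44/13 and 40/13
```

The exact fractions 44/13 and 40/13 agree with the completed-square computation, with no rounding.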

Example 3. Suppose that we know the data x ∼ N(θ, 1) and we have prior N(0, 1). We
get one data value x = 6.5. Describe the changes to the pdf for θ in updating from the
prior to the posterior.


answer: Here is a graph of the prior pdf with the data point marked by a red line.

Prior in blue, posterior in magenta, data in red

The posterior mean will be a weighted average of the prior mean and the data. So the peak
of the posterior pdf will be between the peak of the prior and the red line. A little
algebra with the formula shows

σ²_post = 1/(1/σ²_prior + 1/σ²) = σ²_prior · σ²/(σ²_prior + σ²) < σ²_prior.

That is, the posterior has smaller variance than the prior, i.e. data makes us more certain
about where in its range θ lies.

4.1 More than one data point

Example 4. Suppose we have data x1, x2, x3. Use the formulas (1) to update sequentially.

answer: Let’s label the prior mean and variance as µ0 and σ0². The updated means and
variances will be µi and σi². In sequence we have

µ1/σ1² = µ0/σ0² + x1/σ²;                             1/σ1² = 1/σ0² + 1/σ²
µ2/σ2² = µ1/σ1² + x2/σ² = µ0/σ0² + (x1 + x2)/σ²;     1/σ2² = 1/σ1² + 1/σ² = 1/σ0² + 2/σ²
µ3/σ3² = µ2/σ2² + x3/σ² = µ0/σ0² + (x1 + x2 + x3)/σ²; 1/σ3² = 1/σ2² + 1/σ² = 1/σ0² + 3/σ²

The example generalizes to n data values x1, . . . , xn:

Normal-normal update formulas for n data points

µ_post/σ²_post = µ_prior/σ²_prior + n x̄/σ²,    1/σ²_post = 1/σ²_prior + n/σ²,    x̄ = (x1 + . . . + xn)/n.    (3)

Again we give the easier to read form, showing µ_post is a weighted average of µ_prior and
the sample average x̄:

a = 1/σ²_prior,    b = n/σ²,    µ_post = (a µ_prior + b x̄)/(a + b),    σ²_post = 1/(a + b).    (4)

Interpretation: µ_post is a weighted average of µ_prior and x̄. If the number of data points
is large then the weight b is large and x̄ will have a strong influence on the posterior. If
σ²_prior is small then the weight a is large and µ_prior will have a strong influence on the
posterior. To summarize:

1. Lots of data has a big influence on the posterior.
2. High certainty (low variance) in the prior has a big influence on the posterior.

The actual posterior is a balance of these two influences.

Choosing priors

Class 16, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Learn that the choice of prior affects the posterior.

2. See that too rigid a prior can make it difficult to learn from the data.

3. See that more data lessens the dependence of the posterior on the prior.

4. Be able to make a reasonable choice of prior, based on prior understanding of the system
under consideration.

2 Introduction

Up to now we have always been handed a prior pdf. In this case, statistical inference from
data is essentially an application of Bayes’ theorem. When the prior is known there is no
controversy on how to proceed. The art of statistics starts when the prior is not known
with certainty. There are two main schools on how to proceed in this case: Bayesian and
frequentist. For now we are following the Bayesian approach. Starting next week we will
learn the frequentist approach.

Recall that given data D and a hypothesis H we used Bayes’ theorem to write

P(H|D) = P(D|H) · P(H) / P(D),    i.e.    posterior ∝ likelihood · prior.

Bayesian: Bayesians make inferences using the posterior P(H|D), and therefore always
need a prior P(H). If a prior is not known with certainty the Bayesian must try to make a
reasonable choice. There are many ways to do this and reasonable people might make
different choices. In general it is good practice to justify your choices and to explore a
range of priors to see if they all point to the same conclusion.

Frequentist: Very briefly, frequentists do not try to create a prior. Instead, they make
inferences using the likelihood P(D|H).

We will compare the two approaches in detail once we have more experience with each. For
now we simply list two benefits of the Bayesian approach.

1. The posterior probability P(H|D) for the hypothesis given the evidence is usually exactly
what we’d like to know. The Bayesian can say something like ‘the parameter of interest
has probability 0.95 of being between 0.49 and 0.51.’

2. The assumptions that go into choosing the prior can be clearly spelled out.

More good data: It is always the case that more good data allows for stronger conclusions
and lessens the influence of the prior. The emphasis should be as much on good data
(quality) as on more data (quantity).

3 Example: Dice

Suppose we have a drawer full of dice, each of which has either 4, 6, 8, 12, or 20 sides. This
time, we do not know how many of each type are in the drawer. A die is picked at random
from the drawer and rolled 5 times. The results in order are 4, 2, 4, 7, and 5.

3.1 Uniform prior

Suppose we have no idea what the distribution of dice in the drawer might be. In this case
it’s reasonable to use a flat prior. Here is the update table for the posterior probabilities
that result from updating after each roll. In order to fit all the columns, we leave out the
unnormalized posteriors.

hyp.  prior  lik1  post1  lik2  post2  lik3  post3  lik4  post4  lik5  post5
H4    1/5    1/4   0.370  1/4   0.542  1/4   0.682  0     0.000  0     0.000
H6    1/5    1/6   0.247  1/6   0.241  1/6   0.202  0     0.000  1/6   0.000
H8    1/5    1/8   0.185  1/8   0.135  1/8   0.085  1/8   0.818  1/8   0.876
H12   1/5    1/12  0.123  1/12  0.060  1/12  0.025  1/12  0.161  1/12  0.115
H20   1/5    1/20  0.074  1/20  0.022  1/20  0.005  1/20  0.021  1/20  0.009

This should look familiar. Given the data the final posterior is heavily weighted towards
the hypothesis H8 that the 8-sided die was picked.

3.2 Other priors

To see how much the above posterior depended on our choice of prior, let’s try some other
priors. Suppose we have reason to believe that there are ten times as many 20-sided dice
in the drawer as there are each of the other types. The table becomes:

hyp.  prior  lik1  post1  lik2  post2  lik3  post3  lik4  post4  lik5  post5
H4    0.071  1/4   0.222  1/4   0.453  1/4   0.650  0     0.000  0     0.000
H6    0.071  1/6   0.148  1/6   0.202  1/6   0.193  0     0.000  1/6   0.000
H8    0.071  1/8   0.111  1/8   0.113  1/8   0.081  1/8   0.688  1/8   0.810
H12   0.071  1/12  0.074  1/12  0.050  1/12  0.024  1/12  0.136  1/12  0.107
H20   0.714  1/20  0.444  1/20  0.181  1/20  0.052  1/20  0.176  1/20  0.083

Even here the final posterior is heavily weighted to the hypothesis H8.

What if the 20-sided die is 100 times more likely than each of the others?

hyp.  prior   lik1  post1  lik2  post2  lik3  post3  lik4  post4  lik5  post5
H4    0.0096  1/4   0.044  1/4   0.172  1/4   0.443  0     0.000  0     0.000
H6    0.0096  1/6   0.030  1/6   0.077  1/6   0.131  0     0.000  1/6   0.000
H8    0.0096  1/8   0.022  1/8   0.043  1/8   0.055  1/8   0.266  1/8   0.464
H12   0.0096  1/12  0.015  1/12  0.019  1/12  0.016  1/12  0.053  1/12  0.061
H20   0.9615  1/20  0.889  1/20  0.689  1/20  0.354  1/20  0.681  1/20  0.475

With such a strong prior belief in the 20-sided die, the final posterior gives a lot of weight
to the theory that the data arose from a 20-sided die, even though it is extremely unlikely
the 20-sided die would produce a maximum of 7 in 5 rolls. The posterior now gives roughly
even odds that an 8-sided die versus a 20-sided die was picked.

3.3 Rigid priors

Mild cognitive dissonance. Too rigid a prior belief can overwhelm any amount of data.
Suppose I’ve got it in my head that the die has to be 20-sided. So I set my prior to
P(H20) = 1 with the other 4 hypotheses having probability 0. Look what happens in the
update table.

hyp.  prior  lik1  post1  lik2  post2  lik3  post3  lik4  post4  lik5  post5
H4    0      1/4   0      1/4   0      1/4   0      0     0      0     0
H6    0      1/6   0      1/6   0      1/6   0      0     0      1/6   0
H8    0      1/8   0      1/8   0      1/8   0      1/8   0      1/8   0
H12   0      1/12  0      1/12  0      1/12  0      1/12  0      1/12  0
H20   1      1/20  1      1/20  1      1/20  1      1/20  1      1/20  1

No matter what the data, a hypothesis with prior probability 0 will have posterior probability
0. In this case I’ll never get away from the hypothesis H20, although I might experience
some mild cognitive dissonance.

Severe cognitive dissonance. Rigid priors can also lead to absurdities. Suppose I now
have it in my head that the die must be 4-sided. So I set P(H4) = 1 and the other prior
probabilities to 0. With the given data, on the fourth roll I reach an impasse. A roll of 7
can’t possibly come from a 4-sided die. Yet this is the only hypothesis I’ll allow. My
unnormalized posterior is a column of all zeros which cannot be normalized.

hyp.  prior  lik1  post1  lik2  post2  lik3  post3  lik4  unnorm. post4  post4
H4    1      1/4   1      1/4   1      1/4   1      0     0              ???
H6    0      1/6   0      1/6   0      1/6   0      0     0              ???
H8    0      1/8   0      1/8   0      1/8   0      1/8   0              ???
H12   0      1/12  0      1/12  0      1/12  0      1/12  0              ???
H20   0      1/20  0      1/20  0      1/20  0      1/20  0              ???

I must adjust my belief about what is possible or, more likely, I’ll suspect you of accidentally
or deliberately messing up the data.

4 Example: Malaria

Here is a real example adapted from Statistics, A Bayesian Perspective by Donald Berry:

By the 1950’s scientists had begun to formulate the hypothesis that carriers of the sickle-cell
gene were more resistant to malaria than noncarriers. There was a fair amount of
circumstantial evidence for this hypothesis. It also helped explain the persistence of an
otherwise deleterious gene in the population. In one experiment scientists injected 30
African volunteers with malaria. Fifteen of the volunteers carried one copy of the sickle-cell
gene and the other 15 were noncarriers. Fourteen out of 15 noncarriers developed malaria
while only 2 out of 15 carriers did. Does this small sample support the hypothesis that the
sickle-cell gene protects against malaria?

Let S represent a carrier of the sickle-cell gene and N represent a non-carrier. Let D+
indicate developing malaria and D− indicate not developing malaria. The data can be put
in a table.

       D+   D−
S      2    13    15
N      14   1     15
       16   14    30

Before analysing the data we should say a few words about the experiment and experimental
design. First, it is clearly unethical: to gain some information they infected 16 people with
malaria. We also need to worry about bias. How did they choose the test subjects? Is it
possible the noncarriers were weaker and thus more susceptible to malaria than the carriers?
Berry points out that it is reasonable to assume that an injection is similar to a mosquito
bite, but it is not guaranteed. This last point means that if the experiment shows a relation
between sickle-cell and protection against injected malaria, we need to consider the
hypothesis that the protection from mosquito transmitted malaria is weaker or non-existent.
Finally, we will frame our hypothesis as ’sickle-cell protects against malaria’, but really all
we can hope to say from a study like this is that ’sickle-cell is correlated with protection
against malaria’.

Model. For our model let θS be the probability that an injected carrier S develops malaria
and likewise let θN be the probability that an injected noncarrier N develops malaria. We
assume independence between all the experimental subjects. With this model, the likelihood
is a function of both θS and θN:

P(data|θS, θN) = c θS^2 (1 − θS)^13 θN^14 (1 − θN).

As usual we leave the constant factor c as a letter. (It is a product of two binomial
coefficients: c = (15 choose 2)(15 choose 14).)

Hypotheses. Each hypothesis consists of a pair (θN, θS). To keep things simple we will
only consider a finite number of values for these probabilities. We could easily consider
many more values or even a continuous range of hypotheses.

Assume θS and θN are each one of 0, 0.2, 0.4, 0.6, 0.8, 1. This leads to two-dimensional
tables. First is a table of hypotheses. The color coding indicates the following:

1. Light orange squares along the diagonal are where θS = θN, i.e. sickle-cell makes no
difference one way or the other.

2.
Pink and red squares above the diagonal are where θN > θS , i.e. sickle-cell provides
some protection against malaria.
3. In the red squares θN − θS ≥ 0.6, i.e. sickle-cell provides a lot of protection.
4. White squares below diagonal are where θS > θN , i.e. sickle-cell actually increases the
probability of developing malaria.


θN\θS 0 0.2 0.4 0.6 0.8 1
1 (0,1) (.2,1) (.4,1) (.6,1) (.8,1) (1,1)

0.8 (0,.8) (.2,.8) (.4,.8) (.6,.8) (.8,.8) (1,.8)

0.6 (0,.6) (.2,.6) (.4,.6) (.6,.6) (.8,.6) (1,.6)

0.4 (0,.4) (.2,.4) (.4,.4) (.6,.4) (.8,.4) (1,.4)

0.2 (0,.2) (.2,.2) (.4,.2) (.6,.2) (.8,.2) (1,.2)

0 (0,0) (.2,0) (.4,0) (.6,0) (.8,0) (1,0)

Hypotheses on level of protection due to S:

red = strong; pink = some; orange = none; white = negative.

Next is the table of likelihoods. (Actually we’ve taken advantage of our indifference to scale
and scaled all the likelihoods by 100000/c to make the table more presentable.) Notice that,
to the precision of the table, many of the likelihoods are 0. The color coding is the same as
in the hypothesis table. We’ve highlighted the biggest likelihoods with a blue border.

θN\θS 0 0.2 0.4 0.6 0.8 1
1 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0.8 0.00000 1.93428 0.18381 0.00213 0.00000 0.00000

0.6 0.00000 0.06893 0.00655 0.00008 0.00000 0.00000

0.4 0.00000 0.00035 0.00003 0.00000 0.00000 0.00000

0.2 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

Likelihoods p(data|θS , θN ) scaled by 100000/c

4.1 Flat prior

Suppose we have no opinion whatsoever on whether and to what degree sickle-cell protects
against malaria. In this case it is reasonable to use a flat prior. Since there are 36 hypotheses
each one gets a prior probability of 1/36. This is given in the table below. Remember each
square in the table represents one hypothesis. Because it is a probability table we include
the marginal pmf.

θN\θS 0 0.2 0.4 0.6 0.8 1 p(θN )
1 1/36 1/36 1/36 1/36 1/36 1/36 1/6

0.8 1/36 1/36 1/36 1/36 1/36 1/36 1/6

0.6 1/36 1/36 1/36 1/36 1/36 1/36 1/6

0.4 1/36 1/36 1/36 1/36 1/36 1/36 1/6

0.2 1/36 1/36 1/36 1/36 1/36 1/36 1/6

0 1/36 1/36 1/36 1/36 1/36 1/36 1/6

p(θS) 1/6 1/6 1/6 1/6 1/6 1/6 1

Flat prior p(θS , θN ): every hypothesis (square) has equal probability

To compute the posterior we simply multiply the likelihood table by the prior table and


normalize. Normalization means making sure the entire table sums to 1.

θN\θS 0 0.2 0.4 0.6 0.8 1 p(θN |data)
1 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0.8 0.00000 0.88075 0.08370 0.00097 0.00000 0.00000 0.96542

0.6 0.00000 0.03139 0.00298 0.00003 0.00000 0.00000 0.03440

0.4 0.00000 0.00016 0.00002 0.00000 0.00000 0.00000 0.00018

0.2 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

p(θS |data) 0.00000 0.91230 0.08670 0.00100 0.00000 0.00000 1.00000

Posterior to flat prior: p(θS , θN |data)
To decide whether S confers protection against malaria, we compute the posterior
probabilities of ‘some protection’ and of ‘strong protection’. These are computed by
summing the corresponding squares in the posterior table.

Some protection: P (θN > θS ) = sum of pink and red = .99995

Strong protection: P (θN − θS ≥ 0.6) = sum of red = .88075
Working from the flat prior, it is effectively certain that sickle-cell provides some protection
and very probable that it provides strong protection.

4.2 Informed prior

The experiment was not run without prior information. There was a lot of circumstantial
evidence that the sickle-cell gene offered some protection against malaria. For example it
was reported that a greater percentage of carriers survived to adulthood.

Here’s one way to build an informed prior. We’ll reserve a reasonable amount of probability
for the hypotheses that S gives no protection. Let’s say 24% split evenly among the 6
(orange) cells where θN = θS . We know we shouldn’t set any prior probabilities to 0, so
let’s spread 6% of the probability evenly among the 15 white cells below the diagonal. That
leaves 70% of the probability for the 15 pink and red squares above the diagonal.

θN\θS 0 0.2 0.4 0.6 0.8 1 p(θN )
1 0.04667 0.04667 0.04667 0.04667 0.04667 0.04000 0.27333

0.8 0.04667 0.04667 0.04667 0.04667 0.04000 0.00400 0.23067

0.6 0.04667 0.04667 0.04667 0.04000 0.00400 0.00400 0.18800

0.4 0.04667 0.04667 0.04000 0.00400 0.00400 0.00400 0.14533

0.2 0.04667 0.04000 0.00400 0.00400 0.00400 0.00400 0.10267

0 0.04000 0.00400 0.00400 0.00400 0.00400 0.00400 0.06000

p(θS) 0.27333 0.23067 0.18800 0.14533 0.10267 0.06000 1.0

Informed prior p(θS , θN ): makes use of prior information that sickle-cell is protective.

We then compute the posterior pmf.


θN\θS 0 0.2 0.4 0.6 0.8 1 p(θN |data)
1 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0.8 0.00000 0.88076 0.08370 0.00097 0.00000 0.00000 0.96543

0.6 0.00000 0.03139 0.00298 0.00003 0.00000 0.00000 0.03440

0.4 0.00000 0.00016 0.00001 0.00000 0.00000 0.00000 0.00017

0.2 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

0 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000

p(θS |data) 0.00000 0.91231 0.08669 0.00100 0.00000 0.00000 1.00000

Posterior to informed prior: p(θS , θN |data)
We again compute the posterior probabilities of ‘some protection’ and ‘strong protection’.

Some protection: P (θN > θS ) = sum of pink and red = .99996

Strong protection: P (θN − θS ≥ 0.6) = sum of red = .88076
Note that the informed posterior is nearly identical to the flat posterior.

4.3 PDALX

The following plot is based on the flat prior. For each x, it gives the probability that
θN − θS ≥ x. To make it smooth we used many more hypotheses.


Probability the difference θN − θS is at least x (PDALX).

Notice that it is virtually certain that the difference is at least .4.


Probability intervals

Class 16, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to find probability intervals given a pmf or pdf.

2. Understand how probability intervals summarize belief in Bayesian updating.

3. Be able to use subjective probability intervals to construct reasonable priors.

4. Be able to construct subjective probability intervals by systematically estimating
quantiles.

2 Probability intervals

Suppose we have a pmf p(θ) or pdf f(θ) describing our belief about the value of an unknown

parameter of interest θ.

Definition: A p-probability interval for θ is an interval [a, b] with P (a ≤ θ ≤ b) = p.

Notes.

1. In the discrete case with pmf p(θ), this means Σ_{a ≤ θi ≤ b} p(θi) = p.

2. In the continuous case with pdf f(θ), this means ∫_a^b f(θ) dθ = p.

3. We may say 90%-probability interval to mean 0.9-probability interval. Probability
intervals are also called credible intervals to contrast them with confidence intervals, which
we’ll introduce in the frequentist unit.

Example 1. Between the 0.05 and 0.55 quantiles is a 0.5 probability interval. There are
many 50% probability intervals, e.g. the interval from the 0.25 to the 0.75 quantiles.

In particular, notice that the p-probability interval for θ is not unique.

Q-notation. We can phrase probability intervals in terms of quantiles. Recall that the
s-quantile for θ is the value qs with P (θ ≤ qs) = s. So for s ≤ t, the amount of probability
between the s-quantile and the t-quantile is just t − s. In these terms, a p-probability
interval is any interval [qs, qt] with t − s = p.
Example 2. We have 0.5 probability intervals [q0.25, q0.75] and [q0.05, q0.55].

Symmetric probability intervals.
The interval [q0.25, q0.75] is symmetric because the amount of probability remaining on either
side of the interval is the same, namely 0.25. If the pdf is not too skewed, the symmetric
interval is usually a good default choice.
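The quantile view of probability intervals is easy to try out (our own sketch, not part of the notes) using the standard library's `statistics.NormalDist`, whose `inv_cdf` method computes quantiles:

```python
from statistics import NormalDist

Z = NormalDist(0, 1)   # standard normal

sym = (Z.inv_cdf(0.25), Z.inv_cdf(0.75))    # symmetric 0.5-probability interval
skew = (Z.inv_cdf(0.05), Z.inv_cdf(0.55))   # also a 0.5-probability interval

width_sym = sym[1] - sym[0]
width_skew = skew[1] - skew[0]
print(width_sym, width_skew)   # the symmetric interval is narrower
```

Both intervals cover exactly probability 0.5, but the symmetric one is shorter, illustrating why it is a good default for a not-too-skewed pdf.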

More notes.
1. Different p-probability intervals for θ may have different widths. We can make the width
smaller by centering the interval under the highest part of the pdf. Such an interval is
usually a good choice since it contains the most likely values. See the examples below for
normal and beta distributions.

2. Since the width can vary for fixed p, a larger p does not always mean a larger width.
Here’s what is true: if a p1-probability interval is fully contained in a p2-probability interval,
then p1 is smaller than p2.

Probability intervals for a normal distribution. The figure shows a number of probability intervals for the standard normal.

1. All of the red bars span a 0.68-probability interval. Notice that the smallest red bar
runs between -1 and 1. This runs from the 16th percentile to the 84th percentile, so it is a
symmetric interval.

2. All the magenta bars span a 0.9-probability interval. They are longer than the red
bars because they include more probability. Note again that the shortest magenta bar is
symmetric.

[Figure: probability intervals for the standard normal; red bars = 0.68, magenta = 0.9, green = 0.5]

Probability intervals for a beta distribution. The following figure shows probability
intervals for a beta distribution. Notice how the two red bars have very different lengths
yet cover the same probability p = 0.68.

[Figure: probability intervals for a beta distribution; red bars = 0.68, magenta = 0.9, green = 0.5]

3 Uses of probability intervals

3.1 Summarizing and communicating your beliefs

Probability intervals are an intuitive and effective way to summarize and communicate your
beliefs. It’s hard to describe an entire function f(θ) to a friend in words. If the function isn’t
from a parameterized family then it’s especially hard. Even with a beta distribution, it’s
easier to interpret “I think θ is between 0.45 and 0.65 with 50% probability” than “I think θ
follows a beta(8,6) distribution”. An exception to this rule of communication might be the
normal distribution, but only if the recipient is also comfortable with standard deviation.
Of course, what we gain in clarity we lose in precision, since the function contains more
information than the probability interval.

Probability intervals also play well with Bayesian updating. If we update from the prior
f(θ) to the posterior f(θ|x), then the p-probability interval for the posterior will tend to be
shorter than the p-probability interval for the prior. In this sense, the data has made
us more certain. See for example the election example below.

4 Constructing a prior using subjective probability intervals

Probability intervals are also useful when we do not have a pmf or pdf at hand. In this
case, subjective probability intervals give us a method for constructing a reasonable prior
for θ “from scratch”. The thought process is to ask yourself a series of questions, e.g., ‘what
is my expected value for θ?’; ‘my 0.5-probability interval?’; ‘my 0.9-probability interval?’
Then build a prior that is consistent with these intervals.

4.1 Estimating the intervals directly

Example 3. Building priors
In 2013 there was a special election for a congressional seat in a district in South Carolina.
The election pitted Republican Mark Sanford against Democrat Elizabeth Colbert Busch.
Let θ be the fraction of the population who favored Busch. Our goal in this example is to
build a subjective prior for θ. We’ll use the following prior evidence.

• Sanford is a former S. Carolina Congressman and Governor

• He had famously resigned after having an affair in Argentina while he claimed to be
hiking the Appalachian trail.

• In 2013 Sanford won the Republican primary over 15 primary opponents.

• In the district in the 2012 presidential election the Republican Romney beat the
Democrat Obama 58% to 40%.

• The Colbert bump: Elizabeth Colbert Busch is the sister of well-known comedian
Stephen Colbert.

Our strategy will be to use our intuition to construct some probability intervals and then
find a beta distribution that approximately matches these intervals. This is subjective so
someone else might give a different answer.

Step 1. Use the evidence to construct 0.5 and 0.9 probability intervals for θ.

We’ll start by thinking about the 90% interval. The single strongest prior evidence is the
58% to 40% victory of Romney over Obama. Given the negatives for Sanford we don't expect he'll
win much more than 58% of the vote. So we’ll put the top of the 0.9 interval at 0.65. With
all of Sanford’s negatives he could lose big. So we’ll put the bottom at 0.3.

0.9 interval: [0.3, 0.65]

For the 0.5 interval we’ll pull these endpoints in. It really seems unlikely Sanford will get
more votes than Romney, so we can leave 0.25 probability that he’ll get above 57%. The
lower limit seems harder to predict. So we’ll leave 0.25 probability that he’ll get under 42%.

0.5 interval: [0.42, 0.57]

Step 2. Use our 0.5 and 0.9 probability intervals to pick a beta distribution that approximates these intervals. We used the R function pbeta and a little trial and error to choose
beta(11,12). Here is our R code.

a = 11
b = 12
pbeta(0.65, a, b) - pbeta(0.3, a, b)
pbeta(0.57, a, b) - pbeta(0.42, a, b)

This computed P ([0.3, 0.65]) = 0.91 and P ([0.42, 0.57]) = 0.52. So our intervals are actually
0.91 and 0.52-probability intervals. This is pretty close to what we wanted!
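If you prefer Python, the same check can be approximated with a Monte Carlo sketch using the standard library's random.betavariate (the standard library has no beta cdf; R's pbeta above is exact):

```python
import random

random.seed(1)
samples = [random.betavariate(11, 12) for _ in range(200_000)]

def prob(lo, hi):
    """Monte Carlo estimate of P(lo <= theta <= hi) under beta(11, 12)."""
    return sum(lo <= x <= hi for x in samples) / len(samples)

print(round(prob(0.30, 0.65), 2))  # close to 0.91
print(round(prob(0.42, 0.57), 2))  # close to 0.52
```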

Below is a graph of the density of beta(11,12). The red line shows our interval [0.42, 0.57]
and the blue line shows our interval [0.3, 0.65].

[Figure: PDF of beta(11,12) (left) and PDF of beta(9.9,11.0) (right), each plotted for 0 ≤ θ ≤ 1. The beta(9.9,11.0) plot marks its quartiles q0.25 = 0.399, q0.5 = 0.472, q0.75 = 0.547.]

beta(11,12) found using probability intervals and beta(9.9,11.0) found using quantiles

4.2 Constructing a prior by estimating quantiles

The method in Example 3 gives a good feel for building priors from probability intervals.
Here we illustrate a slightly different way of building a prior by estimating quantiles. The
basic strategy is to first estimate the median, then divide and conquer to estimate the first
and third quartiles. Finally you choose a prior distribution that fits these estimates.

Example 4. Redo the Sanford vs. Colbert-Busch election example using quantiles.

answer: We start by estimating the median. Just as before the single strongest evidence is
the 58% to 40% victory of Romney over Obama. However, given Sanford’s negatives and
Busch’s Colbert bump we’ll estimate the median at 0.47.

In a district that went 58 to 40 for the Republican Romney it’s hard to imagine Sanford’s
vote going a lot below 40%. So we'll estimate Sanford's 25th percentile as 0.40. Likewise,
given his negatives it’s hard to imagine him going above 58%, so we’ll estimate his 75th
percentile as 0.55.

We used R to search through values of a and b for the beta distribution that matches these
quartiles the best. Since the beta distribution does not require a and b to be integers we
looked for the best fit to 1 decimal place. We found beta(9.9, 11.0). Above is a plot of
beta(9.9,11.0) with its actual quartiles shown. These match the desired quartiles pretty
well.
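The search itself used R's qbeta; as an illustrative stand-in, here is a Python sketch that checks the fit by estimating the quartiles of beta(9.9, 11.0) from Monte Carlo draws with the standard library's random.betavariate:

```python
import random

def mc_quartiles(a, b, n=100_000, seed=0):
    """Monte Carlo estimates of the quartiles of beta(a, b)."""
    rng = random.Random(seed)
    xs = sorted(rng.betavariate(a, b) for _ in range(n))
    return xs[n // 4], xs[n // 2], xs[3 * n // 4]

q25, q50, q75 = mc_quartiles(9.9, 11.0)
print(round(q25, 2), round(q50, 2), round(q75, 2))  # near 0.40, 0.47, 0.55
```

A grid search over a and b to one decimal place would then minimize, say, the squared error between these estimated quartiles and the targets (0.40, 0.47, 0.55).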

Historic note. In the election Sanford won 54% of the vote and Busch won 45.2%. (Source:
http://elections.huffingtonpost.com/2013/mark-sanford-vs-elizabeth-colbert-busch-sc1

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


The Frequentist School of Statistics

Class 17, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to explain the difference between the frequentist and Bayesian approaches to
statistics.

2. Know our working definition of a statistic and be able to distinguish a statistic from a
non-statistic.

2 Introduction

After much foreshadowing, the time has finally come to switch from Bayesian statistics to
frequentist statistics. For much of the twentieth century, frequentist statistics has been the
dominant school. If you've ever encountered confidence intervals, p-values, t-tests, or χ2-tests, you've seen frequentist statistics. With the rise of high-speed computing and big data,
Bayesian methods are becoming more common. After we’ve studied frequentist methods
we will compare the strengths and weaknesses of the two approaches.

2.1 The fork in the road

Both schools of statistics start with probability. In particular both know and love Bayes’
theorem:

P(H|D) = P(D|H) P(H) / P(D).

When the prior is known exactly all statisticians will use this formula. For Bayesian inference
we take H to be a hypothesis and D some data. Over the last few weeks we have seen that,
given a prior and a likelihood model, Bayes’ theorem is a complete recipe for updating our
beliefs in the face of new data. This works perfectly when the prior is known exactly. We
saw this in our dice examples. We also saw examples of a disease with a known frequency
in the general population and a screening test of known accuracy.

In practice we saw that there is usually no universally-accepted prior – different people
will have different a priori beliefs – but we would still like to make useful inferences from
data. Bayesians and frequentists take fundamentally different approaches to this challenge,
as summarized in the figure below.


[Figure: the fork in the road.

Probability (mathematics) leads to Statistics (art).

P(H|D) = P(D|H) P(H) / P(D): everyone uses Bayes' formula when the prior P(H) is known.

Bayesian path: P_posterior(H|D) = P(D|H) P_prior(H) / P(D). Bayesians require a prior, so they develop one from the best information they have.

Frequentist path: likelihood L(H; D) = P(D|H). Without a known prior, frequentists draw inferences from just the likelihood function.]

The reasons for this split are both practical (ease of implementation and computation) and
philosophical (subjectivity versus objectivity and the nature of probability).

2.2 What is probability?

The main philosophical difference concerns the meaning of probability. The term frequentist
refers to the idea that probabilities represent longterm frequencies of repeatable random
experiments. For example, ‘a coin has probability 1/2 of heads’ means that the relative
frequency of heads (number of heads out of number of flips) goes to 1/2 as the number of
flips goes to infinity. This means the frequentist finds it non-sensical to specify a probability
distribution for a parameter with a fixed value. While Bayesians are happy to use probability
to describe their incomplete knowledge of a fixed parameter, frequentists reject the use of
probability to quantify degree of belief in hypotheses.

Example 1. Suppose I have a bent coin with unknown probability θ of heads. The value
of θ may be unknown, but it is a fixed value. Thus, to the frequentist there can be no prior
pdf f(θ). By comparison the Bayesian may agree that θ has a fixed value, but interprets
f(θ) as representing uncertainty about that value. Both the Bayesian and the frequentist
are perfectly happy with p(heads | θ) = θ, since the longterm frequency of heads given θ is
θ.

In short, Bayesians put probability distributions on everything (hypotheses and data), while
frequentists put probability distributions on (random, repeatable, experimental) data given
a hypothesis. For the frequentist when dealing with data from an unknown distribution
only the likelihood has meaning. The prior and posterior do not.

3 Working definition of a statistic

Our view of statistics is that it is the art of drawing conclusions (making inferences) from
data. With that in mind we can make a simple working definition of a statistic. There is a
more formal definition, but we don’t need to introduce it at this point.

Statistic. A statistic is anything that can be computed from data. Sometimes to be more
precise we’ll say a statistic is a rule for computing something from data and the value of the
statistic is what is computed. This can include computing likelihoods where we hypothesize
values of the model parameters. But it does not include anything that requires we know
the true value of a model parameter with unknown value.

Examples.
1. The mean of data is a statistic. It is a rule that says: given data x1, . . . , xn, compute (x1 + · · · + xn)/n.

2. The maximum of data is a statistic. It is a rule that says to pick the maximum value of
the data x1, . . . , xn.

3. Suppose x ∼ N(µ, 9) where µ is unknown. Then the likelihood

p(x | µ = 7) = (1/(3√(2π))) e^(−(x−7)²/18)

is a statistic. However, the distance of x from the true mean µ is not a statistic, since we
cannot compute it without knowing µ.
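In code the point is easy to see: the likelihood plugs in the hypothesized value µ = 7, so it can be evaluated from the data alone. A minimal Python sketch:

```python
import math

def likelihood_at_7(x):
    """p(x | mu = 7) for x ~ N(mu, 9); computable from the data alone."""
    return math.exp(-(x - 7) ** 2 / 18) / (3 * math.sqrt(2 * math.pi))

print(likelihood_at_7(7.0))  # the peak value 1/(3*sqrt(2*pi)), about 0.133
```

By contrast, a function like abs(x - mu) cannot be coded without being handed the unknown µ, so it is not a statistic.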

Point statistic. A point statistic is a single value computed from data. For example, the
mean and the maximum are both point statistics. The maximum likelihood estimate is also
a point statistic since it is computed directly from the data based on a likelihood model.

Interval statistic. An interval statistic is an interval computed from data. For example,
the range from the minimum to maximum of x1, . . . , xn is an interval statistic, e.g. the data
0.5, 1.0, 0.2, 3.0, 5.0 has range [0.2, 5.0].

Set statistic. A set statistic is a set computed from data.

Example. Suppose we have five dice: 4, 6, 8, 12 and 20-sided. We pick one at random and
roll it once. The value of the roll is the data. The set of dice for which this roll is possible
is a set statistic. For example, if the roll is a 10 then the value of this set statistic is {12,
20}. If the roll is a 7 then this set statistic has value {8, 12, 20}.
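This set statistic is a straightforward computation from the data; a minimal Python sketch:

```python
def possible_dice(roll, dice=(4, 6, 8, 12, 20)):
    """Set statistic: the dice on which the observed roll is possible."""
    return {sides for sides in dice if roll <= sides}

print(possible_dice(10))  # equals {12, 20}
print(possible_dice(7))   # equals {8, 12, 20}
```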
It's important to remember that a statistic is itself a random variable, since it is computed
from random data. For example, if data is drawn from N(µ, σ2) then the mean of n data
points follows N(µ, σ2/n).

Sampling distribution. The probability distribution of a statistic is called its sampling
distribution.
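A quick simulation shows the sampling distribution of the mean in action (a hedged Python sketch; the choices µ = 5, σ = 2, n = 25 are arbitrary):

```python
import random
from statistics import mean, pstdev

random.seed(2)
mu, sigma, n = 5.0, 2.0, 25

# Draw many samples of size n from N(mu, sigma^2); record each sample mean.
means = [mean(random.gauss(mu, sigma) for _ in range(n)) for _ in range(20_000)]

print(round(mean(means), 2))    # near mu = 5.0
print(round(pstdev(means), 2))  # near sigma/sqrt(n) = 0.4
```

The simulated means cluster around µ with standard deviation close to σ/√n, matching the N(µ, σ2/n) sampling distribution stated above.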

Point estimate. We can use statistics to make a point estimate of a parameter θ. For
example, if the parameter θ represents the true mean then the data mean x̄ is a point
estimate of θ.


Null Hypothesis Significance Testing I
Class 17, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Know the definitions of the significance testing terms: NHST, null hypothesis, alternative
hypothesis, simple hypothesis, composite hypothesis, significance level, power.

2. Be able to design and run a significance test for Bernoulli or binomial data.

3. Be able to compute a p-value for a normal hypothesis and use it in a significance test.

2 Introduction

Frequentist statistics is often applied in the framework of null hypothesis significance testing
(NHST). We will look at the Neyman-Pearson paradigm which focuses on one hypothesis
called the null hypothesis. There are other paradigms for hypothesis testing, but Neyman-
Pearson is the most common. Stated simply, this method asks if the data is well outside
the region where we would expect to see it under the null hypothesis. If so, then we reject
the null hypothesis in favor of a second hypothesis called the alternative hypothesis.

The computations done here all involve the likelihood function. There are two main differences between what we'll do here and what we did in Bayesian updating.

1. The evidence of the data will be considered purely through the likelihood function; it
will not be weighted by our prior beliefs.

2. We will need a notion of extreme data, e.g. 95 out of 100 heads in a coin toss or a Mayfly
that lives for a month.

2.1 Motivating examples

Example 1. Suppose you want to decide whether a coin is fair. If you toss it 100 times
and get 85 heads, would you think the coin is likely to be unfair? What about 60 heads? Or
52 heads? Most people would guess that 85 heads is strong evidence that the coin is unfair,
whereas 52 heads is no evidence at all. Sixty heads is less clear. Null hypothesis significance
testing (NHST) is a frequentist approach to thinking quantitatively about these questions.

Example 2. Suppose you want to compare a new medical treatment to a placebo or
the current standard of care. What sort of evidence would convince you that the new
treatment is better than the placebo or the current standard? Again, NHST is a quantitative
framework for answering these questions.

3 Significance testing

We’ll start by listing the ingredients for NHST. Formally they are pretty simple. There is
an art to choosing good ingredients. We will explore the art in examples. If you have never
seen NHST before just scan this list now and come back to it after reading through the
examples and explanations given below.

3.1 Ingredients

• H0: the null hypothesis. This is the default assumption for the model generating the
data.

• HA: the alternative hypothesis. If we reject the null hypothesis we accept this alternative as the best explanation for the data.

• X: the test statistic. We compute this from the data.
• Null distribution: the probability distribution of X assuming H0.
• Rejection region: if X is in the rejection region we reject H0 in favor of HA.
• Non-rejection region: the complement to the rejection region. If X is in this region

we do not reject H0. Note that we say ‘do not reject’ rather than ‘accept’ because
usually the best we can say is that the data does not support rejecting H0.

The null hypothesis H0 and the alternative hypothesis HA play different roles. Typically
we choose H0 to be either a simple hypothesis or the default which we’ll only reject if we
have enough evidence against it. The examples below will clarify this.

4 NHST Terminology

In this section we will use one extended example to introduce and explore the terminology
used in null hypothesis significance testing (NHST).

Example 3. To test whether a coin is fair we flip it 10 times. If we get an unexpectedly
large or small number of heads we’ll suspect the coin is unfair. To make this precise in the
language of NHST we set up the ingredients as follows. Let θ be the probability that the
coin lands heads when flipped.

1. Null hypothesis: H0 = ‘the coin is fair’, i.e. θ = 0.5.
2. Alternative hypothesis: HA = ‘the coin is not fair’, i.e. θ ≠ 0.5
3. Test statistic: X = number of heads in 10 flips
4. Null distribution: This is the probability function based on the null hypothesis

p(x | θ = 0.5) ∼ binomial(10, 0.5).
Here is the probability table for the null distribution.

x 0 1 2 3 4 5 6 7 8 9 10

p(x |H0) .001 .010 .044 .117 .205 .246 .205 .117 .044 .010 .001
5. Rejection region: under the null hypothesis we expect to get about 5 heads in 10 tosses.
We'll reject H0 if the number of heads is much smaller or larger than 5. Let's set the rejection
region as {0, 1, 2, 8, 9, 10}. That is, if the number of heads in 10 tosses is in this region we
will reject the hypothesis that the coin is fair in favor of the hypothesis that it is not.

We can summarize all this in the graph and probability table below. The rejection region
consists of those values of x in red. The probabilities corresponding to it are shaded in red.
We also show the null distribution as a stem plot with the rejection values of x in red.

x 0 1 2 3 4 5 6 7 8 9 10

p(x|H0) .001 .010 .044 .117 .205 .246 .205 .117 .044 .010 .001

Rejection region and null probabilities as a table for Example 3.

[Stem plot of the null distribution p(x | H0), with the rejection-region values x = 0, 1, 2, 8, 9, 10 shown in red.]

Rejection region and null probabilities as a stem plot for Example 3.

Notes for Example 3:
1. The null hypothesis is the cautious default: we won’t claim the coin is unfair unless we
have compelling evidence.
2. The rejection region consists of data that is extreme under the null hypothesis. That is,
it consists of the outcomes that are in the tail of the null distribution away from the high
probability center. As we’ll discuss soon, how far away depends on the significance level α
of the test.
3. If we get 3 heads in 10 tosses, then the test statistic is in the non-rejection region. The
usual scientific language would be to say that the data ‘does not support rejecting the null
hypothesis’. Even if we got 5 heads, we would not claim that the data proves the null
hypothesis is true.

Question: If we have a fair coin, what is the probability that we will incorrectly decide it
is unfair?

answer: The null hypothesis is that the coin is fair. The question asks for the probability
the data from a fair coin will be in the rejection region. That is, the probability that we
will get 0, 1, 2, 8, 9 or 10 heads in 10 tosses. This is the sum of the probabilities in red.
That is,

P (rejecting H0 |H0 is true) = 0.11
Below we will continue with Example 3, define more terms used in NHST and see how to
quantify properties of the significance test.
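The 0.11 is just a sum of binomial probabilities, as are the powers computed below for alternative values of θ. A Python sketch of these sums:

```python
from math import comb

rejection = {0, 1, 2, 8, 9, 10}

def p_reject(theta, n=10):
    """P(test statistic lands in the rejection region | theta), X ~ binomial(n, theta)."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k) for k in rejection)

print(p_reject(0.5))  # significance level, 112/1024, about 0.11
print(p_reject(0.6))  # power at theta = 0.6, about 0.180
print(p_reject(0.7))  # power at theta = 0.7, about 0.384
```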

4.1 Simple and composite hypotheses

Definition: simple hypothesis: A simple hypothesis is one for which we can specify its
distribution completely. A typical simple hypothesis is that a parameter of interest takes a
specific value.

Definition: composite hypothesis: If its distribution cannot be fully specified, we say
that the hypothesis is composite. A typical composite hypothesis is that a parameter of
interest lies in a range of values.

In Example 3 the null hypothesis is that θ = 0.5, so the null distribution is binomial(10, 0.5).
Since the null distribution is fully specified, H0 is simple. The alternative hypothesis is that
θ ≠ 0.5. This is really many hypotheses in one: θ could be 0.51, 0.7, 0.99, etc. Since the
alternative distribution binomial(10, θ) is not fully specified, HA is composite.

Example 4. Suppose we have data x1, . . . , xn. Suppose also that our hypotheses are
H0: the data is drawn from N(0, 1)
HA: the data is drawn from N(1, 1).
These are both simple hypotheses – each hypothesis completely specifies a distribution.

Example 5. (Composite hypotheses.) Now suppose that our hypotheses are
H0: the data is drawn from a Poisson distribution of unknown parameter.
HA: the data is not drawn from a Poisson distribution.
These are both composite hypotheses, as they don’t fully specify the distribution.

Example 6. In an ESP experiment a subject is asked to identify the suits of 100 cards
drawn (with replacement) from a deck of cards. Let T be the number of successes. The
(simple) null hypothesis that the subject does not have ESP is given by

H0: T ∼ binomial(100, 0.25)
The (composite) alternative hypothesis that the subject has ESP is given by

HA: T ∼ binomial(100, p) with p > 0.25
Another (composite) alternative hypothesis is that something besides pure chance is going on,
i.e. the subject has ESP or anti-ESP. This is given by

HA: T ∼ binomial(100, p), with p ≠ 0.25
Values of p < 0.25 represent hypotheses that the subject has a kind of anti-ESP.

4.2 Types of error

There are two types of errors we can make. We can incorrectly reject the null hypothesis
when it is true or we can incorrectly fail to reject it when it is false. These are unimaginatively
labeled type I and type II errors. We summarize this in the following table.

                                   True state of nature
                                   H0                HA
Our       Reject H0                Type I error      correct decision
decision  ‘Don’t reject’ H0        correct decision  Type II error

Type I: false rejection of H0
Type II: false non-rejection (‘acceptance’) of H0

4.3 Significance level and power

Significance level and power are used to quantify the quality of the significance test. Ideally
a significance test would not make errors. That is, it would not reject H0 when H0 was true
and would reject H0 in favor of HA when HA was true. Altogether there are 4 important
probabilities corresponding to the 2×2 table just above.

P(reject H0 | H0)           P(reject H0 | HA)
P(do not reject H0 | H0)    P(do not reject H0 | HA)

The two probabilities we focus on are:

Significance level = P(reject H0 | H0)
                   = probability we incorrectly reject H0
                   = P(type I error).

Power = probability we correctly reject H0
      = P(reject H0 | HA)
      = 1 − P(type II error).

Ideally, a hypothesis test should have a small significance level (near 0) and a large power
(near 1). Here are two analogies to help you remember the meanings of significance and
power.

Some analogies
1. Think of H0 as the hypothesis ‘nothing noteworthy is going on’, i.e. ‘the coin is fair’,
‘the treatment is no better than placebo’ etc. And think of HA as the opposite: ‘something
interesting is happening’. Then power is the probability of detecting something interesting
when it's present and significance level is the probability of mistakenly claiming something
interesting has occurred.

2. In the U.S. criminal defendants are presumed innocent until proven guilty beyond a
reasonable doubt. We can phrase this in NHST terms as
H0: the defendant is innocent (the default)
HA: the defendant is guilty.
Significance level is the probability of finding an innocent person guilty. Power is the
probability of correctly finding a guilty party guilty. ‘Beyond a reasonable doubt’ means we
should demand the significance level be very small.

Composite hypotheses
HA is composite in Example 3, so the power is different for different values of θ. We expand
the previous probability table to include some alternate values of θ. We do the same with
the stem plots. As always in the NHST game, we look at likelihoods: the probability of the
data given a hypothesis.

x                     0     1      2     3     4     5     6     7     8     9     10
H0: p(x | θ = 0.5)  .001  .010   .044  .117  .205  .246  .205  .117  .044  .010  .001
HA: p(x | θ = 0.6)  .000  .002   .011  .042  .111  .201  .251  .215  .121  .040  .006
HA: p(x | θ = 0.7)  .000  .0001  .001  .009  .037  .103  .200  .267  .233  .121  .028

[Stem plots of p(x | θ = 0.5), p(x | θ = 0.6) and p(x | θ = 0.7), with the rejection region {0, 1, 2, 8, 9, 10} highlighted. Rejection region and null and alternative probabilities for Example 3.]

We use the probability table to compute the significance level and power of this test.

Significance level = probability we reject H0 when it is true
= probability the test statistic is in the rejection region when H0 is true
= sum of the rejection-region entries in the θ = 0.5 row
= 0.11

Power when θ = 0.6 = probability we reject H0 when θ = 0.6
= probability the test statistic is in the rejection region when θ = 0.6
= sum of the rejection-region entries in the θ = 0.6 row
= 0.180

Power when θ = 0.7 = probability we reject H0 when θ = 0.7
= probability the test statistic is in the rejection region when θ = 0.7
= sum of the rejection-region entries in the θ = 0.7 row
= 0.384

We see that the power is greater for θ = 0.7 than for θ = 0.6. This isn't surprising, since we
expect it to be easier to recognize that a 0.7 coin is unfair than it is to recognize that a 0.6
coin is unfair. Typically, we get higher power when the alternate hypothesis is farther from
the null hypothesis. In Example 3, it would be quite hard to distinguish a fair coin from
one with θ = 0.51.

4.4 Conceptual sketches

We illustrate the notions of null hypothesis, rejection region and power with some sketches
of the pdfs for the null and alternative hypotheses.

4.4.1 Null distribution: rejection and non-rejection regions

The first diagram below illustrates a null distribution with rejection and non-rejection
regions. Also shown are two possible test statistics: x1 and x2.

[Figure: pdf f(x | H0) with a two-sided rejection region in the tails; x1 lies in the non-rejection region and x2 lies in the rejection region.]

The test statistic x1 is in the non-rejection region. So, if our data produced the test statistic
x1 then we would not reject the null hypothesis H0. On the other hand the test statistic x2
is in the rejection region, so if our data produced x2 then we would reject the null hypothesis
in favor of the alternative hypothesis.

There are several things to note in this picture.
1. The rejection region consists of values far from the center of the null distribution.
2. The rejection region is two-sided. We will see examples of one-sided rejection regions as
well.
3. The alternative hypothesis is not mentioned. We reject or don't reject H0 based only on
the likelihood f(x | H0), i.e. the probability of the test statistic conditioned on H0. As we
will see, the alternative hypothesis HA should be considered when choosing a rejection
region, but formally it does not play a role in rejecting or not rejecting H0.
4. Sometimes we rather lazily call the non-rejection region the acceptance region. This is
technically incorrect because we never truly accept the null hypothesis. We either reject or
say the data does not support rejecting H0. This is often summarized by the statement:
you can never prove the null hypothesis.

4.4.2 High and low power tests

The next two figures show high and low power tests. The shaded area under f(x|H0)
represents the significance level. Remember the significance level is
• The probability of falsely rejecting the null hypothesis when it is true.
• The probability the test statistic falls in the rejection region even though H0 is true.
Likewise, the shaded area under f(x|HA) represents the power, i.e. the probability that the
test statistic is in the rejection (of H0) region when HA is true. Both tests have the same
significance level, but if f(x|HA) has considerable overlap with f(x|H0) the power is much
lower. It is well worth your while to thoroughly understand these graphical representations
of significance testing.

[Figures: a high power test (means of f(x|H0) and f(x|HA) are 4 standard deviations apart) and a low power test (means are 0.4 standard deviations apart); each shows the rejection and non-rejection regions and test statistics x1, x2, x3.]

In both tests both distributions are standard normal. The null distribution, rejection region
and significance level are all the same. (The significance level is the shaded area under
f(x | H0) and above the rejection region.) In the top figure we see the means of the two
distributions are 4 standard deviations apart. Since the areas under the densities have very
little overlap the test has high power.
That is, if the data x is drawn from HA it will almost certainly be in the rejection region.
For example, x3 would be a very surprising outcome for the HA distribution.

In the bottom figure we see the means of the two distributions are just 0.4 standard deviations
apart. Since the areas under the densities have a lot of overlap the test has low power. That
is, if the data x is drawn from HA it is highly likely to be in the non-rejection region. For
example, x3 would not be a very surprising outcome for the HA distribution.

Typically we can increase the power of a test by increasing the amount of data and thereby
decreasing the variance of the null and alternative distributions. In experimental design it
is important to determine ahead of time the number of trials or subjects needed to achieve
a desired power.

Example 7. Suppose a drug for a disease is being compared to a placebo. We choose our
null and alternative hypotheses as
H0 = the drug does not work better than the placebo
HA = the drug works better than the placebo
The power of the hypothesis test is the probability that the test will conclude that the drug
is better, if it is indeed truly better. The significance level is the probability that the test
will conclude that the drug works better, when in fact it does not.

5 Designing a hypothesis test

Formally, all a hypothesis test requires is H0, HA, a test statistic and a rejection region. In
practice the design is often done using the following steps.

1. Pick the null hypothesis H0.
The choice of H0 and HA is not mathematics. It's art and custom. We often choose H0
to be simple. Or we often choose H0 to be the simplest or most cautious explanation, i.e.
no effect of the drug, no ESP, no bias in the coin.

2. Decide if HA is one-sided or two-sided.
In Example 3 we wanted to know if the coin was unfair.
An unfair coin could be biased for or against heads, so HA : θ ≠ 0.5 is a two-sided hypothesis. If we only care whether or not the coin is biased for heads we could use the one-sided hypothesis HA : θ > 0.5.

3. Pick a test statistic.
For example, the sample mean, sample total, or sample variance. Often the choice is obvious.
Some standard statistics that we will encounter are z, t, and χ2. We will learn to use these
statistics as we work examples over the next few classes. One thing we will say repeatedly
is that the distributions that go with these statistics are always conditioned on the null
hypothesis. That is, we will compute likelihoods such as f(z |H0).
4. Pick a significance level and determine the rejection region.
We will usually use α to denote the significance level. The Neyman-Pearson paradigm is to
pick α in advance. Typical values are 0.1, 0.05, 0.01. Recall that the significance level is
the probability of a type I error, i.e. of incorrectly rejecting the null hypothesis when it is
true. The value we choose will depend on the consequences of a type I error.

Once the significance level is chosen we can determine the rejection region in the tail(s) of
the null distribution. In Example 3, HA is two sided so the rejection region is split between
the two tails of the null distribution. This distribution is given in the following table:

x 0 1 2 3 4 5 6 7 8 9 10

p(x|H0) .001 .010 .044 .117 .205 .246 .205 .117 .044 .010 .001

If we set α = 0.05 then the rejection region must contain at most .05 probability. For a
two-sided rejection region we get

{0, 1, 9, 10}.
If we set α = 0.01 the rejection region is

{0, 10}.

Suppose we change HA to ‘the coin is biased in favor of heads’. We now have a one-sided
hypothesis θ > 0.5. Our rejection region will now be in the right-hand tail since we don’t
want to reject H0 in favor of HA if we get a small number of heads. Now if α = 0.05 the
rejection region is the one-sided range

{9, 10}.
If we set α = 0.01 then the rejection region is

{10}.
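These rejection regions can be checked mechanically. Here is an illustrative Python sketch (not part of the original notes, which use R): it builds the Binomial(10, 0.5) null pmf with the standard library and grows the tail region while its total probability under H0 stays at most α.

```python
from math import comb

# Null distribution: number of heads in 10 tosses of a fair coin.
pmf = {k: comb(10, k) / 2**10 for k in range(11)}

def one_sided_region(alpha):
    """Grow the right tail {10}, {9, 10}, ... while P(region | H0) <= alpha."""
    region = []
    for k in range(10, -1, -1):
        if sum(pmf[j] for j in region) + pmf[k] > alpha:
            break
        region.append(k)
    return sorted(region)

def two_sided_region(alpha):
    """Grow both tails symmetrically, pairing k with 10 - k from the outside in."""
    region = []
    for k in range(5):
        candidate = region + [k, 10 - k]
        if sum(pmf[j] for j in candidate) > alpha:
            break
        region = candidate
    return sorted(region)

print(two_sided_region(0.05))  # [0, 1, 9, 10]
print(two_sided_region(0.01))  # [0, 10]
print(one_sided_region(0.05))  # [9, 10]
print(one_sided_region(0.01))  # [10]
```

Note that the regions are smaller than α suggests: because the statistic is discrete, adding the next value to the tail would push the probability over α.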


5. Determine the power(s).
As we saw in Example 3, once the rejection region is set we can determine the power of the
test at various values of the alternate hypothesis.

Example 8. (Consequences of significance) If α = 0.1 then we’d expect a 10% type
I error rate. That is, we expect to reject the null hypothesis in 10% of those experiments
where the null hypothesis is true. Whether 0.1 is a reasonable significance level depends on
the decisions that will be made using it.

For example, if you were running an experiment to determine if your chocolate is more
than 72% cocoa, then a 10% type I error rate is probably okay. That is, falsely believing
some 72% chocolate is greater than 72% is probably acceptable. On the other hand, if your
forensic lab is identifying fingerprints for a murder trial then a 10% type I error rate, i.e.
mistakenly claiming that fingerprints found at the crime scene belonged to someone who
was truly innocent, is definitely not acceptable.

Significance for a composite null hypothesis. If H0 is composite then P(type I error) depends
on which member of H0 is true. In this case the significance level is defined as the maximum
of these probabilities.

6 Critical values

Critical values are like quantiles except they refer to the probability to the right of the value
instead of the left.

Example 9. Use R to find the 0.05 critical value for the standard normal distribution.

answer: We label this critical value z0.05. The critical value z0.05 is just the 0.95 quantile,
i.e. it has 5% probability to its right and therefore 95% probability to its left. We computed
it with the R function qnorm: qnorm(0.95, 0, 1), which returns 1.64.
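The same critical value can be found in Python with the standard library’s statistics.NormalDist, used here as an illustrative stand-in for R’s qnorm (not part of the original notes):

```python
from statistics import NormalDist

# z_0.05 is the 0.95 quantile of the standard normal:
# 5% probability to its right and therefore 95% to its left.
z_05 = NormalDist(0, 1).inv_cdf(0.95)
print(round(z_05, 2))  # 1.64
```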

In a typical significance test the rejection region consists of one or both tails of the null
distribution. The value of the test statistic that marks the start of the rejection region is
a critical value. We show this and the notation used in some examples.

Example 10. Critical values and rejection regions. Suppose our test statistic x has null
distribution N(100, 15²), i.e. f(x|H0) ∼ N(100, 15²). Suppose also that our rejection
region is right-sided and we have a significance level of 0.05. Find the critical value and
sketch the null distribution and rejection region.

answer: The notation used for the critical value with right tail containing probability 0.05
is x0.05. The critical value x0.05 is just the 0.95 quantile, i.e. it has 5% probability to its
right and therefore 95% probability to its left. We computed it with the R function qnorm:
qnorm(0.95, 100, 15), which returned 124.7. This is shown in the figure below.


[Figure: the null distribution f(x|H0) ∼ N(100, 15²) with a right-sided rejection region
starting at the critical value x0.05 = 124.7. The shaded area α = red = 0.05.]

Example 11. Critical values and rejection regions. Repeat the previous example for a
left-sided rejection region with significance level 0.05. In this case, the start of the rejection
region is at the 0.05 quantile.

answer: In this case the critical value has 0.05 probability to its left and therefore 0.95
probability to its right. So we label it x0.95. Since it is the 0.05 quantile, we compute it
with the R function qnorm(0.05, 100, 15), which returned 75.3.

[Figure: f(x|H0) ∼ N(100, 15²) with a left-sided rejection region ending at the critical
value x0.95 = 75.3. The shaded area α = red = 0.05.]

Example 12. Critical values. Repeat the previous example for a two-sided rejection region.
Put half the significance in each tail.

answer: To have a total significance of 0.05 we put 0.025 in each tail. That is, the left tail
starts at x0.975 = q0.025 and the right tail starts at x0.025 = q0.975. We compute these values
with qnorm(0.025, 100, 15) and qnorm(0.975, 100, 15). The values are shown in the
figure below.

[Figure: f(x|H0) ∼ N(100, 15²) with a two-sided rejection region: the left tail below
x0.975 = 70.6 and the right tail above x0.025 = 129.4. Total shaded area α = red = 0.05.]
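The two-sided critical values can be checked in Python with statistics.NormalDist standing in for R’s qnorm (an illustrative sketch, not part of the original notes):

```python
from statistics import NormalDist

null = NormalDist(mu=100, sigma=15)

# Two-sided rejection region with alpha = 0.05: put 0.025 in each tail.
x_975 = null.inv_cdf(0.025)  # left critical value x_0.975
x_025 = null.inv_cdf(0.975)  # right critical value x_0.025
print(round(x_975, 1), round(x_025, 1))  # 70.6 129.4
```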

7 p-values

In practice people often specify the significance level and do the significance test using what
are called p-values. We will first define the p-value and then see that

If the p-value is less than the significance level α then we reject H0. Otherwise
we do not reject H0.

Definition. The p-value is the probability, assuming the null hypothesis, of seeing data at
least as extreme as the experimental data. What ‘at least as extreme’ means depends on
the experimental design.

We illustrate the definition and use of p-values with a simple one-sided example. In later
classes we will look at two-sided examples. This example also introduces the z-test. All this
means is that our test statistic is standard normal (or approximately standard normal).

Example 13. The z-test for normal hypotheses
IQ is normally distributed in the population according to a N(100, 15²) distribution. We
suspect that most MIT students have above average IQ so we frame the following
hypotheses.

H0 = MIT student IQs are distributed identically to the general population
= MIT IQ’s follow a N(100, 15²) distribution.

HA = MIT student IQs tend to be higher than those of the general population
= the average MIT student IQ is greater than 100.

Notice that HA is one-sided.

Suppose we test 9 students and find they have an average IQ of x = 112. Can we reject H0
at a significance level α = 0.05?

answer: To compute p we first standardize the data: under the null hypothesis
x̄ ∼ N(100, 15²/9) and therefore

z = (x̄ − 100)/(15/√9) = 36/15 = 2.4 ∼ N(0, 1).

That is, the null distribution for z is standard normal. We call z a z-statistic; we will use
it as our test statistic.

For a right-sided alternative hypothesis the phrase ‘data at least as extreme’ is a one-sided
tail to the right of z. The p-value is then

p = P (Z ≥ 2.4) = 1- pnorm(2.4,0,1) = 0.0081975.

Since p ≤ α we reject the null hypothesis. The reason this works is explained below. We
phrase our conclusion as

We reject the null hypothesis in favor of the alternative hypothesis that MIT
students have higher IQs on average. We have done this at significance level
0.05 with a p-value of 0.008.
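The computation can be reproduced in Python; statistics.NormalDist is an illustrative stand-in for R’s pnorm (this sketch is not part of the original notes):

```python
from math import sqrt
from statistics import NormalDist

mu0, sigma, n, xbar = 100, 15, 9, 112

# Standardize the sample mean under H0: xbar ~ N(100, 15^2/9).
z = (xbar - mu0) / (sigma / sqrt(n))

# Right-sided p-value: probability of data at least as extreme as z.
p = 1 - NormalDist(0, 1).cdf(z)
print(z)            # 2.4
print(round(p, 4))  # 0.0082
```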

Notes: 1. The average x = 112 is random: if we ran the experiment again we could get a
different value for x.

2. We could use the statistic x directly. Standardizing is fairly standard because, with
practice, we will have a good feel for the meaning of different z-values.

The justification for rejecting H0 when p ≤ α is given in the following figure.


[Figure: f(z|H0) ∼ N(0, 1) with a right-sided rejection region starting at z0.05 = 1.64.
The test statistic z = 2.4 lies inside it. α = pink + red = 0.05; p = red = 0.008.]

In this example α = 0.05, z0.05 = 1.64 and the rejection region is the range to the right
of z0.05. Also, z = 2.4 and the p-value is the probability to the right of z. The picture
illustrates that

• z = 2.4 is in the rejection region
• is the same as z is to the right of z0.05
• is the same as the probability to the right of z is less than 0.05
• which means p < 0.05.

8 More examples

Hypothesis testing is widely used in inferential statistics. We don’t expect that the following
examples will make perfect sense at this time. Read them quickly just to get a sense of how
hypothesis testing is used. We will explore the details of these examples in class.

Example 14. The chi-square statistic and goodness of fit. (Rice, example B, p.313)
To test the level of bacterial contamination, milk was spread over a grid with 400 squares.
The amount of bacteria in each square was counted. We summarize in the table below.
The bottom row of the table is the number of different squares that had a given amount of
bacteria.

Amount of bacteria   0   1    2   3   4   5   6  7  8  9  10  19
Number of squares    56  104  80  62  42  27  9  9  5  3  2   1

We compute that the average amount of bacteria per square is 2.44. Since the Poisson(λ)
distribution is used to model counts of relatively rare events, and the parameter λ is the
expected value of the distribution, we decide to see if these counts could come from a
Poisson distribution. To do this we first graphically compare the observed frequencies with
those expected from Poisson(2.44).

[Figure: the observed number of squares for each bacteria count, overlaid with the expected
counts from a Poisson(2.44) distribution.]

The picture is suggestive, so we do a hypothesis test with

H0: the samples come from a Poisson(2.44) distribution.
HA: the samples come from a different distribution.

We use a chi-square statistic, so called because it (approximately) follows a chi-square
distribution. To compute X² we first combine the last few cells in the table so that the
minimum expected count is around 5 (a general rule-of-thumb in this game).
The expected number of squares with a certain amount of bacteria comes from considering
400 trials from a Poisson(2.44) distribution. For example, with λ = 2.44 the expected
number of squares with 3 bacteria is 400 · e^(−λ) λ³/3! = 84.4.

The chi-square statistic is X² = ∑ (Oi − Ei)²/Ei, where Oi is the observed number and Ei
is the expected number.

Number per square   0     1     2      3     4     5     6     > 6
Observed            56    104   80     62    42    27    9     20
Expected            34.9  85.1  103.8  84.4  51.5  25.1  10.2  5.0
Component of X²     12.8  4.2   5.5    6.0   1.7   0.14  0.15  44.5

Summing up we get X² = 74.9.

Since the mean (2.44) and the total number of trials (400) are fixed, the 8 cells only have
6 degrees of freedom. So, assuming H0, our chi-square statistic follows (approximately) a
χ² distribution with 6 degrees of freedom. Using this distribution, P(X² > 74.9) ≈ 0 (to
at least 6 decimal places). Thus we decisively reject the null hypothesis in favor of the
alternative hypothesis that the distribution is not Poisson(2.44).

To analyze further, look at the individual components of X2. There are large contributions
in the tail of the distribution, so that is where the fit goes awry.
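The table can be reproduced with a short Python computation (an illustrative sketch, not part of the original notes, which use R). The expected counts come straight from the Poisson pmf, with the last cell collecting everything above 6; the small differences from the table are due to the table’s rounded expected counts.

```python
from math import exp, factorial

lam, trials = 2.44, 400
observed = [56, 104, 80, 62, 42, 27, 9, 20]  # counts for 0..6 and > 6

# Expected counts: 400 * Poisson(2.44) pmf; last cell is the tail > 6.
poisson = [exp(-lam) * lam**k / factorial(k) for k in range(7)]
expected = [trials * p for p in poisson]
expected.append(trials - sum(expected))      # cells 7, 8, ... lumped together

X2 = sum((o - e)**2 / e for o, e in zip(observed, expected))
print(X2)  # close to the X^2 = 74.9 computed above
```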


Example 15. Student’s t test.

Suppose we want to compare a medical treatment for increasing life expectancy with a
placebo. We give n people the treatment and m people the placebo. Let X1, . . . , Xn be the
number of years people live after receiving the treatment. Likewise, let Y1, . . . , Ym be the

number of years people live after receiving the placebo. Let X̄ and Ȳ be the sample means.
We want to know if the difference between X̄ and Ȳ is statistically significant. We frame
this as a hypothesis test. Let μX and μY be the (unknown) means.

H0 : μX = μY ,   HA : μX ≠ μY .
With certain assumptions and a proper formula for the pooled standard error sp, the test
statistic t = (X̄ − Ȳ)/sp follows a t distribution with n + m − 2 degrees of freedom. So our
rejection region is determined by a threshold t0 with P(t > t0) = α.

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Null Hypothesis Significance Testing II

Class 18, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to list the steps common to all null hypothesis significance tests.

2. Be able to define and compute the probability of Type I and Type II errors.

3. Be able to look up and apply one and two sample t-tests.

2 Introduction

We continue our study of significance tests. In these notes we will introduce two new tests:
one-sample t-tests and two-sample t-tests. You should pay careful attention to the fact that
every test makes some assumptions about the data – often that it is drawn from a normal
distribution. You should also notice that all the tests follow the same pattern. It is just the
computation of the test statistic and the type of the null distribution that changes.

3 Review: setting up and running a significance test

There is a fairly standard set of steps one takes to set up and run a null hypothesis
significance test.

1. Design an experiment to collect data and choose a test statistic x to be computed
from the data. The key requirement here is to know the null distribution f(x|H0).
To compute power, one must also know the alternative distribution f(x|HA).

2. Decide if the test is one or two-sided based on HA and the form of the null distribution.

3. Choose a significance level α for rejecting the null hypothesis. If applicable, compute
the corresponding power of the test.

4. Run the experiment to collect data x1, x2, . . . , xn.

5. Compute the test statistic x.

6. Compute the p-value corresponding to x using the null distribution.

If p < α, reject the null hypothesis in favor of the alternative hypothesis.

Notes.
1. Rather than choosing a significance level, you could instead choose a rejection region
and reject H0 if x falls in this region. The corresponding significance level is then the
probability that x falls in the rejection region.

18.05 class 18, Null Hypothesis Significance Testing II, Spring 2014

2. The null hypothesis is often the ‘cautious hypothesis’. The lower we set the significance
level, the more “evidence” we will require before rejecting our cautious hypothesis in favor
of a more sensational alternative. It is standard practice to publish the p value itself so
that others may draw their own conclusions.

3. A key point of confusion: A significance level of 0.05 does not mean the test only makes
mistakes 5% of the time. It means that if the null hypothesis is true, then the probability
the test will mistakenly reject it is 5%.

The power of the test measures the accuracy of the test when the alternative hypothesis
is true. Namely, the power of the test is the probability of rejecting the null hypothesis if
the alternative hypothesis is true. Therefore the probability of falsely failing to reject the
null hypothesis is 1 minus the power.

Errors. We can summarize these two types of errors and their probabilities as follows:

Type I error = rejecting H0 when H0 is true.
Type II error = failing to reject H0 when HA is true.

P(type I error) = probability of falsely rejecting H0
= P(test statistic is in the rejection region | H0)
= significance level of the test

P(type II error) = probability of falsely not rejecting H0
= P(test statistic is in the acceptance region | HA)
= 1 − power.

Helpful analogies. In terms of medical testing for a disease, a Type I error is a false positive
and a Type II error is a false negative. In terms of a jury trial, a Type I error is convicting
an innocent defendant and a Type II error is acquitting a guilty defendant.
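These error probabilities can be made concrete for a one-sided z-test. The following Python sketch is illustrative only (the null and alternative distributions are invented for the example, and statistics.NormalDist stands in for R’s qnorm/pnorm): with null N(0, 1), a hypothetical alternative N(2, 1), and rejection region z > z0.05, the significance level is 0.05 by construction and the power follows from the alternative distribution.

```python
from statistics import NormalDist

null, alt = NormalDist(0, 1), NormalDist(2, 1)  # hypothetical H0 and HA

# Rejection region: z above the critical value with 5% of the null to its right.
crit = null.inv_cdf(0.95)

type_I = 1 - null.cdf(crit)   # P(reject | H0) = significance level
power = 1 - alt.cdf(crit)     # P(reject | HA)
type_II = 1 - power           # P(fail to reject | HA)

print(round(type_I, 3))  # 0.05
print(round(power, 3))   # 0.639
```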
4 Understanding a significance test

Questions to ask:

1. How did they collect data? What is the experimental setup?

2. What are the null and alternative hypotheses?

3. What type of significance test was used? Does their data match the criteria needed to
use this type of test? How robust is the test to deviations from these criteria?

4. For example, some tests comparing two groups of data assume that the groups are
drawn from distributions that have the same variance. This needs to be verified before
applying the test. Often the check is done using another significance test designed to
compare the variances of two groups of data.

5. How is the p-value computed? A significance test comes with a test statistic and a null
distribution. In most tests the p-value is

p = P(data at least as extreme as what we got | H0)

What does ‘data at least as extreme as the data we saw’ mean? I.e. is the test one or
two-sided?

6. What is the significance level α for this test? If p < α then the experimenter will reject
H0 in favor of HA.

5 t tests

Many significance tests assume that the data are drawn from a normal distribution, so
before using such a test you should examine the data to see if the normality assumption is
reasonable. We will describe how to do this in more detail later, but plotting a histogram
is a good start. Like the z-test, the one-sample and two-sample t-tests we’ll consider below
start from this normality assumption.

We don’t expect you to memorize all the computational details of these tests and those to
follow. In real life, you have access to textbooks, google, and wikipedia; on the exam, you’ll
have your notecard. Instead, you should be able to identify when a t test is appropriate
and apply this test after looking up the details and using a table or software like R.

5.1 z-test

Let’s first review the z-test.

• Data: we assume x1, x2, . . . , xn ∼ N(µ, σ²), where µ is unknown and σ is known.
• Null hypothesis: µ = µ0 for some specific value µ0

• Test statistic: z = (x̄ − µ0)/(σ/√n) = standardized mean

• Null distribution: f(z | H0) is the pdf of Z ∼ N(0, 1)

• One-sided p-value (right side): p = P(Z > z | H0)

One-sided p-value (left side): p = P (Z < z | H0)
Two-sided p-value: p = P (|Z| > |z|).

Example 1. Suppose that we have data that follows a normal distribution of unknown
mean µ and known variance 4. Let the null hypothesis H0 be that µ = 0. Let the alternative
hypothesis HA be that µ > 0. Suppose we collect the following data:

1, 2, 3, 6, −1

At a significance level of α = 0.05, should we reject the null hypothesis?

answer: There are 5 data points with average x̄ = 2.2. Because we have normal data with
a known variance we should use a z-test. Our z statistic is

z = (x̄ − µ0)/(σ/√n) = (2.2 − 0)/(2/√5) = 2.460


Our test is one-sided because the alternative hypothesis is one-sided. So (using R) our
p-value is

p = P (Z > z) = P (Z > 2.460) = 0.007

Since p < .05, we reject the null hypothesis in favor of the alternative hypothesis µ > 0.

We can visualize the test as follows:

[Figure: f(z|H0) ∼ N(0, 1); the rejection region starts at q0.95 = 1.645; the test statistic
z = 2.46 is the black dot. α = pink + red = 0.05; p = red = 0.007.]
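A Python check of this z-test, computed from the raw data with statistics.NormalDist in place of R’s pnorm (an illustrative sketch, not part of the original notes):

```python
from math import sqrt
from statistics import NormalDist, mean

data = [1, 2, 3, 6, -1]
mu0, sigma = 0, 2          # H0 mean and the known standard deviation (variance 4)

xbar = mean(data)                            # 2.2
z = (xbar - mu0) / (sigma / sqrt(len(data)))
p = 1 - NormalDist(0, 1).cdf(z)              # one-sided (right) p-value

print(round(z, 3))  # 2.46
print(round(p, 3))  # 0.007
```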

5.2 The Student t distribution

‘Student’ is the pseudonym used by William Gosset, who first described this test and
distribution. See http://en.wikipedia.org/wiki/Student’s_t-test

The t-distribution is symmetric and bell-shaped like the normal distribution. It has a
parameter df which stands for degrees of freedom. For df small the t-distribution has more
probability in its tails than the standard normal distribution. As df increases t(df) becomes
more and more like the standard normal distribution.

Here is a simple applet that shows t(df) and compares it to the standard normal distribution:

T Distribution

As usual in R, the functions pt, dt, qt, rt correspond to cdf, pdf, quantiles, and random
sampling for a t distribution. Remember that you can type ?dt in RStudio to view the help
file specifying the parameters of dt. For example, pt(1.65, 3) computes the probability
that x is less than or equal to 1.65 given that x is sampled from the t distribution with 3
degrees of freedom, i.e. P(x ≤ 1.65) given that x ∼ t(3).
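The heavier tails are easy to check numerically. The Python sketch below writes the t density from its standard formula using math.gamma (illustrative, not part of the original notes): at x = 3, t(3) puts far more density in the tail than the standard normal, while t(200) is already very close to it.

```python
from math import gamma, pi, sqrt, exp

def t_pdf(x, df):
    """Density of the t distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x**2 / df) ** (-(df + 1) / 2)

def norm_pdf(x):
    """Standard normal density."""
    return exp(-x**2 / 2) / sqrt(2 * pi)

# Tail comparison at x = 3: t(3) is much heavier than N(0,1); t(200) is close.
print(t_pdf(3, 3), norm_pdf(3))                 # roughly 0.023 vs 0.0044
print(abs(t_pdf(3, 200) - norm_pdf(3)) < 1e-3)  # True
```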

5.3 One sample t-test

For the z-test, we assumed that the variance of the underlying distribution of the data was
known. However, it is often the case that we don’t know σ and therefore we must estimate
it from the data. In these cases, we use a one sample t-test instead of a z-test and the
studentized mean in place of the standardized mean.

• Data: we assume x1, x2, . . . , xn ∼ N(µ, σ2), where both µ and σ are unknown.

• Null hypothesis: µ = µ0 for some specific value µ0
• Test statistic:

t = (x̄ − µ0)/(s/√n)



where

s² = (1/(n − 1)) ∑ (xi − x̄)²  (sum over i = 1, . . . , n).

Here t is called the Studentized mean and s² is called the sample variance. The latter is
an estimate of the true variance σ².

• Null distribution: f(t | H0) is the pdf of T ∼ t(n − 1), the t distribution with n − 1
degrees of freedom.*

• One-sided p-value (right side): p = P (T > t | H0)

One-sided p-value (left side): p = P (T < t | H0)
Two-sided p-value: p = P (|T | > |t|).

*It’s a theorem (not an assumption) that if the data is normal with mean µ0 then the
Studentized mean follows a t-distribution. A proof would take us too far afield, but you can
look it up if you want: http://en.wikipedia.org/wiki/Student’s_t-distribution#
Derivation

Example 2. Now suppose that in the previous example the variance is unknown. That
is, we have data that follows a normal distribution of unknown mean µ and unknown
variance σ². Suppose we collect the same data as before:

1, 2, 3, 6, −1
As above, let the null hypothesis H0 be that µ = 0 and the alternative hypothesis HA be
that µ > 0. At a significance level of α = 0.05, should we reject the null hypothesis?

answer: There are 5 data points with average x = 2.2. Because we have normal data with
unknown mean and unknown variance we should use a one-sample t test. Computing the
sample variance we get

s² = (1/4)[(1 − 2.2)² + (2 − 2.2)² + (3 − 2.2)² + (6 − 2.2)² + (−1 − 2.2)²] = 6.7

Our t statistic is

t = (x̄ − µ0)/(s/√n) = (2.2 − 0)/(√6.7/√5) = 1.901

Our test is one-sided because the alternative hypothesis is one-sided. So (using R) the
p-value is

p = P (T > t) = P (T > 1.901) = 1-pt(1.901,4) = 0.065

Since p > .05, we do not reject the null hypothesis.

We can visualize the test as follows:

[Figure: f(t|H0) ∼ t(4); the rejection region starts at q0.95 = 2.13; the test statistic
t = 1.90 is the black dot. α = red = 0.05; p = pink + red = 0.065.]
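The arithmetic can be checked in Python. The standard library has no t distribution, so this illustrative sketch (not part of the original notes) reproduces only the sample variance and the t statistic, leaving the p-value to R’s 1-pt(1.901, 4):

```python
from math import sqrt
from statistics import mean, variance

data = [1, 2, 3, 6, -1]
mu0 = 0

xbar = mean(data)       # 2.2
s2 = variance(data)     # sample variance, with n - 1 in the denominator
t = (xbar - mu0) / (sqrt(s2) / sqrt(len(data)))

print(s2)  # 6.7
print(t)   # approximately 1.901
```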



5.4 Two-sample t-test with equal variances

We next consider the case of comparing the means of two samples. For example, we might
be interested in comparing the mean efficacies of two medical treatments.

• Data: We assume we have two sets of data drawn from normal distributions

x1, x2, . . . , xn ∼ N(µ1, σ²)
y1, y2, . . . , ym ∼ N(µ2, σ²)

where the means µ1 and µ2 and the variance σ² are all unknown. Notice the assumption
that the two distributions have the same variance. Also notice that there are n samples in
the first group and m samples in the second.

• Null hypothesis: µ1 = µ2 (the values of µ1 and µ2 are not specified)

• Test statistic:

t = (x̄ − ȳ)/sp,

where sp² is the pooled variance

sp² = ((n − 1)sx² + (m − 1)sy²)/(n + m − 2) × (1/n + 1/m)

Here sx² and sy² are the sample variances of the xi and yj respectively. The expression
for t is somewhat complicated, but the basic idea remains the same and it still results in
a known null distribution.

• Null distribution: f(t | H0) is the pdf of T ∼ t(n + m − 2).

• One-sided p-value (right side): p = P (T > t | H0)

One-sided p-value (left side): p = P (T < t | H0)
Two-sided p-value: p = P (|T | > |t|).

Note 1: Some authors use a different notation. They define the pooled variance as

sp-other-authors² = ((n − 1)sx² + (m − 1)sy²)/(n + m − 2)

and what we called the pooled variance, they point out, is the estimated variance of x̄ − ȳ.
That is,

sp² = sp-other-authors² × (1/n + 1/m) ≈ s²(x̄ − ȳ)

Note 2: There is a version of the two-sample t-test that allows the two groups to have
different variances. In this case the test statistic is a little more complicated but R will
handle it with equal ease.

Example 3. The following data comes from a real study in which 1408 women were
admitted to a maternity hospital for (i) medical reasons or through (ii) unbooked emergency


admission. The duration of pregnancy is measured in complete weeks from the beginning
of the last menstrual period. We can summarize the data as follows:

Medical: 775 observations with x̄M = 39.08 and s²M = 7.77.

Emergency: 633 observations with x̄E = 39.60 and s²E = 4.95.

Set up and run a two-sample t-test to investigate whether the mean duration differs for the

two groups.

What assumptions did you make?

answer: The pooled variance for this data is

sp² = (774(7.77) + 632(4.95))/1406 × (1/775 + 1/633) = 0.0187

The t statistic for the null distribution is

t = (x̄M − x̄E)/sp = −3.8064

We have 1406 degrees of freedom. Using R to compute the two-sided p-value we get

p = P (|T | > |t|) = 2*pt(-3.8064, 1406) = 0.00015

p is very small, much smaller than α = .05 or α = .01. Therefore we reject the null
hypothesis in favor of the alternative that there is a difference in the mean durations.
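A Python check of the pooled variance and t statistic (an illustrative sketch, not part of the original notes). The standard library has no t distribution, but with 1406 degrees of freedom the normal approximation used below for the p-value is essentially exact:

```python
from math import sqrt
from statistics import NormalDist

n, xbar_m, s2_m = 775, 39.08, 7.77   # medical group
m, xbar_e, s2_e = 633, 39.60, 4.95   # emergency group

# Pooled variance, using the notes' convention that includes (1/n + 1/m).
s2_p = ((n - 1) * s2_m + (m - 1) * s2_e) / (n + m - 2) * (1 / n + 1 / m)
t = (xbar_m - xbar_e) / sqrt(s2_p)

# Two-sided p-value; t(1406) is essentially standard normal.
p = 2 * NormalDist(0, 1).cdf(-abs(t))

print(round(s2_p, 4))  # 0.0187
print(round(t, 3))     # -3.806
print(p < 0.001)       # True
```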

Rather than compute the two-sided p-value exactly using a t-distribution we could have
noted that with 1406 degrees of freedom the t distribution is essentially standard normal
and 3.8064 is almost 4 standard deviations. So

P (|t| ≥ 3.8064) ≈ P (|z| ≥ 3.8064) < 0.001

We assumed the data was normal and that the two groups had equal variances. Given the
large difference between the sample variances this assumption may not be warranted. In
fact, there are other significance tests that test whether the data is approximately normal
and whether the two groups have the same variance. In practice one might apply these first
to determine whether a t test is appropriate in the first place. We don’t have time to go
into normality tests here, but we will see the F distribution used for equality of variances
next week.

http://en.wikipedia.org/wiki/Normality_test
http://en.wikipedia.org/wiki/F-test_of_equality_of_variances

Null Hypothesis Significance Testing III

Class 19, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Given hypotheses and data, be able to identify an appropriate significance test from a
list of common ones.

2. Given hypotheses, data, and a suggested significance test, know how to look up details
and apply the significance test.

2 Introduction

In these notes we will collect together some of the most common significance tests, though
by necessity we will leave out many other useful ones. Still, all significance tests follow the
same basic pattern in their design and implementation, so by learning the ones we include
you should be able to easily apply other ones as needed.

Designing a null hypothesis significance test (NHST):

• Specify null and alternative hypotheses.

• Choose a test statistic whose null distribution and alternative distribution(s) are known.

• Specify a rejection region.
Most often this is done implicitly by specifying a significance level α and a method for
computing p-values based on the tails of the null distribution.

• Compute power using the alternative distribution(s).

Running a NHST:

• Collect data and compute the test statistic.

• Check if the test statistic is in the rejection region. Most often this is done implicitly
by checking if p < α. If so, we ‘reject the null hypothesis in favor of the alternative
hypothesis’. Otherwise we conclude ‘the data does not support rejecting the null
hypothesis’.

Note the careful phrasing: when we fail to reject H0, we do not conclude that H0 is true.
The failure to reject may have other causes. For example, we might not have enough data
to clearly distinguish H0 and HA, whereas more data would indicate that we should reject
H0.

18.05 class 19, Null Hypothesis Significance Testing III, Spring 2014

3 Population parameters and sample statistics

Example 1. If we randomly select 10 men from a population and measure their heights we
say we have sampled the heights from the population. In this case the sample mean, say x̄,
is the mean of the sampled heights. It is a statistic and we know its value explicitly. On the
other hand, the true average height of the population, say µ, is unknown and we can only
estimate its value. We call µ a population parameter.

The main purpose of significance testing is to use sample statistics to draw conclusions
about population parameters. For example, we might test if the average height of men in
a given population is greater than 70 inches.

4 A gallery of common significance tests related to the normal distribution

We will show a number of tests that all assume normal data. For completeness we will
include the z and t tests we’ve already explored. You shouldn’t try to memorize these tests.
It is a hopeless task to memorize the tests given here and even more hopeless to memorize
all the tests we’ve left out.
Rather, your goal should be to be able to find the correct test when you need it. Pay
attention to the types of hypotheses the tests are designed to distinguish and the
assumptions about the data needed for the test to be valid. We will work through the
details of these tests in class and on homework.

The null distributions for all of these tests are all related to the normal distribution by
explicit formulas. We will not go into the details of these distributions or the arguments
showing how they arise as the null distributions in our significance tests. However, the
arguments are accessible to anyone who knows calculus and is interested in understanding
them. Given the name of any distribution, you can easily look up the details of its
construction and properties online. You can also use R to explore the distribution
numerically and graphically.

When analyzing data with any of these tests one thing of key importance is to verify that
the assumptions are true or at least approximately true. For example, you shouldn’t use a
test that assumes the data is normal unless you’ve checked that the data is approximately
normal.

The script class19.r contains examples of using R to run some of these tests. It is posted
in our usual place for R code.

4.1 z-test

• Use: Test if the population mean equals a hypothesized mean.

• Data: x1, x2, . . . , xn.

• Assumptions: The data are independent normal samples: xi ∼ N(µ, σ²) where µ is
unknown, but σ is known.

• H0: For a specified µ0, µ = µ0.

• HA:
Two-sided: µ ≠ µ0
one-sided-greater: µ > µ0

one-sided-less: µ < µ0
• Test statistic: z = (x̄ − µ0)/(σ/√n)
• Null distribution: f(z | H0) is the pdf of Z ∼ N(0, 1).
• p-value:
Two-sided: p = P(|Z| > z) = 2*(1-pnorm(abs(z), 0, 1))
one-sided-greater: p = P(Z > z) = 1 - pnorm(z, 0, 1)
one-sided-less: p = P(Z < z) = pnorm(z, 0, 1)
• R code: There does not seem to be a single R function to run a z-test. Of course it is easy enough to get R to compute the z score and p-value.

Example 2. We quickly reprise our example from the class 17 notes.

IQ is normally distributed in the population according to a N(100, 15²) distribution. We suspect that most MIT students have above average IQ so we frame the following hypotheses.

H0 = MIT student IQs are distributed identically to the general population = MIT IQs follow a N(100, 15²) distribution.
HA = MIT student IQs tend to be higher than those of the general population = the average MIT student IQ is greater than 100.

Notice that HA is one-sided. Suppose we test 9 students and find they have an average IQ of x̄ = 112. Can we reject H0 at a significance level α = 0.05?

answer: Our test statistic is
z = (x̄ − 100)/(15/√9) = 36/15 = 2.4.
The right-sided p-value is therefore
p = P(Z ≥ 2.4) = 1 - pnorm(2.4, 0, 1) = 0.0081975.
Since p ≤ α we reject the null hypothesis in favor of the alternative hypothesis that MIT students have higher IQs on average.

4.2 One-sample t-test of the mean

• Use: Test if the population mean equals a hypothesized mean.
• Data: x1, x2, . . . , xn.
• Assumptions: The data are independent normal samples: xi ∼ N(µ, σ²) where both µ and σ are unknown.
• H0: For a specified µ0, µ = µ0
• HA:
Two-sided: µ ≠ µ0
one-sided-greater: µ > µ0
one-sided-less: µ < µ0
• Test statistic: t = (x̄ − µ0)/(s/√n), where s² is the sample variance: s² = (1/(n − 1)) ∑ (xi − x̄)², summing over i = 1 to n.
• Null distribution: f(t | H0) is the pdf of T ∼ t(n − 1). (Student t-distribution with n − 1 degrees of freedom)
• p-value:
Two-sided: p = P(|T| > t) = 2*(1-pt(abs(t), n-1))
one-sided-greater: p = P(T > t) = 1 - pt(t, n-1)
one-sided-less: p = P(T < t) = pt(t, n-1)
• R code example: For data x = 1, 3, 5, 7, 2 we can run a one-sample t-test with H0: µ = 2.5 using the R command:
t.test(x, mu = 2.5, alternative="two.sided")
This will return several pieces of information including the mean of the data, the t-value and the two-sided p-value. See the help for this function for other argument settings.

Example 3. Look in the class 18 notes or slides for an example of this test. The class 19 example R code also gives an example.

4.3 Two-sample t-test for comparing means

4.3.1 The case of equal variances

We start by describing the test assuming equal variances.

• Use: Test if the population means from two populations differ by a hypothesized amount.
• Data: x1, x2, . . . , xn and y1, y2, . . . , ym.
• Assumptions: Both groups of data are independent normal samples:
xi ∼ N(µx, σ²)
yj ∼ N(µy, σ²)
where both µx and µy are unknown and possibly different. The variance σ² is unknown, but the same for both groups.
• H0: For a specified µ0: µx − µy = µ0
• HA:
Two-sided: µx − µy ≠ µ0
one-sided-greater: µx − µy > µ0
one-sided-less: µx − µy < µ0
• Test statistic: t = (x̄ − ȳ − µ0)/sP,
where s²x and s²y are the sample variances of the x and y data respectively, and s²P is (sometimes called) the pooled sample variance:
s²p = ((n − 1)s²x + (m − 1)s²y)/(n + m − 2) × (1/n + 1/m), and df = n + m − 2
• Null distribution: f(t | H0) is the pdf of T ∼ t(df), the t-distribution with df = n + m − 2 degrees of freedom.
• p-value:
Two-sided: p = P(|T| > t) = 2*(1-pt(abs(t), df))

one-sided-greater: p = P(T > t) = 1 - pt(t, df)

one-sided-less: p = P(T < t) = pt(t, df)
• R code: The R function t.test will run a two-sample t-test. See the example code in class19.r

Example 4. Look in the class 18 notes or slides for an example of the two-sample t-test.

Notes: 1. Most often the test is done with µ0 = 0. That is, the null hypothesis is that the means are equal, i.e. µx − µy = 0.
2. If the x and y data have the same length, n, then the formula for s²p becomes simpler:
s²p = (s²x + s²y)/n

4.3.2 The case of unequal variances

There is a form of the t-test for when the variances are not assumed equal. It is sometimes called Welch's t-test. This looks exactly the same as the case of equal variances except for a small change in the assumptions and the formula for the pooled variance:

• Use: Test if the population means from two populations differ by a hypothesized amount.
• Data: x1, x2, . . . , xn and y1, y2, . . . , ym.
• Assumptions: Both groups of data are independent normal samples:
xi ∼ N(µx, σ²x)
yj ∼ N(µy, σ²y)
where both µx and µy are unknown and possibly different. The variances σ²x and σ²y are unknown and not assumed to be equal.
• H0, HA: Exactly the same as the case of equal variances.
• Test statistic: t = (x̄ − ȳ − µ0)/sP,
where s²x and s²y are the sample variances of the x and y data respectively, and s²P is (sometimes called) the pooled sample variance:
s²p = s²x/n + s²y/m, and df = (s²x/n + s²y/m)² / [(s²x/n)²/(n − 1) + (s²y/m)²/(m − 1)]
• Null distribution: f(t | H0) is the pdf of T ∼ t(df), the t-distribution with df degrees of freedom.
• p-value: Exactly the same as the case of equal variances.
• R code: The function t.test also handles this case by setting the argument var.equal=FALSE.

4.3.3 The paired two-sample t-test

When the data naturally comes in pairs (xi, yi), we can use the paired two-sample t-test. (After checking the assumptions are valid!)

Example 5.
To measure the effectiveness of a cholesterol-lowering medication we might test each subject before and after treatment with the drug. So for each subject we have a pair of measurements:
xi = cholesterol level before treatment and yi = cholesterol level after treatment.

Example 6. To measure the effectiveness of a cancer treatment we might pair each subject who received the treatment with one who did not. In this case we would want to pair subjects who are similar in terms of stage of the disease, age, sex, etc.

• Use: Test if the average difference between paired values in a population equals a hypothesized value.
• Data: x1, x2, . . . , xn and y1, y2, . . . , yn must have the same length.
• Assumptions: The differences wi = xi − yi between the paired samples are independent draws from a normal distribution N(µ, σ²), where µ and σ are unknown.
• NOTE: This is just a one-sample t-test using wi.
• H0: For a specified µ0, µ = µ0.
• HA:
Two-sided: µ ≠ µ0
one-sided-greater: µ > µ0
one-sided-less: µ < µ0
• Test statistic: t = (w̄ − µ0)/(s/√n),
where s² is the sample variance: s² = (1/(n − 1)) ∑ (wi − w̄)²
• Null distribution: f(t | H0) is the pdf of T ∼ t(n − 1). (Student t-distribution with n − 1 degrees of freedom)
• p-value:
Two-sided: p = P(|T| > t) = 2*(1-pt(abs(t), n-1))
one-sided-greater: p = P(T > t) = 1 - pt(t, n-1)
one-sided-less: p = P(T < t) = pt(t, n-1)
• R code: The R function t.test will do a paired two-sample test if you set the argument paired=TRUE. You can also run a one-sample t-test on x − y. There are examples of both of these in class19.r

Example 7. The following example is taken from Rice [1].

To study the effect of cigarette smoking on platelet aggregation Levine (1973) drew blood samples from 11 subjects before and after they smoked a cigarette and measured the extent to which platelets aggregated. Here is the data:

Before      25 25 27 44 30 67 53 53 52 60 28
After       27 29 37 56 46 82 57 80 61 59 43
Difference   2  4 10 12 16 15  4 27  9 -1 15

The null hypothesis is that smoking had no effect on platelet aggregation, i.e. that the difference should have mean µ0 = 0. We ran a paired two-sample t-test to test this hypothesis. Here is the R code: (It's also in class19.r.)

before.cig = c(25,25,27,44,30,67,53,53,52,60,28)
after.cig = c(27,29,37,56,46,82,57,80,61,59,43)
mu0 = 0
result = t.test(after.cig, before.cig, alternative="two.sided", mu=mu0, paired=TRUE)
print(result)

Here is the output:

Paired t-test
data: after.cig and before.cig
t = 4.2716, df = 10, p-value = 0.001633
alternative hypothesis: true difference in means is not equal to 0
mean of the differences: 10.27273

We got the same results with the one-sample t-test:
t.test(after.cig - before.cig, mu=0)

4.4 One-way ANOVA (F-test for equal means)

• Use: Test if the population means from n groups are all the same.
• Data: (n groups, m samples from each group)
x1,1, x1,2, . . . , x1,m
x2,1, x2,2, . . . , x2,m
. . .
xn,1, xn,2, . . . , xn,m
• Assumptions: Data for each group is an independent normal sample drawn from distributions with (possibly) different means but the same variance:
x1,j ∼ N(µ1, σ²)
x2,j ∼ N(µ2, σ²)
. . .
xn,j ∼ N(µn, σ²)

[1] John Rice, Mathematical Statistics and Data Analysis, 2nd edition, p. 412.
This example references P.H. Levine (1973), An acute effect of cigarette smoking on platelet function. Circulation, 48, 619-623.

The group means µi are unknown and possibly different. The variance σ² is unknown, but the same for all groups.
• H0: All the means are identical: µ1 = µ2 = . . . = µn.
• HA: Not all the means are the same.
• Test statistic: w = MSB/MSW, where
x̄i = mean of group i = (xi,1 + xi,2 + . . . + xi,m)/m
x̄ = grand mean of all the data
s²i = sample variance of group i = (1/(m − 1)) ∑ (xi,j − x̄i)² (sum over j)
MSB = between group variance = m × sample variance of group means = (m/(n − 1)) ∑ (x̄i − x̄)² (sum over i)
MSW = average within group variance = sample mean of s²1, . . . , s²n = (s²1 + s²2 + . . . + s²n)/n
• Idea: If the µi are all equal, this ratio should be near 1. If they are not equal then MSB should be larger while MSW should remain about the same, so w should be larger. We won't give a proof of this.
• Null distribution: f(w | H0) is the pdf of W ∼ F(n − 1, n(m − 1)). This is the F-distribution with (n − 1) and n(m − 1) degrees of freedom. Several F-distributions are plotted below.
• p-value: p = P(W > w) = 1 - pf(w, n-1, n*(m-1))

[Figure: pdfs of the F-distributions F(3,4), F(10,15), and F(30,15), plotted for 0 ≤ x ≤ 10.]

Notes: 1. ANOVA tests whether all the means are the same. It does not test whether
some subset of the means are the same.
2. There is a test where the variances are not assumed equal.
3. There is a test where the groups don’t all have the same number of samples.
4. R has a function aov() to run ANOVA tests. See:
https://personality-project.org/r/r.guide/r.anova.html#oneway
http://en.wikipedia.org/wiki/F-test



Example 8. The table shows patients’ perceived level of pain (on a scale of 1 to 6) after
3 different medical procedures.

T1 T2 T3
2 3 2
4 4 1
1 6 3
5 1 3
3 4 5

(1) Set up and run an F-test comparing the means of these 3 treatments.

(2) Based on the test, what might you conclude about the treatments?

answer: Using the code below, the F statistic is 0.325 and the p-value is 0.729. At any
reasonable significance level we will fail to reject the null hypothesis that the average pain
level is the same for all three treatments.

Note, it is not reasonable to conclude that the null hypothesis is true. With just 5 data
points per procedure we might simply lack the power to distinguish different means.

R code to perform the test
# DATA
T1 = c(2,4,1,5,3)

T2 = c(3,4,6,1,4)

T3 = c(2,1,3,3,5)

procedure = c(rep("T1",length(T1)), rep("T2",length(T2)), rep("T3",length(T3)))

pain = c(T1,T2,T3)

data.pain = data.frame(procedure,pain)

aov.data = aov(pain ~ procedure, data=data.pain) # do the analysis of variance

print(summary(aov.data)) # show the summary table

# class19.r also shows code to compute the ANOVA by hand.

The summary shows a p-value (shown as Pr(>F)) of 0.729. Therefore we do not reject the
null hypothesis that all three group population means are the same.
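As an arithmetic cross-check of Example 8, the F statistic w = MSB/MSW can be computed by hand. The sketch below is plain Python rather than the course's R code (which uses aov in class19.r); it is only an independent verification of the 0.325 value, with the data copied from the table above.

```python
# By-hand one-way ANOVA F statistic for Example 8 (pure Python illustration).

def sample_var(xs):
    """Unbiased sample variance, denominator len(xs) - 1."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

groups = [[2, 4, 1, 5, 3],   # T1
          [3, 4, 6, 1, 4],   # T2
          [2, 1, 3, 3, 5]]   # T3

n = len(groups)       # number of groups: 3
m = len(groups[0])    # samples per group: 5

group_means = [sum(g) / m for g in groups]

# MSB = m * (sample variance of the group means)
MSB = m * sample_var(group_means)
# MSW = average of the within-group sample variances
MSW = sum(sample_var(g) for g in groups) / n

w = MSB / MSW
print(round(w, 3))  # 0.325, matching the F statistic reported above
```

Under H0 this w is a draw from F(n − 1, n(m − 1)) = F(2, 12), and R's 1 - pf(0.325, 2, 12) gives the p-value 0.729 quoted in the answer.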

4.5 Chi-square test for goodness of fit

This is a test of how well a hypothesized probability distribution fits a set of data. The test
statistic is called a chi-square statistic and its null distribution is the chi-square
distribution. It is denoted by χ²(df), where the parameter df is
called the degrees of freedom.

Suppose we have an unknown probability mass function given by the following table.

Outcomes ω1 ω2 . . . ωn
Probabilities p1 p2 . . . pn

In the chi-square test for goodness of fit we hypothesize a set of values for the probabilities.
Typically we will hypothesize that the probabilities follow a known distribution with certain
parameters, e.g. binomial, Poisson, multinomial. The test then tries to determine if this
set of probabilities could have reasonably generated the data we collected.


• Use: Test whether discrete data fits a specific finite probability mass function.
• Data: An observed count Oi for each possible outcome ωi.
• Assumptions: None
• H0: The data was drawn from a specific discrete distribution.
• HA: The data was drawn from a different distribution.
• Test statistic: The data consists of observed counts Oi for each ωi. From the null hypothesis probability table we get a set of expected counts Ei. There are two statistics that we can use:

Likelihood ratio statistic: G = 2 ∑ Oi ln(Oi/Ei)

Pearson's chi-square statistic: X² = ∑ (Oi − Ei)²/Ei

It is a theorem that under the null hypothesis X² ≈ G and both are approximately
chi-square. Before computers, X² was used because it was easier to compute. Now,
it is better to use G although you will still see X² used quite often.

• Degrees of freedom df : For chi-square tests the number of degrees of freedom can be
a bit tricky. In this case df = n − 1. It is computed as the number of cell counts
that can be freely set under HA consistent with the statistics needed to compute the
expected cell counts assuming H0.

• Null distribution: Assuming H0, both statistics (approximately) follow a chi-square
distribution with df degrees of freedom. That is both f(G |H0) and f(X2 |H0) have
the same pdf as Y ∼ χ2(df).

• p-value:
p = P (Y > G) = 1 – pchisq(G, df)
p = P (Y > X2) = 1 – pchisq(X2, df)

• R code: The R function chisq.test can be used to do the computations for a chi-
square test using X². For G you either have to do it by hand or find a package that has
a function for it. (It will probably be called likelihood.test or G.test.)

Notes. 1. When the likelihood ratio statistic G is used the test is also called a G-test or
a likelihood ratio test.

Example 9. First chi-square example. Suppose we have an experiment that produces
numerical data. For this experiment the possible outcomes are 0, 1, 2, 3, 4, 5 or more. We
run 51 trials and count the frequency of each outcome, getting the following data:

Outcomes 0 1 2 3 4 ≥ 5
Observed counts 3 10 15 13 7 3

Suppose our null hypothesis H0 is that the data is drawn from 51 trials of a binomial(8,
0.5) distribution and our alternative hypothesis HA is that the data is drawn from some
other distribution. Do all of the following:

1. Make a table of the observed and expected counts.
2. Compute both the likelihood ratio statistic G and Pearson’s chi-square statistic X2.


3. Compute the degrees of freedom of the null distribution.
4. Compute the p-values corresponding to G and X2 .

answer: All of the R code used for this example is in class19.r.

1. Assuming H0 the data truly comes from a binomial(8, 0.5) distribution. We have 51
total observations, so the expected count for each outcome is just 51 times its probability.
We computed the binomial(8, 0.5) probabilities and expected counts in R:

Outcomes          0       1       2       3       4       ≥ 5
Observed counts   3       10      15      13      7       3
H0 probabilities  0.0039  0.0313  0.1094  0.2188  0.2734  0.3633
Expected counts   0.20    1.59    5.58    11.16   13.95   18.53

2. Using the formulas above we compute that X² = 116.41 and G = 66.08.

3. The only statistic used in computing the expected counts was the total number of
observations 51. So, the degrees of freedom is 5, i.e. we can set 5 of the cell counts freely
and the last is determined by requiring that the total number is 51.

4. The p-values are pG = 1 - pchisq(G, 5) and pX² = 1 - pchisq(X2, 5). Both p-
values are effectively 0. For almost any significance level we would reject H0 in favor of
HA.
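The counts, expected counts, and both statistics of Example 9 are easy to recompute by hand. The following is an illustrative pure-Python sketch (the course does this in R in class19.r); the expected counts are 51 times the binomial(8, 0.5) probabilities, with the tail P(X ≥ 5) lumped into one cell.

```python
import math

# By-hand G and X^2 for Example 9: observed counts from 51 trials,
# null hypothesis binomial(8, 0.5).
observed = [3, 10, 15, 13, 7, 3]      # counts for outcomes 0, 1, 2, 3, 4, >= 5
N = sum(observed)                     # 51 trials

# binomial(8, 0.5) pmf, with the tail k >= 5 collapsed into one outcome
pmf = [math.comb(8, k) * 0.5 ** 8 for k in range(9)]
probs = pmf[:5] + [sum(pmf[5:])]

expected = [N * p for p in probs]     # expected counts under H0

# Likelihood ratio statistic and Pearson's chi-square statistic
G = 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected))
X2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(G, 2), round(X2, 2))  # 66.08 116.41
```

Both values are far out in the tail of χ²(5), which is why 1 - pchisq(...) is effectively 0.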

Example 10. (Degrees of freedom.) Suppose we have the same data as in the previous
example, but our null hypothesis is that the data comes from independent trials of a
binomial(8, θ) distribution, where θ can be anything. (HA is that the data comes from some
other distribution.) In this case we must estimate θ from the data, e.g. using the MLE.
In total we have computed two values from the data: the total number of counts and the
estimate of θ. So, the degrees of freedom is 6 − 2 = 4.
Example 11. Mendel’s genetic experiments (Adapted from Rice Mathematical Statis­
tics and Data Analysis, 2nd ed., example C, p.314)

In one of his experiments on peas Mendel crossed 556 smooth, yellow male peas with
wrinkled green female peas. Assuming the smooth and wrinkled genes occur with equal
frequency we’d expect 1/4 of the pea population to have two smooth genes (SS), 1/4 to
have two wrinkled genes (ss), and the remaining 1/2 would be heterozygous Ss. We also
expect these fractions for yellow (Y ) and green (y) genes. If the color and smoothness
genes are inherited independently and smooth and yellow are both dominant we’d expect
the following table of frequencies for phenotypes.

Yellow Green
Smooth 9/16 3/16 3/4
Wrinkled 3/16 1/16 1/4

3/4 1/4 1
Probability table for the null hypothesis

So from the 556 crosses the expected number of smooth yellow peas is 556 × 9/16 = 312.75.
Likewise for the other possibilities. Here is a table giving the observed and expected counts
from Mendel’s experiments.


Observed count Expected count

Smooth yellow 315 312.75
Smooth green 108 104.25
Wrinkled yellow 102 104.25
Wrinkled green 31 34.75

The null hypothesis is that the observed counts are random samples distributed according
to the frequency table given above. We use the counts to compute our statistics.

The likelihood ratio statistic is

G = 2 ∑ Oi ln(Oi/Ei)
  = 2 (315 ln(315/312.75) + 108 ln(108/104.25) + 102 ln(102/104.25) + 31 ln(31/34.75))
  = 0.618

Pearson’s chi-square statistic is

X² = ∑ (Oi − Ei)²/Ei = 2.25²/312.75 + 3.75²/104.25 + 2.25²/104.25 + 3.75²/34.75 = 0.604

You can see that the two statistics are very close. This is usually the case. In general the
likelihood ratio statistic is more robust and should be preferred.

The degrees of freedom is 3, because there are 4 observed quantities and one relation between
them, i.e. they sum to 556. So, under the null hypothesis G follows a χ2(3) distribution.
Using R to compute the p-value we get

p = 1- pchisq(0.618, 3) = 0.892

Assuming the null hypothesis we would see data at least this extreme almost 90% of the
time. We would not reject the null hypothesis for any reasonable significance level.

The p-value using Pearson’s statistic is 0.985 –nearly identical.

The script class19.r shows these calculations and also how to use chisq.test to run a
chi-square test directly.
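For readers who want to verify the Mendel numbers without R, here is an illustrative pure-Python recomputation of both statistics (the expected counts are 556 times the 9/16, 3/16, 3/16, 1/16 phenotype frequencies from the probability table above):

```python
import math

# Recomputing G and X^2 for Mendel's pea data (Example 11).
# Order: smooth yellow, smooth green, wrinkled yellow, wrinkled green.
observed = [315, 108, 102, 31]
probs = [9/16, 3/16, 3/16, 1/16]      # null-hypothesis phenotype frequencies

N = sum(observed)                     # 556 crosses
expected = [N * p for p in probs]     # 312.75, 104.25, 104.25, 34.75

G = 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected))
X2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(G, 3), round(X2, 3))  # 0.618 0.604
```

Feeding either statistic into 1 - pchisq(..., 3) in R reproduces the p-values 0.892 and 0.985 quoted above.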

4.6 Chi-square test for homogeneity

This is a test to see if several independent sets of random data are all drawn from the same
distribution. (The meaning of homogeneity in this case is that all the distributions are the
same.)

• Use: Test whether m different independent sets of discrete data are drawn from the
same distribution.

• Outcomes: ω1, ω2, . . . , ωn are the possible outcomes. These are the same for each set
of data.

• Data: We assume m independent sets of data giving counts for each of the possible
outcomes. That is, for data set i we have an observed count Oi,j for each possible
outcome ωj .


• Assumptions: None
• H0: Each data set is drawn from the same distribution. (We don’t specify what this

distribution is.)

• HA: The data sets are not all drawn from the same distribution.
• Test statistic: See the example below. There are mn cells containing counts for each

outcome for each data set. Using the null distribution we can estimate expected counts
for each of the data sets. The statistics X2 and G are computed exactly as above.

• Degrees of freedom df : (m − 1)(n − 1). (See the example below.)
• The null distribution χ2(df). The p-values are computed just as in the chi-square test

for goodness of fit.

• R code: The R function chisq.test can be used to do the computations for a chi-
square test using X². For G you either have to do it by hand or find a package that has
a function for it. (It will probably be called likelihood.test or G.test.)

Example 12. Someone claims to have found a long lost work by William Shakespeare.
She asks you to test whether or not the play was actually written by Shakespeare .

You go to http://www.opensourceshakespeare.org and pick 12 random pages from
King Lear and count the use of common words. You do the same thing for the ‘long lost
work’. You get the following table of counts.

Word a an this that
King Lear 150 30 30 90

Long lost work 90 20 10 80

Using this data, set up and evaluate a significance test of the claim that the long lost book
is by William Shakespeare. Use a significance level of 0.1.

answer: The null hypothesis H0: For the 4 words counted the long lost book has the same
relative frequencies as the counts taken from King Lear.

The total word count of both books combined is 500, so the maximum likelihood estimate
of the relative frequencies assuming H0 is simply the total count for each word divided by
the total word count.

Word a an this that Total count
King Lear 150 30 30 90 300

Long lost work 90 20 10 80 200
totals 240 50 40 170 500

rel. frequencies under H0 240/500 50/500 40/500 170/500 500/500

Now the expected counts for each book under H0 are the total count for that book times
the relative frequencies in the above table. The following table gives the counts: (observed,
expected) for each book.

Word a an this that Totals
King Lear (150, 144) (30, 30) (30, 24) (90, 102) (300, 300)

Long lost work (90, 96) (20, 20) (10, 16) (80, 68) (200, 200)

Totals (240, 240) (50, 50) (40, 40) (170, 170) (500, 500)



The chi-square statistic is

X² = ∑ (Oi − Ei)²/Ei
   = 6²/144 + 0²/30 + 6²/24 + 12²/102 + 6²/96 + 0²/20 + 6²/16 + 12²/68
   ≈ 7.9

There are 8 cells and all the marginal counts are fixed because they were needed to determine
the expected counts. To be consistent with these statistics we could freely set the values
in 3 cells in the table, e.g. 3 cells in one row; then the rest of the cells are determined
in order to make the marginal totals correct. Thus df = 3. (Or we could recall that
df = (m − 1)(n − 1) = (3)(1) = 3, where m is the number of columns and n is the number
of rows.)

Using R we find p = 1-pchisq(7.9,3) = 0.048. Since this is less than our significance
level of 0.1 we reject the null hypothesis that the relative frequencies of the words are the
same in both books.

If we make the further assumption that all of Shakespeare's plays have similar word
frequencies (which is something we could check) we conclude that the book is probably not
by Shakespeare.
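The whole homogeneity computation in Example 12 can also be reproduced outside R. The sketch below is an illustrative pure-Python version: expected counts are (row total × column total)/grand total, and the right-tail probability uses a closed form for the χ² survival function that is valid only for df = 3 (an assumption that matches this example; it is not a general-purpose pchisq replacement).

```python
import math

# Recomputing the chi-square test for homogeneity in Example 12.
# Rows: King Lear, long lost work. Columns: counts of 'a', 'an', 'this', 'that'.
counts = [[150, 30, 30, 90],
          [90, 20, 10, 80]]

col_totals = [sum(col) for col in zip(*counts)]   # 240, 50, 40, 170
row_totals = [sum(row) for row in counts]         # 300, 200
N = sum(row_totals)                               # 500

X2 = 0.0
for i, row in enumerate(counts):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / N     # expected count under H0
        X2 += (o - e) ** 2 / e

# Right-tail probability for chi-square with 3 degrees of freedom.
# Closed form, valid only for df = 3: P(Y > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)
p = math.erfc(math.sqrt(X2 / 2)) + math.sqrt(2 * X2 / math.pi) * math.exp(-X2 / 2)

print(round(X2, 1), round(p, 3))  # 7.9 0.048
```

Since p ≈ 0.048 < 0.1, this agrees with the rejection of H0 in the answer above.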

4.7 Other tests

There are far too many other tests to even make a dent. We will see some of them in
class and on psets. Again, we urge you to master the paradigm of NHST and recognize the
importance of choosing a test statistic with a known null distribution.

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Summary of NHST for 18.05
Jeremy Orloff and Jonathan Bloom

z-test

• Use: Compare the data mean to an hypothesized mean.
• Data: x1, x2, . . . , xn.
• Assumptions: The data are independent normal samples:

xi ∼ N(µ, σ2) where µ is unknown, but σ is known.
• H0: For a specified µ0, µ = µ0.
• HA:

Two-sided: µ ≠ µ0
one-sided-greater: µ > µ0
one-sided-less: µ < µ0
• Test statistic: z = (x̄ − µ0)/(σ/√n)
• Null distribution: f(z | H0) is the pdf of Z ∼ N(0, 1).
• p-value:
Two-sided: p = P(|Z| > z | H0) = 2*(1-pnorm(abs(z), 0, 1))
one-sided-greater (right-sided): p = P(Z > z | H0) = 1 - pnorm(z, 0, 1)
one-sided-less (left-sided): p = P(Z < z | H0) = pnorm(z, 0, 1)
• Critical values: zα has right-tail probability α:
P(Z > zα | H0) = α ⇔ zα = qnorm(1 − α, 0, 1).

• Rejection regions: let α be the significance.
Right-sided rejection region: [zα,∞)
Left-sided rejection region: (−∞, z1−α]
Two-sided rejection region: (−∞, z1−α/2] ∪ [zα/2,∞)
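The z-test recipe above can be checked numerically without R. This is an illustrative pure-Python sketch using the standard library's NormalDist, whose cdf and inv_cdf play the roles of pnorm and qnorm; the numbers are from the IQ example (x̄ = 112, µ0 = 100, σ = 15, n = 9, α = 0.05):

```python
from statistics import NormalDist

# z-test by hand: test statistic, right-sided p-value, critical value,
# and the rejection-region check, all for the IQ example.
x_bar, mu0, sigma, n, alpha = 112, 100, 15, 9, 0.05

z = (x_bar - mu0) / (sigma / n ** 0.5)   # test statistic: 2.4

std_normal = NormalDist(0, 1)
p_right = 1 - std_normal.cdf(z)          # right-sided p-value, like 1 - pnorm(z, 0, 1)
z_alpha = std_normal.inv_cdf(1 - alpha)  # critical value, like qnorm(1 - alpha, 0, 1)

reject = z >= z_alpha                    # is z in the right-sided rejection region [z_alpha, oo)?
print(round(z, 1), round(p_right, 7), round(z_alpha, 3), reject)  # 2.4 0.0081975 1.645 True
```

Rejecting via p < α and rejecting via z ≥ zα are the same decision, which is the point of the critical-value bullet above.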

Alternate test statistic

• Test statistic: x̄
• Null distribution: f(x̄ | H0) is the pdf of X̄ ∼ N(µ0, σ²/n).
• p-value:
Two-sided: p = P(|X̄ − µ0| > |x̄ − µ0| | H0) = 2*(1-pnorm(abs(x̄ − µ0), 0, σ/√n))
one-sided-greater: p = P(X̄ > x̄) = 1 - pnorm(x̄, µ0, σ/√n)
one-sided-less: p = P(X̄ < x̄) = pnorm(x̄, µ0, σ/√n)
• Critical values: x̄α has right-tail probability α:
P(X̄ > x̄α | H0) = α ⇔ x̄α = qnorm(1 − α, µ0, σ/√n).
• Rejection regions: let α be the significance.
Right-sided rejection region: [x̄α, ∞)
Left-sided rejection region: (−∞, x̄1−α]
Two-sided rejection region: (−∞, x̄1−α/2] ∪ [x̄α/2, ∞)



One-sample t-test of the mean

• Use: Compare the data mean to an hypothesized mean.
• Data: x1, x2, . . . , xn.
• Assumptions: The data are independent normal samples:

xi ∼ N(µ, σ2) where both µ and σ are unknown.
• H0: For a specified µ0, µ = µ0
• HA:

Two-sided: µ ≠ µ0
one-sided-greater: µ > µ0
one-sided-less: µ < µ0
• Test statistic: t = (x̄ − µ0)/(s/√n), where s² is the sample variance: s² = (1/(n − 1)) ∑ (xi − x̄)²
• Null distribution: f(t | H0) is the pdf of T ∼ t(n − 1). (Student t-distribution with n − 1 degrees of freedom)
• p-value:
Two-sided: p = P(|T| > t) = 2*(1-pt(abs(t), n-1))
one-sided-greater: p = P(T > t) = 1 - pt(t, n-1)
one-sided-less: p = P(T < t) = pt(t, n-1)
• Critical values: tα has right-tail probability α:
P(T > tα | H0) = α ⇔ tα = qt(1 − α, n − 1).
• Rejection regions: let α be the significance.
Right-sided rejection region: [tα, ∞)
Left-sided rejection region: (−∞, t1−α]
Two-sided rejection region: (−∞, t1−α/2] ∪ [tα/2, ∞)

Two-sample t-test for comparing means (assuming equal variance)

• Use: Compare the means from two groups.
• Data: x1, x2, . . . , xn and y1, y2, . . . , ym.
• Assumptions: Both groups of data are independent normal samples:

xi ∼ N(µx, σ²)
yj ∼ N(µy, σ²)

where both µx and µy are unknown and possibly different. The variance σ² is unknown,
but the same for both groups.

• H0: µx = µy
• HA:

Two-sided: µx ≠ µy
one-sided-greater: µx > µy
one-sided-less: µx < µy
• Test statistic: t = (x̄ − ȳ)/sP,
where s²x and s²y are the sample variances and s²P is (sometimes called) the pooled sample variance:
s²p = ((n − 1)s²x + (m − 1)s²y)/(n + m − 2) × (1/n + 1/m)
• Null distribution: f(t | H0) is the pdf of T ∼ t(n + m − 2). (Student t-distribution with n + m − 2 degrees of freedom.)
• p-value:
Two-sided: p = P(|T| > t) = 2*(1-pt(abs(t), n+m-2))
one-sided-greater: p = P(T > t) = 1 - pt(t, n+m-2)
one-sided-less: p = P(T < t) = pt(t, n+m-2)
• Critical values: tα has right-tail probability α:
P(T > tα | H0) = α ⇔ tα = qt(1 − α, n + m − 2).
• Rejection regions: let α be the significance.
Right-sided rejection region: [tα, ∞)
Left-sided rejection region: (−∞, t1−α]
Two-sided rejection region: (−∞, t1−α/2] ∪ [tα/2, ∞)

Notes: 1. There is a form of the t-test for when the variances are not assumed equal. It is
sometimes called Welch’s t-test.
2. When the data naturally comes in pairs (xi, yi), one uses the paired two-sample t-test.
For example, in comparing two treatments, each patient receiving treatment 1 might be
paired with a patient receiving treatment 2 who is similar in terms of stage of disease, age,
sex, etc.

χ2 test for variance

• Use: Compare the data variance to an hypothesized variance.
• Data: x1, x2, . . . , xn.
• Assumptions: The data are independent normal samples:

xi ∼ N(µ, σ2) where both µ and σ are unknown.
• H0: For a specified σ0, σ = σ0
• HA:

Two-sided: σ ≠ σ0
one-sided-greater: σ > σ0
one-sided-less: σ < σ0
• Test statistic: X² = (n − 1)s²/σ0², where s² is the sample variance: s² = (1/(n − 1)) ∑ (xi − x̄)²
• Null distribution: f(X² | H0) is the pdf of χ² ∼ χ²(n − 1). (Chi-square distribution with n − 1 degrees of freedom)
• p-value: Because the χ² distribution is not symmetric around zero the two-sided test is a little awkward to write down. The idea is to look at the X² statistic and see if it's in the left or right tail of the distribution. The p-value is twice the probability in that tail. An easy check for which tail it's in is: s²/σ0² > 1 (right tail) or s²/σ0² < 1 (left tail).
Two-sided: p = 2·P(χ² > X²) if X² is in the right tail

p = 2·P(χ² < X²) if X² is in the left tail
= 2*min(pchisq(X2, n-1), 1-pchisq(X2, n-1))
one-sided-greater: p = P(χ² > X²) = 1 - pchisq(X2, n-1)
one-sided-less: p = P(χ² < X²) = pchisq(X2, n-1)
• Critical values: xα has right-tail probability α:
P(χ² > xα | H0) = α ⇔ xα = qchisq(1 − α, n − 1).

• Rejection regions: let α be the significance.
Right-sided rejection region: [xα,∞)
Left-sided rejection region: (−∞, x1−α]
Two-sided rejection region: (−∞, x1−α/2] ∪ [xα/2,∞)

χ2 test for goodness of fit for categorical data

• Use: Test whether discrete data fits a specific finite probability mass function.
• Data: An observed count Oi in cell i of a table.
• Assumptions: None
• H0: The data was drawn from a specific discrete distribution.
• HA: The data was drawn from a different distribution.

• Test statistic: The data consists of observed counts Oi for each cell. From the null hy-
pothesis probability table we get a set of expected counts Ei. There are two statistics
that we can use:

Likelihood ratio statistic: G = 2 ∑ Oi ln(Oi/Ei)

Pearson's chi-square statistic: X² = ∑ (Oi − Ei)²/Ei

It is a theorem that under the null hypothesis X² ≈ G and both are approximately
chi-square. Before computers, X2 was used because it was easier to compute. Now,
it is better to use G although you will still see X2 used quite often.

• Degrees of freedom df : The number of cell counts that can be freely specified. In the
case above, of the n cells n − 1 can be freely specified and the last must be set to
make the correct total. So we have df = n− 1 degrees of freedom.
In other chi-square tests there can be more relations between the cell counts, so df
might be different from n − 1.


• Rule of thumb: Combine cells until the expected count in each cell is at least 5.
• Null distribution: Assuming H0, both statistics (approximately) follow a chi-square

distribution with df degrees of freedom. That is both f(G |H0) and f(X2 |H0) have
the same pdf as Y ∼ χ2(df).

• p-value:
p = P (Y > G) = 1 – pchisq(G, df)
p = P (Y > X2) = 1 – pchisq(X2, df)

• Critical values: cα has right-tail probability α

P (Y > cα | H0) = α ⇔ cα = qchisq(1− α, df).

• Rejection regions: let α be the significance.
We expect X2 to be small if the fit of the data to the hypothesized distribution is
good. So we only use a right-sided rejection region: [cα, ∞).

One-way ANOVA (F -test for equal means)

• Use: Compare the data means from n groups with m data points in each group.
• Data:

x1,1, x1,2, . . . , x1,m
x2,1, x2,2, . . . , x2,m

. . .
xn,1, xn,2, . . . , xn,m

• Assumptions: Data for each group is an independent normal sample drawn from
distributions with (possibly) different means but the same variance:

x1,j ∼ N(µ1, σ2)
x2,j ∼ N(µ2, σ2)

. . .
xn,j ∼ N(µn, σ2)

The group means µi are unknown and possibly different. The variance σ2 is unknown,
but the same for all groups.

• H0: All the means are identical µ1 = µ2 = . . . = µn.
• HA: Not all the means are the same.
• Test statistic: w = MSB/MSW, where

x̄i = mean of group i = (xi,1 + xi,2 + . . . + xi,m)/m.

x = grand mean of all the data.

s2i = sample variance of group i = (1/(m − 1)) ∑j (xi,j − x̄i)2, summing over j = 1, . . . , m.

MSB = between group variance = m × (sample variance of the group means) = (m/(n − 1)) ∑i (x̄i − x)2, summing over i = 1, . . . , n.

MSW = average within group variance = sample mean of s21, . . . , s2n = (s21 + s22 + . . . + s2n)/n.
• Idea: If the µi are all equal, this ratio should be near 1. If they are not equal then

MSB should be larger while MSW should remain about the same, so w should be
larger. We won’t give a proof of this.

• Null distribution: f(w |H0) is the pdf of W ∼ F (n− 1, n(m− 1)).
This is the F -distribution with (n − 1) and n(m − 1) degrees of freedom. Several
F -distributions are plotted below.

• p-value: p = P (W > w) = 1 - pf(w, n-1, n*(m-1))

(Figure: pdfs of the F-distributions F(3,4), F(10,15), and F(30,15).)

Notes: 1. ANOVA tests whether all the means are the same. It does not test whether
some subset of the means are the same.
2. There is a test where the variances are not assumed equal.
3. There is a test where the groups don’t all have the same number of samples.
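The statistic w is simple enough to compute by hand or in a few lines of code. Here is a pure-Python sketch on made-up data with n = 3 groups of m = 4 points each; in R one would finish with 1 - pf(w, 2, 9).

```python
from statistics import mean, variance   # variance() uses the n-1 denominator

# Hypothetical data: n = 3 groups, m = 4 points each
groups = [[6, 7, 5, 6], [8, 9, 7, 8], [7, 6, 8, 7]]
n, m = len(groups), len(groups[0])

group_means = [mean(g) for g in groups]        # 6, 8, 7
MSB = m * variance(group_means)                # between-group variance
MSW = mean([variance(g) for g in groups])      # average within-group variance
w = MSB / MSW

print(w)   # ≈ 6.0, to be compared against the F(n-1, n(m-1)) = F(2, 9) distribution
```

With w ≈ 6 well above the 0.95 quantile of F(2, 9), we would reject the hypothesis of equal means at significance 0.05.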

F -test for equal variances

• Use: Compare the variances from two groups.
• Data: x1, x2, . . . , xn and y1, y2, . . . , ym.
• Assumptions: Both groups of data are independent normal samples:

xi ∼ N(µx, σ2x)
yj ∼ N(µy, σ2y)


where µx, µy, σx and σy are all unknown.

• H0: σx = σy
• HA:

Two-sided: σx ≠ σy
one-sided-greater: σx > σy
one-sided-less: σx < σy

• Test statistic: w = s2x/s2y, where s2x and s2y are the sample variances of the data.

• Null distribution: f(w |H0) is the pdf of W ∼ F (n − 1, m − 1).
(F-distribution with n − 1 and m − 1 degrees of freedom.)

• p-value:
Two-sided: p = 2*min(pf(w, n-1, m-1), 1 - pf(w, n-1, m-1))
one-sided-greater: p = P (W > w) = 1 - pf(w, n-1, m-1)
one-sided-less: p = P (W < w) = pf(w, n-1, m-1)

• Critical values: wα has right-tail probability α:

P (W > wα | H0) = α ⇔ wα = qf(1 − α, n-1, m-1).
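A quick numerical sketch with invented samples; the ratio of sample variances is the only computation needed before looking up the F(n − 1, m − 1) distribution:

```python
from statistics import variance   # sample variance, n-1 denominator

# Hypothetical samples
x = [10, 12, 11, 9, 13]   # n = 5
y = [20, 21, 19, 20]      # m = 4

w = variance(x) / variance(y)   # 2.5 / (2/3) = 3.75

# In R the two-sided p-value would be 2 * min(pf(w, 4, 3), 1 - pf(w, 4, 3))
print(w)
```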

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Comparison of frequentist and Bayesian inference.

Class 20, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to explain the difference between the p-value and a posterior probability to a
doctor.

2 Introduction

We have now learned about two schools of statistical inference: Bayesian and frequentist.
Both approaches allow one to evaluate evidence about competing hypotheses. In these notes
we will review and compare the two approaches, starting from Bayes’ formula.

3 Bayes’ formula as touchstone

In our first unit (probability) we learned Bayes’ formula, a perfectly abstract statement
about conditional probabilities of events:

P (A | B) = P (B | A)P (A)/P (B).

We began our second unit (Bayesian inference) by reinterpreting the events in Bayes’ formula:

P (H | D) = P (D | H)P (H)/P (D).
Now H is a hypothesis and D is data which may give evidence for or against H. Each term
in Bayes’ formula has a name and a role.

• The prior P (H) is the probability that H is true before the data is considered.

• The posterior P (H |D) is the probability that H is true after the data is considered.

• The likelihood P (D |H) is the evidence about H provided by the data D.

• P (D) is the total probability of the data taking into account all possible hypotheses.

If the prior and likelihood are known for all hypotheses, then Bayes’ formula computes the
posterior exactly. Such was the case when we rolled a die randomly selected from a cup
whose contents you knew. We call this the deductive logic of probability theory, and it gives
a direct way to compare hypotheses, draw conclusions, and make decisions.

In most experiments, the prior probabilities on hypotheses are not known. In this case, our
recourse is the art of statistical inference: we either make up a prior (Bayesian) or do our
best using only the likelihood (frequentist).
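When the prior and likelihood are known for all hypotheses (as with the die drawn from a known cup), the deductive update is purely mechanical. A minimal Python sketch with invented numbers: a cup holds a fair coin and a coin with P(heads) = 0.75, each drawn with probability 0.5, and we observe a single toss of heads.

```python
priors = {'fair': 0.5, 'biased': 0.5}          # P(H) for each hypothesis
likelihoods = {'fair': 0.5, 'biased': 0.75}    # P(D | H) for data D = heads

# Bayes' formula: posterior is likelihood * prior, normalized by P(D)
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
p_data = sum(unnormalized.values())            # total probability of the data P(D)
posterior = {h: v / p_data for h, v in unnormalized.items()}

print(posterior)   # {'fair': 0.4, 'biased': 0.6}
```

A single head shifts belief toward the biased coin exactly as the formula dictates; with known priors, no "art" is required.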


2 18.05 class 20, Comparison of frequentist and Bayesian inference., Spring 2014

The Bayesian school models uncertainty by a probability distribution over hypotheses.
One’s ability to make inferences depends on one’s degree of confidence in the chosen prior,
and the robustness of the findings to alternate prior distributions may be relevant and
important.

The frequentist school only uses conditional distributions of data given specific hypotheses.
The presumption is that some hypothesis (parameter specifying the conditional distribution
of the data) is true and that the observed data is sampled from that distribution. In
particular, the frequentist approach does not depend on a subjective prior that may vary
from one investigator to another.

These two schools may be further contrasted as follows:

Bayesian inference

• uses probabilities for both hypotheses and data.

• depends on the prior and likelihood of observed data.

• requires one to know or construct a ‘subjective prior’.

• dominated statistical practice before the 20th century.

• may be computationally intensive due to integration over many parameters.

Frequentist inference (NHST)

• never uses or gives the probability of a hypothesis (no prior or posterior).

• depends on the likelihood P (D |H) for both observed and unobserved data.

• does not require a prior.

• dominated statistical practice during the 20th century.

• tends to be less computationally intensive.

Frequentist measures like p-values and confidence intervals continue to dominate research,
especially in the life sciences. However, in the current era of powerful computers and
big data, Bayesian methods have undergone an enormous renaissance in fields like
machine learning and genetics. There are now a number of large, ongoing clinical trials using
Bayesian protocols, something that would have been hard to imagine a generation ago.
While professional divisions remain, the consensus forming among top statisticians is that
the most effective approaches to complex problems often draw on the best insights from
both schools working in concert.

4 Critiques and defenses

4.1 Critique of Bayesian inference

1. The main critique of Bayesian inference is that a subjective prior is, well, subjective.
There is no single method for choosing a prior, so different people will produce different
priors and may therefore arrive at different posteriors and conclusions.


2. Furthermore, there are philosophical objections to assigning probabilities to hypotheses,
as hypotheses do not constitute outcomes of repeatable experiments in which one can
measure long-term frequency. Rather, a hypothesis is either true or false, regardless of whether
one knows which is the case. A coin is either fair or unfair; treatment 1 is either better or
worse than treatment 2; the sun will or will not come up tomorrow.

4.2 Defense of Bayesian inference

1. The probability of hypotheses is exactly what we need to make decisions. When the
doctor tells me a screening test came back positive I want to know what is the probability
this means I’m sick. That is, I want to know the probability of the hypothesis “I’m sick”.

2. Using Bayes’ theorem is logically rigorous. Once we have a prior all our calculations
have the certainty of deductive logic.

3. By trying different priors we can see how sensitive our results are to the choice of prior.

4. It is easy to communicate a result framed in terms of probabilities of hypotheses.

5. Even though the prior may be subjective, one can specify the assumptions used to arrive
at it, which allows other people to challenge it or try other priors.

6. The evidence derived from the data is independent of notions about ‘data more extreme’
that depend on the exact experimental setup (see the “Stopping rules” section below).

7. Data can be used as it comes in. There is no requirement that every contingency be
planned for ahead of time.

4.3 Critique of frequentist inference

1. It is ad-hoc and does not carry the force of deductive logic. Notions like ‘data more
extreme’ are not well defined. The p-value depends on the exact experimental setup (see
the “Stopping rules” section below).

2. Experiments must be fully specified ahead of time. This can lead to paradoxical seeming
results. See the ‘voltmeter story’ in:
http://en.wikipedia.org/wiki/Likelihood_principle

3. The p-value and significance level are notoriously prone to misinterpretation. Careful
statisticians know that a significance level of 0.05 means the probability of a type I error
is 5%. That is, if the null hypothesis is true then 5% of the time it will be rejected due to
randomness. Many (most) other people erroneously think a p-value of 0.05 means that the
probability of the null hypothesis is 5%.

Strictly speaking you could argue that this is not a critique of frequentist inference but,
rather, a critique of popular ignorance. Still, the subtlety of the ideas certainly contributes
to the problem. (see “Mind your p’s” below).

4.4 Defense of frequentist inference

1. It is objective: all statisticians will agree on the p-value. Any individual can then decide
if the p-value warrants rejecting the null hypothesis.


2. Hypothesis testing using frequentist significance testing is applied in the statistical
analysis of scientific investigations, evaluating the strength of evidence against a null hypothesis
with data. The interpretation of the results is left to the user of the tests. Different users
may apply different significance levels for determining statistical significance. Frequentist
statistics does not pretend to provide a way to choose the significance level; rather it
explicitly describes the trade-off between type I and type II errors.

3. Frequentist experimental design demands a careful description of the experiment and
methods of analysis before starting. This helps control for experimenter bias.

4. The frequentist approach has been used for over 100 years and we have seen tremendous
scientific progress. Although the frequentist herself would not put a probability on the belief
that frequentist methods are valuable, shouldn’t this history give the Bayesian a strong prior
belief in the utility of frequentist methods?

5 Mind your p’s.

We run a two-sample t-test for equal means, with α = 0.05, and obtain a p-value of 0.04.
What are the odds that the two samples are drawn from distributions with the same mean?

(a) 19/1 (b) 1/19 (c) 1/20 (d) 1/24 (e) unknown

answer: (e) unknown. Frequentist methods only give probabilities of statistics conditioned
on hypotheses. They do not give probabilities of hypotheses.

6 Stopping rules

When running a series of trials we need a rule on when to stop. Two common rules are:
1. Run exactly n trials and stop.
2. Run trials until you see a certain result and then stop.

In this example we’ll consider two coin tossing experiments.

Experiment 1: Toss the coin exactly 6 times and report the number of heads.

Experiment 2: Toss the coin until the first tails and report the number of heads.

Jon is worried that his coin is biased towards heads, so before using it in class he tests it
for fairness. He runs an experiment and reports to Jerry that his sequence of tosses was
HHHHHT . But Jerry is only half-listening, and he forgets which experiment Jon ran to
produce the data.

Frequentist approach.
Since he’s forgotten which experiment Jon ran, Jerry the frequentist decides to compute
the p-values for both experiments given Jon’s data.

Let θ be the probability of heads. We have the null and one-sided alternative hypotheses

H0 : θ = 0.5, HA : θ > 0.5.

Experiment 1: The null distribution is binomial(6, 0.5), so the one-sided p-value is the
probability of 5 or 6 heads in 6 tosses. Using R we get

p = 1 - pbinom(4, 6, 0.5) = 0.1094.


Experiment 2: The null distribution is geometric(0.5), so the one-sided p-value is the
probability of 5 or more heads before the first tails. Using R we get

p = 1 - pgeom(4, 0.5) = 0.0313.

Using the typical significance level of 0.05, the same data leads to opposite conclusions! We
would reject H0 in experiment 2, but not in experiment 1.

The frequentist is fine with this. The set of possible outcomes is different for the different
experiments so the notion of extreme data, and therefore p-value, is different. For example,
in experiment 1 we would consider THHHHH to be as extreme as HHHHHT. In
experiment 2 we would never see THHHHH since the experiment would end after the first
tails.
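Both null distributions here are simple enough that the two p-values can be checked with exact arithmetic, without R:

```python
from math import comb

# Experiment 1: number of heads ~ binomial(6, 0.5); 'as extreme' means >= 5 heads
p1 = sum(comb(6, k) for k in (5, 6)) / 2**6   # 7/64 = 0.109375

# Experiment 2: heads before the first tails ~ geometric(0.5);
# 'as extreme' means the first 5 tosses are all heads
p2 = 0.5**5                                    # 1/32 = 0.03125

print(p1, p2)   # matches 0.1094 and 0.0313 above
```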

Bayesian approach.
Jerry the Bayesian knows it doesn’t matter which of the two experiments Jon ran, since
the binomial and geometric likelihood functions (columns) for the data HHHHHT are
proportional. In either case, he must make up a prior, and he chooses Beta(3,3). This is a
relatively flat prior concentrated over the interval 0.25 ≤ θ ≤ 0.75.
See http://mathlets.org/mathlets/beta-distribution/

Since the beta and binomial (or geometric) distributions form a conjugate pair the Bayesian
update is simple. Data of 5 heads and 1 tails gives a posterior distribution Beta(8,4). Here
is a graph of the prior and the posterior. The blue lines at the bottom are 50% and 90%
probability intervals for the posterior.

(Figure: the prior Beta(3,3) and posterior Beta(8,4) densities on 0 ≤ θ ≤ 1.)

Prior and posterior distributions with 0.5 and 0.9 probability intervals

Here are the relevant computations in R:

Posterior 50% probability interval: qbeta(c(0.25, 0.75), 8, 4) = [0.58 0.76]
Posterior 90% probability interval: qbeta(c(0.05, 0.95), 8, 4) = [0.44 0.86]
P (θ > 0.50 | data) = 1 - pbeta(0.5, 8, 4) = 0.89
Starting from the prior Beta(3,3), the posterior probability that the coin is biased toward
heads is 0.89.
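The 0.89 can be double-checked without R. For integer parameters the Beta(a, b) cdf satisfies the identity F(x) = P(Bin(a + b − 1, x) ≥ a), so pure Python suffices; that identity (not any new data) is the only outside fact used here.

```python
from math import comb

a, b = 8, 4     # posterior Beta(8, 4) from the update above
x = 0.5

# Beta(a, b) cdf at x equals P(Bin(a + b - 1, x) >= a) for integer a, b
n = a + b - 1
cdf = sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(a, n + 1))

p_biased = 1 - cdf   # P(theta > 0.5 | data)
print(round(p_biased, 2))   # 0.89
```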



7 Making decisions

Quite often the goal of statistical inference is to help with making a decision, e.g. whether
or not to undergo surgery, how much to invest in a stock, whether or not to go to graduate
school, etc.

In statistical decision theory, consequences of taking actions are measured by a utility
function. The utility function assigns a weight to each possible outcome; in the language of
probability, it is simply a random variable.

For example, in my investments I could assign a utility of d to the outcome of a gain of
d dollars per share of a stock (if d < 0 my utility is negative). On the other hand, if my tolerance for risk is low, I will assign a more negative utility to losses than to gains (say, −d2 if d < 0 and d if d ≥ 0). A decision rule combines the expected utility with evidence for each hypothesis given by the data (e.g., p-values or posterior distributions) into a formal statistical framework for making decisions. In this setting, the frequentist will consider the expected utility given a hypothesis E(U |H) where U is the random variable representing utility. There are frequentist methods for combining the expected utility with p-values of hypotheses to guide decisions. The Bayesian can combine E(U |H) with the posterior (or prior if it’s before data is col­ lected) to create a Bayesian decision rule. In either framework, two people considering the same investment may have different utility functions and make different decisions. For example, a riskier stock (with higher potential upside and downside) will be more appealing with respect to the first utility function above than with respect to the second (loss-averse) one. A significant theoretical result is that for any decision rule there is a Bayesian decision rule which is, in a precise sense, at least as good a rule. MIT OpenCourseWare https://ocw.mit.edu 18.05 Introduction to Probability and Statistics Spring 2014 For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms. https://ocw.mit.edu https://ocw.mit.edu/terms 1 2 Confidence intervals based on normal data Class 22, 18.05 Jeremy Orloff and Jonathan Bloom Learning Goals 1. Be able to determine whether an expression defines a valid interval statistic. 2. Be able to compute z and t confidence intervals for the mean given normal data. 3. Be able to compute the χ2 confidence interval for the variance given normal data. 4. Be able to define the confidence level of a confidence interval. 5. 
Be able to explain the relationship between the z confidence interval (and confidence level) and the z non-rejection region (and significance level) in NHST.

2 Introduction

We continue to survey the tools of frequentist statistics. Suppose we have a model (probability distribution) for observed data with an unknown parameter. We have seen how NHST uses data to test the hypothesis that the unknown parameter has a particular value. We have also seen how point estimates like the MLE use data to provide an estimate of the unknown parameter.

On its own, a point estimate like x = 2.2 carries no information about its accuracy; it’s just a single number, regardless of whether it’s based on ten data points or one million data points. For this reason, statisticians augment point estimates with confidence intervals. For example, to estimate an unknown mean µ we might be able to say that our best estimate of the mean is x = 2.2 with a 95% confidence interval [1.2, 3.2]. Another way to describe the interval is: x ± 1.

We will leave to later the explanation of exactly what the 95% confidence level means. For now, we’ll note that taken together the width of the interval and the confidence level provide a measure of the strength of the evidence supporting the hypothesis that µ is close to our estimate x. You should think of the confidence level of an interval as analogous to the significance level of an NHST. As explained below, it is no accident that we often see significance level α = 0.05 and confidence level 0.95 = 1 − α.

We will first explore confidence intervals in situations where you will easily be able to compute by hand: z and t confidence intervals for the mean and χ2 confidence intervals for the variance. We will use R to handle all the computations in more complicated cases. Indeed, the challenge with confidence intervals is not their computation, but rather interpreting them correctly and knowing how to use them in practice.
18.05 class 22, Confidence intervals based on normal data, Spring 2014

3 Interval statistics

Recall that our working definition of a statistic is anything that can be computed from data. In particular, the formula for a statistic cannot include unknown quantities.

Example 1. Suppose x1, . . . , xn is drawn from N(µ, σ2) where µ and σ are unknown.
(i) x and x − 5 are statistics.
(ii) x − µ is not a statistic since µ is unknown.
(iii) If µ0 is a known value, then x − µ0 is a statistic. This case arises when we consider the null hypothesis µ = µ0. For example, if the null hypothesis is µ = 5, then the statistic x − µ0 is just x − 5 from (i).

We can play the same game with intervals to define interval statistics.

Example 2. Suppose x1, . . . , xn is drawn from N(µ, σ2) where µ is unknown.
(i) The interval [x − 2.2, x + 2.2] = x ± 2.2 is an interval statistic.
(ii) If σ is known, then [x − 2σ/√n, x + 2σ/√n] is an interval statistic.
(iii) On the other hand, if σ is unknown then [x − 2σ/√n, x + 2σ/√n] is not an interval statistic.
(iv) If s2 is the sample variance, then [x − 2s/√n, x + 2s/√n] is an interval statistic, because s2 is computed from the data.

We will return to (ii) and (iv), as these are respectively the z and t confidence intervals for estimating µ.

Technically an interval statistic is nothing more than a pair of point statistics giving the lower and upper bounds of the interval. Our reason for emphasizing that the interval is a statistic is to highlight the following:

1. The interval is random – new random data will produce a new interval.

2. As frequentists we are perfectly happy using it because it doesn’t depend on the value of an unknown parameter or hypothesis.

3. As usual with frequentist statistics we have to assume a certain hypothesis, e.g. a value of µ, before we can compute probabilities about the interval.

Example 3. Suppose we draw n samples x1, . . . , xn from a N(µ, 1) distribution, where µ is unknown.
Suppose we wish to know the probability that 0 is in the interval [x − 2, x + 2]. Without knowing the value of µ this is impossible. However, we can compute this probability for any given (hypothesized) value of µ.

4. A warning which will be repeated: Be careful in your thinking about these probabilities. Confidence intervals are a frequentist notion. Since frequentists do not compute probabilities of hypotheses, the confidence level is never a probability that the unknown parameter is in the confidence interval.

Suppose the data is x1, x2, . . . , xn ∼ N(µ, σ2), with unknown mean µ and known variance σ2. The (1 − α) confidence interval for µ is

[x − zα/2 · σ/√n, x + zα/2 · σ/√n],   (1)

where zα/2 is the right critical value P (Z > zα/2) = α/2.

For example, if α = 0.05 then zα/2 = 1.96 so the 0.95 (or 95%) confidence interval is

[x − 1.96σ/√n, x + 1.96σ/√n].

We’ve created an applet that generates normal data and displays the corresponding z
confidence interval for the mean. It also shows the t-confidence interval, as discussed in the
next section. Play around to get a sense for random intervals!

Confidence Intervals

Example 4. Suppose we collect 100 data points from a N(µ, 3²) distribution and the
sample mean is x = 12. Give the 95% confidence interval for µ.

answer: Using the formula this is trivial to compute: the 95% confidence interval for µ is

[x − 1.96σ/√n, x + 1.96σ/√n] = [12 − 1.96 · 3/10, 12 + 1.96 · 3/10] = [11.412, 12.588].
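The arithmetic in Example 4 is a one-liner; here it is as a check, in pure Python with 1.96 ≈ qnorm(0.975) hard-coded just as in the text:

```python
from math import sqrt

xbar, sigma, n = 12, 3, 100
z = 1.96   # z_{alpha/2} for alpha = 0.05

half_width = z * sigma / sqrt(n)   # 1.96 * 3 / 10 = 0.588
ci = (xbar - half_width, xbar + half_width)
print(ci)   # approximately (11.412, 12.588)
```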

4 z confidence intervals for the mean

Throughout this section we will assume that we have normally distributed data:

x1, x2, . . . , xn ∼ N(µ, σ2).

As we often do, we will introduce the main ideas through examples, building on what we know about rejection and non-rejection regions in NHST until we have constructed a confidence interval.

4.1 Definition of z confidence intervals for the mean

We start with z confidence intervals for the mean. First we’ll give the formula, stated in equation (1) above. Then we’ll walk through the derivation in one entirely numerical example. This will give us the basic idea. Then we’ll repeat this example, replacing the explicit numbers by symbols. Finally we’ll work through a computational example.

Definition: Suppose the data x1, . . . , xn ∼ N(µ, σ2), with unknown mean µ and known variance σ2; the (1 − α) confidence interval for µ is the interval given in equation (1).

4.2 Explaining the definition part 1: rejection regions

Our next goal is to explain the definition (1) starting from our knowledge of rejection/non-rejection regions. The phrase ‘non-rejection region’ is not pretty, but we will discipline ourselves to use it instead of the inaccurate phrase ‘acceptance region’.


Example 5. Suppose that n = 12 data points are drawn from N(µ, 5²) where µ is unknown.
Set up a two-sided significance test of H0 : µ = 2.71 using the statistic x at significance
level α = 0.05. Describe the rejection and non-rejection regions.

answer: Under the null hypothesis µ = 2.71 we have xi ∼ N(2.71, 5²) and thus

x ∼ N(2.71, 5²/12),

where 5²/12 is the variance (σx)2 of x. We know that significance α = 0.05 corresponds to
a rejection region outside 1.96 standard deviations from the hypothesized mean. That is,
the non-rejection and rejection regions are separated by the critical values 2.71 ± 1.96 · σx.

Non-rejection region: [2.71 − 1.96 · 5/√12, 2.71 + 1.96 · 5/√12] = [−0.12, 5.54].

Rejection region: (−∞, −0.12] ∪ [5.54, ∞).
The following figure shows the rejection and non-rejection regions for x. The regions
represent ranges of x so they are represented by the colored bars on the x axis. The area of the
shaded region is the significance level.

(Figure: the null distribution N(2.71, 5²/12) with −0.12, 2.71 and 5.54 marked on the x axis.)
The rejection (orange) and non-rejection (blue) regions for x.
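The endpoints −0.12 and 5.54 in Example 5 are easy to reproduce in a couple of lines (pure Python, 1.96 hard-coded as in the text):

```python
from math import sqrt

mu0, sigma, n, z = 2.71, 5, 12, 1.96
half_width = z * sigma / sqrt(n)   # 1.96 * 5 / sqrt(12), about 2.83

non_rejection = (mu0 - half_width, mu0 + half_width)
print(round(non_rejection[0], 2), round(non_rejection[1], 2))   # -0.12 5.54
```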

Let’s redo the previous example using symbols for the known quantities as well as for µ.

Example 6. Suppose that n data points are drawn from N(µ, σ2) where µ is unknown
and σ is known. Set up a two-sided significance test of H0 : µ = µ0 using the statistic x at
significance level α = 0.05. Describe the rejection and non-rejection regions.

answer: Under the null hypothesis µ = µ0 we have xi ∼ N(µ0, σ2) and thus

x ∼ N(µ0, σ2/n),

where σ2/n is the variance (σx)2 of x and µ0, σ and n are all known values.

Let zα/2 be the critical value: P (Z > zα/2) = α/2. Then the non-rejection and rejection
regions are separated by the values of x that are zα/2 · σx from the hypothesized mean.

Since σx = σ/√n we have

Non-rejection region: [µ0 − zα/2 · σ/√n, µ0 + zα/2 · σ/√n]   (2)

Rejection region: (−∞, µ0 − zα/2 · σ/√n] ∪ [µ0 + zα/2 · σ/√n, ∞).

We get the same figure as above, with the explicit numbers replaced by symbolic values.

(Figure: the null distribution N(µ0, σ2/n) with critical values µ0 ± zα/2 · σ/√n marked on the x axis.)
The rejection (orange) and non-rejection (blue) regions for x.

4.3 Manipulating intervals: pivoting

We need to get comfortable manipulating intervals. In general, we will make use of the type
of ‘obvious’ statements that are very hard to get across. One key is to be clear about the
various items.

Here is a quick summary of intervals around x and µ0 and what is called pivoting. Pivoting
is the idea that ‘x is in µ0 ± a’ says exactly the same thing as ‘µ0 is in x ± a’.
Example 7. Suppose we have the sample mean x and hypothesized mean µ0 = 2.71.
Suppose also that the null distribution is N(µ0, 3²). Then with a significance level of 0.05
we have:

• µ0 + 1.96σ = 2.71 + 1.96(3) = 2.71 + 5.88 is the 0.025 critical value

• µ0 − 1.96σ = 2.71 − 1.96(3) = 2.71 − 5.88 is the 0.975 critical value

• The non-rejection region is centered on µ0 = 2.71. That is, we don’t reject H0 if x is
in the interval

[µ0 − 1.96σ, µ0 + 1.96σ] = [2.71 − 5.88, 2.71 + 5.88]

• The confidence interval is centered on x. The 0.95 confidence interval uses the same
width as the non-rejection region. It is the interval

[x − 1.96σ, x + 1.96σ] = [x − 5.88, x + 5.88]

There is a symmetry here: x is in the interval [2.71 − 1.96σ, 2.71 + 1.96σ] is equivalent to
2.71 is in the interval [x − 1.96σ, x + 1.96σ].
This symmetry is called pivoting. Here are some simple numerical examples of pivoting.

Example 8. (i) 1.5 is in the interval [0−2.3, 0+2.3], so 0 is in the interval [1.5−2.3, 1.5+2.3]
(ii) Likewise 1.5 is not in the interval [0−1, 0+1], so 0 is not in the interval [1.5−1, 1.5+1].


The symmetry might be most clear if we talk in terms of distances: the statement
’1.5 is in the interval [0 − 2.3, 0 + 2.3]’

says that the distance from 1.5 to 0 is at most 2.3. Likewise, the statement
’0 is in the interval [1.5 − 2.3, 1.5 + 2.3]’

says exactly the same thing, i.e. the distance from 0 to 1.5 is at most 2.3.

Here is a visualization of pivoting from intervals around µ0 to intervals around x.

(Figure: a number line with µ0 = 0 and x = 1.5; the intervals µ0 ± 1 and x ± 1 each fail to reach the other center, while µ0 ± 2.3 and x ± 2.3 each contain the other center.)

The distance between x and µ0 is 1.5. Now, since 1 < 1.5, the interval µ0 ± 1 does not
stretch far enough to contain x. Likewise the interval x ± 1 does not stretch far enough to contain
µ0. In contrast, since 2.3 > 1.5, x is in the interval µ0 ± 2.3 and µ0 is in the interval
x ± 2.3.
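Pivoting is nothing more than the symmetry of the distance |x − µ0|, which a two-line check makes concrete (numbers from Example 8):

```python
def contains(center, half_width, point):
    # 'point is in the interval center ± half_width'
    return abs(point - center) <= half_width

# x = 1.5, mu0 = 0: the two membership statements always agree
print(contains(0, 2.3, 1.5), contains(1.5, 2.3, 0))   # True True
print(contains(0, 1.0, 1.5), contains(1.5, 1.0, 0))   # False False
```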

4.4 Explaining the definition part 2: translating the non-rejection region
to a confidence interval

The previous examples are nice if we happen to have a null hypothesis. But what if we
don’t have a null hypothesis? In this case, we have the point estimate x but we still want
to use the data to estimate an interval range for the unknown mean. That is, we want an
interval statistic. This is given by a confidence interval.

Here we will show how to translate the notion of an non-rejection region to that of a
confidence interval. The confidence level will control the rate of certain types of errors in
much the same way the significance level does for NHST.

The trick is to give a little thought to the non-rejection region. Using the numbers from
Example 5 we would say that at significance level 0.05 we don’t reject if

x is in the interval 2.71 ± 1.96 · 5/√12.   (3)

The roles of x and 2.71 are symmetric. The equation just above can be read as: x is within
1.96 · 5/√12 of 2.71. This is exactly equivalent to saying that we don’t reject if

2.71 is in the interval x ± 1.96 · 5/√12,   (4)

i.e. 2.71 is within 1.96 · 5/√12 of x.

Now we have magically arrived at our goal of an interval statistic estimating the unknown
mean. We can rewrite equation (4) as: at significance level 0.05 we don’t reject if

the interval [x − 1.96 · 5/√12, x + 1.96 · 5/√12] contains 2.71.   (5)

Thus, different values of x generate different intervals.


The interval in equation (5) is exactly the confidence interval defined in Equation (1). We
make a few observations about this confidence interval.

1. It only depends on x, so it is a statistic.

2. The significance level α = 0.05 means that, assuming the null hypothesis that µ = 2.71
is true, random data will lead us to reject the null hypothesis 5% of the time (a Type
I error).

3. Again assuming that µ = 2.71, then 5% of the time the confidence interval will not
contain 2.71, and conversely, 95% of the time it will contain 2.71.

The following figure illustrates how we don’t reject H0 if the confidence interval around it
contains µ0 and we reject H0 if the confidence interval doesn’t contain µ0. There is a lot in
the figure so we will list carefully what you are seeing:

1. We started with the figure from Example 5 which shows the null distribution for µ0 = 2.71
and the rejection and non-rejection regions.

2. We added two possible values of the statistic x, i.e. x1 and x2, and their confidence
intervals. Note that the width of each interval is exactly the same as the width of the
non-rejection region, since both use ± 1.96 · 5/√12.
The first value, x1, is in the non-rejection region and its interval includes the null hypothesis
µ0 = 2.71. This illustrates that not rejecting H0 corresponds to the confidence interval
containing µ0.

The second value, x2, is in the rejection region and its interval does not contain µ0. This
illustrates that rejecting H0 corresponds to the confidence interval not containing µ0.

(Figure: the null distribution N(2.71, 5²/12); the non-rejection region (blue) runs from
−0.12 to 5.54, shown with two values x1, x2 and their confidence intervals (green).)

We can still wring one more essential observation out of this example. Our choice of
null hypothesis µ = 2.71 was completely arbitrary. If we replace µ = 2.71 by any other
hypothesis µ = µ0 then the interval (5) will come out the same.

We call the interval (5) a 95% confidence interval because, assuming µ = µ0, on average it
will contain µ0 in 95% of random trials.

4.5 Explaining the definition part 3: translating a general non-rejection
region to a confidence interval

Note that the specific values of σ and n in the preceding example were of no particular
consequence, so they can be replaced by their symbols. In this way we can take Example 6
quickly through the same steps as Example 5.


In words, Equation (2) and the corresponding figure say that we don’t reject if

x is in the interval µ0 ± zα/2 · σ/√n.

This is exactly equivalent to saying that we don’t reject if

µ0 is in the interval x ± zα/2 · σ/√n. (6)

We can rewrite equation (6) as: at significance level α we don’t reject if

the interval [x − zα/2 · σ/√n, x + zα/2 · σ/√n] contains µ0. (7)

We call the interval (7) a (1 − α) confidence interval because, assuming µ = µ0, on average
it will contain µ0 in the fraction (1 − α) of random trials.

The following figure illustrates the point that µ0 being in the (1 − α) confidence interval
around x is equivalent to x being in the non-rejection region (at significance level α) for
H0 : µ = µ0.

(Figure: the null distribution N(µ0, σ²/n); the non-rejection region runs from
µ0 − zα/2 · σ/√n to µ0 + zα/2 · σ/√n, shown with two values x1, x2.)

x1 is in non-rejection region for µ0 ⇔ the confidence interval around x1 contains µ0.

4.6 Computational example

Example 9. Suppose the data 2.5, 5.5, 8.5, 11.5 was drawn from a N(µ, 102) distribution
with unknown mean µ.

(a) Compute the point estimate x for µ and the corresponding 50%, 80% and 95% confidence
intervals.

(b) Consider the null hypothesis µ = 1. Would you reject H0 at α = 0.05? α = 0.20?
α = 0.50? Do this in two ways: first by checking whether the hypothesized value of µ is in
the relevant confidence interval, and second by constructing a rejection region.

answer: (a) We compute that x = 7.0. The critical points are
z0.025 = qnorm(0.975) = 1.96, z0.1 = qnorm(0.9) = 1.28, z0.25 = qnorm(0.75) = 0.67.

Since n = 4 we have x ∼ N(µ, 102/4), i.e. σx = 5. So we have:
95% conf. interval = [x − z0.025σx, x + z0.025σx] = [7 − 1.96 · 5, 7 + 1.96 · 5] = [−2.8, 16.8]
80% conf. interval = [x − z0.1σx, x + z0.1σx] = [7 − 1.28 · 5, 7 + 1.28 · 5] = [ 0.6, 13.4]
50% conf. interval = [x − z0.25σx, x + z0.25σx] = [7 − 0.67 · 5, 7 + 0.67 · 5] = [ 3.65, 10.35]
Each of these intervals is a range estimate of µ. Notice that the higher the confidence level,
the wider the interval needs to be.

(b) Since µ = 1 is in the 95% and 80% confidence intervals, we would not reject the null
hypothesis at the α = 0.05 or α = 0.20 levels. Since µ = 1 is not in the 50% confidence
interval, we would reject H0 at the α = 0.5 level.



We construct the rejection regions using the same critical values as in part (a). The
difference is that rejection regions are intervals centered on the hypothesized value
µ0 = 1, while confidence intervals are centered on x. Here are the rejection regions.
α = 0.05 ⇒ (−∞, µ0 − z0.025σx] ∪ [µ0 + z0.025σx, ∞) = (−∞, −8.8] ∪ [10.8, ∞)
α = 0.20 ⇒ (−∞, µ0 − z0.1σx] ∪ [µ0 + z0.1σx, ∞) = (−∞, −5.4] ∪ [7.4, ∞)
α = 0.50 ⇒ (−∞, µ0 − z0.25σx] ∪ [µ0 + z0.25σx, ∞) = (−∞, −2.35] ∪ [4.35, ∞)
To do the NHST we must check whether or not x = 7 is in the rejection region.
α = 0.05: 7 < 10.8, so x is not in the rejection region. We do not reject the hypothesis
that µ = 1 at significance level 0.05.
α = 0.20: 7 < 7.4, so x is not in the rejection region. We do not reject the hypothesis
that µ = 1 at significance level 0.20.
α = 0.50: 7 > 4.35, so x is in the rejection region. We reject the hypothesis that µ = 1 at
significance level 0.50.
We get the same answers using either method.
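As a sketch, the computations of Example 9 can be reproduced with only the Python
standard library (the notes use R’s qnorm; statistics.NormalDist plays the same role here):

```python
from statistics import NormalDist, mean
from math import sqrt

data = [2.5, 5.5, 8.5, 11.5]
sigma = 10                        # known: the data is assumed N(mu, 10^2)
n, xbar = len(data), mean(data)   # n = 4, xbar = 7.0
sigma_xbar = sigma / sqrt(n)      # standard deviation of xbar: 5.0

for conf in (0.95, 0.80, 0.50):
    alpha = 1 - conf
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}, like R's qnorm
    lo, hi = xbar - z * sigma_xbar, xbar + z * sigma_xbar
    print(f"{conf:.0%} CI: [{lo:.2f}, {hi:.2f}]")

# Part (b): mu0 = 1 lies inside the 95% and 80% intervals (don't reject)
# but outside the 50% interval (reject H0 at alpha = 0.5).
```

Note that the exact quantiles (1.2816, 0.6745) differ slightly from the rounded 1.28 and
0.67 used above, so the 80% and 50% endpoints can differ in the second decimal place.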

5 t-confidence intervals for the mean

This will be nearly identical to normal confidence intervals. In this setting σ is not known,
so we have to make the following replacements.

1. Use sx = s/√n instead of σx = σ/√n. Here s is the sample standard deviation (so s2 is
the sample variance we used before in t-tests).

2. Use t critical values instead of z critical values.

5.1 Definition of t-confidence intervals for the mean

Definition: Suppose that x1, . . . , xn ∼ N(µ, σ2), where the values of the mean µ and the
standard deviation σ are both unknown. The (1 − α) confidence interval for µ is

[x − tα/2 · s/√n, x + tα/2 · s/√n], (8)

where tα/2 is the right critical value, i.e. P (T > tα/2) = α/2 for T ∼ t(n − 1), and s2 is
the sample variance of the data.

5.2 Construction of t confidence intervals

Suppose that n data points are drawn from N(µ, σ2) where µ and σ are unknown. We’ll
derive the t confidence interval following the same pattern as for the z confidence interval.

Under the null hypothesis µ = µ0, we have xi ∼ N(µ0, σ2). So the studentized mean follows
a Student t distribution with n − 1 degrees of freedom:

t = (x − µ0)/(s/√n) ∼ t(n − 1).


Let tα/2 be the critical value: P (T > tα/2) = α/2, where T ∼ t(n − 1). We know from
running one-sample t-tests that the non-rejection region is given by

|t| ≤ tα/2.

Using the definition of the t statistic to write the non-rejection region in terms of x we
get: at significance level α we don’t reject if

|x − µ0|/(s/√n) ≤ tα/2 ⇔ |x − µ0| ≤ tα/2 · s/√n.

Geometrically, the right hand side says that we don’t reject if

µ0 is within tα/2 · s/√n of x.

This is exactly equivalent to saying that we don’t reject if

the interval [x − tα/2 · s/√n, x + tα/2 · s/√n] contains µ0.

This interval is the confidence interval defined in (8).

Example 10. Suppose the data 2.5, 5.5, 8.5, 11.5 was drawn from a N(µ, σ2) distribution
with µ and σ both unknown.

Give interval estimates for µ by finding the 95%, 80% and 50% confidence intervals.

answer: By direct computation we have x = 7 and s2 = 15. The critical points, using
t(n − 1) = t(3), are
t0.025 = qt(0.975, 3) = 3.18, t0.1 = qt(0.9, 3) = 1.64, and t0.25 = qt(0.75, 3) = 0.76.

95% conf. interval = [x − t0.025 · s/√n, x + t0.025 · s/√n] = [0.84, 13.16]

80% conf. interval = [x − t0.1 · s/√n, x + t0.1 · s/√n] = [3.82, 10.18]

50% conf. interval = [x − t0.25 · s/√n, x + t0.25 · s/√n] = [5.53, 8.47]

All of these confidence intervals give interval estimates for the value of µ. Again, notice
that the higher the confidence level, the wider the corresponding interval.

6 Chi-square confidence intervals for the variance

We now turn to an interval estimate for the unknown variance.

Definition: Suppose the data x1, . . . , xn is drawn from N(µ, σ2) with mean µ and standard
deviation σ both unknown. The (1 − α) confidence interval for the variance σ2 is

[(n − 1)s2/cα/2, (n − 1)s2/c1−α/2]. (9)

Here cα/2 is the right critical value, i.e. P (X2 > cα/2) = α/2 for X2 ∼ χ2(n − 1), and s2 is
the sample variance of the data.



The derivation of this interval is nearly identical to the previous derivations, now starting
from the chi-square test for variance. The basic fact we need is that, for data drawn from
N(µ, σ2), the statistic

(n − 1)s2/σ2

follows a chi-square distribution with n − 1 degrees of freedom. So given the null hypothesis
H0 : σ = σ0, the test statistic is (n − 1)s2/σ0² and the non-rejection region at significance
level α is

c1−α/2 < (n − 1)s2/σ0² < cα/2.

A little algebra converts this to

(n − 1)s2/c1−α/2 > σ0² > (n − 1)s2/cα/2.

This says we don’t reject if

the interval [(n − 1)s2/cα/2, (n − 1)s2/c1−α/2] contains σ0².

This is our (1 − α) confidence interval.
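As a sketch, here is the χ2 interval for the data of Example 10. The χ2(3) critical values
are taken from standard chi-square tables (the Python standard library has no χ2 quantile
function):

```python
from statistics import variance

data = [2.5, 5.5, 8.5, 11.5]
n, s2 = len(data), variance(data)   # s^2 = 15, with n - 1 = 3 degrees of freedom

# Right critical values of chi-square(3) from standard tables:
c_upper = 9.348   # c_{0.025}: P(X2 > 9.348) = 0.025
c_lower = 0.216   # c_{0.975}: P(X2 > 0.216) = 0.975

lo = (n - 1) * s2 / c_upper   # the larger critical value gives the lower endpoint
hi = (n - 1) * s2 / c_lower
print(f"95% CI for sigma^2: [{lo:.1f}, {hi:.1f}]")
```

Note how wide the interval is: with only 4 data points, s2 pins down σ2 very loosely.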

We will continue our exploration of confidence intervals next class. In the meantime, truly
the best way to internalize the meaning of the confidence level is to experiment with the
confidence interval applet:

http://mathlets.org/mathlets/confidence-intervals/

MIT OpenCourseWare
https://ocw.mit.edu

18.05 Introduction to Probability and Statistics
Spring 2014

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.


Confidence Intervals: Three Views

Class 23, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to produce z, t and χ2 confidence intervals based on the corresponding
standardized statistics.

2. Be able to use a hypothesis test to construct a confidence interval for an unknown
parameter.

3. Refuse to answer questions that ask, in essence, ‘given a confidence interval what is
the probability or odds that it contains the true value of the unknown parameter?’

2 Introduction

Our approach to confidence intervals in the previous reading was a combination of
standardized statistics and hypothesis testing. Today we will consider each of these
perspectives separately, as well as introduce a third formal viewpoint. Each provides its
own insight.

1. Standardized statistic. Most confidence intervals are based on standardized statistics
with known distributions like z, t or χ2 . This provides a straightforward way to construct
and interpret confidence intervals as a point estimate plus or minus some error.

2. Hypothesis testing. Confidence intervals may also be constructed from hypothesis
tests. In cases where we don’t have a standardized statistic this method will still work. It
agrees with the standardized statistic approach in cases where they both apply.

This view connects the notions of significance level α for hypothesis testing and confidence
level 1 − α for confidence intervals; we will see that in both cases α is the probability of
making a ‘type 1’ error. This gives some insight into the use of the word confidence. This
view also helps to emphasize the frequentist nature of confidence intervals.

3. Formal. The formal definition of confidence intervals is perfectly precise and general.
In a mathematical sense it gives insight into the inner workings of confidence intervals.
However, because it is so general it sometimes leads to confidence intervals without useful
properties. We will not dwell on this approach. We offer it mainly for those who are
interested.

3 Confidence intervals via standardized statistics

The strategy here is essentially the same as in the previous reading. Assuming normal data
we have what we called standardized statistics like the standardized mean, Studentized
mean, and standardized variance. These statistics have well known distributions which
depend on hypothesized values of µ and σ. We then use algebra to produce confidence
intervals for µ or σ.


2 18.05 class 23, Confidence Intervals: Three Views, Spring 2014

Don’t let the algebraic details distract you from the essentially simple idea underlying
confidence intervals: we start with a standardized statistic (e.g., z, t or χ2) and use some
algebra to get an interval that depends only on the data and known parameters.

3.1 z-confidence intervals for µ: normal data with known σ

z-confidence intervals for the mean of normal data are based on the standardized mean, i.e.
the z-statistic. We start with n independent normal samples

x1, x2, . . . , xn ∼ N(µ, σ2).

We assume that µ is the unknown parameter of interest and σ is known.

We know that the standardized mean is standard normal:

z = (x − µ)/(σ/√n) ∼ N(0, 1).

For the standard normal critical value zα/2 we have P (−zα/2 < Z < zα/2) = 1 − α. Thus,

P (−zα/2 < (x − µ)/(σ/√n) < zα/2 | µ) = 1 − α.

A little bit of algebra puts this in the form of an interval around µ:

P (x − zα/2 · σ/√n < µ < x + zα/2 · σ/√n | µ) = 1 − α.

We can emphasize that the interval depends only on the statistic x and the known value σ
by writing this as

P ([x − zα/2 · σ/√n, x + zα/2 · σ/√n] contains µ | µ) = 1 − α.

This is the (1 − α) z-confidence interval for µ. We often write it using the shorthand

x ± zα/2 · σ/√n.

Think of it as x ± error.

Make sure you notice that the probabilities are conditioned on µ. As with all frequentist
statistics, we have to fix hypothesized values of the parameters in order to compute
probabilities.

3.2 t-confidence intervals for µ: normal data with unknown µ and σ

t-confidence intervals for the mean of normal data are based on the Studentized mean, i.e.
the t-statistic. Again we have x1, x2, . . . , xn ∼ N(µ, σ2), but now we assume both µ and σ
are unknown. We know that the Studentized mean follows a Student t distribution with
n − 1 degrees of freedom. That is,

t = (x − µ)/(s/√n) ∼ t(n − 1),

where s2 is the sample variance. Now all we have to do is replace the standardized mean
by the Studentized mean and the same logic we used for z gives us the t-confidence
interval: start with

P (−tα/2 < (x − µ)/(s/√n) < tα/2 | µ) = 1 − α.

A little bit of algebra isolates µ in the middle of an interval:

P (x − tα/2 · s/√n < µ < x + tα/2 · s/√n | µ) = 1 − α.

We can emphasize that the interval depends only on the statistics x and s by writing this as

P ([x − tα/2 · s/√n, x + tα/2 · s/√n] contains µ | µ) = 1 − α.

This is the (1 − α) t-confidence interval for µ. We often write it using the shorthand

x ± tα/2 · s/√n.

Think of it as x ± error.
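The conditional probability statements above have a direct frequentist reading: if we
repeatedly generate data whose true mean is µ, about (1 − α) of the resulting intervals will
contain µ. A quick simulation sketch (the values of µ, σ and n are arbitrary choices for
illustration):

```python
import random
from statistics import NormalDist, mean
from math import sqrt

random.seed(1)
mu, sigma, n, alpha = 5.0, 2.0, 10, 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)
half = z * sigma / sqrt(n)        # half-width of the z-interval

trials = 20000
# The interval contains mu exactly when |xbar - mu| <= half.
hits = sum(
    1
    for _ in range(trials)
    if abs(mean(random.gauss(mu, sigma) for _ in range(n)) - mu) <= half
)
print(hits / trials)   # close to 1 - alpha = 0.95
```

This is the same experiment the confidence interval applet runs interactively.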
3.3 χ2-confidence intervals for σ2: normal data with unknown µ and σ

You guessed it: χ2-confidence intervals for the variance of normal data are based on the
standardized variance, i.e. the χ2-statistic. We follow the same logic as above to get a
χ2-confidence interval for σ2. Because this is the third time through it we’ll move a little
more quickly.

We assume we have n independent normal samples: x1, x2, . . . , xn ∼ N(µ, σ2). We assume
that µ and σ are both unknown. The standardized variance is

X2 = (n − 1)s2/σ2 ∼ χ2(n − 1).

We know that the X2 statistic follows a χ2 distribution with n − 1 degrees of freedom. For
Z and t we used, without comment, the symmetry of the distributions to replace z1−α/2 by
−zα/2 and t1−α/2 by −tα/2. Because the χ2 distribution is not symmetric we need to be
explicit about the critical values on both the left and the right. That is,

P (c1−α/2 < X2 < cα/2) = 1 − α,

where cα/2 and c1−α/2 are right tail critical values. Thus,

P (c1−α/2 < (n − 1)s2/σ2 < cα/2 | σ) = 1 − α.

A little bit of algebra puts this in the form of an interval around σ2:

P ((n − 1)s2/cα/2 < σ2 < (n − 1)s2/c1−α/2 | σ) = 1 − α.

We can emphasize that the interval depends only on the statistic s2 by writing this as

P ([(n − 1)s2/cα/2, (n − 1)s2/c1−α/2] contains σ2 | σ2) = 1 − α.

This is the (1 − α) χ2-confidence interval for σ2.

4 Confidence intervals via hypothesis testing

Suppose we have data drawn from a distribution with a parameter θ whose value is
unknown. A significance test for the value θ has the following short description.

1. Set the null hypothesis H0 : θ = θ0 for some special value θ0, e.g. we often have
H0 : θ = 0.

2. Use the data to compute the value of a test statistic, call it x.

3.
If x is far enough into the tail of the null distribution (the distribution assuming the null
hypothesis) then we reject H0.

In the case where there is no special value to test we may still want to estimate θ. This is
the reverse of significance testing; rather than seeing if we should reject a specific value of
θ because it doesn’t fit the data, we want to find the range of values of θ that do, in some
sense, fit the data. This gives us the following definitions.

Definition. Given a value x of the test statistic, the (1 − α) confidence interval contains
all values θ0 which are not rejected (at significance level α) when they are the null
hypothesis.

Definition. A type 1 CI error occurs when the confidence interval does not contain the
true value of θ. For a (1 − α) confidence interval the type 1 CI error rate is α.

Example 1. Here is an example relating confidence intervals and hypothesis tests. Suppose
data x is drawn from a binomial(12, θ) distribution with θ unknown. Let α = 0.1 and
create the (1 − α) = 90% confidence interval for each possible value of x.

Our strategy is to look at one possible value of θ at a time and choose rejection regions for
a significance test with α = 0.1. Once this is done, we will know, for each value of x, which
values of θ are not rejected, i.e. the confidence interval associated with x.

To start we set up a likelihood table for binomial(12, θ) in Table 1. Each row shows the
probabilities p(x|θ) for one value of θ. To keep the size manageable we only show θ in
increments of 0.1.
 θ\x    0    1    2    3    4    5    6    7    8    9   10   11   12
 1.0  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00
 0.9  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.09 0.23 0.38 0.28
 0.8  0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.05 0.13 0.24 0.28 0.21 0.07
 0.7  0.00 0.00 0.00 0.00 0.01 0.03 0.08 0.16 0.23 0.24 0.17 0.07 0.01
 0.6  0.00 0.00 0.00 0.01 0.04 0.10 0.18 0.23 0.21 0.14 0.06 0.02 0.00
 0.5  0.00 0.00 0.02 0.05 0.12 0.19 0.23 0.19 0.12 0.05 0.02 0.00 0.00
 0.4  0.00 0.02 0.06 0.14 0.21 0.23 0.18 0.10 0.04 0.01 0.00 0.00 0.00
 0.3  0.01 0.07 0.17 0.24 0.23 0.16 0.08 0.03 0.01 0.00 0.00 0.00 0.00
 0.2  0.07 0.21 0.28 0.24 0.13 0.05 0.02 0.00 0.00 0.00 0.00 0.00 0.00
 0.1  0.28 0.38 0.23 0.09 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
 0.0  1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Table 1. Likelihood table for binomial(12, θ)

Tables 2-4 below show the rejection region (in orange) and non-rejection region (in blue)
for the various values of θ. To emphasize the row-by-row nature of the process, Table 2
just shows these regions for θ = 1.0, then Table 3 adds in regions for θ = 0.9, and Table 4
shows them for all the values of θ. Immediately following the tables we give a detailed
explanation of how the rejection/non-rejection regions were chosen.
(Tables 2 and 3 show the same probabilities as Table 1, with the rejection region (orange)
and non-rejection region (blue) marked row by row: Table 2 marks them for θ = 1.0 only,
with significance 0.000; Table 3 marks them for θ = 1.0 and θ = 0.9, with significances
0.000 and 0.026.)

Table 2. Likelihood table for binomial(12, θ) with rejection/non-rejection regions for θ = 1.0

Table 3. Likelihood table with rejection/non-rejection regions shown for θ = 1.0 and 0.9

 θ\x    0    1    2    3    4    5    6    7    8    9   10   11   12   significance
 1.0  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00    0.000
 0.9  0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.09 0.23 0.38 0.28    0.026
 0.8  0.00 0.00 0.00 0.00 0.00 0.00 0.02 0.05 0.13 0.24 0.28 0.21 0.07    0.073
 0.7  0.00 0.00 0.00 0.00 0.01 0.03 0.08 0.16 0.23 0.24 0.17 0.07 0.01    0.052
 0.6  0.00 0.00 0.00 0.01 0.04 0.10 0.18 0.23 0.21 0.14 0.06 0.02 0.00    0.077
 0.5  0.00 0.00 0.02 0.05 0.12 0.19 0.23 0.19 0.12 0.05 0.02 0.00 0.00    0.092
 0.4  0.00 0.02 0.06 0.14 0.21 0.23 0.18 0.10 0.04 0.01 0.00 0.00 0.00    0.077
 0.3  0.01 0.07 0.17 0.24 0.23 0.16 0.08 0.03 0.01 0.00 0.00 0.00 0.00    0.052
 0.2  0.07 0.21 0.28 0.24 0.13 0.05 0.02 0.00 0.00 0.00 0.00 0.00 0.00    0.073
 0.1  0.28 0.38 0.23 0.09 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00    0.026
 0.0  1.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00    0.000

Table 4. Likelihood table with rejection/non-rejection regions for θ = 0.0 to 1.0

Choosing the rejection and non-rejection regions in the tables

The first problem we confront is how exactly to choose the rejection region. We used two
rules:

1. The total probability of the rejection region, i.e. the significance, should be less than or
equal to 0.1. (Since we have a discrete distribution it is impossible to make the significance
exactly 0.1.)

2. We build the rejection region by choosing values of x one at a time, always picking the
unused value with the smallest probability. We stop when the next value would make the
significance more than 0.1.

There are other ways to choose the rejection region which would result in slight differences.
Our method is one reasonable way.

Table 2 shows the rejection (orange) and non-rejection (blue) regions for θ = 1.0. This is a
special case because most of the probabilities in this row are 0.0.
We’ll move right on to the next table and step through the process for that. In Table 3,
let’s walk through the steps used to find these regions for θ = 0.9.

• The smallest probability is when x = 0, so x = 0 is in the rejection region.

• The next smallest is when x = 1, so x = 1 is in the rejection region.

• We continue with x = 2, . . . , 8. At this point the total probability in the rejection
region is 0.026.

• The next smallest probability is when x = 9. Adding this probability (0.09) to 0.026
would put the total probability over 0.1. So we leave x = 9 out of the rejection region and
stop the process.

Note three things for the θ = 0.9 row:

1. None of the probabilities in this row are truly zero, though some are small enough that
they equal 0 to 2 decimal places.

2. We show the significance for this value of θ in the right hand margin. More precisely,
we show the significance level of the NHST with null hypothesis θ = 0.9 and the given
rejection region.

3. The rejection region consists of values of x. When we say the rejection region is shown
in orange we really mean the rejection region contains the values of x corresponding to the
probabilities highlighted in orange.

Think: Look back at the θ = 1.0 row and make sure you understand why the rejection
region is x = 0, . . . , 11 and the significance is 0.000.

Example 2. Using Table 4 determine the 0.90 confidence interval when x = 8.

answer: The 90% confidence interval consists of all those θ that would not be rejected by
an α = 0.1 hypothesis test when x = 8. Looking at the table, the blue (non-rejected)
entries in the column x = 8 correspond to 0.5 ≤ θ ≤ 0.8: the confidence interval is [0.5, 0.8].

Remark: The point of this example is to show how confidence intervals and hypothesis
tests are related. Since Table 4 has only finitely many values of θ, our answer is close but
not exact. Using a computer we could look at many more values of θ.
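As a sketch of that computation (in Python rather than the R used for the notes), the
following inverts the greedy rejection-region construction described above; on the coarse
grid θ = 0, 0.1, . . . , 1 it reproduces Table 4’s answer for x = 8:

```python
from math import comb

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def rejection_region(n, theta, alpha):
    """Greedily add the least likely outcomes until adding the next one
    would push the total probability (the significance) above alpha."""
    probs = {x: binom_pmf(x, n, theta) for x in range(n + 1)}
    region, total = set(), 0.0
    while probs:
        x = min(probs, key=lambda k: (probs[k], k))  # smallest prob; ties -> smaller x
        if total + probs[x] > alpha:
            break
        total += probs.pop(x)
        region.add(x)
    return region

def confidence_set(x, n, alpha, thetas):
    """All theta whose level-alpha test does not reject the observed x."""
    return [th for th in thetas if x not in rejection_region(n, th, alpha)]

grid = [round(0.1 * k, 1) for k in range(11)]
print(confidence_set(8, 12, 0.1, grid))   # [0.5, 0.6, 0.7, 0.8]
```

The tie-breaking rule (smaller x first) is one arbitrary choice; as the notes say, other
reasonable choices of rejection region give slightly different intervals. Running the same
function over a fine grid of θ values gives the finer interval quoted below.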
For this problem we used R to find that, correct to 2 decimal places, the confidence
interval is [0.42, 0.85].

Example 3. Explain why the expected type 1 CI error rate will be at most 0.092, provided
that the true value of θ is in the table.

answer: The short answer is that this is the maximum significance for any θ in Table 4.

Expanding on that slightly: we make a type 1 CI error if the confidence interval does not
contain the true value of θ, call it θtrue. This happens exactly when the data x is in the
rejection region for θtrue. The probability of this happening is the significance for θtrue and
this is at most 0.092.

Remark: The point of this example is to show how confidence level, type 1 CI error rate
and significance for each hypothesis are related. As in the previous example, we can use R
to compute the significance for many more values of θ. When we do this we find that the
maximum significance for any θ is 0.1, occurring when θ ≈ 0.0452.

Summary notes:

1. We start with a test statistic x. The confidence interval is random because it depends
on x.

2. For each hypothesized value of θ we make a significance test with significance level α by
choosing rejection regions.

3. For a specific value of x the associated confidence interval for θ consists of all θ that
aren’t rejected for that value, i.e. all θ that have x in their non-rejection regions.

4. Because the distribution is discrete we can’t always achieve the exact significance level,
so our confidence interval is really an ‘at least 90% confidence interval’.

Example 4. Open the applet http://mathlets.org/mathlets/confidence-intervals/.
We want you to play with the applet to understand the random nature of confidence
intervals and the meaning of confidence as (1 − type 1 CI error rate).

(a) Read the help. It is short and will help orient you in the applet. Play with different
settings of the parameters to see how they affect the size of the confidence intervals.

(b) Set the number of trials to N = 1.
Click the ‘Run N trials’ button repeatedly and see that each time data is generated the
confidence intervals jump around.

(c) Now set the confidence level to c = 0.5. As you click the ‘Run N trials’ button you
should see that about 50% of the confidence intervals include the true value of µ. The
‘Z correct’ and ‘t correct’ values should change accordingly.

(d) Now set the number of trials to N = 100 with c = 0.8. The ‘Run N trials’ button will
now run 100 trials at a time. Only the last confidence interval will be shown in the graph,
but the trials all run and the ‘percent correct’ statistics will be updated based on all 100
trials. Click the run trials button repeatedly. Watch the correct rates start to converge to
the confidence level. To converge even faster, set N = 1000.

5 Formal view of confidence intervals

Recall: An interval statistic is an interval Ix computed from data x. An interval is
determined by its lower and upper bounds, and these are random because x is random.
We suppose that x is drawn from a distribution with pdf f(x|θ) where the parameter θ is
unknown.

Definition: A (1 − α) confidence interval for θ is an interval statistic Ix such that

P (Ix contains θ0 | θ = θ0) = 1 − α

for all possible values of θ0.

We wish this was simpler, but a definition is a definition and this definition is one way to
weigh the evidence provided by the data x. Let’s unpack it a bit. The confidence level of
an interval statistic is a probability concerning a random interval and a hypothesized value
θ0 for the unknown parameter. Precisely, it is the probability that the random interval
(computed from random data) contains the value θ0, given that the model parameter truly
is θ0.
Since the true value of θ is unknown, the frequentist statistician defines, say, 95%
confidence intervals so that the 0.95 probability is valid no matter which hypothesized
value of the parameter is actually true.

6 Comparison with Bayesian probability intervals

Confidence intervals are a frequentist notion, and as we’ve repeated many times,
frequentists don’t assign probabilities to hypotheses, e.g. the value of an unknown
parameter. Rather they compute likelihoods; that is, probabilities about data or associated
statistics given a hypothesis (note the condition θ = θ0 in the formal view of confidence
intervals). Note that the construction of confidence intervals proceeds entirely from the
full likelihood table.

In contrast, Bayesian posterior probability intervals are truly the probability that the value
of the unknown parameter lies in the reported range. We add the usual caveat that this
depends on the specific choice of a (possibly subjective) Bayesian prior.

This distinction between the two is subtle because Bayesian posterior probability intervals
and frequentist confidence intervals share the following properties:

1. They start from a model f(x|θ) for observed data x with unknown parameter θ.

2. Given data x, they give an interval I(x) specifying a range of values for θ.

3. They come with a number (say 0.95) that is the probability of something.

In practice, many people misinterpret confidence intervals as Bayesian probability
intervals, forgetting that frequentists never place probabilities on hypotheses (this is
analogous to mistaking the p-value in NHST for the probability that H0 is false). The
harm of this misinterpretation is somewhat mitigated by the fact that, given enough data
and a reasonable prior, Bayesian and frequentist intervals often work out to be quite
similar.
For an amusing example illustrating how they can be quite different, see the first answer
here (involving chocolate chip cookies!):

http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval

This example uses the formal definitions and is really about confidence sets instead of
confidence intervals.

Confidence Intervals for the Mean of Non-normal Data
Class 23, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to derive the formula for conservative normal confidence intervals for the
proportion θ in Bernoulli data.

2. Be able to compute rule-of-thumb 95% confidence intervals for the proportion θ of a
Bernoulli distribution.

3. Be able to compute large sample confidence intervals for the mean of a general
distribution.

2 Introduction

So far, we have focused on constructing confidence intervals for data drawn from a normal
distribution. We’ll now switch gears and learn about confidence intervals for the mean
when the data is not necessarily normal.

We will first look carefully at estimating the probability θ of success when the data is
drawn from a Bernoulli(θ) distribution; recall that θ is also the mean of the Bernoulli
distribution. Then we will consider the case of a large sample from an unknown
distribution; in this case we can appeal to the central limit theorem to justify the use of
z-confidence intervals.
3 Bernoulli data and polling

One common use of confidence intervals is for estimating the proportion θ in a Bernoulli(θ)
distribution. For example, suppose we want to use a political poll to estimate the
proportion of the population that supports candidate A, or equivalently the probability θ
that a random person supports candidate A. In this case we have a simple rule-of-thumb
that allows us to quickly compute a confidence interval.

3.1 Conservative normal confidence intervals

Suppose we have i.i.d. data x1, x2, . . . , xn all drawn from a Bernoulli(θ) distribution. Then
a conservative normal (1 − α) confidence interval for θ is given by

x̄ ± zα/2 · 1/(2√n). (1)

The proof given below uses the central limit theorem and the observation that
σ = √(θ(1 − θ)) ≤ 1/2.

You will also see in the derivation below that this formula is conservative, providing an
‘at least (1 − α)’ confidence interval.

Example 1. A pollster asks 196 people if they prefer candidate A to candidate B and
finds that 120 prefer A and 76 prefer B. Find the 95% conservative normal confidence
interval for θ, the proportion of the population that prefers A.

answer: We have x̄ = 120/196 ≈ 0.612, α = 0.05 and z0.025 = 1.96. The formula says a
95% confidence interval is

I ≈ 0.612 ± 1.96/(2 · 14) = 0.612 ± 0.07.

3.2 Proof of Formula 1

The proof of Formula 1 will rely on the following fact.

Fact. The standard deviation of a Bernoulli(θ) distribution is at most 0.5.

Proof of fact: Let’s denote this standard deviation by σθ to emphasize its dependence on
θ. The variance is then σθ² = θ(1 − θ). It’s easy to see using calculus or by graphing this
parabola that the maximum occurs when θ = 1/2. Therefore the maximum variance is
1/4, which implies that the standard deviation σθ is at most √(1/4) = 1/2.

Proof of formula (1).
The proof relies on the central limit theorem, which says that (for large n) the distribution of x̄ is approximately normal with mean θ and standard deviation σθ/√n. For normal data we have the (1 − α) z-confidence interval

x̄ ± zα/2 · σθ/√n.

The trick now is to replace σθ by 1/2: since σθ ≤ 1/2, the resulting interval around x̄

x̄ ± zα/2 · 1/(2√n)

is always at least as wide as the interval using ± zα/2 · σθ/√n. A wider interval is more likely to contain the true value of θ, so we have a 'conservative' (1 − α) confidence interval for θ.

1 Again, we call this conservative because 1/(2√n) overestimates the standard deviation of x̄, resulting in a wider interval than is necessary to achieve a (1 − α) confidence level.

3.3 How political polls are reported

Political polls are often reported as a value with a margin-of-error. For example you might hear 52% favor candidate A with a margin-of-error of ±5%. The precise meaning of this is that if θ is the proportion of the population that supports A, then the point estimate for θ is 52% and the 95% confidence interval is 52% ± 5%. Notice that reporters of polls in the news do not mention the 95% confidence. You just have to know that that's what pollsters do.

The 95% rule-of-thumb confidence interval. Recall that the (1 − α) conservative normal confidence interval is

x̄ ± zα/2 · 1/(2√n).

If we use the standard approximation z.025 = 2 (instead of 1.96) we get the rule-of-thumb 95% confidence interval for θ:

x̄ ± 1/√n.

Example 2. Polling. Suppose there will soon be a local election between candidate A and candidate B. Suppose that the fraction of the voting population that supports A is θ. Two polling organizations ask voters who they prefer.

1. The firm of Fast and First polls 40 random voters and finds 22 support A.

2. The firm of Quick but Cautious polls 400 random voters and finds 190 support A.
Find the point estimates and 95% rule-of-thumb confidence intervals for each poll. Explain how the statistics reflect the intuition that the poll of 400 voters is more accurate.

answer: For poll 1 we have

Point estimate: x̄ = 22/40 = 0.55
Confidence interval: x̄ ± 1/√n = 0.55 ± 1/√40 = 0.55 ± 0.16 = 55% ± 16%.

For poll 2 we have

Point estimate: x̄ = 190/400 = 0.475
Confidence interval: x̄ ± 1/√n = 0.475 ± 1/√400 = 0.475 ± 0.05 = 47.5% ± 5%.

The greater accuracy of the poll of 400 voters is reflected in the smaller margin of error, i.e. 5% for the poll of 400 voters vs. 16% for the poll of 40 voters.

3.4 Other binomial proportion confidence intervals

There are many methods of producing confidence intervals for the proportion p of a binomial(n, p) distribution. For a number of other common approaches, see:
http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval

4 Large sample confidence intervals

One typical goal in statistics is to estimate the mean of a distribution. When the data follows a normal distribution we could use confidence intervals based on standardized statistics to estimate the mean.

But suppose the data x1, x2, . . . , xn is drawn from a distribution with pmf or pdf f(x) that may not be normal or even parametric. If the distribution has finite mean and variance and if n is sufficiently large, then the following version of the central limit theorem shows we can still use a standardized statistic.

Central Limit Theorem: For large n, the sampling distribution of the studentized mean is approximately standard normal:

(x̄ − µ)/(s/√n) ≈ N(0, 1).

So for large n the (1 − α) confidence interval for µ is approximately

[x̄ − zα/2 · s/√n, x̄ + zα/2 · s/√n],

where zα/2 is the α/2 critical value for N(0, 1). This is called the large sample confidence interval.
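All three intervals we have met so far (conservative, rule-of-thumb, and large sample) are one-line computations. The following Python sketch checks the numbers from Examples 1 and 2; the helper names are our own and the course itself uses R, so treat this as an illustration rather than course code.

```python
import math
import statistics

def conservative_ci(successes, n, z=1.96):
    """Formula (1): xbar +/- z/(2*sqrt(n)), using sigma_theta <= 1/2."""
    xbar = successes / n
    half = z / (2 * math.sqrt(n))
    return xbar - half, xbar + half

def rule_of_thumb_ci(successes, n):
    """95% rule-of-thumb interval: xbar +/- 1/sqrt(n) (taking z.025 = 2)."""
    xbar = successes / n
    return xbar - 1 / math.sqrt(n), xbar + 1 / math.sqrt(n)

def large_sample_ci(data, z=1.96):
    """Large sample interval: xbar +/- z * s/sqrt(n)."""
    xbar = statistics.fmean(data)
    s = statistics.stdev(data)
    half = z * s / math.sqrt(len(data))
    return xbar - half, xbar + half

print(conservative_ci(120, 196))   # Example 1: 0.612 +/- 0.07
print(rule_of_thumb_ci(22, 40))    # Example 2, poll 1: 0.55 +/- 0.16
print(rule_of_thumb_ci(190, 400))  # Example 2, poll 2: 0.475 +/- 0.05
```

Note how the margin of error depends only on n, which is why the rule of thumb is so convenient for polls.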
Example 3. How large must n be?

Recall that a type 1 CI error occurs when the confidence interval does not contain the true value of the parameter, in this case the mean. Let's call the value (1 − α) the nominal confidence level. We say nominal because unless n is large we shouldn't expect the true type 1 CI error rate to be α.

We can run numerical simulations to approximate the true confidence level. We expect that as n gets larger the true confidence level of the large sample confidence interval will converge to the nominal value.

We ran such simulations for x drawn from the exponential distribution exp(1) (which is far from normal). For several values of n and nominal confidence level c we ran 100,000 trials. Each trial consisted of the following steps:

1. Draw n samples from exp(1).

2. Compute the sample mean x̄ and sample standard deviation s.

3. Construct the large sample c confidence interval: x̄ ± zα/2 · s/√n.

4. Check for a type 1 CI error, i.e. see if the true mean µ = 1 is not in the interval.

With 100,000 trials, the empirical confidence level should closely approximate the true level. For comparison we ran the same tests on data drawn from a standard normal distribution. Here are the results.

Simulations for exp(1):

  n     nominal conf. 1 − α     simulated conf.
  20          0.95                  0.905
  20          0.90                  0.856
  20          0.80                  0.762
  50          0.95                  0.930
  50          0.90                  0.879
  50          0.80                  0.784
  100         0.95                  0.938
  100         0.90                  0.889
  100         0.80                  0.792
  400         0.95                  0.947
  400         0.90                  0.897
  400         0.80                  0.798

Simulations for N(0, 1):

  n     nominal conf. 1 − α     simulated conf.
  20          0.95                  0.936
  20          0.90                  0.885
  20          0.80                  0.785
  50          0.95                  0.944
  50          0.90                  0.894
  50          0.80                  0.796
  100         0.95                  0.947
  100         0.90                  0.896
  100         0.80                  0.797
  400         0.95                  0.949
  400         0.90                  0.898
  400         0.80                  0.798

For the exp(1) distribution we see that for n = 20 the simulated confidence of the large sample confidence interval is less than the nominal confidence 1 − α. But for n = 100 the simulated confidence and nominal confidence are quite close.
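A scaled-down version of the simulation above is easy to write. This Python sketch (the notes used R and 100,000 trials per setting; we use the standard library and only 2,000 trials, so its estimates are rougher) checks the coverage for exp(1) data:

```python
import math
import random
import statistics

def simulated_confidence(n, z, ntrials=2000, seed=1):
    """Estimate the true confidence level of the large sample z-interval
    for the mean of exp(1) data, whose true mean is mu = 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(ntrials):
        data = [rng.expovariate(1.0) for _ in range(n)]
        xbar = statistics.fmean(data)
        s = statistics.stdev(data)
        half = z * s / math.sqrt(n)
        if xbar - half <= 1.0 <= xbar + half:
            hits += 1
    return hits / ntrials

# Nominal 95% interval with n = 20: the simulated coverage comes out
# noticeably below 0.95, consistent with the table above.
print(simulated_confidence(20, 1.96))
```

Increasing n (or the number of trials) in this sketch reproduces the convergence toward the nominal level seen in the tables.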
So for exp(1), n somewhere between 50 and 100 is large enough for most purposes.

Think: For n = 20, why is the simulated confidence for the N(0, 1) distribution smaller than the nominal confidence?

This is because we used zα/2 instead of tα/2. For large n these are quite close, but for n = 20 there is a noticeable difference, e.g. z.025 = 1.96 and t.025 = 2.09.

Bootstrap confidence intervals
Class 24, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to construct and sample from the empirical distribution of data.

2. Be able to explain the bootstrap principle.

3. Be able to design and run an empirical bootstrap to compute confidence intervals.

4. Be able to design and run a parametric bootstrap to compute confidence intervals.

2 Introduction

The empirical bootstrap is a statistical technique popularized by Bradley Efron in 1979. Though remarkably simple to implement, the bootstrap would not be feasible without modern computing power. The key idea is to perform computations on the data itself to estimate the variation of statistics that are themselves computed from the same data. That is, the data is 'pulling itself up by its own bootstrap.' (A google search of 'by one's own bootstraps' will give you the etymology of this metaphor.) Such techniques existed before 1979, but Efron widened their applicability and demonstrated how to implement the bootstrap effectively using computers. He also coined the term 'bootstrap'.1

Our main application of the bootstrap will be to estimate the variation of point estimates; that is, to estimate confidence intervals. An example will make our goal clear.

Example 1. Suppose we have data x1, x2, . . .
, xn. If we knew the data was drawn from N(µ, σ²) with the unknown mean µ and known variance σ², then we have seen that

[x̄ − 1.96 · σ/√n, x̄ + 1.96 · σ/√n]

is a 95% confidence interval for µ.

Now suppose the data is drawn from some completely unknown distribution. To have a name we'll call this distribution F and its (unknown) mean µ. We can still use the sample mean x̄ as a point estimate of µ. But how can we find a confidence interval for µ around x̄? Our answer will be to use the bootstrap!

In fact, we'll see that the bootstrap handles other statistics as easily as it handles the mean. For example: the median, other percentiles or the trimmed mean. These are statistics where, even for normal distributions, it can be difficult to compute a confidence interval from theory alone.

1 Paraphrased from Dekking et al., A Modern Introduction to Probability and Statistics, Springer, 2005, page 275.

3 Sampling

In statistics to sample from a set is to choose elements from that set. In a random sample the elements are chosen randomly. There are two common methods for random sampling.

Sampling without replacement. Suppose we draw 10 cards at random from a deck of 52 cards without putting any of the cards back into the deck between draws. This is called sampling without replacement or simple random sampling. With this method of sampling our 10 card sample will have no duplicate cards.

Sampling with replacement. Now suppose we draw 10 cards at random from the deck, but after each draw we put the card back in the deck and shuffle the cards. This is called sampling with replacement. With this method, the 10 card sample might have duplicates. It's even possible that we would draw the 6 of hearts all 10 times.

Think: What's the probability of drawing the 6 of hearts 10 times in a row?

Example 2. We can view rolling an 8-sided die repeatedly as sampling with replacement from the set {1,2,3,4,5,6,7,8}.
Since each number is equally likely, we say we are sampling uniformly from the data. There is a subtlety here: each data point is equally probable, but if there are repeated values within the data, those values will have a higher probability of being chosen. The next example illustrates this.

Note. In practice if we take a small number from a very large set then it doesn't matter whether we sample with or without replacement. For example, if we randomly sample 400 out of 300 million people in the U.S. then it is so unlikely that the same person will be picked twice that there is no real difference between sampling with or without replacement.

4 The empirical distribution of data

The empirical distribution of data is simply the distribution that you see in the data. Let's illustrate this with an example.

Example 3. Suppose we roll an 8-sided die 10 times and get the following data, written in increasing order:

1, 1, 2, 3, 3, 3, 3, 4, 7, 7

Imagine writing these values on 10 slips of paper, putting them in a hat and drawing one at random. Then, for example, the probability of drawing a 3 is 4/10 and the probability of drawing a 4 is 1/10. The full empirical distribution can be put in a probability table:

  value x    1      2      3      4      7
  p(x)      2/10   1/10   4/10   1/10   2/10

Notation. If we label the true distribution the data is drawn from as F, then we'll label the empirical distribution of the data as F*. If we have enough data then the law of large numbers tells us that F* should be a good approximation of F.

Example 4. In the dice example just above, the true and empirical distributions are:

  value x           1      2      3      4      5     6     7      8
  true p(x)        1/8    1/8    1/8    1/8    1/8   1/8   1/8    1/8
  empirical p(x)   2/10   1/10   4/10   1/10    0     0    2/10    0

The true distribution F and the empirical distribution F* of the 8-sided die.

Because F* is derived strictly from data we call it the empirical distribution of the data. We will also call it the resampling distribution.
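Building the empirical distribution F* from data takes only a few lines. Here is a Python sketch for the data of Example 3 (the course itself uses R; this is just an illustration):

```python
from collections import Counter
from fractions import Fraction

data = [1, 1, 2, 3, 3, 3, 3, 4, 7, 7]

# Empirical distribution F*: each observed value gets probability
# (number of occurrences) / (sample size).
counts = Counter(data)
empirical = {v: Fraction(c, len(data)) for v, c in sorted(counts.items())}

for v, p in empirical.items():
    print(v, p)  # e.g. value 3 has probability 4/10 = 2/5
```

Values never observed (such as 5, 6, and 8 here) simply get probability 0 under F*.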
Notice that we always know F* explicitly. In particular the expected value of F* is just the sample mean x̄.

5 Resampling

The empirical bootstrap proceeds by resampling from the data. We continue the dice example above.

Example 5. Suppose we have 10 data points, given in increasing order:

1, 1, 2, 3, 3, 3, 3, 4, 7, 7

We view this as a sample taken from some underlying distribution. To resample is to sample with replacement from the empirical distribution, e.g. put these 10 numbers in a hat and draw one at random. Then put the number back in the hat and draw again. You draw as many numbers as the desired size of the resample.

To get us a little closer to implementing this on a computer we rephrase this in the following way. Label the 10 data points x1, x2, . . . , x10. To resample is to draw a number j from the uniform distribution on {1, 2, . . . , 10} and take xj as our resampled value. In this case we could do so by rolling a 10-sided die. For example, if we roll a 6 then our resampled value is 3, the 6th element in our list.

If we want a resampled data set of size 5, then we roll the 10-sided die 5 times and choose the corresponding elements from the list of data. If the 5 rolls are 5, 3, 6, 6, 1 then the resample is 3, 2, 3, 3, 1.

Notes: 1. Because we are sampling with replacement, the same data point can appear multiple times when we resample.

2. Also because we are sampling with replacement, we can have a resample data set of any size we want, e.g. we could resample 1000 times.

Of course, in practice one uses a software package like R to do the resampling.

5.1 Star notation

If we have sample data of size n

x1, x2, . . . , xn

then we denote a resample of size m by adding a star to the symbols:

x1*, x2*, . . . , xm*

Similarly, just as x̄ is the mean of the original data, we write x̄* for the mean of the resampled data.

6 The empirical bootstrap

Suppose we have n data points x1, x2, . . .
, xn drawn from a distribution F. An empirical bootstrap sample is a resample of the same size n:

x1*, x2*, . . . , xn*.

You should think of the latter as a sample of size n drawn from the empirical distribution F*. For any statistic v computed from the original sample data, we can define a statistic v* by the same formula but computed instead using the resampled data. With this notation we can state the bootstrap principle.

6.1 The bootstrap principle

The bootstrap setup is as follows:

1. x1, x2, . . . , xn is a data sample drawn from a distribution F.

2. u is a statistic computed from the sample.

3. F* is the empirical distribution of the data (the resampling distribution).

4. x1*, x2*, . . . , xn* is a resample of the data of the same size as the original sample.

5. u* is the statistic computed from the resample.

Then the bootstrap principle says that

1. F* ≈ F.

2. The variation of u is well-approximated by the variation of u*.

Our real interest is in point 2: we can approximate the variation of u by that of u*. We will exploit this to estimate the size of confidence intervals.

6.2 Why the resample is the same size as the original sample

This is straightforward: the variation of the statistic u will depend on the size of the sample. If we want to approximate this variation we need to use resamples of the same size.

6.3 Toy example of an empirical bootstrap confidence interval

Example 6. Toy example. We start with a made-up set of data that is small enough to show each step explicitly. The sample data is

30, 37, 36, 43, 42, 43, 43, 46, 41, 42

Problem: Estimate the mean µ of the underlying distribution and give an 80% bootstrap confidence interval.

Note: R code for this example is shown in the section 'R annotated transcripts' below. The code is also implemented in the R script class24-empiricalbootstrap.r which is posted with our other R code.

answer: The sample mean is x̄ = 40.3.
We use this as an estimate of the true mean µ of the underlying distribution. As in Example 1, to make the confidence interval we need to know how much the distribution of x̄ varies around µ. That is, we'd like to know the distribution of

δ = x̄ − µ.

If we knew this distribution we could find δ.1 and δ.9, the 0.1 and 0.9 critical values of δ. Then we'd have

P(δ.9 ≤ x̄ − µ ≤ δ.1 | µ) = 0.8  ⇔  P(x̄ − δ.9 ≥ µ ≥ x̄ − δ.1 | µ) = 0.8

which gives an 80% confidence interval of

[x̄ − δ.1, x̄ − δ.9].

As always with confidence intervals, we hasten to point out that the probabilities computed above are probabilities concerning the statistic x̄ given that the true mean is µ.

The bootstrap principle offers a practical approach to estimating the distribution of δ = x̄ − µ. It says that we can approximate it by the distribution of

δ* = x̄* − x̄,

where x̄* is the mean of an empirical bootstrap sample. Here's the beautiful key to this: since δ* is computed by resampling the original data, we can have a computer simulate δ* as many times as we'd like. Hence, by the law of large numbers, we can estimate the distribution of δ* with high precision.

Now let's return to the sample data with 10 points. We used R to generate 20 bootstrap samples, each of size 10. Each of the 20 columns in the following array is one bootstrap sample.

43 36 46 30 43 43 43 37 42 42 43 37 36 42 43 43 42 43 42 43
43 41 37 37 43 43 46 36 41 43 43 42 41 43 46 36 43 43 43 42
42 43 37 43 46 37 36 41 36 43 41 36 37 30 46 46 42 36 36 43
37 42 43 41 41 42 36 42 42 43 42 43 41 43 36 43 43 41 42 46
42 36 43 43 42 37 42 42 42 46 30 43 36 43 43 42 37 36 42 30
36 36 42 42 36 36 43 41 30 42 37 43 41 41 43 43 42 46 43 37
43 37 41 43 41 42 43 46 46 36 43 42 43 30 41 46 43 46 30 43
41 42 30 42 37 43 43 42 43 43 46 43 30 42 30 42 30 43 43 42
46 42 42 43 41 42 30 37 30 42 43 42 43 37 37 37 42 43 43 46
42 43 43 41 42 36 43 30 37 43 42 43 41 36 37 41 43 42 43 43

Next we compute δ* = x̄* − x̄ for each bootstrap sample (i.e.
each column) and sort them from smallest to biggest:

-1.6, -1.4, -1.4, -0.9, -0.5, -0.2, -0.1, 0.1, 0.2, 0.2,
0.4, 0.4, 0.7, 0.9, 1.1, 1.2, 1.2, 1.6, 1.6, 2.0

We will approximate the critical values δ.1 and δ.9 by δ*.1 and δ*.9. Since δ*.1 is at the 90th percentile we choose the 18th element in the list, i.e. 1.6. Likewise, since δ*.9 is at the 10th percentile we choose the 2nd element in the list, i.e. -1.4.

Therefore our bootstrap 80% confidence interval for µ is

[x̄ − δ*.1, x̄ − δ*.9] = [40.3 − 1.6, 40.3 + 1.4] = [38.7, 41.7].

In this example we only generated 20 bootstrap samples so they would fit on the page. Using R, we would generate 10000 or more bootstrap samples in order to obtain a very accurate estimate of δ*.1 and δ*.9.

6.4 Justification for the bootstrap principle

The bootstrap is remarkable because resampling gives us a decent estimate on how the point estimate might vary. We can only give you a 'hand-waving' explanation of this, but it's worth a try.

The bootstrap is based roughly on the law of large numbers, which says, in short, that with enough data the empirical distribution will be a good approximation of the true distribution. Visually it says that the histogram of the data should approximate the density of the true distribution.

First let's note what resampling can't do for us: it can't improve our point estimate. For example, if we estimate the mean µ by x̄, then in the bootstrap we would compute x̄* for many resamples of the data. If we took the average of all the x̄* we would expect it to be very close to x̄. This wouldn't tell us anything new about the true value of µ.

Even with a fair amount of data the match between the true and empirical distributions is not perfect, so there will be error in estimating the mean (or any other value). But the amount of variation in the estimates is much less sensitive to differences between the density and the histogram.
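Returning to the toy data of Example 6, the entire empirical bootstrap computation fits in a short function. This Python sketch is a rough translation of the R workflow (the function name is ours); with 10,000 resamples it should land near the interval [38.7, 41.7] found above:

```python
import random
import statistics

def empirical_bootstrap_ci(data, nboot=10000, lower=0.1, upper=0.9, seed=0):
    """Empirical bootstrap CI for the mean:
    [xbar - quantile(delta*, upper), xbar - quantile(delta*, lower)]."""
    rng = random.Random(seed)
    n = len(data)
    xbar = statistics.fmean(data)
    deltastar = []
    for _ in range(nboot):
        # Resample the data with replacement, same size as the original.
        resample = [data[rng.randrange(n)] for _ in range(n)]
        deltastar.append(statistics.fmean(resample) - xbar)
    deltastar.sort()
    d_lo = deltastar[int(lower * nboot)]      # approx. 0.1 quantile of delta*
    d_hi = deltastar[int(upper * nboot) - 1]  # approx. 0.9 quantile of delta*
    return xbar - d_hi, xbar - d_lo

data = [30, 37, 36, 43, 42, 43, 43, 46, 41, 42]
print(empirical_bootstrap_ci(data))
```

Note the reversal: the upper quantile of δ* gives the lower endpoint of the interval, and vice versa, exactly as in the derivation above.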
As long as they are reasonably close, both the empirical and true distributions will exhibit similar amounts of variation. So, in general the bootstrap principle is more robust when approximating the distribution of relative variation than when approximating absolute distributions.

What we have in mind is the scenario of our examples. The distribution (over different sets of experimental data) of x̄ is 'centered' at µ and the distribution of x̄* is centered at x̄. If there is a significant separation between x̄ and µ then these two distributions will also differ significantly. On the other hand the distribution of δ = x̄ − µ describes the variation of x̄ about its center. Likewise the distribution of δ* = x̄* − x̄ describes the variation of x̄* about its center. So even if the centers are quite different the two variations about the centers can be approximately equal.

The figure below illustrates how the empirical distribution approximates the true distribution. To make the figure we generate 100 random values from a chi-square distribution with 3 degrees of freedom. The figure shows the pdf of the true distribution as a blue line and a histogram of the empirical distribution in orange.

[Figure: The true and empirical distributions are approximately equal.]

7 Other statistics

So far in this class we've avoided confidence intervals for the median and other statistics because their sample distributions are hard to describe theoretically. The bootstrap has no such problem. In fact, to handle the median all we have to do is change 'mean' to 'median' in the R code from Example 6.

Example 7. Old Faithful: confidence intervals for the median

Old Faithful is a geyser in Yellowstone National Park in Wyoming:
http://en.wikipedia.org/wiki/Old_Faithful

There is a publicly available data set which gives the durations of 272 consecutive eruptions. Here is a histogram of the data.
Question: Estimate the median length of an eruption and give a 90% confidence interval for the median.

answer: The full answer to this question is in the R file oldfaithful simple.r and the Old Faithful data set. Both are posted on the class R code page. (Look under 'Other R code' for the old faithful script and data.)

Note: the code in oldfaithful simple.r assumes that the data oldfaithful.txt is in the current working directory.

Let's walk through a summary of the steps needed to answer the question.

1. Data: x1, . . . , x272

2. Data median: xmedian = 240

3. Find the median x*median of a bootstrap sample x1*, . . . , x272*. Repeat 1000 times.

4. Compute the bootstrap differences

δ* = x*median − xmedian

Put these 1000 values in order and pick out the 0.95 and 0.05 critical values, i.e. the 50th and 950th biggest values. Call these δ*.95 and δ*.05.

5. The bootstrap principle says that we can use δ*.95 and δ*.05 as estimates of δ.95 and δ.05. So our estimated 90% bootstrap confidence interval for the median is

[xmedian − δ*.05, xmedian − δ*.95].

The bootstrap 90% CI we found for the Old Faithful data was [235, 250]. Since we used 1000 bootstrap samples, a new simulation starting from the same sample data should produce a similar interval. If in Step 3 we increase the number of bootstrap samples to 10000, then the intervals produced by simulation would vary even less. One common strategy is to increase the number of bootstrap samples until the resulting simulations produce intervals that vary less than some acceptable level.

Example 8. Using the Old Faithful data, estimate P(|x̄ − µ| > 5 | µ).
answer: We proceed exactly as in the previous example except using the mean instead of
the median.

1. Data: x1, . . . , x272

2. Data mean: x̄ = 209.27
3. Find the mean x̄* of 1000 empirical bootstrap samples: x1*, . . . , x272*.

4. Compute the bootstrap differences

δ* = x̄* − x̄

5. The bootstrap principle says that we can use the distribution of δ* as an approximation
for the distribution of δ = x̄ − µ. Thus,

P(|x̄ − µ| > 5 | µ) = P(|δ| > 5 | µ) ≈ P(|δ*| > 5)

Our bootstrap simulation for the Old Faithful data gave 0.225 for this probability.
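The same resampling loop estimates this tail probability. We don't include the Old Faithful data here, so the Python sketch below runs on made-up stand-in data of the same size (272 points, with a mean and spread loosely like the real durations); the value 0.225 above came from the actual data set:

```python
import random
import statistics

def bootstrap_tail_prob(data, threshold=5.0, nboot=1000, seed=0):
    """Bootstrap estimate of P(|xbar - mu| > threshold), approximating
    delta = xbar - mu by delta* = xbar* - xbar."""
    rng = random.Random(seed)
    n = len(data)
    xbar = statistics.fmean(data)
    exceed = 0
    for _ in range(nboot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        if abs(statistics.fmean(resample) - xbar) > threshold:
            exceed += 1
    return exceed / nboot

# Hypothetical stand-in for the 272 eruption durations:
rng = random.Random(1)
fake_durations = [rng.gauss(209, 68) for _ in range(272)]
print(bootstrap_tail_prob(fake_durations))
```

Running the same function on the real data set would reproduce the estimate reported in the text (up to simulation noise).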

8 Parametric bootstrap

The examples in the previous sections all used the empirical bootstrap, which makes no
assumptions at all about the underlying distribution and draws bootstrap samples by
resampling the data. In this section we will look at the parametric bootstrap. The only
difference between the parametric and empirical bootstrap is the source of the bootstrap
sample. For the parametric bootstrap, we generate the bootstrap sample from a
parametrized distribution.

Here are the elements of using the parametric bootstrap to estimate a confidence interval
for a parameter.

0. Data: x1, . . . , xn drawn from a distribution F (θ) with unknown parameter θ.

1. A statistic θ̂ that estimates θ.

2. Our bootstrap samples are drawn from F (θ̂).

3. For each bootstrap sample
x1*, . . . , xn*

we compute θ̂∗ and the bootstrap difference δ∗ = θ̂∗ − θ̂.
4. The bootstrap principle says that the distribution of δ∗ approximates the distribution of

δ = θ̂ − θ.
5. Use the bootstrap differences to make a bootstrap confidence interval for θ.

Example 9. Suppose the data x1, . . . , x300 is drawn from an exp(λ) distribution. Assume
also that the data mean x = 2. Estimate λ and give a 95% parametric bootstrap confidence
interval for λ.

answer: This is implemented in the R script class24-parametricbootstrap.r which is
posted with our other R code.

It will be easiest to explain the solution using commented code.

# Parametric bootstrap

# Given 300 data points with mean 2.

# Assume the data is exp(lambda)

# PROBLEM: Compute a 95% parametric bootstrap confidence interval for lambda

# We are given the number of data points and mean

n = 300

xbar = 2

# The MLE for lambda is 1/xbar

lambdahat = 1.0/xbar

# Generate the bootstrap samples

# Each column is one bootstrap sample (of 300 resampled values)

nboot = 1000

# Here’s the key difference with the empirical bootstrap:

# We draw the bootstrap sample from Exponential(lambdahat)

x = rexp(n*nboot, lambdahat)

bootstrapsample = matrix(x, nrow=n, ncol=nboot)

# Compute the bootstrap lambdastar

lambdastar = 1.0/colMeans(bootstrapsample)

# Compute the differences

deltastar = lambdastar - lambdahat


# Find the 0.05 and 0.95 quantile for deltastar
d = quantile(deltastar, c(0.05,0.95))

# Calculate the 95% confidence interval for lambda.
ci = lambdahat - c(d[2], d[1])

# This line of code is just one way to format the output text.
# sprintf is an old C function for doing this. R has many other
# ways to do the same thing.
s = sprintf("Confidence interval for lambda: [%.3f, %.3f]", ci[1], ci[2])
cat(s)

9 The bootstrap percentile method (should not be used)

Instead of computing the differences δ*, the bootstrap percentile method uses the
distribution of the bootstrap sample statistic as a direct approximation of the data sample statistic.

Example 10. Let’s redo Example 6 using the bootstrap percentile method.
We first compute x̄* from the bootstrap samples given in Example 6. After sorting we get

35.7 37.4 38.0 39.5 39.7 39.8 39.8 40.1 40.1 40.6 40.7 40.8 41.1 41.1 41.7 42.0
42.1 42.4 42.4 42.4

The percentile method says to use the distribution of x̄* as an approximation to the
distribution of x̄. The 0.9 and 0.1 critical values are given by the 2nd and 18th elements.
Therefore the 80% confidence interval is [37.4, 42.4]. This is a bit wider than our answer
to Example 6.
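For concreteness, here is the percentile-method computation from Example 10 in a few lines of Python (the sorted list is transcribed from the text; the course itself uses R):

```python
# Sorted bootstrap means xbar* from the 20 resamples of Example 6:
xbar_star = [35.7, 37.4, 38.0, 39.5, 39.7, 39.8, 39.8, 40.1, 40.1, 40.6,
             40.7, 40.8, 41.1, 41.1, 41.7, 42.0, 42.1, 42.4, 42.4, 42.4]

# Percentile method: read the 80% CI straight off the 0.1 and 0.9
# quantiles of xbar*, i.e. the 2nd and 18th of the 20 sorted values.
ci = (xbar_star[1], xbar_star[17])
print(ci)  # (37.4, 42.4)
```

Contrast this with the empirical bootstrap of Example 6, which centers the differences at x̄ before reading off quantiles.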

The bootstrap percentile method is appealing due to its simplicity. However it depends on
the bootstrap distribution of x̄* based on a particular sample being a good approximation to
the true distribution of x̄. Rice says of the percentile method, "Although this direct equation
of quantiles of the bootstrap sampling distribution with confidence limits may seem initially
appealing, its rationale is somewhat obscure." 2 In short, don't use the bootstrap
percentile method. Use the empirical bootstrap instead (we have explained both in the
hopes that you won't confuse the empirical bootstrap for the percentile bootstrap).

10 R annotated transcripts

10.1 Using R to generate an empirical bootstrap confidence interval

This code only generates 20 bootstrap samples. In real practice we would generate many
more bootstrap samples. It is making a bootstrap confidence interval for the mean. This
code is implemented in the R script class24-empiricalbootstrap.r which is posted with
our other R code.

# Data for the example 6
x = c(30,37,36,43,42,43,43,46,41,42)
n = length(x)

2John Rice, Mathematical Statistics and Data Analysis, 2nd edition, p. 272.


# sample mean
xbar = mean(x)

nboot = 20
# Generate 20 bootstrap samples, i.e. an n x 20 array of
# random resamples from x
tmpdata = sample(x,n*nboot, replace=TRUE)
bootstrapsample = matrix(tmpdata, nrow=n, ncol=nboot)

# Compute the means x-star
bsmeans = colMeans(bootstrapsample)

# Compute δ∗ for each bootstrap sample
deltastar = bsmeans – xbar

# Find the 0.1 and 0.9 quantile for deltastar

d = quantile(deltastar, c(0.1, 0.9))

# Calculate the 80% confidence interval for the mean.

ci = xbar - c(d[2], d[1])

cat('Confidence interval: ', ci, '\n')

# ALTERNATIVE: the quantile() function is sophisticated about

# choosing a quantile between two data points. A less sophisticated

# approach is to pick the quantiles by sorting deltastar and

# choosing the index that corresponds to the desired quantiles.

# We do this below.

# Sort the results

sorteddeltastar = sort(deltastar)

# Look at the sorted results

hist(sorteddeltastar, nclass=6)

print(sorteddeltastar)

# Find the .1 and .9 critical values of deltastar

d9alt = sorteddeltastar[2]

d1alt = sorteddeltastar[18]

# Find and print the 80% confidence interval for the mean

ciAlt = xbar - c(d1alt, d9alt)

cat('Alternative confidence interval: ', ciAlt, '\n')


Linear regression
Class 25, 18.05

Jeremy Orloff and Jonathan Bloom

1 Learning Goals

1. Be able to use the method of least squares to fit a line to bivariate data.

2. Be able to give a formula for the total squared error when fitting any type of curve to
data.

3. Be able to say the words homoscedasticity and heteroscedasticity.

2 Introduction

Suppose we have collected bivariate data (xi, yi), i = 1, . . . , n. The goal of linear regression
is to model the relationship between x and y by finding a function y = f(x) that is a
close fit to the data. The modeling assumptions we will use are that xi is not random and
that yi is a function of xi plus some random noise. With these assumptions x is called the
independent or predictor variable and y is called the dependent or response variable.

Example 1. The cost of a first class stamp in cents over time is given in the following list.
.05 (1963) .06 (1968) .08 (1971) .10 (1974) .13 (1975) .15 (1978) .20 (1981) .22 (1985)
.25 (1988) .29 (1991) .32 (1995) .33 (1999) .34 (2001) .37 (2002) .39 (2006) .41 (2007)
.42 (2008) .44 (2009) .45 (2012) .46 (2013) .49 (2014)

Using the R function lm we found the ‘least squares fit’ for a line to this data is

y = −0.06558 + 0.87574x,

where x is the number of years since 1960 and y is in cents.

Using this result we ‘predict’ that in 2016 (x = 56) the cost of a stamp will be 49 cents
(since −0.06558 + 0.87574 · 56 = 48.98).

[Figure: Stamp cost (cents) vs. time (years since 1960). Red dot is predicted cost in 2016.]


Note that none of the data points actually lie on the line. Rather this line has the ‘best fit’
with respect to all the data, with a small error for each data point.

Example 2. Suppose we have n pairs of fathers and adult sons. Let xi and yi be the
heights of the ith father and son, respectively. The least squares line for this data could be
used to predict the adult height of a young boy from that of his father.

Example 3. We are not limited to best fit lines. For any positive integer d, the method of least
squares may be used to find a polynomial of degree d with the 'best fit' to the data. Here's
a figure showing the least squares fit of a parabola (d = 2).

Fitting a parabola, b2x^2 + b1x + b0, to data

3 Fitting a line using least squares

Suppose we have data (xi, yi) as above. The goal is to find the line

y = β1x + β0

that ‘best fits’ the data. Our model says that each yi is predicted by xi up to some error Ei:

yi = β1xi + β0 + Ei.

So
Ei = yi − β1xi − β0.

The method of least squares finds the values β̂0 and β̂1 of β0 and β1 that minimize the sum
of the squared errors:

S(β0, β1) = Σi Ei^2 = Σi (yi − β1xi − β0)^2.

Using calculus or linear algebra (details in the appendix), we find

β̂1 = sxy/sxx ,   β̂0 = ȳ − β̂1x̄   (1)


where

x̄ = (1/n) Σ xi,   ȳ = (1/n) Σ yi,   sxx = (1/(n−1)) Σ (xi − x̄)^2,   sxy = (1/(n−1)) Σ (xi − x̄)(yi − ȳ).

Here x̄ is the sample mean of x, ȳ is the sample mean of y, sxx is the sample variance of x,
and sxy is the sample covariance of x and y.

Example 4. Use least squares to fit a line to the following data: (0,1), (2,1), (3,4).

answer: In our case, (x1, y1) = (0, 1), (x2, y2) = (2, 1) and (x3, y3) = (3, 4). So

x̄ = 5/3,   ȳ = 2,   sxx = 7/3,   sxy = 2.

Using the formulas in Equation (1) we get

β̂1 = sxy/sxx = 6/7,   β̂0 = ȳ − β̂1x̄ = 4/7.

So the least squares line has equation y = 4/7 + (6/7)x. This is shown as the green line in the
following figure. We will discuss the blue parabola soon.
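Equation (1) is straightforward to code. Below is a minimal Python sketch (our addition, not part of the original notes) that applies it to the data of Example 4. Since β̂1 is a ratio, any common normalization of sxx and sxy cancels, so unnormalized sums suffice.

```python
def least_squares_line(xs, ys):
    """Return (b0, b1) minimizing sum((y - b1*x - b0)^2), per Equation (1)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # The 1/(n-1) factors in sxx and sxy cancel in the ratio, so omit them.
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    return b0, b1

# Example 4 data: (0,1), (2,1), (3,4)
b0, b1 = least_squares_line([0, 2, 3], [1, 1, 4])
print(b0, b1)  # 4/7 and 6/7, matching the hand computation
```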

Least squares fit of a line (green) and a parabola (blue)

Simple linear regression: It’s a little confusing, but the word linear in ‘linear regression’
does not refer to fitting a line. We will explain its meaning below. However, the most
common curve to fit is a line. When we fit a line to bivariate data it is called simple linear
regression.

3.1 Residuals

For a line the model is
yi = β̂1xi + β̂0 + Ei.

We think of β̂1xi + β̂0 as predicting or explaining yi. The left-over term Ei is called the
residual, which we think of as random noise or measurement error. A useful visual check of
the linear regression model is to plot the residuals. The data points should hover near the
regression line. The residuals should look about the same across the range of x.
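To make the residuals concrete, here is a short Python sketch (our addition) computing them for the line fit in Example 4. For a least squares fit that includes an intercept, the residuals always sum to zero; this follows from the ∂S/∂β0 = 0 condition used in deriving Equation (1).

```python
xs, ys = [0, 2, 3], [1, 1, 4]
b0, b1 = 4/7, 6/7  # fitted intercept and slope from Example 4

# Residual E_i = y_i - (b1*x_i + b0): the part of y_i the line leaves unexplained.
residuals = [y - (b1 * x + b0) for x, y in zip(xs, ys)]
print(residuals)       # 3/7, -9/7, 6/7 up to rounding
print(sum(residuals))  # ~0: with an intercept, least squares residuals sum to 0
```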




Data with regression line (left) and residuals (right). Note the homoscedasticity.

3.2 Homoscedasticity

An important assumption of the linear regression model is that the residuals Ei have the
same variance for all i. This is called homoscedasticity. You can see this is the case in
both figures above: the data hover within a band of fixed width around the regression line,
and at every x the residuals have about the same vertical spread.

Below is a figure showing heteroscedastic data. The vertical spread of the data increases as
x increases. Before using least squares on this data we would need to transform the data
to be homoscedastic.


Heteroscedastic Data
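There are formal tests for heteroscedasticity, but a quick numeric check is to compare the residual spread on the low-x and high-x halves of the data. The Python sketch below is our addition, using made-up data whose noise grows with x; a variance ratio well above 1 flags heteroscedasticity.

```python
# Made-up data: y = 2x plus 'noise' whose size grows with x (heteroscedastic).
xs = list(range(1, 11))
ys = [2 * x + (0.2 * x if i % 2 == 0 else -0.2 * x) for i, x in enumerate(xs)]

# Fit the least squares line and compute residuals.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b0 = ybar - b1 * xbar
res = [y - (b1 * x + b0) for x, y in zip(xs, ys)]

def sample_var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / (len(v) - 1)

# Compare residual variance on the low-x and high-x halves.
ratio = sample_var(res[n // 2:]) / sample_var(res[: n // 2])
print(ratio)  # well above 1, flagging heteroscedasticity
```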

4 Linear regression for fitting polynomials

When we fit a line to data it is called simple linear regression. We can also use linear
regression to fit polynomials to data. The use of the word linear in both cases may seem
confusing. This is because the word ‘linear’ in linear regression does not refer to fitting a
line. Rather it refers to the linear algebraic equations for the unknown parameters βi, i.e.
each βi has exponent 1.

Example 5. Take the same data as in Example 4 and use least squares to find the best
fitting parabola to the data.

answer: A parabola has the formula y = β0 + β1x + β2x^2. The squared error is

S(β0, β1, β2) = Σ (yi − (β0 + β1xi + β2xi^2))^2.

After substituting the given values for each xi and yi, we can use calculus to find the triple
(β0, β1, β2) that minimizes S. With this data, we find that the least squares parabola has
equation

y = 1 − 2x + x^2.

Note that for 3 points the quadratic fit is perfect.
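One standard way to carry out this minimization numerically is to solve the normal equations AᵀAβ = Aᵀy, where A has rows (1, xi, xi^2, …). The self-contained Python sketch below is our addition (the notes' appendix derives only the line case); it works for any degree d and recovers y = 1 − 2x + x^2 for the data above.

```python
def polyfit_least_squares(xs, ys, d):
    """Fit y = b0 + b1*x + ... + bd*x^d by solving the normal equations."""
    m = d + 1
    A = [[x ** j for j in range(m)] for x in xs]  # design matrix
    M = [[sum(row[j] * row[k] for row in A) for k in range(m)] for j in range(m)]
    v = [sum(row[j] * y for row, y in zip(A, ys)) for j in range(m)]
    # Gaussian elimination with partial pivoting on [M | v].
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p], v[c], v[p] = M[p], M[c], v[p], v[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
            v[r] -= f * v[c]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (v[r] - sum(M[r][k] * beta[k] for k in range(r + 1, m))) / M[r][r]
    return beta

print(polyfit_least_squares([0, 2, 3], [1, 1, 4], 2))  # ≈ [1, -2, 1]
```

With d = 1 the same function reproduces the line of Example 4.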

Least squares fit of a line (green) and a parabola (blue)

Example 6. The pairs (xi, yi) may give the age and vocabulary size of n children.
Since we expect that young children acquire new words at an accelerating pace, we might
guess that a higher order polynomial might best fit the data.

Example 7. (Transforming the data) Sometimes it is necessary to transform the data
before using linear regression. For example, suppose the relationship is exponential,
i.e. y = c e^(ax). Then

ln(y) = ax + ln(c).

So we can use simple linear regression on the pairs (xi, ln(yi)) to obtain a model

ln(yi) = β̂0 + β̂1xi

and then exponentiate to obtain the exponential model

yi = e^(β̂0) e^(β̂1 xi).
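The transformation in Example 7 is easy to carry out in code. The sketch below is our addition, with made-up data generated exactly from y = 2e^(0.5x); it fits a line to (xi, ln yi) and exponentiates the intercept to recover c and a.

```python
import math

xs = [0, 1, 2, 3, 4]
ys = [2 * math.exp(0.5 * x) for x in xs]  # made-up exact exponential data

# Fit ln(y) = b0 + b1*x by simple linear regression.
ls = [math.log(y) for y in ys]
n = len(xs)
xbar, lbar = sum(xs) / n, sum(ls) / n
b1 = sum((x - xbar) * (l - lbar) for x, l in zip(xs, ls)) / sum((x - xbar) ** 2 for x in xs)
b0 = lbar - b1 * xbar

c, a = math.exp(b0), b1
print(c, a)  # ≈ 2 and 0.5, recovering y = c * e^(a x)
```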

4.1 Overfitting

You can always achieve a better fit by using a higher order polynomial. For instance, given 6
data points (with distinct xi) one can always find a fifth order polynomial that goes through
all of them. This can result in what’s called overfitting. That is, fitting the noise as well
as the true relationship between x and y. An overfit model will fit the original data better
but perform less well on predicting y for new values of x. Indeed, a primary challenge of
statistical modeling is balancing model fit against model complexity.

Example 8. In the plot below, we fit polynomials of degree 1, 2, and 9 to bivariate data
consisting of 10 data points. The degree 2 model (maroon) gives a significantly better fit
than the degree 1 model (blue). The degree 9 model (orange) fits the data exactly,
but at a glance we would guess it is overfit. That is, we don't expect it to do a good job
fitting the next data point we see.

In fact, we generated this data using a quadratic model, so the degree 2 model will tend to
perform best fitting new data points.

Data with polynomial fits of degree 1 (blue), 2 (maroon), and 9 (orange)

4.2 R function lm

As you would expect we don’t actually do linear regression by hand. Computationally,
linear regression reduces to solving simultaneous equations, i.e. to matrix calculations. The
R function lm can be used to fit any order polynomial to data. (lm stands for linear model).
We will explore this in the next studio class. In fact lm can fit many types of functions
besides polynomials, as you can explore using R's help or Google.

5 Multiple linear regression

Data is not always bivariate. It can be trivariate or even of some higher dimension. Suppose
we have data in the form of tuples

(yi, x1,i, x2,i, . . . , xm,i).

We can analyze this in a manner very similar to linear regression on bivariate data. That
is, we can use least squares to fit the model

y = β0 + β1x1 + β2x2 + · · · + βmxm.

Here each xj is a predictor variable and y is the response variable. For example, we might
be interested in how a fish population varies with measured levels of several pollutants, or
we might want to predict the adult height of a son based on the height of the mother and
the height of the father.

We don’t have time in 18.05 to study multiple linear regression, but we wanted you to see
the name.
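Computationally, multiple linear regression is the same normal-equations calculation with extra predictor columns. The Python sketch below is our addition, using made-up data that satisfies y = 1 + 2x1 − 3x2 exactly, so the fit recovers those coefficients.

```python
def fit_linear_model(X, ys):
    """Least squares for y = b0 + b1*x1 + ... + bm*xm via the normal equations."""
    rows = [[1.0] + list(x) for x in X]  # prepend an intercept column
    m = len(rows[0])
    M = [[sum(r[j] * r[k] for r in rows) for k in range(m)] for j in range(m)]
    v = [sum(r[j] * y for r, y in zip(rows, ys)) for j in range(m)]
    # Gaussian elimination with partial pivoting.
    for c in range(m):
        p = max(range(c, m), key=lambda i: abs(M[i][c]))
        M[c], M[p], v[c], v[p] = M[p], M[c], v[p], v[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
            v[r] -= f * v[c]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (v[r] - sum(M[r][k] * beta[k] for k in range(r + 1, m))) / M[r][r]
    return beta

X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
ys = [1 + 2 * x1 - 3 * x2 for x1, x2 in X]  # made-up exact data
beta = fit_linear_model(X, ys)
print(beta)  # ≈ [1, 2, -3]
```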


6 Least squares as a statistical model

The linear regression model for fitting a line says that the value yi in the pair (xi, yi) is
drawn from a random variable

Yi = β0 + β1xi + εi

where the ‘error’ terms εi are independent random variables with mean 0 and standard
deviation σ. The standard assumption is that the εi are i.i.d. with distribution N(0, σ2).
In any case, the mean of Yi is given by:

E(Yi) = β0 + β1xi + E(εi) = β0 + β1xi.

From this perspective, the least squares method chooses the values of β0 and β1 which
minimize the sample variance about the line.

In fact, the least squares estimate (β̂0, β̂1) coincides with the maximum likelihood estimate
for the parameters (β0, β1); that is, among all possible coefficients, (β̂0, β̂1) are the ones that
make the observed data most probable.
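To spell out why these estimates coincide under the i.i.d. N(0, σ^2) assumption on the εi, here is the log likelihood computation (an added step, using the model above):

```latex
\ln L(\beta_0,\beta_1)
  = \sum_{i=1}^{n} \ln\!\left( \frac{1}{\sqrt{2\pi}\,\sigma}
      \, e^{-(y_i-\beta_0-\beta_1 x_i)^2/2\sigma^2} \right)
  = -\frac{n}{2}\ln(2\pi\sigma^2)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n} (y_i-\beta_0-\beta_1 x_i)^2 .
```

Only the last sum depends on (β0, β1), and it enters with a minus sign, so maximizing the likelihood over (β0, β1) is exactly minimizing the sum of squared errors S(β0, β1).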

7 Regression to the mean

The reason for the term ‘regression’ is that the predicted response variable y will tend to
be ‘closer’ to (i.e., regress to) its mean than the predictor variable x is to its mean. Here
closer is in quotes because we have to control for the scale (i.e. standard deviation) of each
variable. The way we control for scale is to first standardize each variable.

ui = (xi − x̄)/√sxx ,   vi = (yi − ȳ)/√syy .

Standardization changes the mean to 0 and variance to 1:

ū = v̄ = 0, suu = svv = 1.

The algebraic properties of covariance show

suv = sxy/√(sxx syy) = ρ,

the correlation coefficient. Thus the least squares fit to v = β0 + β1u has

β̂1 = suv/suu = ρ   and   β̂0 = v̄ − β̂1ū = 0.
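The identity 'slope of the standardized fit = ρ' is easy to check numerically. The Python sketch below is our addition, with made-up positively correlated data; it standardizes, fits a line, and compares the slope with the correlation coefficient.

```python
import math

xs = [1, 2, 4, 5, 7]
ys = [2, 3, 3, 6, 8]  # made-up positively correlated data
n = len(xs)

def mean_var(v):
    m = sum(v) / n
    s = sum((t - m) ** 2 for t in v) / (n - 1)
    return m, s  # sample mean and sample variance

xbar, sxx = mean_var(xs)
ybar, syy = mean_var(ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
rho = sxy / math.sqrt(sxx * syy)  # correlation coefficient

# Standardize, then fit v = b1*u + b0 by least squares.
us = [(x - xbar) / math.sqrt(sxx) for x in xs]
vs = [(y - ybar) / math.sqrt(syy) for y in ys]
b1 = sum(u * v for u, v in zip(us, vs)) / sum(u * u for u in us)
b0 = sum(vs) / n - b1 * sum(us) / n

print(b1 - rho, b0)  # both ~0: the slope is rho and the intercept is 0
```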

So the least squares line is v = ρu. Since ρ is the correlation coefficient, it is between -1 and
1. Let’s assume it is positive and less than 1 (i.e., x and y are positively but not perfectly
correlated). Then the formula v = ρu means that if u is positive then the predicted value
of v is less than u. That is, v is closer to 0 than u. Equivalently,

(y − ȳ)/√syy < (x − x̄)/√sxx

i.e., y regresses to ȳ. Notice how the standardization takes care of controlling the scale.

Consider the extreme case of 0 correlation between x and y. Then, no matter what the x
value, the predicted value of y is always ȳ. That is, y has regressed all the way to its mean.
Note also that the regression line always goes through the point (x̄, ȳ).

Example 9. Regression to the mean is important in longitudinal studies. Rice (Mathematical
Statistics and Data Analysis) gives the following example. Suppose children are given
an IQ test at age 4 and another at age 5. We expect the results will be positively correlated.
The above analysis says that, on average, the kids who do poorly on the first test will
tend to show improvement (i.e., regress to the mean) on the second test. Thus, a useless
intervention might be misinterpreted as useful, since it seems to improve scores.

Example 10. Another example with practical consequences involves reward and punishment.
Imagine a school where high performance on an exam is rewarded and low performance is
punished. Regression to the mean tells us that (on average) the high performing students
will do slightly worse on the next exam and the low performing students will do slightly
better. An unsophisticated view of the data will make it seem that punishment improved
performance and reward actually hurt performance. There are real consequences if those in
authority act on this idea.

8 Appendix

We collect in this appendix a few things you might find interesting. You will not be asked
to know these things for exams.

8.1 Proof of the formula for the least squares fit of a line

The most straightforward proof uses calculus. The sum of the squared errors is

S(β0, β1) = Σ_{i=1}^{n} (yi − β1xi − β0)^2.

Taking partial derivatives (and remembering that the xi and yi are data, hence constant)
and setting them to 0 gives

∂S/∂β0 = Σ_{i=1}^{n} −2(yi − β1xi − β0) = 0
∂S/∂β1 = Σ_{i=1}^{n} −2xi(yi − β1xi − β0) = 0

These are two linear equations in the unknowns β0 and β1:

(Σ xi) β1 + n β0 = Σ yi
(Σ xi^2) β1 + (Σ xi) β0 = Σ xi yi

Solving for β1 and β0 gives the formulas in Equation (1).

A sneakier approach, which avoids calculus, is to standardize the data, find the best fit line,
and then unstandardize. We omit the details.

For a deluge of applications across disciplines see:
http://en.wikipedia.org/wiki/Linear_regression#Applications_of_linear_regression

8.2 Measuring the fit

Once one computes the regression coefficients, it is important to check how well the regression
model fits the data (i.e., how closely the best fit line tracks the data). A common but crude
'goodness of fit' measure is the coefficient of determination, denoted R^2. We'll need some
notation to define it. The total sum of squares is given by

TSS = Σ (yi − ȳ)^2.

The residual sum of squares is the sum of the squares of the residuals. When fitting a line,
this is

RSS = Σ (yi − β̂0 − β̂1xi)^2.

The RSS is the 'unexplained' portion of the total sum of squares, i.e. unexplained by the
regression equation. The difference TSS − RSS is the 'explained' portion of the total sum
of squares. The coefficient of determination R^2 is the ratio of the 'explained' portion to the
total sum of squares:

R^2 = (TSS − RSS)/TSS.

In other words, R^2 measures the proportion of the variability of the data that is accounted
for by the regression model. A value close to 1 indicates a good fit, while a value close to 0
indicates a poor fit. In the case of simple linear regression, R^2 is simply the square of the
correlation coefficient between the observed values yi and the predicted values β̂0 + β̂1xi.

Example 11.
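As a quick check of these definitions, the following Python sketch (our addition) computes R^2 for the line fit of Example 4 and verifies that it equals the squared correlation between x and y.

```python
import math

xs, ys = [0, 2, 3], [1, 1, 4]
b0, b1 = 4/7, 6/7  # least squares line from Example 4
ybar = sum(ys) / len(ys)

TSS = sum((y - ybar) ** 2 for y in ys)
RSS = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
R2 = (TSS - RSS) / TSS
print(R2)  # 4/7, about 0.571

# For simple linear regression, R^2 equals the squared correlation coefficient.
xbar = sum(xs) / len(xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
rho = sxy / math.sqrt(sxx * TSS)
print(abs(R2 - rho ** 2))  # ~0
```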
In the overfitting example (Example 8), the values of R^2 are:

degree   R^2
  1      0.3968
  2      0.9455
  9      1.0000

Notice that the goodness of fit measure increases with the degree. The fit is better, but
the model also becomes more complex, since it takes more coefficients to describe higher
order polynomials.