Hypothesis Testing


Bowei Chen

School of Computer Science

University of Lincoln

CMP3036M/CMP9063M Data Science

Assessment Item 1 Has Been Released

Please check the assessment documents on Blackboard, including:

• Tasks/questions

• Datasets (ds_training.csv, ds_test.csv, ds_submission_sample.csv)

• Hand-in date

• Submission requirements!

Note:

• Several algorithms and methods will be delivered in the following weeks,

which are useful in completing the assessment.

• You have enough time to complete the assessment.

• If you have any questions, please get in touch.

Module Github Page

The course contents have been slightly adjusted according to your feedback during Weeks 1-5. Please check the module Github page for the updated information:

https://github.com/boweichen/CMP3036MDataScience


No Lecture and Workshop Next Week

• There is no lecture or workshop next week (i.e., the week beginning 7th November 2016); you will do self-directed learning instead.

• Some additional materials will be uploaded to Blackboard for you to read. They cover advanced topics (not required for the assessment, but helpful for deepening your understanding).

• Solutions to the additional exercises from previous workshops and to the challenging questions from previous lectures will be uploaded to Blackboard.

Objectives

• Fundamentals of Hypothesis Testing

• Type I and Type II Errors

• Tests of Significance

Quick Recap!

[Diagram: sampling draws a sample from the population 𝑓(𝑥; 𝜃); a statistic/estimator computed from the sample is used for inference about the parameter 𝜃.]

Random Sample

The random variables 𝑥1, ⋯ , 𝑥𝑛 are called a random sample of size 𝒏 from the
population 𝑓(𝑥) if 𝑥1, ⋯ , 𝑥𝑛 are mutually independent random variables and the
marginal PDF or PMF of each 𝑥𝑖 is the same function 𝑓(𝑥).

In other words, 𝑥1, ⋯ , 𝑥𝑛 are called independent and identically distributed
(i.i.d. or IID) random variables with PDF or PMF 𝑓(𝑥).

Joint PDF/PMF

Expression without parameter:

𝑓(𝑥1, ⋯ , 𝑥𝑛) = 𝑓(𝑥1) ⋯ 𝑓(𝑥𝑛) = ∏ᵢ₌₁ⁿ 𝑓(𝑥𝑖)

Expression with parameter:

𝑓(𝑥1, ⋯ , 𝑥𝑛 ∣ 𝜃) = 𝑓(𝑥1 ∣ 𝜃) ⋯ 𝑓(𝑥𝑛 ∣ 𝜃) = ∏ᵢ₌₁ⁿ 𝑓(𝑥𝑖 ∣ 𝜃) = ℒ(𝜃)

This is called the likelihood function and will be studied in later lectures. Please Google it if you are interested!
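Although the likelihood function is covered in later lectures, a minimal sketch may help make the product formula concrete. The following Python example (the module's workshops use R; the normal model and the data here are hypothetical choices, not from the lecture) computes ℒ(𝜃) as a product of marginal densities:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(sample, mu, sigma):
    # L(theta) = f(x_1 | theta) * ... * f(x_n | theta) for an i.i.d. sample
    result = 1.0
    for x in sample:
        result *= normal_pdf(x, mu, sigma)
    return result

sample = [1.2, 0.8, 1.1]  # hypothetical data
print(likelihood(sample, mu=1.0, sigma=1.0))
```

In practice one usually works with the log-likelihood (a sum of log-densities) to avoid numerical underflow for large 𝑛.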

Measures of Central Location

Mean
• Advantage: all the data is used to find the answer.
• Disadvantage: very large or very small values can distort the answer.

Median
• Advantage: very big and very small values don't affect it.
• Disadvantage: takes a long time to calculate for a very large set of data.

Mode
• Advantage: the only average we can use when the data is not numerical.
• Disadvantages: 1) there may be more than one mode; 2) there may be no mode at all if no value occurs more than once; 3) it may not accurately represent the data.

Sampling Distribution

Since an estimator 𝜃̂ is a function of random variables, it follows that 𝜃̂ is also a random variable. The probability distribution of an estimator is called a sampling distribution.

Example:

• Sample mean

• Sample variance

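A sampling distribution can be visualised by simulation: draw many samples and look at the spread of the statistic. A minimal Python sketch (the module uses R; the population parameters 𝜇 = 5, 𝜎 = 2 and the sample size are illustrative assumptions):

```python
import random
import statistics

random.seed(0)

# Simulate the sampling distribution of the sample mean:
# draw many samples of size n from a N(mu, sigma^2) population
mu, sigma, n, reps = 5.0, 2.0, 30, 2000
sample_means = [
    statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(reps)
]

# The sample mean is itself a random variable; its distribution
# concentrates around mu with standard deviation sigma / sqrt(n)
print(statistics.mean(sample_means))   # close to 5
print(statistics.stdev(sample_means))  # close to 2 / sqrt(30) = 0.365...
```

Repeating the experiment with the sample variance in place of the sample mean gives the sampling distribution of that estimator instead.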

Central Limit Theorem (CLT)

Let 𝑥1, ⋯ , 𝑥𝑛 be a set of i.i.d. random variables and each variable has a population
mean 𝜇 and a finite variance 𝜎2. Then

√𝑛 ( (Σᵢ₌₁ⁿ 𝑥𝑖) / 𝑛 − 𝜇 ) → 𝒩(0, 𝜎²)  as 𝑛 → ∞

Hint: the proof uses the moment generating function 𝑚𝑋(𝑡) = 𝔼(𝑒^(𝑋𝑡)), please Google it!
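The CLT can be checked empirically. In this Python sketch (the module uses R; the uniform population and the sizes chosen are illustrative assumptions), even though the population is non-normal, √𝑛(𝑥̄ − 𝜇) behaves like 𝒩(0, 𝜎²):

```python
import random
import statistics

random.seed(1)

# Population: uniform on [0, 1], so mu = 0.5 and sigma^2 = 1/12.
# By the CLT, sqrt(n) * (sample mean - mu) is approximately N(0, 1/12).
mu, n, reps = 0.5, 200, 3000
z = [
    (n ** 0.5) * (statistics.mean(random.random() for _ in range(n)) - mu)
    for _ in range(reps)
]

print(statistics.mean(z))      # close to 0
print(statistics.variance(z))  # close to 1/12 = 0.0833...
```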

Unbiased Estimator

Let 𝑋 be a random variable with PDF 𝑓(𝑥; 𝜃), and let 𝑥1, ⋯ , 𝑥𝑛 be a random sample from the distribution of 𝑋. A statistic 𝜃̂ is an unbiased estimator of 𝜃 if

𝔼(𝜃̂) = 𝜃

If 𝜃̂ is not unbiased, we say that 𝜃̂ is a biased estimator of 𝜃, with

𝐵𝑖𝑎𝑠(𝜃̂) = 𝔼(𝜃̂) − 𝜃

Question

We create a sample 𝑥 in R. The standard deviation of 𝑥 reported by R's sd() differs from the value obtained using the definition of the (population) standard deviation:

𝜎 = √( (1/𝑛) Σᵢ₌₁ⁿ (𝑥𝑖 − 𝑥̄)² ) = 2.312345

Why? Hint: think about the unbiased estimator for the population variance.
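The resolution is that R's sd() divides the sum of squared deviations by 𝑛 − 1 (the unbiased estimator of the population variance) rather than by 𝑛. A Python sketch of the two formulas (the sample here is hypothetical, since the lecture's data is not shown):

```python
import statistics

x = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample, not the lecture's data
n = len(x)
mean = sum(x) / n

# Population standard deviation: divide by n
sigma = (sum((xi - mean) ** 2 for xi in x) / n) ** 0.5

# Sample standard deviation (what R's sd() computes): divide by n - 1
s = (sum((xi - mean) ** 2 for xi in x) / (n - 1)) ** 0.5

print(sigma)  # 2.0
print(s)      # larger than sigma; equals statistics.stdev(x)
```

The two values converge as 𝑛 grows, but for small samples the 𝑛 − 1 version is noticeably larger.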

What is Hypothesis Testing?

In statistics, a hypothesis is a statement about a population parameter. Hypothesis testing is the procedure used to decide whether such a statement is correct.

Example:

The average starting salary for computer science undergraduates is £30,000 per year in the UK. The average starting salary (the population mean 𝜇) is the parameter.

Null Hypothesis and Alternative Hypothesis

Null Hypothesis 𝑯𝟎

A statement regarding the value of the population parameter, denoted by 𝜃. If we denote the null hypothesis space by Θ0, then 𝜃 ∈ Θ0.

Example: denote the average starting salary by 𝜃, with 𝜃0 = £30,000.

• If 𝐻0: 𝜃 = 𝜃0, then Θ0 = {𝜃0}.

• If 𝐻0: 𝜃 ≥ 𝜃0, then Θ0 = [𝜃0, ∞).

Alternative Hypothesis 𝑯𝟏

A statement contradictory to the null hypothesis: 𝜃 ∈ Θ1, where Θ0 ∪ Θ1 = Θ and Θ is the parameter space.

Example: for the average starting salary case, the parameter space is Θ = [0, ∞).

• If 𝐻0: 𝜃 = 𝜃0, then Θ1 = [0, 𝜃0) ∪ (𝜃0, ∞).

• If 𝐻0: 𝜃 ≥ 𝜃0, then Θ1 = [0, 𝜃0).

The goal in hypothesis testing is to decide which one of

the two hypotheses (null and alternative) is true!


Simple Hypothesis and Composite Hypothesis

When a hypothesis uniquely specifies the distribution of the population from which the sample is taken, the hypothesis is said to be simple; in a simple hypothesis, Θ0 is composed of a single element. Otherwise, the hypothesis is said to be composite.

Example:

For a Bernoulli distribution 𝐵𝑒𝑟 𝑝 :

• If 𝐻0: 𝑝 = 0.4, the null hypothesis is simple since 𝑝 = 0.4 uniquely specifies
the distribution as 𝐵𝑒𝑟(0.4).

• If 𝐻0: 𝑝 < 0.4, the hypothesis is composite because 𝑝 can take any value in the interval [0, 0.4).

One-Tailed Test and Two-Tailed Test

For the null hypothesis 𝐻0: 𝜃 = 𝜃0, the alternative hypothesis can take one of three forms:

• 𝐻1: 𝜃 < 𝜃0 (lower one-sided)

• 𝐻1: 𝜃 > 𝜃0 (upper one-sided)

• 𝐻1: 𝜃 ≠ 𝜃0 (two-sided)

Decision Outcomes

The decision is correct:

• If the null hypothesis is true and it is accepted.

• If the null hypothesis is false and it is rejected.

The decision one reaches using a hypothesis test is always subject to error:

• If the null hypothesis is true but it is rejected, this is called a type I error.

• If the null hypothesis is false but it is not rejected, this is called a type II error.

Type I Error

The probability of committing a type I error, i.e. rejecting 𝐻0 when it is true, is
called the level of significance (also the size of test) for a hypothesis test,

denoted by 𝛼.

Mathematically, it can be expressed as

𝛼 = ℙ(type I error) = ℙ(reject 𝐻0 ∣ 𝐻0 is true) = ℙ(accept 𝐻1 ∣ 𝐻0 is true)

The level of confidence is defined as 1 − 𝛼.

Type II Error

The probability of committing a type II error, i.e. accepting 𝐻0 when it is false, is
denoted by 𝛽.

Mathematically, it can be expressed as

𝛽 = ℙ(type II error) = ℙ(accept 𝐻0 ∣ 𝐻0 is false) = ℙ(fail to reject 𝐻0 ∣ 𝐻0 is false)

The power of the test, denoted by 𝑃𝑜𝑤𝑒𝑟(𝜃), is defined by

𝑃𝑜𝑤𝑒𝑟(𝜃) = ℙ(reject 𝐻0 ∣ 𝐻0 is false) = 1 − 𝛽
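Both error probabilities can be estimated by simulation. This Python sketch (the module uses R; the test setup 𝐻0: 𝜇 = 0, 𝜎 = 1, 𝑛 = 25, and the true mean 0.5 under 𝐻1 are all illustrative assumptions) runs a one-sided z-test many times under each hypothesis:

```python
import random
import statistics

random.seed(42)

# Monte Carlo estimate of alpha and power for a one-sided z-test of
# H0: mu = 0 vs H1: mu > 0, with known sigma = 1, n = 25, alpha = 0.05
n, sigma, z_crit, reps = 25, 1.0, 1.645, 4000

def reject(true_mu):
    # One simulated test: reject H0 when z_obs exceeds the critical value
    xbar = statistics.mean(random.gauss(true_mu, sigma) for _ in range(n))
    z_obs = (xbar - 0.0) / (sigma / n ** 0.5)
    return z_obs > z_crit

alpha_hat = sum(reject(0.0) for _ in range(reps)) / reps  # estimates alpha
power_hat = sum(reject(0.5) for _ in range(reps)) / reps  # estimates 1 - beta

print(alpha_hat)  # near 0.05
print(power_hat)  # near 0.80 for these settings
```

Note the trade-off: lowering 𝛼 (a larger critical value) reduces type I errors but also reduces the power 1 − 𝛽.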

Which Error Is More Serious?

If the null hypothesis 𝐻0 is true:

• Reject 𝐻0: type I error (false positive), with ℙ(type I error) = 𝛼, the level of significance.

• Fail to reject 𝐻0: correct decision (true negative), with ℙ(accept 𝐻0 ∣ 𝐻0) = 1 − 𝛼.

If the null hypothesis 𝐻0 is false:

• Reject 𝐻0: correct decision (true positive), with ℙ(accept 𝐻1 ∣ 𝐻1) = 1 − 𝛽, the power of the test.

• Fail to reject 𝐻0: type II error (false negative), with ℙ(type II error) = 𝛽.

Tests of Significance

Step I: Hypotheses

– State the null and alternative hypotheses, i.e., 𝐻0 and 𝐻1

Step II: Test Statistic

– Select an appropriate test statistic and determine its sampling distribution under the assumption that 𝐻0 is true

Step III: Rejection Region Calculations

– By hand: use the specified 𝛼 level to compute the critical value and determine the rejection region for the standardised test statistic

– By computer: skipped in this lecture

Step IV: Statistical Conclusion

– State whether 𝐻0 is rejected or not

Critical Region

Consider a set 𝐶𝑎. If the sample statistic lies in 𝐶𝑎, then we reject the null hypothesis. The set 𝐶𝑎 is called the critical region or rejection region of the test.

Mathematically, it can be expressed as

ℙ(𝑋 ∈ 𝐶𝑎 ∣ 𝜃 = 𝜃0) = 𝛼

Example

A random sample of size 𝑛 = 30 is taken from a distribution known to be 𝒩(𝜇, 2²). If Σᵢ₌₁³⁰ 𝑥𝑖 = 56, test 𝐻0: 𝜇 = 1.8 versus 𝐻1: 𝜇 > 1.8 at the 𝛼 = 0.05 significance level.

Solution (1/2)

Step I: Hypotheses

𝐻0: 𝜇 = 1.8 versus 𝐻1: 𝜇 > 1.8

Step II: Test Statistic

– 𝑥̄ = (Σᵢ₌₁³⁰ 𝑥𝑖) / 30 = 56/30 = 1.867

– 𝑧obs = (𝑥̄ − 𝜇0) / (𝜎/√𝑛) ∼ 𝒩(0, 1), by the Central Limit Theorem
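As a cross-check of the arithmetic in Steps II and III, the quantities can be computed directly (a Python sketch, though the module's workshops use R):

```python
import math

# Numbers from the worked example: n = 30, sigma = 2, sum of x_i = 56,
# testing H0: mu = 1.8 vs H1: mu > 1.8 at alpha = 0.05
n, sigma, total, mu0 = 30, 2.0, 56.0, 1.8

xbar = total / n                              # sample mean
z_obs = (xbar - mu0) / (sigma / math.sqrt(n)) # standardised test statistic
z_crit = 1.64                                 # z_{0.95}, from the slides

print(round(xbar, 3))   # 1.867
print(round(z_obs, 3))  # 0.183
print(z_obs > z_crit)   # False: fail to reject H0
```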

Solution (2/2)

Step III: Rejection Region Calculations

𝑧obs = (𝑥̄ − 𝜇0) / (𝜎/√𝑛) = (1.867 − 1.8) / (2/√30) = 0.183

Rejection region: 𝑧 > 𝑧1−𝛼 = 𝑧0.95 = 1.64

Step IV: Statistical Conclusion

Fail to reject 𝐻0 because 𝑧obs = 0.183 < 𝑧0.95 = 1.64

Several Topics For Your Self-Directed Study

Please read Casella's book if you are interested! Not required for the assessment!

• Likelihood ratio test (LRT)

• 𝑝-value

• Power function

• Hypothesis tests for population means

• Hypothesis tests for population variances

References

• G. Casella and R. Berger (2002) Statistical Inference. Chapter 8

Thank You!

bchen@Lincoln.ac.uk