程序代写代做代考 Lecture 26 (April 25, 2019)¶

Lecture 26 (April 25, 2019)¶
In today’s lecture, we discuss how to use control variates for variance reduction
In [0]:
import numpy as np
import matplotlib.pyplot as plt
import math
import scipy.special as spsp

import scipy.stats as spst

Introduction to control variates¶
Again, our task is to reduce the variance of the sample mean of $x$.
Suppose we can easily generate a variable $z$ that with a known mean $\mu_z$ and correlated with x.
Can we use $z$ to help us achieve variance reduction?
Proposal: instead of using $x$, let’s use $x+c(z-\mu_z)$, where $c$ is a constant.
$E(\overline{x+c(z-\mu_z)})=E(\bar{x})+cE(\bar{z})-cE(\mu_z)=E(\bar{x})+c\mu_z-c\mu_z=\mu_x$
$Var(\overline{x+c(z-\mu_z)})=Var(\bar{x})+c^2Var(\bar{z})+2c Cov(\bar{x},\bar{z})$
We can check that this variance is minimized when $c=-\frac{Cov(\bar{x},\bar{z})}{Var(\bar{z})}=-\frac{Cov(x,z)}{Var(z)}$.
It is easy to observe that the variance reduces to $Var(\bar{x})$ when $c=0$ [In this case, $x$ and $z$ are not correlated]. However, if we can have $Cov(x,z)\neq 0$, we will be able to achieve variance reduction.
Thus, there are three key points we are looking for:
1. $z$ is easy to generate

2. $E(z)$ has a closed form solution

3. $z$ and $x$ are correlated.


Process of variance reduction using one control variate¶
Step 1: Generate a set of $x$ samples and also a set of $z$ samples.
Step 2: estimate $c=-\frac{Cov(x,z)}{ Var(z)}$ based on the samples we have
Step 3: Generate $y$ samples usign $x+c(z-\mu_z)$
Step 4: Construct the confidence interval using the $y$ samples

Practice¶
Assume that we are interested in constructing the confidence interval for the population mean of an exponential distribution with $\lambda=3$.
In [0]:
#Original
u=np.random.rand(1000)
x=-1/3*np.log(u)

print(np.mean(x)-1.96*np.std(x,ddof=1)/1000**0.5,np.mean(x)+1.96*np.std(x,ddof=1)/1000**0.5 )
print(np.std(x,ddof=1)/1000**0.5 )

#control
c=-np.cov(x,u,ddof=1)[0,1]/np.var(u,ddof=1)
y=x+c*(u-0.5)

print(np.mean(y)-1.96*np.std(y,ddof=1)/1000**0.5,np.mean(y)+1.96*np.std(y,ddof=1)/1000**0.5 )
print(np.std(y,ddof=1)/1000**0.5 )

#antithetic

u=u[0:500]
x1=-1/3*np.log(u)
x2=-1/3*np.log(1-u)
y=(x1+x2)/2

print(np.mean(y)-1.96*np.std(y,ddof=1)/500**0.5,np.mean(y)+1.96*np.std(y,ddof=1)/500**0.5 )
print(np.std(y,ddof=1)/500**0.5 )

0.2940677579140698 0.3344583114382177
0.010303712633711184
0.32750303758218186 0.3468791126051474
0.004942876281368762
0.3265268127634493 0.3509599058236399
0.006232931903109853

Applying the antithetic method to discrete event system simulation.¶
Let’s assume that we are still interested in constructing the confidence interval for the system output for the following system:
Simulate the single queueing system. Assume that the system will only take the first 20 customers.
Assume
• the inter-arrival time follows $t_a \sim Exp(\lambda=10)$
• the service time follows $t_s \sim Exp(\lambda=11)$
We use the system to produce the average time consumers spent in the system.
How do we find the variables that are correlated with the average time?
Many options:
Average inter-arrival time: This should be negatively correlated with the system output. The expected value should be equal to $\frac{1}{10}$. Also, this value can be easily computed.
Average service time: This should be positively correlated with the system output. The expected value should be equal to $\frac{1}{11}$. Also, this value can be easily computed.

A Queueing system without variance reduction¶
In [0]:
def single_server():

#initialize clock
t=0

#initialize the states
n_current=0
#total number of arrivals
N_arrived=0
#total number of departures
N_departed=0
#generate the pending event times

##arrival time
lmbda_a=10
lmbda_s=11
#size of arrivals=20
size=20
Arrival_times= np.cumsum(-1/lmbda_a*np.log(np.random.rand(size)))
Service_times=-1/lmbda_s*np.log(np.random.rand(size))

t_arrival=Arrival_times[0]

##departure time
t_depart=np.inf

#Collecting system output (history)

##array to store Arrival times
#Arrival_times=np.array([])

##array to store Departure times
Depart_times=np.array([])

while(t0 and N_departed<20: t_depart=t+Service_times[N_departed] else: t_depart=np.inf return np.mean(Depart_times-Arrival_times) ##when simulation stops, we compute the average time in the system ATs=[single_server() for i in range(2000)] print(np.mean(ATs), np.std(ATs, ddof=1)/2000**0.5) 0.2767339500836288 0.0037003334195892545 In [0]: def single_server(Arrival_times, Service_times): #initialize clock t=0 #initialize the states n_current=0 #total number of arrivals N_arrived=0 #total number of departures N_departed=0 #generate the pending event times ##arrival time lmbda_a=10 lmbda_s=11 #size of arrivals=20 size=20 Arrival_times= np.cumsum(-1/lmbda_a*np.log(np.random.rand(size))) Service_times=-1/lmbda_s*np.log(np.random.rand(size)) t_arrival=Arrival_times[0] ##departure time t_depart=np.inf #Collecting system output (history) ##array to store Arrival times #Arrival_times=np.array([]) ##array to store Departure times Depart_times=np.array([]) while(t0 and N_departed<20: t_depart=t+Service_times[N_departed] else: t_depart=np.inf return np.mean(Depart_times-Arrival_times) ##when simulation stops, we compute the average time in the system In [0]: ## apply antithetic to only service time def antithetic(): lmbda_a=10 lmbda_s=11 size=20 U_a=np.random.rand(size) U_s=np.random.rand(size) Arrival_times1=np.cumsum(-1/lmbda_a*np.log(U_a)) Service_times1=-1/lmbda_s*np.log(U_s) Output1=single_server(Arrival_times1, Service_times1) Arrival_times2=np.cumsum(-1/lmbda_a*np.log(1-U_a)) Service_times2=-1/lmbda_s*np.log(1-U_s) Output2=single_server(Arrival_times2, Service_times2) return (Output1+Output2)/2 NewATs=[antithetic() for i in range(1000)] print(np.mean(NewATs), np.std(NewATs, ddof=1)/1000**0.5) 0.2791252750662072 0.003766719567639211 In [0]: Applying control variates.¶ In [0]: def single_server(): #initialize clock t=0 #initialize the states n_current=0 #total number of arrivals N_arrived=0 #total number of departures N_departed=0 #generate the pending event times ##arrival time lmbda_a=10 lmbda_s=11 #size of arrivals=20 size=20 IATs=-1/lmbda_a*np.log(np.random.rand(size)) Arrival_times= np.cumsum(IATs) Service_times=-1/lmbda_s*np.log(np.random.rand(size)) t_arrival=Arrival_times[0] ##departure time t_depart=np.inf #Collecting system output (history) ##array to store Arrival times #Arrival_times=np.array([]) ##array to store Departure times Depart_times=np.array([]) while(t0 and N_departed<20: t_depart=t+Service_times[N_departed] else: t_depart=np.inf return np.mean(Depart_times-Arrival_times), np.mean(Service_times), np.mean(IATs) ##when simulation stops, we compute the average time in the system returns=[single_server() for i in range(2000)] #print(np.mean(ATs), np.std(ATs, ddof=1)/2000**0.5) ATs=np.array([item[0] for item in returns]) Control1=np.array([item[1] for item in returns]) Control2=np.array([item[2] for item in returns]) c1=-np.cov(ATs, Control1,ddof=1)[0,1]/np.var(Control1,ddof=1) lmbda_s=11 y1=ATs+c1*(Control1-1/lmbda_s) c2=-np.cov(ATs, Control2,ddof=1)[0,1]/np.var(Control2,ddof=1) lmbda_a=10 y2=ATs+c2*(Control2-1/lmbda_a) y3=ATs+c1*(Control1-1/lmbda_s)+c2*(Control2-1/lmbda_a) print(np.mean(ATs), np.std(ATs, ddof=1)/2000**0.5) print(np.mean(y1), np.std(y1, ddof=1)/2000**0.5) print(np.mean(y2), np.std(y2, ddof=1)/2000**0.5) print(np.mean(y3), np.std(y3, ddof=1)/2000**0.5) 0.2816475902333196 0.00387507066217033 0.2801375271132942 0.0025586695557210503 0.28319974505229073 0.00363079219058182 0.2816896819322654 0.002103822686361213 Process of variance reduction using several independent control variates¶ Sometimes, we might have several variables that are correlated with the system output. If these variables are independent from one another, we can use the following method [Proof is easy. You simply will need to verify that this will lead to the same expected value but a lower variance for the sample mean] Let's assume we have 3 variables $z'$, $z''$, $z'''$. $z'$, $z''$ and $z'''$ are independent from one another and are all correlated with $x$ . Step 1: Generate all of the x samples (our system output), also generate a set of $z'$, a set of $z''$ and a set of $z'''$ (our control variates). Step 2: estimate $c'=-\frac{Cov(x,z')}{ Var(z')}, c''=-\frac{Cov(x,z'')}{ Var(z'')}, c'''=-\frac{Cov(x,z''')}{ Var(z''')}$ Step 3: Generate $y=x+c'(z'-\mu_{z'})+c''(z''-\mu_{z''})+c'''(z'''-\mu_{z'''})$ Step 4: Construct the confidence interval using $y_1, y_2, \dots, y_n$