
Simulation and intro to random
processes CompSci 369, 2022

School of Computer Science, University of Auckland


Last lecture
Modeling systems
Likelihood
Inference via maximum likelihood
A few words on Bayesian inference
Intro to simulation

This lecture
Intro to simulation (review)
Pseudo RNGs
Sampling from finite discrete distributions
Inversion method for sampling from continuous distributions
Intro to random processes
The random walk

Why simulate?
Gain an intuitive understanding of how a process operates
Distribution or properties of distribution often analytically unavailable
e.g. in Bayesian inference, the posterior distribution involves an integral which is, typically, impossible to calculate exactly.
Even when the distribution is available, finding some property of it, such as an expectation, involves another large integral.
Integration is hard!

Why simulate?
The basic idea is to obtain samples of values from the distribution of interest and use this sample to estimate properties of the distribution.
Example: The mean of the distribution π(θ) is
E[θ] = ∫_Ω θ π(θ) dθ.
Given samples {θ^(1), θ^(2), θ^(3), …, θ^(n)} from π(θ), this can be approximated by
E[θ] ≈ θ̄ = (1/n) ∑_{i=1}^{n} θ^(i).
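The Monte Carlo average above can be sketched in Python. This is a minimal illustration, not the lecture's own example: since π(θ) is unspecified, an Exp(1/3) distribution (true mean 3) stands in for it here.

```python
import random

# Monte Carlo estimate of E[theta]: draw n samples theta^(1)..theta^(n)
# from the distribution of interest and average them.
# Assumption: Exp(1/3) is used as a stand-in for pi(theta); its true mean is 3.
random.seed(1)
n = 100_000
samples = [random.expovariate(1 / 3) for _ in range(n)]
theta_bar = sum(samples) / n  # approximates E[theta] = 3
print(theta_bar)
```

With n = 100,000 samples the estimate is typically within a few hundredths of the true mean.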

Suppose θ is a univariate random variable.
Suppose we can simulate values of θ but cannot find the distribution function analytically.
How could we study it?
Let's draw 1000 samples, {θ^(1), θ^(2), θ^(3), …, θ^(1000)}.
The first 20 look like 3.55, 1.038, 1.741, 8.778, 2.333, 4.182, 0.694, 4.579, 7.03, 1.338, 2.504, 3.069, 2.358, 3.304, 0.328, 3.559, 1.952, 6.962, 1.888, 5.446 etc.

Now make a histogram to get an idea of the shape of the distribution:
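A histogram like the one on this slide can be tabulated by hand with a few lines of Python. This sketch again uses Exp(1/3) as a hypothetical stand-in for the unknown distribution, with unit-width bins:

```python
import random

# Tabulate a text histogram of 1000 simulated values.
# Assumption: random.expovariate(1/3) stands in for the unknown distribution.
random.seed(2)
samples = [random.expovariate(1 / 3) for _ in range(1000)]

bin_width = 1.0
counts = {}
for x in samples:
    b = int(x // bin_width)  # bin index for interval [b, b+1)
    counts[b] = counts.get(b, 0) + 1

for b in sorted(counts):
    print(f"[{b:2d},{b + 1:2d}): {'#' * (counts[b] // 10)}")
```

The bar lengths trace out the shape of the density, just as the plotted histogram does.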

Let’s approximate the mean of the distribution by the average of the sampled values:
E[θ] ≈ θ̄ = (1/1000) ∑_{i=1}^{1000} θ^(i)

How good are these estimates?
The sample is random, so estimates based on it will vary with different samples.
As the number of samples increases, the approximation gets better.
In general, if we want to estimate E[g(θ)] by ḡ(θ) = (1/n) ∑_{i=1}^{n} g(θ^(i)) using samples {θ^(1), θ^(2), θ^(3), …, θ^(n)}, then ḡ is approximately normally distributed with mean E[g] and variance var(ḡ) = var(g)/n.
When stated formally, this is known as a central limit theorem (CLT).
So with lots of samples, arbitrarily good approximations to quantities of interest can be made.
Interactive example of the CLT here
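The var(ḡ) = var(g)/n relationship can be checked empirically. This sketch (my own illustration, not from the slides) takes g(θ) = θ with θ ~ U(0, 1), so var(g) = 1/12 and means of n = 100 draws should have variance close to 1/1200:

```python
import random
import statistics

# Empirical check of var(g_bar) = var(g)/n.
# g(theta) = theta, theta ~ U(0,1): var(theta) = 1/12.
random.seed(3)
n, reps = 100, 2000
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

empirical = statistics.variance(means)
theory = (1 / 12) / n  # = 1/1200
print(empirical, theory)
```

With 2000 replicate means, the empirical variance lands within a few percent of the theoretical value.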

Simulation in silico
Simulation relies on a ready supply of random numbers.
Computers are, by design, deterministic.
So instead of using truly random numbers, use pseudo-random numbers.
These are indistinguishable from real random numbers for most purposes.
Since sequences of pseudo-random numbers are repeatable, they are often better than truly random numbers for scientific simulations.
Generate pseudo-random numbers u that are uniformly distributed between 0 and 1, i.e. u ∼ U(0, 1).
Then transform u to have the desired distribution.

Properties of a good pseudo-random number generator (RNG)
have a very long period so that no repetitions of the cycle occur
be efficient in memory and speed
repeatable so that simulations can be reproduced
portable between implementations
have efficient jump-ahead methods so that a number of virtual streams of random numbers can be created from a single stream for use in parallel computations; and,
have good statistical properties approximating the uniform distribution on [0, 1]

RNGs to use
We omit details of how RNGs actually work (see notes for a short description).
Important: Standard random number generators used as the default in many languages (e.g. java.util.Random) are not adequate for scientific simulation studies.
I recommend instead the Mersenne Twister, which is implemented in most languages. It is the default generator in Python.
In Python, to repeat a sequence of random bits, choose a seed a and call random.seed(a) to set the generator in the same state.
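Resetting the seed and replaying the sequence looks like this in Python (the seed value 369 is arbitrary):

```python
import random

# Setting the seed puts the Mersenne Twister generator in a known state,
# so the same pseudo-random sequence can be reproduced exactly.
a = 369  # arbitrary seed
random.seed(a)
first_run = [random.random() for _ in range(5)]

random.seed(a)  # reset the generator to the same state
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)  # -> True
```

This repeatability is what makes simulation studies reproducible and debuggable.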

Obtaining pseudo-random numbers from other distributions: finite discrete distribution
To sample from a discrete r.v. X with P(X = x_i) = p_i:
1: generate u ∼ U(0, 1)
2: if u < p_1, set X = x_1; otherwise if ∑_{j=1}^{i−1} p_j < u < ∑_{j=1}^{i} p_j, set X = x_i

Sampling from a finite discrete distribution: Example
Suppose x can take values A, B, C, D or E with the probabilities

value  probability
A      0.10
B      0.30
C      0.15
D      0.35
E      0.10

Suppose we draw u ∼ U(0, 1) and get 0.503. Then x takes the value C since
Pr(A) + Pr(B) = 0.1 + 0.3 = 0.4 ≤ u = 0.503 ≤ 0.55 = Pr(A) + Pr(B) + Pr(C)

Visualising sampling a discrete distribution
Drawing a uniform rv of 0.503 produces a C.

Visualising sampling a discrete distribution
So generate 20 realisations of the rv by generating 20 uniform rvs and transforming:

## [1] 0.443 0.560 0.908 0.297 0.683 0.769 0.697 0.650 0.216 0.143 0.145 0.547
## [13] 0.349 0.649 0.453 0.874 0.708 0.685 0.410 0.911

Produces: C, D, E, B, D, D, D, D, B, B, B, C, B, D, C, D, D, D, C, E

Obtaining pseudo-random numbers from other distributions: Inversion method
Suppose we want to simulate from X which has the cumulative distribution function (cdf) F. Remember, if X has probability density function f then the cdf is
F(x) = ∫_{−∞}^{x} f(t) dt

Result (Inversion method)
If U ∼ U(0, 1), then X = F^{−1}(U) produces a draw from F, where F^{−1} is the inverse of F.
So when the inverse of the cdf is known, sampling from the distribution is easy.

Inversion sampling example: exponential
If X ∼ Exp(λ) then F(x) = 1 − e^{−λx} and so
F^{−1}(u) = −log(1 − u)/λ
So generate an exponential rv by generating the uniform rv u and setting x = −log(1 − u)/λ.

So to draw 20 exponential rvs, draw 20 uniform rvs and transform them:
u: 0.121, 0.121, 0.177, 0.224, 0.323, 0.346, 0.387, 0.466, 0.476, 0.503, 0.55, 0.576, 0.634, 0.713, 0.736, 0.828, 0.846, 0.896, 0.946, 1
x: 0.13, 0.13, 0.2, 0.25, 0.39, 0.43, 0.49, 0.63, 0.65, 0.7, 0.8, 0.86, 1, 1.25, 1.33, 1.76, 1.87, 2.27, 2.91, 9.64

Random processes
Random processes are sequences of random variables: X = {X_t : t ∈ T} where T is some index set.
X is a discrete time process when T is discrete (e.g. T = {0, 1, 2, 3, …}).
X is a continuous time process when T is continuous (e.g. T = [0, ∞)).
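The two sampling recipes above, discrete table lookup and inversion, can be sketched in a few lines of Python. The function names here are my own; the logic follows the slides' algorithms, including the A–E example with u = 0.503:

```python
import math
import random

# Finite discrete sampling: walk the cumulative probabilities until the
# uniform draw u falls into the interval belonging to x_i.
def sample_discrete(values, probs, u):
    cum = 0.0
    for x, p in zip(values, probs):
        cum += p
        if u < cum:
            return x
    return values[-1]  # guard against floating-point rounding when u is near 1

# Inversion sampling for Exp(lam): F(x) = 1 - exp(-lam * x),
# so x = F^{-1}(u) = -log(1 - u) / lam.
def sample_exponential(lam, u):
    return -math.log(1 - u) / lam

# The worked example from the slides: u = 0.503 picks C.
values = ["A", "B", "C", "D", "E"]
probs = [0.10, 0.30, 0.15, 0.35, 0.10]
print(sample_discrete(values, probs, 0.503))  # -> C
print(sample_exponential(1.0, 0.503))         # ~ 0.699
```

In practice u would come from random.random() rather than being fixed; fixing it here just reproduces the slide's worked example.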
Random Walk
Also known as the drunkard's walk.
X_i records the position of a drunk after i steps. Each step is either to the left or the right with equal probability.
Let X_0 = 0 and X_{i+1} = X_i + S_i where
S_i = +1 with probability 1/2,
      −1 with probability 1/2.
Equivalently, X_i = X_0 + ∑_{j=0}^{i−1} S_j.
Simple enough that it can be analysed analytically, but also very easy to simulate.

Example: random walk
nStep = 50
step = sample(c(-1, 1), size = nStep, replace = T)  # sample step direction
walk = cumsum(c(0, step))  # sum steps
plotStep(nStep, walk)  # plot

While it is typically easier to view these things graphically, we can write down the random process as
{0, -1, 0, 1, 0, -1, -2, -3, -2, -1, 0, 1, 0, 1, 2, 1, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -8, -7, -8, -9, -8, -7, -8, -9, -8, -9, -10, -11, -12, -11, -10, -11, -12, -11, -12, -13, -12, -13, -12, ...}

Longer walk
nStep = 500
step = sample(c(-1, 1), size = nStep, replace = T)
walk = cumsum(c(0, step))
plotStep(nStep, walk)

Still longer walk
nStep = 10000
step = sample(c(-1, 1), size = nStep, replace = T)
walk = cumsum(c(0, step))
plotStep(nStep, walk)

Add in a bias: Prob of +1 step now 0.51
nStep = 10000
step = sample(c(-1, 1), prob = c(0.49, 0.51), size = nStep, replace = T)
walk = cumsum(c(0, step))
plotStep(nStep, walk, "Biased random walk with 10000 steps")
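The R snippets above translate directly to Python. This sketch (function name and seed are my own choices) covers both the unbiased walk and the biased p = 0.51 variant via the p_up parameter:

```python
import random

# Simulate the simple random walk: X_0 = 0, X_{i+1} = X_i + S_i
# with S_i = +1 or -1. p_up = 0.5 gives the unbiased walk;
# p_up = 0.51 reproduces the biased variant from the slides.
def random_walk(n_steps, p_up=0.5, seed=None):
    rng = random.Random(seed)
    walk = [0]
    for _ in range(n_steps):
        step = 1 if rng.random() < p_up else -1
        walk.append(walk[-1] + step)
    return walk

walk = random_walk(50, seed=369)
print(walk[:10])
```

Plotting walk against step index with any plotting library gives the pictures shown on the slides; longer walks just need a larger n_steps.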