
Exploring and Transforming Data

Contents

I. Univariate Characterizations
  I.0 Statistical Summary
  I.1 Histograms
  I.2 Density Estimation
  I.3 Quantile Plots
  I.4 Boxplots
II. Bivariate Characterizations
  II.1 Scatterplots
  II.2 Jittering Scatterplots
III. Transforming Data
  III.1 Logarithms
  III.2 Power Transformations
  III.3 Transforming Restricted-Range Variables
  III.4 Transformations to Equalize Spread
  III.5 Transformations Toward Linearity
Note: To access your textbook resources, type the following in the console:
I. Univariate Characterizations
In a regression setting we start by looking at a univariate characterization of the data. This entails examining the statistical summaries, features, and distributions of all the variables we have. Among the popular univariate graphical tools, we will consider histograms, density plots, quantile-quantile (QQ) plots, and boxplots.
I.0 Statistical Summary
Looking at the summary statistics of our data allows us to identify potential issues we may encounter when building our regression model. For example, there may be NAs, values outside the expected range, etc. Below are two examples using the pastecs and psych libraries.
#library(car)
#carWeb()
library(pastecs)
# Summary for 1 variable:
stat.desc(rnorm(100), norm=TRUE)
##       nbr.val      nbr.null        nbr.na           min           max         range
##  100.00000000    0.00000000    0.00000000   -3.42766916    1.87611647    5.30378564
##           sum        median          mean       SE.mean  CI.mean.0.95           var
##   -1.55178895    0.07376450   -0.01551789    0.09085050    0.18026711    0.82538137
##       std.dev      coef.var      skewness      skew.2SE      kurtosis      kurt.2SE
##    0.90850502  -58.54565602   -0.52381391   -1.08504098    0.67034591    0.70071323
##    normtest.W    normtest.p
##    0.97558189    0.05992554
# Summary for an entire data frame:
library(car)
attach(Prestige)
stat.desc(Prestige, norm=TRUE)
##                 education       income         women     prestige
## nbr.val      1.020000e+02 1.020000e+02  1.020000e+02  102.0000000
## nbr.null     0.000000e+00 0.000000e+00  5.000000e+00    0.0000000
## nbr.na       0.000000e+00 0.000000e+00  0.000000e+00    0.0000000
## min          6.380000e+00 6.110000e+02  0.000000e+00   14.8000000
## max          1.597000e+01 2.587900e+04  9.751000e+01   87.2000000
## range        9.590000e+00 2.526800e+04  9.751000e+01   72.4000000
## sum          1.095280e+03 6.933860e+05  2.955860e+03 4777.0000000
## median       1.054000e+01 5.930500e+03  1.360000e+01   43.6000000
## mean         1.073804e+01 6.797902e+03  2.897902e+01   46.8333333
## SE.mean      2.701562e-01 4.204089e+02  3.141236e+00    1.7034979
## CI.mean.0.95 5.359173e-01 8.339783e+02  6.231368e+00    3.3792816
## var          7.444408e+00 1.802786e+07  1.006471e+03  295.9943234
## std.dev      2.728444e+00 4.245922e+03  3.172493e+01   17.2044856
## coef.var     2.540915e-01 6.245930e-01  1.094755e+00    0.3673556
## skewness     3.247507e-01 2.129019e+00  8.987765e-01    0.3287011
## skew.2SE     6.791988e-01 4.452731e+00  1.879744e+00    0.6874610
## kurtosis    -1.028405e+00 6.287745e+00 -6.757897e-01   -0.7925793
## kurt.2SE    -1.085203e+00 6.635016e+00 -7.131135e-01   -0.8363533
## normtest.W   9.495767e-01 8.150513e-01  8.157943e-01    0.9719786
## normtest.p   6.772830e-04 5.633526e-10  5.956847e-10    0.0287498
##                    census type
## nbr.val      1.020000e+02   NA
## nbr.null     0.000000e+00   NA
## nbr.na       0.000000e+00   NA
## min          1.113000e+03   NA
## max          9.517000e+03   NA
## range        8.404000e+03   NA
## sum          5.509810e+05   NA
## median       5.135000e+03   NA
## mean         5.401775e+03   NA
## SE.mean      2.618934e+02   NA
## CI.mean.0.95 5.195260e+02   NA
## var          6.995989e+06   NA
## std.dev      2.644993e+03   NA
## coef.var     4.896527e-01   NA
## skewness     1.116090e-01   NA
## skew.2SE     2.334243e-01   NA
## kurtosis    -1.490291e+00   NA
## kurt.2SE    -1.572599e+00   NA
## normtest.W   9.008953e-01   NA
## normtest.p   1.270617e-06   NA
library(psych)
describe(Prestige)
##           vars   n    mean      sd  median trimmed     mad     min      max
## education    1 102   10.74    2.73   10.54   10.63    3.15    6.38    15.97
## income       2 102 6797.90 4245.92 5930.50 6161.49 3060.83  611.00 25879.00
## women        3 102   28.98   31.72   13.60   24.74   18.73    0.00    97.51
## prestige     4 102   46.83   17.20   43.60   46.20   19.20   14.80    87.20
## census       5 102 5401.77 2644.99 5135.00 5393.87 4097.91 1113.00  9517.00
## type*        6  98    1.79    0.80    2.00    1.74    1.48    1.00     3.00
##              range skew kurtosis     se
## education     9.59 0.32    -1.03   0.27
## income    25268.00 2.13     6.29 420.41
## women        97.51 0.90    -0.68   3.14
## prestige     72.40 0.33    -0.79   1.70
## census     8404.00 0.11    -1.49 261.89
## type*         2.00 0.40    -1.36   0.08

# Note: The flag norm=TRUE in stat.desc allows us to also include normal
# distribution statistics such as skewness, kurtosis, etc.

I.1 Histograms
Histograms should be one of our first analysis tools since they can help us learn about each variable's properties. Things to look for:
• Outliers: these can influence the regression fit in many ways.
• Shape: is the distribution symmetric, skewed, etc.?
• Range: is the range of values very large? For example, does it span several orders of magnitude?
• Density of observations and/or clustering features: are the data all clustered in certain regions? In high-density regions your regression fit may be robust but then become unstable in less dense ones.
A near-optimal number of bins, based on the number of observations $n$, is given by Sturges' rule:

$$k = 1 + \log_2(n)$$

However, more sample-specific formulas exist, such as the Freedman-Diaconis (FD) rule:

$$k_{FD} = \frac{n^{1/3}(\max - \min)}{2(Q_3 - Q_1)}$$
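As a quick check, here is a minimal sketch computing both rules by hand for a single sample; nclass.FD() is the built-in equivalent that hist(..., breaks = "FD") calls internally.

# Compute Sturges' k and the FD bin count directly from the formulas above
x <- rexp(1000, 0.5)
k  <- 1 + log2(length(x))                              # Sturges' rule
fd <- length(x)^(1/3) * diff(range(x)) / (2 * IQR(x))  # Freedman-Diaconis rule
c(sturges = k, fd = fd, nclass.FD = nclass.FD(x))      # compare with R's built-in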
For example, we can compare the two methods for 2 different data sets:
# Compare histograms using k vs FD:
# quartz()
par(mfrow=c(2,2))
n = 1000; k = 1 + log2(n)
hist(rnorm(n), breaks = k, col="skyblue4", main = "Normal Distr. Using k")
hist(rnorm(n), breaks = "FD", col="skyblue4", main = "Normal Distr. Using FD")
hist(rexp(n,0.5), breaks = k, col="skyblue2", main = "Exp Distr. Using k")
hist(rexp(n,0.5), breaks = "FD", col="skyblue2", main = "Exp Distr. Using FD")
[Figure: 2x2 panel of histograms comparing Sturges' k (left) vs FD (right) bin choices for Normal (top) and Exponential (bottom) samples]

I.2 Density Estimation
Histograms are very informative, but visualizing the density estimate along with the histogram and a 1D scatterplot (rug plot) can be even more informative. A common nonparametric kernel-density estimate is given by:

$$\hat{p}(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)$$

where $K$ is a kernel function (e.g., the Normal density), $h$ is the bandwidth (amount of smoothness), and $n$ the number of observations.
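To make the formula concrete, here is a minimal sketch of a hand-rolled Gaussian-kernel estimate, using the same default bandwidth rule (bw.nrd0) that density() uses; the two curves should essentially coincide.

x <- rexp(500, 0.5)
h <- bw.nrd0(x)                        # default bandwidth used by density()
grid <- seq(min(x), max(x), length = 200)
phat <- sapply(grid, function(x0) mean(dnorm((x0 - x)/h)) / h)  # (1/nh) sum K((x0-xi)/h)
plot(grid, phat, type = "l", ylab = "Density")
lines(density(x), col = "red", lty = 2)  # built-in estimate for comparison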
# quartz()
n = 1000
x = rexp(n, 0.5)  # draw the sample once so the histogram, density curves, and rug show the same data
hist(x, breaks = "FD", col = "skyblue2", freq = FALSE, ylab = "Density")
lines(density(x), lwd = 2, col = "red")
lines(density(x, adjust = 0.5), lwd = 2, col = "blue", lty = 2)
rug(x)
[Figure: density-scaled FD histogram of the exponential sample, with kernel density estimates at the default bandwidth (red) and half the default bandwidth (blue, dashed), plus a rug plot]
# You can also show the density with color
library(ggplot2)
data = data.frame(value=rnorm(10000))
ggplot(data, aes(x=value)) + geom_histogram(binwidth = 0.2, aes(fill = ..count..))
[Figure: ggplot2 histogram of 10000 Normal draws, with bars shaded by count]
I.3 Quantile Plots
These plots are very useful for comparing the distribution of a variable to a theoretical reference distribution. By default the Normal distribution is used in the function qqPlot (Quantile-Quantile), but we can provide any other distribution.
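For instance, a minimal sketch (assuming the car package is loaded) comparing a sample against a t reference instead of the Normal; qqPlot passes extra arguments such as df on to the corresponding quantile function:

x <- rt(200, df = 5)                   # heavy-tailed sample
qqPlot(x, distribution = "t", df = 5)  # compare against t(5) quantiles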
#The dataset Prestige from the car package has 102 observations
library(car)
attach(Prestige)
qqPlot(~ income, data=Prestige, id=list(n=3))
[Figure: Normal QQ plot of income with general.managers, physicians, and lawyers flagged]
## general.managers physicians lawyers
## 2 24 17
qqPlot(income ~ type, data=Prestige, layout=c(1, 3))
[Figure: Normal QQ plots of income within each level of type (bc, prof, wc); flagged points include firefighters, farm.workers, general.managers, physicians, commercial.travellers, and insurance.agents]

# Type = Professional (prof), White Collar (wc), Blue Collar (bc)
qqPlot(lm(prestige ~ income + education + type, data=Duncan),
envelope=.99)
[Figure: QQ plot of the studentized residuals from lm(prestige ~ income + education + type, data=Duncan) against t quantiles with a 99% envelope; minister and machinist flagged]

## minister machinist
##        6        28

I.4 Boxplots
Boxplots let us read off quantitative aspects of a variable's distribution such as the min, max, Q1, Q3, IQR, and median. Another helpful property is that they are very convenient for comparing many distributions simultaneously and/or conditioning on other variables.
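These quantities can also be read off numerically; for example:

quantile(Prestige$income)  # min, Q1, median, Q3, max
IQR(Prestige$income)       # interquartile range Q3 - Q1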
#Boxplot of Income
Boxplot(~income, data=Prestige)
[Figure: boxplot of income with outlying occupations labeled]

## [1] "general.managers"         "lawyers"
## [3] "physicians"               "veterinarians"
## [5] "osteopaths.chiropractors"
# Note: For more choices of boxplots, look at: https://www.r-graph-gallery.com/boxplot/
# These are known as Parallel Boxplots
Boxplot(Ornstein$interlocks~Ornstein$nation)
[Figure: parallel boxplots of interlocks by nation (CAN, OTH, UK, US) with outlying firms labeled]
## [1] “1” “2” “3” “5” “6” “9” “13” “27”
# Note: interlocks = Number of interlocking director and executive positions
# shared with other major firms.
library(“plotrix”)
means <- Tapply(interlocks ~ nation, mean, data=Ornstein)
sds <- Tapply(interlocks ~ nation, sd, data=Ornstein)
plotCI(1:4, means, sds, xaxt="n", xlab="Nation of Control",
       ylab="interlocks", ylim=range(Ornstein$interlocks))
lines(1:4, means)
axis(1, at=1:4, labels = names(means))

[Figure: mean interlocks by nation of control (CAN, OTH, UK, US) with one-standard-deviation error bars]

II. Bivariate Characterizations

Once we have a univariate description of the variables, we can examine the pairwise associations via, e.g., scatterplots.

II.1 Scatterplots

library(car)
attach(Prestige)
scatterplot(prestige ~ income, lwd=3, id=TRUE)

[Figure: prestige vs. income with linear fit and lowess smoother; observations 2 and 24 flagged]

## [1]  2 24

# Note: The lowess smoother is a 'locally weighted smoother'.
# We may want to condition on other variables, for example, on type:
scatterplot(prestige ~ income | type, lwd=3, col=c("blue", "green", "red"), id=TRUE)

[Figure: prestige vs. income conditioned on type (bc, prof, wc); flagged observations include 2, 24, 31, 51, 58, and 82]

## 31 58 51 82
## 1 2 3192425

# If we have several predictors in our model, it is more practical to
# simply look at a scatterplot matrix of the data. For example:
# scatterplotMatrix(~ prestige + income + education + women)
# We can also look at all the variables conditioned on type:
scatterplotMatrix(~ prestige + income + education + women | type, smooth=FALSE,
                  ellipse=list(levels=0.5))

[Figure: scatterplot matrix of prestige, income, education, and women, grouped by type, with 50% concentration ellipses]

# We can also look at 3D visualizations
scatter3d(prestige ~ income + education)

II.2 Jittering Scatterplots

Jittering the data (adding noise) helps separate overplotted observations:

plot(Vocab$vocabulary ~ Vocab$education)

[Figure: vocabulary vs. education; heavy overplotting hides the density of the points]

# Add some random noise to the variables --> jitter the data
plot(jitter(Vocab$vocabulary)~jitter(Vocab$education))
[Figure: jittered vocabulary vs. jittered education; the point density is now visible]
# Increase the amount of jitter
plot(jitter(Vocab$vocabulary, factor=2) ~ jitter(Vocab$education, factor=2), col="gray", cex=0.5)
abline(lm(Vocab$vocabulary ~ Vocab$education), lwd=3, lty="dashed")
lines(lowess(Vocab$education, Vocab$vocabulary, f=0.2), lwd=3, col="red")
[Figure: heavily jittered vocabulary vs. education with the least-squares line (dashed) and lowess smoother (red)]
III. Transforming Data

III.1 Logarithms
In econometrics, logarithms are very useful when dealing with variables that span different orders of magnitude yet must be included in the same regression model. For example, consider a model that includes the predictors GDP ($\sim 10^{10}$) and real interest rates ($\sim 10^{-2}$). Logarithms can also help mitigate violations of the constant variance assumption often used in least squares. By default, R uses natural logs in the log function.
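For instance:

log(100)            # natural log (base e), the default
log10(100)          # base 10
log(100, base = 2)  # any base; equivalently log2(100)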
# Here we look at the Ornstein data which includes financial assets of 248 Canadian companies.
par(mfrow=c(1, 2), mar=c(5, 4, 6, 2) + 0.1)
densityPlot(~ assets, data=Ornstein, xlab="assets", main="Untransformed")
densityPlot(~ log10(assets), data=Ornstein, adjust=0.65,
            xlab=expression(log[10]~"(assets)"), main="Log Plot")
basicPowerAxis(0, base=10, side="above", at=10^(2:5),
               axis.title="")
[Figure: density of assets, untransformed (left) vs. on the log10 scale with a power axis (right)]
scatterplot(infantMortality ~ ppgdp, data=UN, xlab="GDP per Capita",
            ylab="Infant Mortality Rate (per 1000 births)", main="Untransformed")
[Figure: infant mortality vs. GDP per capita, untransformed; the relationship is strongly nonlinear]
scatterplot(infantMortality ~ ppgdp, data=UN, xlab="GDP per capita",
            ylab="Infant Mortality Rate (per 1000 births)", main="Log-Log Plot",
            log="xy", id=list(n=3))
[Figure: log-log plot of infant mortality vs. GDP per capita; the relationship is now roughly linear, with Angola, Equatorial Guinea, and Gabon flagged]

## Angola Equatorial Guinea Gabon
##      4                54    62
III.2 Power Transformations
Power transformations of the predictors and/or response variable(s) can help improve their respective distributions for modeling/estimation purposes (e.g., least squares estimates of regression coefficients) and sampling/robustness (e.g., bootstrapping and Bayesian inference). A popular transformation is the Box-Cox scaled power transformation given by:

$$T_{BC}(x, \lambda) = x^{(\lambda)} = \begin{cases} \dfrac{x^{\lambda} - 1}{\lambda} & \text{when } \lambda \neq 0; \\[1ex] \log_e x & \text{when } \lambda = 0. \end{cases}$$
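As a minimal sketch of the formula, checked against car's bcPower (which implements this scaled version):

bc <- function(x, lambda) {
  if (lambda != 0) (x^lambda - 1)/lambda else log(x)
}
x <- rexp(100, 0.5)                       # strictly positive data
max(abs(bc(x, 0.5) - bcPower(x, 0.5)))    # ~0: matches car's bcPower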
The R function symbox automatically produces boxplots of the transformed variable for several values of λ, so we can quickly gauge which power transformation is most appropriate. For example, we can use this function on per-person GDP (ppgdp) as follows:
library(car)
symbox(~ppgdp,data=UN)
[Figure: symbox output for ppgdp, titled "Transformations of ppgdp": boxplots for powers -1, -0.5, log, 0.5, 1]
# Or we can look at income from the Prestige dataset
symbox(~ income, data=Prestige)
[Figure: symbox output for income, titled "Transformations of income": boxplots for powers -1, -0.5, log, 0.5, 1]
Oftentimes our variables take on negative values (e.g., S&P 500 returns), for which transformations such as logs do not work. Instead we can use the Yeo-Johnson transformations: $T_{BC}(x + 1, \lambda)$ for nonnegative values of $x$ and $-T_{BC}(-x + 1, 2 - \lambda)$ for negative values of $x$.
# The function yjPower takes as inputs your variable (x) and lambda,
# and outputs the respective transformed variable.
x = -5:5
yjPower(x, 3)
## [1] -0.8333333 -0.8000000 -0.7500000 -0.6666667 -0.5000000 0.0000000
## [7] 2.3333333 8.6666667 21.0000000 41.3333333 71.6666667
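In practice we rarely fix λ by hand. Here is a minimal sketch (assuming the car package, whose powerTransform also supports the Yeo-Johnson family) that estimates λ for a variable with negative values and then applies it:

z <- rnorm(200, mean = 1, sd = 2)            # takes negative values, so logs/bcPower fail
pt <- powerTransform(z, family = "yjPower")  # maximum-likelihood estimate of lambda
z_t <- yjPower(z, coef(pt))                  # transformed variable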
For regression purposes, transformations toward normality, linearity, and/or constant variance can be easily implemented and tested statistically with the powerTransform function. For example, consider the variable infantMortality from the UN dataset.
hist(UN$infantMortality)
[Figure: histogram of UN$infantMortality; the distribution is right-skewed]

# The histogram shows that the distribution is right-skewed. Therefore, we can
# (1) test if a transformation is needed and (2) if it is needed, transform it.
p1 <- powerTransform(infantMortality ~ 1, data=UN, family="bcPower")
summary(p1)

## bcPower Transformation to Normality
##    Est Power Rounded Pwr Wald Lwr Bnd Wald Upr Bnd
## Y1    0.0468           0      -0.0879       0.1814
##
## Likelihood ratio test that transformation parameter is equal to 0
##  (log transformation)
##                             LRT df    pval
## LR test, lambda = (0) 0.4644634  1 0.49555
##
## Likelihood ratio test that no transformation is needed
##                            LRT df       pval
## LR test, lambda = (1)  172.8143  1 < 2.22e-16

testTransform(p1, lambda=1/2)

##                              LRT df       pval
## LR test, lambda = (0.5) 41.95826  1 9.3243e-11

In the case of multiple regression, we may need to transform more than one variable at the same time (each with its own power). For example, we can look at a multivariate Box-Cox transformation of the Highway1 data. The following example is from the bcPower documentation:

# Multivariate Box Cox uses Highway1 data
# See: https://www.rdocumentation.org/packages/car/versions/2.1-6/topics/Highway1
summary(powerTransform(cbind(len, adt, trks, sigs1) ~ 1, Highway1))

## bcPower Transformations to Multinormality
##       Est Power Rounded Pwr Wald Lwr Bnd Wald Upr Bnd
## len      0.1439           0      -0.2728       0.5606
## adt      0.0876           0      -0.1712       0.3464
## trks    -0.6954           0      -1.9046       0.5139
## sigs1   -0.2654           0      -0.5575       0.0267
##
## Likelihood ratio test that transformation parameters are equal to 0
##  (all log transformations)
##                                  LRT df    pval
## LR test, lambda = (0 0 0 0) 6.014218  4 0.19809
##
## Likelihood ratio test that no transformations are needed
##                                  LRT df       pval
## LR test, lambda = (1 1 1 1) 127.7221  4 < 2.22e-16

# Multivariate transformation to normality within levels of 'htype'
summary(a3 <- powerTransform(cbind(len, adt, trks, sigs1) ~ htype, Highway1))

## bcPower Transformations to Multinormality
##       Est Power Rounded Pwr Wald Lwr Bnd Wald Upr Bnd
## len      0.1451        0.00      -0.2733       0.5636
## adt      0.2396        0.33       0.0255       0.4536
## trks    -0.7336        0.00      -1.9408       0.4735
## sigs1   -0.2959       -0.50      -0.5511      -0.0408
##
## Likelihood ratio test that transformation parameters are equal to 0
##  (all log transformations)
##                                 LRT df    pval
## LR test, lambda = (0 0 0 0) 13.1339  4 0.01064
##
## Likelihood ratio test that no transformations are needed
##                                  LRT df       pval
## LR test, lambda = (1 1 1 1) 140.5853  4 < 2.22e-16

# test lambda = (0 0 0 -1)
testTransform(a3, c(0, 0, 0, -1))

##                                   LRT df       pval
## LR test, lambda = (0 0 0 -1) 31.12644  4 2.8849e-06

# save the rounded transformed values, plot them with a separate
# color for each highway type
transformedY <- bcPower(with(Highway1, cbind(len, adt, trks, sigs1)),
                        coef(a3, round=TRUE))
scatterplotMatrix(~ transformedY|htype, Highway1)

[Figure: scatterplot matrix of the transformed variables len.0, adt.0.33, trks.0, and sigs1..0.5, grouped by highway type (FAI, MA, MC, PA)]

III.3 Transforming Restricted-Range Variables

In regression models such as probit and logistic, where the response variable is a probability, the range is restricted to the interval [0, 1]. This can present problems for such models because values can cluster near the end points. Therefore, transformations that spread the values out, such as the arcsine square-root, are good choices in these cases.
This transformation is known for its variance-stabilizing properties and is given by:

$$T_{\text{asinsqrt}}(x) = \sin^{-1}\!\left(\sqrt{x}\right)$$

par(mfrow=c(2,2))
hist(seq(0,1,length=100), main="Original", xlab="x")
hist(asin(sqrt(seq(0,1,length=100))), main="Transformed", xlab="x")
plot(seq(0,1,length=100), main="Original", type='l')
plot(asin(sqrt(seq(0,1,length=100))), main="Transformed", type='l')

[Figure: 2x2 panel comparing a uniform grid on [0, 1] before and after the arcsine square-root transformation, as histograms (top) and sequence plots (bottom)]

Another popular transformation choice is the logit transformation:

$$T_{\text{logit}}(x) = \text{logit}(x) = \ln\!\left(\frac{x}{1 - x}\right)$$

library(car)
attach(Prestige)
par(mfrow=c(1, 3))
densityPlot(~women, main="(a) Untransformed")
#densityPlot(~logit(women), main="(b) Logit", adjust=1)
densityPlot(~asin(sqrt(women/100)), main="(c) Arcsine square-root")

[Figure: density of women, (a) untransformed and (c) after the arcsine square-root transformation]

III.4 Transformations to Equalize Spread

Consider again the Ornstein dataset showing the number of interlocks by country.

Boxplot(interlocks ~ nation, data=Ornstein)

[Figure: parallel boxplots of interlocks by nation (CAN, OTH, UK, US) with outlying firms labeled]

## [1] "1"  "2"  "3"  "5"  "6"  "9"  "13" "27"

From the figure we can see that it would be helpful to even out the observed spread across countries. A measure of the spread level can easily be obtained with Tukey's spread-level plot, which also provides a suggested power transformation to stabilize the variance:

spreadLevelPlot(interlocks+1 ~ nation, Ornstein)

[Figure: spread-level plot for interlocks + 1 by nation, hinge-spread vs. median on log-log axes for CAN, OTH, UK, US]

##     LowerHinge Median UpperHinge Hinge-Spread
## US           2    6.0         13           11
## UK           4    9.0         14           10
## CAN          6   13.0         30           24
## OTH          4   15.5         24           20
##
## Suggested power transformation:  0.1534487

According to the output, the choice of λ = 0.15 would be appropriate. Since this value is close to 0, which would correspond to a log transformation, we can try a log to see if it works:

Boxplot(log10(interlocks+1) ~ nation, data=Ornstein)

[Figure: parallel boxplots of log10(interlocks + 1) by nation; the spreads are now roughly equal]

III.5 Transformations Toward Linearity

We can consider transforming both x and y in an effort to linearize the relationship between them. For example, we can find values of the positive exponents p and q such that the linear regression model $Y_i^q = \beta_0 + \beta_1 X_i^p + \varepsilon_i$ exhibits a more linear relationship. The choice of these exponents can be guided by Mosteller & Tukey's Bulging Rule for linearization, as shown in the figure below. This visualization of the rule is taken from https://dzone.com/articles/tukey-and-mosteller’s-bulging

fakedataMT = function(p=1, q=1, n=99, s=.1){
  set.seed(1)
  X = seq(1/(n+1), 1-1/(n+1), length=n)
  Y = (5 + 2*X^p + rnorm(n, sd=s))^(1/q)
  return(data.frame(x=X, y=Y))
}
par(mfrow=c(2,2))
plot(fakedataMT(p=.5, q=2), main="(p=1/2,q=2)")
plot(fakedataMT(p=3, q=-5), main="(p=3,q=-5)")
plot(fakedataMT(p=.5, q=-1), main="(p=1/2,q=-1)")
plot(fakedataMT(p=3, q=5), main="(p=3,q=5)")

[Figure: 2x2 panel of simulated datasets illustrating the four quadrants of the bulging rule: (p=1/2, q=2), (p=3, q=-5), (p=1/2, q=-1), and (p=3, q=5)]
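As a quick numeric check of the rule on the first panel above: the bulge for (p = 1/2, q = 2) is undone by taking x to the 1/2 power and y to the 2nd power, which makes the relationship almost perfectly linear (this sketch reuses the fakedataMT helper defined above):

d <- fakedataMT(p = .5, q = 2)
cor(d$x, d$y)          # correlation on the original scale
cor(d$x^(1/2), d$y^2)  # nearly 1 after transforming toward linearity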