
Outline
➢ Introduction to financial time series (financial econometrics)
➢ Data features
➢ Review on probability and statistics
➢ Introduction to R
Reading:
SDA chapter 4/5
FTS chapter 1
SFM chapter 3
http://cran.r-project.org/doc/manuals/R-intro.pdf
https://cran.r-project.org/doc/contrib/usingR.pdf
[SDA] Statistics and Data Analysis for Financial Engineering (2010) by David Ruppert. Springer.
[FTS] Tsay, R.S. (2010) Analysis of Financial Time Series, Third Edition, Wiley.
[SFM] Franke, J., Härdle, W. K., Hafner, C. M. (2015) Statistics of Financial Markets: An Introduction. Springer.

Data
Financial data are naturally time series.
Definition: A time series is a sequence of data points, measured typically at
successive points in time spaced at uniform time intervals.
Examples of time series are the daily and the annual flow volume of the Nile River at Aswan.
Examples of time series in finance:
❑ Quarterly earnings of Johnson & Johnson.
❑ Monthly interest rates of Singapore.
❑ Weekly exchange rate between the U.S. Dollar and the Singapore Dollar.
❑ Daily closing value of the Straits Times Index (STI).
❑ Intra-daily (tick by tick) transaction prices of BA.

Quarterly earnings of Johnson & Johnson
An exponentially increasing trend, accompanied by seasonal variations.
Johnson & Johnson (JnJ) is a New Jersey-based multinational medical devices, pharmaceutical and consumer packaged goods manufacturer founded in 1886. Its common stock is a component of the Dow Jones Industrial Average and the company is listed among the Fortune 500.

Monthly interest rates of Singapore
There is no obvious pattern.
The benchmark interest rate in Singapore was last recorded at 0.23% (June 2016). The interest rate in Singapore is reported by the Monetary Authority of Singapore. Historically, from 1988 until 2013, the Singapore interest rate averaged 1.69 percent, reaching an all-time high of 20 percent in January 1990 and a record low of -0.75 percent in October 1993. SIBOR is a reference rate based on the interest rates at which banks offer to lend unsecured funds to each other in the Singapore interbank market.
The U.S. interest rate (0.25-0.50%) is reported by the Federal Reserve.

Weekly exchange rate USD/SGD
An exponentially decreasing trend after 2009.

Daily closing value of the Straits Times Index
There is a cyclical component.
STI (Straits Times Index) prices exhibit a cyclical component along the business cycle. The values dropped around 1997, 2002 and 2008. The Asian financial crisis began in July 1997. During 2001–2003 there was an economic recession in Singapore. In 2008 there was the global financial crisis.

Intra-daily (tick by tick) transaction prices of BA
Limit Order Book

Nature of financial data
Financial data are naturally time series.
BUT financial data are observed at a much higher frequency than, e.g., macroeconomic or biological data.
The properties of financial series also differ. An important issue is whether a series has a unit root (so that its statistical properties vary over time) and how to devise methods to estimate models when the variables are integrated of order one. In most cases we observe prices, yet we deal with asset or portfolio returns.
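The unit-root issue can be illustrated with a small sketch (not from the slides; it assumes the tseries package is installed and uses simulated data):

# Sketch: a simulated random-walk "price" has a unit root; its first differences do not.
library(tseries)                   # provides adf.test (assumed installed)
set.seed(3)
price <- cumsum(rnorm(500))        # non-stationary, price-like series
adf.test(price)                    # large p-value: cannot reject a unit root
adf.test(diff(price))              # small p-value: the differenced series looks stationary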

Returns of STI

Why returns?
Prices are generally found to be non-stationary (their properties change over time).
Conventional statistical methods are appropriate for handling stationary data. Returns are found to be stationary, at least relative to prices.
Is financial data analysis easy?
The analysis of financial data brings its own challenges. As you will see, financial returns possess some common properties that need to be incorporated in econometric models:
❑ returns of assets such as stocks and bonds exhibit time-varying volatility;
❑ financial returns can exhibit asymmetry in volatility;
❑ financial data are not normally distributed.
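As a minimal sketch (hypothetical prices, not data from the course), simple and log returns can be computed from a price series as follows:

prices <- c(100, 102, 101, 105, 107)                    # hypothetical closing prices
simple_ret <- prices[-1] / prices[-length(prices)] - 1  # R_t = P_t/P_{t-1} - 1
log_ret <- diff(log(prices))                            # r_t = log(P_t) - log(P_{t-1})
cbind(simple_ret, log_ret)                              # the two are close for small returns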

We will learn
Statistical/Econometric/IT techniques
that are appropriate for analysing, understanding and solving financial problems.

What to do with financial data?
❑ Aggregation and Statistics
  • Data warehouse and On-Line Analytical Processing (OLAP)
❑ Indexing, Searching, and Querying
  • Keyword based search
  • Pattern matching
❑ Knowledge discovery via Statistical Modeling and Data Science:
  • Discovery of useful, possibly unexpected, patterns in data
  • Non-trivial extraction of implicit, previously unknown and potentially useful information from data
  • Exploration & analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns
  • Formulation of relationships between variables in the form of mathematical equations to interpret the underlying mechanism and make forecasts

Sampling
Extract information from data to understand “real world” & enhance decision-making

Literary Digest’s survey in 1936
In the 1936 presidential election, Literary Digest mailed questionnaires to 10 million people (25% of voters at the time).
Selected from telephone books, club memberships, mail order lists, automobile ownership lists
2.4 million people responded.
The Literary Digest predicted an overwhelming victory of Landon over Roosevelt: 57% to 43%.
Roosevelt won the election by a landslide – 62% to 38%.

Statistical methods
Descriptive statistics
● Involves: collecting data, presenting data, characterizing data
● Purpose: describe data
Inferential statistics
● Involves: estimation, hypothesis testing
● Purpose: make decisions about population characteristics
Population (Universe)
Sample: portion of the population
Parameter: summary measure about the population
Statistic: summary measure about the sample
(Diagram: Statistical Modeling)
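A minimal R illustration of the descriptive vs. inferential split, using simulated returns (the variable names and numbers are purely illustrative):

set.seed(4)
ret <- rnorm(250, mean = 0.0005, sd = 0.01)  # simulated daily returns (the sample)
# Descriptive statistics: collect, present, characterize the sample
summary(ret); sd(ret); hist(ret)
# Inferential statistics: estimate and test a hypothesis about the population mean
t.test(ret, mu = 0)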

Cumulative Distribution Function (CDF)
For a given x, the cumulative distribution function (cdf) F(x) of a continuous r.v. X with density f(x) is defined by
F(x) = P(X ≤ x) = ∫_{-∞}^{x} f(t) dt.
F(x) is the probability that X does not exceed x.
By definition:
• 0 ≤ F(x) ≤ 1
• F(x) is a nondecreasing function of x
(Slide figure: the density f(x) and the cdf plotted over x from -4.0 to 4.0.)
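In R the standard normal cdf is pnorm; a short sketch (with illustrative values) confirms the properties above:

x <- seq(-4, 4, by = 0.1)
Fx <- pnorm(x)           # F(x) = P(X <= x) for X ~ N(0,1)
range(Fx)                # stays within [0, 1]
all(diff(Fx) >= 0)       # TRUE: F(x) is nondecreasing in x
plot(x, Fx, type = "l", main = "Standard normal cdf")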

Normal distributions & Standard Normal distribution
We can use the N(0,1) table to compute probabilities for any r.v. that obeys a Normal distribution N(μ, σ²). Why?
Because there is a special relationship between N(μ, σ²) and N(0, 1).
If X ~ N(μ, σ²), then the r.v. Z defined by Z = (X − μ)/σ obeys a standard Normal distribution N(0, 1).
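A quick sketch of the standardization (the values of mu, sigma and x are arbitrary):

mu <- 1; sigma <- 2; x <- 2.5
pnorm((x - mu) / sigma)           # standardize: P(Z <= (x - mu)/sigma), Z ~ N(0,1)
pnorm(x, mean = mu, sd = sigma)   # same probability computed directly for X ~ N(mu, sigma^2)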

Facts of normal distribution
68.26% of values of X are within 1 std deviation of its mean
95.44% of values of X are within 2 std deviations of its mean
99.72% of values of X are within 3 std deviations of its mean
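These percentages can be checked directly with pnorm:

sapply(1:3, function(k) pnorm(k) - pnorm(-k))
# roughly 0.6827 0.9545 0.9973, i.e. 68.26%, 95.44%, 99.72%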

Estimator
Let X₁, …, Xₙ be a random sample. Let θ be the parameter of the statistical distribution describing a random variable (e.g., the mean and variance of stock returns).
θ is estimated based on a sample.
Let θ̂ be an estimator of θ. Two criteria of a good estimator:
(a) Unbiasedness, meaning E(θ̂) = θ.
(b) Small MSE (mean-squared error).
The MSE of θ̂ is defined by MSE(θ̂) = E[(θ̂ − θ)²].
The estimator is said to be efficient if it is unbiased and has the minimum variance among all the unbiased estimators.
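A simulation sketch of these criteria for the sample mean (the "true" parameter and the sample size are made up):

set.seed(1)
theta <- 0.05                                      # assumed true mean
est <- replicate(10000, mean(rnorm(50, mean = theta, sd = 0.2)))
bias <- mean(est) - theta                          # near 0: the sample mean is unbiased
c(bias = bias, variance = var(est), MSE = mean((est - theta)^2))
# MSE is approximately variance + bias^2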

Good estimator
To reduce the variability, use a larger sample.
However, bias cannot be corrected by increasing the sample size! To reduce bias, use random sampling.

Inference

One-sided or two-sided hypothesis tests
An important application of statistics in finance is to test, based on sample data, a hypothesis concerning the population. Construct hypotheses for the following situations.
❑ Return of fund manager A's portfolio is higher than that of the market.
❑ Volatility of the Yen is higher than that of the Euro.
❑ Interest rate and stock return are uncorrelated.
The first two examples are one-sided hypotheses, while the third is a two-sided hypothesis.
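A hedged sketch of how such hypotheses could be tested in R, using simulated returns rather than real data:

set.seed(2)
fundA  <- rnorm(100, mean = 0.08, sd = 0.15)   # simulated fund returns
market <- rnorm(100, mean = 0.05, sd = 0.15)   # simulated market returns
t.test(fundA, market, alternative = "greater") # one-sided: H1 is mu_A > mu_market
rate  <- rnorm(100); stock <- rnorm(100)
cor.test(rate, stock)                          # two-sided: H1 is correlation != 0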

P-values and significance level
The p-value is the probability of seeing the observed data (or something even less likely) given the null hypothesis; it measures the risk of rejecting the null.
If the p-value < α, we reject the null hypothesis at the α level of significance (or higher). Otherwise, we do not reject.

R
❑ The R statistical programming language is a free open source package based on the S language developed by Bell Labs.
❑ The language is very powerful for writing programs.
❑ Many statistical functions are already built in.
● How to get R: http://www.r-project.org/
● Google: "R"
● Windows, Linux, Mac OS X, source
● Files for introduction to R (source: Tyler K. Perrachione, Gabrieli Lab, MIT):
● http://web.mit.edu/tkp/www/R/R_Tutorial_Data.txt
  http://web.mit.edu/tkp/www/R/R_Tutorial_Inputs.txt

Getting Started
❑ Opening a script.
❑ This gives you a script window.

Getting Started
❑ Basic assignment and operations.
❑ Arithmetic Operations: +, -, *, /, ^ are the standard arithmetic operators.
❑ Matrix Arithmetic: * is element-wise multiplication; %*% is matrix multiplication.
❑ Assignment: to assign a value to a variable use "<-".

Math and variables
Math:
> 1 + 1
[1] 2
> 1 + 1 * 7
[1] 8
> (1 + 1) * 7
[1] 14
Variables:
> x <- 1
> x
[1] 1
> y = 2
> y
[1] 2
> 3 -> z
> z
[1] 3
> (x + y) * z
[1] 9

Arrays
> x <- c(0,1,2,3,4)
> x
[1] 0 1 2 3 4
> y <- 1:5
> y
[1] 1 2 3 4 5
> z <- 1:50
> z
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
[16] 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
[31] 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
[46] 46 47 48 49 50

Math on arrays
> x <- c(0,1,2,3,4)
> y <- 1:5
> z <- 1:50
> x + y
[1] 1 3 5 7 9
> x * y
[1]  0  2  6 12 20
> x * z
 [1]   0   2   6  12  20   0   7  16  27  40   0
[12]  12  26  42  60   0  17  36  57  80   0  22
[23]  46  72 100   0  27  56  87 120   0  32  66
[34] 102 140   0  37  76 117 160   0  42  86 132
[45] 180   0  47  96 147 200
(x is recycled to match the length of z.)

Functions
> arc <- function(x) 2*asin(sqrt(x))
> arc(0.5)
[1] 1.570796
> x <- c(0,1,2,3,4)
> x <- x / 10
> arc(x)
[1] 0.0000000 0.6435011 0.9272952
[4] 1.1592795 1.3694384
> plot(arc(Percents)~Percents,
+   pch=21, cex=2, xlim=c(0,1), ylim=c(0,pi),
+   main="The Arcsine Transformation")
> lines(c(0,1), c(0,pi), col="red", lwd=2)

Getting help
How to use help in R?
R has a very good help system built in.
❑ If you know which function you want help with, simply use ?_______ with the function in the blank.
❑ Ex: ?hist
❑ If you don't know which function to use, then use help.search("_______").
❑ Ex: help.search("histogram")

Getting help
> help(t.test)
> help.search("standard deviation")

Data analysis with R
Example experiment: Subjects learning to perform a new task:
Two groups of subjects
  • ("A" and "B"; high and low aptitude learners)
Two types of training paradigm
  • ("High variability" and "Low variability")
Four pre-training assessment tests
Example data in "R_Tutorial_Data.txt"
Use setwd("c:/…/") to set the working directory

Importing data
❑ How do we get data into R?
❑ Remember we have no point and click…
❑ First make sure your data is in an easy to read format such as CSV (Comma Separated Values).
❑ Use code:
myData <- read.table("path", sep=",", header=TRUE)

Reading data from files
> myData <- read.table("R_Tutorial_Data.txt",
+   header=TRUE, sep="\t")
> myData
   Condition Group Pre1 Pre2 Pre3 Pre4 Learning
1        Low     A 0.77 0.91 0.24 0.72     0.90
2        Low     A 0.82 0.91 0.62 0.90     0.87
3        Low     A 0.81 0.70 0.43 0.46     0.90
...
61      High     B 0.44 0.41 0.84 0.82     0.29
62      High     B 0.48 0.56 0.83 0.85     0.48
63      High     B 0.61 0.82 0.88 0.95     0.28

Visualizing datasets
> plot(myData)

Selecting subsets of data
❑ Use a logical operator to do this.
❑ ==, >, <, <=, >=, != are all logical operators.
❑ Note that the "equals" logical operator is two = signs.
❑ Example:
❑ myData[myData$Group=="A", ]
❑ This will return the rows of myData where Group is "A". Remember R is case sensitive!
❑ This code does nothing to the original dataset.
❑ D.M <- myData[myData$Group=="A", ] gives a dataset with the appropriate rows.

Selecting subsets of data
> myData$Learning
[1] 0.90 0.87 0.90 0.85 0.93 0.93 0.89 0.80 0.98
[10] 0.88 0.88 0.94 0.99 0.92 0.83 0.65 0.57 0.55
[19] 0.94 0.68 0.89 0.60 0.63 0.84 0.92 0.56 0.78
[28] 0.54 0.47 0.45 0.59 0.91 0.98 0.82 0.93 0.81
[37] 0.97 0.95 0.70 1.00 0.90 0.99 0.95 0.95 0.97
[46] 1.00 0.99 0.18 0.33 0.88 0.23 0.75 0.21 0.35
[55] 0.70 0.34 0.43 0.75 0.44 0.44 0.29 0.48 0.28
> myData$Learning[myData$Group=="A"]
[1] 0.90 0.87 0.90 0.85 0.93 0.93 0.89 0.80 0.98
[10] 0.88 0.88 0.94 0.99 0.92 0.83 0.65 0.98 0.82
[19] 0.93 0.81 0.97 0.95 0.70 1.00 0.90 0.99 0.95
[28] 0.95 0.97 1.00 0.99
Group A (high aptitude) usually performs well with high marks.

Selecting subsets of data
> myData$Learning
[1] 0.90 0.87 0.90 0.85 0.93 0.93 0.89 0.80 0.98
[10] 0.88 0.88 0.94 0.99 0.92 0.83 0.65 0.57 0.55
[19] 0.94 0.68 0.89 0.60 0.63 0.84 0.92 0.56 0.78
[28] 0.54 0.47 0.45 0.59 0.91 0.98 0.82 0.93 0.81
[37] 0.97 0.95 0.70 1.00 0.90 0.99 0.95 0.95 0.97
[46] 1.00 0.99 0.18 0.33 0.88 0.23 0.75 0.21 0.35
[55] 0.70 0.34 0.43 0.75 0.44 0.44 0.29 0.48 0.28
> attach(myData)
> Learning
[1] 0.90 0.87 0.90 0.85 0.93 0.93 0.89 0.80 0.98
[10] 0.88 0.88 0.94 0.99 0.92 0.83 0.65 0.57 0.55
[19] 0.94 0.68 0.89 0.60 0.63 0.84 0.92 0.56 0.78
[28] 0.54 0.47 0.45 0.59 0.91 0.98 0.82 0.93 0.81
[37] 0.97 0.95 0.70 1.00 0.90 0.99 0.95 0.95 0.97
[46] 1.00 0.99 0.18 0.33 0.88 0.23 0.75 0.21 0.35
[55] 0.70 0.34 0.43 0.75 0.44 0.44 0.29 0.48 0.28

Selecting subsets of data
> Learning[Group=="A"]
[1] 0.90 0.87 0.90 0.85 0.93 0.93 0.89 0.80 0.98
[10] 0.88 0.88 0.94 0.99 0.92 0.83 0.65 0.98 0.82
[19] 0.93 0.81 0.97 0.95 0.70 1.00 0.90 0.99 0.95
[28] 0.95 0.97 1.00 0.99
> Learning[Group!="A"]
[1] 0.57 0.55 0.94 0.68 0.89 0.60 0.63 0.84 0.92
[10] 0.56 0.78 0.54 0.47 0.45 0.59 0.91 0.18 0.33
[19] 0.88 0.23 0.75 0.21 0.35 0.70 0.34 0.43 0.75
[28] 0.44 0.44 0.29 0.48 0.28
Is the low performance of Group B due to the teaching paradigm offered?
> Condition[Group=="B" & Learning<0.5]
 [1] Low  Low  High High High High High High High
[10] High High High High High
Levels: High Low

Are my data normally distributed?
> plot(dnorm, -3, 3, col="blue", lwd=3, main="The Normal Distribution")
> par(mfrow=c(1,2))
> hist(Learning[Condition=="High" & Group=="A"])
> hist(Learning[Condition=="Low" & Group=="A"])

Are my data normally distributed?
> shapiro.test(Learning[Condition=="High" & Group=="A"])
        Shapiro-Wilk normality test
data:  Learning[Condition == "High" & Group == "A"]
W = 0.7858, p-value = 0.002431
> shapiro.test(Learning[Condition=="Low" & Group=="A"])
        Shapiro-Wilk normality test
data:  Learning[Condition == "Low" & Group == "A"]
W = 0.8689, p-value = 0.02614

Linear models and ANOVA
> myModel <- lm(Learning ~ Pre1 + Pre2 + Pre3 + Pre4)
> par(mfrow=c(2,2))
> plot(myModel)

Linear models and ANOVA
> summary(myModel)
Call:
lm(formula = Learning ~ Pre1 + Pre2 + Pre3 + Pre4)
Residuals:
Min 1Q Median 3Q Max
-0.40518 -0.08460 0.01707 0.09170 0.29074
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.22037    0.11536  -1.910 0.061055 .
Pre1         1.05299    0.12636   8.333 1.70e-11 ***
Pre2         0.41298    0.10926   3.780 0.000373 ***
Pre3         0.07339    0.07653   0.959 0.341541
Pre4        -0.18457    0.11318  -1.631 0.108369
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1447 on 58 degrees of freedom
Multiple R-squared: 0.6677, Adjusted R-squared: 0.6448
F-statistic: 29.14 on 4 and 58 DF, p-value: 2.710e-13

Linear models and ANOVA
> step(myModel, direction="backward")
Start:  AIC=-238.8
Learning ~ Pre1 + Pre2 + Pre3 + Pre4

       Df Sum of Sq    RSS     AIC
- Pre3  1   0.01925 1.2332 -239.81
<none>              1.2140 -238.80
- Pre4  1   0.05566 1.2696 -237.98
- Pre2  1   0.29902 1.5130 -226.93
- Pre1  1   1.45347 2.6675 -191.21

Step:  AIC=-239.81
Learning ~ Pre1 + Pre2 + Pre4

       Df Sum of Sq    RSS     AIC
- Pre4  1   0.03810 1.2713 -239.89
<none>              1.2332 -239.81
- Pre2  1   0.28225 1.5155 -228.83
- Pre1  1   1.54780 2.7810 -190.58

Step:  AIC=-239.89
Learning ~ Pre1 + Pre2

       Df Sum of Sq    RSS     AIC
<none>              1.2713 -239.89
- Pre2  1   0.24997 1.5213 -230.59
- Pre1  1   1.52516 2.7965 -192.23

Call:
lm(formula = Learning ~ Pre1 + Pre2)

Coefficients:
(Intercept)         Pre1         Pre2
    -0.2864       1.0629       0.3627


Linear models and ANOVA
ANOVA:
> myANOVA <- aov(Learning ~ Group*Condition)
> summary(myANOVA)
                Df Sum Sq Mean Sq F value    Pr(>F)
Group            1 1.8454 1.84537 81.7106 9.822e-13 ***
Condition        1 0.1591 0.15910  7.0448 0.0102017 *
Group:Condition  1 0.3164 0.31640 14.0100 0.0004144 ***
Residuals       59 1.3325 0.02258
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> boxplot(Learning ~ Group*Condition, col=c("#ffdddd","#ddddff"))

Linear models and ANOVA
ANOVA:
> myANOVA2 <- aov(Learning ~ Group*Condition + Gender)
> summary(myANOVA2)
                Df  Sum Sq Mean Sq F value    Pr(>F)
Group            1 1.84537 1.84537 80.3440 1.523e-12 ***
Condition        1 0.15910 0.15910  6.9270  0.010861 *
Gender           1 0.04292 0.04292  1.8688  0.176886
Group:Condition  1 0.27378 0.27378 11.9201  0.001043 **
Residuals       58 1.33216 0.02297
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> boxplot(Learning ~ Group*Condition + Gender,
+   col=c(rep("pink",4), rep("light blue",4)))

Outline of the course
1. Introduction to R and review of Probability and Statistics.
2. Regression Analysis and Application: Theory, estimation and diagnostics for linear
regression with single and multiple predictors.
3. Applications in Finance cover topics on e.g. tests of CAPM, return based style analysis for hedge funds.
4. Modeling Univariate Time Series including stationarity and invertibility of ARMA process, identification tools.
5. Approaches to ARIMA Modeling and Forecasting: Box-Jenkins procedure, model estimation and diagnostic checking, forecasting.
6. Autoregressive Conditional Heteroskedastic Models: ARCH/GARCH process, estimation, variants of the GARCH model.
(Recess Week)
7. Mid term Homework and Project Proposal
8. Value at Risk and Expected Shortfall: Modelling and estimation, applications and
backtesting procedure
9. Vector Autoregressive Models including VAR models, estimation and forecasting with VAR models.
10. Cointegration, Error Correction Models and Pairs Trading
11. Factor models: Principal Component Analysis and Factor Analysis with applications
12. Group Project presentation

Books
Textbook
1. [SDA] Statistics and Data Analysis for Financial Engineering (2010) by David Ruppert, published by Springer. While the focus of this text is on statistics and data analysis, it is an excellent text, provides a good introduction to R and also overlaps with several topics in the course. If students are going to purchase a text then I recommend this one. It may be downloaded for free from the NUS library E-Book website.
2. [FTS] Tsay, R.S. (2010) Analysis of Financial Time Series, Third Edition, Wiley. It may be downloaded for free from the NUS library E-Book website.
3. [SFM] Franke, J., Härdle, W. K., Hafner, C. M. (2015) Statistics of Financial Markets: An Introduction. Springer. It may be downloaded for free from the NUS library E-Book website.

Books
Other textbooks include:
4. J. Y. Campbell, A. W. Lo and A.C. MacKinlay (1997) The Econometrics of
Financial Markets, Princeton University Press.
5. Time Series: Theory and Methods by Peter J. Brockwell and Richard A. Davis (2006)

Grading
The grading scheme will be approximately: Midterm Homework 30%,
Project Proposal 10%,
Project Oral Presentation 30%, and Project Report 30%.
Details follow LumiNUS Course Description
Bonus Points: Answer quiz in LumiNUS.
1 point for correct answer within lecture day
0.5 point for correct answer within 3 days of the lecture
0.5 point for wrong answer but within lecture day

Example of computer (R) lab and solutions
If you are unfamiliar with any of the R functions used below, then use R’s help to learn about them; e.g., type ?rnorm to learn that rnorm generates normally distributed random numbers.
You should study each line of code, understand what it is doing, and convince yourself that the code does what is being requested.
Note that anything that follows a pound sign is a comment and is used only to annotate the code.

Example: R lab – Data Analysis
Obtain the data set Stock_FX_bond.csv from LumiNUS or the book’s website and put it in your working directory. Start R and read the data with the following command:
dat = read.csv(“Stock_FX_bond.csv”,header=TRUE)
The data set Stock_FX_bond.csv contains the volumes and adjusted closing (AC) prices of stocks and the S&P 500, yields on bonds. The next lines of code print the names of the variables in the data set, attach the data, and plot the adjusted closing prices of GM and Ford.
names(dat)
attach(dat)
par(mfrow=c(1,2))
plot(GM_AC)
plot(F_AC)
Run the code below to find the sample size (n), compute GM and Ford returns, and plot GM returns versus the Ford returns.
n = dim(dat)[1]
GMReturn = GM_AC[2:n]/GM_AC[1:(n-1)] - 1
FReturn = F_AC[2:n]/F_AC[1:(n-1)] - 1
par(mfrow=c(1,1))
plot(GMReturn,FReturn)

Example: R lab – Data Analysis
Problem 1. Do the GM and Ford returns seem positively correlated? Do you notice any outlying returns? If yes, do outlying GM returns seem to occur
with outlying Ford returns?
Answer: The plot below shows a strong positive correlation, and the outliers in GM returns seem to occur with outlying Ford returns.
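One way to quantify what the scatter plot suggests (a sketch; the 0.1 cutoff for "outlying" is arbitrary):

cor(GMReturn, FReturn)        # sample correlation of the two return series
which(abs(GMReturn) > 0.1)    # days with unusually large GM returns
which(abs(FReturn) > 0.1)     # compare with the GM list to see whether outliers co-occur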

Example: R lab – Data Analysis
Problem 2. Compute the log returns for GM and plot the returns versus the log returns. How highly correlated are the two types of returns? (The R function cor computes correlations.)
Answer: The correlation is almost 1 and the plot below shows an almost perfect linear relationship.
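GMLogReturn is not constructed in the earlier code; a sketch of how it could be computed (using n and GM_AC from the previous chunk):

GMLogReturn <- log(GM_AC[2:n] / GM_AC[1:(n-1)])   # equivalently log(1 + GMReturn)
plot(GMReturn, GMLogReturn)                       # nearly a straight line for small returns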
> cor(GMLogReturn, GMReturn)
[1] 0.9995408
When you exit R, you can Save workspace image, which will create an R workspace file in your working directory. Later, you can restart R from within Windows and load this workspace image into memory by right-clicking on the R workspace file. When R starts, your working directory will be the folder containing the R workspace that was opened.

Excursion: Matrix
Applied Multivariate Statistical Analysis (2015) by Härdle, Wolfgang Karl; Simar, Léopold, Springer. Chapter 2. Access via the NUS library E-book website.
A matrix is any doubly subscripted array of elements arranged in rows and columns.
A row vector is a [1 x n] matrix; a column vector is an [m x 1] matrix.

Excursion: Matrix
Square Matrix has the same number of rows and columns.
Identity matrix is square matrix with ones on the diagonal and zeros elsewhere.
Transpose matrix: Rows become columns and columns become rows

Excursion: Matrix addition and subtraction
A new matrix C may be defined as the additive combination of matrices A and B, where C = A + B, i.e. c_ij = a_ij + b_ij.
All three matrices are of the same dimension!
Matrix subtraction: D = A - B is defined by d_ij = a_ij - b_ij.

Excursion: Matrix multiplication
Matrices A and B have dimensions [r x c] and [s x d].
Matrices A and B can be multiplied if c = s.
The resulting matrix C = AB has dimensions [r x d], with c_ij = sum_k a_ik b_kj.
Example from the slide: a [2 x 2] matrix times a [2 x 3] matrix gives a [2 x 3] matrix.

Excursion: Matrix inversion
The inverse A⁻¹ is like a reciprocal in scalar math: A A⁻¹ = A⁻¹ A = I, where the identity matrix I plays the role of the number one in scalar math.
Consider n equations in n variables, written in matrix form as A x = b.
The unknown values of x can be found using the inverse of matrix A, such that x = A⁻¹ b.
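The matrix operations above can be tried directly in R; a small sketch with made-up matrices:

A <- matrix(c(2, 1, 1, 3), nrow = 2)   # 2 x 2 matrix (filled column by column)
B <- matrix(1:6, nrow = 2)             # 2 x 3 matrix
t(B)                                   # transpose: 3 x 2
A + A                                  # element-wise addition (same dimensions required)
A %*% B                                # matrix multiplication: [2 x 2] %*% [2 x 3] -> [2 x 3]
solve(A)                               # inverse of A
b <- c(1, 2)
solve(A, b)                            # solves A x = b, i.e. x = A⁻¹ b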