
Linear Algebra
Maths Preliminaries
COMP9318 @ CSE, UNSW
February 18, 2021
Introduction
This review serves two purposes:
Recap relevant maths content that you may have learned a
long time ago (probably not in a CS course, and rarely used in CS courses since).
More importantly, present it in a way that is useful (i.e., giving semantics/motivations) for understanding the maths behind Machine Learning.
Contents
Linear Algebra
Note
You’ve probably learned Linear Algebra via matrices, systems of linear equations, etc. We will review key concepts in LA from the perspective of linear transformations (think of them as functions for now). This perspective provides semantics and intuition for most ML models and operations.
Here we emphasize intuition; we deliberately skip many concepts and present some content informally.
It is a great exercise for you to view related maths and ML models/operations in this perspective throughout this course!
A Common Trick in Maths I
Question
Calculate 2^10, 2^(−1), 2^(ln 5) and 2^(4−3i)?
Properties:
f_a(n) = f_a(n − 1) · a, for n ≥ 1; f_a(0) = 1.
f(u) · f(v) = f(u + v).
f(x) = y ⇔ ln(y) = x ln(a) ⇔ f(x) = exp{x ln a}.
e^(ix) = cos(x) + i · sin(x).
The trick: move to a representation where the hard operation becomes easy (here, logarithms turn exponentiation and multiplication into multiplication and addition), then map the result back.
The same trick appears in Linear Algebra.
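The exp/ln trick is easy to check numerically. Below is a small Python sketch (the concrete numbers are our own, not from the slides):

```python
import cmath
import math

# The trick: move to a representation where the hard operation becomes
# easy, then map back.  Multiplication becomes addition of logarithms:
a, b = 2.0, 5.0
product_direct = a * b
product_via_logs = math.exp(math.log(a) + math.log(b))

# f(x) = a^x generalizes beyond integers via f(x) = exp(x * ln a):
power = math.exp(math.log(5) * math.log(2))   # 2^(ln 5)

# Even complex exponents work, using e^(ix) = cos x + i sin x:
z = cmath.exp((4 - 3j) * cmath.log(2))        # 2^(4 - 3i)
```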
Objects and Their Representations
Goal
We need to study the objects.
On one side: a good representation helps (a lot)!
On the other side: properties of the objects should be independent of the representation!
Basic Concepts I
Algebra
a set of objects
two operations and their identity objects (aka identity elements):
addition (+); its identity is 0.
scalar multiplication (·); its identity is 1.
constraints:
Closed for both operations
Some nice properties of these operations:
Commutative: a + b = b + a.
Associative: (a + b) + c = a + (b + c).
Distributive: λ(a + b) = λa + λb.
Basic Concepts II
Think: What about subtraction and division?
Tips
Always use analogies with the algebra of integers (Z) and the algebra of polynomials (P).
Why are these constraints natural and useful?
Basic Concepts III
Representation matters?
Consider even the simple case of geometric vectors: c = a + b.
What if we represent vectors by a column of their coordinates? What if by their polar coordinates?
Notes
Informally, the objects we are concerned with in this course are (column) vectors.
The set of all n-dimensional real vectors is called Rn.
(Column) Vector
Vector
An n-dimensional vector, v, is an n × 1 matrix. We can emphasize its shape by calling it a column vector.
A row vector is a transposed column vector: v⊤.
Operations
Addition (componentwise): (v1 + v2)_i = (v1)_i + (v2)_i.
(Scalar) multiplication: (λ v1)_i = λ (v1)_i.
Linearity I
Linear Combination: Generalization of Univariate Linear Functions
Let λ_i ∈ R. Given a set of k vectors v_i (i ∈ [k]), a linear combination of them is
λ_1 v_1 + λ_2 v_2 + … + λ_k v_k = Σ_{i∈[k]} λ_i v_i
Later, this is just Vλ, where
V = [v_1 v_2 … v_k],  λ = [λ_1; λ_2; …; λ_k]
Span: the set of all linear combinations of a set of vectors is called their span.
Basis: a minimal set of vectors whose span is exactly the whole R^n.
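These definitions are easy to check numerically. A small NumPy sketch (the basis {[1; 0], [2; 3]} is our own example):

```python
import numpy as np

# Columns of V form a basis of R^2; V @ lam is the linear combination
# lam_1 * v_1 + lam_2 * v_2 of the columns.
V = np.column_stack([[1.0, 0.0], [2.0, 3.0]])   # v1 = [1, 0], v2 = [2, 3]
lam = np.array([4.0, 1.0])
combo = V @ lam                                  # 4*v1 + 1*v2 = [6, 3]

# Uniqueness of the decomposition: recover the coefficients of a target
# vector wrt this basis by solving V @ lam = target (V is invertible).
target = np.array([4.0, 6.0])
coeffs = np.linalg.solve(V, target)
```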
Linearity II
Benefit: every vector has a unique decomposition wrt a given basis.
Examples
Span of [1; 0] and [0; 1] is R^2 (writing [a; b] for a column vector). They also form a basis.
Span of [1; 0], [0; 1], [2; 3] is R^2. But one of them is redundant.
Decompose [4; 6].
Think: Why is uniqueness desirable?
Think: Who?
Linearity III
Exercises
What is a (natural) basis of the set of all (univariate) polynomials of degree up to d?
Decompose 3x^2 + 4x − 7 into a linear combination of 2, 2x − 3, x^2 + 1:
3x^2 + 4x − 7 = 3(x^2 + 1) + 2(2x − 3) + (−2)(2).
The “same” polynomial is mapped to two different coordinate vectors under two different bases.
Think: Any analogy?
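The decomposition above can be verified by writing each basis polynomial as its coefficient vector in {1, x, x^2} and solving a linear system, e.g., with NumPy:

```python
import numpy as np

# Columns: the polynomials 2, 2x - 3, x^2 + 1 as coefficient vectors
# in the standard basis {1, x, x^2} (constant term first).
B = np.array([[2.0, -3.0, 1.0],
              [0.0,  2.0, 0.0],
              [0.0,  0.0, 1.0]])
p = np.array([-7.0, 4.0, 3.0])     # 3x^2 + 4x - 7
coeffs = np.linalg.solve(B, p)     # coordinates wrt the new basis
```

The solution is [−2, 2, 3], matching 3(x^2 + 1) + 2(2x − 3) + (−2)(2).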
Matrix I
Linear Transformation
is a “nice” linear function that maps a vector in Rn to another vector in Rm.
The general form (for n = 2, m = 3):
[x1; x2] ↦ [y1; y2; y3], where
y1 = M11 x1 + M12 x2
y2 = M21 x1 + M22 x2
y3 = M31 x1 + M32 x2
Matrix II
Nonexample
f maps [x1; x2] to [y1; y2; y3], where
y1 = α x1^2 + β x2
y2 = γ x1^2 + θ x1 + τ x2
y3 = cos(x1) + e^(x2)
None of these is linear in x.
Why Only Linear Transformation?
Simple and nice properties:
(f1 + f2)(x) = f1(x) + f2(x)
(λf)(x) = λ · f(x)
What about f(g(x))?
Useful
Matrix I
Definition
An m × n matrix corresponds to a linear transformation from R^n to R^m:
f(x) = y ⇒ Mx = y, where matrix-vector multiplication is defined as y_i = Σ_k M_ik · x_k
Shape: M is outDim × inDim.
“Transformation” or “mapping” emphasizes the mapping between two sets, rather than the detailed specification of the mapping; the latter is more or less the elementary understanding of a function. These are all specific instances of morphisms in category theory.
Semantic Interpretation
Matrix II
Linear combination of the columns of M:
Mx = [M•1 M•2 … M•n] [x1; x2; …; xn] = x1 M•1 + … + xn M•n = y
Example (writing [a b; c d] row by row):
[1 2; −4 9; 25 1] [1; 10] = 1 · [1; −4; 25] + 10 · [2; 9; 1] = [21; 86; 35]
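The column-combination view is easy to confirm in NumPy for the slide's example matrix:

```python
import numpy as np

M = np.array([[ 1.0, 2.0],
              [-4.0, 9.0],
              [25.0, 1.0]])
x = np.array([1.0, 10.0])

y = M @ x                                   # matrix-vector product
combo = x[0] * M[:, 0] + x[1] * M[:, 1]     # same linear combination of columns
```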
Matrix III
[1 2; −4 9; 25 1] [1; 10] = 1 · [1; −4; 25] + 10 · [2; 9; 1] = [21; 86; 35]
Rotation and scaling.
When x is also a matrix,
[1 2; −4 9; 25 1] [1 2; 10 20] = [21 42; 86 172; 35 70]
Think: What does M do for the last example?
System of Linear Equations I
y1 = M11 x1 + M12 x2
y2 = M21 x1 + M22 x2
y3 = M31 x1 + M32 x2
⇒ [y1; y2; y3] = [M11 M12; M21 M22; M31 M32] [x1; x2], i.e., y = Mx
Interpretation: find a vector x ∈ R^2 such that its image (under M) is exactly the given vector y ∈ R^3.
How to solve it?
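One standard tool is least squares, which returns the exact preimage when y lies in the range of M and the closest image otherwise. A NumPy sketch using the example matrix from the earlier slides:

```python
import numpy as np

M = np.array([[ 1.0, 2.0],
              [-4.0, 9.0],
              [25.0, 1.0]])

# A y that IS in the range of M: the image of x = [1, 10].
y = M @ np.array([1.0, 10.0])

# With 3 equations and 2 unknowns, an exact solution exists only because
# y is in the span of M's columns; lstsq recovers it.
x_hat, residual, rank, sv = np.linalg.lstsq(M, y, rcond=None)
```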
System of Linear Equations II
(Figure: points x1, x2, x3 in the domain map under M to distinct points in the range, which is a proper subset of the codomain.)
The above transformation is injective, but not surjective.
A Matrix Also Specifies a (Generalized) Coordinate System I
Yet another interpretation
y=Mx =⇒ Iy=Mx.
The vector y wrt standard coordinate system, I, is the same as
x wrt the coordinate system defined by column vectors of M.
Think: why columns of M?
Example for polynomials
Standard basis I (columns for 1, x, x^2):
I = [1 0 0; 0 1 0; 0 0 1]
New basis M (columns for 3, x − 1, 2x^2 + 5x − 4, written in the standard basis):
M = [3 −1 −4; 0 1 5; 0 0 2]
Let x = [1; −2; 3] ⇒ Mx = I [−7; 13; 6]
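A numerical check of the change-of-coordinates example (the columns of M are the polynomials 3, x − 1, 2x^2 + 5x − 4 in the standard basis {1, x, x^2}):

```python
import numpy as np

M = np.array([[3.0, -1.0, -4.0],
              [0.0,  1.0,  5.0],
              [0.0,  0.0,  2.0]])

x = np.array([1.0, -2.0, 3.0])   # coordinates wrt the basis given by M's columns
y = M @ x                        # the same polynomial wrt the standard basis
# y represents 6x^2 + 13x - 7 = 1*(3) - 2*(x - 1) + 3*(2x^2 + 5x - 4)
```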
Exercise I
What if y is given in the above example? What does the following mean?
[3 −1 −4; 0 1 5; 0 0 2] [1 0 0; 0 1 0; 0 0 1] = [3 −1 −4; 0 1 5; 0 0 2]
Think about representing polynomials using the basis: (x − 1)^2, x^2 − 1, x^2 + 1.
Inner Product
THE binary operator – some kind of “similarity”
Type signature: vector × vector → scalar: ⟨x, y⟩.
In R^n, usually called the dot product: x · y = x⊤y = Σ_i x_i y_i.
Properties / definitions (over R):
conjugate symmetry: ⟨x, y⟩ = ⟨y, x⟩
linearity in the first argument: ⟨ax + y, z⟩ = a⟨x, z⟩ + ⟨y, z⟩
positive definiteness: ⟨x, x⟩ ≥ 0; ⟨x, x⟩ = 0 ⇔ x = 0
Generalizes many geometric concepts to vector spaces: angle (orthogonality), projection, norm.
For certain functions, ⟨f, g⟩ = ∫_a^b f(t) g(t) dt ⇒ leads to the Hilbert space.
⟨sin nt, sin mt⟩ = 0 within [−π, π] (m ≠ n) ⇒ they are orthogonal to each other.
C = A⊤B: C_ij = ⟨A_i, B_j⟩. Special case: A⊤A.
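Both flavors of inner product can be checked numerically; the function integral below is approximated by a Riemann sum on a fine grid (a sketch with our own example vectors):

```python
import numpy as np

# Dot product: orthogonal vectors have inner product 0.
x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])
dot = float(x @ y)

# <sin 2t, sin 3t> = integral of sin(2t) sin(3t) over [-pi, pi],
# approximated by a Riemann sum on a fine uniform grid.
t = np.linspace(-np.pi, np.pi, 200001)
h = t[1] - t[0]
inner = float(np.sum(np.sin(2 * t) * np.sin(3 * t)) * h)

# C = A^T A collects all pairwise inner products of A's columns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
C = A.T @ A            # C[i, j] = <A[:, i], A[:, j]>
```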
Eigenvalues/vectors and Eigen Decomposition
“Eigen” means “characteristic of” (German)
A (right) eigenvector of a square matrix A is u such that Au = λu.
Not every matrix has a full set of linearly independent (real) eigenvectors. Here, we only consider “good” (diagonalizable) matrices. Eigenvalues need not all be distinct.
Traditionally, we normalize u (such that u⊤u = 1).
We can use all eigenvectors of A to construct a matrix U (as columns). Then AU = UΛ, or equivalently, A = UΛU−1. This is the Eigen Decomposition.
We can interpret U as a transformation between two coordinate systems (note that the vectors in U are not necessarily orthogonal), and Λ as the scaling along each of the directions in the “new” coordinate system.
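A NumPy check of AU = UΛ and the reconstruction A = UΛU⁻¹ (the example matrix is our own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, U = np.linalg.eig(A)              # columns of U are normalized eigenvectors
Lam = np.diag(evals)

A_rebuilt = U @ Lam @ np.linalg.inv(U)   # Eigen decomposition: A = U Lam U^-1

# Each eigenvector is only scaled by A:
u0 = U[:, 0]
Au0 = A @ u0                             # equals evals[0] * u0
```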
Similar Matrices
Similar Matrix
Let A and B be two n × n matrices. A is similar to B (denoted as A ∼ B) if there exists an invertible n × n matrix P such that P⁻¹AP = B.
Think of P as a change-of-basis transformation. What is the relationship with the Eigen decomposition?
Similar matrices have the same value wrt many properties (e.g., rank, trace, eigenvalues, determinant, etc.)
Think: What does this mean?
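These shared properties are easy to verify numerically (the example matrices are our own; any invertible P works):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])             # an invertible change-of-basis matrix

B = np.linalg.inv(P) @ A @ P           # B is similar to A

eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))  # same eigenvalues as A
```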
SVD
Singular Value Decomposition
Let M be n × d (n ≥ d).
Reduced SVD: M = Û Σ̂ V⊤ exists for any M, such that
Σ̂ is a d × d diagonal matrix with diagonal elements σ_i (called singular values) in decreasing order;
Û consists of an (incomplete) set of basis vectors u_i (left singular vectors in R^n) (n × d: original space, as M);
V consists of a set of basis vectors v_i (right singular vectors in R^d) (d × d: reduced space).
Full SVD: M = UΣV⊤:
Add the remaining (n − d) basis vectors to Û (which thus becomes n × n).
Add (n − d) rows of 0 to Σ̂ (which thus becomes n × d).
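NumPy exposes both forms through the `full_matrices` flag of `np.linalg.svd`; a sketch on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))                  # n = 5, d = 3, n >= d

# Full SVD: U is n x n.  Reduced SVD: U_hat is n x d.
U, s, Vt = np.linalg.svd(M, full_matrices=True)
U_hat, s_hat, Vt_hat = np.linalg.svd(M, full_matrices=False)

# Both reconstruct M (only the first d columns of U matter):
M_full = U[:, :3] @ np.diag(s) @ Vt
M_reduced = U_hat @ np.diag(s_hat) @ Vt_hat
```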
Geometric Illustration of SVD
Geometric Meaning
M vi = σi ui
Graphical Illustration of SVD I
Figure: Reduced SVD vs Full SVD.
Reduced SVD: M (n×d) = Û (n×d) Σ̂ (d×d) V⊤ (d×d).
Full SVD: M (n×d) = U (n×n) Σ (n×d) V⊤ (d×d).
Graphical Illustration of SVD II
Meaning:
Columns of U are a basis of R^n.
Rows of V⊤ are a basis of R^d.
SVD Applications I
Relationship between Singular Values and Eigenvalues
What are the eigenvalues of M⊤M?
Hint: M = UΣV⊤ and U and V are unitary (i.e., rotations)
Related to PCA (Principal Component Analysis)
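A numerical answer to the question above: the eigenvalues of M⊤M are the squared singular values of M (a NumPy sketch on a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 3))

s = np.linalg.svd(M, compute_uv=False)    # singular values of M
eig = np.linalg.eigvalsh(M.T @ M)         # eigenvalues of the symmetric M^T M

sigma_sq = np.sort(s ** 2)
eig_sorted = np.sort(eig)                 # these two coincide
```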
References and Further Reading I
Gaussian Quadrature:

Linear Algebra Review and Reference.
http://cs229.stanford.edu/section/cs229-linalg.pdf
Scipy LA tutorial. https://docs.scipy.org/doc/scipy/ reference/tutorial/linalg.html
We Recommend a Singular Value Decomposition.
http://www.ams.org/samplings/feature-column/fcarc-svd