
Linear Algebra
Australian National University

2.4.1 Groups

Consider a set 𝒢 and an operation ⊗: 𝒢 × 𝒢 → 𝒢 defined on 𝒢. Then G := (𝒢, ⊗) is called a group if the following holds:
• Closure of 𝒢 under ⊗: ∀x, y ∈ 𝒢: x ⊗ y ∈ 𝒢
• Associativity: ∀x, y, z ∈ 𝒢: (x ⊗ y) ⊗ z = x ⊗ (y ⊗ z)
• Neutral element: ∃e ∈ 𝒢 ∀x ∈ 𝒢: x ⊗ e = x and e ⊗ x = x
• Inverse element: ∀x ∈ 𝒢 ∃y ∈ 𝒢: x ⊗ y = e and y ⊗ x = e. We often write x⁻¹ to denote the inverse element of x.
Additionally, if ∀x, y ∈ 𝒢: x ⊗ y = y ⊗ x (commutativity), then G := (𝒢, ⊗) is an Abelian group.
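For a small finite set, the group axioms can be verified by brute force. The following sketch (the helper functions are my own, not from the slides) confirms that ℤ₅ under addition modulo 5 is an Abelian group, while subtraction modulo 5 is not a group:

```python
from itertools import product

def is_group(G, op):
    # Closure
    if any(op(x, y) not in G for x, y in product(G, G)):
        return False
    # Associativity
    if any(op(op(x, y), z) != op(x, op(y, z)) for x, y, z in product(G, G, G)):
        return False
    # Neutral element
    neutrals = [e for e in G if all(op(x, e) == x and op(e, x) == x for x in G)]
    if not neutrals:
        return False
    e = neutrals[0]
    # Inverse elements
    return all(any(op(x, y) == e and op(y, x) == e for y in G) for x in G)

def is_abelian_group(G, op):
    return is_group(G, op) and all(op(x, y) == op(y, x) for x, y in product(G, G))

Z5 = set(range(5))
print(is_abelian_group(Z5, lambda x, y: (x + y) % 5))  # True: addition mod 5
print(is_group(Z5, lambda x, y: (x - y) % 5))          # False: not associative
```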
Examples
• (ℤ, +) is a group, and in fact an Abelian group
  • …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, …
  • Closure: ✓
  • Associativity: (x + y) + z = x + (y + z) ✓
  • Neutral element: 0 ✓
  • Inverse element: ∀x ∈ ℤ, y = −x ∈ ℤ ✓
• (ℤ, −) is not a group: it does not satisfy associativity and has no neutral element (hence no inverse elements)
  • Associativity: (x − y) − z ≠ x − (y − z) in general
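A single numeric counterexample is enough to show that subtraction is not associative:

```python
# Associativity fails for (Z, -): one counterexample suffices.
x, y, z = 1, 2, 3
left = (x - y) - z    # -4
right = x - (y - z)   # 2
print(left, right)    # -4 2, so (Z, -) is not a group
```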

• Examples
  • (ℝ^(m×n), +), the set of m×n matrices with component-wise addition, is an Abelian group:
    • Closure: the sum of any two matrices in ℝ^(m×n) is a matrix in ℝ^(m×n)
    • Associativity: ∀A, B, C ∈ ℝ^(m×n): (A + B) + C = A + (B + C)
    • Neutral element: the zero matrix 𝟎
    • Inverse element: ∀A ∈ ℝ^(m×n), its inverse element is −A
    • Commutativity: ∀A, B ∈ ℝ^(m×n): A + B = B + A

2.4.2 Vector spaces
• Definition
  • A real-valued vector space V = (𝒱, +, ·) is a set 𝒱 with two operations
      + : 𝒱 × 𝒱 → 𝒱
      · : ℝ × 𝒱 → 𝒱
    where
    • (𝒱, +) is an Abelian group
    • Distributivity:
        ∀λ ∈ ℝ, x, y ∈ 𝒱: λ · (x + y) = λ · x + λ · y
        ∀λ, ψ ∈ ℝ, x ∈ 𝒱: (λ + ψ) · x = λ · x + ψ · x
    • Associativity (outer operation ·): ∀λ, ψ ∈ ℝ, x ∈ 𝒱: λ · (ψ · x) = (λψ) · x
    • Neutral element (w.r.t. the outer operation ·): ∀x ∈ 𝒱: 1 · x = x
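These axioms can be spot-checked numerically for 𝒱 = ℝ³; this is a check on random samples, not a proof:

```python
import numpy as np

# Numeric spot-check of the vector space axioms in R^3.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
lam, psi = 2.0, -0.5

assert np.allclose(lam * (x + y), lam * x + lam * y)    # distributivity over vectors
assert np.allclose((lam + psi) * x, lam * x + psi * x)  # distributivity over scalars
assert np.allclose(lam * (psi * x), (lam * psi) * x)    # associativity of scaling
assert np.allclose(1.0 * x, x)                          # neutral element of scaling
print("vector space axioms hold on this sample")
```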

2.4.2 Vector spaces
• Elements x ∈ 𝒱 are called vectors
• The neutral element of (𝒱, +) is the zero vector 𝟎 = (0, ⋯, 0)ᵀ
• + is called vector addition
• Elements λ ∈ ℝ are called scalars
• The outer operation · is multiplication by scalars
• Example
  • 𝒱 = ℝⁿ, n ∈ ℕ, is a vector space. Its operations are defined as
    • Addition: x + y = (x₁, ⋯, xₙ)ᵀ + (y₁, ⋯, yₙ)ᵀ = (x₁ + y₁, ⋯, xₙ + yₙ)ᵀ, for x, y ∈ ℝⁿ
    • Multiplication by scalars: λx = λ(x₁, ⋯, xₙ)ᵀ = (λx₁, ⋯, λxₙ)ᵀ, for x ∈ ℝⁿ, λ ∈ ℝ
• Convention
  • We usually write x ∈ ℝⁿ as a column vector x = (x₁, ⋯, xₙ)ᵀ


2.4.3 Vector Subspaces
• Sets contained in the original vector space
• "Closed": when we perform vector space operations on elements within the subspace, we never leave it
• U = (𝒰, +, ·) is called a vector subspace of V = (𝒱, +, ·) if
  • 𝒰 ⊆ 𝒱
  • 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰
  • Closure of U:
    • ∀x, y ∈ 𝒰: x + y ∈ 𝒰
    • ∀x ∈ 𝒰, λ ∈ ℝ: λx ∈ 𝒰

2.4.3 Vector Subspaces
• Examples
  • For every vector space V, the trivial subspaces are V itself and {𝟎}
  • Is the set in the figure, 𝒰 = {(x, 0)ᵀ : x ∈ ℝ}, a subspace of ℝ²?
    • Is it a subset of ℝ²? Yes
    • Does it satisfy 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰? Yes
    • Does it satisfy closure? Yes: (x, 0)ᵀ + (y, 0)ᵀ = (x + y, 0)ᵀ ∈ 𝒰 and λ(x, 0)ᵀ = (λx, 0)ᵀ ∈ 𝒰

2.4.3 Vector Subspaces
• Examples
  • Is the set in the figure, a bounded segment of the x-axis containing 𝟎, a subspace of ℝ²?
    • Is it a subset of ℝ²? Yes
    • Does it satisfy 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰? Yes
    • Does it satisfy closure? No: (0.8, 0)ᵀ + (0.9, 0)ᵀ = (1.7, 0)ᵀ ∉ 𝒰

2.4.3 Vector Subspaces
• Examples
  • Is the set in the figure a subspace of ℝ²?
    • Is it a subset of ℝ²? Yes
    • Does it satisfy 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰? Yes
    • Does it satisfy closure? No

2.4.3 Vector Subspaces
• Examples
  • The solution set of a homogeneous system of linear equations Ax = 𝟎 with n unknowns x = (x₁, ⋯, xₙ)ᵀ. Is it a subspace of ℝⁿ?
    • Is it a subset of ℝⁿ? Yes
    • Does it satisfy 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰? Yes
    • Does it satisfy closure? Yes. ∀x, y ∈ 𝒰 we have Ax = 𝟎 and Ay = 𝟎:
      1) We investigate whether x + y ∈ 𝒰. Because A(x + y) = Ax + Ay = 𝟎, x + y is a solution, thus belonging to 𝒰.
      2) We investigate whether λx ∈ 𝒰. Because A(λx) = λ(Ax) = 𝟎, λx is a solution, thus belonging to 𝒰.
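The closure argument can be illustrated numerically. Below, A is a rank-deficient 3×3 matrix chosen for illustration (its third column equals the first plus twice the second), so Ax = 𝟎 has non-trivial solutions; sums and scalar multiples of solutions remain solutions:

```python
import numpy as np

# A rank-deficient matrix: col3 = col1 + 2*col2, so A x = 0 has
# non-trivial solutions.
A = np.array([[1., 0., 1.],
              [-1., 2., 3.],
              [0., 1., 2.]])
x = np.array([1., 2., -1.])   # one non-trivial solution of A x = 0
y = 3.0 * x                    # another solution

assert np.allclose(A @ x, 0)
assert np.allclose(A @ y, 0)
assert np.allclose(A @ (x + y), 0)     # closure under addition
assert np.allclose(A @ (2.5 * x), 0)   # closure under scalar multiplication
print("the solution set is closed under + and scalar multiplication")
```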

2.4.3 Vector Subspaces
• Examples
  • The solution set of an inhomogeneous system of linear equations Ax = b, b ≠ 𝟎. Is it a subspace of ℝⁿ?
    • Is it a subset of ℝⁿ? Yes
    • Does it satisfy 𝒰 ≠ ∅, in particular 𝟎 ∈ 𝒰? No: A𝟎 = 𝟎 ≠ b, so 𝟎 is not a solution
    • Does it satisfy closure? No

Linear combination
• Consider a vector space V and k vectors x₁, ⋯, x_k ∈ V. For λ₁, ⋯, λ_k ∈ ℝ, v ∈ V is called a linear combination of the vectors x₁, ⋯, x_k if
    v = λ₁x₁ + ⋯ + λ_k x_k = Σᵢ₌₁ᵏ λᵢ xᵢ ∈ V

2.5 Linear Independence
• Consider the equation λ₁x₁ + ⋯ + λ_k x_k = 𝟎
• If there is a non-trivial solution λ₁, …, λ_k, i.e., with at least one λᵢ ≠ 0, the vectors x₁, ⋯, x_k are linearly dependent
• If only the trivial solution exists, i.e., λ₁ = ⋯ = λ_k = 0, then the vectors x₁, ⋯, x_k are linearly independent
• Intuitively, a set of linearly independent vectors consists of vectors that have no redundancy, i.e., if we remove any of those vectors from the set, we will lose something.

How to determine linear (in)dependence
• Write all vectors x₁, ⋯, x_k as columns of a matrix A
• Perform Gaussian elimination until the matrix is in row echelon form
• The pivot columns correspond to linearly independent vectors
• Example:
    [1 3 0]
    [0 0 2]
     x₁ x₂ x₃
• All column vectors are linearly independent if and only if all columns are pivot columns.
• If there is at least one non-pivot column, the columns are linearly dependent. Here x₂ is a non-pivot column: x₂ = 3x₁.
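The procedure above can be sketched in code. This is a minimal teaching implementation of row reduction that records the pivot columns; it is not meant to be numerically robust:

```python
import numpy as np

def pivot_columns(A, tol=1e-10):
    """Row-reduce A and return the indices of its pivot columns."""
    A = A.astype(float).copy()
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        if row >= m:
            break
        # partial pivoting: pick the largest entry in this column
        p = row + np.argmax(np.abs(A[row:, col]))
        if abs(A[p, col]) < tol:
            continue  # no pivot here -> this column is dependent
        A[[row, p]] = A[[p, row]]
        A[row] /= A[row, col]
        for r in range(m):
            if r != row:
                A[r] -= A[r, col] * A[row]
        pivots.append(col)
        row += 1
    return pivots

A = np.array([[1, 3, 0],
              [0, 0, 2]])
print(pivot_columns(A))  # [0, 2]: columns x1 and x3 are pivot columns
```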

Determine linear (in)dependence
• Consider three vectors in ℝ³:
    x₁ = (1, −1, 0)ᵀ,  x₂ = (0, 2, 1)ᵀ,  x₃ = (1, 3, 2)ᵀ
• Write them as columns of a matrix and reduce to row echelon form:
    [ 1 0 1]  R1+R2→R2  [1 0 1]  swap R2, R3  [1 0 1]  R3−2R2→R3  [1 0 1]
    [−1 2 3]     ⇝      [0 2 4]      ⇝        [0 1 2]     ⇝      [0 1 2]
    [ 0 1 2]            [0 1 2]               [0 2 4]             [0 0 0]
     x₁ x₂ x₃
• x₃ is a non-pivot column with x₃ = x₁ + 2x₂, so the vectors are linearly dependent.

The Basis of a vector space
• A set of vectors {x₁, ⋯, x_k} is said to form a basis of a vector space if
  (1) the vectors x₁, ⋯, x_k span the vector space: every vector in this space can be represented by a linear combination of x₁, ⋯, x_k;
  (2) the vectors x₁, ⋯, x_k are linearly independent.
• Example: in ℝ², the vectors (1, 0)ᵀ and (0, 1)ᵀ form a basis.

• Example
  • In ℝ³, the canonical/standard basis is
      B = { (1, 0, 0)ᵀ, (0, 1, 0)ᵀ, (0, 0, 1)ᵀ }
  • A different basis of ℝ³ is
      B₁ = { (1, 0, 0)ᵀ, (1, 1, 0)ᵀ, (1, 1, 1)ᵀ }
    First, the row echelon form of the matrix with these vectors as columns has three pivots, so the three vectors are linearly independent.
    Second, do the three vectors span ℝ³? Specifically, ∀(a, b, c)ᵀ ∈ ℝ³, we examine whether it can be obtained as a linear combination of the three vectors:
      λ₁(1, 0, 0)ᵀ + λ₂(1, 1, 0)ᵀ + λ₃(1, 1, 1)ᵀ = (a, b, c)ᵀ, i.e.,
      [1 1 1] [λ₁]   [a]
      [0 1 1] [λ₂] = [b]
      [0 0 1] [λ₃]   [c]
    We can obtain the solution λ₃ = c, λ₂ = b − c, λ₁ = a − b.
  • Another basis of ℝ³ is
      B₂ = { (0.5, 0.8, 0.4)ᵀ, (1.8, 0.3, 0.3)ᵀ, (−2.2, −1.3, 3.5)ᵀ }
  • Another example: the set
      𝒜 = { (1, 2, 3, 4)ᵀ, (2, −1, 0, 2)ᵀ, (1, 1, 0, −4)ᵀ }
    is linearly independent, but not a basis of ℝ⁴: for instance, (1, 0, 0, 0)ᵀ cannot be obtained by a linear combination of elements of 𝒜.
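Both claims about 𝒜 can be verified numerically: its three columns have rank 3 (independent), yet e₁ = (1, 0, 0, 0)ᵀ is not in their span, which shows up as a nonzero least-squares residual:

```python
import numpy as np

# The three vectors of the set A as columns of a 4x3 matrix.
A = np.array([[1.,  2.,  1.],
              [2., -1.,  1.],
              [3.,  0.,  0.],
              [4.,  2., -4.]])

rank = np.linalg.matrix_rank(A)
print(rank)  # 3: the columns are linearly independent

# e1 cannot be written as a combination of the columns:
# the least-squares residual is nonzero.
e1 = np.array([1., 0., 0., 0.])
coef, residual, *_ = np.linalg.lstsq(A, e1, rcond=None)
print(residual)  # nonzero, so A does not span R^4
```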

So, a couple of things about bases
• Let V = (𝒱, +, ·) be a vector space and B ⊆ 𝒱, B ≠ ∅, be a basis of V.
• B is a maximal linearly independent set of vectors in V, i.e., adding any other vector to this set will make it linearly dependent.
• Every vector x ∈ V is a linear combination of vectors from B, and this linear combination is unique, i.e., with
    x = Σᵢ₌₁ᵏ λᵢ bᵢ = Σᵢ₌₁ᵏ ψᵢ bᵢ
  and λᵢ, ψᵢ ∈ ℝ, bᵢ ∈ B, it follows that λᵢ = ψᵢ, i = 1, ⋯, k.
Think about:

• Every vector space V possesses a basis B.
• There can be many bases of a vector space.
• All bases of V possess the same number of elements, the basis vectors.
• Dimension of V: the number of basis vectors of V. We write dim(V).
• If U ⊆ V is a subspace of V, then dim(U) ≤ dim(V).
• dim(U) = dim(V) if and only if U = V.
• Example: B = { (0, 0, 1)ᵀ, (0, 1, 0)ᵀ, (1, 0, 0)ᵀ } is a basis of ℝ³ with three elements, so dim(ℝ³) = 3.

Determining a Basis
• Write the spanning vectors as columns of a matrix A
• Determine the row-echelon form of A.
• The spanning vectors associated with the pivot columns are a basis of U.
• Example
  • For a vector subspace U ⊆ ℝ⁵, spanned by the vectors
      x₁ = (1, 2, −1, −1, −1)ᵀ,  x₂ = (2, −1, 1, 2, −2)ᵀ,
      x₃ = (3, −4, 3, 5, −3)ᵀ,  x₄ = (−1, 8, −5, −6, 1)ᵀ ∈ ℝ⁵

Determining a Basis – Example
• Which vectors of x₁, …, x₄ are a basis for U?
• Check whether x₁, …, x₄ are linearly independent: solve the homogeneous system of equations Σᵢ₌₁⁴ λᵢ xᵢ = 𝟎 with matrix
    [x₁, x₂, x₃, x₄] = [ 1  2  3 −1]
                       [ 2 −1 −4  8]
                       [−1  1  3 −5]
                       [−1  2  5 −6]
                       [−1 −2 −3  1]
• Through Gaussian elimination, we obtain the row-echelon form
    [ 1  2  3 −1]        [1 2 3 −1]
    [ 2 −1 −4  8]        [0 1 2 −2]
    [−1  1  3 −5]  ⇝⋯⇝  [0 0 0  1]
    [−1  2  5 −6]        [0 0 0  0]
    [−1 −2 −3  1]        [0 0 0  0]
• The pivot columns are 1, 2 and 4, so x₁, x₂, x₄ are linearly independent. Therefore, {x₁, x₂, x₄} is a basis of U.
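An equivalent way to select the basis vectors (a rank-based variant of taking pivot columns, written here as a sketch): keep a column only if it increases the rank of the columns kept so far.

```python
import numpy as np

def basis_columns(A, tol=1e-10):
    """Greedy basis selection: keep column j iff it raises the rank."""
    kept = []
    for j in range(A.shape[1]):
        cand = A[:, kept + [j]]
        if np.linalg.matrix_rank(cand, tol=tol) > len(kept):
            kept.append(j)
    return kept

# The spanning vectors x1..x4 from the example, as columns.
A = np.array([[ 1.,  2.,  3., -1.],
              [ 2., -1., -4.,  8.],
              [-1.,  1.,  3., -5.],
              [-1.,  2.,  5., -6.],
              [-1., -2., -3.,  1.]])

print(basis_columns(A))  # [0, 1, 3]: x1, x2, x4 form a basis of U
```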

2.6.2 Rank
• The number of linearly independent columns of a matrix A ∈ ℝ^(m×n) is called the rank of A, denoted by rk(A)
• rk(A) also equals the number of linearly independent rows
• Rank gives us an idea of how much information a matrix contains

Important properties
• rk(A) = rk(Aᵀ)
• The columns of A ∈ ℝ^(m×n) span a subspace of ℝᵐ and the rows span a subspace of ℝⁿ; both subspaces have the same dimension, rk(A)
• A basis of the subspace spanned by the columns (rows) can be found by applying Gaussian elimination to A (Aᵀ) and identifying the pivot columns
• For all A ∈ ℝⁿˣⁿ it holds that A is regular (invertible) if and only if rk(A) = n
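These properties can be checked on the rank example from these slides (a rank-2 square matrix) and on the identity matrix:

```python
import numpy as np

# The 3x3 rank example from the slides.
A = np.array([[ 1.,  2., 1.],
              [-2., -3., 1.],
              [ 3.,  5., 0.]])

# rk(A) = rk(A^T)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2

# rk(A) < n = 3, so A is singular: its determinant is (numerically) zero.
assert abs(np.linalg.det(A)) < 1e-9

# A full-rank square matrix is invertible.
B = np.eye(3)
assert np.linalg.matrix_rank(B) == 3
assert np.allclose(B @ np.linalg.inv(B), np.eye(3))
print("rank properties verified on these examples")
```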






• Example
  • We use Gaussian elimination to determine the rank of
      A = [ 1  2 1]
          [−2 −3 1]
          [ 3  5 0]
  • [ 1  2 1]        [1 2 1]
    [−2 −3 1]  ⇝⋯⇝  [0 1 3]
    [ 3  5 0]        [0 0 0]
     x₁  x₂ x₃
  • There are 2 pivot columns, so rk(A) = 2
More properties
• For all A ∈ ℝ^(m×n) and all b ∈ ℝᵐ it holds that the linear equation system Ax = b can be solved if and only if rk(A) = rk(A|b), where (A|b) denotes the augmented matrix
• For A ∈ ℝ^(m×n), the subspace of solutions of Ax = 𝟎 possesses dimension n − rk(A)
  • Let's look at a simpler case where A ∈ ℝⁿˣⁿ and rk(A) = n. In this scenario, the dimension of the solution space is n − rk(A) = 0, so the only solution is x = 𝟎.
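The solvability criterion rk(A) = rk(A|b) translates directly into code. The matrix below is a rank-2 example chosen for illustration; one right-hand side is constructed inside the column space, the other outside it:

```python
import numpy as np

def solvable(A, b):
    """A x = b has a solution iff rk(A) == rk(A|b)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A = np.array([[ 1., 0., 1.],
              [-1., 2., 3.],
              [ 0., 1., 2.]])          # rank 2

b_in  = A @ np.array([1., 1., 0.])     # inside the column space -> solvable
b_out = np.array([0., 0., 1.])         # outside the column space

print(solvable(A, b_in))   # True
print(solvable(A, b_out))  # False
```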

More properties
• A matrix A ∈ ℝ^(m×n) has full rank if its rank equals the largest possible rank for a matrix of the same dimensions.
• The rank of a full-rank matrix is the lesser of the number of rows and columns, i.e., rk(A) = min(m, n).
  • For example, for A ∈ ℝᵐˣ³ with m ≥ 3, rk(A) does not exceed 3.
• A matrix is said to be rank deficient if it does not have full rank.

2.7 Linear Mappings
• For vector spaces V, W, a mapping Φ: V → W is called a linear mapping if
    ∀x, y ∈ V, ∀λ, ψ ∈ ℝ: Φ(λx + ψy) = λΦ(x) + ψΦ(y)
• It implies the following:
    Φ(x + y) = Φ(x) + Φ(y)
    Φ(λx) = λΦ(x)

Example
• The mapping Φ: ℝ² → ℂ, Φ(x) = x₁ + i x₂, is a linear mapping:
    Φ((x₁, x₂)ᵀ + (y₁, y₂)ᵀ) = (x₁ + y₁) + i(x₂ + y₂) = (x₁ + i x₂) + (y₁ + i y₂) = Φ((x₁, x₂)ᵀ) + Φ((y₁, y₂)ᵀ)
    Φ(λ(x₁, x₂)ᵀ) = λx₁ + i λx₂ = λ(x₁ + i x₂) = λΦ((x₁, x₂)ᵀ)
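Python's built-in complex numbers make this example easy to check on a sample (a spot-check, not a proof):

```python
# Phi: R^2 -> C, Phi(x) = x1 + i*x2
def phi(x):
    return complex(x[0], x[1])

x, y = (1.0, -2.0), (0.5, 3.0)
lam = 2.5

# additivity: Phi(x + y) == Phi(x) + Phi(y)
assert phi((x[0] + y[0], x[1] + y[1])) == phi(x) + phi(y)
# homogeneity: Phi(lam * x) == lam * Phi(x)
assert phi((lam * x[0], lam * x[1])) == lam * phi(x)
print("Phi is linear on this sample")
```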

2.7 Linear Mappings
• For linear mappings Φ: V → W and Ψ: W → X, the composition Ψ ∘ Φ: V → X is also linear
• If Φ: V → W and Ψ: V → W are both linear mappings, then Φ + Ψ and λΦ, λ ∈ ℝ, are also linear.

Coordinates of a vector
• Consider a vector space V and an ordered basis B = (b₁, ⋯, bₙ) of V. For any x ∈ V we obtain a unique representation
    x = α₁b₁ + … + αₙbₙ
  of x with respect to B. Then α₁, ⋯, αₙ are the coordinates of x with respect to B, and the vector
    α = (α₁, ⋯, αₙ)ᵀ ∈ ℝⁿ
  is the coordinate vector/coordinate representation of x with respect to the ordered basis B.
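In ℝⁿ, finding the coordinates of x with respect to a basis amounts to solving the linear system Bα = x, where the basis vectors are the columns of B. Here the basis B₁ = ((1,0,0)ᵀ, (1,1,0)ᵀ, (1,1,1)ᵀ) from the earlier basis example is used, with a sample vector x of my choosing:

```python
import numpy as np

# Basis vectors of B1 as columns.
B = np.array([[1., 1., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])
x = np.array([3., 2., 1.])

# Solve B @ alpha = x for the coordinate vector alpha.
alpha = np.linalg.solve(B, x)
print(alpha)                      # [1. 1. 1.]
assert np.allclose(B @ alpha, x)  # x = a1*b1 + a2*b2 + a3*b3
```

This matches the closed-form solution from the basis slide: λ₁ = a − b = 1, λ₂ = b − c = 1, λ₃ = c = 1.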

Coordinates of a vector
• [Figure, left] A Cartesian coordinate system in two dimensions, which is spanned by the canonical basis vectors e₁, e₂.
• The same vector x may have different coordinates under different bases.

2.7.1 Matrix Representation of Linear Mappings
• Example – Linear Transformations of Vectors
• The following three linear transformations are used:
    A₁ = [cos(π/4) −sin(π/4)]    A₂ = [2 0]    A₃ = 1/2 · [3 −1]
         [sin(π/4)  cos(π/4)]         [0 1]               [1 −1]
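The three matrices can be applied to a sample vector to see their effect (A₁ rotates by 45°, A₂ stretches along the first axis, A₃ is a general linear map):

```python
import numpy as np

A1 = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)],
               [np.sin(np.pi/4),  np.cos(np.pi/4)]])  # rotation by 45 degrees
A2 = np.array([[2., 0.],
               [0., 1.]])                             # stretch along x
A3 = 0.5 * np.array([[3., -1.],
                     [1., -1.]])                      # general linear map

x = np.array([1., 0.])
print(A1 @ x)  # rotated:   [0.7071... 0.7071...]
print(A2 @ x)  # stretched: [2. 0.]
print(A3 @ x)  # mapped:    [1.5 0.5]

# A rotation preserves lengths:
assert np.isclose(np.linalg.norm(A1 @ x), np.linalg.norm(x))
```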

• Consider vector spaces V, W with corresponding ordered bases B = (b₁, ⋯, bₙ) and C = (c₁, ⋯, cₘ). We consider a linear mapping Φ: V → W. For j ∈ {1, ⋯, n},
    Φ(bⱼ) = α₁ⱼc₁ + ⋯ + αₘⱼcₘ = Σᵢ₌₁ᵐ αᵢⱼ cᵢ
  is the unique representation of Φ(bⱼ) with respect to C. Then, we call the m×n matrix A_Φ, whose elements are given by
    A_Φ(i, j) = αᵢⱼ,
  the transformation matrix of Φ.
• If x̂ is the coordinate vector of x ∈ V with respect to B, and ŷ the coordinate vector of y = Φ(x) ∈ W with respect to C, then
    ŷ = A_Φ x̂
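With standard bases for V = W = ℝ², the construction above reads: column j of A_Φ is Φ(bⱼ). The map Φ below is a hypothetical example of my own, not one from the slides:

```python
import numpy as np

# A hypothetical linear map Phi: R^2 -> R^2 for illustration.
def Phi(v):
    return np.array([2.0 * v[0] + v[1], -v[1]])

# Column j of A_Phi holds the coordinates of Phi(b_j) w.r.t. the
# standard basis of the codomain.
basis = [np.array([1., 0.]), np.array([0., 1.])]
A_Phi = np.column_stack([Phi(b) for b in basis])
print(A_Phi)   # [[ 2.  1.]
               #  [ 0. -1.]]

# y_hat = A_Phi @ x_hat agrees with applying Phi directly.
x_hat = np.array([3., -2.])
assert np.allclose(A_Phi @ x_hat, Phi(x_hat))
```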

Spatial Transformer Networks (Jaderberg et al., NIPS 2015)
Affine transformation

Check your understanding
• Which of the following statements are correct?
(A) In a vector space, any vector can be represented as a linear combination of a certain set of vectors in this space
(B) The dimension of a vector equals the dimension of the space it is in.
(C) U is a vector subspace of V. Then vectors in U have lower dimension than vectors in V
(D) The set of four vectors { (2, 0, 0)ᵀ, (1, 2, 0)ᵀ, (1, 0, 1)ᵀ, (6, 2, −2)ᵀ } forms a basis for ℝ³
(E) U = { (x, y) : x = y, x ∈ ℝ, y ∈ ℝ } is a subspace of ℝ²
(F) The vector 𝟎 is linearly dependent with any vector in the same vector space