
COMP4121 Lecture Notes
The Discrete Fourier Transform, the Discrete Cosine Transform and JPEG

THE UNIVERSITY OF NEW SOUTH WALES
School of Computer Science and Engineering, The University of New South Wales, Sydney 2052, Australia

1 DFT as a change of basis
Recall that the scalar product (also called the dot product) of two vectors x, y ∈ Rⁿ, x = (x_0, x_1, …, x_{n−1}) and y = (y_0, y_1, …, y_{n−1}), denoted by ⟨x, y⟩, is defined as

⟨x, y⟩ = ∑_{i=0}^{n−1} x_i y_i.
If the coordinates of the two vectors are complex numbers, i.e., if x, y ∈ Cⁿ, then the scalar product is defined by

⟨x, y⟩ = ∑_{i=0}^{n−1} x_i ȳ_i,   (1.1)
where z̄ (also written conj(z)) denotes the complex conjugate of z, i.e., if z = a + i b where a, b ∈ R, then z̄ = a − i b. Note that ⟨y, x⟩ = conj(⟨x, y⟩). In fact, in a vector space V, any binary function ⟨x, y⟩ : V² → C which satisfies the following three properties is a scalar product, because it has all of the important properties which the particular scalar product given by (1.1) has:
1. (Conjugate symmetry) ⟨y, x⟩ = conj(⟨x, y⟩)
2. (Linearity in the first argument) for every scalar α, ⟨αx, y⟩ = α⟨x, y⟩
3. (Positive definiteness) ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 just in case x = 0
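These three axioms can be checked numerically for the scalar product (1.1). The following NumPy snippet is only an illustrative sketch; note that NumPy's own `np.vdot(a, b)` conjugates its *first* argument, so in the convention used in these notes it computes ⟨b, a⟩.

```python
import numpy as np

def inner(x, y):
    """Scalar product (1.1): <x, y> = sum_i x_i * conj(y_i)."""
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)
alpha = 2.0 - 3.0j

# 1. Conjugate symmetry: <y, x> = conj(<x, y>)
assert np.isclose(inner(y, x), np.conj(inner(x, y)))
# 2. Linearity in the first argument
assert np.isclose(inner(alpha * x, y), alpha * inner(x, y))
# 3. Positive definiteness: <x, x> is real and non-negative
assert inner(x, x).real > 0 and np.isclose(inner(x, x).imag, 0.0)
# NumPy's vdot conjugates its first argument, so <x, y> here is np.vdot(y, x)
assert np.isclose(inner(x, y), np.vdot(y, x))
```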
Exercise: Define a scalar product on Cn which is different from the usual one, given by (1.1). Hint: Try giving different “importances” to different coordinates…
Every scalar product also defines an associated norm of a vector by ∥x∥ = √⟨x, x⟩ ≥ 0.
Geometrically, the norm plays the role of the length of a vector (and, in case of R2 or R3, it is just the usual, Euclidean length of the vector).
The usual basis of both Rn and Cn is given by
B = {(1,0,0,0,…0),(0,1,0,0,…,0),…,(0,0,0,…,1)}
Note that for any vector a = (a0,a1,a2,…,an−1), such that a ∈ Rn or a ∈ Cn,
(a0, a1, a2, . . . , an−1) = a0(1, 0, 0, 0, . . . , 0) + a1(0, 1, 0, 0, . . . , 0) + . . . + an−1(0, 0, 0, . . . , 1)
Let us set
e0 = (1,0,0,…,0,0); e1 = (0,1,0,…,0,0); … en−1 = (0,0,0,…,0,1);
(a_0, a_1, a_2, …, a_{n−1}) = a_0 e_0 + a_1 e_1 + … + a_{n−1} e_{n−1} = ∑_{i=0}^{n−1} a_i e_i.
Let us denote the complex number e^{i 2π/n} = cos(2π/n) + i sin(2π/n) by ω_n; such a number is a primitive n-th root of unity because (ω_n)ⁿ = 1, and the set of all complex numbers z which satisfy zⁿ = 1 is precisely the set of the n powers of ω_n, i.e., zⁿ = 1 if and only if z is of the form z = (ω_n)^k for some k such that 0 ≤ k ≤ n−1. We now introduce another basis, this time only in Cⁿ, given by F = {f_0, …, f_{n−1}}, where
f_k = ((ω_n^0)^k, (ω_n^1)^k, …, (ω_n^{n−1})^k) = (1, ω_n^k, …, ω_n^{k(n−1)}).
Thus, the coordinates of fk are the kth powers of the sequence of the n roots of unity.
To show that this is indeed a basis we have to show that these vectors are linearly independent. In fact,
they form an orthogonal basis. Two vectors are mutually orthogonal if ⟨x,y⟩ = 0.
Exercise: Prove that if a set consists of vectors which are pairwise orthogonal, then such a set of vectors must
be linearly independent.
To see that the vectors f_k and f_m are orthogonal if k ≠ m, we compute their scalar product (recall that ω_n = e^{i 2π/n}):

⟨f_k, f_m⟩ = ∑_{i=0}^{n−1} (ω_n)^{k·i} conj((ω_n)^{m·i}) = ∑_{i=0}^{n−1} (ω_n)^{k·i} (ω_n)^{−m·i} = ∑_{i=0}^{n−1} (ω_n)^{(k−m)·i} = ∑_{i=0}^{n−1} (ω_n^{k−m})^i.
The last sum is a sum of a geometric progression with ratio q = ω_n^{k−m} and thus, using the formula

∑_{i=0}^{n−1} q^i = (1 − qⁿ)/(1 − q),

we obtain

⟨f_k, f_m⟩ = (1 − (ω_n^{k−m})ⁿ)/(1 − ω_n^{k−m}) = 0,

because (ω_n^{k−m})ⁿ = ((ω_n)ⁿ)^{k−m} = 1 (the denominator is different from 0 because we have assumed that k ≠ m).

Let us compute the norm of these vectors:

∥f_k∥² = ⟨f_k, f_k⟩ = ∑_{i=0}^{n−1} (ω_n)^{k·i} conj((ω_n)^{k·i}) = ∑_{i=0}^{n−1} (ω_n)^{k·i} (ω_n)^{−k·i} = ∑_{i=0}^{n−1} (ω_n)^0 = ∑_{i=0}^{n−1} 1 = n.

Thus, ∥f_k∥ = √n; consequently, if we define the vectors φ_k = f_k/√n, then these n vectors form an orthonormal basis, Φ = {φ_0, …, φ_{n−1}}, i.e., a basis satisfying ⟨φ_k, φ_m⟩ = δ(m − k), where δ(0) = 1 and δ(m) = 0 if m ≠ 0.

We now show that every scalar product and its corresponding norm satisfy the Cauchy–Bunyakovsky–Schwarz inequality:
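These orthogonality and norm computations are easy to confirm numerically; the sketch below (NumPy, illustrative only) checks that the vectors f_k are pairwise orthogonal and that each has squared norm n:

```python
import numpy as np

n = 8
omega = np.exp(1j * 2 * np.pi / n)          # primitive n-th root of unity
F = omega ** np.outer(np.arange(n), np.arange(n))  # row k is f_k = (omega^{k*0}, ..., omega^{k*(n-1)})

def inner(x, y):
    """Scalar product (1.1)."""
    return np.sum(x * np.conj(y))

# <f_k, f_m> = 0 for k != m, and <f_k, f_k> = ||f_k||^2 = n
for k in range(n):
    for m in range(n):
        expected = n if k == m else 0.0
        assert np.isclose(inner(F[k], F[m]), expected, atol=1e-9)
```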
Theorem 1.1  |⟨x, y⟩| ≤ ∥x∥ · ∥y∥.

Proof: Let z = ⟨x, y/∥y∥⟩ y/∥y∥ and u = x − z; note that the scalar product of u and z satisfies

⟨u, z⟩ = ⟨x − ⟨x, y/∥y∥⟩ y/∥y∥, ⟨x, y/∥y∥⟩ y/∥y∥⟩
       = ⟨x, ⟨x, y/∥y∥⟩ y/∥y∥⟩ − ⟨⟨x, y/∥y∥⟩ y/∥y∥, ⟨x, y/∥y∥⟩ y/∥y∥⟩
       = conj(⟨x, y/∥y∥⟩) ⟨x, y/∥y∥⟩ − ⟨x, y/∥y∥⟩ conj(⟨x, y/∥y∥⟩) ⟨y/∥y∥, y/∥y∥⟩ = 0,

i.e., u is orthogonal to z. Thus, z is the vector corresponding to the orthogonal projection of vector x onto the unit vector y/∥y∥; see Fig. 1.1.

Figure 1.1: (orthogonal projection of x onto the unit vector y/∥y∥)

We now have ∥u∥² ≥ 0 and thus

0 ≤ ⟨x − ⟨x, y/∥y∥⟩ y/∥y∥, x − ⟨x, y/∥y∥⟩ y/∥y∥⟩
  = ∥x∥² − ⟨x, y/∥y∥⟩ ⟨y/∥y∥, x⟩ − conj(⟨x, y/∥y∥⟩) ⟨x, y/∥y∥⟩ + ⟨x, y/∥y∥⟩ conj(⟨x, y/∥y∥⟩) ⟨y/∥y∥, y/∥y∥⟩
  = ∥x∥² − (1/∥y∥²) |⟨x, y⟩|²;

note that on the second line the second term (or the third, which is equal to it) cancels out the fourth term. The last line implies the claim of the theorem.
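A quick numerical illustration of Theorem 1.1 (an illustrative sketch only; random complex vectors, standard scalar product):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=5) + 1j * rng.normal(size=5)
    y = rng.normal(size=5) + 1j * rng.normal(size=5)
    lhs = abs(np.sum(x * np.conj(y)))            # |<x, y>|
    rhs = np.linalg.norm(x) * np.linalg.norm(y)  # ||x|| * ||y||
    assert lhs <= rhs + 1e-9                     # Cauchy-Bunyakovsky-Schwarz
```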

If the field of scalars is the field of real numbers R, then the scalar product ⟨u,v⟩ of any two vectors u and v is a real number and the inequality from Theorem 1.1 allows us to define angles between vectors u and v by
∠(u, v) = arccos(⟨u, v⟩ / (∥u∥ · ∥v∥)).
Thus, since we have a length function (the norm ∥x∥ of a vector) and an angle function, a vector space with a scalar product has a well defined geometry. Also, if y is a unit vector, ∥y∥ = 1, then we have ⟨x, y⟩ = ∥x∥ · cos(∠(x, y)), i.e., ⟨x, y⟩ is just the length of the orthogonal projection of x onto the line to which y belongs; see Fig. 1.1.
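For real vectors the angle formula can be evaluated directly; a minimal sketch (the `np.clip` guards against round-off pushing the cosine slightly outside [−1, 1]):

```python
import numpy as np

def angle(u, v):
    """Angle between two real vectors, in radians."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
assert np.isclose(angle(u, v), np.pi / 4)  # 45 degrees
```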
Theorem 1.2  Let {φ_m}_{0 ≤ m < n} be an orthonormal basis of Cⁿ. Then every vector c ∈ Cⁿ can be represented as

c = ∑_{m=0}^{n−1} ĉ(m) φ_m,  where ĉ(m) = ⟨c, φ_m⟩.
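Applied to the basis Φ, Theorem 1.2 is precisely the DFT. The NumPy sketch below (illustrative only) checks that the coefficients ⟨c, φ_m⟩ coincide with NumPy's FFT up to normalization; note that `np.fft.fft` is unnormalized, while these notes use the unitary 1/√n convention.

```python
import numpy as np

n = 16
rng = np.random.default_rng(2)
c = rng.normal(size=n) + 1j * rng.normal(size=n)

omega = np.exp(1j * 2 * np.pi / n)
Phi = omega ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)  # row m is phi_m

# Coefficients as inner products: c_hat(m) = <c, phi_m> = sum_k c_k conj(phi_m(k))
c_hat = Phi.conj() @ c

# They agree with NumPy's FFT up to the 1/sqrt(n) normalization
assert np.allclose(c_hat, np.fft.fft(c) / np.sqrt(n))

# And the expansion c = sum_m c_hat(m) phi_m reconstructs c exactly
assert np.allclose(Phi.T @ c_hat, c)
```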

Note that the “forward” transform formula for calculating {ĉ(m)}_{0 ≤ m < n},

ĉ(m) = ⟨c, φ_m⟩ = (1/√n) ∑_{k=0}^{n−1} c_k e^{−i (2π/n) m·k},

implies that, for a real signal c and every m such that 1 ≤ m ≤ n−1,

ĉ(n−m) = (1/√n) ∑_{k=0}^{n−1} c_k e^{−i (2π/n)(n−m)·k} = (1/√n) ∑_{k=0}^{n−1} c_k e^{i (2π/n) m·k} = conj(ĉ(m)).   (1.6)

Consequently, in the inverse transform the terms corresponding to bins m and n−m combine into a real quantity: for m > 0,

ĉ(m) e^{i (2π/n) m·k} + ĉ(n−m) e^{i (2π/n)(n−m)·k} = ĉ(m) e^{i (2π/n) m·k} + conj(ĉ(m)) e^{−i (2π/n) m·k} = 2 Re(ĉ(m) e^{i (2π/n) m·k}).

Thus, real discrete signals c of length n are precisely the signals which satisfy ĉ(m) = conj(ĉ(n−m)) for all 1 ≤ m ≤ n−1.
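This conjugate symmetry is easy to verify with any FFT implementation; an illustrative NumPy check (using the unitary 1/√n normalization of these notes, though the symmetry holds for any normalization):

```python
import numpy as np

n = 12
rng = np.random.default_rng(3)
c = rng.normal(size=n)                 # a real signal
c_hat = np.fft.fft(c) / np.sqrt(n)     # unitary DFT

# Conjugate symmetry: c_hat(n - m) = conj(c_hat(m)) for 1 <= m <= n - 1
for m in range(1, n):
    assert np.isclose(c_hat[n - m], np.conj(c_hat[m]))
```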
So for real signals, if n is even, in order to store ĉ we have to store two floating point real numbers, ĉ(0) and ĉ(n/2), plus n/2 − 1 complex numbers, ĉ(1), …, ĉ(n/2 − 1), because the remaining complex coefficients ĉ(n/2 + k) can be obtained using (1.6). Thus, in total, to store ĉ we have to store 2 + 2(n/2 − 1) = n floating point real numbers, exactly as many as to store the (real) vector c.
If n is odd, then we have to store one real number (i.e., ĉ(0)) plus (n − 1)/2 complex numbers, which again adds up to n floating point numbers. Thus, both for real and complex vectors c, storing c takes the same space as storing ĉ.
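This is exactly the redundancy exploited by “real FFT” routines: for a real input of length n, NumPy's `np.fft.rfft` returns only the ⌊n/2⌋ + 1 non-redundant coefficients, from which the signal is fully recoverable. A quick illustrative check for one even and one odd length:

```python
import numpy as np

for n in (8, 9):                        # one even and one odd length
    c = np.arange(n, dtype=float)       # an arbitrary real signal
    half = np.fft.rfft(c)               # only the non-redundant coefficients
    assert len(half) == n // 2 + 1
    # The full signal is recoverable from just these coefficients
    assert np.allclose(np.fft.irfft(half, n=n), c)
```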
Note that if n is even, then the bin with index n/2 has a bin frequency 2π/n · n/2 = π, which is called the Nyquist frequency.
We can replace frequencies larger than π with the corresponding negative frequencies, i.e., we can replace 2π(n − m)/n for m < n/2 with −2πm/n. This is useful if we are considering vectors obtained by sampling π-bandlimited signals, to be introduced in the next lecture.

Note that, if the signal c is real, then

ĉ(0) = (1/√n) ∑_{k=0}^{n−1} c_k e^{−i (2π/n) k·0} = (1/√n) ∑_{k=0}^{n−1} c_k,

and consequently ĉ(0) is also real (ĉ(0) is called the DC component or DC offset of c; DC stands for Direct Current). If, in addition, n is even, then

ĉ(n/2) = (1/√n) ∑_{k=0}^{n−1} c_k e^{−i (2π/n) k·n/2} = (1/√n) ∑_{k=0}^{n−1} c_k e^{−i π k} = (1/√n) ∑_{k=0}^{n−1} (−1)^k c_k,

and thus ĉ(n/2) is also real.

But why consider such a complicated basis Φ? To answer this question we look at the complex sinusoids of the form

Sin_k(t) = (1/√n) e^{i (2π/n) k·t} = (1/√n) (cos((2π/n) k·t) + i sin((2π/n) k·t))

of frequencies 2πk/n, for all 0 ≤ k ≤ n − 1. Then each of the basis vectors φ_k = (1/√n)(ω_n^{k·0}, ω_n^{k·1}, …, ω_n^{k·(n−1)}) is just a sequence of samples of the function Sin_k(t), evaluated at the integers t = 0, 1, …, n − 1:

φ_k = (Sin_k(0), …, Sin_k(n − 1)).

Thus, if we represent a vector c in such a basis, i.e., as c = ∑_{k=0}^{n−1} ĉ(k) φ_k, we have represented c as a linear combination of samples of such complex sinusoids. If we also see the sequence c as a sequence of samples of a continuous time (i.e., analog) signal c(t), then over the interval [0, n − 1]

c(t) ≈ ∑_{k=0}^{n−1} ĉ(k) Sin_k(t) = (1/√n) ∑_{k=0}^{n−1} ĉ(k) e^{i (2π/n) k·t},   (2.1)

where the equality is exact on the integers 0, 1, …, n − 1. Let us write each coordinate ĉ(k) in the polar form ĉ(k) = |ĉ(k)| e^{i arg(ĉ(k))}; then we get

c(t) ≈ ∑_{k=0}^{n−1} (|ĉ(k)|/√n) e^{i ((2πk/n)·t + arg(ĉ(k)))}
     = ∑_{k=0}^{n−1} (|ĉ(k)|/√n) (cos((2πk/n)·t + arg(ĉ(k))) + i sin((2πk/n)·t + arg(ĉ(k)))).

Note that the equality is exact only on the integers 0, …, n − 1, i.e., for m such that 0 ≤ m ≤ n − 1 we have

c(m) = ∑_{k=0}^{n−1} (|ĉ(k)|/√n) e^{i ((2πk/n)·m + arg(ĉ(k)))}
     = ∑_{k=0}^{n−1} (|ĉ(k)|/√n) (cos((2πk/n)·m + arg(ĉ(k))) + i sin((2πk/n)·m + arg(ĉ(k)))).

Thus, we have obtained what is called a spectral analysis of c, because the values |ĉ(k)|/√n represent the amplitudes of the harmonics, i.e., of the complex sinusoids of frequencies 2πk/n, and the arguments arg(ĉ(k)) represent the phase shifts of these complex sinusoids. The values of k, 0 ≤ k < n, are the indices of the corresponding frequency bins.

The above approximation given by (2.1) has two important shortcomings:
1. Even if c is real, the sum (2.1) attains complex values for non-integer values of t because, as we have emphasized, the pairing of conjugate terms via (1.6) produces real values only at the integers t = k.
2. It unnecessarily involves complex exponentials of frequencies larger than π.

This can easily be remedied by allowing complex exponentials of negative frequencies, thus letting

Sin_k(t) = (1/√n) e^{i (2π/n) k·t} = (1/√n) (cos((2π/n) k·t) + i sin((2π/n) k·t))

for all k such that −⌊(n − 1)/2⌋ ≤ k ≤ ⌊(n − 1)/2⌋, and, if n is even, also

Sin_{n/2}(t) = (1/√n) cos(πt).

These basis functions have frequencies at most π in absolute value and, for real valued c, they result in a real valued interpolation function which is exact on the integers 0, …, n − 1 and whose components have frequencies at most ±π:

– for even n:
c(t) ≈ (1/√n) [ĉ(0) + ĉ(1) e^{i (2π/n) t} + ĉ(2) e^{i (2π/n) 2t} + … + ĉ(n/2 − 1) e^{i (2π/n)(n/2−1) t} + ĉ(n/2) cos(πt)
       + ĉ(n/2 + 1) e^{−i (2π/n)(n/2−1) t} + … + ĉ(n − 1) e^{−i (2π/n) t}];

– for odd n:
c(t) ≈ (1/√n) [ĉ(0) + ĉ(1) e^{i (2π/n) t} + … + ĉ(⌊n/2⌋) e^{i (2π/n) ⌊n/2⌋ t}
       + ĉ(⌊n/2⌋ + 1) e^{−i (2π/n) ⌊n/2⌋ t} + … + ĉ(n − 1) e^{−i (2π/n) t}].

3 Application to signal compression

One of the main applications of spectral analysis of signals, namely signal compression, is based on the following heuristics: if a signal is not just noise, then only a small number of harmonics have a significant amplitude, i.e., only a small number of values ĉ(k) are significant; thus, all the remaining values ĉ(k) can be set to zero without causing an unacceptable distortion of the original signal. You might want to look at the Mathematica file used to produce the calculations and plots below, available at http://www.cse.unsw.edu.au/~cs4121/DFT_and_DCT.nb.

Let us start by considering the following signal:

s1(t) = cos((2π·3/32) t + 2) + 2 cos((2π·7/32) t − 1).

Note that this signal is real valued; thus, for each component we need two complex exponentials which are complex conjugates of each other, to cancel out the imaginary parts:

s1(t) = (1/2) [e^{i ((2π·3/32) t + 2)} + e^{−i ((2π·3/32) t + 2)}] + e^{i ((2π·7/32) t − 1)} + e^{−i ((2π·7/32) t − 1)}
      = (1/2) [e^{i ((2π·3/32) t + 2)} + e^{i ((2π·61/32) t − 2)}] + e^{i ((2π·7/32) t − 1)} + e^{i ((2π·57/32) t + 1)}
      = (e^{2i}/2) e^{i (2π·3/32) t} + (e^{−2i}/2) e^{i (2π·61/32) t} + e^{−i} e^{i (2π·7/32) t} + e^{i} e^{i (2π·57/32) t},

where the second equality holds at the integer sampling points t, because e^{i 2π·64/32·t} = e^{i 4πt} = 1 for integer t. Let us sample this signal at 32 integer points, thus obtaining the sequence s1 = (s1(0), s1(1), …, s1(31)). Let us also take the DFT of the sequence s1, denoted by ŝ1 = (ŝ1(0), ŝ1(1), …, ŝ1(31)), and then look at the sequence of the moduli of its components, |ŝ1| = (|ŝ1(0)|, |ŝ1(1)|, …, |ŝ1(31)|). Using Mathematica we obtain Fig. 3.1 with a plot of |ŝ1|, which looks as expected, with 4 peaks at the frequency bins k = 3, k = 7, k = 32 − 7 = 25 and k = 32 − 3 = 29.
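The same experiment can be reproduced outside Mathematica; the NumPy sketch below (illustrative, using the unitary 1/√32 normalization of these notes) recovers exactly the four peaks:

```python
import numpy as np

n = 32
t = np.arange(n)
s1 = np.cos(2 * np.pi * 3 / n * t + 2) + 2 * np.cos(2 * np.pi * 7 / n * t - 1)

s1_hat = np.fft.fft(s1) / np.sqrt(n)   # unitary DFT
mag = np.abs(s1_hat)

# Exactly four significant bins: k = 3, 7, 25 = 32 - 7, 29 = 32 - 3
peaks = set(np.nonzero(mag > 1e-9)[0])
assert peaks == {3, 7, 25, 29}
```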
Figure 3.1: The moduli |ŝ1| of the DFT of s1.

Clearly, such a signal is extremely compressible in the frequency domain, because all we need to store are 4 real numbers, namely the real and imaginary components of ŝ1(3) and of ŝ1(7), plus the corresponding bin indices 3 and 7; ŝ1(25) can then be obtained as conj(ŝ1(7)) and ŝ1(29) as conj(ŝ1(3)), and we can then recover all samples of the signal using formula (1.5).

But what happens if the samples s_0, s_1, …, s_{n−1} come from a signal containing frequencies different from the bin frequencies, i.e., not of the form 2πk/n? Let us now consider the following example. Define

s2(t) = cos((2π·1.35/32) t + 2) + 2 cos((2π·4.05/32) t − 2).

We again evaluate this signal at the integers t = 0 to t = 31, thus obtaining a sequence of values s2, and then compute the DFT ŝ2 of such a sequence. Figure 3.2 shows plots of the real part (blue) of ŝ2 and of its imaginary part (red).

Figure 3.2: Plots of |Re(ŝ2)| (blue) and of |Im(ŝ2)| (red).

Since the components of the signal do not correspond to the frequencies of the bins of the DFT of length 32, for each of the four complex exponentials e^{i (2π·1.35/32) t}, e^{−i (2π·1.35/32) t}, e^{i (2π·4.05/32) t}, e^{−i (2π·4.05/32) t} there are several complex exponentials with significant amplitudes (and with bin frequencies), because several adjacent complex exponentials have to superimpose in order to approximate a complex exponential of frequency between two bin frequencies. However, the signal is still compressible, because lots of components have small amplitudes. As we have explained, out of the 32 values of the DFT we need to keep only the first 17 values ŝ2(0), …, ŝ2(16) (out of which ŝ2(0) and ŝ2(16) are real and the rest are complex), because the remaining values can be obtained from these by complex conjugation, using ŝ2(32 − k) = conj(ŝ2(k)) for k = 1 to k = 15.
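This smearing of energy across neighbouring bins (spectral leakage) can also be seen numerically; an illustrative NumPy sketch:

```python
import numpy as np

n = 32
t = np.arange(n)
# Component frequencies fall strictly between bin frequencies
s2 = np.cos(2 * np.pi * 1.35 / n * t + 2) + 2 * np.cos(2 * np.pi * 4.05 / n * t - 2)

s2_hat = np.fft.fft(s2) / np.sqrt(n)

# Unlike s1, noticeably many bins now carry significant energy
significant = int(np.sum(np.abs(s2_hat) > 0.1))
assert significant > 4

# Conjugate symmetry still holds, so only bins 0..16 need to be stored
assert all(np.isclose(s2_hat[n - k], np.conj(s2_hat[k])) for k in range(1, n))
```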
To compress our signal, let us keep only the 8 largest components (real or imaginary) of the 17 numbers ŝ2(0), …, ŝ2(16), setting all the other components to zero, thus obtaining the sequence σ̂2(0), …, σ̂2(16), and then again obtain the remaining values as σ̂2(32 − k) = conj(σ̂2(k)) for k = 1 to k = 15. We now use (1.5) in the form

s2(m) ≈ σ2(m) = (1/√32) ∑_{k=0}^{31} σ̂2(k) e^{i (2π/32) k·m}   (3.1)

to obtain the time domain representation of the corresponding approximation σ2 = (σ2(m) : 0 ≤ m < 32). We plot the reconstructed sequence σ2 in blue and the original sequence s2 = (s2(m) : 0 ≤ m < 32) in red. So σ2 looks like a pretty good approximation of s2, given that we used only 8/32 = 1/4 of the original information.

So one might be tempted to think that sampled signals which are not noise can be compressed by simply computing the DFT of the sampled signal and setting to zero all real and imaginary components of the elements of such a DFT which are smaller than a threshold (of course, the floats can then be quantized to further improve compression). However, unfortunately, things are more complicated than that. To see this, let us look at another signal:

s3(t) = cos((2π·1.34/32) t) + 2 cos((2π·3.5/32) t).

Sampling this signal at the integers 0–31 we obtain a vector of samples s3. Taking the DFT of this sequence we obtain a complex valued sequence ŝ3; let us plot the real and imaginary parts of ŝ3 (left on Fig. 3.3) and compare them with the real and imaginary parts of ŝ2 (right).

Figure 3.3: Real (blue) and imaginary (red) components of ŝ3 (left) and of ŝ2 (right).

Suddenly, a large number of components appear with significant amplitudes; thus, taking only the 8 largest real or imaginary parts of the components of ŝ3 will no longer result in a good approximation. Fig. 3.4 on the left shows the plot of the signal s3 (in blue) and the plot of the signal σ3 (in red), obtained by computing the DFT ŝ3 of s3 and then replacing all the real and imaginary components of the elements of ŝ3 with zero, except for the 8 largest such components, thus obtaining the sequence σ̂3, and then using (1.5) to obtain the corresponding sequence σ3; on the right are shown the sequences s2 and σ2 from the previous example.

Figure 3.4: Left: signal s3 (blue) and its approximation σ3 (red); right: signal s2 and its approximation σ2.

Fig. 3.5 compares the errors s3 − σ3 (blue) and s2 − σ2 (red) of the approximation σ3 of s3 and the approximation σ2 of s2. Clearly, σ3 does a much worse job of approximating the signal s3 than σ2 does of approximating s2, especially towards the edges of the interval [0, 31]. We now want to examine why this is so.

Figure 3.5: Approximation errors |s3 − σ3| (blue) and |s2 − σ2| (red).

Note that formula (1.5), namely

c_m = (1/√n) ∑_{k=0}^{n−1} ĉ(k) e^{i (2π/n) k·m},   (3.2)

can be evaluated not only for m = 0, …, n − 1, but for arbitrary integer values of m; this results in a periodic repetition of the values c_0, …, c_{n−1}. So, instead of plotting just one such period, let us plot on Fig. 3.6 two consecutive complete periods of s3 (left) and of s2 (right); the first periods of both sequences are shown in red and the second periods in blue, joining the two periods with a dashed black line.

Figure 3.6: Two consecutive periods of s3 (left) and of s2 (right).

We now see that the initial samples of the second period of s2 (on the right) “continue the trend” of the final samples of the first period of s2; thus, the two periods taken together look like a sequence of samples of a continuous function. On the other hand, as can be seen on the left plot, there is a large mismatch between the last sample of the first period of s3 and the first sample of its second period; the two periods taken together look more like samples of a signal which has a discontinuity somewhere between the points t = 31 and t = 32.
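The compression experiment can be imitated as follows (an illustrative NumPy sketch; for simplicity it keeps the 8 largest-magnitude complex coefficients rather than the 8 largest real-or-imaginary components used in the text, but the qualitative outcome is the same):

```python
import numpy as np

n = 32
t = np.arange(n)
s2 = np.cos(2 * np.pi * 1.35 / n * t + 2) + 2 * np.cos(2 * np.pi * 4.05 / n * t - 2)
s3 = np.cos(2 * np.pi * 1.34 / n * t) + 2 * np.cos(2 * np.pi * 3.5 / n * t)

def compress(s, keep):
    """Zero all but the `keep` largest-magnitude DFT coefficients, then invert."""
    s_hat = np.fft.fft(s)
    drop = np.argsort(np.abs(s_hat))[:-keep]   # indices of the smallest coefficients
    s_hat[drop] = 0.0
    return np.fft.ifft(s_hat).real

err2 = np.linalg.norm(s2 - compress(s2, 8))
err3 = np.linalg.norm(s3 - compress(s3, 8))
assert err3 > err2   # s3 is much harder to compress this way
```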
As a consequence, in order for s3 to “jump” between the samples at t = 31 and t = 32, ŝ3 needs to contain significant high frequency harmonics (which are more rapidly changing in value) and which enable s3 to make such a steep jump.

Thus, these higher frequency harmonics are NOT “genuinely” present in the original signal but are spurious artifacts of our signal representation with complex exponentials! Since such spurious harmonics present in ŝ3 are significant in size, they would also have to be encoded in the compressed encoding of the signal, to avoid large distortions after signal reconstruction (decompression) from its compressed spectral representation.