NUMERICAL OPTIMISATION
TUTORIAL 9: NONSMOOTH OPTIMISATION
MARTA BETCKE
KIKO RUL·LAN
EXERCISE 1
Consider a sparse signal x = {x_n}_{n=1,...,N} of length N = 2^13 consisting of:
• T = 100 randomly distributed spikes with values {±1},
• the remaining N − T values equal to 0.
A possible realisation of the signal x is:
[Figure: one realisation of x, plotted as amplitude (between −1 and 1) against the sample index n.]
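A minimal MATLAB sketch of one way to generate such a realisation is given below; the use of randperm and sign(randn(...)) is one possible choice and is not prescribed by the exercise.

% Sparse spike signal: N = 2^13 samples, T = 100 nonzeros with values +/-1
N = 2^13;                     % signal length
T = 100;                      % number of spikes
x = zeros(N, 1);              % start from the all-zero signal
idx = randperm(N, T);         % T randomly chosen spike locations
x(idx) = sign(randn(T, 1));   % random +/-1 amplitudes at those locations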
Following the compressed sensing paradigm we take K = 2^10 compressed measurements of the form
y_i = a_i^T x,
where a_i ∈ R^N is an appropriately chosen sensing vector. Assuming that the measurements are corrupted
with zero-mean Gaussian noise with standard deviation σ = 0.005, they can be compactly written as
ỹ = Ax + e,
where a_i^T is the i-th row of the measurement matrix A ∈ R^(K×N) and e ∼ N(0, σ²I) is the normally
distributed noise vector.
We consider two different measurement types (a possible MATLAB construction of both is sketched below):
• A is an orthogonal random matrix (use randn() and orth() to construct it);
• A is a subsampled Walsh-Hadamard transform (the forward and inverse WH transforms are available
in Matlab as fwht() and ifwht()).
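A possible MATLAB construction of both measurement models is sketched below. The variable names (A, Afun, Atfun, p, sel, embed) are our own, and the sqrt(N) normalisation of the Walsh-Hadamard operator assumes fwht's default 1/N scaling; check this against the fwht/ifwht documentation and ordering you use.

% Measurement setup for the signal x generated above
K     = 2^10;                            % number of measurements
sigma = 0.005;                           % noise standard deviation

% (a) Orthogonal random matrix: orthonormalise the columns of a random
%     N x K Gaussian matrix, then transpose so A has orthonormal rows (A*A' = I).
A = orth(randn(N, K))';                  % explicit K x N measurement matrix

% (b) Subsampled Walsh-Hadamard transform as function handles
%     (fwht/ifwht require the Signal Processing Toolbox).
p     = randperm(N, K)';                 % random subset of K transform coefficients
sel   = @(v, idx) v(idx);                % helper: index the output of a function call
embed = @(v) accumarray(p, v, [N 1]);    % place K values at positions p, zeros elsewhere
Afun  = @(z) sqrt(N) * sel(fwht(z), p);  % forward operator: subsampled WHT
Atfun = @(z) ifwht(embed(z)) / sqrt(N);  % its adjoint

% Noisy measurements ỹ = A x + e (stored here as y)
e = sigma * randn(K, 1);                 % zero-mean Gaussian noise
y = A * x + e;                           % explicit-matrix case
% y = Afun(x) + e;                       % Walsh-Hadamard operator case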
While the problem of recovering x from ỹ is underdetermined, under the assumption of sparsity it is still
possible to recover sparse signals from such incomplete measurements (under certain assumptions on
A and on the sparsity of the signal versus the number of measurements). While enforcing sparsity directly would lead to a
combinatorial problem, the L1 norm has been proven to provide a good relaxation, resulting in the
following compressed sensing recovery problem
x_CS = arg min_x  ½ ‖Ax − ỹ‖₂² + λ ‖x‖₁,    (CS)
where λ is the regularisation parameter, chosen depending on the properties of A and the standard deviation of the noise.
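All three methods asked for below rely on the proximal operator of t‖·‖₁, which is the componentwise soft-thresholding operation prox_{t‖·‖₁}(v) = sign(v) · max(|v| − t, 0). A one-line MATLAB version (the name soft_thresh is our own) could be:

% Proximal operator of t*||.||_1: componentwise soft-thresholding
soft_thresh = @(v, t) sign(v) .* max(abs(v) - t, 0);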
Implement and solve the compressed sensing recovery problem (CS) with the following methods (a possible sketch of all three is given after the list):
• ISTA
• FISTA
• ADMM
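A possible sketch of the three solvers for the explicit-matrix case is given below, reusing the soft_thresh handle from above. It is written under several assumptions that are not part of the exercise: an illustrative value of λ, a fixed iteration budget instead of a convergence test, a step size 1/L with L = ‖A‖², an ADMM penalty ρ = 1, and an x-update that exploits A having orthonormal rows (true for both measurement models above). For the operator case, replace A*v by Afun(v) and A'*w by Atfun(w).

lambda = 0.01;                % illustrative value only; choose based on A and sigma
maxit  = 500;                 % fixed iteration budget (assumption)
L      = normest(A)^2;        % Lipschitz constant of the gradient (= 1 for orthonormal rows)
Aty    = A' * y;              % precompute A'*y

% --- ISTA: gradient step on 0.5*||Ax - y||^2, then soft-thresholding ---
xk = zeros(N, 1);
for k = 1:maxit
    grad = A' * (A * xk) - Aty;                    % gradient of the smooth part
    xk   = soft_thresh(xk - grad / L, lambda / L); % proximal gradient step
end
x_ista = xk;

% --- FISTA: ISTA with Nesterov-type momentum ---
xk = zeros(N, 1);  zk = xk;  t = 1;
for k = 1:maxit
    grad = A' * (A * zk) - Aty;
    xnew = soft_thresh(zk - grad / L, lambda / L);
    tnew = (1 + sqrt(1 + 4 * t^2)) / 2;
    zk   = xnew + ((t - 1) / tnew) * (xnew - xk);  % extrapolation/momentum step
    xk   = xnew;  t = tnew;
end
x_fista = xk;

% --- ADMM on min_{x,z} 0.5*||Ax - y||^2 + lambda*||z||_1  s.t.  x = z ---
% The x-update solves (A'A + rho*I) x = A'y + rho*(z - u); since A*A' = I,
% the inverse is applied via (A'A + rho*I)^(-1) v = v/rho - A'*(A*v)/(rho*(1+rho)).
rho = 1;                                           % penalty parameter (assumption)
xk = zeros(N, 1);  zk = xk;  uk = xk;              % u is the scaled dual variable
for k = 1:maxit
    v  = Aty + rho * (zk - uk);
    xk = v / rho - A' * (A * v) / (rho * (1 + rho)); % x-update
    zk = soft_thresh(xk + uk, lambda / rho);         % z-update: prox of (lambda/rho)*||.||_1
    uk = uk + xk - zk;                               % scaled dual update
end
x_admm = zk;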
EXERCISE 2
Repeat the same experiment in 2D. This demonstrates how to apply the same methods in an imaging
setting; one simple way to reuse the 1D solvers is sketched below.
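One simple approach (a sketch under our own assumptions; the image size n and the test image are placeholders) is to vectorise the image and let the measurement operator act on length-n² vectors:

% Treat an n-by-n image as a length n^2 vector and reuse the 1D machinery
n    = 64;                                       % illustrative image size
Ximg = zeros(n);
Ximg(randperm(n^2, 50)) = sign(randn(50, 1));    % sparse test image with +/-1 spikes
xvec = Ximg(:);                                  % vectorise by column-stacking
% Build a K x n^2 measurement matrix/operator exactly as in Exercise 1, run
% ISTA/FISTA/ADMM on xvec, then map the reconstruction back to an image:
% Xrec = reshape(x_rec, n, n);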