
MPHY0030: Programming Foundations for Medical Image Analysis Assessed Coursework 2019-20
C/C++ and Object-Oriented Programming
Available on 12th December 2019
Submission before 23:59 – 19th February 2020, at UCL Moodle submission section
1. Introduction: Medical Image Filtering
In this coursework you will design and implement a set of digital image filtering algorithms, one of the basic tasks in medical image analysis, using the C/C++ programming language. Specifically, three two-dimensional (2D) filters will be implemented: Gaussian filtering, image derivatives and anisotropic diffusion.
The aim of the coursework is to develop and assess your ability a) to understand the provided scientific concepts behind the image filtering methods, b) to research the relevant methodology and implementation details of the topic, and c) to develop the numerical algorithms in C/C++. Although the assessment does not place emphasis on coding skills and advanced software development techniques, basic C/C++ language knowledge will be taken into account, such as the correct use of pointer/reference and dynamic memory management.
Before the submission deadline, you should digitally submit your source code (NOT the compiled binary files) and a PDF file containing your written report. All files need to be combined in one single zip file and submitted on the module page at UCL Moodle.
There are hyperlinks in this document pointing to additional materials for clarification and further reading. Throughout this document, various parts of the text are highlighted, for example:
Class names are highlighted for those mandatory classes that should be found in your submitted code;
Function names are highlighted for those mandatory functions that should be found in your submitted code;
Report-Qx: description or answers that should be found in your submitted written report;
[5]: square brackets indicate marks (with total marks being 127) in this C/C++ coursework;
“filename.ext”: quotation marks indicate the names of files.

1.1. Image filtering based on convolution
Digital images, in which most modern medical images are acquired and archived, can be represented by 2D arrays of intensity values. The image can also be considered as an intensity-valued function 𝐼(𝑖, 𝑗) of the spatial coordinates [𝑖, 𝑗]. Convolution is a mathematical operation on two functions, with its 2D discrete form often used in image filtering:

I \ast K = \sum_{i=-\infty}^{+\infty} \sum_{j=-\infty}^{+\infty} I(i, j)\, K(x - i, y - j) \qquad (eq. 0)

where 𝐾(𝑥, 𝑦) is a second function specifying the type of filtering, often referred to as the kernel function (or “kernel”, “mask”, “window”), while ∗ denotes the convolution operator. The convolution
operation is illustrated in the figure. The (usually bigger) image matrix is convolved with a “moving window”, representing the (usually smaller) kernel matrix. The values of the output matrix, the filtered image, are computed using the weighted sum at each output location. More details on discrete convolution and digital image filtering using convolution can be found in many available tutorials and textbooks, such as the Wikipedia Page and the MATLAB Documentation.
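As a concrete illustration of eq. 0, the fragment below sketches a discrete 2D convolution over a zero-padded, row-major image stored in a std::vector<float>. It is only an illustration of the arithmetic: the function name, the flat storage layout and the odd kernel size are assumptions here, and the mandatory coursework design instead splits this work between NaiveImage::filter and the filter classes described in Section 2.

    #include <vector>

    // Illustrative sketch of eq. 0: convolve a row-major image with an odd-sized kernel,
    // treating out-of-bounds pixels as zero (zero-padding, as required in Section 3).
    std::vector<float> convolve2d(const std::vector<float>& image, int width, int height,
                                  const std::vector<float>& kernel, int kSize)
    {
        std::vector<float> output(image.size(), 0.0f);
        const int half = kSize / 2;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float sum = 0.0f;
                for (int j = -half; j <= half; ++j) {        // kernel row offset
                    for (int i = -half; i <= half; ++i) {    // kernel column offset
                        const int xi = x - i;                // I(x - i, y - j) paired with K(i, j),
                        const int yj = y - j;                // an equivalent form of eq. 0
                        if (xi < 0 || xi >= width || yj < 0 || yj >= height)
                            continue;                        // zero contribution outside the image
                        sum += image[yj * width + xi] * kernel[(j + half) * kSize + (i + half)];
                    }
                }
                output[y * width + x] = sum;
            }
        }
        return output;
    }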
1.2. Gaussian filtering
Gaussian filtering is a special class of filters with the 2D kernel approximating a Gaussian probability density function. Its isotropic version used in this coursework is given as follows:
G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \qquad (eq. 1)

where 𝜎 is the standard deviation of the Gaussian, a single user-defined filter parameter. The Gaussian filters are an important tool for medical image analysis, such as resampling, normalising and denoising.
1.3. Image derivatives
First-order image derivatives (the image gradient) ∂I/∂x and ∂I/∂y are useful for many image computing tasks. This can be implemented by convolution, with finite difference kernels 𝐷𝑥 and 𝐷𝑦, for example:
\frac{\partial I}{\partial x} = I \ast D_x = I \ast \left[\tfrac{1}{2}\;\; 0\;\; -\tfrac{1}{2}\right], \qquad \frac{\partial I}{\partial y} = I \ast D_y = I \ast \left[\tfrac{1}{2}\;\; 0\;\; -\tfrac{1}{2}\right]^{T} \qquad (eq. 2)
where [∙]^T denotes the matrix transpose. However, using the finite difference kernels to compute the first-order image derivatives can artificially magnify the high-frequency noise level. This problem is usually mitigated by smoothing the image with a Gaussian filter 𝐼𝑠 = 𝐼 ∗ 𝐺 before taking the derivatives, which can be efficiently implemented using a convolution with the derivatives of a Gaussian kernel ∇G:
\nabla I_s = \nabla (I \ast G) = \nabla I \ast G = I \ast \nabla G \qquad (eq. 3)

where \nabla I = \left[\frac{\partial I}{\partial x}\;\; \frac{\partial I}{\partial y}\right]^{T} and \nabla G = \left[\frac{\partial G}{\partial x}\;\; \frac{\partial G}{\partial y}\right]^{T}, while the Gaussian derivatives are given by:

\frac{\partial G}{\partial x} = -\frac{x}{2\pi\sigma^{4}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}, \qquad \frac{\partial G}{\partial y} = -\frac{y}{2\pi\sigma^{4}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \qquad (eq. 4)

These first-order image derivatives and the gradient magnitude

\|\nabla I_s\| = \sqrt{\left(\frac{\partial I_s}{\partial x}\right)^{2} + \left(\frac{\partial I_s}{\partial y}\right)^{2}} \qquad (eq. 5),
are closely related to edge detection, such as the Sobel operator and the Canny edge detector, with higher-order derivatives found useful in many other medical image analysis applications.
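To make eqs. 1 and 4 concrete, the sketch below samples a Gaussian kernel and its x-derivative kernel on a discrete grid for a given 𝜎. It is not a prescribed solution: the kernel radius of ceil(3𝜎) is only one common heuristic (choosing the kernel size is the subject of Report-Q2), and only the smoothing kernel is normalised to sum to one here; how the derivative kernels are handled is left to your own design.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Illustrative sketch: sample G(x, y) of eq. 1 and dG/dx of eq. 4 on a (2r+1) x (2r+1) grid.
    // The radius r = ceil(3*sigma) is a common heuristic, not a coursework requirement.
    void makeGaussianKernels(double sigma, std::vector<double>& gauss,
                             std::vector<double>& dGdx, int& kSize)
    {
        const double pi = 3.14159265358979323846;
        const int radius = static_cast<int>(std::ceil(3.0 * sigma));
        kSize = 2 * radius + 1;
        gauss.assign(static_cast<std::size_t>(kSize) * kSize, 0.0);
        dGdx.assign(static_cast<std::size_t>(kSize) * kSize, 0.0);

        double sum = 0.0;
        for (int y = -radius; y <= radius; ++y) {
            for (int x = -radius; x <= radius; ++x) {
                const std::size_t idx =
                    static_cast<std::size_t>(y + radius) * kSize + (x + radius);
                const double e = std::exp(-(x * x + y * y) / (2.0 * sigma * sigma));
                gauss[idx] = e / (2.0 * pi * sigma * sigma);                      // eq. 1
                dGdx[idx] = -x * e / (2.0 * pi * sigma * sigma * sigma * sigma);  // eq. 4
                sum += gauss[idx];
            }
        }
        for (double& v : gauss) v /= sum;   // normalise the smoothing kernel to sum to one
    }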
1.4. Anisotropic diffusion
The Gaussian filters smooth image content including edges, whilst the image derivatives extract the image edges. The anisotropic diffusion filters were introduced to smooth the image, i.e. denoising, without removing useful edge information in the image [Perona and Malik 1990]. In this case, the images are modelled as a time-dependent diffusion process. Isotropic diffusion can be characterised by the heat
equation, ∂I/∂t = α∇²I = α(∂²I/∂x² + ∂²I/∂y²), where α is the diffusivity coefficient. A numerical solution to the heat equation is the Forward-Time Central-Space (FTCS) method.
Anisotropic diffusion considers that the diffusion rates differ in different directions at a given time t, by using the “flux function” c_t(x, y) as an adaptive diffusivity coefficient in the heat equation, ∂I/∂t = c_t(x, y)∙∇²I + ∇c∙∇I. This diffusivity coefficient function controls the diffusion rate based on how strong the edges are in each direction; it is therefore a function of the image gradient. One example diffusivity coefficient is given by:
c_t(x, y) = c(\|\nabla I\|) = e^{-\left(\frac{\|\nabla I\|}{K}\right)^{2}} \qquad (eq. 6)
where 𝛫 is the user-defined parameter that controls the strength of the anisotropic diffusion. Using the FTCS numerical scheme, an iterative process can be used to filter the original image. In each iteration, we update the intensity value I^t(i, j) at time t to I^{t+1}(i, j) at time t + 1:
I^{t+1}(i, j) = I^{t}(i, j) + \lambda \left[ c_t(i-1, j)\, \nabla I(i-1, j) + c_t(i+1, j)\, \nabla I(i+1, j) + c_t(i, j-1)\, \nabla I(i, j-1) + c_t(i, j+1)\, \nabla I(i, j+1) \right] \qquad (eq. 7)
where 𝜆 is the time-step constant, typically set within (0, 0.25] for a stable solution.
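The sketch below shows one possible reading of eqs. 6 and 7 for a single time step, following the common Perona-Malik discretisation in which ∇I(i−1, j) and the other neighbour terms are taken as the intensity differences towards the four neighbours, and the flux of eq. 6 is evaluated from those differences. The free function, the row-major float image and the boundary handling are all illustrative assumptions rather than part of the mandatory class design.

    #include <cmath>
    #include <vector>

    // Illustrative sketch of one diffusion step (eqs. 6 and 7) on a row-major float image.
    std::vector<float> diffuseOnce(const std::vector<float>& img, int width, int height,
                                   float kappa, float lambda)
    {
        // Zero-padded pixel access, mirroring the boundary handling required in Section 3.
        auto at = [&](int x, int y) -> float {
            if (x < 0 || x >= width || y < 0 || y >= height) return 0.0f;
            return img[y * width + x];
        };
        // eq. 6: flux coefficient as a function of the local gradient magnitude.
        auto flux = [&](float grad) {
            const float r = grad / kappa;
            return std::exp(-r * r);
        };

        std::vector<float> out(img.size(), 0.0f);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const float centre = at(x, y);
                const float dN = at(x, y - 1) - centre;   // differences towards the four neighbours
                const float dS = at(x, y + 1) - centre;
                const float dW = at(x - 1, y) - centre;
                const float dE = at(x + 1, y) - centre;
                // eq. 7: weight each neighbour difference by its flux and step by lambda.
                out[y * width + x] = centre + lambda *
                    (flux(std::fabs(dN)) * dN + flux(std::fabs(dS)) * dS +
                     flux(std::fabs(dW)) * dW + flux(std::fabs(dE)) * dE);
            }
        }
        return out;
    }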
2. OOP Design: Classes and Methods
The submitted code should contain the following classes, declared in the header file “NaiveImage.h”, with the member function bodies defined in the “NaiveImage.cpp” file.
2.1. The NaiveImage class
The NaiveImage class should:
• Have methods to obtain and modify the intensity values at a specified location, getIntensityValue and setIntensityValue, respectively, as well as image reading and writing functions, readFile and writeFile, respectively. There are examples in the provided Image Filtering tutorial (hereinafter referred to as “the Tutorial”).
• Have a member function filter that should:
o Accept one input argument, an object of the filter classes described below [1],
o Loop through all the pixels in the image [3],
o Call the member function compute of the filter object, to obtain the filtered value at each
pixel location [1],
o Modify the intensity values of the image with the filtered values [1], and
o Return void.
The actual computation of the filtered values should be encapsulated in the filter classes [1]. All the filters should use the same loop in the filter function. In particular, consider how to implement this for anisotropic diffusion filters, which require repeating the entire pixel-visiting loop several times; the number of iterations should be obtained from the filter object [2].
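For orientation only, the fragment below sketches one possible relationship between the shared pixel-visiting loop and a polymorphic filter object. Deliberately different names (NaiveImageSketch, BaseFilterSketch) are used to emphasise that this is not the mandatory design: what actually belongs in BaseFilter is the subject of Report-Q1, and the snapshot copy and iteration handling shown here are only one way of satisfying the requirements above.

    #include <vector>

    // Illustrative stand-in for the real NaiveImage; only what the loop needs is shown.
    struct NaiveImageSketch {
        int width = 0, height = 0;
        std::vector<float> data;                          // row-major intensity values
        float getIntensity(int i, int j) const { return data[j * width + i]; }
        void  setIntensity(int i, int j, float v) { data[j * width + i] = v; }
    };

    // One possible base interface (cf. Report-Q1): the pixel loop stays in the image class,
    // the per-pixel arithmetic stays in the filter object.
    class BaseFilterSketch {
    public:
        virtual ~BaseFilterSketch() {}
        virtual float compute(const NaiveImageSketch& img, int i, int j) const = 0;
        virtual int numIterations() const { return 1; }   // n for anisotropic diffusion
    };

    // Sketch of the single shared loop used by every filter type.
    void filterSketch(NaiveImageSketch& img, const BaseFilterSketch& f)
    {
        for (int it = 0; it < f.numIterations(); ++it) {
            const NaiveImageSketch source = img;          // snapshot: read old values, write new ones
            for (int j = 0; j < img.height; ++j)
                for (int i = 0; i < img.width; ++i)
                    img.setIntensity(i, j, f.compute(source, i, j));
        }
    }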
2.2. The Filter classes
The BaseFilter class will be used to derive all the following filter classes [3]; Report-Q1: Consider what should be included in this class and explain your decision briefly in the report. [2]
The GaussianFilter class should:
• Have a constructor accepting a single input argument, the parameter sigma (𝜎 in eq. 1) [1]; Report-Q2: Consider how to determine the kernel size given sigma and explain your method in the report. [3]; then construct a 2D Gaussian kernel for the filter (eq. 1) [2],
• Have an overriding member function compute that samples an appropriate region of the original image [3] and performs the convolution operation (eq. 0) with the constructed kernel at the pixel location [2], and
• Return the filtered value at the current location [1].
The GradientFilter class should:
• Have a constructor accepting a single input argument, the parameter sigma (𝜎 in eq. 4) [1], then construct two finite difference kernels (eq. 2) and two 2D Gaussian derivative kernels (eq. 4) [4],
• Have an overriding member function compute that accepts an input argument (a type flag) to specify whether the gradient in X, the gradient in Y (eq. 4) or the magnitude (eq. 5) should be returned [1],
• The overriding member function compute should sample an appropriate region of the original image [3] and perform the convolution operation(s) (eq. 3) with the constructed kernels at the pixel location [2]; Report-Q3: Consider what can be done to minimise code repetition with the implemented Gaussian filter and describe your method in the report. [2],
• Return the filtered value at the current location, according to the specified type flag [3].
• If sigma is set to 0, use finite difference to compute the gradient (eq. 2) [3].
• Report-Q4: Compare the finite difference and the Gaussian derivatives with different sigma values,
and summarise your findings in the report with plotted filtered images. [6]
The AnisotropicDiffusion class should:
• Have a constructor accepting three parameters [1]:
o kappa (𝛫 in eq. 6),
o lambda (𝜆 in eq. 7), and
o n, the number of iterations (or time steps) – this should be used in the NaiveImage::filter
loop for the iterative diffusion filtering [2].
• Perform necessary pre-computing [2]. Report-Q5: Consider what can be pre-computed when initialising the object, without re-computing it in each step of the NaiveImage::filter loop. [1],
• Have an overriding member function compute that samples an appropriate region of the original image [3] and performs the diffusion filtering operation (eq. 7) with the specified parameters [5], and
• Return the filtered value at the current location [1].
• Report-Q6: Experiment with different lambda and n values to investigate their impact on the filtering process, and summarise your findings in the report. [2]
• Report-Q7: Compare the filtered images with different kappa values, and summarise your findings in the report with plotted filtered images. [3]
3. The Tasks
There are a number of numerical implementation details that need to be considered:
• All the convolution kernels should be normalised, i.e. sum to one [2];
• The boundary issue (where no complete neighbourhood of pixels is available, e.g. at the first and last rows/columns) is an example of such practical implementation details. Zero-padding is required in this coursework (Hint: there is an example in the Tutorial) [2];
• The image reading and writing functions used in this coursework only accept integer-type images. However, all the filtering processes require float precision. Make sure the classes and functions are using the correct data types. Modify the reading and writing functions NaiveImage::readFile and NaiveImage::writeFile, such that the image intensity values will be rescaled/converted between integers and floats, and all the filtered images saved from the programme will be readable by the MATLAB function “naive_image_read.m” provided in the Tutorial (a small conversion sketch is given after this list) [5]. Report-Q8: Explain the details of the implemented data type conversion strategy, indicating where data type conversion is needed and implemented, in the report. [3]
• Any significant variable passing should use either a reference or a pointer for memory efficiency [5]. Exceptions include small variables such as parameters, and cases where the original variable needs to be kept unmodified. Report-Q9: Summarise all the exceptions in your implementation of the mandatory functions and provide explanations in the report. [3]
• All the dynamically-allocated variables will be checked for potential memory leaks [3].
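As a small illustration of the integer/float conversion mentioned above (see also Report-Q8), the helper below shows one way a filtered float value could be clamped and rounded back to an integer before writing. The [0, 255] output range is an assumption made only for this example; the actual intensity range of the .nim format is defined by the Tutorial code.

    #include <algorithm>
    #include <cmath>

    // Illustrative only: convert a filtered float back to an integer intensity for writeFile.
    // The [0, 255] range is an assumed example, not a specification of the .nim format.
    int toStoredIntensity(float value)
    {
        const float clamped = std::min(255.0f, std::max(0.0f, value));
        return static_cast<int>(std::lround(clamped));
    }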
Submit a “coursework_tasks.cpp” file containing a main function, which should:
• Be compiled to a command line programme, together with the submitted “NaiveImage.h” and “NaiveImage.cpp” files [5];
• Accept three command line arguments with the following syntax (a minimal main function sketch is given after this list) [3]:
coursework_tasks input_pathname output_pathname task_id
o coursework_tasks is the name of the compiled executable programme;
o input_pathname is the full path of the input NaiveImage file with extension .nim;
o output_pathname is the full path of the output NaiveImage file; and
o task_id is an integer number specifying which task to run when the programme runs.
Only one task should run each time. The required tasks are detailed below.
• Read a .nim image in the file specified by input_pathname [2];
• Perform one of the following tasks on the input image [12]:
o Task 1: perform a Gaussian filtering with 𝜎 = 3;
o Task 2: compute the gradient in x direction using finite difference;
o Task 3: compute the gradient in y direction using finite difference;
o Task 4: compute the gradient magnitude using finite difference;
o Task 5: compute the gradient in x direction using Gaussian derivatives with 𝜎 = 2;
o Task 6: compute the gradient in y direction using Gaussian derivatives with 𝜎 = 5;
o Task 7: compute the gradient magnitude using Gaussian derivatives with 𝜎 = 1.5;
o Task 8: perform the anisotropic diffusion filtering with 𝛫 = 12.5, 𝑛 = 5, 𝜆 = 0.25; and
o Task 9: perform the anisotropic diffusion filtering with 𝛫 = 50.0, 𝑛 = 20, 𝜆 = 0.1.
• Save the filtered images in the specified output_pathname [2].
• Report-Q10: Write a brief summary for each of these nine tasks including the command you used
and the filtered images in the report. [9]
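A minimal sketch of how the command-line entry point could be organised is shown below. The class and function names are those mandated above, but the exact signatures of readFile, writeFile and the filter constructors are assumptions here, and only the first task is written out; the remaining cases would construct GradientFilter and AnisotropicDiffusion objects with the parameters listed above.

    #include <cstdlib>
    #include <iostream>
    #include <string>
    #include "NaiveImage.h"

    // Illustrative sketch of coursework_tasks.cpp; method signatures are assumptions.
    int main(int argc, char* argv[])
    {
        if (argc != 4) {
            std::cerr << "Usage: coursework_tasks input_pathname output_pathname task_id\n";
            return EXIT_FAILURE;
        }
        const std::string inputPath = argv[1];
        const std::string outputPath = argv[2];
        const int taskId = std::atoi(argv[3]);

        NaiveImage image;
        image.readFile(inputPath);

        switch (taskId) {
        case 1: { GaussianFilter gaussian(3.0f); image.filter(gaussian); break; }
        // cases 2-9: GradientFilter / AnisotropicDiffusion with the parameters listed above
        default:
            std::cerr << "Unknown task_id: " << taskId << "\n";
            return EXIT_FAILURE;
        }

        image.writeFile(outputPath);
        return EXIT_SUCCESS;
    }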
4. Final remarks:
The MATLAB utility functions provided in the Tutorial should not be submitted; any customised MATLAB functions will be ignored. The C/C++ code in the Tutorial is only for reference; some of it needs significant updates to accommodate the requirements of this coursework.
This document only outlines/defines the assessed key classes, functions and parameters; you should define and use other useful variables and classes as needed. However, you should only use the C++ Standard Library, without code from any external libraries or any other open-source code base.
The design and implementation in this coursework are not optimised for real-world requirements, such as performance or maintainability. They are designed to assess your ability to use C/C++ with object-oriented programming for medical image analysis tasks. For example, more efficient filtering algorithms can be implemented with separable kernels and the fast Fourier transform, as in most other libraries.
The submitted code should work with any 2D image converted by the provided MATLAB function “image2naive.m”. For illustration purposes in the report, the ultrasound images provided in the Tutorial should be used.
Finally, a checklist is provided for the coursework submission zip file:
✓ a “NaiveImage.h” file,
✓ a “NaiveImage.cpp” file,
✓ a “coursework_tasks.cpp” file, and
✓ a “report.pdf” file.
All other submitted files will be ignored.