
Computer Vision
Project Description
In genetics, phenotype [1] is the term used for the composite observable traits or characteristics of an organism. The phenotype of a plant describes characteristics such as number of leaves, architecture, visual age or maturity level, height, leaf shape and so on. In this project we shall explore an image-based approach to plant phenotyping, covering vision tasks that identify plant types, localise their positions in an image, and segment the plant and its leaves. See Figure 1 for a high-level overview of image-based plant phenotyping.
Figure 1: Image-based plant phenotyping [2]
1 DATASET
For this project you will be using the Plant Phenotyping Dataset available here [2]. To download the data, follow the very first link provided after filling out the form. Once downloaded, the dataset is organized into three subfolders ‘Plant’, ‘Stacks’ and ‘Tray’. In this project, we shall be using the data from the ‘Plant’ and ‘Tray’ folders only. The dataset to be used with each task will be specified along with the task specification. Some image samples and leaf segmentation results are shown in Figure 2.
Figure 2: Image Samples from the dataset, (Left) Plant image; (Right) Leaf segmentation.
2 INDIVIDUAL COMPONENT (15 MARKS)
For this component you will perform image classification of plants into two plant classes, namely Arabidopsis and tobacco.
INPUT DATA:
• Arabidopsis: Plant/Ara2013-Canon/*_rgb.png (165 files), Plant/Ara2013-Canon/Metadata.csv.
• Tobacco: Plant/Tobacco/*_rgb.png (62 files), Plant/Tobacco/Metadata.csv.
TASK: In this task you will implement a Python solution to distinguish Arabidopsis plant images from tobacco plant images. You can implement either a supervised or an unsupervised classification technique. If you implement a supervised technique, you may use either all of the training data provided or a portion of it, depending on your chosen algorithm and/or computational resource limitations.
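If you opt for a supervised approach, a minimal baseline might look like the sketch below, which pairs a hand-crafted colour-histogram feature with a linear SVM. It is an illustration only: OpenCV and scikit-learn are assumed to be available, and the feature choice, the 80/20 split and the label encoding are assumptions of this sketch, not part of the specification.

import glob

import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def colour_histogram(path, bins=8):
    """Flattened 3-D HSV colour histogram as a simple hand-crafted feature."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

ara = sorted(glob.glob('Plant/Ara2013-Canon/*_rgb.png'))  # 165 files
tob = sorted(glob.glob('Plant/Tobacco/*_rgb.png'))        # 62 files
X = np.array([colour_histogram(p) for p in ara + tob])
y = np.array([0] * len(ara) + [1] * len(tob))  # 0 = Arabidopsis, 1 = tobacco

# illustrative stratified 80/20 train/test split
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = SVC(kernel='linear', probability=True).fit(X_tr, y_tr)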

EVALUATION: You should evaluate your implementation using precision, recall and AUC [3] criteria.
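As one possible realisation of this evaluation, the sketch below computes all three metrics with scikit-learn, reusing the (assumed) clf, X_te and y_te from the previous sketch.

from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_pred = clf.predict(X_te)               # hard labels for precision / recall
y_score = clf.predict_proba(X_te)[:, 1]  # score for the positive class (AUC)

print('precision:', precision_score(y_te, y_pred))
print('recall:', recall_score(y_te, y_pred))
print('AUC:', roc_auc_score(y_te, y_score))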
DELIVERABLES FOR INDIVIDUAL COMPONENT: Each student will submit an individual report of at most 3 pages (2-column IEEE format), along with their source code(s), by Friday of week 7, Oct 30th, 19:59:59. The report should include the following parts:
1. Introduction and Background: briefly discuss your understanding of the task specification and data, and give a brief literature review of relevant techniques.
2. Method (implementation): justify and explain the selection of the techniques you implemented, using relevant references when necessary.
3. Experiment: explain the experimental setup and the evaluation methods and metrics used.
4. Results and Discussion: provide some visual results in addition to statistical evaluation, along with a discussion on performance and outcomes.
5. References: list all sources including papers and code used for this task, with details of what source was used for which part of the task.
3 GROUP COMPONENT (25 MARKS)
The group component consists of 3 tasks, each of which is to be completed as a group and will be evaluated ONCE for the whole group.
3.1 TASK 1
For this task, implement a Python solution to detect and localise plants in a tray image. The dataset has a total of 70 tray images, with a bounding box around each individual plant in the tray. Detect every plant in a given test image, draw a bounding box around each of them, and display the total number of plants in the image. Also evaluate the performance of the algorithm using Average Precision (AP) [4], since there is only one class; a minimal evaluation sketch is given after the input data below.
Input Data:
• Ara2012: Tray/Ara2012/*_rgb.png (16 files), Tray/Ara2012/*_bbox.csv (16 files)
• Ara2013 (Canon): Tray/Ara2013-Canon/*_rgb.png (27 files), Tray/Ara2013-Canon/*_bbox.csv (27 files)
• Ara2013 (RPi): Tray/Ara2013-RPi/*_rgb.png (27 files), Tray/Ara2013-RPi/*_bbox.csv (27 files).
• CSV files containing plant bounding box annotations (*_bbox.csv) report the bounding box coordinates of the four corners in the following order: c1x, c1y, c2x, c2y, c3x, c3y, c4x, c4y.
Note: You are required to use traditional feature extraction techniques from computer vision (hand-crafted or engineered features, not deep learning features) to implement this task.
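As a rough guide to the evaluation, the following sketch computes the IoU between axis-aligned boxes and a VOC-style AP [4]. It is a minimal illustration, not the required implementation: it assumes boxes have been converted from the four CSV corners to (x_min, y_min, x_max, y_max) tuples, matches detections greedily at an (assumed) IoU threshold of 0.5 and, for brevity, operates on one image's detections; in practice you would pool scored detections across all test images with per-image matching.

import numpy as np

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box); gt_boxes: list of boxes."""
    if not detections:
        return 0.0
    detections = sorted(detections, key=lambda d: -d[0])
    matched, tps = set(), []
    for _, box in detections:
        ious = [iou(box, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and best not in matched:
            matched.add(best)   # first sufficient match is a true positive
            tps.append(1.0)
        else:
            tps.append(0.0)     # duplicate or poor match: false positive
    tp = np.cumsum(tps)
    rec = tp / max(len(gt_boxes), 1)
    prec = tp / np.arange(1, len(tps) + 1)
    # VOC-style AP: monotone precision envelope, then the area under the
    # precision-recall curve.
    mrec = np.concatenate(([0.0], rec, [1.0]))
    mpre = np.concatenate(([0.0], prec, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))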
3.2 TASK 2
An important plant breeding trait that reflects overall plant quality is biomass, measured as projected leaf area (PLA), which is effectively the number of plant pixels. For this task, implement a Python solution to find the PLA by segmenting the plant from its background. Evaluate the segmentation performance using the Dice Similarity Coefficient (DSC) and Intersection over Union (IOU) measures.
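A naive baseline for this segmentation is to threshold the green hue band in HSV and clean the mask with morphology, as in the sketch below. The threshold values and the file name are illustrative assumptions and would need tuning against the actual images; the input data are listed after the sketch.

import cv2

def segment_plant(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # keep pixels whose hue falls in a (rough, assumed) green band
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    # remove small speckles and fill small holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask > 0  # boolean foreground mask

mask = segment_plant('Tray/Ara2012/ara2012_tray01_rgb.png')  # hypothetical file name
print('PLA (plant pixels):', int(mask.sum()))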
Input Data:
• Ara2012: Tray/Ara2012/*_rgb.png (16 files), Tray/Ara2012/*_fg.png (16 files)
• Ara2013 (Canon): Tray/Ara2013-Canon/*_rgb.png (27 files), Tray/Ara2013-Canon/*_fg.png (27 files).
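For reference, both metrics reduce to a few lines of NumPy for binary masks. The sketch below assumes the prediction and the *_fg.png ground truth have already been loaded as boolean arrays of identical shape (the loading step is omitted).

import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def iou(pred, gt):
    """Intersection over Union: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0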
3.3 TASK 3
When leaves overlap heavily, as in rosette plants (plants having a circular leaf arrangement), PLA may not be an accurate measure of plant biomass. In such instances, segmentation of the individual leaves is required. For this task, implement a Python solution to perform individual leaf segmentation, which is a multi-instance segmentation problem. Evaluate the performance using the Symmetric Best Dice measure [2].
Input Data:
• Ara2012: Plant/Ara2012/*_rgb.png (120 files), Plant/Ara2012/*_label.png (120 files)
• Ara2013 (Canon): Plant/Ara2013-Canon/*_rgb.png (165 files), Plant/Ara2013-Canon/*_label.png (165 files)
• Tobacco: Plant/Tobacco/*_rgb.png (62 files), Plant/Tobacco/*_label.png (62 files).
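Symmetric Best Dice (SBD) [2] can be implemented directly from its definition: for each leaf in one labelling, take the best Dice score against any leaf in the other labelling, average these, and keep the worse of the two directions. A minimal sketch, assuming pred and gt are integer label images of the same shape with 0 as background:

import numpy as np

def best_dice(a, b):
    """Mean over labels in `a` of the best Dice against any label in `b`."""
    scores = []
    for la in np.unique(a):
        if la == 0:
            continue  # skip background
        mask_a = a == la
        best = 0.0
        for lb in np.unique(b):
            if lb == 0:
                continue
            mask_b = b == lb
            d = 2.0 * np.logical_and(mask_a, mask_b).sum() / (
                mask_a.sum() + mask_b.sum())
            best = max(best, d)
        scores.append(best)
    return float(np.mean(scores)) if scores else 0.0

def symmetric_best_dice(pred, gt):
    """SBD = min(BD(pred, gt), BD(gt, pred))."""
    return min(best_dice(pred, gt), best_dice(gt, pred))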
3.4 DELIVERABLES FOR GROUP COMPONENT
The deliverables for the group project are 1) a group demo and 2) a group report. Both are due in Week 10. More detailed information on the two deliverables follows.
3.4.1 Demo
Project group demos will be scheduled in week 10. Each group will give a 12-minute live online presentation and demo to its own tutor and one assessor; students from other groups may tune in as well. The demo should include a short slide-show presentation (5 slides maximum) explaining your methods and evaluation, followed by a demonstration of your methods and a brief discussion of how they perform on the given data. Afterwards, you will answer questions from the tutor/assessor/audience. All group members must be present for the demo. The demo roster will be released closer to the deadline.

3.4.2 Report
Each group will also submit a report (maximum 10 pages, 2-column IEEE format) along with the source code(s), before 20 Nov 2020 19:59:59. The report should include:
1. Introduction: Discuss your understanding of the task specification and data sets.
2. Literature Review: Review relevant techniques in the literature, along with any necessary background to understand the techniques you selected.
3. Methods: Justify and explain the selection of the techniques you implemented, using relevant references and theories where necessary.
4. Experimental Setup: Explain the experimental setup and evaluation methods.
5. Results and Discussion: Provide statistical and visual results, along with a discussion of method performance and outcomes of the experiments.
6. Conclusion: Summarise what worked / did not work and recommend future work.
7. Contribution of Group Members: State each group member's contribution in brief. In at most 3 lines per member, describe the component(s) that each group member contributed to.
8. References: List the references to papers and code used in your work, including sources used in the code, with details of what is used.
3.4.3 Group Project Logistics
• Each member of a team generally receives the same mark for the project; however, where individual contributions to the software development and the report are highly unequal, this mark will be adjusted to reflect the level of contribution, using peer assessments entered on the Moodle Team Evaluation tool. Peer review is mandatory, and any student who does not enter their review will receive 0 for the Contribution of Group Members section of the report. Instructions on how to complete the peer review will be posted later.
• It is recommended that all communications for the group project be maintained on an online system, for example the Microsoft Teams platform. Your assigned tutor will create a Team in Microsoft Teams for each project group, then invite group members to it. Your group may use this Team for communication with your tutor as well as for the consultation sessions. In addition, you may optionally maintain all the communication, code sharing and task planning within your group on Teams. Please keep the code sharing private within the group to avoid the possibility of plagiarism. If you prefer another platform for the group communication, we would still recommend that you maintain it systematically. Some useful apps you can install in your Microsoft Teams include:
o GitHub / Bitbucket for code sharing
o Asana / Trello for task planning
4 REFERENCES
[1]. https://en.wikipedia.org/wiki/Phenotype

[2]. Massimo Minervini, Andreas Fischbach, Hanno Scharr, Sotirios A. Tsaftaris, Finely-grained annotated datasets for image-based plant phenotyping, Pattern Recognition Letters, Volume 81, 2016, Pages 80-89, ISSN 0167-8655.
[3]. https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc
[4]. Everingham, M., Van Gool, L., Williams, C.K.I. et al. The PASCAL Visual Object Classes (VOC) Challenge. Int J Comput Vis 88, 303–338 (2010). https://doi.org/10.1007/s11263-009-0275-4
Some Useful Papers
[5]. H. Scharr, M. Minervini, A.P. French, C. Klukas, D. Kramer, Xiaoming Liu, I. Luengo Muntion, J.-M. Pape, G. Polder, D. Vukadinovic, Xi Yin, and S.A. Tsaftaris. Leaf segmentation in plant phenotyping: A collation study. Machine Vision and Applications, pages 1-18, 2015.
[6]. M. Minervini, M.M. Abdelsamea, S.A. Tsaftaris. Image-based plant phenotyping with incremental learning and active contours. Ecological Informatics 23, 35–48, 2014.
[7]. Minervini M. et al., Image analysis: the new bottleneck in plant phenotyping, IEEE Signal Process. Mag. 2015; 32: 126-131.
[8]. Augustin, M., Haxhimusa, Y., Busch, W., Kropatsch, W.G.: A framework for the extraction of quantitative traits from 2D images of mature Arabidopsis thaliana. Mach. Vis. Appl. 27(5), 647–661 (2016).
[9]. Augustin, M., Haxhimusa, Y., Busch, W., Kropatsch, W.G.: Image-based phenotyping of the mature Arabidopsis shoot system. In: Computer Vision – ECCV 2014 Workshops, vol. 8928, pp. 231–246. Springer (2015).
[10]. Shubhra Aich, Ian Stavness, Leaf Counting With Deep Convolutional and Deconvolutional Networks. The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2080-2089.
[11]. Pape J.M., Klukas C. (2015) 3-D Histogram-Based Segmentation and Leaf Detection for Rosette Plants. In: Agapito L., Bronstein M., Rother C. (eds) Computer Vision – ECCV 2014 Workshops. ECCV 2014. Lecture Notes in Computer Science, vol 8928. Springer, Cham.
[12]. Mario Valerio Giuffrida, Massimo Minervini and Sotirios Tsaftaris. Learning to Count Leaves in Rosette Plants. In S. A. Tsaftaris, H. Scharr, and T. Pridmore, editors, Proceedings of the Computer Vision Problems in Plant Phenotyping (CVPPP), pages 1.1-1.13. BMVA Press, September 2015.
[13]. Jean-Michel Pape and Christian Klukas. Utilizing machine learning approaches to improve the prediction of leaf counts and individual leaf segmentation of rosette plant images. In S. A. Tsaftaris, H. Scharr, and T. Pridmore, editors, Proceedings of the Computer Vision Problems in Plant Phenotyping (CVPPP), pages 3.1-3.12. BMVA Press, September 2015.