
COMP 9517 WK5
2021 T1
Introduction
• Pattern recognition is the scientific discipline whose goal is to automatically recognise patterns and regularities in data (e.g. images)
• Examples:
• Object recognition / object classification
• Text classification (e.g. spam / non-spam emails)
• Speech recognition
• Event detection
• Recommender systems

Pattern recognition categories
Based on learning paradigm:
• Supervised learning: learning patterns in a set of data together with available labels (ground truth)
• Unsupervised learning: finding patterns in a set of data without any labels available
• Semi-supervised learning: uses a combination of labelled and unlabelled data to learn patterns
• Weakly supervised learning: uses noisy, limited, or imprecise labels for the data to learn patterns in a supervised setting
Pattern Recognition Systems
Pattern Recognition Concepts

More Concepts
Pattern Recognition Overview
Features and Descriptions

Feature Vector Representation
Feature Extraction
Supervised Learning Overview

Classification

Nearest Class Mean Classifier
• Pros:
• Simple
• Fast
• Works well when classes are compact and far from each other.
• Cons:
• For complex classes (e.g. multimodal, non-spherical) it may give very poor results
• Cannot handle outliers and noisy data well
• Cannot handle missing data
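
A minimal NumPy sketch of the nearest class mean rule (all names here are illustrative, not from the lecture):

    import numpy as np

    def fit_class_means(X, y):
        """Compute the mean feature vector of each class from training data."""
        classes = np.unique(y)
        means = np.stack([X[y == c].mean(axis=0) for c in classes])
        return classes, means

    def predict_nearest_mean(X, classes, means):
        """Assign each sample to the class whose mean is closest (Euclidean)."""
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        return classes[np.argmin(dists, axis=1)]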

K-nearest Neighbours
• K-NN is a classifier that decides the class label of a sample based on its K nearest samples
• The sample is assigned to the class that has the majority of members in the neighbourhood
• The neighbours are selected from a set of samples for which the class is known
• For every new test sample, the distances between the test sample and all training samples are computed, and the K nearest training samples are used to decide the class label of the test sample (see the sketch below)
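
A minimal sketch of this procedure (names are illustrative; ties are broken by the most common label among the K neighbours):

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x_test, k=3):
        """Classify one test sample by majority vote of its k nearest training samples."""
        dists = np.linalg.norm(X_train - x_test, axis=1)  # distance to every training sample
        nearest = np.argsort(dists)[:k]                   # indices of the k closest samples
        votes = Counter(y_train[i] for i in nearest)
        return votes.most_common(1)[0][0]                 # majority class in the neighbourhood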
K-nearest Neighbours
• Pros:
• Very simple and intuitive
• Easy to implement
• No a priori assumptions
• No training step
• Decision surfaces are non-linear
• Cons:
• Slow algorithm for big datasets
• Does not perform well when the number of variables grows (curse of dimensionality)
• Needs homogeneous feature types and scales
• Finding the optimal number of neighbours can be challenging
Bayesian Decision Theory
• A classifier's decision may or may not be correct, so the setting should be probabilistic (soft decisions)
• Probability distributions may be used to make classification decisions with the least expected error rate, as sketched below
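
A minimal sketch of the resulting decision rule, assuming the class priors P(c) and class-conditional densities p(x|c) are available (given or estimated from data); all names are illustrative:

    def bayes_classify(x, priors, likelihoods):
        """Pick the class with the highest posterior P(c|x), proportional to p(x|c) * P(c).

        priors:      dict mapping class -> prior probability P(c)
        likelihoods: dict mapping class -> density function p(x|c)
        """
        scores = {c: priors[c] * likelihoods[c](x) for c in priors}
        return max(scores, key=scores.get)  # dividing by p(x) would not change the argmax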

Bayesian Decision Theory

Bayesian Decision Theory Risk
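
A minimal sketch of the minimum-conditional-risk rule, assuming a loss matrix loss[i, j] giving the cost of taking action i when the true class is j (names are illustrative):

    import numpy as np

    def min_risk_action(posteriors, loss):
        """Choose the action minimising the conditional risk
        R(a_i | x) = sum_j loss[i, j] * P(c_j | x)."""
        risks = loss @ posteriors  # expected loss of each action
        return np.argmin(risks)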
Bayesian Decision Theory
• Pros:
• Considers uncertainties
• Permits combining new information with current knowledge
• Simple & intuitive
• Cons:
• Computationally expensive
• Choice of priors can be subjective
Decision Trees Introduction

Decision Trees: Example

Decision Trees Overview
Decision Trees Construction
Constructing Optimal Decision Tree

Decision Trees: Entropy
Decision Trees: Information Gain
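
A minimal sketch of these two quantities for a binary split (function names are illustrative):

    import numpy as np

    def entropy(labels):
        """Shannon entropy H(S) = -sum_i p_i * log2(p_i) over the class frequencies."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(parent, left, right):
        """Gain of a split = parent entropy minus the weighted entropy of the children."""
        n = len(parent)
        children = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        return entropy(parent) - children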
Decision Trees Summary
• Pros:
• Easy to interpret
• Can handle both numerical and categorical data
• Can handle outliers and missing values
• Gives information on importance of features (feature selection)
• Cons:
• Tends to overfit
• Only axis-aligned splits
• Greedy algorithm (may not find the best tree)

Ensemble Learning
Random Forests

Random Forests: Breiman’s
Random Forests
• Pros
• Unexcelled in accuracy among current classical ML algorithms for many problems
• Works efficiently on large datasets
• Handles thousands of input features without feature selection
• Handles missing values effectively
• Cons
• Less interpretable than an individual decision tree
• More complex and more time-consuming to construct than decision trees
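
A minimal usage sketch with scikit-learn's RandomForestClassifier (the dataset and hyperparameter values are illustrative, not from the lecture):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Each tree is trained on a bootstrap sample of the data; at each split only
    # a random subset of the features (here sqrt of the total) is considered.
    clf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
    clf.fit(X_train, y_train)

    print("Test accuracy:", clf.score(X_test, y_test))
    print("Feature importances:", clf.feature_importances_)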