
Subject Review
COMP90042 Natural Language Processing, Lecture 22
COPYRIGHT 2020, THE UNIVERSITY OF MELBOURNE

Preprocessing
• Sentence segmentation
• Tokenisation
‣ Subword tokenisation
• Word normalisation
‣ Derivational vs. inflectional morphology
‣ Lemmatisation vs. stemming
• Stop words

N-gram Language Models
• Derivation
• Smoothing techniques
‣ Add-k
‣ Absolute discounting
‣ Katz Backoff
‣ Kneser-Ney smoothing
‣ Interpolation
• Evaluation
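As a quick refresher on the add-k smoothing listed above, here is a minimal sketch of an add-k bigram estimate; the toy corpus, function name, and k value are invented for illustration, not taken from the lecture materials.

```python
from collections import Counter

def addk_bigram_prob(bigrams, unigrams, vocab_size, w1, w2, k=1.0):
    """Add-k smoothed bigram probability:
    P(w2 | w1) = (count(w1, w2) + k) / (count(w1) + k * |V|)."""
    return (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * vocab_size)

# Toy corpus for illustration
tokens = "the cat sat on the mat".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
V = len(unigrams)  # vocabulary size |V| = 5

p_seen = addk_bigram_prob(bigrams, unigrams, V, "the", "cat")    # seen bigram
p_unseen = addk_bigram_prob(bigrams, unigrams, V, "the", "sat")  # unseen bigram
```

Note how the unseen bigram still receives non-zero mass, which is the whole point of smoothing.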

Text Classification
• Building a classification system
• Text classification tasks
‣ Topic classification
‣ Sentiment analysis
‣ Native language identification
‣ Automatic fact-checking
• Algorithms
‣ Naive Bayes, logistic regression, SVM
‣ kNN, neural networks
• Bias vs. variance
• Evaluation metrics
‣ Precision, recall, F1
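The precision/recall/F1 metrics listed above can be computed directly from confusion counts; this sketch (function name and example counts invented for illustration) shows the standard definitions.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive,
    false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one class of a classifier's output
p, r, f = prf1(tp=8, fp=2, fn=4)
```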

Part-of-Speech Tagging
• English POS
‣ Open vs. closed POS classes
• Tagsets
‣ Penn Treebank tags
• Automatic taggers
‣ Rule-based
‣ Statistical
– Unigram, classifier-based, HMM

Hidden Markov Models
• Probabilistic formulation
‣ Parameters: emission and transition probabilities
• Training
• Viterbi algorithm
• Generative vs. discriminative models
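The Viterbi algorithm above can be sketched as max-product dynamic programming over the trellis; the HMM parameters below are an invented toy example, not from the lecture slides.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state sequence for an observation sequence
    under an HMM, via max-product dynamic programming."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Backtrace from the best final state
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), V[-1][best]

# Toy HMM (invented numbers for illustration)
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
path, prob = viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p)
```

The same trellis, filled with sums instead of maxima, gives the forward probability used in training.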

DL: Feed-forward Networks
• Word embeddings
• Convolutional networks
• Formulation
• Designing FF networks for NLP tasks
‣ Topic classification
‣ Language model
‣ POS tagging

DL: Recurrent Networks
• Formulation
• RNN language models
• LSTM
‣ Functions of gates
‣ Variants
• Designing RNNs for NLP tasks
‣ Text classification: sentiment analysis
‣ POS tagging

Lexical Semantics
• Definition of word senses, glosses
• Lexical relationships
‣ Synonymy, antonymy, hypernymy, meronymy
• Structure of WordNet
• Word similarity
‣ Path length, depth information, information content
• Word sense disambiguation
‣ Supervised vs. unsupervised

Distributional Semantics
• Matrices for distributional semantics
‣ VSM, TF-IDF, word-word co-occurrence
• Association measures: PMI, PPMI
• Count-based methods: SVD
• Neural methods: skip-gram, CBOW
• Evaluation
‣ Word similarity, analogy
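The PPMI association measure listed above can be computed from a word-word co-occurrence table; this is a minimal sketch with invented counts and function names, using the standard definition PPMI = max(0, PMI).

```python
import math
from collections import Counter

def ppmi(cooc):
    """Positive PMI from a co-occurrence count dict {(word, context): count}.
    PMI(w, c) = log2( P(w, c) / (P(w) * P(c)) ), clipped at zero."""
    total = sum(cooc.values())
    w_count, c_count = Counter(), Counter()
    for (w, c), n in cooc.items():
        w_count[w] += n
        c_count[c] += n
    scores = {}
    for (w, c), n in cooc.items():
        pmi = math.log2((n / total) /
                        ((w_count[w] / total) * (c_count[c] / total)))
        scores[(w, c)] = max(0.0, pmi)
    return scores

# Invented toy counts: "cat" co-occurs mostly with "purr", "dog" with "bark"
cooc = {("cat", "purr"): 4, ("cat", "bark"): 1,
        ("dog", "purr"): 1, ("dog", "bark"): 4}
scores = ppmi(cooc)
```

Negative PMI values (weak or anti-associations) are clipped to zero, which is what distinguishes PPMI from raw PMI.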

Contextual Representation
• Formulation with RNN
• ELMo
• BERT
‣ Objectives
‣ Fine-tuning for downstream tasks
• Transformers
‣ Multi-head attention

Discourse
• Motivation for modelling beyond words
• Discourse segmentation
‣ TextTiling
• Discourse parsing
‣ Rhetorical structure theory
• Anaphora resolution
‣ Centering
‣ Supervised models

Formal Language Theory & FST
• Formal language theory as a framework for defining language
• Regular languages
‣ Closure properties
• Finite state acceptors
‣ Word morphology, weighted variant
‣ N-gram language model as WFSA
• Finite state transducers
‣ Weighted variant, edit distance, morphological analysis
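The edit distance mentioned above (computable with a weighted FST, or equivalently by dynamic programming) can be sketched as the standard Levenshtein recurrence; the unit costs here are an assumption for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b,
    with unit cost for insertion, deletion, and substitution."""
    m, n = len(a), len(b)
    # d[i][j] = distance between a[:i] and b[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[m][n]
```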

Context-Free Grammar
• Center embedding
• Basics of CFG
• Syntactic constituents and their properties
• CFG parsing
‣ Chomsky normal form
‣ CYK
• English sentence structure (Penn Treebank)
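The CYK algorithm above can be sketched as bottom-up chart recognition for a grammar in Chomsky normal form; the toy grammar and lexicon below are invented for illustration.

```python
from itertools import product

def cyk(words, lexicon, rules):
    """CYK recognition for a CNF grammar.
    lexicon: {terminal: set of nonterminals}
    rules:   {(B, C): set of A} for binary rules A -> B C.
    Returns True iff the start symbol S spans the whole sentence."""
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= rules.get((B, C), set())
    return "S" in chart[0][n]

# Invented toy CNF grammar
lexicon = {"the": {"Det"}, "cat": {"N"}, "dog": {"N"}, "chased": {"V"}}
rules = {("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
```

Replacing the sets with per-nonterminal max probabilities turns this recogniser into the probabilistic CYK parser covered in the PCFG lecture.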

Probabilistic Context-Free Grammar
• Ambiguity in grammars
• Basics of probabilistic CFGs
• Probability of a CFG tree
• Parsing
‣ Probabilistic CYK
• Improvements
‣ Parent conditioning
‣ Head lexicalisation

Dependency Grammar
• Notion of dependency between words
• Universal Dependencies
• Properties of dependency trees
‣ Projectivity
• Parsing
‣ Transition-based
‣ Graph-based

Machine Translation
• Statistical MT
‣ Language + translation model
‣ Alignments
• Neural MT
‣ Encoder-decoder
‣ Beam search decoding
‣ Attention mechanism
• Evaluation: BLEU
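The BLEU metric above combines modified n-gram precisions with a brevity penalty; this is a simplified sentence-level sketch against a single reference (real BLEU is corpus-level and typically uses smoothing), with function name and examples invented for illustration.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against one reference:
    brevity penalty times the geometric mean of modified
    (clipped) n-gram precisions for n = 1..max_n."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(zip(*(candidate[i:] for i in range(n))))
        ref_ngrams = Counter(zip(*(reference[i:] for i in range(n))))
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped counts
        total = max(1, sum(cand_ngrams.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # geometric mean collapses to zero
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
score_perfect = bleu(ref, ref)                 # exact match
score_disjoint = bleu("a b c d".split(), ref)  # no overlap
```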

Information Extraction
• Named entity recognition
‣ NER tags, IOB tagging, models
• Relation extraction
‣ Rule-based, supervised, semi-supervised, distant supervision
‣ Unsupervised: ReVerb
• Temporal expression extraction
• Event extraction

Question Answering
• IR-based QA
‣ Question processing, answer type prediction
‣ Passage retrieval, answer extraction
• Knowledge-based QA
• Hybrid QA: IBM Watson
• Reading comprehension
‣ Models: LSTM-based, BERT

Topic Modelling
• Evolution of topic models
• LDA
‣ Sampling-based learning
‣ Hyper-parameters
• Evaluation
‣ Word intrusion
‣ Topic coherence

Summarisation
• Extractive summarisation
‣ Single-document
– Unsupervised content selection
‣ Multi-document
– Maximum marginal relevance
• Abstractive summarisation
‣ Neural models: copy mechanism
• Evaluation: ROUGE
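The ROUGE evaluation above is recall-oriented n-gram overlap; this minimal sketch of ROUGE-1 recall (function name and example sentences invented for illustration) shows the core computation.

```python
from collections import Counter

def rouge1_recall(summary, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered
    by the system summary, with counts clipped to the overlap."""
    s, r = Counter(summary), Counter(reference)
    return sum((s & r).values()) / sum(r.values())

summary = "the cat sat".split()
reference = "the cat sat down".split()
r1 = rouge1_recall(summary, reference)  # 3 of 4 reference unigrams covered
```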

Exam

Exam Structure
• 40 marks
• Gradescope
• 2 hours in total:
‣ 1 hour 45 minutes of writing
‣ 15 minutes to upload answers
• 3 parts:
‣ A: short answer questions
‣ B: method questions
‣ C: algorithm questions

Short Answer Questions
• Several short questions
‣ 1-2 sentence answers for each
‣ Definitional, e.g. what is X?
‣ Conceptual, e.g. relate X and Y; what is the purpose of Z?
‣ May call for an example illustrating a technique/problem

Method Questions
• Longer answers
• Focus on analysis and understanding
‣ Contrast different methods
‣ Outline or analyse an algorithm
‣ Motivate a modelling technique
‣ Explain or derive a mathematical equation

Algorithmic Questions
• Perform algorithmic computations
‣ Numerical computations for an algorithm on some given example data
‣ Present an outline of an algorithm on your own example
• Not required to simplify maths (e.g. leaving fractions as log(5/4) is fine)

What to Expect
• Even coverage of topics from the semester
• Be prepared for concepts that have not yet been assessed by homework / project
• Prescribed reading is fair game for topics mentioned in the lectures and workshops
• Mock exam