Lecture 14: Neural Networks – Overview and example applications
Semester 1, 2022, CIS
Copyright © University of Melbourne 2022. All rights reserved. No part of this publication may be reproduced in any form by print, photoprint, microfilm or any other means without written permission from the author.


So far …
“Traditional” machine learning
1. Identify a problem and data set
2. Engineer features
3. Train your machine learning model
4. Evaluate your model
Today
• Deep learning: warm up
• What’s the big deal?
• Example use cases
Next week(s)
• Inner workings of neural networks (week 9)
• Risks of AI/ML: bias and (un)fairness (weeks 11/12)

Introduction

The impact of Machine Learning
The AI hype / promise

Reasons for Success I: Storage & Compute
Source: https://rpradeepmenon.medium.com/an-executive-primer-to-deep-learning-80c1ece69b34

Reasons for Success II: Big data
https://www.weforum.org/agenda/2021/08/one-minute-internet-web-social-media-technology-online/

The cycle: more funding → better models → more funding → …
Feed-forward Neural Net (next week!)
Transformer [Vaswani et al., 2017]
Recurrent Neural Net [Goldberg, 2017, Ch. 14]
Convolutional Neural Net [Goldberg, 2017, Ch. 13]

Traditional vs. Deep Learning
“Traditional” models are, e.g., Naive Bayes, Logistic Regression, SVMs, probabilistic graphical models, …
• Traditional ML: feature engineering and/or feature selection
• Deep learning: the model “learns” its own representations from raw input (see the sketch below)
Advantage of more training data
• Traditional models at some point stop benefiting from more data (why?)
• Large neural networks can benefit from ever-growing data sets (why?)
Can you think of advantages of traditional approaches?
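To make the contrast concrete, here is a minimal sketch of the “traditional” style, assuming scikit-learn and a few invented summary-statistic features (the features, data, and classifier choice are illustrative assumptions, not part of the lecture):

import numpy as np
from sklearn.linear_model import LogisticRegression

def engineer_features(image):
    """Hand-crafted summary statistics standing in for expert features."""
    return np.array([image.mean(), image.std(), image.max(), image.min()])

# Dummy data: 20 random 8x8 "images" with binary labels.
rng = np.random.default_rng(0)
images = rng.random((20, 8, 8))
labels = rng.integers(0, 2, size=20)

X = np.stack([engineer_features(img) for img in images])  # features chosen by us
model = LogisticRegression().fit(X, labels)               # model only learns weights

A deep network would instead consume the 64 raw pixels per image directly and learn its own intermediate representations.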

Case Study 1:
Deep learning for Medical Image Recognition

Motivation and impact
Medical doctors…
• improve with experience
• are not universally available (rural, remote areas)
• are limited by speed
• get tired
Medical data
• Electronic health records becoming the de facto standard
• Images are a large part of the record
• (Small) repositories of “doctor-labelled images”
• Lots of unlabeled medical images/scans ← “big data”
AI to promote reliable and universally accessible health care.

Medical Image Analysis Tasks
• Detection (“Is the disease present?”)
• Localization (“Where is the kidney in this image?”)
• Segmentation (“Where are the boundaries of the lung tumor?”)

Brief history of medical AI
• Rule-based systems (1970s)
• Manual feature extraction and supervised learning (early 2000s–2015)
• Supervised neural networks (2015–)
• “Automatic” feature extraction
• Take as input raw, labeled images

Convolutional Neural Networks
• Input: raw pixels
• Analyze local patches of the image (convolution layer)
• Preserve local structure
• Generate more and more high-level shapes / features
• Eventually: predict class from final representation
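As a minimal sketch of such a network (PyTorch, the layer sizes, and the two-class output are illustrative assumptions, not prescribed by the lecture):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local patches -> 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample, keep local layout
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # compose higher-level shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # final representation -> 2 classes
)

scan = torch.randn(1, 1, 64, 64)   # one raw 64x64 single-channel "scan"
logits = model(scan)               # shape (1, 2): e.g. disease present/absent

Each convolution only looks at a small neighbourhood of pixels, which is what preserves local structure.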

Medical Image Analysis Tasks (again)
• Detection (“Is the disease present?”)
• Localization (“Where is the kidney in this image?”)
• Segmentation (“Where are the boundaries of the lung tumor?”)
Can you formalize these as a machine learning task (aka concept)?
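One hedged way to write them down (shapes and names invented for illustration):

import numpy as np

# Detection:    image -> binary label              (classification)
# Localization: image -> bounding box coordinates  (regression)
# Segmentation: image -> one class label per pixel (dense classification)

image = np.zeros((256, 256))                   # a dummy grayscale scan
detection_label = 1                            # disease present (1) / absent (0)
localization_box = (80.0, 40.0, 64.0, 96.0)    # x, y, width, height of the organ
segmentation_mask = np.zeros((256, 256), int)  # tumour/background id per pixel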

Experiments and Performance
(Very!) generally,
• increasingly, human doctors (e.g., dermatologists and radiologists) are outperformed by machine learning algorithms [Ker et al., 2017] (but be careful of evaluation error: are the test sets representative?)
• that is, despite training on often small, supervised data sets (see e.g., [Ker et al., 2017, Shen et al., 2017] for details if interested)
“Cho et al. […] ascertained the accuracy of a CNN […] in classifying individual axial CT images into one of 6 body regions: brain, neck, shoulder, chest, abdomen, pelvis. With 200 training images, accuracies of 88–98% were achieved on a test set of 6000 images.” [Ker et al., 2017]
Discuss the trade-off between model bias and evaluation bias in the context of medical applications and the quote above.

Medical image analysis: Outlook
Research challenges
• Leveraging unlabeled images
• Class imbalance in the training data
• Most patients are healthy
• Few images for rare diseases
• Patients’ perception/trust of an “AI doctor”
• Legal and moral responsibility if “things go wrong”
More medical ML problems
• Medical/surgical robotics
• Text and vision tasks: medical report summarization or retrieval
• Generating images with (neural) generative models
An expert’s outlook: https://www.youtube.com/watch?v=G1IsZeFR_Rk

Case Study 2: Chatbots!

Motivation and impact
“A computer program designed to simulate conversation with human users, especially over the Internet”
[Adamopoulou and Moussiades, 2020]
• Language as the most natural way to interact with electronic devices
• FAQ vs chatbot vs human advice
• Business/e-commerce: scale customer service
• Health, age care: improve access
• Entertainment: Alexa, Siri, …

A chatbot’s tasks
Message analysis
• Natural language understanding
• Making sense of the human’s language
• Possibly follow a multi-human conversation
Dialogue management
• Plan the content to contribute next (“turn”)
• The turn type must make sense in context (e.g., ask a clarification question if unsure; provide an answer if human asked a question; …)
• Often operates on abstract content and logic (not yet natural language)
Response generation
• Natural language generation
• Translate the turn/content into natural language
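To make the division of labour concrete, here is a toy rule-based sketch of the three modules (all rules and names invented for illustration; real systems replace each step with learned components):

def understand(message):
    """Message analysis: reduce the utterance to an abstract intent."""
    if message.endswith("?"):
        return {"intent": "question", "text": message}
    return {"intent": "statement", "text": message}

def plan_turn(state):
    """Dialogue management: pick a turn type that makes sense in context."""
    if state["intent"] == "question":
        return {"turn": "answer", "topic": state["text"]}
    return {"turn": "acknowledge"}

def generate(turn):
    """Response generation: realize the abstract turn as natural language."""
    if turn["turn"] == "answer":
        return "Here is what I know about: " + turn["topic"]
    return "I see. Tell me more."

print(generate(plan_turn(understand("What are your opening hours?"))))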

Brief history of chatbots
From pattern matching to machine learning
1950s: The Turing test
1960s: ELIZA: a simulated psychotherapist (patterns and templates)
2000s: SmarterChild: on AOL/Microsoft Messenger, to check news, weather, etc. (access to a knowledge base)
2010s–: “Smart” personal voice assistants (Alexa, Siri, Cortana, …). Diverse, extensible, adaptable, access personal and public data.
Smart, really? What’s your experience?

Deep learning powered chatbots: Overview
Sequence-to-sequence neural networks
The encoder passes the input through a neural network and generates a hidden vector representation. The decoder takes this vector and generates a natural language response.
An end-to-end model, rather than a collection of task-specific modules.

Deep learning powered chatbots: Some detail
Feed Forward Neural Network
• Many, connected perceptron units
• Information flows from input (bottom) to output (top)
• (Much more on this next week)
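A minimal numpy sketch of the forward pass (the layer sizes and the tanh/softmax choices are illustrative assumptions; the details come next week):

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # input (3 units) -> hidden (5 units)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)   # hidden (5 units) -> output (2 units)

x = np.array([0.5, -1.0, 2.0])                  # input layer (bottom)
h = np.tanh(W1 @ x + b1)                        # hidden layer of connected units
scores = W2 @ h + b2                            # output layer (top)
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the two outputs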

Deep learning powered chatbots: Some detail
Recurrent Neural Network
• Information also flows from left-to-right
• Time step N receives as input the hidden state of time step N − 1
(Figure: unrolled recurrent network with input, hidden, and output layers; hidden states connected left to right.)
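A hedged numpy sketch of the recurrence (all dimensions invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 3))      # input -> hidden weights
Wh = rng.normal(size=(4, 4))      # previous hidden -> hidden weights
b = np.zeros(4)

inputs = rng.normal(size=(6, 3))  # a sequence of 6 input vectors
h = np.zeros(4)                   # initial hidden state
for x_t in inputs:
    h = np.tanh(Wx @ x_t + Wh @ h + b)  # step N reads x_N and h from step N-1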

Deep learning powered chatbots: Some detail
Recurrent Neural Network for Language Generation
• Information also flows from left-to-right
• Time step N receives as input the hidden state of time step N − 1
• Time step N receives as input the output of time step N − 1
(Figure: unrolled recurrent network where each step’s output token is fed back as the next step’s input.)

Deep learning powered chatbots: Some detail
Encoder network
• reads in the input (user utterance)
• passes its last hidden state to the initial hidden state of the decoder
Decoder RNN
• generates the output (system response)
(Figure: encoder RNN reading the input, followed by a decoder RNN generating the response; the encoder’s last hidden state initializes the decoder.)
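Putting the pieces together, a toy numpy encoder–decoder (the vocabulary, sizes, and greedy decoding are all illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 4
E = rng.normal(size=(vocab, dim))   # token embeddings
Wx = rng.normal(size=(dim, dim))    # input -> hidden
Wh = rng.normal(size=(dim, dim))    # previous hidden -> hidden
V = rng.normal(size=(vocab, dim))   # hidden -> vocabulary scores

def rnn_step(h, x):
    return np.tanh(Wx @ x + Wh @ h)

# Encoder: read the user utterance (token ids made up for illustration).
h = np.zeros(dim)
for tok in [3, 7, 1]:
    h = rnn_step(h, E[tok])

# Decoder: start from the encoder's last hidden state; feed each predicted
# token back in as the next input (greedy decoding, token 0 acts as "start").
tok, response = 0, []
for _ in range(5):
    h = rnn_step(h, E[tok])
    tok = int(np.argmax(V @ h))
    response.append(tok)
print(response)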

Training a chatbot model
Typical training data sets [Shao et al., 2017, Vinyals and Le, 2015]
• Reddit conversations (221 million conversations)
• Movie subtitles (0.5 million conversations)
• IT Helpdesk Troubleshooting conversations (…)
• The tasks of understanding, planning, generation are not necessarily separated any longer
• With more data/bigger models, more dialogue history can be considered
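As a hedged illustration of such end-to-end training, here is one teacher-forcing step in PyTorch (the framework, sizes, and toy batch are assumptions for illustration, not the setup of the cited papers):

import torch
import torch.nn as nn

vocab, dim = 100, 32
embed = nn.Embedding(vocab, dim)
encoder = nn.GRU(dim, dim, batch_first=True)
decoder = nn.GRU(dim, dim, batch_first=True)
proj = nn.Linear(dim, vocab)
loss_fn = nn.CrossEntropyLoss()

# One toy (user utterance, system response) pair as token ids; a real
# response would start with a beginning-of-sequence token.
src = torch.randint(0, vocab, (1, 6))    # user utterance
tgt = torch.randint(0, vocab, (1, 5))    # gold system response

_, h = encoder(embed(src))               # summarize the utterance
out, _ = decoder(embed(tgt[:, :-1]), h)  # teacher forcing: feed the gold prefix
logits = proj(out)                       # predict each next token
loss = loss_fn(logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
loss.backward()                          # gradients for an optimizer step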

Does it work?
[Vinyals and Le, 2015]

Does it work?
[Vinyals and Le, 2015]

Does it work? II
Challenges: generally short answers (not diverse), and a trade-off between length and coherence:
• incoherence (“The sun is in the center of the sun.”),
• redundancy (“i like cake and cake”),
• contradiction (“I don’t own a gun, but I do own a gun.”)
Still some work to do before we can pass the Turing test…

Summary
• The impact of deep learning on AI in everyday life
• Medical image analysis
• Chatbots
• Of course, there’s lots more: assisted driving, machine translation, …
Next week
• Inner workings of (feed forward) neural networks
• Neural network training with backpropagation

References i
Adamopoulou, E. and Moussiades, L. (2020).
Chatbots: History, technology, and applications.
Machine Learning with Applications, 2:100006.
Goldberg, Y. (2017).
Neural network methods for natural language processing.
Synthesis Lectures on Human Language Technologies, 10(1):1–309.
Ker, J., Wang, L., Rao, J., and Lim, T. (2017).
Deep learning applications in medical image analysis.
IEEE Access, 6:9375–9389.
Kuutti, S., Bowden, R., Jin, Y., Barber, P., and Fallah, S. (2020).
A survey of deep learning applications to autonomous vehicle control.
IEEE Transactions on Intelligent Transportation Systems, 22(2):712–733.

References ii
Shao, Y., Gouws, S., Britz, D., Goldie, A., Strope, B., and Kurzweil, R. (2017).
Generating high-quality and informative conversation responses with sequence-to-sequence models.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2210–2219, Copenhagen, Denmark. Association for Computational Linguistics.
Shen, D., Wu, G., and Suk, H.-I. (2017).
Deep learning in medical image analysis.
Annual Review of Biomedical Engineering, 19:221–248.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017).
Attention is all you need.
Advances in Neural Information Processing Systems, 30.

References iii
Vinyals, O. and Le, Q. (2015).
A neural conversational model.
arXiv preprint arXiv:1506.05869.
