Deep Learning and Text Analytics II
References:
• General introduction
▪ http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/
• Word vector:
▪ https://code.google.com/archive/p/word2vec/
• Keras tutorial
▪ https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
• CNN
▪ http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/
1. Agenda¶
• Introduction to neural networks
• Word/Document Vectors (vector representation of words/phrases/paragraphs)
• Convolutional neural network (CNN)
• Application of CNN in text classification
4. Word2Vec (a.k.a. word embedding) and Doc2Vec¶
4.1. Word2Vec¶
• Vector representation of words (i.e. word vectors) learned using a neural network
▪ e.g. "apple": [0.35, -0.2, 0.4, …], "mango": [0.32, -0.18, 0.5, …]
▪ Interesting properties of word vectors:
◦ Words with similar semantics have close word vectors 
◦ https://www.kdnuggets.com/2017/04/cartoon-word2vec-espresso-cappuccino.html
◦ Composition: e.g. vector("woman") + vector("king") - vector("man") $\approx$ vector("queen")
• Models:
▪ CBOW (Continuous Bag of Words): Predict a target word based on context
◦ e.g. the fox jumped over the lazy dog
◦ Assuming a symmetric context window of size 3 (one word on each side of the target), this sentence yields training samples such as (see the sketch after this list):
◦ ([-, fox], the)
◦ ([the, jumped], fox)
◦ ([fox, over], jumped)
◦ ([jumped, the], over)
◦ …
◦ source: https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/
▪ Skip-gram: predict the context words based on the target word

▪ source: https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/
▪ Negative Sampling:
◦ When training a neural network, for each sample all weights are adjusted slightly so that the network predicts that training sample more accurately.
◦ CBOW and skip-gram models have a tremendous number of weights, all of which would be updated slightly by every one of billions of training samples!
◦ Negative sampling addresses this by having each training sample modify only a small percentage of the weights, rather than all of them.
◦ e.g. when training with the sample ([fox, over], jumped), update the output weights connected to "jumped" along with a small number of other "negative" words sampled at random
◦ For details, check http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/
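To make the CBOW training samples above concrete, here is a minimal sketch (plain Python, not gensim's internal implementation) that generates (context, target) pairs from the example sentence, assuming a symmetric context of one word on each side of the target:

# a minimal sketch of generating CBOW-style (context, target) pairs
sentence = "the fox jumped over the lazy dog".split()

pairs = []
for i, target in enumerate(sentence):
    left = sentence[i-1] if i > 0 else None
    right = sentence[i+1] if i < len(sentence)-1 else None
    pairs.append(([left, right], target))

print(pairs[:4])
# [([None, 'fox'], 'the'), (['the', 'jumped'], 'fox'),
#  (['fox', 'over'], 'jumped'), (['jumped', 'the'], 'over')]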
In [5]:
# set up interactive shell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
In [6]:
# Exercise 4.1.1 Train your word vector
import pandas as pd
import nltk,string
# Load data
data=pd.read_csv('amazon_review_large.csv', header=None)
data.columns=['label','text']
data.head()

# tokenize each document into a list of unigrams
# strip punctuations and leading/trailing spaces from unigrams
# only unigrams with 2 or more characters are taken
sentences=[ [token.strip(string.punctuation).strip() \
             for token in nltk.word_tokenize(doc.lower()) \
             if token not in string.punctuation and \
             len(token.strip(string.punctuation).strip())>=2]\
           for doc in data["text"]]
print(sentences[0:2])
Out[6]:
   label                                                text
0      2  This is a little longer and more detailed than...
1      1  Only Michelle Branch save this album!!!!All gu...
2      2  A surprisingly good book, given its inherently...
3      2  This is a wonderful, quiet and relaxing CD tha...
4      1  The lights that I received are absolutely not ...
[['this', 'is', 'little', 'longer', 'and', 'more', 'detailed', 'than', 'the', 'first', 'two', 'books', 'in', 'the', 'series', 'however', 'have', 'enjoyed', 'each', 'new', 'aspect', 'of', 'the', 'exciting', 'fantasy', 'universe'], ['only', 'michelle', 'branch', 'save', 'this', 'album', 'all', 'guys', 'play', 'along', 'with', 'unenthusiastic', 'beat', 'even', 'karl']]
In [7]:
# Train your own word vectors using gensim
# gensim.models is the package for word2vec
# check https://radimrehurek.com/gensim/models/word2vec.html
# for detailed description
from gensim.models import word2vec
import logging
import pandas as pd
# print out tracking information
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', \
                    level=logging.INFO)
# min_count: words with total frequency lower than this are ignored
# size: the dimension of word vector
# window: context window, i.e. the maximum distance
# between the current and predicted word
# within a sentence (i.e. the length of ngrams)
# workers: # of parallel threads in training
# for other parameters, check https://radimrehurek.com/gensim/models/word2vec.html
wv_model = word2vec.Word2Vec(sentences, \
                             min_count=5, size=200, \
                             window=5, workers=4 )
2019-05-11 18:30:19,631 : INFO : collecting all words and their counts
2019-05-11 18:30:19,633 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2019-05-11 18:30:19,758 : INFO : PROGRESS: at sentence #10000, processed 712003 words, keeping 36988 word types
2019-05-11 18:30:19,882 : INFO : collected 55278 word types from a corpus of 1424321 raw words and 20000 sentences
2019-05-11 18:30:19,883 : INFO : Loading a fresh vocabulary
2019-05-11 18:30:19,921 : INFO : effective_min_count=5 retains 12133 unique words (21% of original 55278, drops 43145)
2019-05-11 18:30:19,922 : INFO : effective_min_count=5 leaves 1361983 word corpus (95% of original 1424321, drops 62338)
2019-05-11 18:30:19,956 : INFO : deleting the raw counts dictionary of 55278 items
2019-05-11 18:30:19,957 : INFO : sample=0.001 downsamples 57 most-common words
2019-05-11 18:30:19,958 : INFO : downsampling leaves estimated 1015574 word corpus (74.6% of prior 1361983)
2019-05-11 18:30:19,988 : INFO : estimated required memory for 12133 words and 200 dimensions: 25479300 bytes
2019-05-11 18:30:19,988 : INFO : resetting layer weights
2019-05-11 18:30:20,130 : INFO : training model with 4 workers on 12133 vocabulary and 200 features, using sg=0 hs=0 sample=0.001 negative=5 window=5
2019-05-11 18:30:20,679 : INFO : worker thread finished; awaiting finish of 3 more threads
2019-05-11 18:30:20,682 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-11 18:30:20,686 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-11 18:30:20,691 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-11 18:30:20,692 : INFO : EPOCH – 1 : training on 1424321 raw words (1015610 effective words) took 0.6s, 1822095 effective words/s
2019-05-11 18:30:21,232 : INFO : worker thread finished; awaiting finish of 3 more threads
2019-05-11 18:30:21,234 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-11 18:30:21,239 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-11 18:30:21,242 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-11 18:30:21,243 : INFO : EPOCH – 2 : training on 1424321 raw words (1016220 effective words) took 0.5s, 1855737 effective words/s
2019-05-11 18:30:21,781 : INFO : worker thread finished; awaiting finish of 3 more threads
2019-05-11 18:30:21,785 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-11 18:30:21,788 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-11 18:30:21,792 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-11 18:30:21,793 : INFO : EPOCH – 3 : training on 1424321 raw words (1016329 effective words) took 0.5s, 1863999 effective words/s
2019-05-11 18:30:22,348 : INFO : worker thread finished; awaiting finish of 3 more threads
2019-05-11 18:30:22,351 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-11 18:30:22,354 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-11 18:30:22,360 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-11 18:30:22,361 : INFO : EPOCH – 4 : training on 1424321 raw words (1015163 effective words) took 0.6s, 1804397 effective words/s
2019-05-11 18:30:22,900 : INFO : worker thread finished; awaiting finish of 3 more threads
2019-05-11 18:30:22,903 : INFO : worker thread finished; awaiting finish of 2 more threads
2019-05-11 18:30:22,905 : INFO : worker thread finished; awaiting finish of 1 more threads
2019-05-11 18:30:22,910 : INFO : worker thread finished; awaiting finish of 0 more threads
2019-05-11 18:30:22,911 : INFO : EPOCH – 5 : training on 1424321 raw words (1015248 effective words) took 0.5s, 1859318 effective words/s
2019-05-11 18:30:22,912 : INFO : training on a 7121605 raw words (5078570 effective words) took 2.8s, 1825846 effective words/s
In [8]:
# test word2vec model
print("Top 5 words similar to word 'sound'")
wv_model.wv.most_similar('sound', topn=5)

print("Top 5 words similar to word 'sound' but not relevant to 'film'")
wv_model.wv.most_similar(positive=['sound','music'], \
                         negative=['film'], topn=5)

print("Similarity between 'movie' and 'film':")
wv_model.wv.similarity('movie','film')

print("Similarity between 'movie' and 'city':")
wv_model.wv.similarity('movie','city')

print("Word does not match with others in the list of \
['sound', 'music', 'graphics', 'actor', 'book']:")
wv_model.wv.doesnt_match(["sound", "music", \
                          "graphics", "actor", "book"])

print("Word vector for 'movie':")
wv_model.wv['movie']
2019-05-11 18:30:22,920 : INFO : precomputing L2-norms of word weight vectors
Top 5 words similar to word 'sound'
Out[8]:
[('metal', 0.7665110230445862),
 ('production', 0.7487144470214844),
 ('beats', 0.7405915856361389),
 ('band', 0.7367587089538574),
 ('vocals', 0.7265908718109131)]
Top 5 words similar to word 'sound' but not relevant to 'film'
Out[8]:
[('pop', 0.7851538062095642),
 ('rock', 0.7586135268211365),
 ('beats', 0.7513079643249512),
 ('lyrics', 0.7490527629852295),
 ('songs', 0.7204370498657227)]
Similarity between 'movie' and 'film':
Out[8]:
0.92190003
Similarity between 'movie' and 'city':
Out[8]:
0.036358505
Word does not match with others in the list of ['sound', 'music', 'graphics', 'actor', 'book']:
/Users/reneeyang/anaconda3/lib/python3.6/site-packages/gensim/models/keyedvectors.py:858: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.
  vectors = vstack(self.word_vec(word, use_norm=True) for word in used_words).astype(REAL)
Out[8]:
'book'
Word vector for 'movie':
Out[8]:
array([-2.389022 , 1.0435257 , -0.8804747 , 0.00473488, 0.31132925,
1.6171641 , 0.5633589 , -0.15090702, -0.6366517 , 0.41551402,
1.6518923 , 1.1224804 , -1.4249003 , -2.0730822 , 1.1130203 ,
0.8046525 , 1.1357425 , -0.45461947, 0.5983661 , -1.0128357 ,
0.3587231 , 0.5768975 , 1.0797025 , -0.8774983 , 0.8700393 ,
1.0006199 , -0.70523924, -0.19211315, 0.04572438, -0.40570858,
0.8454872 , -0.45635635, -0.42916945, -1.262856 , 0.47307125,
-0.00959528, -1.4367402 , 0.12007057, -0.18907486, -0.9071679 ,
2.332095 , 1.3548307 , 0.3976882 , 0.7633259 , 0.26417568,
1.4809455 , 0.70577705, 1.0172323 , -1.9716959 , -0.10707554,
-2.429532 , 0.88018936, 2.2999701 , 0.22111912, 0.54759645,
-1.2574146 , 0.24640192, 0.4445658 , -1.5756905 , 1.2426063 ,
-1.7546257 , 0.24705543, 1.7952273 , -1.2562603 , 2.9385993 ,
0.08260772, 0.08904109, -0.5732753 , 0.9776509 , -0.6333157 ,
0.851587 , 0.22371008, -0.9933962 , -0.714596 , -0.8176635 ,
0.60733783, -0.0863951 , 0.46537977, -1.1230742 , -1.2973163 ,
0.25950024, 0.76454854, 0.82739305, 0.26607445, 2.1698241 ,
0.26798692, 0.37030816, -0.6978241 , 0.39442074, -0.5514426 ,
0.5096595 , 0.5759544 , 0.4684672 , 0.5377743 , 0.4989032 ,
0.43044007, 2.5587463 , -0.4375565 , -1.5239153 , 0.18359943,
0.8545728 , 1.0993361 , -1.1433537 , -0.7969044 , 1.1575737 ,
-2.9570394 , -0.4047331 , -0.5326532 , -0.6769366 , 0.6004241 ,
1.5286473 , -0.11586563, 0.8277211 , 1.0462245 , 1.0396241 ,
0.05487772, 1.5691043 , 0.6376131 , -1.468211 , -0.26064992,
0.4596126 , 1.5408072 , -0.03437184, -1.3217175 , 0.76482856,
-1.1785313 , -1.5355107 , -0.8814636 , -0.4274001 , 0.5699125 ,
0.9174622 , -0.97774154, 0.39057374, -1.030156 , 0.77296144,
-0.18155251, 1.2617298 , -0.13823135, 0.8241313 , 0.976646 ,
1.0430503 , 0.56128514, 1.6335729 , 0.0407378 , 0.62826425,
0.2736898 , 1.5950273 , 0.3488634 , 0.6752356 , 0.8722785 ,
-1.8646349 , 0.67093796, -1.1355606 , -0.7134604 , -1.2181042 ,
0.1253755 , 0.7880624 , 0.68445134, -0.58338386, -0.8257311 ,
0.993532 , 0.47608185, 0.98643076, 0.43172672, -1.4937574 ,
0.33717242, -1.1172246 , -0.1280077 , -0.136955 , -1.4244331 ,
-0.26464006, -0.03547046, -0.14330696, 0.00406981, -1.2376485 ,
-1.2984059 , -0.9440582 , 1.6012847 , 0.8698546 , 0.10599361,
0.07323867, -1.2776217 , 0.5907448 , -0.76489526, -0.3622843 ,
-0.07321889, -0.52608806, 0.24011731, 0.33231127, 1.0320984 ,
-0.31278306, 1.6593688 , 0.04304637, 0.21618631, -0.217722 ,
-1.3623549 , -0.1108539 , 0.420944 , 0.5392607 , -0.0479231 ],
dtype=float32)
4.2. Pretrained Word Vectors¶
• Google published pre-trained 300-dimensional vectors for 3 million words and phrases, trained on the Google News dataset (about 100 billion words) (https://code.google.com/archive/p/word2vec/)
• GloVe (Global Vectors for Word Representation): pretrained word vectors from different data sources, provided by Stanford https://nlp.stanford.edu/projects/glove/
• FastText by Facebook https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md
In [9]:
# Exercise 4.2.1: Use pretrained word vectors
# download the bin file for pretrained word vectors
# from above links, e.g. https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing
# Warning: the bin file is very big (over 2G)
# You need a powerful machine to load it
import gensim
model = gensim.models.KeyedVectors.\
        load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

model.wv.most_similar(positive=['women','king'], \
                      negative=['man'])
2019-05-11 18:30:22,968 : INFO : loading projection weights from GoogleNews-vectors-negative300.bin
—————————————————————————
FileNotFoundError Traceback (most recent call last)
8 import gensim
9
—> 10 model = gensim.models.KeyedVectors.load_word2vec_format(‘GoogleNews-vectors-negative300.bin’, binary=True)
11
12 model.wv.most_similar(positive=[‘women’,’king’], negative=’man’)
~/anaconda3/lib/python3.6/site-packages/gensim/models/keyedvectors.py in load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype)
1474 return _load_word2vec_format(
1475 cls, fname, fvocab=fvocab, binary=binary, encoding=encoding, unicode_errors=unicode_errors,
-> 1476 limit=limit, datatype=datatype)
1477
1478 def get_keras_embedding(self, train_embeddings=False):
~/anaconda3/lib/python3.6/site-packages/gensim/models/utils_any2vec.py in _load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype)
340
341 logger.info(“loading projection weights from %s”, fname)
–> 342 with utils.smart_open(fname) as fin:
343 header = utils.to_unicode(fin.readline(), encoding=encoding)
344 vocab_size, vector_size = (int(x) for x in header.split()) # throws for invalid file format
~/anaconda3/lib/python3.6/site-packages/smart_open/smart_open_lib.py in smart_open(uri, mode, **kw)
179 raise TypeError(‘mode should be a string’)
180
–> 181 fobj = _shortcut_open(uri, mode, **kw)
182 if fobj is not None:
183 return fobj
~/anaconda3/lib/python3.6/site-packages/smart_open/smart_open_lib.py in _shortcut_open(uri, mode, **kw)
299 #
300 if six.PY3:
–> 301 return open(parsed_uri.uri_path, mode, buffering=buffering, **open_kwargs)
302 elif not open_kwargs:
303 return open(parsed_uri.uri_path, mode, buffering=buffering)
FileNotFoundError: [Errno 2] No such file or directory: ‘GoogleNews-vectors-negative300.bin’
4.3. Sentence/Paragraph/Document Vectors¶
• So far we learned vector representation of words
• A lot of times, our samples are sentences, paragraphs, or documents
• How to create vector representations of sentences, paragraphs, or documents?
▪ Weighted average of word vectors (however, word order is lost, as in "bag of words"; a minimal sketch follows this list)
▪ Concatenation of word vectors (large space)
▪ ??
• Paragraph Vector: A distributed memory model (PV-DM)
▪ Word vectors are shared across paragraphs
▪ The paragraph vector is shared across all contexts generated from the same paragraph but not across paragraphs
▪ Both paragraph vectors and word vectors are returned
▪ Paragraph vectors can be used for document retrieval, or as features for classification or clustering
▪ Source: Le Q. and Mikolov, T. Distributed Representations of Sentences and Documents https://arxiv.org/pdf/1405.4053v2.pdf
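Before turning to paragraph vectors, here is a minimal sketch of the simplest option above: averaging word vectors to obtain a document vector. It assumes the wv_model and tokenized sentences from Exercise 4.1.1; out-of-vocabulary tokens are skipped, and word order is lost.

import numpy as np

def avg_doc_vector(tokens, wv_model, dim=200):
    # average the vectors of in-vocabulary tokens;
    # return a zero vector if no token is in the vocabulary
    vecs = [wv_model.wv[t] for t in tokens if t in wv_model.wv]
    if len(vecs) == 0:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# e.g. a 200-dimensional vector for the first tokenized review
doc_vec = avg_doc_vector(sentences[0], wv_model)
doc_vec.shape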
In [ ]:
# Exercise 4.3.1 Train your paragraph (document) vectors
# We have tokenized sentences
# Label each sentence with a unique tag
from gensim.models.doc2vec import TaggedDocument
docs=[TaggedDocument(sentences[i], [str(i)]) for i in range(len(sentences)) ]
docs[0]
In [ ]:
from random import shuffle
# package for doc2vec
from gensim.models import doc2vec
# for more parameters, check
# https://radimrehurek.com/gensim/models/doc2vec.html
# initialize the model without documents
# distributed memory model is used (dm=1)
model = doc2vec.Doc2Vec(dm=1, min_count=5, window=5, size=200, workers=4)
# build the vocabulary using the documents
model.build_vocab(docs)
# train the model for 30 epochs
# you may need to increase the number of epochs
for epoch in range(30):
    # shuffle the documents in each epoch
    shuffle(docs)
    # in each epoch, all samples are used
    model.train(docs, total_examples=len(docs), epochs=1)
In [ ]:
# Inspect paragraph vectors and word vectors
# the paragraph vector of the first document
model.docvecs['0']

# the word vector of 'movie'
model.wv['movie']
In [ ]:
# Check word similarity
print("Top 5 words similar to word 'sound'")
model.wv.most_similar('sound', topn=5)

print("Top 5 words similar to word 'sound' but not relevant to 'film'")
model.wv.most_similar(positive=['sound','music'], negative=['film'], topn=5)

print("Similarity between 'movie' and 'film':")
model.wv.similarity('movie','film')

print("Similarity between 'movie' and 'city':")
model.wv.similarity('movie','city')
In [ ]:
# Inspect document similarity
model.docvecs.most_similar('0')
5. Convolutional Neural Networks (CNN)¶
References (highly recommended):
• http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/
• https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/
• CNNs are widely used in Computer Vision
• CNNs were responsible for major breakthroughs in image recognition and are at the core of most computer vision systems, including automated photo tagging and self-driving cars
• Recently, CNNs have been applied in NLP and achieved good performance. 
5.1. Convolution¶
• Convolution is a technique for extracting distinguishing features from a feature space
• Example: feature detection from image pixels
▪ Feature space: a matrix of pixels of 0 (black) or 1 (white)
▪ Filter/kernel/feature detector: a function applied to every fixed-size patch of the feature matrix
◦ e.g. a 3×3 filter (a 3×3 matrix $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$) slides over every region of the matrix sequentially, multiplies its values element-wise with the original matrix, and then sums them up
◦ e.g. a filter (e.g. $\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$) that takes the difference between a pixel and its neighbors -> detects edges
• Typically, a larger number of filters in different sizes will be used
• Configuration of filters
▪ filter size ($h \times w$)
▪ stride size (how much to shift a filter in each step) ($s$)
▪ number of filters (depth) ($d$)
• Questions:
▪ With a 5×5 feature space, after applying a 3×3 filter with stride 2, what will be the size of the result? (See the sketch after this list.)
▪ What is the formula for calculating the output size?
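As a hint for the questions above, here is a minimal sketch of the standard output-size formula, assuming no padding (padding is not specified in the slides):

def conv_output_size(n, f, s=1):
    # feature space of size n x n, filter of size f x f, stride s,
    # no padding: the output has floor((n - f) / s) + 1 positions per dimension
    return (n - f) // s + 1

# plug in the numbers from the question above, e.g. conv_output_size(5, 3, 2)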
5.2. Pooling Layer¶
• Pooling layers are typically applied after the convolutional layers.
• A pooling layer subsamples its input.
• The most common way to do pooling is to apply a max operation to the result of each filter (a.k.a 1-max pooling).
▪ e.g. if a filter's feature map were [3, 1, 8, 2], 1-max pooling would keep the maximum value 8 (see the sketch after this list)
▪ If 100 filters have been used, then we get 100 numbers
• Pooling can be applied over a window (e.g. 2×2) 
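A minimal numpy sketch of 1-max pooling over hypothetical feature maps (the values below are made up for illustration): each filter's feature map is reduced to a single number, so 100 filters would produce 100 numbers.

import numpy as np

# hypothetical feature maps produced by 3 different filters
feature_maps = [np.array([1.2, 0.3, 8.0, 2.5]),
                np.array([0.0, 4.1, 3.3]),
                np.array([2.2, 2.9])]

# 1-max pooling keeps only the maximum value of each feature map
pooled = np.array([fm.max() for fm in feature_maps])
print(pooled)    # [8.  4.1 2.9]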
5.3. What are CNNs¶
• CNNs consist of several convolution layers with nonlinear activation functions such as ReLU or tanh

• A CNN typically contains:
▪ A convolution layer (not dense layer) connected to the input layer
◦ Each convolution layer applies different filters.
◦ Typically hundreds or thousands of filters are used.
◦ The results of filters are concatenated.
▪ A pooling layer is used to subsample the result of convolution layer
▪ There may be multiple convolution and pooling layers stacked. E.g., in image recognition:
◦ 1st layer: detect edges
◦ 2nd layer: detect shape, e.g. round, square
◦ 3rd layer: wheels, doors etc.
▪ Then each result out of convolution-pooling is connected to a neuron in the output (local connections). These results are high-level features used by classification algorithms.
• During the training phase, a CNN automatically learns the values of its filters based on the task you want to perform.
• Powerful capabilities of CNN:
▪ Location Invariance: CNN extracts distinguishing features by convolution-pooling and it does not care where these features are. So images can still be recognized after rotation and scaling.
▪ Compositionality: Each filter composes a local patch of lower-level features into higher-level representation. E.g., detect edges from pixels, shapes from edges, and more complex objects from shapes.
• If you’re interested in how CNNs are used in image recognition, follow the classical MNIST handwritten digit recognition tutorial
• Play with it! http://scs.ryerson.ca/~aharley/vis/conv/flat.html
5.4. Application of CNN in Text Classification¶
• Assume $m$ samples, each of which is a sentence with $n$ words (short sentences can be padded)
• Embedding: In each sentence, each word can be represented as its word vector of dimension $d$ (pretrained or to be trained)
• Convolution: Apply filters to n-grams of different lengths (e.g. unigrams, bigrams, …)
▪ E.g. a filter can slide over every 2 consecutive words (bigrams)
▪ So the filter size (i.e. region size) can be $1 \times d$ (unigram), $2 \times d$ (bigram), $3 \times d$ (trigram), …
• At the pooling layer, 1-max pooling is applied to the result of each filter. All pooled results are then concatenated as the input to the output layer
▪ This is equivalent to selecting the words or phrases that are most discriminative with respect to the classification goal

Illustration of a Convolutional Neural Network (CNN) architecture for sentence classification. Here we depict three filter region sizes: 2, 3 and 4, each of which has 2 filters. Every filter performs convolution on the sentence matrix and generates (variable-length) feature maps. Then 1-max pooling is performed over each map, i.e., the largest number from each feature map is recorded. Thus a univariate feature vector is generated from all six maps, and these 6 features are concatenated to form a feature vector for the penultimate layer. The final softmax layer then receives this feature vector as input and uses it to classify the sentence; here we assume binary classification and hence depict two possible output states. Source: Zhang, Y., & Wallace, B. (2015). A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification.
• Questions:
▪ How many parameters are there in total in the convolution layer? (See the note below.)
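One way to count (a hint, assuming each filter also has a bias term): a filter of region size $h$ applied to $d$-dimensional word vectors has $h \cdot d$ weights plus one bias, so with $n_h$ filters of each region size the convolution layer has $\sum_{h} n_h (h \cdot d + 1)$ parameters in total. For the architecture in the figure above (region sizes 2, 3 and 4 with 2 filters each), this is $2(2d+1) + 2(3d+1) + 2(4d+1)$.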
5.5. How to deal with overfitting – Regularization & Dropout¶
• Deep neural nets with a large number of parameters can easily suffer from overfitting
• Typical approaches to overcome overfitting
▪ Regularization
▪ Dropout (which is also a kind of regularization technique)
• What is dropout?
▪ During training, randomly remove units in the hidden layer from the network; update parameters as normal, leaving the dropped-out units unchanged (a minimal numpy sketch follows this list)
▪ No dropout during testing
▪ Typically, each hidden unit is set to 0 with probability 0.1~0.5 
▪ https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
• Why dropout?
▪ Hidden units cannot co-adapt with other units since a unit may not always be present
▪ Sample data usually come with noise. Dropout constrains network adaptation to the data at training time
▪ After training, only very useful neurons are kept (have high weights)
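A minimal numpy sketch of (inverted) dropout applied to one vector of hidden activations, assuming a keep probability of 0.5; this only illustrates the idea, since Keras' Dropout layer handles all of this internally:

import numpy as np

rng = np.random.RandomState(0)
h = np.array([0.5, 1.2, 0.3, 2.0, 0.7])    # hidden activations (made-up values)

keep_prob = 0.5
# training: randomly zero out units and scale the survivors
# so that the expected activation stays the same (inverted dropout)
mask = rng.binomial(1, keep_prob, size=h.shape)
h_train = h * mask / keep_prob

# testing: no dropout, activations are used as-is
h_test = h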
5.6. Example: Use CNN for Sentiment Analysis (Single-Label Classification)¶
• Dataset: IMDB review
• 25,000 movie reviews, positive or negative
• Benchmark performance is 80-90% with CNN (https://arxiv.org/abs/1408.5882)
• We’re going to create a CNN with the following:
▪ Word embedding trained as part of CNN
▪ filters in 3 sizes:
◦ unigram (Conv1D, kernel_size=1)
◦ bigram (Conv1D, kernel_size=2)
◦ trigram (Conv1D, kernel_size=3)
▪ Maxpooling for each convolution layer
▪ Dropout 
In [ ]:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
In [ ]:
# Exercise 5.1: Load data
import pandas as pd
import nltk,string
from gensim import corpora
data=pd.read_csv("../../../dataset/imdb_reviews.csv", header=0, delimiter="\t")
data.head()
len(data)
# if your computer does not have enough resource
# reduce the dataset
data=data.loc[0:8000]
In [ ]:
# Exercise 5.2 Preprocessing data: tokenize and pad sentences
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
# set the maximum number of words to be used
MAX_NB_WORDS=10000
# set sentence/document length
MAX_DOC_LEN=500
# get a Keras tokenizer
# https://keras.io/preprocessing/text/
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(data["review"])

# convert each document to a sequence of word indexes
sequences = tokenizer.\
            texts_to_sequences(data["review"])

# pad all sequences to the same length
# if a sentence is shorter than maxlen, pad it on the right
# if a sentence is longer than maxlen, truncate it on the right
padded_sequences = pad_sequences(sequences, \
                                 maxlen=MAX_DOC_LEN, \
                                 padding='post', \
                                 truncating='post')
print(padded_sequences[0])
In [ ]:
# get the mapping between word and its index
tokenizer.word_index['film']

# get the count of each word
tokenizer.word_counts['film']
In [ ]:
# Split data for training and testing
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(\
                padded_sequences, data['sentiment'],\
                test_size=0.3, random_state=1)
In [ ]:
# Exercise 5.3: Create CNN model
from keras.layers import Embedding, Dense, Conv1D, MaxPooling1D, \
Dropout, Activation, Input, Flatten, Concatenate
from keras.models import Model
# The dimension for embedding
EMBEDDING_DIM=100
# define input layer, where a sentence represented as
# 1 dimension array with integers
main_input = Input(shape=(MAX_DOC_LEN,), \
                   dtype='int32', name='main_input')

# define the embedding layer
# input_dim is the size of the vocabulary + 1
# where 1 is for the padding symbol
# output_dim is the word vector dimension
# input_length is the max. length of a document
# input to the embedding layer is the "main_input" layer
embed_1 = Embedding(input_dim=MAX_NB_WORDS+1, \
                    output_dim=EMBEDDING_DIM, \
                    input_length=MAX_DOC_LEN,\
                    name='embedding')(main_input)

# define 1D convolution layer
# 64 filters are used
# a filter slides through each word (kernel_size=1)
# input to this layer is the embedding layer
conv1d_1= Conv1D(filters=64, kernel_size=1, \
                 name='conv_unigram',\
                 activation='relu')(embed_1)

# define a 1-dimensional MaxPooling layer
# to take the output of the previous convolution layer
# the convolution layer produces
# MAX_DOC_LEN-1+1 values as output (why?)
pool_1 = MaxPooling1D(MAX_DOC_LEN-1+1, \
                      name='pool_unigram')(conv1d_1)

# the pooling layer creates output
# with shape (# of samples, 1, 64)
# remove the middle dimension since its size is 1
flat_1 = Flatten(name='flat_unigram')(pool_1)

# following the same logic, define
# filters for bigrams
conv1d_2= Conv1D(filters=64, kernel_size=2, \
                 name='conv_bigram',\
                 activation='relu')(embed_1)
pool_2 = MaxPooling1D(MAX_DOC_LEN-2+1, name='pool_bigram')(conv1d_2)
flat_2 = Flatten(name='flat_bigram')(pool_2)

# filters for trigrams
conv1d_3= Conv1D(filters=64, kernel_size=3, \
                 name='conv_trigram',activation='relu')(embed_1)
pool_3 = MaxPooling1D(MAX_DOC_LEN-3+1, name='pool_trigram')(conv1d_3)
flat_3 = Flatten(name='flat_trigram')(pool_3)

# concatenate the flattened outputs
z=Concatenate(name='concate')([flat_1, flat_2, flat_3])

# create a dropout layer
# in each training iteration only 50% of the units are kept
drop_1=Dropout(rate=0.5, name='dropout')(z)

# create a dense layer
dense_1 = Dense(192, activation='relu', name='dense')(drop_1)

# create the output layer
preds = Dense(1, activation='sigmoid', name='output')(dense_1)

# create the model with the input layer
# and the output layer
model = Model(inputs=main_input, outputs=preds)
In [ ]:
# Exercise 5.4: Show model configuration
model.summary()
#model.get_config()
#model.get_weights()
#from keras.utils import plot_model
#plot_model(model, to_file='cnn_model.png')
In [ ]:
# Exercise 5.4: Compile the model
model.compile(loss="binary_crossentropy", \
              optimizer="adam", \
              metrics=["accuracy"])
In [ ]:
# Exercise 5.5: Fit the model
BATCH_SIZE = 64
NUM_EPOCHES = 10
# fit the model and save fitting history to “training”
training=model.fit(X_train, y_train, \
                   batch_size=BATCH_SIZE, \
                   epochs=NUM_EPOCHES,\
                   validation_data=[X_test, y_test], \
                   verbose=2)
In [ ]:
# Exercise 5.6. Investigate the training process
import matplotlib.pyplot as plt
import pandas as pd
# the fitting history is saved as a dictionary
# convert the dictionary to a dataframe
df=pd.DataFrame.from_dict(training.history)
df.columns=["train_acc", "train_loss", \
            "val_acc", "val_loss"]
df.index.name='epoch'
print(df)

# plot training history in a 1x2 grid of subplots (8x3 inches)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8,3));
df[["train_acc", "val_acc"]].plot(ax=axes[0]);
df[["train_loss", "val_loss"]].plot(ax=axes[1]);
plt.show();
Observations from training history:
• As training goes on, training accuracy keeps improving (and training loss keeps decreasing)
• Testing accuracy/loss improves at the beginning, then gets worse
• This indicates that the model overfits and stops generalizing after a certain point
• Thus, we should stop training the model when testing accuracy/loss starts to get worse
• This analysis can be used to determine the hyperparameter NUM_EPOCHES
• Fortunately, this can be done automatically by "early stopping"
In [ ]:
# Exercise 5.6: Use early stopping to find the best model
from keras.callbacks import EarlyStopping, ModelCheckpoint
# the file path to save best model
# the file path to save the best model
BEST_MODEL_FILEPATH="best_model"

# define early stopping based on validation loss
# if validation loss is not improved in
# an epoch compared with the previous one,
# stop training (i.e. patience=0)
# mode='min' indicates the loss needs to decrease
earlyStopping=EarlyStopping(monitor='val_loss', \
                            patience=0, verbose=2, \
                            mode='min')

# define a checkpoint to save the best model,
# i.e. the one with the max validation accuracy
checkpoint = ModelCheckpoint(BEST_MODEL_FILEPATH, \
                             monitor='val_acc', \
                             verbose=2, \
                             save_best_only=True, \
                             mode='max')

# compile the model
model.compile(loss="binary_crossentropy", \
              optimizer="adam", metrics=["accuracy"])

# fit the model with early stopping and checkpointing
# as callbacks (functions that are executed at certain
# stages of training, e.g. at the end of each epoch)
model.fit(X_train, y_train, \
          batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, \
          callbacks=[earlyStopping, checkpoint],
          validation_data=[X_test, y_test],\
          verbose=2)
In [ ]:
# Exercise 5.7: Load the best model
# load the model using the save file
model.load_weights("best_model")
# predict
pred=model.predict(X_test)
print(pred[0:5])
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
In [ ]:
# Exercise 5.8: Put Everything as a function
from keras.layers import Embedding, Dense, Conv1D, MaxPooling1D, \
Dropout, Activation, Input, Flatten, Concatenate
from keras.models import Model
from keras.regularizers import l2
from keras.callbacks import EarlyStopping, ModelCheckpoint
def cnn_model(FILTER_SIZES, \
              # filter sizes as a list
              MAX_NB_WORDS, \
              # total number of words
              MAX_DOC_LEN, \
              # max words in a doc
              EMBEDDING_DIM=200, \
              # word vector dimension
              NUM_FILTERS=64, \
              # number of filters for each size
              DROP_OUT=0.5, \
              # dropout rate
              NUM_OUTPUT_UNITS=1, \
              # number of output units
              NUM_DENSE_UNITS=100,\
              # number of units in the dense layer
              PRETRAINED_WORD_VECTOR=None,\
              # whether to use pretrained word vectors
              LAM=0.0):
    # LAM: regularization coefficient

    main_input = Input(shape=(MAX_DOC_LEN,), \
                       dtype='int32', name='main_input')

    if PRETRAINED_WORD_VECTOR is not None:
        embed_1 = Embedding(input_dim=MAX_NB_WORDS+1, \
                            output_dim=EMBEDDING_DIM, \
                            input_length=MAX_DOC_LEN, \
                            # use pretrained word vectors
                            weights=[PRETRAINED_WORD_VECTOR],\
                            # word vectors can be further tuned
                            # set it to False to use static word vectors
                            trainable=True,\
                            name='embedding')(main_input)
    else:
        embed_1 = Embedding(input_dim=MAX_NB_WORDS+1, \
                            output_dim=EMBEDDING_DIM, \
                            input_length=MAX_DOC_LEN, \
                            name='embedding')(main_input)

    # add a convolution-pooling-flatten block for each filter size
    conv_blocks = []
    for f in FILTER_SIZES:
        conv = Conv1D(filters=NUM_FILTERS, kernel_size=f, \
                      activation='relu', name='conv_'+str(f))(embed_1)
        conv = MaxPooling1D(MAX_DOC_LEN-f+1, name='max_'+str(f))(conv)
        conv = Flatten(name='flat_'+str(f))(conv)
        conv_blocks.append(conv)

    if len(conv_blocks)>1:
        z=Concatenate(name='concate')(conv_blocks)
    else:
        z=conv_blocks[0]

    drop=Dropout(rate=DROP_OUT, name='dropout')(z)
    dense = Dense(NUM_DENSE_UNITS, activation='relu',\
                  kernel_regularizer=l2(LAM), name='dense')(drop)
    preds = Dense(NUM_OUTPUT_UNITS, activation='sigmoid', name='output')(dense)

    model = Model(inputs=main_input, outputs=preds)

    model.compile(loss="binary_crossentropy", \
                  optimizer="adam", metrics=["accuracy"])

    return model
5.7. Use CNN for multi-label classification¶
• In multi-label classification, a document can be classified into multiple classes
• We can use multiple output units, each responsible for predicting one class
• For multi-label classification ($K$ classes), do the following:
1. Represent the labels as an indicator matrix (see the sketch after this list)
▪ e.g. three classes ['econ','biz','tech'] in total,
▪ sample 1: 'econ' only -> [1, 0, 0]
▪ sample 2: ['econ','biz'] -> [1, 1, 0]
2. Accordingly, set the output layer to have $K$ output units
▪ each responsible for one class
▪ each unit gives the probability of one class
• Example: Yahoo News Ranked Multilabel Learning dataset (http://research.yahoo.com)
▪ A subset is selected
▪ 4 classes, 6426 samples
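As a quick illustration of the indicator matrix above, here is a minimal sketch using scikit-learn's MultiLabelBinarizer with the example classes from the list (the real dataset is processed the same way in Exercise 5.7.2):

from sklearn.preprocessing import MultiLabelBinarizer

# fix the class order so the columns are ['econ', 'biz', 'tech']
mlb = MultiLabelBinarizer(classes=['econ', 'biz', 'tech'])

# sample 1 has label 'econ' only; sample 2 has labels 'econ' and 'biz'
Y = mlb.fit_transform([['econ'], ['econ', 'biz']])
print(Y)
# [[1 0 0]
#  [1 1 0]]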
In [ ]:
# Exercise 5.7.1: Load and process the data
import json
from sklearn.preprocessing import MultiLabelBinarizer
from numpy.random import shuffle
# load the data
data=json.load(open("../../../dataset/ydata.json",'rb'))
#data=json.load(open("ydata.json",'r'))
# shuffle the data
shuffle(data)
# split into text and label
text,labels=zip(*data)
text=list(text)
labels=list(labels)
text[1]
labels[1]
In [ ]:
# Exercise 5.7.2: create indicator matrix for labels
mlb = MultiLabelBinarizer()
Y=mlb.fit_transform(labels)
# check size of indicator matrix
Y.shape
# check classes
mlb.classes_
# check # of samples in each class
np.sum(Y, axis=0)
In [ ]:
# Exercise 5.7.3: Load and process the data
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
# get a Keras tokenizer
MAX_NB_WORDS=8000
# documents are quite long in the dataset
MAX_DOC_LEN=1000
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(text)
voc=tokenizer.word_index
# convert each document to a list of word index as a sequence
sequences = tokenizer.texts_to_sequences(text)
# get the mapping between words to word index
# pad all sequences into the same length (the longest)
padded_sequences = pad_sequences(sequences, \
                                 maxlen=MAX_DOC_LEN, \
                                 padding='post', truncating='post')
#print(padded_sequences[0])
In [ ]:
# Exercise 5.7.4: Fit the model using the function
from sklearn.model_selection import train_test_split
EMBEDDING_DIM=100
FILTER_SIZES=[2,3,4]
# set the number of output units
# as the number of classes
output_units_num=len(mlb.classes_)
num_filters=64
# set the dense units
dense_units_num= num_filters*len(FILTER_SIZES)
BATCH_SIZE = 64
NUM_EPOCHES = 20

# split dataset into train (80%) and test (20%) sets
X_train, X_test, Y_train, Y_test = train_test_split(\
                padded_sequences, Y, test_size=0.2, random_state=0)

model=cnn_model(FILTER_SIZES, MAX_NB_WORDS, \
                MAX_DOC_LEN, \
                NUM_FILTERS=num_filters,\
                NUM_OUTPUT_UNITS=output_units_num, \
                NUM_DENSE_UNITS=dense_units_num)

earlyStopping=EarlyStopping(monitor='val_loss', patience=0, verbose=2, mode='min')
checkpoint = ModelCheckpoint(BEST_MODEL_FILEPATH, monitor='val_loss', \
                             verbose=2, save_best_only=True, mode='min')

training=model.fit(X_train, Y_train, \
                   batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, \
                   callbacks=[earlyStopping, checkpoint],\
                   validation_data=[X_test, Y_test], verbose=2)
In [ ]:
# Exercise 5.7.5: predict using the best model
# and calculate performance
# load the best model
model.load_weights("best_model")
pred=model.predict(X_test)
pred[0:5]
In [ ]:
# Exercise 5.7.6: Generate performance report
from sklearn.metrics import classification_report
pred=np.where(pred>0.5, 1, 0)
print(classification_report(Y_test, pred,\
target_names=mlb.classes_))
5.8. Use Pretrained Word Vectors¶
• If the set of labeled samples is small, it's better to use pretrained word vectors
▪ e.g. Google or Facebook pretrained word vectors
▪ or you can train word vectors on relevant unlabeled text using gensim
• Procedure:
1. Obtain/train pretrained word vectors (see Section 4.1 and Exercise 4.1.1)
2. Look up the word vector of each word in the vocabulary and create an embedding matrix where each row is one word vector
3. Initialize the embedding layer with the embedding matrix and set it to be non-trainable
• With well-trained word vectors, a small training set can often still achieve good performance
In [ ]:
# Exercise 5.8.1: Load full yahoo news dataset
# to train the word vector
# note this data can be unlabeled. only text is used
import json
data=json.load(open("../../../dataset/ydata_full.json",'r'))
text,labels=zip(*data)
text=list(text)

sentences=[ [token.strip(string.punctuation).strip() \
             for token in nltk.word_tokenize(doc) \
             if token not in string.punctuation and \
             len(token.strip(string.punctuation).strip())>=2]\
           for doc in text]
In [ ]:
# Exercise 5.8.2: Train word vector using
# the large data set
from gensim.models import word2vec
import logging
import pandas as pd
# print out tracking information
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', \
                    level=logging.INFO)
EMBEDDING_DIM=200
# min_count: words with total frequency lower than this are ignored
# size: the dimension of word vector
# window: is the maximum distance
# between the current and predicted word
# within a sentence (i.e. the length of ngrams)
# workers: # of parallel threads in training
# for other parameters, check https://radimrehurek.com/gensim/models/word2vec.html
wv_model = word2vec.Word2Vec(sentences, \
                             min_count=5, \
                             size=EMBEDDING_DIM, \
                             window=5, workers=4 )
In [ ]:
# get word vector for all words in the vocabulary
# see reference at https://github.com/fchollet/keras/blob/master/examples/pretrained_word_embeddings.py
EMBEDDING_DIM=200
MAX_NB_WORDS=8000
# tokenizer.word_index provides the mapping
# between a word and word index for all words
NUM_WORDS = min(MAX_NB_WORDS, len(tokenizer.word_index))
# “+1” is for padding symbol
embedding_matrix = np.zeros((NUM_WORDS+1, EMBEDDING_DIM))
for word, i in tokenizer.word_index.items():
    # if the word index is above the max number of words, ignore it
    if i >= NUM_WORDS:
        continue
    if word in wv_model.wv:
        embedding_matrix[i]=wv_model.wv[word]
In [ ]:
# Exercise 5.8.3: Fit model using pretrained word vectors
from sklearn.model_selection import train_test_split
EMBEDDING_DIM=200
FILTER_SIZES=[2,3,4]
# set the number of output units
# as the number of classes
output_units_num=len(mlb.classes_)
#Number of filters for each size
num_filters=64
# set the dense units
dense_units_num= num_filters*len(FILTER_SIZES)
BATCH_SIZE = 32
NUM_EPOCHES = 100

# With well-trained word vectors, the sample size can be reduced
# Assume we only have 500 labeled samples
# split the dataset into train (80%) and test (20%) sets
X_train, X_test, Y_train, Y_test = train_test_split(\
                padded_sequences[0:500], Y[0:500], \
                test_size=0.2, random_state=0, \
                shuffle=True)

# create the model with the embedding matrix
model=cnn_model(FILTER_SIZES, MAX_NB_WORDS, \
                MAX_DOC_LEN, \
                NUM_FILTERS=num_filters,\
                NUM_OUTPUT_UNITS=output_units_num, \
                NUM_DENSE_UNITS=dense_units_num,\
                PRETRAINED_WORD_VECTOR=embedding_matrix)

earlyStopping=EarlyStopping(monitor='val_loss', patience=1, verbose=2, mode='min')
checkpoint = ModelCheckpoint(BEST_MODEL_FILEPATH, monitor='val_loss', \
                             verbose=2, save_best_only=True, mode='min')

training=model.fit(X_train, Y_train, \
                   batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, \
                   callbacks=[earlyStopping, checkpoint],\
                   validation_data=[X_test, Y_test], verbose=2)
In [ ]:
# Exercise 5.8.4: check model configuration
# note whether the parameters of the embedding layer
# are trainable (controlled by the trainable flag in cnn_model)
model.summary()
In [ ]:
# Exercise 5.8.5: Performance evaluation
# Let's use samples[500:1000]
# as an evaluation set
from sklearn.metrics import classification_report
pred=model.predict(padded_sequences[500:1000])
Y_pred=np.copy(pred)
Y_pred=np.where(Y_pred>0.5,1,0)
Y_pred[0:10]
Y[500:510]
print(classification_report(Y[500:1000], \
Y_pred, target_names=mlb.classes_))
Observations:
• Note that we only trained the model with 500 samples
• The performance is only slightly lower than that of the model trained with 6,000 samples
• This shows that pretrained word vectors can effectively improve classification performance when the labeled dataset is small
5.9. How to select hyperparameters?¶
• Fitting a neural network is a very empirical process
• See Section 3 of “Practical Recommendations for Gradient-Based Training of Deep Architectures” (https://arxiv.org/abs/1206.5533) for detailed discussion
• The following are some useful heuristics for setting:
▪ MAX_NB_WORDS: the max number of words to be included in the word embedding
◦ Use the word frequency histogram to include words that appear at least $n$ times
▪ MAX_DOC_LEN: the max length of documents
◦ Use the document length histogram to include as many complete sentences as possible
In [ ]:
# Exercise 5.9.1 Set MAX_NB_WORDS to
# include words that appear at least K times
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# get count of each word
df=pd.DataFrame.from_dict(tokenizer.word_counts, \
                          orient="index")
df.columns=['freq']
print(df.head())

# get a histogram of word counts
df=df['freq'].value_counts().reset_index()
df.columns=['word_freq','count']

# sort by word_freq
df=df.sort_values(by='word_freq')

# convert absolute counts to percentage
df['percent']=df['count']/len(tokenizer.word_counts)

# get cumulative percentage
df['cumsum']=df['percent'].cumsum()
print(df.head())

df.iloc[0:50].plot(x='word_freq', y='cumsum');
plt.show();
# if set min count for word to 10,
# what % of words can be included?
# how many words will be included?
# This is the parameter MAX_NB_WORDS
# tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
In [ ]:
# Exercise 5.9.2 Set MAX_DOC_LEN to
# include complete sentences as many as possible
# create a series based on the length of all sentences
sen_len=pd.Series([len(item) for item in sequences])
# create histogram of sentence length
# the "index" is the sentence length
# "counts" is the number of sentences at a given length
df=sen_len.value_counts().reset_index().sort_values(by='index')
df.columns=['sent_length','counts']

# get percentage and cumulative percentage
df['percent']=df['counts']/len(sen_len)
df['cumsum']=df['percent'].cumsum()
print(df.head(3))
# From the plot, 90% of sentences have length < 500,
# so it makes sense to set MAX_DOC_LEN to roughly 400~500
df.plot(x="sent_length", y='cumsum');
plt.show();

# what will be the minimum sentence length
# such that 99% of sentences will not be truncated?
6. Next, where to go?¶
• Recurrent Neural Networks (RNN)
• Applications of RNN:
▪ Language modeling and generating text: given a sequence of words, we want to predict the probability of each word given the previous words
▪ Machine translation
▪ Reference: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
In [ ]: