CS计算机代考程序代写 python GPU algorithm compiler cache Keras cuda In [0]:

In [0]:
from tensorflow.python.client import device_lib
print(“Show System RAM Memory:\n\n”)
!cat /proc/meminfo | egrep “MemTotal*”
print(“\n\nShow Devices:\n\n”+str(device_lib.list_local_devices()))

Show System RAM Memory:

MemTotal: 13335188 kB

Show Devices:

[name: “/device:CPU:0”
device_type: “CPU”
memory_limit: 268435456
locality {
}
incarnation: 8700158686858789265
, name: “/device:XLA_CPU:0”
device_type: “XLA_CPU”
memory_limit: 17179869184
locality {
}
incarnation: 8342104309289264332
physical_device_desc: “device: XLA_CPU device”
, name: “/device:XLA_GPU:0”
device_type: “XLA_GPU”
memory_limit: 17179869184
locality {
}
incarnation: 16107158158178871312
physical_device_desc: “device: XLA_GPU device”
, name: “/device:GPU:0”
device_type: “GPU”
memory_limit: 11330115994
locality {
bus_id: 1
links {
}
}
incarnation: 5997946455289902644
physical_device_desc: “device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7”
]
In [0]:
!pip install q tensorflow-gpu==2.0.0

Requirement already satisfied: q in /usr/local/lib/python3.6/dist-packages (2.6)
Requirement already satisfied: tensorflow-gpu==2.0.0 in /usr/local/lib/python3.6/dist-packages (2.0.0)
Requirement already satisfied: tensorboard<2.1.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (2.0.2)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.27.1)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.34.2)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.8.1)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.1.8)
Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.2.2)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.17.5)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (3.1.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (3.10.0)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.9.0)
Requirement already satisfied: tensorflow-estimator<2.1.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (2.0.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.11.2)
Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.0.8)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.12.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.2.1)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2.21.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.7.2)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.0.0)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (45.2.0)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0) (2.8.0)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2019.11.28)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.24.3)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.0.4)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.3.0)
Requirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.1.1)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.2.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.1.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<4.1,>=3.1.4->google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.4.8)
In [0]:
import tensorflow as tf
tf.__version__
Out[0]:
‘2.0.0’
In [0]:
from google.colab import drive
drive.mount(‘/content/gdrive’)

Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount(“/content/gdrive”, force_remount=True).
In [0]:
import time
start = time.time()
!python3 “/content/gdrive/My Drive/mnist_cnn.py”
end = time.time()
print(end – start)

Using TensorFlow backend.
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
2020-03-13 04:32:10.848229: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-03-13 04:32:10.866204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.866967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
2020-03-13 04:32:10.867302: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.868532: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-13 04:32:10.869653: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-13 04:32:10.869979: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-13 04:32:10.871499: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-13 04:32:10.872513: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-13 04:32:10.875751: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-13 04:32:10.875871: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.876701: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.877436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-13 04:32:10.877794: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-03-13 04:32:10.882560: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2299995000 Hz
2020-03-13 04:32:10.882973: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x16d2bc0 executing computations on platform Host. Devices:
2020-03-13 04:32:10.883013: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2020-03-13 04:32:10.936085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.936926: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x16d2d80 executing computations on platform CUDA. Devices:
2020-03-13 04:32:10.936964: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2020-03-13 04:32:10.937204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.937905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
2020-03-13 04:32:10.937976: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.938003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-13 04:32:10.938068: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-13 04:32:10.938110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-13 04:32:10.938133: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-13 04:32:10.938157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-13 04:32:10.938181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-13 04:32:10.938266: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.939014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.939743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-13 04:32:10.939808: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.941234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-13 04:32:10.941270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2020-03-13 04:32:10.941284: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2020-03-13 04:32:10.941422: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.942256: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.943004: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-03-13 04:32:10.943082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10805 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
Train on 48000 samples, validate on 12000 samples
Epoch 1/12
2020-03-13 04:32:12.642382: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
48000/48000 [==============================] – 8s 171us/sample – loss: 0.2753 – accuracy: 0.9152 – val_loss: 0.0699 – val_accuracy: 0.9800
Epoch 2/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0976 – accuracy: 0.9710 – val_loss: 0.0512 – val_accuracy: 0.9852
Epoch 3/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0720 – accuracy: 0.9772 – val_loss: 0.0440 – val_accuracy: 0.9879
Epoch 4/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0568 – accuracy: 0.9827 – val_loss: 0.0406 – val_accuracy: 0.9887
Epoch 5/12
48000/48000 [==============================] – 6s 131us/sample – loss: 0.0486 – accuracy: 0.9851 – val_loss: 0.0407 – val_accuracy: 0.9893
Epoch 6/12
48000/48000 [==============================] – 6s 132us/sample – loss: 0.0408 – accuracy: 0.9868 – val_loss: 0.0361 – val_accuracy: 0.9903
Epoch 7/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0356 – accuracy: 0.9889 – val_loss: 0.0368 – val_accuracy: 0.9902
Epoch 8/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0337 – accuracy: 0.9891 – val_loss: 0.0397 – val_accuracy: 0.9899
Epoch 9/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0315 – accuracy: 0.9898 – val_loss: 0.0368 – val_accuracy: 0.9902
Epoch 10/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0271 – accuracy: 0.9911 – val_loss: 0.0414 – val_accuracy: 0.9905
Epoch 11/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0242 – accuracy: 0.9923 – val_loss: 0.0360 – val_accuracy: 0.9910
Epoch 12/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0227 – accuracy: 0.9923 – val_loss: 0.0347 – val_accuracy: 0.9912
Test loss: 0.02725354956639948
Test accuracy: 0.9934
82.71783566474915
In [0]:
#Q1
#Run time on my computer:31.655904531478882 seconds
#Run time on GPU on Colab:82.71783566474915 seconds
#Interesting fact…I’m sure I did select runtime type “GPU”
In [0]:
#Step2. Implement handwritten recognition in Tensorflow using CNN
In [0]:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
#Conv2D:build convolutional layer
#Maxpooling2D:build maxpooling layer
#Flatten:create fully connected layer
In [0]:
batch_size = 128
num_classes = 10
epochs = 12
In [0]:
# input image dimensions
img_rows, img_cols = 28, 28
In [0]:
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
In [0]:
if K.image_data_format() == ‘channels_first’:
x_train= x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test= x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
#Conv2D shows shape of the inputs (vector shape we put Conv1D, matrix shape we put Conv2D, 3Dimage we put Conv3D or Maxpooling3D)
#We fit matrix into convolutional layer in CNN
#in addtion to the dimension of the matrix (28*28), we need to add an additional layer: number of channels which shows the image is black&white or colorful.
#1 means B&W, 3 means RGB. Ex:(60000,28,28,1)
In [0]:
x_train = x_train.astype(‘float32’)
x_test = x_test.astype(‘float32’) #fit matrix into convolutional layer
x_train /= 255
x_test /= 255
print(‘x_train shape:’, x_train.shape)
print(x_train.shape[0], ‘train samples’)
print(x_test.shape[0], ‘test samples’)

x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
In [0]:
import keras

Using TensorFlow backend.
In [0]:
import tensorflow as tf
In [0]:
# convert class vectors to binary class matrices
y_train= keras.utils.to_categorical(y_train, num_classes)
y_test= keras.utils.to_categorical(y_test, num_classes)
In [0]:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation=’relu’,input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation=’relu’))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation=’relu’))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation=’softmax’))
In [0]:
#Q2- Explain the way that this model is designed. Talk about all the layers and their functionality.
#1.model.add(Conv2D(32, kernel_size=(3, 3),activation=’relu’,input_shape=input_shape))
#means add my first convolutional layer, with 32 filters, filter size is 3 by 3. (normal standard number for convolutional layer)
#first hiddenlayer has to define how many inputs are there that is the “input_shape”.
#2.model.add(Conv2D(64, (3, 3), activation=’relu’))
#then we add another convolutional layer with 64 filters
#3.model.add(MaxPooling2D(pool_size=(2, 2)))
#then add a Maxpooling layer with maxpooling size 2 by 2. (should be smaller than convolutional layer)
#4.model.add(Dropout(0.25))
#0.25 dropout for overfitting
#5.model.add(Flatten())
#fully connected layer (Flatten layer)
#from now is similar to nn
#add a layer with 128 nodes, activation is relu, dropout 0.5
#output layer num_classes, activation softmax
In [0]:
model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.2) #20% of training data is used for validation
score = model.evaluate(x_test, y_test, verbose=0)
print(‘Test loss:’, score[0])
print(‘Test accuracy:’, score[1])

Train on 48000 samples, validate on 12000 samples
Epoch 1/12
48000/48000 [==============================] – 11s 225us/sample – loss: 0.2695 – accuracy: 0.9173 – val_loss: 0.0634 – val_accuracy: 0.9813
Epoch 2/12
48000/48000 [==============================] – 8s 172us/sample – loss: 0.0963 – accuracy: 0.9699 – val_loss: 0.0498 – val_accuracy: 0.9856
Epoch 3/12
48000/48000 [==============================] – 8s 167us/sample – loss: 0.0712 – accuracy: 0.9778 – val_loss: 0.0510 – val_accuracy: 0.9856
Epoch 4/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0589 – accuracy: 0.9823 – val_loss: 0.0401 – val_accuracy: 0.9887
Epoch 5/12
48000/48000 [==============================] – 8s 166us/sample – loss: 0.0501 – accuracy: 0.9851 – val_loss: 0.0357 – val_accuracy: 0.9901
Epoch 6/12
48000/48000 [==============================] – 8s 165us/sample – loss: 0.0434 – accuracy: 0.9861 – val_loss: 0.0403 – val_accuracy: 0.9880
Epoch 7/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0368 – accuracy: 0.9880 – val_loss: 0.0355 – val_accuracy: 0.9902
Epoch 8/12
48000/48000 [==============================] – 8s 166us/sample – loss: 0.0346 – accuracy: 0.9890 – val_loss: 0.0350 – val_accuracy: 0.9911
Epoch 9/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0305 – accuracy: 0.9901 – val_loss: 0.0358 – val_accuracy: 0.9910
Epoch 10/12
48000/48000 [==============================] – 8s 168us/sample – loss: 0.0282 – accuracy: 0.9910 – val_loss: 0.0354 – val_accuracy: 0.9904
Epoch 11/12
48000/48000 [==============================] – 8s 165us/sample – loss: 0.0258 – accuracy: 0.9916 – val_loss: 0.0384 – val_accuracy: 0.9900
Epoch 12/12
48000/48000 [==============================] – 8s 174us/sample – loss: 0.0242 – accuracy: 0.9921 – val_loss: 0.0368 – val_accuracy: 0.9906
Test loss: 0.029268154423910148
Test accuracy: 0.9921
In [0]:
model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’])
hist = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.2)
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(‘Test loss:’, score[0])
print(‘Test accuracy:’, score[1])

Train on 48000 samples, validate on 12000 samples
Epoch 1/12
48000/48000 [==============================] – 7s 156us/sample – loss: 0.0098 – accuracy: 0.9966 – val_loss: 0.0458 – val_accuracy: 0.9917
Epoch 2/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0084 – accuracy: 0.9972 – val_loss: 0.0537 – val_accuracy: 0.9903
Epoch 3/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0078 – accuracy: 0.9976 – val_loss: 0.0543 – val_accuracy: 0.9919
Epoch 4/12
48000/48000 [==============================] – 7s 142us/sample – loss: 0.0098 – accuracy: 0.9968 – val_loss: 0.0470 – val_accuracy: 0.9930
Epoch 5/12
48000/48000 [==============================] – 7s 138us/sample – loss: 0.0092 – accuracy: 0.9966 – val_loss: 0.0514 – val_accuracy: 0.9916
Epoch 6/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0062 – accuracy: 0.9980 – val_loss: 0.0488 – val_accuracy: 0.9917
Epoch 7/12
48000/48000 [==============================] – 7s 138us/sample – loss: 0.0073 – accuracy: 0.9977 – val_loss: 0.0495 – val_accuracy: 0.9918
Epoch 8/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0082 – accuracy: 0.9974 – val_loss: 0.0537 – val_accuracy: 0.9912
Epoch 9/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0085 – accuracy: 0.9975 – val_loss: 0.0504 – val_accuracy: 0.9925
Epoch 10/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0084 – accuracy: 0.9973 – val_loss: 0.0505 – val_accuracy: 0.9907
Epoch 11/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0086 – accuracy: 0.9971 – val_loss: 0.0410 – val_accuracy: 0.9924
Epoch 12/12
48000/48000 [==============================] – 7s 144us/sample – loss: 0.0075 – accuracy: 0.9974 – val_loss: 0.0458 – val_accuracy: 0.9913
Test loss: 0.029268154423910148
Test accuracy: 0.9921
In [0]:
#Q3- Design the learning curve and talk about what you see.
In [0]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()


In [0]:
#Step3. Text mining using CNN
In [0]:
import pandas as pd
In [0]:
df = pd.read_csv(‘/content/gdrive/My Drive//amazon_cells_labelled.txt’, names=[‘sentence’, ‘label’], sep=’\t’)
In [25]:
print(df.iloc[0]) #print first review

sentence So there is no way for me to plug it in here i…
label 0
Name: 0, dtype: object
In [0]:
sentences = [‘John likes ice cream’, ‘John hates chocolate.’]
In [0]:
from sklearn.feature_extraction.text import CountVectorizer
In [28]:
vectorizer = CountVectorizer(min_df=0, lowercase=False)
vectorizer.fit(sentences)
vectorizer.vocabulary_
Out[28]:
{‘John’: 0, ‘chocolate’: 1, ‘cream’: 2, ‘hates’: 3, ‘ice’: 4, ‘likes’: 5}
In [29]:
vectorizer.transform(sentences).toarray()
Out[29]:
array([[1, 0, 1, 0, 1, 1],
[1, 1, 0, 1, 0, 0]])
In [0]:
from sklearn.model_selection import train_test_split
sentences = df[‘sentence’].values
y = df[‘label’].values
In [0]:
sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.25, random_state=1000) #25% for test 75% for train
In [0]:
from sklearn.feature_extraction.text import CountVectorizer
In [33]:
vectorizer = CountVectorizer()
vectorizer.fit(sentences_train)
Out[33]:
CountVectorizer(analyzer=’word’, binary=False, decode_error=’strict’,
dtype=, encoding=’utf-8′, input=’content’,
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b’,
tokenizer=None, vocabulary=None)
In [34]:
X_train = vectorizer.transform(sentences_train)
X_test = vectorizer.transform(sentences_test)
X_train
#It shows 750 samples which are the number of training samples. Each sample has 1546 dimensions,which is the size of the vocabulary
Out[34]:
<750x1546 sparse matrix of type '
with 6817 stored elements in Compressed Sparse Row format>
In [0]:
#we really don’t need to always use fancy algorithms. For example here even using a logistic regression model, gives us a reasonable result:
In [35]:
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
score = classifier.score(X_test, y_test)
print(“Accuracy:”, score)

Accuracy: 0.796
In [0]:
#Now, we can implement a normal DNN:
In [36]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
input_dim = X_train.shape[1] # Number of features
model = Sequential()
model.add(layers.Dense(10, input_dim=input_dim, activation=’relu’))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(loss=’binary_crossentropy’, optimizer=’adam’,
metrics=[‘accuracy’])
hist = model.fit(X_train, y_train, epochs=100, validation_split=0.2 ,
batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Test Accuracy: “,accuracy*100)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
Train on 600 samples, validate on 150 samples
Epoch 1/100
600/600 [==============================] – 1s 910us/sample – loss: 0.6844 – accuracy: 0.6000 – val_loss: 0.6674 – val_accuracy: 0.7000
Epoch 2/100
600/600 [==============================] – 0s 524us/sample – loss: 0.6326 – accuracy: 0.7967 – val_loss: 0.6305 – val_accuracy: 0.7600
Epoch 3/100
600/600 [==============================] – 0s 492us/sample – loss: 0.5541 – accuracy: 0.8833 – val_loss: 0.5817 – val_accuracy: 0.8200
Epoch 4/100
600/600 [==============================] – 0s 759us/sample – loss: 0.4609 – accuracy: 0.9383 – val_loss: 0.5332 – val_accuracy: 0.8533
Epoch 5/100
600/600 [==============================] – 0s 492us/sample – loss: 0.3724 – accuracy: 0.9650 – val_loss: 0.4928 – val_accuracy: 0.8400
Epoch 6/100
600/600 [==============================] – 0s 557us/sample – loss: 0.2984 – accuracy: 0.9783 – val_loss: 0.4632 – val_accuracy: 0.8267
Epoch 7/100
600/600 [==============================] – 0s 575us/sample – loss: 0.2398 – accuracy: 0.9867 – val_loss: 0.4379 – val_accuracy: 0.8333
Epoch 8/100
600/600 [==============================] – 0s 512us/sample – loss: 0.1949 – accuracy: 0.9900 – val_loss: 0.4222 – val_accuracy: 0.8333
Epoch 9/100
600/600 [==============================] – 0s 675us/sample – loss: 0.1604 – accuracy: 0.9967 – val_loss: 0.4082 – val_accuracy: 0.8333
Epoch 10/100
600/600 [==============================] – 0s 452us/sample – loss: 0.1322 – accuracy: 0.9967 – val_loss: 0.4020 – val_accuracy: 0.8400
Epoch 11/100
600/600 [==============================] – 0s 456us/sample – loss: 0.1108 – accuracy: 0.9967 – val_loss: 0.3941 – val_accuracy: 0.8400
Epoch 12/100
600/600 [==============================] – 0s 678us/sample – loss: 0.0940 – accuracy: 0.9967 – val_loss: 0.3912 – val_accuracy: 0.8400
Epoch 13/100
600/600 [==============================] – 0s 458us/sample – loss: 0.0801 – accuracy: 0.9983 – val_loss: 0.3888 – val_accuracy: 0.8267
Epoch 14/100
600/600 [==============================] – 0s 547us/sample – loss: 0.0690 – accuracy: 0.9983 – val_loss: 0.3874 – val_accuracy: 0.8333
Epoch 15/100
600/600 [==============================] – 0s 636us/sample – loss: 0.0597 – accuracy: 0.9983 – val_loss: 0.3865 – val_accuracy: 0.8333
Epoch 16/100
600/600 [==============================] – 0s 469us/sample – loss: 0.0522 – accuracy: 1.0000 – val_loss: 0.3865 – val_accuracy: 0.8333
Epoch 17/100
600/600 [==============================] – 0s 418us/sample – loss: 0.0459 – accuracy: 1.0000 – val_loss: 0.3885 – val_accuracy: 0.8333
Epoch 18/100
600/600 [==============================] – 0s 521us/sample – loss: 0.0407 – accuracy: 1.0000 – val_loss: 0.3884 – val_accuracy: 0.8333
Epoch 19/100
600/600 [==============================] – 0s 632us/sample – loss: 0.0361 – accuracy: 1.0000 – val_loss: 0.3908 – val_accuracy: 0.8333
Epoch 20/100
600/600 [==============================] – 0s 433us/sample – loss: 0.0323 – accuracy: 1.0000 – val_loss: 0.3930 – val_accuracy: 0.8333
Epoch 21/100
600/600 [==============================] – 0s 678us/sample – loss: 0.0290 – accuracy: 1.0000 – val_loss: 0.3952 – val_accuracy: 0.8267
Epoch 22/100
600/600 [==============================] – 0s 476us/sample – loss: 0.0261 – accuracy: 1.0000 – val_loss: 0.3963 – val_accuracy: 0.8333
Epoch 23/100
600/600 [==============================] – 0s 448us/sample – loss: 0.0237 – accuracy: 1.0000 – val_loss: 0.4003 – val_accuracy: 0.8267
Epoch 24/100
600/600 [==============================] – 0s 788us/sample – loss: 0.0215 – accuracy: 1.0000 – val_loss: 0.4031 – val_accuracy: 0.8267
Epoch 25/100
600/600 [==============================] – 0s 471us/sample – loss: 0.0197 – accuracy: 1.0000 – val_loss: 0.4050 – val_accuracy: 0.8267
Epoch 26/100
600/600 [==============================] – 0s 638us/sample – loss: 0.0180 – accuracy: 1.0000 – val_loss: 0.4073 – val_accuracy: 0.8267
Epoch 27/100
600/600 [==============================] – 0s 592us/sample – loss: 0.0166 – accuracy: 1.0000 – val_loss: 0.4086 – val_accuracy: 0.8267
Epoch 28/100
600/600 [==============================] – 0s 426us/sample – loss: 0.0152 – accuracy: 1.0000 – val_loss: 0.4131 – val_accuracy: 0.8200
Epoch 29/100
600/600 [==============================] – 0s 749us/sample – loss: 0.0141 – accuracy: 1.0000 – val_loss: 0.4150 – val_accuracy: 0.8200
Epoch 30/100
600/600 [==============================] – 0s 498us/sample – loss: 0.0131 – accuracy: 1.0000 – val_loss: 0.4176 – val_accuracy: 0.8200
Epoch 31/100
600/600 [==============================] – 0s 708us/sample – loss: 0.0121 – accuracy: 1.0000 – val_loss: 0.4207 – val_accuracy: 0.8200
Epoch 32/100
600/600 [==============================] – 0s 451us/sample – loss: 0.0113 – accuracy: 1.0000 – val_loss: 0.4224 – val_accuracy: 0.8200
Epoch 33/100
600/600 [==============================] – 0s 480us/sample – loss: 0.0105 – accuracy: 1.0000 – val_loss: 0.4251 – val_accuracy: 0.8267
Epoch 34/100
600/600 [==============================] – 0s 672us/sample – loss: 0.0099 – accuracy: 1.0000 – val_loss: 0.4285 – val_accuracy: 0.8267
Epoch 35/100
600/600 [==============================] – 0s 449us/sample – loss: 0.0092 – accuracy: 1.0000 – val_loss: 0.4314 – val_accuracy: 0.8267
Epoch 36/100
600/600 [==============================] – 0s 625us/sample – loss: 0.0087 – accuracy: 1.0000 – val_loss: 0.4345 – val_accuracy: 0.8267
Epoch 37/100
600/600 [==============================] – 0s 549us/sample – loss: 0.0082 – accuracy: 1.0000 – val_loss: 0.4370 – val_accuracy: 0.8267
Epoch 38/100
600/600 [==============================] – 0s 462us/sample – loss: 0.0077 – accuracy: 1.0000 – val_loss: 0.4394 – val_accuracy: 0.8267
Epoch 39/100
600/600 [==============================] – 0s 688us/sample – loss: 0.0073 – accuracy: 1.0000 – val_loss: 0.4419 – val_accuracy: 0.8267
Epoch 40/100
600/600 [==============================] – 0s 477us/sample – loss: 0.0069 – accuracy: 1.0000 – val_loss: 0.4442 – val_accuracy: 0.8267
Epoch 41/100
600/600 [==============================] – 0s 545us/sample – loss: 0.0065 – accuracy: 1.0000 – val_loss: 0.4466 – val_accuracy: 0.8267
Epoch 42/100
600/600 [==============================] – 0s 566us/sample – loss: 0.0062 – accuracy: 1.0000 – val_loss: 0.4494 – val_accuracy: 0.8267
Epoch 43/100
600/600 [==============================] – 0s 521us/sample – loss: 0.0059 – accuracy: 1.0000 – val_loss: 0.4523 – val_accuracy: 0.8267
Epoch 44/100
600/600 [==============================] – 0s 661us/sample – loss: 0.0056 – accuracy: 1.0000 – val_loss: 0.4548 – val_accuracy: 0.8267
Epoch 45/100
600/600 [==============================] – 0s 483us/sample – loss: 0.0053 – accuracy: 1.0000 – val_loss: 0.4570 – val_accuracy: 0.8267
Epoch 46/100
600/600 [==============================] – 0s 495us/sample – loss: 0.0051 – accuracy: 1.0000 – val_loss: 0.4598 – val_accuracy: 0.8333
Epoch 47/100
600/600 [==============================] – 0s 443us/sample – loss: 0.0049 – accuracy: 1.0000 – val_loss: 0.4615 – val_accuracy: 0.8267
Epoch 48/100
600/600 [==============================] – 0s 762us/sample – loss: 0.0046 – accuracy: 1.0000 – val_loss: 0.4645 – val_accuracy: 0.8267
Epoch 49/100
600/600 [==============================] – 0s 471us/sample – loss: 0.0044 – accuracy: 1.0000 – val_loss: 0.4671 – val_accuracy: 0.8333
Epoch 50/100
600/600 [==============================] – 0s 470us/sample – loss: 0.0043 – accuracy: 1.0000 – val_loss: 0.4691 – val_accuracy: 0.8333
Epoch 51/100
600/600 [==============================] – 0s 697us/sample – loss: 0.0041 – accuracy: 1.0000 – val_loss: 0.4711 – val_accuracy: 0.8333
Epoch 52/100
600/600 [==============================] – 0s 481us/sample – loss: 0.0039 – accuracy: 1.0000 – val_loss: 0.4728 – val_accuracy: 0.8267
Epoch 53/100
600/600 [==============================] – 0s 642us/sample – loss: 0.0038 – accuracy: 1.0000 – val_loss: 0.4768 – val_accuracy: 0.8333
Epoch 54/100
600/600 [==============================] – 0s 477us/sample – loss: 0.0036 – accuracy: 1.0000 – val_loss: 0.4788 – val_accuracy: 0.8333
Epoch 55/100
600/600 [==============================] – 0s 460us/sample – loss: 0.0035 – accuracy: 1.0000 – val_loss: 0.4807 – val_accuracy: 0.8333
Epoch 56/100
600/600 [==============================] – 0s 629us/sample – loss: 0.0034 – accuracy: 1.0000 – val_loss: 0.4829 – val_accuracy: 0.8267
Epoch 57/100
600/600 [==============================] – 0s 492us/sample – loss: 0.0033 – accuracy: 1.0000 – val_loss: 0.4857 – val_accuracy: 0.8333
Epoch 58/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0031 – accuracy: 1.0000 – val_loss: 0.4873 – val_accuracy: 0.8333
Epoch 59/100
600/600 [==============================] – 0s 664us/sample – loss: 0.0030 – accuracy: 1.0000 – val_loss: 0.4895 – val_accuracy: 0.8333
Epoch 60/100
600/600 [==============================] – 0s 467us/sample – loss: 0.0029 – accuracy: 1.0000 – val_loss: 0.4922 – val_accuracy: 0.8333
Epoch 61/100
600/600 [==============================] – 0s 464us/sample – loss: 0.0028 – accuracy: 1.0000 – val_loss: 0.4937 – val_accuracy: 0.8333
Epoch 62/100
600/600 [==============================] – 0s 666us/sample – loss: 0.0028 – accuracy: 1.0000 – val_loss: 0.4970 – val_accuracy: 0.8333
Epoch 63/100
600/600 [==============================] – 0s 481us/sample – loss: 0.0027 – accuracy: 1.0000 – val_loss: 0.4983 – val_accuracy: 0.8333
Epoch 64/100
600/600 [==============================] – 0s 652us/sample – loss: 0.0026 – accuracy: 1.0000 – val_loss: 0.5007 – val_accuracy: 0.8333
Epoch 65/100
600/600 [==============================] – 0s 488us/sample – loss: 0.0025 – accuracy: 1.0000 – val_loss: 0.5029 – val_accuracy: 0.8333
Epoch 66/100
600/600 [==============================] – 0s 512us/sample – loss: 0.0025 – accuracy: 1.0000 – val_loss: 0.5050 – val_accuracy: 0.8333
Epoch 67/100
600/600 [==============================] – 0s 687us/sample – loss: 0.0024 – accuracy: 1.0000 – val_loss: 0.5075 – val_accuracy: 0.8333
Epoch 68/100
600/600 [==============================] – 0s 513us/sample – loss: 0.0023 – accuracy: 1.0000 – val_loss: 0.5119 – val_accuracy: 0.8400
Epoch 69/100
600/600 [==============================] – 0s 697us/sample – loss: 0.0023 – accuracy: 1.0000 – val_loss: 0.5133 – val_accuracy: 0.8400
Epoch 70/100
600/600 [==============================] – 0s 508us/sample – loss: 0.0022 – accuracy: 1.0000 – val_loss: 0.5154 – val_accuracy: 0.8400
Epoch 71/100
600/600 [==============================] – 0s 430us/sample – loss: 0.0022 – accuracy: 1.0000 – val_loss: 0.5173 – val_accuracy: 0.8333
Epoch 72/100
600/600 [==============================] – 0s 763us/sample – loss: 0.0021 – accuracy: 1.0000 – val_loss: 0.5185 – val_accuracy: 0.8333
Epoch 73/100
600/600 [==============================] – 0s 498us/sample – loss: 0.0021 – accuracy: 1.0000 – val_loss: 0.5232 – val_accuracy: 0.8467
Epoch 74/100
600/600 [==============================] – 0s 625us/sample – loss: 0.0020 – accuracy: 1.0000 – val_loss: 0.5240 – val_accuracy: 0.8400
Epoch 75/100
600/600 [==============================] – 0s 609us/sample – loss: 0.0020 – accuracy: 1.0000 – val_loss: 0.5253 – val_accuracy: 0.8467
Epoch 76/100
600/600 [==============================] – 0s 480us/sample – loss: 0.0019 – accuracy: 1.0000 – val_loss: 0.5276 – val_accuracy: 0.8467
Epoch 77/100
600/600 [==============================] – 0s 440us/sample – loss: 0.0019 – accuracy: 1.0000 – val_loss: 0.5287 – val_accuracy: 0.8467
Epoch 78/100
600/600 [==============================] – 0s 684us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5304 – val_accuracy: 0.8467
Epoch 79/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5324 – val_accuracy: 0.8467
Epoch 80/100
600/600 [==============================] – 0s 447us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5345 – val_accuracy: 0.8467
Epoch 81/100
600/600 [==============================] – 1s 853us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5367 – val_accuracy: 0.8467
Epoch 82/100
600/600 [==============================] – 0s 468us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5377 – val_accuracy: 0.8467
Epoch 83/100
600/600 [==============================] – 0s 595us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5400 – val_accuracy: 0.8467
Epoch 84/100
600/600 [==============================] – 0s 553us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5409 – val_accuracy: 0.8467
Epoch 85/100
600/600 [==============================] – 0s 525us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5438 – val_accuracy: 0.8467
Epoch 86/100
600/600 [==============================] – 0s 689us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5446 – val_accuracy: 0.8467
Epoch 87/100
600/600 [==============================] – 0s 456us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5469 – val_accuracy: 0.8467
Epoch 88/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5470 – val_accuracy: 0.8467
Epoch 89/100
600/600 [==============================] – 0s 658us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5510 – val_accuracy: 0.8467
Epoch 90/100
600/600 [==============================] – 0s 568us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5525 – val_accuracy: 0.8467
Epoch 91/100
600/600 [==============================] – 0s 732us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5538 – val_accuracy: 0.8467
Epoch 92/100
600/600 [==============================] – 0s 465us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5556 – val_accuracy: 0.8467
Epoch 93/100
600/600 [==============================] – 0s 458us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5606 – val_accuracy: 0.8533
Epoch 94/100
600/600 [==============================] – 0s 791us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5621 – val_accuracy: 0.8533
Epoch 95/100
600/600 [==============================] – 0s 427us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5617 – val_accuracy: 0.8533
Epoch 96/100
600/600 [==============================] – 0s 574us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5641 – val_accuracy: 0.8533
Epoch 97/100
600/600 [==============================] – 0s 583us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5651 – val_accuracy: 0.8533
Epoch 98/100
600/600 [==============================] – 0s 478us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5668 – val_accuracy: 0.8533
Epoch 99/100
600/600 [==============================] – 0s 682us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5681 – val_accuracy: 0.8533
Epoch 100/100
600/600 [==============================] – 0s 478us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5686 – val_accuracy: 0.8533
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
Test Accuracy: 78.39999794960022
In [44]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()


In [0]:
#Q4- Explain these graphs. If you see any issue, suggest a solution to resolve it. Make the model by creating 3 hidden layers
#(first one 200 nodes, second one 100 nodes and last one 50 nodes and after each step, add dropout of 0.2 and report the accuracy.
#If you don’t see a huge improvement, don’t worry we are not done with the model yet.
#Answer: Overfitting problem
#Please see below
#Now the accuracy is 46.00000083446503
#Conclusion: we don’t see a huge different. The problem is not because of the model that we used, is because of the natural language processing technic we used.
#So we will improve the accuracy in embedding word section.
In [45]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
input_dim = X_train.shape[1] # Number of features
model = Sequential()
model.add(layers.Dense(10, input_dim=input_dim, activation=’relu’))
model.add(Dense(200, activation=’relu’))
model.add(Dropout(0.2))
model.add(Dense(100, activation=’relu’))
model.add(Dropout(0.2))
model.add(Dense(50, activation=’relu’))
model.add(Dropout(0.2))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(loss=’binary_crossentropy’, optimizer=’adam’,
metrics=[‘accuracy’])
hist = model.fit(X_train, y_train, epochs=100, validation_split=0.2 ,
batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Test Accuracy: “,accuracy*100)

Train on 600 samples, validate on 150 samples
Epoch 1/100
600/600 [==============================] – 1s 2ms/sample – loss: 5.8718 – accuracy: 0.5100 – val_loss: 3.1877 – val_accuracy: 0.4933
Epoch 2/100
600/600 [==============================] – 0s 760us/sample – loss: 2.7704 – accuracy: 0.5250 – val_loss: 1.4880 – val_accuracy: 0.4667
Epoch 3/100
600/600 [==============================] – 0s 579us/sample – loss: 1.8348 – accuracy: 0.5200 – val_loss: 1.3824 – val_accuracy: 0.5000
Epoch 4/100
600/600 [==============================] – 0s 735us/sample – loss: 1.3707 – accuracy: 0.5333 – val_loss: 0.9951 – val_accuracy: 0.5467
Epoch 5/100
600/600 [==============================] – 0s 571us/sample – loss: 1.2721 – accuracy: 0.5033 – val_loss: 0.8464 – val_accuracy: 0.5533
Epoch 6/100
600/600 [==============================] – 0s 793us/sample – loss: 1.0062 – accuracy: 0.5300 – val_loss: 0.8145 – val_accuracy: 0.5600
Epoch 7/100
600/600 [==============================] – 0s 588us/sample – loss: 0.8598 – accuracy: 0.5617 – val_loss: 0.7458 – val_accuracy: 0.6000
Epoch 8/100
600/600 [==============================] – 0s 806us/sample – loss: 0.8672 – accuracy: 0.5217 – val_loss: 0.7034 – val_accuracy: 0.5333
Epoch 9/100
600/600 [==============================] – 0s 541us/sample – loss: 0.8152 – accuracy: 0.5417 – val_loss: 0.7008 – val_accuracy: 0.5800
Epoch 10/100
600/600 [==============================] – 0s 705us/sample – loss: 0.7113 – accuracy: 0.5417 – val_loss: 0.6975 – val_accuracy: 0.6000
Epoch 11/100
600/600 [==============================] – 0s 563us/sample – loss: 0.7296 – accuracy: 0.5333 – val_loss: 0.7016 – val_accuracy: 0.5800
Epoch 12/100
600/600 [==============================] – 0s 524us/sample – loss: 0.7062 – accuracy: 0.5733 – val_loss: 0.7173 – val_accuracy: 0.5733
Epoch 13/100
600/600 [==============================] – 0s 608us/sample – loss: 0.7144 – accuracy: 0.5500 – val_loss: 0.6891 – val_accuracy: 0.5667
Epoch 14/100
600/600 [==============================] – 0s 652us/sample – loss: 0.7131 – accuracy: 0.5533 – val_loss: 0.6932 – val_accuracy: 0.6133
Epoch 15/100
600/600 [==============================] – 0s 494us/sample – loss: 0.6929 – accuracy: 0.5633 – val_loss: 0.6899 – val_accuracy: 0.5867
Epoch 16/100
600/600 [==============================] – 0s 775us/sample – loss: 0.7063 – accuracy: 0.5650 – val_loss: 0.7144 – val_accuracy: 0.5667
Epoch 17/100
600/600 [==============================] – 0s 526us/sample – loss: 0.6979 – accuracy: 0.5550 – val_loss: 0.6980 – val_accuracy: 0.5600
Epoch 18/100
600/600 [==============================] – 0s 792us/sample – loss: 0.6810 – accuracy: 0.5817 – val_loss: 0.7063 – val_accuracy: 0.5867
Epoch 19/100
600/600 [==============================] – 0s 611us/sample – loss: 0.6830 – accuracy: 0.5717 – val_loss: 0.7086 – val_accuracy: 0.5733
Epoch 20/100
600/600 [==============================] – 0s 805us/sample – loss: 0.6823 – accuracy: 0.5517 – val_loss: 0.7192 – val_accuracy: 0.5800
Epoch 21/100
600/600 [==============================] – 0s 524us/sample – loss: 0.6963 – accuracy: 0.5800 – val_loss: 0.7075 – val_accuracy: 0.5533
Epoch 22/100
600/600 [==============================] – 0s 806us/sample – loss: 0.6715 – accuracy: 0.5567 – val_loss: 0.7174 – val_accuracy: 0.5000
Epoch 23/100
600/600 [==============================] – 0s 533us/sample – loss: 0.6997 – accuracy: 0.5600 – val_loss: 0.7267 – val_accuracy: 0.5067
Epoch 24/100
600/600 [==============================] – 0s 800us/sample – loss: 0.6798 – accuracy: 0.5633 – val_loss: 0.7154 – val_accuracy: 0.5467
Epoch 25/100
600/600 [==============================] – 0s 539us/sample – loss: 0.6986 – accuracy: 0.5767 – val_loss: 0.7647 – val_accuracy: 0.4867
Epoch 26/100
600/600 [==============================] – 0s 777us/sample – loss: 0.6767 – accuracy: 0.5583 – val_loss: 0.7860 – val_accuracy: 0.4733
Epoch 27/100
600/600 [==============================] – 0s 529us/sample – loss: 0.6547 – accuracy: 0.5867 – val_loss: 0.7892 – val_accuracy: 0.5067
Epoch 28/100
600/600 [==============================] – 0s 775us/sample – loss: 0.6701 – accuracy: 0.5750 – val_loss: 0.8510 – val_accuracy: 0.5200
Epoch 29/100
600/600 [==============================] – 0s 640us/sample – loss: 0.6678 – accuracy: 0.5633 – val_loss: 0.8967 – val_accuracy: 0.4867
Epoch 30/100
600/600 [==============================] – 0s 768us/sample – loss: 0.6629 – accuracy: 0.5917 – val_loss: 0.8757 – val_accuracy: 0.5067
Epoch 31/100
600/600 [==============================] – 0s 550us/sample – loss: 0.6707 – accuracy: 0.5483 – val_loss: 0.8364 – val_accuracy: 0.5200
Epoch 32/100
600/600 [==============================] – 0s 790us/sample – loss: 0.6577 – accuracy: 0.5850 – val_loss: 0.8451 – val_accuracy: 0.5133
Epoch 33/100
600/600 [==============================] – 0s 590us/sample – loss: 0.6713 – accuracy: 0.5833 – val_loss: 0.7978 – val_accuracy: 0.5267
Epoch 34/100
600/600 [==============================] – 0s 786us/sample – loss: 0.6570 – accuracy: 0.5900 – val_loss: 0.8311 – val_accuracy: 0.5333
Epoch 35/100
600/600 [==============================] – 0s 527us/sample – loss: 0.6777 – accuracy: 0.5567 – val_loss: 0.8636 – val_accuracy: 0.4800
Epoch 36/100
600/600 [==============================] – 0s 535us/sample – loss: 0.6678 – accuracy: 0.5800 – val_loss: 0.9193 – val_accuracy: 0.5333
Epoch 37/100
600/600 [==============================] – 0s 773us/sample – loss: 0.6471 – accuracy: 0.5683 – val_loss: 0.9848 – val_accuracy: 0.5133
Epoch 38/100
600/600 [==============================] – 0s 502us/sample – loss: 0.6587 – accuracy: 0.5967 – val_loss: 1.0117 – val_accuracy: 0.5533
Epoch 39/100
600/600 [==============================] – 0s 764us/sample – loss: 0.6435 – accuracy: 0.5783 – val_loss: 0.9547 – val_accuracy: 0.5333
Epoch 40/100
600/600 [==============================] – 0s 554us/sample – loss: 0.6475 – accuracy: 0.6017 – val_loss: 1.0281 – val_accuracy: 0.5267
Epoch 41/100
600/600 [==============================] – 0s 725us/sample – loss: 0.6324 – accuracy: 0.6050 – val_loss: 1.0327 – val_accuracy: 0.5400
Epoch 42/100
600/600 [==============================] – 0s 624us/sample – loss: 0.6467 – accuracy: 0.6050 – val_loss: 1.1355 – val_accuracy: 0.5133
Epoch 43/100
600/600 [==============================] – 0s 673us/sample – loss: 0.6327 – accuracy: 0.6117 – val_loss: 1.4349 – val_accuracy: 0.5200
Epoch 44/100
600/600 [==============================] – 0s 627us/sample – loss: 0.6187 – accuracy: 0.6150 – val_loss: 1.5602 – val_accuracy: 0.5067
Epoch 45/100
600/600 [==============================] – 0s 569us/sample – loss: 0.6306 – accuracy: 0.5867 – val_loss: 1.4496 – val_accuracy: 0.5667
Epoch 46/100
600/600 [==============================] – 0s 729us/sample – loss: 0.6210 – accuracy: 0.6100 – val_loss: 1.4279 – val_accuracy: 0.5400
Epoch 47/100
600/600 [==============================] – 0s 506us/sample – loss: 0.6438 – accuracy: 0.6183 – val_loss: 1.1894 – val_accuracy: 0.5467
Epoch 48/100
600/600 [==============================] – 0s 815us/sample – loss: 0.6386 – accuracy: 0.5733 – val_loss: 1.2127 – val_accuracy: 0.5133
Epoch 49/100
600/600 [==============================] – 0s 564us/sample – loss: 0.6213 – accuracy: 0.6000 – val_loss: 1.4162 – val_accuracy: 0.5267
Epoch 50/100
600/600 [==============================] – 0s 793us/sample – loss: 0.6253 – accuracy: 0.5833 – val_loss: 2.0205 – val_accuracy: 0.5600
Epoch 51/100
600/600 [==============================] – 0s 538us/sample – loss: 0.6075 – accuracy: 0.6200 – val_loss: 1.5109 – val_accuracy: 0.5333
Epoch 52/100
600/600 [==============================] – 0s 772us/sample – loss: 0.6168 – accuracy: 0.6200 – val_loss: 1.5006 – val_accuracy: 0.5600
Epoch 53/100
600/600 [==============================] – 0s 584us/sample – loss: 0.6106 – accuracy: 0.6200 – val_loss: 1.5067 – val_accuracy: 0.5667
Epoch 54/100
600/600 [==============================] – 0s 777us/sample – loss: 0.6341 – accuracy: 0.5983 – val_loss: 1.4693 – val_accuracy: 0.5867
Epoch 55/100
600/600 [==============================] – 0s 530us/sample – loss: 0.6339 – accuracy: 0.6017 – val_loss: 0.9667 – val_accuracy: 0.5600
Epoch 56/100
600/600 [==============================] – 0s 790us/sample – loss: 0.6054 – accuracy: 0.6267 – val_loss: 1.0641 – val_accuracy: 0.5667
Epoch 57/100
600/600 [==============================] – 0s 580us/sample – loss: 0.6352 – accuracy: 0.6033 – val_loss: 0.9542 – val_accuracy: 0.5600
Epoch 58/100
600/600 [==============================] – 0s 816us/sample – loss: 0.6298 – accuracy: 0.5917 – val_loss: 0.8995 – val_accuracy: 0.5733
Epoch 59/100
600/600 [==============================] – 0s 603us/sample – loss: 0.6220 – accuracy: 0.6000 – val_loss: 1.0500 – val_accuracy: 0.5667
Epoch 60/100
600/600 [==============================] – 0s 547us/sample – loss: 0.5962 – accuracy: 0.6300 – val_loss: 1.2672 – val_accuracy: 0.5600
Epoch 61/100
600/600 [==============================] – 0s 771us/sample – loss: 0.6043 – accuracy: 0.6233 – val_loss: 1.2960 – val_accuracy: 0.5667
Epoch 62/100
600/600 [==============================] – 0s 617us/sample – loss: 0.5974 – accuracy: 0.6500 – val_loss: 1.0839 – val_accuracy: 0.5400
Epoch 63/100
600/600 [==============================] – 0s 798us/sample – loss: 0.6043 – accuracy: 0.6117 – val_loss: 1.3776 – val_accuracy: 0.5267
Epoch 64/100
600/600 [==============================] – 0s 544us/sample – loss: 0.6079 – accuracy: 0.6450 – val_loss: 1.5320 – val_accuracy: 0.5133
Epoch 65/100
600/600 [==============================] – 0s 752us/sample – loss: 0.6042 – accuracy: 0.6250 – val_loss: 1.6441 – val_accuracy: 0.5267
Epoch 66/100
600/600 [==============================] – 0s 558us/sample – loss: 0.5982 – accuracy: 0.6450 – val_loss: 1.5440 – val_accuracy: 0.5667
Epoch 67/100
600/600 [==============================] – 0s 755us/sample – loss: 0.5854 – accuracy: 0.6500 – val_loss: 1.5415 – val_accuracy: 0.5333
Epoch 68/100
600/600 [==============================] – 0s 653us/sample – loss: 0.6081 – accuracy: 0.6217 – val_loss: 1.7702 – val_accuracy: 0.5200
Epoch 69/100
600/600 [==============================] – 0s 766us/sample – loss: 0.5890 – accuracy: 0.6183 – val_loss: 1.3550 – val_accuracy: 0.5800
Epoch 70/100
600/600 [==============================] – 0s 558us/sample – loss: 0.5864 – accuracy: 0.6650 – val_loss: 1.8608 – val_accuracy: 0.5400
Epoch 71/100
600/600 [==============================] – 0s 790us/sample – loss: 0.5814 – accuracy: 0.6200 – val_loss: 1.5492 – val_accuracy: 0.5400
Epoch 72/100
600/600 [==============================] – 0s 526us/sample – loss: 0.5632 – accuracy: 0.6767 – val_loss: 1.6557 – val_accuracy: 0.5600
Epoch 73/100
600/600 [==============================] – 0s 771us/sample – loss: 0.5579 – accuracy: 0.6900 – val_loss: 1.4125 – val_accuracy: 0.5800
Epoch 74/100
600/600 [==============================] – 0s 515us/sample – loss: 0.5614 – accuracy: 0.6633 – val_loss: 1.6543 – val_accuracy: 0.5533
Epoch 75/100
600/600 [==============================] – 0s 756us/sample – loss: 0.5613 – accuracy: 0.6483 – val_loss: 1.6981 – val_accuracy: 0.5333
Epoch 76/100
600/600 [==============================] – 0s 519us/sample – loss: 0.5368 – accuracy: 0.6900 – val_loss: 1.5792 – val_accuracy: 0.5867
Epoch 77/100
600/600 [==============================] – 0s 757us/sample – loss: 0.5421 – accuracy: 0.6750 – val_loss: 1.5782 – val_accuracy: 0.5467
Epoch 78/100
600/600 [==============================] – 0s 608us/sample – loss: 0.5412 – accuracy: 0.6783 – val_loss: 1.8515 – val_accuracy: 0.5867
Epoch 79/100
600/600 [==============================] – 0s 713us/sample – loss: 0.5488 – accuracy: 0.6667 – val_loss: 2.0190 – val_accuracy: 0.5667
Epoch 80/100
600/600 [==============================] – 0s 609us/sample – loss: 0.5361 – accuracy: 0.7017 – val_loss: 1.3904 – val_accuracy: 0.5667
Epoch 81/100
600/600 [==============================] – 0s 621us/sample – loss: 0.5370 – accuracy: 0.6633 – val_loss: 1.8229 – val_accuracy: 0.5733
Epoch 82/100
600/600 [==============================] – 0s 669us/sample – loss: 0.5423 – accuracy: 0.6583 – val_loss: 1.6505 – val_accuracy: 0.5667
Epoch 83/100
600/600 [==============================] – 0s 603us/sample – loss: 0.5336 – accuracy: 0.6667 – val_loss: 1.7338 – val_accuracy: 0.5733
Epoch 84/100
600/600 [==============================] – 0s 591us/sample – loss: 0.5327 – accuracy: 0.6683 – val_loss: 1.6506 – val_accuracy: 0.5667
Epoch 85/100
600/600 [==============================] – 0s 766us/sample – loss: 0.5170 – accuracy: 0.6900 – val_loss: 1.7775 – val_accuracy: 0.5533
Epoch 86/100
600/600 [==============================] – 0s 592us/sample – loss: 0.5237 – accuracy: 0.6767 – val_loss: 1.7860 – val_accuracy: 0.5867
Epoch 87/100
600/600 [==============================] – 0s 783us/sample – loss: 0.4950 – accuracy: 0.7217 – val_loss: 2.1661 – val_accuracy: 0.5600
Epoch 88/100
600/600 [==============================] – 0s 552us/sample – loss: 0.5184 – accuracy: 0.7100 – val_loss: 2.0094 – val_accuracy: 0.5733
Epoch 89/100
600/600 [==============================] – 0s 805us/sample – loss: 0.5333 – accuracy: 0.6967 – val_loss: 1.5577 – val_accuracy: 0.5733
Epoch 90/100
600/600 [==============================] – 0s 557us/sample – loss: 0.5161 – accuracy: 0.6967 – val_loss: 1.6601 – val_accuracy: 0.5867
Epoch 91/100
600/600 [==============================] – 0s 775us/sample – loss: 0.5409 – accuracy: 0.6933 – val_loss: 1.7147 – val_accuracy: 0.6000
Epoch 92/100
600/600 [==============================] – 0s 663us/sample – loss: 0.5588 – accuracy: 0.6900 – val_loss: 1.3529 – val_accuracy: 0.5867
Epoch 93/100
600/600 [==============================] – 0s 690us/sample – loss: 0.5191 – accuracy: 0.7067 – val_loss: 1.9707 – val_accuracy: 0.6000
Epoch 94/100
600/600 [==============================] – 0s 601us/sample – loss: 0.5129 – accuracy: 0.7100 – val_loss: 1.8133 – val_accuracy: 0.5600
Epoch 95/100
600/600 [==============================] – 0s 742us/sample – loss: 0.5091 – accuracy: 0.7050 – val_loss: 1.6854 – val_accuracy: 0.5867
Epoch 96/100
600/600 [==============================] – 0s 581us/sample – loss: 0.5119 – accuracy: 0.7217 – val_loss: 1.9700 – val_accuracy: 0.5600
Epoch 97/100
600/600 [==============================] – 0s 751us/sample – loss: 0.5174 – accuracy: 0.7300 – val_loss: 1.7087 – val_accuracy: 0.5733
Epoch 98/100
600/600 [==============================] – 0s 521us/sample – loss: 0.5045 – accuracy: 0.7167 – val_loss: 1.5030 – val_accuracy: 0.5333
Epoch 99/100
600/600 [==============================] – 0s 814us/sample – loss: 0.4879 – accuracy: 0.7233 – val_loss: 1.7001 – val_accuracy: 0.5400
Epoch 100/100
600/600 [==============================] – 0s 540us/sample – loss: 0.4970 – accuracy: 0.7133 – val_loss: 1.8021 – val_accuracy: 0.5400
Test Accuracy: 46.00000083446503
In [59]:
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(sentences_train)
X_test = tokenizer.texts_to_sequences(sentences_test)
vocab_size = len(tokenizer.word_index) + 1 # Adding 1 because of reserved 0 index

print(sentences_train[3])

This is the phone to get for 2005…. I just bought my S710a and all I can say is WOW!
In [60]:
print(X_train[3])

[7, 5, 1, 9, 8, 92, 11, 676, 2, 59, 101, 10, 677, 3, 32, 2, 71, 225, 5, 449]
In [61]:
for word in [‘the’, ‘all’, ‘happy’]:
print(‘{}: {}’.format(word, tokenizer.word_index[word]))

the: 1
all: 32
happy: 86
In [70]:
from keras.preprocessing.sequence import pad_sequences
maxlen = 100
# Pad variables with zeros
X_train = pad_sequences(X_train, padding=’post’, maxlen=maxlen)
X_test = pad_sequences(X_test, padding=’post’, maxlen=maxlen)
print(X_train[0, :])

[ 7 24 5 16 4 137 148 6 223 315 2 71 224 8 1 673 111 444
18 316 11 445 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
In [71]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
embedding_dim = 50
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=embedding_dim,
input_length=maxlen))
model.add(layers.GlobalMaxPool1D())
model.add(layers.Dense(10, activation=’relu’))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(optimizer=’adam’, loss=’binary_crossentropy’,
metrics=[‘accuracy’])
model.summary()

Model: “sequential_11”
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_8 (Embedding) (None, 100, 50) 78700
_________________________________________________________________
global_max_pooling1d_8 (Glob (None, 50) 0
_________________________________________________________________
dense_37 (Dense) (None, 10) 510
_________________________________________________________________
dense_38 (Dense) (None, 1) 11
=================================================================
Total params: 79,221
Trainable params: 79,221
Non-trainable params: 0
_________________________________________________________________
In [72]:
hist = model.fit(X_train, y_train,epochs=50,validation_split=0.2,batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Accuracy: “,accuracy)

Train on 600 samples, validate on 150 samples
Epoch 1/50
600/600 [==============================] – 2s 3ms/sample – loss: 0.6898 – accuracy: 0.6033 – val_loss: 0.6830 – val_accuracy: 0.7333
Epoch 2/50
600/600 [==============================] – 1s 989us/sample – loss: 0.6617 – accuracy: 0.8450 – val_loss: 0.6562 – val_accuracy: 0.7267
Epoch 3/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.5936 – accuracy: 0.9183 – val_loss: 0.6010 – val_accuracy: 0.7267
Epoch 4/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.4728 – accuracy: 0.9483 – val_loss: 0.5293 – val_accuracy: 0.8133
Epoch 5/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.3319 – accuracy: 0.9700 – val_loss: 0.4705 – val_accuracy: 0.8067
Epoch 6/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.2169 – accuracy: 0.9767 – val_loss: 0.4361 – val_accuracy: 0.8133
Epoch 7/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.1390 – accuracy: 0.9817 – val_loss: 0.4153 – val_accuracy: 0.8333
Epoch 8/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0896 – accuracy: 0.9917 – val_loss: 0.4049 – val_accuracy: 0.8200
Epoch 9/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0590 – accuracy: 0.9950 – val_loss: 0.3976 – val_accuracy: 0.8267
Epoch 10/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0400 – accuracy: 0.9983 – val_loss: 0.3945 – val_accuracy: 0.8267
Epoch 11/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0284 – accuracy: 0.9983 – val_loss: 0.3943 – val_accuracy: 0.8333
Epoch 12/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0207 – accuracy: 1.0000 – val_loss: 0.3954 – val_accuracy: 0.8333
Epoch 13/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0156 – accuracy: 1.0000 – val_loss: 0.3954 – val_accuracy: 0.8333
Epoch 14/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0119 – accuracy: 1.0000 – val_loss: 0.3958 – val_accuracy: 0.8333
Epoch 15/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0095 – accuracy: 1.0000 – val_loss: 0.3975 – val_accuracy: 0.8333
Epoch 16/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0077 – accuracy: 1.0000 – val_loss: 0.3983 – val_accuracy: 0.8333
Epoch 17/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0064 – accuracy: 1.0000 – val_loss: 0.4008 – val_accuracy: 0.8333
Epoch 18/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0053 – accuracy: 1.0000 – val_loss: 0.4033 – val_accuracy: 0.8333
Epoch 19/50
600/600 [==============================] – 1s 965us/sample – loss: 0.0045 – accuracy: 1.0000 – val_loss: 0.4044 – val_accuracy: 0.8333
Epoch 20/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0039 – accuracy: 1.0000 – val_loss: 0.4073 – val_accuracy: 0.8267
Epoch 21/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0033 – accuracy: 1.0000 – val_loss: 0.4092 – val_accuracy: 0.8267
Epoch 22/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0029 – accuracy: 1.0000 – val_loss: 0.4103 – val_accuracy: 0.8267
Epoch 23/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0026 – accuracy: 1.0000 – val_loss: 0.4126 – val_accuracy: 0.8267
Epoch 24/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0023 – accuracy: 1.0000 – val_loss: 0.4136 – val_accuracy: 0.8267
Epoch 25/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0020 – accuracy: 1.0000 – val_loss: 0.4152 – val_accuracy: 0.8333
Epoch 26/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.4171 – val_accuracy: 0.8400
Epoch 27/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.4187 – val_accuracy: 0.8400
Epoch 28/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.4208 – val_accuracy: 0.8400
Epoch 29/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.4223 – val_accuracy: 0.8400
Epoch 30/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0012 – accuracy: 1.0000 – val_loss: 0.4234 – val_accuracy: 0.8400
Epoch 31/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0011 – accuracy: 1.0000 – val_loss: 0.4262 – val_accuracy: 0.8400
Epoch 32/50
600/600 [==============================] – 1s 1ms/sample – loss: 0.0010 – accuracy: 1.0000 – val_loss: 0.4273 – val_accuracy: 0.8400
Epoch 33/50
600/600 [==============================] – 1s 1ms/sample – loss: 9.2396e-04 – accuracy: 1.0000 – val_loss: 0.4281 – val_accuracy: 0.8400
Epoch 34/50
600/600 [==============================] – 1s 1ms/sample – loss: 8.5011e-04 – accuracy: 1.0000 – val_loss: 0.4299 – val_accuracy: 0.8400
Epoch 35/50
600/600 [==============================] – 1s 994us/sample – loss: 7.8562e-04 – accuracy: 1.0000 – val_loss: 0.4315 – val_accuracy: 0.8333
Epoch 36/50
600/600 [==============================] – 1s 1ms/sample – loss: 7.2541e-04 – accuracy: 1.0000 – val_loss: 0.4329 – val_accuracy: 0.8333
Epoch 37/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.7276e-04 – accuracy: 1.0000 – val_loss: 0.4346 – val_accuracy: 0.8333
Epoch 38/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.2464e-04 – accuracy: 1.0000 – val_loss: 0.4359 – val_accuracy: 0.8467
Epoch 39/50
600/600 [==============================] – 1s 879us/sample – loss: 5.8186e-04 – accuracy: 1.0000 – val_loss: 0.4373 – val_accuracy: 0.8467
Epoch 40/50
600/600 [==============================] – 1s 1ms/sample – loss: 5.4191e-04 – accuracy: 1.0000 – val_loss: 0.4384 – val_accuracy: 0.8467
Epoch 41/50
600/600 [==============================] – 1s 1ms/sample – loss: 5.0596e-04 – accuracy: 1.0000 – val_loss: 0.4398 – val_accuracy: 0.8467
Epoch 42/50
600/600 [==============================] – 1s 984us/sample – loss: 4.7294e-04 – accuracy: 1.0000 – val_loss: 0.4413 – val_accuracy: 0.8467
Epoch 43/50
600/600 [==============================] – 1s 1ms/sample – loss: 4.4311e-04 – accuracy: 1.0000 – val_loss: 0.4428 – val_accuracy: 0.8467
Epoch 44/50
600/600 [==============================] – 1s 1ms/sample – loss: 4.1589e-04 – accuracy: 1.0000 – val_loss: 0.4437 – val_accuracy: 0.8467
Epoch 45/50
600/600 [==============================] – 1s 1ms/sample – loss: 3.8964e-04 – accuracy: 1.0000 – val_loss: 0.4448 – val_accuracy: 0.8467
Epoch 46/50
600/600 [==============================] – 1s 1ms/sample – loss: 3.6612e-04 – accuracy: 1.0000 – val_loss: 0.4464 – val_accuracy: 0.8467
Epoch 47/50
600/600 [==============================] – 1s 1ms/sample – loss: 3.4430e-04 – accuracy: 1.0000 – val_loss: 0.4474 – val_accuracy: 0.8467
Epoch 48/50
600/600 [==============================] – 1s 1ms/sample – loss: 3.2409e-04 – accuracy: 1.0000 – val_loss: 0.4490 – val_accuracy: 0.8467
Epoch 49/50
600/600 [==============================] – 1s 1ms/sample – loss: 3.0545e-04 – accuracy: 1.0000 – val_loss: 0.4511 – val_accuracy: 0.8467
Epoch 50/50
600/600 [==============================] – 1s 1ms/sample – loss: 2.8793e-04 – accuracy: 1.0000 – val_loss: 0.4515 – val_accuracy: 0.8467
Accuracy: 0.812
In [73]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()


In [0]:
#Q5- How do you interpret these results?
#Overfitting problem
#Q6- What is your recommendation to improve the accuracy? Implement your idea.
#Add more layers and dropout 0.2 for each layer.
In [81]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
embedding_dim = 50
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=embedding_dim,
input_length=maxlen))
model.add(layers.GlobalMaxPool1D())
model.add(layers.Dense(10, activation=’relu’))
model.add(layers.Dense(200, activation=’relu’))
model.add(Dropout(0.2))
model.add(layers.Dense(100, activation=’relu’))
model.add(Dropout(0.2))
model.add(layers.Dense(50, activation=’relu’))
model.add(Dropout(0.2))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(optimizer=’adam’, loss=’binary_crossentropy’,
metrics=[‘accuracy’])
model.summary()

Model: “sequential_15”
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_12 (Embedding) (None, 100, 50) 78700
_________________________________________________________________
global_max_pooling1d_12 (Glo (None, 50) 0
_________________________________________________________________
dense_50 (Dense) (None, 10) 510
_________________________________________________________________
dense_51 (Dense) (None, 200) 2200
_________________________________________________________________
dropout_31 (Dropout) (None, 200) 0
_________________________________________________________________
dense_52 (Dense) (None, 100) 20100
_________________________________________________________________
dropout_32 (Dropout) (None, 100) 0
_________________________________________________________________
dense_53 (Dense) (None, 50) 5050
_________________________________________________________________
dropout_33 (Dropout) (None, 50) 0
_________________________________________________________________
dense_54 (Dense) (None, 1) 51
=================================================================
Total params: 106,611
Trainable params: 106,611
Non-trainable params: 0
_________________________________________________________________
In [82]:
hist = model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Accuracy: “,accuracy)

Train on 600 samples, validate on 150 samples
Epoch 1/50
600/600 [==============================] – 2s 3ms/sample – loss: 0.6926 – accuracy: 0.4950 – val_loss: 0.6848 – val_accuracy: 0.6533
Epoch 2/50
600/600 [==============================] – 1s 928us/sample – loss: 0.5378 – accuracy: 0.8083 – val_loss: 0.4606 – val_accuracy: 0.7533
Epoch 3/50
600/600 [==============================] – 1s 846us/sample – loss: 0.1849 – accuracy: 0.9433 – val_loss: 0.5121 – val_accuracy: 0.8000
Epoch 4/50
600/600 [==============================] – 1s 861us/sample – loss: 0.0582 – accuracy: 0.9800 – val_loss: 0.5641 – val_accuracy: 0.8267
Epoch 5/50
600/600 [==============================] – 0s 831us/sample – loss: 0.0113 – accuracy: 0.9983 – val_loss: 0.5668 – val_accuracy: 0.8333
Epoch 6/50
600/600 [==============================] – 1s 865us/sample – loss: 0.0029 – accuracy: 1.0000 – val_loss: 0.6306 – val_accuracy: 0.8000
Epoch 7/50
600/600 [==============================] – 1s 871us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.6737 – val_accuracy: 0.8067
Epoch 8/50
600/600 [==============================] – 1s 902us/sample – loss: 7.3220e-04 – accuracy: 1.0000 – val_loss: 0.7014 – val_accuracy: 0.8200
Epoch 9/50
600/600 [==============================] – 1s 970us/sample – loss: 4.6758e-04 – accuracy: 1.0000 – val_loss: 0.7314 – val_accuracy: 0.8133
Epoch 10/50
600/600 [==============================] – 1s 1ms/sample – loss: 4.0291e-04 – accuracy: 1.0000 – val_loss: 0.7619 – val_accuracy: 0.8133
Epoch 11/50
600/600 [==============================] – 1s 836us/sample – loss: 2.3432e-04 – accuracy: 1.0000 – val_loss: 0.7751 – val_accuracy: 0.8200
Epoch 12/50
600/600 [==============================] – 1s 886us/sample – loss: 1.9244e-04 – accuracy: 1.0000 – val_loss: 0.8033 – val_accuracy: 0.8133
Epoch 13/50
600/600 [==============================] – 1s 906us/sample – loss: 1.6418e-04 – accuracy: 1.0000 – val_loss: 0.8088 – val_accuracy: 0.8200
Epoch 14/50
600/600 [==============================] – 1s 897us/sample – loss: 1.5856e-04 – accuracy: 1.0000 – val_loss: 0.8432 – val_accuracy: 0.8133
Epoch 15/50
600/600 [==============================] – 1s 891us/sample – loss: 2.0500e-04 – accuracy: 1.0000 – val_loss: 0.8376 – val_accuracy: 0.8200
Epoch 16/50
600/600 [==============================] – 1s 878us/sample – loss: 1.4520e-04 – accuracy: 1.0000 – val_loss: 0.8530 – val_accuracy: 0.8200
Epoch 17/50
600/600 [==============================] – 1s 840us/sample – loss: 7.0704e-05 – accuracy: 1.0000 – val_loss: 0.8557 – val_accuracy: 0.8267
Epoch 18/50
600/600 [==============================] – 1s 903us/sample – loss: 1.6211e-04 – accuracy: 1.0000 – val_loss: 0.8625 – val_accuracy: 0.8200
Epoch 19/50
600/600 [==============================] – 1s 953us/sample – loss: 1.0238e-04 – accuracy: 1.0000 – val_loss: 0.9177 – val_accuracy: 0.8200
Epoch 20/50
600/600 [==============================] – 1s 919us/sample – loss: 5.6170e-05 – accuracy: 1.0000 – val_loss: 0.9471 – val_accuracy: 0.8133
Epoch 21/50
600/600 [==============================] – 1s 884us/sample – loss: 4.6952e-05 – accuracy: 1.0000 – val_loss: 0.9638 – val_accuracy: 0.8133
Epoch 22/50
600/600 [==============================] – 1s 985us/sample – loss: 4.4554e-05 – accuracy: 1.0000 – val_loss: 0.9715 – val_accuracy: 0.8200
Epoch 23/50
600/600 [==============================] – 1s 957us/sample – loss: 5.1662e-05 – accuracy: 1.0000 – val_loss: 0.9743 – val_accuracy: 0.8267
Epoch 24/50
600/600 [==============================] – 1s 850us/sample – loss: 1.9702e-05 – accuracy: 1.0000 – val_loss: 0.9931 – val_accuracy: 0.8200
Epoch 25/50
600/600 [==============================] – 1s 851us/sample – loss: 2.0589e-05 – accuracy: 1.0000 – val_loss: 1.0076 – val_accuracy: 0.8200
Epoch 26/50
600/600 [==============================] – 1s 890us/sample – loss: 1.9530e-05 – accuracy: 1.0000 – val_loss: 1.0207 – val_accuracy: 0.8133
Epoch 27/50
600/600 [==============================] – 1s 974us/sample – loss: 1.7708e-05 – accuracy: 1.0000 – val_loss: 1.0324 – val_accuracy: 0.8200
Epoch 28/50
600/600 [==============================] – 1s 932us/sample – loss: 2.3582e-05 – accuracy: 1.0000 – val_loss: 1.0263 – val_accuracy: 0.8267
Epoch 29/50
600/600 [==============================] – 1s 1ms/sample – loss: 2.3115e-05 – accuracy: 1.0000 – val_loss: 1.0540 – val_accuracy: 0.8200
Epoch 30/50
600/600 [==============================] – 1s 894us/sample – loss: 2.3032e-05 – accuracy: 1.0000 – val_loss: 1.0677 – val_accuracy: 0.8267
Epoch 31/50
600/600 [==============================] – 1s 931us/sample – loss: 1.0808e-05 – accuracy: 1.0000 – val_loss: 1.0756 – val_accuracy: 0.8267
Epoch 32/50
600/600 [==============================] – 1s 956us/sample – loss: 1.5850e-05 – accuracy: 1.0000 – val_loss: 1.0853 – val_accuracy: 0.8267
Epoch 33/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.8045e-06 – accuracy: 1.0000 – val_loss: 1.0924 – val_accuracy: 0.8267
Epoch 34/50
600/600 [==============================] – 1s 1ms/sample – loss: 7.0125e-06 – accuracy: 1.0000 – val_loss: 1.1027 – val_accuracy: 0.8267
Epoch 35/50
600/600 [==============================] – 1s 868us/sample – loss: 8.6901e-06 – accuracy: 1.0000 – val_loss: 1.1081 – val_accuracy: 0.8267
Epoch 36/50
600/600 [==============================] – 1s 1ms/sample – loss: 1.6190e-05 – accuracy: 1.0000 – val_loss: 1.1256 – val_accuracy: 0.8200
Epoch 37/50
600/600 [==============================] – 1s 895us/sample – loss: 9.0051e-06 – accuracy: 1.0000 – val_loss: 1.1856 – val_accuracy: 0.8067
Epoch 38/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.7574e-06 – accuracy: 1.0000 – val_loss: 1.1794 – val_accuracy: 0.8200
Epoch 39/50
600/600 [==============================] – 1s 933us/sample – loss: 6.2754e-06 – accuracy: 1.0000 – val_loss: 1.1881 – val_accuracy: 0.8200
Epoch 40/50
600/600 [==============================] – 1s 931us/sample – loss: 9.9441e-06 – accuracy: 1.0000 – val_loss: 1.1947 – val_accuracy: 0.8200
Epoch 41/50
600/600 [==============================] – 1s 923us/sample – loss: 4.5435e-05 – accuracy: 1.0000 – val_loss: 1.3385 – val_accuracy: 0.7933
Epoch 42/50
600/600 [==============================] – 1s 888us/sample – loss: 1.3820e-05 – accuracy: 1.0000 – val_loss: 1.3131 – val_accuracy: 0.7933
Epoch 43/50
600/600 [==============================] – 1s 950us/sample – loss: 1.2045e-05 – accuracy: 1.0000 – val_loss: 1.2286 – val_accuracy: 0.8067
Epoch 44/50
600/600 [==============================] – 1s 971us/sample – loss: 2.3739e-05 – accuracy: 1.0000 – val_loss: 1.1944 – val_accuracy: 0.8267
Epoch 45/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.1295e-06 – accuracy: 1.0000 – val_loss: 1.1958 – val_accuracy: 0.8333
Epoch 46/50
600/600 [==============================] – 1s 1ms/sample – loss: 6.8983e-06 – accuracy: 1.0000 – val_loss: 1.2092 – val_accuracy: 0.8267
Epoch 47/50
600/600 [==============================] – 1s 921us/sample – loss: 5.0695e-06 – accuracy: 1.0000 – val_loss: 1.2336 – val_accuracy: 0.8267
Epoch 48/50
600/600 [==============================] – 1s 1ms/sample – loss: 4.2116e-06 – accuracy: 1.0000 – val_loss: 1.2428 – val_accuracy: 0.8267
Epoch 49/50
600/600 [==============================] – 1s 941us/sample – loss: 4.7360e-06 – accuracy: 1.0000 – val_loss: 1.2501 – val_accuracy: 0.8267
Epoch 50/50
600/600 [==============================] – 1s 1ms/sample – loss: 9.0512e-06 – accuracy: 1.0000 – val_loss: 1.2661 – val_accuracy: 0.8267
Accuracy: 0.768
In [58]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()


In [0]:

In [0]: