CS计算机代考程序代写 python GPU compiler cache Keras cuda In [0]:

In [0]:
from tensorflow.python.client import device_lib
print(“Show System RAM Memory:\n\n”)
!cat /proc/meminfo | egrep “MemTotal*”
print(“\n\nShow Devices:\n\n”+str(device_lib.list_local_devices()))

Show System RAM Memory:

MemTotal: 13335188 kB

Show Devices:

[name: “/device:CPU:0”
device_type: “CPU”
memory_limit: 268435456
locality {
}
incarnation: 8700158686858789265
, name: “/device:XLA_CPU:0”
device_type: “XLA_CPU”
memory_limit: 17179869184
locality {
}
incarnation: 8342104309289264332
physical_device_desc: “device: XLA_CPU device”
, name: “/device:XLA_GPU:0”
device_type: “XLA_GPU”
memory_limit: 17179869184
locality {
}
incarnation: 16107158158178871312
physical_device_desc: “device: XLA_GPU device”
, name: “/device:GPU:0”
device_type: “GPU”
memory_limit: 11330115994
locality {
bus_id: 1
links {
}
}
incarnation: 5997946455289902644
physical_device_desc: “device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7”
]
In [0]:
!pip install q tensorflow-gpu==2.0.0

Requirement already satisfied: q in /usr/local/lib/python3.6/dist-packages (2.6)
Requirement already satisfied: tensorflow-gpu==2.0.0 in /usr/local/lib/python3.6/dist-packages (2.0.0)
Requirement already satisfied: tensorboard<2.1.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (2.0.2)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.27.1)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.34.2)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.8.1)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.1.8)
Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.2.2)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.17.5)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (3.1.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (3.10.0)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (0.9.0)
Requirement already satisfied: tensorflow-estimator<2.1.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (2.0.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.1.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.11.2)
Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.0.8)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0) (1.12.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.2.1)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2.21.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.7.2)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.0.0)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (45.2.0)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0) (2.8.0)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (2019.11.28)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.24.3)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.0.4)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (1.3.0)
Requirement already satisfied: cachetools<3.2,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.1.1)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (4.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.2.8)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (3.1.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<4.1,>=3.1.4->google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0) (0.4.8)
In [0]:
import tensorflow as tf
tf.__version__
Out[0]:
‘2.0.0’
In [0]:
from google.colab import drive
drive.mount(‘/content/gdrive’)

Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount(“/content/gdrive”, force_remount=True).
In [0]:
import time
start = time.time()
!python3 “/content/gdrive/My Drive/mnist_cnn.py”
end = time.time()
print(end – start)

Using TensorFlow backend.
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
2020-03-13 04:32:10.848229: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-03-13 04:32:10.866204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.866967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
2020-03-13 04:32:10.867302: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.868532: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-13 04:32:10.869653: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-13 04:32:10.869979: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-13 04:32:10.871499: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-13 04:32:10.872513: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-13 04:32:10.875751: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-13 04:32:10.875871: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.876701: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.877436: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-13 04:32:10.877794: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-03-13 04:32:10.882560: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2299995000 Hz
2020-03-13 04:32:10.882973: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x16d2bc0 executing computations on platform Host. Devices:
2020-03-13 04:32:10.883013: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2020-03-13 04:32:10.936085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.936926: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x16d2d80 executing computations on platform CUDA. Devices:
2020-03-13 04:32:10.936964: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2020-03-13 04:32:10.937204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.937905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
2020-03-13 04:32:10.937976: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.938003: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-03-13 04:32:10.938068: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-03-13 04:32:10.938110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-03-13 04:32:10.938133: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-03-13 04:32:10.938157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-03-13 04:32:10.938181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-03-13 04:32:10.938266: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.939014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.939743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-13 04:32:10.939808: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-03-13 04:32:10.941234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-13 04:32:10.941270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2020-03-13 04:32:10.941284: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2020-03-13 04:32:10.941422: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.942256: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-13 04:32:10.943004: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2020-03-13 04:32:10.943082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10805 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
Train on 48000 samples, validate on 12000 samples
Epoch 1/12
2020-03-13 04:32:12.642382: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
48000/48000 [==============================] – 8s 171us/sample – loss: 0.2753 – accuracy: 0.9152 – val_loss: 0.0699 – val_accuracy: 0.9800
Epoch 2/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0976 – accuracy: 0.9710 – val_loss: 0.0512 – val_accuracy: 0.9852
Epoch 3/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0720 – accuracy: 0.9772 – val_loss: 0.0440 – val_accuracy: 0.9879
Epoch 4/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0568 – accuracy: 0.9827 – val_loss: 0.0406 – val_accuracy: 0.9887
Epoch 5/12
48000/48000 [==============================] – 6s 131us/sample – loss: 0.0486 – accuracy: 0.9851 – val_loss: 0.0407 – val_accuracy: 0.9893
Epoch 6/12
48000/48000 [==============================] – 6s 132us/sample – loss: 0.0408 – accuracy: 0.9868 – val_loss: 0.0361 – val_accuracy: 0.9903
Epoch 7/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0356 – accuracy: 0.9889 – val_loss: 0.0368 – val_accuracy: 0.9902
Epoch 8/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0337 – accuracy: 0.9891 – val_loss: 0.0397 – val_accuracy: 0.9899
Epoch 9/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0315 – accuracy: 0.9898 – val_loss: 0.0368 – val_accuracy: 0.9902
Epoch 10/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0271 – accuracy: 0.9911 – val_loss: 0.0414 – val_accuracy: 0.9905
Epoch 11/12
48000/48000 [==============================] – 6s 128us/sample – loss: 0.0242 – accuracy: 0.9923 – val_loss: 0.0360 – val_accuracy: 0.9910
Epoch 12/12
48000/48000 [==============================] – 6s 129us/sample – loss: 0.0227 – accuracy: 0.9923 – val_loss: 0.0347 – val_accuracy: 0.9912
Test loss: 0.02725354956639948
Test accuracy: 0.9934
82.71783566474915
In [0]:
#Q1
#Run time on my computer:31.655904531478882 seconds
#Run time on GPU on Colab:82.71783566474915 seconds
#Interesting fact…I’m sure I did select runtime type “GPU”
In [0]:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
#Conv2D:build convolutional layer
#Maxpooling2D:build maxpooling layer
#Flatten:create fully connected layer
In [0]:
batch_size = 128
num_classes = 10
epochs = 12
In [0]:
# input image dimensions
img_rows, img_cols = 28, 28
In [0]:
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
In [0]:
if K.image_data_format() == ‘channels_first’:
x_train= x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test= x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
#Conv2D shows shape of the inputs (vector shape we put Conv1D, matrix shape we put Conv2D, 3Dimage we put Conv3D or Maxpooling3D)
#We fit matrix into convolutional layer in CNN
#in addtion to the dimension of the matrix (28*28), we need to add an additional layer: number of channels which shows the image is black&white or colorful.
#1 means B&W, 3 means RGB. Ex:(60000,28,28,1)
In [0]:
x_train = x_train.astype(‘float32’)
x_test = x_test.astype(‘float32’) #fit matrix into convolutional layer
x_train /= 255
x_test /= 255
print(‘x_train shape:’, x_train.shape)
print(x_train.shape[0], ‘train samples’)
print(x_test.shape[0], ‘test samples’)

x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
In [0]:
import keras

Using TensorFlow backend.
In [0]:
import tensorflow as tf
In [0]:
# convert class vectors to binary class matrices
y_train= keras.utils.to_categorical(y_train, num_classes)
y_test= keras.utils.to_categorical(y_test, num_classes)
In [0]:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation=’relu’,input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation=’relu’))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation=’relu’))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation=’softmax’))
In [0]:
#Q2- Explain the way that this model is designed. Talk about all the layers and their functionality.
#1.model.add(Conv2D(32, kernel_size=(3, 3),activation=’relu’,input_shape=input_shape))
#means add my first convolutional layer, with 32 filters, filter size is 3 by 3. (normal standard number for convolutional layer)
#first hiddenlayer has to define how many inputs are there that is the “input_shape”.
#2.model.add(Conv2D(64, (3, 3), activation=’relu’))
#then we add another convolutional layer with 64 filters
#3.model.add(MaxPooling2D(pool_size=(2, 2)))
#then add a Maxpooling layer with maxpooling size 2 by 2. (should be smaller than convolutional layer)
#4.model.add(Dropout(0.25))
#0.25 dropout for overfitting
#5.model.add(Flatten())
#fully connected layer (Flatten layer)
#from now is similar to nn
#add a layer with 128 nodes, activation is relu, dropout 0.5
#output layer num_classes, activation softmax
In [0]:
model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.2) #20% of training data is used for validation
score = model.evaluate(x_test, y_test, verbose=0)
print(‘Test loss:’, score[0])
print(‘Test accuracy:’, score[1])

Train on 48000 samples, validate on 12000 samples
Epoch 1/12
48000/48000 [==============================] – 11s 225us/sample – loss: 0.2695 – accuracy: 0.9173 – val_loss: 0.0634 – val_accuracy: 0.9813
Epoch 2/12
48000/48000 [==============================] – 8s 172us/sample – loss: 0.0963 – accuracy: 0.9699 – val_loss: 0.0498 – val_accuracy: 0.9856
Epoch 3/12
48000/48000 [==============================] – 8s 167us/sample – loss: 0.0712 – accuracy: 0.9778 – val_loss: 0.0510 – val_accuracy: 0.9856
Epoch 4/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0589 – accuracy: 0.9823 – val_loss: 0.0401 – val_accuracy: 0.9887
Epoch 5/12
48000/48000 [==============================] – 8s 166us/sample – loss: 0.0501 – accuracy: 0.9851 – val_loss: 0.0357 – val_accuracy: 0.9901
Epoch 6/12
48000/48000 [==============================] – 8s 165us/sample – loss: 0.0434 – accuracy: 0.9861 – val_loss: 0.0403 – val_accuracy: 0.9880
Epoch 7/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0368 – accuracy: 0.9880 – val_loss: 0.0355 – val_accuracy: 0.9902
Epoch 8/12
48000/48000 [==============================] – 8s 166us/sample – loss: 0.0346 – accuracy: 0.9890 – val_loss: 0.0350 – val_accuracy: 0.9911
Epoch 9/12
48000/48000 [==============================] – 8s 164us/sample – loss: 0.0305 – accuracy: 0.9901 – val_loss: 0.0358 – val_accuracy: 0.9910
Epoch 10/12
48000/48000 [==============================] – 8s 168us/sample – loss: 0.0282 – accuracy: 0.9910 – val_loss: 0.0354 – val_accuracy: 0.9904
Epoch 11/12
48000/48000 [==============================] – 8s 165us/sample – loss: 0.0258 – accuracy: 0.9916 – val_loss: 0.0384 – val_accuracy: 0.9900
Epoch 12/12
48000/48000 [==============================] – 8s 174us/sample – loss: 0.0242 – accuracy: 0.9921 – val_loss: 0.0368 – val_accuracy: 0.9906
Test loss: 0.029268154423910148
Test accuracy: 0.9921
In [0]:
model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘accuracy’])
hist = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.2)
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(‘Test loss:’, score[0])
print(‘Test accuracy:’, score[1])

Train on 48000 samples, validate on 12000 samples
Epoch 1/12
48000/48000 [==============================] – 7s 156us/sample – loss: 0.0098 – accuracy: 0.9966 – val_loss: 0.0458 – val_accuracy: 0.9917
Epoch 2/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0084 – accuracy: 0.9972 – val_loss: 0.0537 – val_accuracy: 0.9903
Epoch 3/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0078 – accuracy: 0.9976 – val_loss: 0.0543 – val_accuracy: 0.9919
Epoch 4/12
48000/48000 [==============================] – 7s 142us/sample – loss: 0.0098 – accuracy: 0.9968 – val_loss: 0.0470 – val_accuracy: 0.9930
Epoch 5/12
48000/48000 [==============================] – 7s 138us/sample – loss: 0.0092 – accuracy: 0.9966 – val_loss: 0.0514 – val_accuracy: 0.9916
Epoch 6/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0062 – accuracy: 0.9980 – val_loss: 0.0488 – val_accuracy: 0.9917
Epoch 7/12
48000/48000 [==============================] – 7s 138us/sample – loss: 0.0073 – accuracy: 0.9977 – val_loss: 0.0495 – val_accuracy: 0.9918
Epoch 8/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0082 – accuracy: 0.9974 – val_loss: 0.0537 – val_accuracy: 0.9912
Epoch 9/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0085 – accuracy: 0.9975 – val_loss: 0.0504 – val_accuracy: 0.9925
Epoch 10/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0084 – accuracy: 0.9973 – val_loss: 0.0505 – val_accuracy: 0.9907
Epoch 11/12
48000/48000 [==============================] – 7s 139us/sample – loss: 0.0086 – accuracy: 0.9971 – val_loss: 0.0410 – val_accuracy: 0.9924
Epoch 12/12
48000/48000 [==============================] – 7s 144us/sample – loss: 0.0075 – accuracy: 0.9974 – val_loss: 0.0458 – val_accuracy: 0.9913
Test loss: 0.029268154423910148
Test accuracy: 0.9921
In [0]:
#Q3- Design the learning curve and talk about what you see.
In [0]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()


In [0]:
import pandas as pd
In [0]:
df = pd.read_csv(‘/content/gdrive/My Drive//amazon_cells_labelled.txt’, names=[‘sentence’, ‘label’], sep=’\t’)
In [25]:
print(df.iloc[0]) #print first review

sentence So there is no way for me to plug it in here i…
label 0
Name: 0, dtype: object
In [0]:
sentences = [‘John likes ice cream’, ‘John hates chocolate.’]
In [0]:
from sklearn.feature_extraction.text import CountVectorizer
In [28]:
vectorizer = CountVectorizer(min_df=0, lowercase=False)
vectorizer.fit(sentences)
vectorizer.vocabulary_
Out[28]:
{‘John’: 0, ‘chocolate’: 1, ‘cream’: 2, ‘hates’: 3, ‘ice’: 4, ‘likes’: 5}
In [29]:
vectorizer.transform(sentences).toarray()
Out[29]:
array([[1, 0, 1, 0, 1, 1],
[1, 1, 0, 1, 0, 0]])
In [0]:
from sklearn.model_selection import train_test_split
sentences = df[‘sentence’].values
y = df[‘label’].values
In [0]:
sentences_train, sentences_test, y_train, y_test = train_test_split(sentences, y, test_size=0.25, random_state=1000)
In [0]:
from sklearn.feature_extraction.text import CountVectorizer
In [33]:
vectorizer = CountVectorizer()
vectorizer.fit(sentences_train)
Out[33]:
CountVectorizer(analyzer=’word’, binary=False, decode_error=’strict’,
dtype=, encoding=’utf-8′, input=’content’,
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b’,
tokenizer=None, vocabulary=None)
In [34]:
X_train = vectorizer.transform(sentences_train)
X_test = vectorizer.transform(sentences_test)
X_train
Out[34]:
<750x1546 sparse matrix of type '
with 6817 stored elements in Compressed Sparse Row format>
In [35]:
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
score = classifier.score(X_test, y_test)
print(“Accuracy:”, score)

Accuracy: 0.796
In [36]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
input_dim = X_train.shape[1] # Number of features
model = Sequential()
model.add(layers.Dense(10, input_dim=input_dim, activation=’relu’))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(loss=’binary_crossentropy’, optimizer=’adam’,
metrics=[‘accuracy’])
hist = model.fit(X_train, y_train, epochs=100, validation_split=0.2 ,
batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Test Accuracy: “,accuracy*100)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
Train on 600 samples, validate on 150 samples
Epoch 1/100
600/600 [==============================] – 1s 910us/sample – loss: 0.6844 – accuracy: 0.6000 – val_loss: 0.6674 – val_accuracy: 0.7000
Epoch 2/100
600/600 [==============================] – 0s 524us/sample – loss: 0.6326 – accuracy: 0.7967 – val_loss: 0.6305 – val_accuracy: 0.7600
Epoch 3/100
600/600 [==============================] – 0s 492us/sample – loss: 0.5541 – accuracy: 0.8833 – val_loss: 0.5817 – val_accuracy: 0.8200
Epoch 4/100
600/600 [==============================] – 0s 759us/sample – loss: 0.4609 – accuracy: 0.9383 – val_loss: 0.5332 – val_accuracy: 0.8533
Epoch 5/100
600/600 [==============================] – 0s 492us/sample – loss: 0.3724 – accuracy: 0.9650 – val_loss: 0.4928 – val_accuracy: 0.8400
Epoch 6/100
600/600 [==============================] – 0s 557us/sample – loss: 0.2984 – accuracy: 0.9783 – val_loss: 0.4632 – val_accuracy: 0.8267
Epoch 7/100
600/600 [==============================] – 0s 575us/sample – loss: 0.2398 – accuracy: 0.9867 – val_loss: 0.4379 – val_accuracy: 0.8333
Epoch 8/100
600/600 [==============================] – 0s 512us/sample – loss: 0.1949 – accuracy: 0.9900 – val_loss: 0.4222 – val_accuracy: 0.8333
Epoch 9/100
600/600 [==============================] – 0s 675us/sample – loss: 0.1604 – accuracy: 0.9967 – val_loss: 0.4082 – val_accuracy: 0.8333
Epoch 10/100
600/600 [==============================] – 0s 452us/sample – loss: 0.1322 – accuracy: 0.9967 – val_loss: 0.4020 – val_accuracy: 0.8400
Epoch 11/100
600/600 [==============================] – 0s 456us/sample – loss: 0.1108 – accuracy: 0.9967 – val_loss: 0.3941 – val_accuracy: 0.8400
Epoch 12/100
600/600 [==============================] – 0s 678us/sample – loss: 0.0940 – accuracy: 0.9967 – val_loss: 0.3912 – val_accuracy: 0.8400
Epoch 13/100
600/600 [==============================] – 0s 458us/sample – loss: 0.0801 – accuracy: 0.9983 – val_loss: 0.3888 – val_accuracy: 0.8267
Epoch 14/100
600/600 [==============================] – 0s 547us/sample – loss: 0.0690 – accuracy: 0.9983 – val_loss: 0.3874 – val_accuracy: 0.8333
Epoch 15/100
600/600 [==============================] – 0s 636us/sample – loss: 0.0597 – accuracy: 0.9983 – val_loss: 0.3865 – val_accuracy: 0.8333
Epoch 16/100
600/600 [==============================] – 0s 469us/sample – loss: 0.0522 – accuracy: 1.0000 – val_loss: 0.3865 – val_accuracy: 0.8333
Epoch 17/100
600/600 [==============================] – 0s 418us/sample – loss: 0.0459 – accuracy: 1.0000 – val_loss: 0.3885 – val_accuracy: 0.8333
Epoch 18/100
600/600 [==============================] – 0s 521us/sample – loss: 0.0407 – accuracy: 1.0000 – val_loss: 0.3884 – val_accuracy: 0.8333
Epoch 19/100
600/600 [==============================] – 0s 632us/sample – loss: 0.0361 – accuracy: 1.0000 – val_loss: 0.3908 – val_accuracy: 0.8333
Epoch 20/100
600/600 [==============================] – 0s 433us/sample – loss: 0.0323 – accuracy: 1.0000 – val_loss: 0.3930 – val_accuracy: 0.8333
Epoch 21/100
600/600 [==============================] – 0s 678us/sample – loss: 0.0290 – accuracy: 1.0000 – val_loss: 0.3952 – val_accuracy: 0.8267
Epoch 22/100
600/600 [==============================] – 0s 476us/sample – loss: 0.0261 – accuracy: 1.0000 – val_loss: 0.3963 – val_accuracy: 0.8333
Epoch 23/100
600/600 [==============================] – 0s 448us/sample – loss: 0.0237 – accuracy: 1.0000 – val_loss: 0.4003 – val_accuracy: 0.8267
Epoch 24/100
600/600 [==============================] – 0s 788us/sample – loss: 0.0215 – accuracy: 1.0000 – val_loss: 0.4031 – val_accuracy: 0.8267
Epoch 25/100
600/600 [==============================] – 0s 471us/sample – loss: 0.0197 – accuracy: 1.0000 – val_loss: 0.4050 – val_accuracy: 0.8267
Epoch 26/100
600/600 [==============================] – 0s 638us/sample – loss: 0.0180 – accuracy: 1.0000 – val_loss: 0.4073 – val_accuracy: 0.8267
Epoch 27/100
600/600 [==============================] – 0s 592us/sample – loss: 0.0166 – accuracy: 1.0000 – val_loss: 0.4086 – val_accuracy: 0.8267
Epoch 28/100
600/600 [==============================] – 0s 426us/sample – loss: 0.0152 – accuracy: 1.0000 – val_loss: 0.4131 – val_accuracy: 0.8200
Epoch 29/100
600/600 [==============================] – 0s 749us/sample – loss: 0.0141 – accuracy: 1.0000 – val_loss: 0.4150 – val_accuracy: 0.8200
Epoch 30/100
600/600 [==============================] – 0s 498us/sample – loss: 0.0131 – accuracy: 1.0000 – val_loss: 0.4176 – val_accuracy: 0.8200
Epoch 31/100
600/600 [==============================] – 0s 708us/sample – loss: 0.0121 – accuracy: 1.0000 – val_loss: 0.4207 – val_accuracy: 0.8200
Epoch 32/100
600/600 [==============================] – 0s 451us/sample – loss: 0.0113 – accuracy: 1.0000 – val_loss: 0.4224 – val_accuracy: 0.8200
Epoch 33/100
600/600 [==============================] – 0s 480us/sample – loss: 0.0105 – accuracy: 1.0000 – val_loss: 0.4251 – val_accuracy: 0.8267
Epoch 34/100
600/600 [==============================] – 0s 672us/sample – loss: 0.0099 – accuracy: 1.0000 – val_loss: 0.4285 – val_accuracy: 0.8267
Epoch 35/100
600/600 [==============================] – 0s 449us/sample – loss: 0.0092 – accuracy: 1.0000 – val_loss: 0.4314 – val_accuracy: 0.8267
Epoch 36/100
600/600 [==============================] – 0s 625us/sample – loss: 0.0087 – accuracy: 1.0000 – val_loss: 0.4345 – val_accuracy: 0.8267
Epoch 37/100
600/600 [==============================] – 0s 549us/sample – loss: 0.0082 – accuracy: 1.0000 – val_loss: 0.4370 – val_accuracy: 0.8267
Epoch 38/100
600/600 [==============================] – 0s 462us/sample – loss: 0.0077 – accuracy: 1.0000 – val_loss: 0.4394 – val_accuracy: 0.8267
Epoch 39/100
600/600 [==============================] – 0s 688us/sample – loss: 0.0073 – accuracy: 1.0000 – val_loss: 0.4419 – val_accuracy: 0.8267
Epoch 40/100
600/600 [==============================] – 0s 477us/sample – loss: 0.0069 – accuracy: 1.0000 – val_loss: 0.4442 – val_accuracy: 0.8267
Epoch 41/100
600/600 [==============================] – 0s 545us/sample – loss: 0.0065 – accuracy: 1.0000 – val_loss: 0.4466 – val_accuracy: 0.8267
Epoch 42/100
600/600 [==============================] – 0s 566us/sample – loss: 0.0062 – accuracy: 1.0000 – val_loss: 0.4494 – val_accuracy: 0.8267
Epoch 43/100
600/600 [==============================] – 0s 521us/sample – loss: 0.0059 – accuracy: 1.0000 – val_loss: 0.4523 – val_accuracy: 0.8267
Epoch 44/100
600/600 [==============================] – 0s 661us/sample – loss: 0.0056 – accuracy: 1.0000 – val_loss: 0.4548 – val_accuracy: 0.8267
Epoch 45/100
600/600 [==============================] – 0s 483us/sample – loss: 0.0053 – accuracy: 1.0000 – val_loss: 0.4570 – val_accuracy: 0.8267
Epoch 46/100
600/600 [==============================] – 0s 495us/sample – loss: 0.0051 – accuracy: 1.0000 – val_loss: 0.4598 – val_accuracy: 0.8333
Epoch 47/100
600/600 [==============================] – 0s 443us/sample – loss: 0.0049 – accuracy: 1.0000 – val_loss: 0.4615 – val_accuracy: 0.8267
Epoch 48/100
600/600 [==============================] – 0s 762us/sample – loss: 0.0046 – accuracy: 1.0000 – val_loss: 0.4645 – val_accuracy: 0.8267
Epoch 49/100
600/600 [==============================] – 0s 471us/sample – loss: 0.0044 – accuracy: 1.0000 – val_loss: 0.4671 – val_accuracy: 0.8333
Epoch 50/100
600/600 [==============================] – 0s 470us/sample – loss: 0.0043 – accuracy: 1.0000 – val_loss: 0.4691 – val_accuracy: 0.8333
Epoch 51/100
600/600 [==============================] – 0s 697us/sample – loss: 0.0041 – accuracy: 1.0000 – val_loss: 0.4711 – val_accuracy: 0.8333
Epoch 52/100
600/600 [==============================] – 0s 481us/sample – loss: 0.0039 – accuracy: 1.0000 – val_loss: 0.4728 – val_accuracy: 0.8267
Epoch 53/100
600/600 [==============================] – 0s 642us/sample – loss: 0.0038 – accuracy: 1.0000 – val_loss: 0.4768 – val_accuracy: 0.8333
Epoch 54/100
600/600 [==============================] – 0s 477us/sample – loss: 0.0036 – accuracy: 1.0000 – val_loss: 0.4788 – val_accuracy: 0.8333
Epoch 55/100
600/600 [==============================] – 0s 460us/sample – loss: 0.0035 – accuracy: 1.0000 – val_loss: 0.4807 – val_accuracy: 0.8333
Epoch 56/100
600/600 [==============================] – 0s 629us/sample – loss: 0.0034 – accuracy: 1.0000 – val_loss: 0.4829 – val_accuracy: 0.8267
Epoch 57/100
600/600 [==============================] – 0s 492us/sample – loss: 0.0033 – accuracy: 1.0000 – val_loss: 0.4857 – val_accuracy: 0.8333
Epoch 58/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0031 – accuracy: 1.0000 – val_loss: 0.4873 – val_accuracy: 0.8333
Epoch 59/100
600/600 [==============================] – 0s 664us/sample – loss: 0.0030 – accuracy: 1.0000 – val_loss: 0.4895 – val_accuracy: 0.8333
Epoch 60/100
600/600 [==============================] – 0s 467us/sample – loss: 0.0029 – accuracy: 1.0000 – val_loss: 0.4922 – val_accuracy: 0.8333
Epoch 61/100
600/600 [==============================] – 0s 464us/sample – loss: 0.0028 – accuracy: 1.0000 – val_loss: 0.4937 – val_accuracy: 0.8333
Epoch 62/100
600/600 [==============================] – 0s 666us/sample – loss: 0.0028 – accuracy: 1.0000 – val_loss: 0.4970 – val_accuracy: 0.8333
Epoch 63/100
600/600 [==============================] – 0s 481us/sample – loss: 0.0027 – accuracy: 1.0000 – val_loss: 0.4983 – val_accuracy: 0.8333
Epoch 64/100
600/600 [==============================] – 0s 652us/sample – loss: 0.0026 – accuracy: 1.0000 – val_loss: 0.5007 – val_accuracy: 0.8333
Epoch 65/100
600/600 [==============================] – 0s 488us/sample – loss: 0.0025 – accuracy: 1.0000 – val_loss: 0.5029 – val_accuracy: 0.8333
Epoch 66/100
600/600 [==============================] – 0s 512us/sample – loss: 0.0025 – accuracy: 1.0000 – val_loss: 0.5050 – val_accuracy: 0.8333
Epoch 67/100
600/600 [==============================] – 0s 687us/sample – loss: 0.0024 – accuracy: 1.0000 – val_loss: 0.5075 – val_accuracy: 0.8333
Epoch 68/100
600/600 [==============================] – 0s 513us/sample – loss: 0.0023 – accuracy: 1.0000 – val_loss: 0.5119 – val_accuracy: 0.8400
Epoch 69/100
600/600 [==============================] – 0s 697us/sample – loss: 0.0023 – accuracy: 1.0000 – val_loss: 0.5133 – val_accuracy: 0.8400
Epoch 70/100
600/600 [==============================] – 0s 508us/sample – loss: 0.0022 – accuracy: 1.0000 – val_loss: 0.5154 – val_accuracy: 0.8400
Epoch 71/100
600/600 [==============================] – 0s 430us/sample – loss: 0.0022 – accuracy: 1.0000 – val_loss: 0.5173 – val_accuracy: 0.8333
Epoch 72/100
600/600 [==============================] – 0s 763us/sample – loss: 0.0021 – accuracy: 1.0000 – val_loss: 0.5185 – val_accuracy: 0.8333
Epoch 73/100
600/600 [==============================] – 0s 498us/sample – loss: 0.0021 – accuracy: 1.0000 – val_loss: 0.5232 – val_accuracy: 0.8467
Epoch 74/100
600/600 [==============================] – 0s 625us/sample – loss: 0.0020 – accuracy: 1.0000 – val_loss: 0.5240 – val_accuracy: 0.8400
Epoch 75/100
600/600 [==============================] – 0s 609us/sample – loss: 0.0020 – accuracy: 1.0000 – val_loss: 0.5253 – val_accuracy: 0.8467
Epoch 76/100
600/600 [==============================] – 0s 480us/sample – loss: 0.0019 – accuracy: 1.0000 – val_loss: 0.5276 – val_accuracy: 0.8467
Epoch 77/100
600/600 [==============================] – 0s 440us/sample – loss: 0.0019 – accuracy: 1.0000 – val_loss: 0.5287 – val_accuracy: 0.8467
Epoch 78/100
600/600 [==============================] – 0s 684us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5304 – val_accuracy: 0.8467
Epoch 79/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5324 – val_accuracy: 0.8467
Epoch 80/100
600/600 [==============================] – 0s 447us/sample – loss: 0.0018 – accuracy: 1.0000 – val_loss: 0.5345 – val_accuracy: 0.8467
Epoch 81/100
600/600 [==============================] – 1s 853us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5367 – val_accuracy: 0.8467
Epoch 82/100
600/600 [==============================] – 0s 468us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5377 – val_accuracy: 0.8467
Epoch 83/100
600/600 [==============================] – 0s 595us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.5400 – val_accuracy: 0.8467
Epoch 84/100
600/600 [==============================] – 0s 553us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5409 – val_accuracy: 0.8467
Epoch 85/100
600/600 [==============================] – 0s 525us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5438 – val_accuracy: 0.8467
Epoch 86/100
600/600 [==============================] – 0s 689us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.5446 – val_accuracy: 0.8467
Epoch 87/100
600/600 [==============================] – 0s 456us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5469 – val_accuracy: 0.8467
Epoch 88/100
600/600 [==============================] – 0s 461us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5470 – val_accuracy: 0.8467
Epoch 89/100
600/600 [==============================] – 0s 658us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5510 – val_accuracy: 0.8467
Epoch 90/100
600/600 [==============================] – 0s 568us/sample – loss: 0.0015 – accuracy: 1.0000 – val_loss: 0.5525 – val_accuracy: 0.8467
Epoch 91/100
600/600 [==============================] – 0s 732us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5538 – val_accuracy: 0.8467
Epoch 92/100
600/600 [==============================] – 0s 465us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5556 – val_accuracy: 0.8467
Epoch 93/100
600/600 [==============================] – 0s 458us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5606 – val_accuracy: 0.8533
Epoch 94/100
600/600 [==============================] – 0s 791us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5621 – val_accuracy: 0.8533
Epoch 95/100
600/600 [==============================] – 0s 427us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.5617 – val_accuracy: 0.8533
Epoch 96/100
600/600 [==============================] – 0s 574us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5641 – val_accuracy: 0.8533
Epoch 97/100
600/600 [==============================] – 0s 583us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5651 – val_accuracy: 0.8533
Epoch 98/100
600/600 [==============================] – 0s 478us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5668 – val_accuracy: 0.8533
Epoch 99/100
600/600 [==============================] – 0s 682us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5681 – val_accuracy: 0.8533
Epoch 100/100
600/600 [==============================] – 0s 478us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.5686 – val_accuracy: 0.8533
WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: ,
Test Accuracy: 78.39999794960022
In [37]:
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences_train)
X_train = tokenizer.texts_to_sequences(sentences_train)
X_test = tokenizer.texts_to_sequences(sentences_test)
vocab_size = len(tokenizer.word_index) + 1 # Adding 1 because of reserved 0 index

print(sentences_train[3])

This is the phone to get for 2005…. I just bought my S710a and all I can say is WOW!
In [38]:
print(X_train[3])

[7, 5, 1, 9, 8, 92, 11, 676, 2, 59, 101, 10, 677, 3, 32, 2, 71, 225, 5, 449]
In [39]:
for word in [‘the’, ‘all’, ‘happy’]:
print(‘{}: {}’.format(word, tokenizer.word_index[word]))

the: 1
all: 32
happy: 86
In [40]:
from keras.preprocessing.sequence import pad_sequences
maxlen = 100
# Pad variables with zeros
X_train = pad_sequences(X_train, padding=’post’, maxlen=maxlen)
X_test = pad_sequences(X_test, padding=’post’, maxlen=maxlen)
print(X_train[0, :])

[ 7 24 5 16 4 137 148 6 223 315 2 71 224 8 1 673 111 444
18 316 11 445 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
In [41]:
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
embedding_dim = 50
model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size,
output_dim=embedding_dim,
input_length=maxlen))
model.add(layers.GlobalMaxPool1D())
model.add(layers.Dense(10, activation=’relu’))
model.add(layers.Dense(1, activation=’sigmoid’))
model.compile(optimizer=’adam’, loss=’binary_crossentropy’,
metrics=[‘accuracy’])
model.summary()

Model: “sequential_2”
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 100, 50) 78700
_________________________________________________________________
global_max_pooling1d (Global (None, 50) 0
_________________________________________________________________
dense_4 (Dense) (None, 10) 510
_________________________________________________________________
dense_5 (Dense) (None, 1) 11
=================================================================
Total params: 79,221
Trainable params: 79,221
Non-trainable params: 0
_________________________________________________________________
In [42]:
hist = model.fit(X_train, y_train,
epochs=50,
validation_split=0.2,
batch_size=10)
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print(“Accuracy: “,accuracy)

Train on 600 samples, validate on 150 samples
Epoch 1/50
600/600 [==============================] – 1s 2ms/sample – loss: 0.6908 – accuracy: 0.5767 – val_loss: 0.6837 – val_accuracy: 0.5667
Epoch 2/50
600/600 [==============================] – 1s 850us/sample – loss: 0.6656 – accuracy: 0.7233 – val_loss: 0.6523 – val_accuracy: 0.6400
Epoch 3/50
600/600 [==============================] – 0s 739us/sample – loss: 0.5952 – accuracy: 0.8683 – val_loss: 0.5887 – val_accuracy: 0.7800
Epoch 4/50
600/600 [==============================] – 1s 881us/sample – loss: 0.4673 – accuracy: 0.9467 – val_loss: 0.5050 – val_accuracy: 0.8133
Epoch 5/50
600/600 [==============================] – 1s 908us/sample – loss: 0.3214 – accuracy: 0.9617 – val_loss: 0.4421 – val_accuracy: 0.8267
Epoch 6/50
600/600 [==============================] – 0s 761us/sample – loss: 0.2054 – accuracy: 0.9667 – val_loss: 0.4048 – val_accuracy: 0.8267
Epoch 7/50
600/600 [==============================] – 1s 976us/sample – loss: 0.1304 – accuracy: 0.9800 – val_loss: 0.3872 – val_accuracy: 0.8267
Epoch 8/50
600/600 [==============================] – 0s 698us/sample – loss: 0.0846 – accuracy: 0.9883 – val_loss: 0.3776 – val_accuracy: 0.8267
Epoch 9/50
600/600 [==============================] – 1s 965us/sample – loss: 0.0564 – accuracy: 0.9933 – val_loss: 0.3777 – val_accuracy: 0.8267
Epoch 10/50
600/600 [==============================] – 1s 881us/sample – loss: 0.0384 – accuracy: 0.9983 – val_loss: 0.3717 – val_accuracy: 0.8333
Epoch 11/50
600/600 [==============================] – 0s 758us/sample – loss: 0.0268 – accuracy: 1.0000 – val_loss: 0.3754 – val_accuracy: 0.8333
Epoch 12/50
600/600 [==============================] – 1s 883us/sample – loss: 0.0198 – accuracy: 1.0000 – val_loss: 0.3780 – val_accuracy: 0.8400
Epoch 13/50
600/600 [==============================] – 1s 900us/sample – loss: 0.0150 – accuracy: 1.0000 – val_loss: 0.3818 – val_accuracy: 0.8400
Epoch 14/50
600/600 [==============================] – 0s 721us/sample – loss: 0.0115 – accuracy: 1.0000 – val_loss: 0.3843 – val_accuracy: 0.8400
Epoch 15/50
600/600 [==============================] – 1s 947us/sample – loss: 0.0091 – accuracy: 1.0000 – val_loss: 0.3879 – val_accuracy: 0.8400
Epoch 16/50
600/600 [==============================] – 1s 988us/sample – loss: 0.0074 – accuracy: 1.0000 – val_loss: 0.3894 – val_accuracy: 0.8400
Epoch 17/50
600/600 [==============================] – 1s 977us/sample – loss: 0.0061 – accuracy: 1.0000 – val_loss: 0.3933 – val_accuracy: 0.8400
Epoch 18/50
600/600 [==============================] – 0s 721us/sample – loss: 0.0051 – accuracy: 1.0000 – val_loss: 0.3953 – val_accuracy: 0.8400
Epoch 19/50
600/600 [==============================] – 1s 863us/sample – loss: 0.0043 – accuracy: 1.0000 – val_loss: 0.3976 – val_accuracy: 0.8400
Epoch 20/50
600/600 [==============================] – 0s 709us/sample – loss: 0.0037 – accuracy: 1.0000 – val_loss: 0.3995 – val_accuracy: 0.8400
Epoch 21/50
600/600 [==============================] – 1s 886us/sample – loss: 0.0032 – accuracy: 1.0000 – val_loss: 0.4005 – val_accuracy: 0.8467
Epoch 22/50
600/600 [==============================] – 0s 823us/sample – loss: 0.0028 – accuracy: 1.0000 – val_loss: 0.4019 – val_accuracy: 0.8467
Epoch 23/50
600/600 [==============================] – 1s 835us/sample – loss: 0.0025 – accuracy: 1.0000 – val_loss: 0.4039 – val_accuracy: 0.8467
Epoch 24/50
600/600 [==============================] – 1s 912us/sample – loss: 0.0022 – accuracy: 1.0000 – val_loss: 0.4067 – val_accuracy: 0.8467
Epoch 25/50
600/600 [==============================] – 0s 733us/sample – loss: 0.0019 – accuracy: 1.0000 – val_loss: 0.4098 – val_accuracy: 0.8467
Epoch 26/50
600/600 [==============================] – 1s 850us/sample – loss: 0.0017 – accuracy: 1.0000 – val_loss: 0.4116 – val_accuracy: 0.8467
Epoch 27/50
600/600 [==============================] – 1s 964us/sample – loss: 0.0016 – accuracy: 1.0000 – val_loss: 0.4132 – val_accuracy: 0.8467
Epoch 28/50
600/600 [==============================] – 0s 821us/sample – loss: 0.0014 – accuracy: 1.0000 – val_loss: 0.4167 – val_accuracy: 0.8533
Epoch 29/50
600/600 [==============================] – 0s 815us/sample – loss: 0.0013 – accuracy: 1.0000 – val_loss: 0.4189 – val_accuracy: 0.8533
Epoch 30/50
600/600 [==============================] – 1s 931us/sample – loss: 0.0012 – accuracy: 1.0000 – val_loss: 0.4209 – val_accuracy: 0.8533
Epoch 31/50
600/600 [==============================] – 1s 934us/sample – loss: 0.0011 – accuracy: 1.0000 – val_loss: 0.4227 – val_accuracy: 0.8533
Epoch 32/50
600/600 [==============================] – 0s 730us/sample – loss: 9.7609e-04 – accuracy: 1.0000 – val_loss: 0.4245 – val_accuracy: 0.8533
Epoch 33/50
600/600 [==============================] – 1s 906us/sample – loss: 8.9717e-04 – accuracy: 1.0000 – val_loss: 0.4269 – val_accuracy: 0.8533
Epoch 34/50
600/600 [==============================] – 1s 913us/sample – loss: 8.2301e-04 – accuracy: 1.0000 – val_loss: 0.4279 – val_accuracy: 0.8533
Epoch 35/50
600/600 [==============================] – 0s 827us/sample – loss: 7.6008e-04 – accuracy: 1.0000 – val_loss: 0.4294 – val_accuracy: 0.8533
Epoch 36/50
600/600 [==============================] – 1s 875us/sample – loss: 7.0327e-04 – accuracy: 1.0000 – val_loss: 0.4311 – val_accuracy: 0.8533
Epoch 37/50
600/600 [==============================] – 1s 846us/sample – loss: 6.5161e-04 – accuracy: 1.0000 – val_loss: 0.4326 – val_accuracy: 0.8533
Epoch 38/50
600/600 [==============================] – 0s 780us/sample – loss: 6.0560e-04 – accuracy: 1.0000 – val_loss: 0.4339 – val_accuracy: 0.8533
Epoch 39/50
600/600 [==============================] – 0s 705us/sample – loss: 5.6037e-04 – accuracy: 1.0000 – val_loss: 0.4352 – val_accuracy: 0.8533
Epoch 40/50
600/600 [==============================] – 0s 741us/sample – loss: 5.2155e-04 – accuracy: 1.0000 – val_loss: 0.4369 – val_accuracy: 0.8533
Epoch 41/50
600/600 [==============================] – 0s 775us/sample – loss: 4.8525e-04 – accuracy: 1.0000 – val_loss: 0.4385 – val_accuracy: 0.8533
Epoch 42/50
600/600 [==============================] – 0s 788us/sample – loss: 4.5236e-04 – accuracy: 1.0000 – val_loss: 0.4403 – val_accuracy: 0.8533
Epoch 43/50
600/600 [==============================] – 0s 735us/sample – loss: 4.2374e-04 – accuracy: 1.0000 – val_loss: 0.4425 – val_accuracy: 0.8533
Epoch 44/50
600/600 [==============================] – 0s 675us/sample – loss: 3.9628e-04 – accuracy: 1.0000 – val_loss: 0.4448 – val_accuracy: 0.8533
Epoch 45/50
600/600 [==============================] – 0s 758us/sample – loss: 3.7176e-04 – accuracy: 1.0000 – val_loss: 0.4462 – val_accuracy: 0.8533
Epoch 46/50
600/600 [==============================] – 0s 700us/sample – loss: 3.4947e-04 – accuracy: 1.0000 – val_loss: 0.4479 – val_accuracy: 0.8533
Epoch 47/50
600/600 [==============================] – 0s 741us/sample – loss: 3.2922e-04 – accuracy: 1.0000 – val_loss: 0.4493 – val_accuracy: 0.8533
Epoch 48/50
600/600 [==============================] – 0s 722us/sample – loss: 3.0962e-04 – accuracy: 1.0000 – val_loss: 0.4506 – val_accuracy: 0.8533
Epoch 49/50
600/600 [==============================] – 0s 735us/sample – loss: 2.9143e-04 – accuracy: 1.0000 – val_loss: 0.4527 – val_accuracy: 0.8467
Epoch 50/50
600/600 [==============================] – 0s 680us/sample – loss: 2.7490e-04 – accuracy: 1.0000 – val_loss: 0.4543 – val_accuracy: 0.8467
Accuracy: 0.792
In [43]:
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(hist.history[‘accuracy’])
plt.plot(hist.history[‘val_accuracy’])
plt.title(‘model accuracy’)
plt.ylabel(‘accuracy’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’lower right’)

plt.subplot(2,1,2)
plt.plot(hist.history[‘loss’])
plt.plot(hist.history[‘val_loss’])
plt.title(‘model loss’)
plt.ylabel(‘loss’)
plt.xlabel(‘epoch’)
plt.legend([‘train’, ‘test’], loc=’upper right’)

plt.show()