
Laboratory #1 Tensorflow
Table of Contents

Step1. Warm-up
Step2. Implement OCR code in Tensorflow
Step3. Structured data
Based on the definition provided on the TensorFlow website: "TensorFlow is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google's AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains." TensorFlow 1.0 was released in February 2017. One of the interesting properties of this library is that it is interdisciplinary software: it brings together several kernels, software components, and hardware back ends such as GPUs. Figure 1 is from Google and gives a good overview of its structure.
Figure 1- Tensorflow flowchart (from Google)
In this lab, we are going to see some examples on Tensorflow and its applications.
What is a Tensor?
A tensor is a multidimensional array. Figure 2 shows some examples of tensors of different dimensions.
Figure 2- Different tensors
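To connect Figure 2 to code, the short sketch below (an illustration added here, not part of the original handout) builds tensors of rank 0, 1, and 2 with tf.constant() and prints their shapes:

import tensorflow as tf

scalar = tf.constant(3)                  # rank-0 tensor (a single number)
vector = tf.constant([1, 2, 3])          # rank-1 tensor (a 1-D array)
matrix = tf.constant([[1, 2], [3, 4]])   # rank-2 tensor (a 2-D array)

print(scalar.shape)   # ()
print(vector.shape)   # (3,)
print(matrix.shape)   # (2, 2)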
Please run through all the steps of these instructions and answer the questions carefully. Finally, write a report that contains your analysis plus the answers to the questions indicated by Q.
Step1. Warm-up
Let us start with basic commands in TensorFlow. Run the following code in Python and analyze the output:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
print(hello)

Analyze the response and explain what the tf.constant() command does.
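With eager execution (the default in TensorFlow 2.x), print(hello) shows the tensor's value, shape, and dtype directly; the exact formatting may vary slightly by version, but the output should look roughly like:

tf.Tensor(b'Hello, TensorFlow!', shape=(), dtype=string)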
Step2. Implement OCR code in Tensorflow
We have already seen how to implement OCR (Optical Character Recognition) in Python with Keras. The goal of this section is to implement the same code in TensorFlow. We start by loading the data. We can use the following code to load the MNIST dataset in Python:
mnist = tf.keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Q1- Use the same method explained in the lecture to show the first 6 elements of the dataset. You should see the first six handwritten digits displayed as images.
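One possible way to do this (a minimal sketch, assuming matplotlib is installed and the arrays loaded above; adapt it to the method shown in the lecture):

import matplotlib.pyplot as plt

# Show the first 6 training images with their labels as titles.
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.imshow(X_train[i], cmap='gray')
    plt.title(y_train[i])
    plt.axis('off')
plt.show()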
Q2- Normalize the data by dividing the values by 255.
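A minimal sketch of the standard normalization (the original pixel values are integers in [0, 255]):

# Scale pixel intensities from [0, 255] to [0.0, 1.0].
X_train, X_test = X_train / 255.0, X_test / 255.0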
The next step is to design the model. For this part we use only one hidden layer with 128 nodes.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(10)
])
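If you want to inspect the resulting architecture (an optional check, not required by the handout), model.summary() lists the layers and their parameter counts:

# Print a per-layer summary of the network (layer types, output shapes, parameter counts).
model.summary()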
We can run this model on our training data and see the result of the network. For each example the model returns a vector of “logits” or “log-odds” scores, one for each class.
predictions = model(X_train[:1]).numpy()
predictions
If we want to see the sigmoid of these scores, we can apply the activation function to the model output:
tf.nn.sigmoid(predictions).numpy()
Now it is time to define the loss function. We use cross entropy as the loss function.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class. This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ≈ 2.3. We can see this by typing:

loss_fn(y_train[:1], predictions).numpy()
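As a quick sanity check (a small sketch added here, not part of the original handout), you can compute that expected value directly:

# Negative log-probability of a uniform guess over 10 classes.
print(-tf.math.log(1.0 / 10.0).numpy())   # ~2.3026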
The next step is to compile the model and start training:

model.compile(optimizer='SGD',
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)
You may see slightly different results because the weights and biases are initialized randomly rather than to the same fixed values.
Q3- Add another layer with 128 nodes, increase the number of epochs to 10, and report the result. What is the accuracy? (model.evaluate can tell you the accuracy.)
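For reporting the accuracy, a minimal sketch (assuming X_test and y_test are the normalized test arrays from above):

# Evaluate on the held-out test set; returns the loss and the metrics passed to compile().
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
print('Test accuracy:', test_acc)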
Use the following command in your training step. It will hold out 20% of your training data for validation:

hist = model.fit(X_train, y_train, validation_split=0.2, epochs=10)

Now use the following commands to draw the learning curves.
import matplotlib.pyplot as plt

plt.subplot(2, 1, 1)
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')

plt.subplot(2, 1, 2)
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')

plt.show()
As we discussed, dropout can help to overcome overfitting. In the following code we add dropout to the model:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dense(128, activation='sigmoid'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
Q4- Now train the model again using stochastic gradient descent with a batch size of 200. What do you expect to change: speed or accuracy?
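A minimal sketch of that training call (assuming the dropout model above and the same loss function; batch_size controls the number of examples per gradient update):

# Recompile the dropout model and train with mini-batches of 200 examples.
model.compile(optimizer='SGD', loss=loss_fn, metrics=['accuracy'])
hist = model.fit(X_train, y_train, validation_split=0.2, epochs=10, batch_size=200)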
Step3. Structured data
For this part, we will use a small dataset. In this dataset, the diagnostic, binary-valued variable investigated is whether the patient shows signs of diabetes according to World Health Organization criteria (i.e., if the 2 hour post-load plasma glucose was at least 200 mg/dl at any survey examination or if found during routine medical care). The population lives near Phoenix, Arizona, USA.
Attribute Information:
0. Patient ID
1. Number of times pregnant
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable (0 or 1)
You can load the dataset using:
import pandas as pd

file = 'address/to/file/pima-indians-diabetes.csv'
dataframe = pd.read_csv(file)
dataframe.head()
Use describe() to see more information about the dataset.
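For example (using the dataframe loaded above):

# Per-column summary statistics (count, mean, std, min, quartiles, max).
dataframe.describe()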
Let us identify the inputs and the output:

y = dataframe['diabete']
X = dataframe.drop(['ID', 'diabete'], axis=1)

Now use your knowledge and what we have learned in class to design a good neural network that makes predictions for this dataset. Please use 80% of the data for training and 20% for testing:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1234)

Q5- Design a neural network with 2 hidden layers (hidden layer 1 with 12 nodes and hidden layer 2 with 8 nodes), report the accuracy, and draw the learning curves. Apply dropout to see if you can get any better result. Use 'adam' as the optimizer.

We do not expect a good accuracy from this example. That is one of the issues with neural networks: we need more data to create an accurate model.
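As a starting point for Q5, a minimal sketch (the column names and train/test split above are assumed; the layer sizes follow the question, while the activations, epoch count, and other settings are only one reasonable choice):

# Two hidden layers (12 and 8 nodes) and a sigmoid output for the binary class variable.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(12, activation='relu', input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
hist = model.fit(X_train, y_train, validation_split=0.2, epochs=50)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
print('Test accuracy:', test_acc)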