

Assignment Instructions
This assignment gives you the chance to explore some of the most advanced pretrained networks available. Keras comes with more than two dozen pretrained neural networks built in. You can use these networks right out of the box without modification, or extend them through transfer learning. For this assignment, I will show you how to explore these networks and examine their structure. This technique can be a great learning aid for seeing how some of the most advanced neural networks are put together.

To create one of the pretrained neural networks in Keras, use the tf.keras.applications package. For example, you can create the Xception neural network with the following command:

net = tf.keras.applications.Xception()

To see the neural network structure, issue the summary command:

net.summary()

The dir command will tell you what methods and properties are available for the neural network. You will use these methods and properties to extract data from this structure. For example, to see the first layer:

net.layers[0]

To see what type the first layer is:

type(net.layers[0])

To see the internals of that layer:

dir(net.layers[0])

Use these sorts of commands to build a table similar to the following:

name      input           output  layers  max_layer_wgt  wgt_count
Xception  299 x 299 x 3   1000    134     3.0M           21.8M
VGG16     224 x 224 x 3   1000    23      98.0M          131.9M
VGG19     224 x 224 x 3   1000    26      98.0M          137.0M
...       ...             ...     ...     ...            ...

The meanings of these columns are:

name – The name of the network.
input – The count/structure of input neurons.
output – The count/structure of output neurons.
layers – The count of layers.
max_layer_wgt – The maximum number of weights in any layer. (as a string)
wgt_count – The total count of weights. (as a string)

Note that I ask you to output the weight counts as strings, such as 10M; I provide a helper function for this. Also note that I ask for the input structure as a string, such as 128 x 128 x 3; you should create a helper function of your own to format this output.
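For instance, the minimal sketch below (assuming TensorFlow is imported as tf, as in the examples above) shows one way to read these values from a loaded network. The fmt_input helper name is hypothetical; you are free to structure your own helper differently.

import tensorflow as tf

# Hypothetical helper: turn an input shape such as (None, 299, 299, 3)
# into the string "299 x 299 x 3" by dropping the batch dimension.
def fmt_input(shape):
    return " x ".join(str(d) for d in shape[1:])

net = tf.keras.applications.Xception()
print(fmt_input(net.input_shape))                          # e.g. 299 x 299 x 3
print(net.count_params())                                  # total weight count
print(max(layer.count_params() for layer in net.layers))   # largest single layer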

Report on the following pretrained neural networks (a short sketch for iterating over them follows the list):

Xception
VGG16
VGG19
ResNet50
ResNet101
ResNet152
ResNet50V2
ResNet101V2
ResNet152V2
InceptionV3
InceptionResNetV2
MobileNet
MobileNetV2
DenseNet121
DenseNet169
DenseNet201
NASNetMobile
NASNetLarge
EfficientNetB0
EfficientNetB1
EfficientNetB2
EfficientNetB3
EfficientNetB4
EfficientNetB5
EfficientNetB6
EfficientNetB7
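Since each of these names is also an attribute of tf.keras.applications, a sketch like the following (an optional convenience, not required) can instantiate them in a loop rather than one call at a time:

import tensorflow as tf

# Partial list for illustration; extend it with the remaining names above.
names = ["Xception", "VGG16", "VGG19", "ResNet50"]

for name in names:
    constructor = getattr(tf.keras.applications, name)
    net = constructor()   # downloads the ImageNet weights on first use
    print(name, len(net.layers))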

Google CoLab Instructions
If you are using Google CoLab, it will be necessary to mount your GDrive so that you can send your notebook during the submit process. Running the following code will map your GDrive to /content/drive.

In [ ]:

try:
    from google.colab import drive
    drive.mount('/content/drive', force_remount=True)
    COLAB = True
    print("Note: using Google CoLab")
    %tensorflow_version 2.x
except:
    print("Note: not using Google CoLab")
    COLAB = False

Assignment Submit Function
You will submit the ten programming assignments electronically. The following submit function can be used to do this. My server will perform a basic check of each assignment and let you know if it sees any underlying problems.

It is unlikely that you will need to modify this function.

In [ ]:

import base64
import os
import numpy as np
import pandas as pd
import requests
import PIL
import PIL.Image
import io

# This function submits an assignment. You can submit an assignment as many
# times as you like; only the final submission counts. The parameters are as follows:
# data - List of pandas dataframes or images.
# key - Your student key that was emailed to you.
# no - The assignment class number, should be 1 through 10.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
#               The number must match your assignment number. For example "_class2" for class assignment #2.
def submit(data, key, no, source_file=None):
    if source_file is None and '__file__' not in globals():
        raise Exception('Must specify a filename when a Jupyter notebook.')
    if source_file is None:
        source_file = __file__
    suffix = '_class{}'.format(no)
    if suffix not in source_file:
        raise Exception('{} must be part of the filename.'.format(suffix))
    with open(source_file, "rb") as image_file:
        encoded_python = base64.b64encode(image_file.read()).decode('ascii')
    ext = os.path.splitext(source_file)[-1].lower()
    if ext not in ['.ipynb', '.py']:
        raise Exception("Source file is {} must be .py or .ipynb".format(ext))
    payload = []
    for item in data:
        if type(item) is PIL.Image.Image:
            buffered = io.BytesIO()  # the io module is imported above
            item.save(buffered, format="PNG")
            payload.append({'PNG': base64.b64encode(buffered.getvalue()).decode('ascii')})
        elif type(item) is pd.core.frame.DataFrame:
            payload.append({'CSV': base64.b64encode(item.to_csv(index=False).encode('ascii')).decode("ascii")})
    r = requests.post("https://api.heatonresearch.com/assignment-submit",
                      headers={'x-api-key': key},
                      json={'payload': payload, 'assignment': no, 'ext': ext, 'py': encoded_python})
    if r.status_code == 200:
        print("Success: {}".format(r.text))
    else:
        print("Failure: {}".format(r.text))

Assignment #9 Sample Code
The following code provides a starting point for this assignment; a rough sketch of one way to fill in process_network follows the second cell.

In [ ]:

import os
import pandas as pd
from scipy.stats import zscore
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.models import load_model
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import KFold
import sklearn
from sklearn.linear_model import Lasso

# This is your student key that I emailed to you at the beginning of the semester.
key = "PPboscDL2djekrbkbvfOLakXXNy3dh5x2VV1Mlpm"  # This is an example key and will not work.

# You must also identify your source file. (modify for your local setup)
file = '/content/drive/My Drive/Colab Notebooks/new_assignment_yourname_class9.ipynb'  # Google CoLab
# file = 'C:\\Users\\jeffh\\projects\\t81_558_deep_learning\\assignments\\assignment_yourname_class9.ipynb'  # Windows
# file = '/Users/jheaton/projects/t81_558_deep_learning/assignments/assignment_yourname_class9.ipynb'  # Mac/Linux

In [ ]:

import numpy as np
import pandas as pd
import tensorflow as tf

lst_names = []
lst_input_count = []
lst_all_weights = []
lst_max_weights = []
lst_input = []
lst_output = []
lst_layers = []
lst_sort = []

# This function is based on the following:
# https://stackoverflow.com/questions/1094841/reusable-library-to-get-human-readable-version-of-file-size
def sizeof_fmt(num, suffix='B'):
    for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
        if abs(num) < 1024.0:
            return "%3.1f%s" % (num, unit)
        num /= 1024.0
    return "%.1f%s" % (num, 'Y')

def process_network(name, net):
    pass  # Add code here

process_network("Xception", tf.keras.applications.Xception())
process_network("VGG16", tf.keras.applications.VGG16())
process_network("VGG19", tf.keras.applications.VGG19())
# Add code here

df = pd.DataFrame()
df['name'] = lst_names
df['input'] = lst_input
df['output'] = lst_output
df['layers'] = lst_layers
df['max_layer_wgt'] = lst_max_weights
df['wgt_count'] = lst_all_weights

submit(source_file=file, data=[df], key=key, no=9)