
For this homework, you are going to:
* write a small program to calculate the hypotenuse of a right triangle
* train a neural network [3 hidden layers] on the classic Iris Dataset, and use the learning to classify new data
You will be using the Google Cloud Platform (GCP), a 100% web-hosted, managed service from Google – you do not have to install ANY piece of software on your machine! What you need to submit:
* hypotenuse.ipynb: a Python ‘notebook’ (see below) that shows your computation of a right triangle’s hypotenuse, given its other two sides
* a screen capture of your browser running code that will be provided to you [you have to edit it just a bit – see below], showing the output (predicted classes) of the trained network.
0. Getting GCP set up
1. Go to https://cloud.google.com/, sign up for a free trial. You need to have a Google account (the one you use for GMail etc) prior to this. You will be asked to provide your credit card number, but you will NOT be charged. What you get is a free trial that is worth $300, and this homework will not have you go over that limit (not even close!).
2. After signing up, click on ‘Console’ at the top right of your browser, or click on this link: https://console.cloud.google.com/home/dashboard – this takes you to the main console for GCP. Click on ‘Select a project’, to create a new project:

I called my project ‘P-TF’; you can call yours ‘HW4’. GCP assigns your project an ID (mine is p-tf-165407); you are going to need this ID for the first of the three ‘gcloud’ commands you will type in shortly.
3. Next, you need to enable the Google Compute Engine API for this project; this is what will let you execute code on the cloud (and not just have it hosted there). Click on https://console.cloud.google.com/apis/api/compute_component/?_ga=1.172320422.462420136.1492792803 and click ‘Enable’.
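(Aside, not required for the homework: the same API can also be enabled from the command line, using the Cloud Shell described in the next step. The exact subcommand depends on your gcloud version, so treat this as a sketch:
gcloud services enable compute.googleapis.com)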
Now you are ready to start doing cloud computing on the GCP, in your project 🙂 What this means is this: you will launch an in-browser ‘cloud shell’ (see below) to type some commands to set up computing parameters, then work in a Python ‘notebook’ (script container) in another browser page. Using this approach, you can run VERY sophisticated code, including creating arbitrarily complex neural networks and other ML pipelines, reading in terabytes of cloud-based data, and executing the code on powerful cloud hardware.
4. Launch the Cloud Shell:

This is a Unix shell, where you can type in usual commands such as ls, pwd, more, cat, cd etc:

5. Next, set up the computing environment, and create a ‘Datalab’ project called hw4. A Datalab is an environment built on top of the GCP, to make it possible to run Python code interactively, using a Python ‘notebook’ shell (IDE). Follow what I’m doing, in the next 5 screenshots (you need to substitute your own project’s ID, in place of mine) – you are entering just 3 commands:
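(The screenshots from my session are not reproduced here, so purely as a sketch, the three commands are along these lines; the zone shown is an assumption, pick one near you, and substitute your own project ID for mine:
gcloud config set project p-tf-165407
gcloud config set compute/zone us-central1-a
datalab create hw4)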

As you can see above, after ‘datalab create hw4’, you will launch a browser page in which you will create a Python notebook; it connects to the Datalab instance (hw4, in my case) via port 8081, instead of the usual port 8080.
6. Step 5 should result in a new Datalab browser page, like so:

Congrats! Now you are ready to start creating Python notebooks, to do the homework.

1. Writing a TensorFlow (TF) program that computes the hypotenuse.
As you know, the hypotenuse of a right triangle is computed using the Pythagorean theorem.
In a standard programming language, this is expressed like so:
create/define variable ‘a’ to hold one of the non-hypotenuse sides
create/define variable ‘b’ to hold the other one
compute the hypotenuse, store it in ‘c’: c = sqrt(a*a+b*b)
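Just to make the pseudocode concrete, here is the plain, single-value Python version (nothing TF-specific here):
import math
a = 3.0
b = 4.0
c = math.sqrt(a*a + b*b)   # 5.0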
You can compute the hypotenuse (or other quantities – mean, standard deviation, derivative, perimeter…) using TF, by specifying ‘a’ and ‘b’ as ‘tensors’ (very simply, arrays) instead of single values. In other words:
a = [3,10.5,6.7]
b = [4,12.6,8.4]
c = sqrt(a*a+b*b)
As you would expect, c will also be a tensor (array) of hypotenuses (hypotenusen?!): [5,16.401,10.745], for the arrays a and b above.
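If the element-wise idea feels abstract, here is the same computation written in plain NumPy rather than TF, purely for intuition:
import numpy as np
a = np.array([3, 10.5, 6.7])
b = np.array([4, 12.6, 8.4])
c = np.sqrt(a*a + b*b)   # element-wise: roughly [5., 16.4015, 10.7448]
print c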
So, what’s the big deal with tensors, and tensorflow? Couldn’t we loop the array calculations ourselves, eg. in JavaScript:
a = [3,10.5,6.7]
b = [4,12.6,8.4]
for(var i=0;i<=2;i++){ alert(Math.sqrt(a[i]*a[i]+b[i]*b[i])); }

Here is the BIG DEAL why: the underlying TF engine can automagically PARALLELIZE the computation (eg. our three sqrt() calculations [one for each tensor element] would run on three processors/cores simultaneously). Further, the TF engine will also run its input 'dataflow graph' in parallel mode! TF programs contain 'nodes' (vertices in a graph), and 'connections' (arcs/edges) through which data flows: our a and b arrays would be tensor creation (source) nodes, and the hypotenuse tensor computing node would receive a and b as inputs. We can imagine extending the graph further - c, and another creation node d, get passed into node e, which would compute an element-by-element max() whose output gets passed to downstream nodes connected to e (there is a small code sketch of this d/e extension a little further below). The TF engine would run a, b and d in parallel; once a and b are done, it runs c; after c is done, it runs e - the dataflow 'dependencies' are automagically accounted for. If, after all this, d changes, the TF engine will only re-run e, and not re-run c. a, b, c, d… can be Twitter feeds, sensor data, CNN stages, interfaces to a self-driving car… you can imagine the possibilities. This is the magic of TensorFlow!

So the TF workflow is this:
* create a TF dataflow graph - EACH NODE'S INPUTS AND OUTPUT WOULD ALWAYS BE TENSORS!
* create a TF execution 'session' - this is the supervisor engine that will parse our graph and manage its dataflow (Python will not directly run the graph)
* provide the session our graph to run
* the engine executes the graph, and creates the graph's outputs
* any modification of inputs will result in just the affected nodes being automatically re-run

How do we create a graph that will add two tensors a and b? Here is the entire code:

import tensorflow as tf
a = tf.constant([5, 2, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a,b)
session = tf.InteractiveSession()
result = session.run(c)
print result

The above Python code will run in any standard Python environment (that has TensorFlow installed), eg. IDLE. But in GCP Datalab, we are going to use a special environment called a Python notebook - this is a 'literate programming' environment, where we can mix 'text cells' containing formatted text and 'code cells' that contain code, to create a computable document (this is how Mathematica works as well). When we run the above in a notebook (see below on creating one), we get this:

As you can see, the tensors a and b have been added to create c.

Let's create a notebook that will become your first submittable. In the Datalab window, click on 'Notebook' - this will create a blank new notebook. Don't type in anything yet - go to Notebook->Rename…, and name your notebook, ‘hw4-nb’.

Now you can start entering code. The notebook starts you off with a code cell, as opposed to a text cell; going forward, you can create more cells, using ‘Add Code’ (code cell) and ‘Add Markdown’ (text cell). For now, just copy & paste the following into the code cell, and click on ‘Run’ – we are importing the TensorFlow module (library, ie. API) into our notebook:
import tensorflow as tf
The code runs, after which TensorFlow is ready for use. Advice: always have the TF import statement in its own cell.
Next, create a new code cell, paste the following in, and click ‘Run’:
a = tf.constant([5, 2, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a,b)
session = tf.InteractiveSession()
result = session.run(c)
print result
Congrats! You just executed a TF graph 🙂
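To see the dataflow-dependency idea from the earlier discussion in action, here is a minimal sketch (the names d and e follow that discussion; this is an aside, not part of the submission). In a new cell, extend the graph you just ran with another source tensor d and an element-wise max node e:
d = tf.constant([10, 0, 7])
e = tf.maximum(c, d)     # element-wise max of c=[8, 1, 10] and d
print session.run(e)     # [10  1 10]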
Get in the habit of doing Notebook->Save and Checkpoint often, to save your code.
You can experiment with creating a text cell and adding whatever text you want, and with moving cells up/down to freely intermix code and text. This facility helps you create well-documented code that is pleasing to look at (if you know Markdown, you can use it to format your text). Note – you need to double-click in a text cell, to get into edit mode.
Do Notebook->Download to download the notebook to your own laptop – this is what you will be submitting, after making the appropriate mods. Note that the extension is .ipynb, which stands for ‘interactive Python notebook’. This is how you can save GCP notebooks to your own machine, as a form of backup. Conversely, you can upload notebook files from your laptop onto Datalab and run them there – look for the ‘Upload’ button.
In order to construct the dataflow graph it manages, the TF engine is able to parse expressions in your code, in addition to function calls. So rather than use tf.add(), we could simply use + (enter this in a new cell and run it):
print session.run(a+b)
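The same is true for other operators; for instance, with the a and b defined earlier, element-wise multiplication is simply (a quick aside, not part of the submission):
print session.run(a * b)   # [15 -2 16]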
Modify what you have, to make a and b be used for the hypotenuse calculation. Hint: look in https://www.tensorflow.org/api_docs/python/tf; you will likely also need to use tf.to_float(). Submit hw4-nb.ipynb after you have modified the above code to do the hypotenuse calculation.
Want to see something cool? Create a new notebook (eg. ‘test’), and run the following, each in its own cell:
import tensorflow as tf
print tf.__version__
print dir(tf) # EVERYTHING that TF offers!
import google.datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil
%bigquery schema --table "nyc-tlc:green.trips_2015"
%bq query
SELECT EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber FROM `nyc-tlc.green.trips_2015` LIMIT 10
Wow 🙂 Right in our notebook, we can pull in massive datasets from BigQuery, and use SQL-like statements to do analysis on the data. Likewise we can also use Google’s APIs for vision, video, speech understanding, language translation and natural language processing, right inside our notebooks.. VERY powerful and vast functionality, yours for the taking 🙂

2. Training an NN, and using it to classify. 
Now that you are familiar with the overall TF idea, you can jump in with both feet, into using TF’s NN functionality 🙂
Below is the complete code [FYI – modification of existing code] to train a network on learning the Iris Dataset – sepal length and width, and petal length and width (4 input vars), for 50 flowers from each of 3 species of iris (150 rows of data); the output (class) of 0,1,2 corresponds to one of the three iris species (FYI the species are Iris setosa, Iris versicolor, and Iris virginica). Here is what the data looks like (inputs,output):

If the NN learns how the 4 input variables determine the output, we can feed it NEW inputs (from measurements on new flowers), and ask it to classify the flowers as belonging to one of the trained classes.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import urllib

import numpy as np
import tensorflow as tf

# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TRAINING_URL = "http://download.tensorflow.org/data/iris_training.csv"

IRIS_TEST = "iris_test.csv"
IRIS_TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"

def main():
  # fetch data
  if not os.path.exists(IRIS_TRAINING):
    raw = urllib.urlopen(IRIS_TRAINING_URL).read()
    with open(IRIS_TRAINING, "w") as f:
      f.write(raw)
  if not os.path.exists(IRIS_TEST):
    raw = urllib.urlopen(IRIS_TEST_URL).read()
    with open(IRIS_TEST, "w") as f:
      f.write(raw)

  # load data
  training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
      filename=IRIS_TRAINING,
      target_dtype=np.int,
      features_dtype=np.float32)
  test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
      filename=IRIS_TEST,
      target_dtype=np.int,
      features_dtype=np.float32)

  feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

  # build 3 hidden-layer DNN with 10, 20, 10 units respectively.
  # for activation_fn, can try tf.{sigmoid,tanh,nn.elu,nn.relu}
  mdir = "/tmp/iris"
  classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                              hidden_units=[10, 20, 10],
                                              activation_fn=tf.nn.relu,
                                              n_classes=3,
                                              model_dir=mdir)

  # prep training inputs
  def get_train_inputs():
    x = tf.constant(training_set.data)
    y = tf.constant(training_set.target)
    return x, y

  # DO THE TRAINING!
  classifier.fit(input_fn=get_train_inputs, steps=80)

  # prep test inputs
  def get_test_inputs():
    x = tf.constant(test_set.data)
    y = tf.constant(test_set.target)
    return x, y

  # how good is the training?
  accuracy_score = classifier.evaluate(input_fn=get_test_inputs,
                                       steps=1)["accuracy"]
  print("\nAccuracy: {0:f}\n".format(accuracy_score))

  # classify new flower samples.
  def new_samples():
    return np.array(
        [[6.4, 3.2, 4.5, 1.5],
         [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)

  predictions = list(classifier.predict(input_fn=new_samples))
  print("New inputs, output class predictions:")
  print(new_samples(), "\n", predictions)

main()  # run!
Create a new notebook (eg. ‘iris’), copy and paste the above, then run – you should see something like this:

Success! We made the machine learn, and we are able to feed it new input data (measurements of two flowers) and be given outputs (that the flowers are type#1, type#2).
In your ‘iris’ notebook, modify the arrays in new_samples() to provide three flowers’ inputs (the code shows two) so that the prediction is [0,1,2]. Submit a screenshot that shows your inputs and outputs (just like the one above). Note – for the inputs, don’t use data from the pre-existing 150 rows verbatim! What you’re asked to do is make an educated guess of sorts – start with a known set of values, tweak the inputs just a bit (eg. + or – 0.1), see what you get for the output, and iterate the tweaking. This should take just a few minutes!
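One practical note while you iterate (this is general Estimator behavior, not something specific to this homework): the classifier’s model_dir ("/tmp/iris" in the code above) persists checkpoints between runs, so re-running the notebook continues training from the previously saved model rather than starting from scratch. If you want a clean retrain while experimenting, delete that directory first, for example:
import os, shutil
if os.path.exists("/tmp/iris"):
    shutil.rmtree("/tmp/iris")   # wipe saved checkpoints so fit() starts fresh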
