{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1> Scaling up ML using Cloud ML Engine </h1>\n",
"\n",
"In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud MLE. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates *how* to package up a TensorFlow model to run it within Cloud ML. \n",
"\n",
"Later in the course, we will look at ways to make a more effective machine learning model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Environment variables for project and bucket </h2>\n",
"\n",
"Note that:\n",
"<ol>\n",
"<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>\n",
"<li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li>\n",
"</ol>\n",
"<b>Change the cell below</b> to reflect your Project ID and bucket name.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\n",
"REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.\n",
"BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# for bash\n",
"os.environ['PROJECT'] = PROJECT\n",
"os.environ['BUCKET'] = BUCKET\n",
"os.environ['REGION'] = REGION\n",
"os.environ['TFVERSION'] = '2.1' # Tensorflow version"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud config set project $PROJECT\n",
"gcloud config set compute/region $REGION"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Allow the Cloud ML Engine service account to read/write to the bucket containing training data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"PROJECT_ID=$PROJECT\n",
"AUTH_TOKEN=$(gcloud auth print-access-token)\n",
"SVC_ACCOUNT=$(curl -X GET -H \"Content-Type: application/json\" \\\n",
" -H \"Authorization: Bearer $AUTH_TOKEN\" \\\n",
" https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \\\n",
" | python -c \"import json; import sys; response = json.load(sys.stdin); \\\n",
" print(response['serviceAccount'])\")\n",
"\n",
"echo \"Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET\"\n",
"gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET\n",
"gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored\n",
"gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET"
]
},
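{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, verify the bucket's ACL from the notebook. This is just a sanity check (not part of the original lab); it assumes your account can read the bucket's metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"# Show bucket metadata, including its ACL, to confirm the service account was added\n",
"gsutil ls -L -b gs://$BUCKET"
]
},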
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Packaging up the code </h2>\n",
"\n",
"Take your code and put into a standard Python package structure. <a href=\"taxifare/trainer/model.py\">model.py</a> and <a href=\"taxifare/trainer/task.py\">task.py</a> contain the Tensorflow code from earlier (explore the <a href=\"taxifare/trainer/\">directory structure</a>)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!find taxifare"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat taxifare/trainer/model.py"
]
},
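{
"cell_type": "markdown",
"metadata": {},
"source": [
"For orientation, here is a minimal, hypothetical sketch of what a Cloud ML Engine entry point like <a href=\"taxifare/trainer/task.py\">task.py</a> does: parse command-line flags and hand them to the model code. The flag names below mirror the ones used in this lab, but this is only an illustration; the real code is in task.py."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch of a trainer entry point; the real code is in taxifare/trainer/task.py\n",
"import argparse\n",
"\n",
"parser = argparse.ArgumentParser()\n",
"parser.add_argument('--train_data_paths', help='local or GCS path(s) to training CSVs')\n",
"parser.add_argument('--eval_data_paths', help='local or GCS path(s) to evaluation CSVs')\n",
"parser.add_argument('--output_dir', help='where checkpoints and exported models are written')\n",
"parser.add_argument('--train_steps', type=int, default=1000)\n",
"parser.add_argument('--job-dir', help='passed automatically by Cloud ML Engine')\n",
"\n",
"# task.py would parse sys.argv and pass the result to the model's train-and-evaluate\n",
"# routine; here we parse an empty list so this cell runs standalone.\n",
"args, _ = parser.parse_known_args([])\n",
"print(args)"
]
},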
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Find absolute paths to your data </h2>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the absolute paths below. /content is mapped in Jupyterlab to where the home icon takes you"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"echo $PWD\n",
"rm -rf $PWD/taxi_trained\n",
"cp $PWD/../tensorflow/taxi-train.csv .\n",
"cp $PWD/../tensorflow/taxi-valid.csv .\n",
"head -1 $PWD/taxi-train.csv\n",
"head -1 $PWD/taxi-valid.csv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Running the Python module from the command-line </h2>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"rm -rf taxifare.tar.gz taxi_trained\n",
"export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n",
"python -m trainer.task \\\n",
" --train_data_paths=\"${PWD}/taxi-train*\" \\\n",
" --eval_data_paths=${PWD}/taxi-valid.csv \\\n",
" --output_dir=${PWD}/taxi_trained \\\n",
" --train_steps=1000 --job-dir=./tmp"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"ls $PWD/taxi_trained/export/exporter/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile ./test.json\n",
"{\"pickuplon\": -73.885262,\"pickuplat\": 40.773008,\"dropofflon\": -73.987232,\"dropofflat\": 40.732403,\"passengers\": 2}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## local predict doesn't work with Python 3 yet\n",
"#%bash\n",
"#model_dir=$(ls ${PWD}/taxi_trained/export/exporter)\n",
"#gcloud ai-platform local predict \\\n",
"# --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \\\n",
"# --json-instances=./test.json"
]
},
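{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even without local predict, you can inspect the exported SavedModel's serving signature with ```saved_model_cli``` (a utility that ships with TensorFlow). This is a sketch that assumes the export from the command-line run above is still on disk."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"# Show the inputs and outputs of the exported model's serving signature\n",
"model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)\n",
"saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --tag_set serve --signature_def serving_default"
]
},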
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Running locally using gcloud </h2>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"rm -rf taxifare.tar.gz taxi_trained\n",
"gcloud ai-platform local train \\\n",
" --module-name=trainer.task \\\n",
" --package-path=${PWD}/taxifare/trainer \\\n",
" -- \\\n",
" --train_data_paths=${PWD}/taxi-train.csv \\\n",
" --eval_data_paths=${PWD}/taxi-valid.csv \\\n",
" --train_steps=1000 \\\n",
" --output_dir=${PWD}/taxi_trained "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When I ran it (due to random seeds, your results will be different), the ```average_loss``` (Mean Squared Error) on the evaluation dataset was 187, meaning that the RMSE was around 13."
]
},
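{
"cell_type": "markdown",
"metadata": {},
"source": [
"The RMSE is simply the square root of the reported MSE:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"# an average_loss (MSE) of 187 corresponds to an RMSE of about 13.7\n",
"print(math.sqrt(187))"
]
},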
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls $PWD/taxi_trained"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Submit training job using gcloud </h2>\n",
"\n",
"First copy the training data to the cloud. Then, launch a training job.\n",
"\n",
"After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress. \n",
"\n",
"<b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"echo $BUCKET\n",
"gsutil -m rm -rf gs://${BUCKET}/taxifare/smallinput/\n",
"gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/taxifare/smallinput/"
]
},
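{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, verify that the CSV files were copied to the bucket:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil ls gs://${BUCKET}/taxifare/smallinput/"
]
},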
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"OUTDIR=gs://${BUCKET}/taxifare/smallinput/taxi_trained\n",
"JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)\n",
"echo $OUTDIR $REGION $JOBNAME\n",
"gsutil -m rm -rf $OUTDIR\n",
"gcloud ai-platform jobs submit training $JOBNAME \\\n",
" --region=$REGION \\\n",
" --module-name=trainer.task \\\n",
" --package-path=${PWD}/taxifare/trainer \\\n",
" --job-dir=$OUTDIR \\\n",
" --staging-bucket=gs://$BUCKET \\\n",
" --scale-tier=BASIC \\\n",
" --runtime-version=2.1 \\\n",
" --python-version=3.7 \\\n",
" -- \\\n",
" --train_data_paths=\"gs://${BUCKET}/taxifare/smallinput/taxi-train*\" \\\n",
" --eval_data_paths=\"gs://${BUCKET}/taxifare/smallinput/taxi-valid*\" \\\n",
" --output_dir=$OUTDIR \\\n",
" --train_steps=10000"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. \n",
"\n",
"<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>"
]
},
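{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also check the job's status from the notebook. This is a convenience sketch (not part of the original lab): your job is the most recent entry whose name starts with ```lab3a_```, and its state should eventually become SUCCEEDED."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud ai-platform jobs list --limit=5"
]
},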
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Deploy model </h2>\n",
"\n",
"Find out the actual name of the subdirectory where the model is stored and use it to deploy the model. Deploying model will take up to <b>5 minutes</b>."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil cp -r ${PWD}/taxi_trained gs://${BUCKET}/taxifare/smallinput/ \n",
"gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"MODEL_NAME=\"taxifare\"\n",
"MODEL_VERSION=\"v1\"\n",
"MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/smallinput/taxi_trained/export/exporter | tail -1)\n",
"echo \"Run these commands one-by-one (the very first time, you'll create a model and then create a version)\"\n",
"#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n",
"#gcloud ai-platform models delete ${MODEL_NAME}\n",
"gcloud ai-platform models create ${MODEL_NAME} --regions $REGION\n",
"gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION --region global"
]
},
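{
"cell_type": "markdown",
"metadata": {},
"source": [
"To confirm the deployment, list your models and the versions of ```taxifare``` (a sketch; depending on your gcloud version, you may also need to pass ```--region global``` to these commands)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud ai-platform models list\n",
"gcloud ai-platform versions list --model taxifare"
]
},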
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Prediction </h2>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud ai-platform predict --model=taxifare --version=v1 --json-instances=./test.json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from googleapiclient import discovery\n",
"from oauth2client.client import GoogleCredentials\n",
"import json\n",
"\n",
"credentials = GoogleCredentials.get_application_default()\n",
"api = discovery.build('ml', 'v1', credentials=credentials,\n",
" discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')\n",
"\n",
"request_data = {'instances':\n",
" [\n",
" {\n",
" 'pickuplon': -73.885262,\n",
" 'pickuplat': 40.773008,\n",
" 'dropofflon': -73.987232,\n",
" 'dropofflat': 40.732403,\n",
" 'passengers': 2,\n",
" }\n",
" ]\n",
"}\n",
"\n",
"parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'taxifare', 'v1')\n",
"response = api.projects().predict(body=request_data, name=parent).execute()\n",
"print(\"response={0}\".format(response))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Train on larger dataset </h2>\n",
"\n",
"I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow.\n",
"\n",
"Go to http://bigquery.cloud.google.com/ and type the query:\n",
"<pre>\n",
"SELECT\n",
" (tolls_amount + fare_amount) AS fare_amount,\n",
" pickup_longitude AS pickuplon,\n",
" pickup_latitude AS pickuplat,\n",
" dropoff_longitude AS dropofflon,\n",
" dropoff_latitude AS dropofflat,\n",
" passenger_count*1.0 AS passengers,\n",
" 'nokeyindata' AS key\n",
"FROM\n",
" [nyc-tlc:yellow.trips]\n",
"WHERE\n",
" trip_distance > 0\n",
" AND fare_amount >= 2.5\n",
" AND pickup_longitude > -78\n",
" AND pickup_longitude < -70\n",
" AND dropoff_longitude > -78\n",
" AND dropoff_longitude < -70\n",
" AND pickup_latitude > 37\n",
" AND pickup_latitude < 45\n",
" AND dropoff_latitude > 37\n",
" AND dropoff_latitude < 45\n",
" AND passenger_count > 0\n",
" AND ABS(HASH(pickup_datetime)) % 1000 == 1\n",
"</pre>\n",
"\n",
"Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.):\n",
"<ol>\n",
"<li> Click on the \"Save As Table\" button and note down the name of the dataset and table.\n",
"<li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name.\n",
"<li> Click on \"Export Table\"\n",
"<li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the \"Job History\" on the left-hand-side menu)\n",
"<li> In the query above, change the final \"== 1\" to \"== 2\" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv)\n",
"<li> Download the two files, remove the header line and upload it back to GCS.\n",
"</ol>\n",
"\n",
"<p/>\n",
"<p/>\n",
"\n",
"<h2> Run Cloud training on 1-million row dataset </h2>\n",
"\n",
"This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"\n",
"XXXXX this takes 60 minutes. if you are sure you want to run it, then remove this line.\n",
"\n",
"OUTDIR=gs://${BUCKET}/taxifare/ch3/taxi_trained\n",
"JOBNAME=lab3a_$(date -u +%y%m%d_%H%M%S)\n",
"CRS_BUCKET=cloud-training-demos # use the already exported data\n",
"echo $OUTDIR $REGION $JOBNAME\n",
"gsutil -m rm -rf $OUTDIR\n",
"gcloud ai-platform jobs submit training $JOBNAME \\\n",
" --region=$REGION \\\n",
" --module-name=trainer.task \\\n",
" --package-path=${PWD}/taxifare/trainer \\\n",
" --job-dir=$OUTDIR \\\n",
" --staging-bucket=gs://$BUCKET \\\n",
" --scale-tier=STANDARD_1 \\\n",
" --runtime-version=2.1 \\\n",
" --python-version=3.7 \\\n",
" -- \\\n",
" --train_data_paths=\"gs://${CRS_BUCKET}/taxifare/ch3/train.csv\" \\\n",
" --eval_data_paths=\"gs://${CRS_BUCKET}/taxifare/ch3/valid.csv\" \\\n",
" --output_dir=$OUTDIR \\\n",
" --train_steps=100000"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}