{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1> Feature Engineering </h1>\n",
"\n",
"In this notebook, you will learn how to incorporate feature engineering into your pipeline.\n",
"<ul>\n",
"<li> Working with feature columns </li>\n",
"<li> Adding feature crosses in TensorFlow </li>\n",
"<li> Reading data from BigQuery </li>\n",
"<li> Creating datasets using Dataflow </li>\n",
"<li> Using a wide-and-deep model </li>\n",
"</ul>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Nny3m465gKkY",
"colab_type": "code",
"colab": {}
},
"source": [
"!pip install --user google-cloud-bigquery==1.25.0"
],
"execution_count": null,
"outputs": [],
"source": [
"!pip install --user apache-beam[gcp]==2.61.0 \n",
"!pip install --user protobuf==3.20.0 \n",
"!pip install --user httplib2==0.12.0 "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**NOTE:** In the output of the above cell you may ignore any WARNINGS or ERRORS related to the following: \"apache-beam\", \"pyarrow\", \"tensorflow-transform\", \"tensorflow-model-analysis\", \"tensorflow-data-validation\", \"joblib\", \"google-cloud-storage\" etc."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you get any related errors mentioned above please rerun the above cell."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: Restart your kernel to use updated packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.6.0\n"
]
}
],
"source": [
"import tensorflow as tf\n",
"import apache_beam as beam\n",
"import shutil\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> 1. Environment variables for project and bucket </h2>\n",
"\n",
"1. Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos \n",
"2. Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available) \n",
"<b>Change the cell below</b> to reflect your Project ID and bucket name.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"PROJECT = 'cloud-training-demos' # CHANGE THIS\n",
"BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.\n",
"REGION = 'us-central1' # Choose an available region for Cloud AI Platform"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# for bash\n",
"os.environ['PROJECT'] = PROJECT\n",
"os.environ['BUCKET'] = BUCKET\n",
"os.environ['REGION'] = REGION\n",
"os.environ['TFVERSION'] = '2.11' \n",
"\n",
"## ensure we're using python3 env\n",
"os.environ['CLOUDSDK_PYTHON'] = 'python3.10'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud config set project $PROJECT\n",
"gcloud config set compute/region $REGION\n",
"\n",
"## ensure we predict locally with our current Python environment\n",
"gcloud config set ml_engine/local_python `which python`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> 2. Specifying query to pull the data </h2>\n",
"\n",
"Let's pull out a few extra columns from the timestamp."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_query(phase, EVERY_N):\n",
" if EVERY_N == None:\n",
" EVERY_N = 4 #use full dataset\n",
" \n",
" #select and pre-process fields\n",
" base_query = \"\"\"\n",
"#legacySQL\n",
"SELECT\n",
" (tolls_amount + fare_amount) AS fare_amount,\n",
" DAYOFWEEK(pickup_datetime) AS dayofweek,\n",
" HOUR(pickup_datetime) AS hourofday,\n",
" pickup_longitude AS pickuplon,\n",
" pickup_latitude AS pickuplat,\n",
" dropoff_longitude AS dropofflon,\n",
" dropoff_latitude AS dropofflat,\n",
" passenger_count*1.0 AS passengers,\n",
" CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key\n",
"FROM\n",
" [nyc-tlc:yellow.trips]\n",
"WHERE\n",
" trip_distance > 0\n",
" AND fare_amount >= 2.5\n",
" AND pickup_longitude > -78\n",
" AND pickup_longitude < -70\n",
" AND dropoff_longitude > -78\n",
" AND dropoff_longitude < -70\n",
" AND pickup_latitude > 37\n",
" AND pickup_latitude < 45\n",
" AND dropoff_latitude > 37\n",
" AND dropoff_latitude < 45\n",
" AND passenger_count > 0\n",
" \"\"\"\n",
" \n",
" #add subsampling criteria by modding with hashkey\n",
" if phase == 'train': \n",
" query = \"{} AND ABS(HASH(pickup_datetime)) % {} < 2\".format(base_query,EVERY_N)\n",
" elif phase == 'valid': \n",
" query = \"{} AND ABS(HASH(pickup_datetime)) % {} == 2\".format(base_query,EVERY_N)\n",
" elif phase == 'test':\n",
" query = \"{} AND ABS(HASH(pickup_datetime)) % {} == 3\".format(base_query,EVERY_N)\n",
" return query\n",
" \n",
"print(create_query('valid', 100)) #example query using 1% of data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)"
]
},
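{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd rather stay in the notebook, the cell below is a minimal sketch that uses the `google-cloud-bigquery` client installed earlier; it assumes your credentials can run BigQuery jobs in this project, and it simply appends a `LIMIT 10` so the query returns quickly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check (sketch): run the validation query with a LIMIT so it returns quickly.\n",
"from google.cloud import bigquery\n",
"\n",
"bq = bigquery.Client(project=PROJECT)\n",
"sample_sql = create_query('valid', 100) + ' LIMIT 10'\n",
"# The query is written in legacy SQL (note the #legacySQL header), so say so explicitly.\n",
"job_config = bigquery.QueryJobConfig(use_legacy_sql=True)\n",
"bq.query(sample_sql, job_config=job_config).to_dataframe()"
]
},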
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> 3. Preprocessing Dataflow job from BigQuery </h2>\n",
"\n",
"This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that prepreprocessing during inference. It is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.\n",
"\n",
"While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.\n",
"\n",
"Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then\n",
" gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's define a function for preprocessing the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import datetime\n",
"\n",
"####\n",
"# Arguments:\n",
"# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in\n",
"# which each row is represented as a python dictionary\n",
"# Returns:\n",
"# -rowstring: a comma separated string representation of the record with dayofweek\n",
"# converted from int to string (e.g. 3 --> Tue)\n",
"####\n",
"def to_csv(rowdict):\n",
" days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\n",
" CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')\n",
" rowdict['dayofweek'] = days[rowdict['dayofweek']]\n",
" rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])\n",
" return rowstring\n",
"\n",
"\n",
"####\n",
"# Arguments:\n",
"# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.\n",
"# Larger values will yield smaller sample\n",
"# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specify to run the pipeline\n",
"# locally or on Google Cloud respectively. \n",
"# Side-effects:\n",
"# -Creates and executes dataflow pipeline. \n",
"# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline\n",
"####\n",
"def preprocess(EVERY_N, RUNNER):\n",
" job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')\n",
" print('Launching Dataflow job {} ... hang on'.format(job_name))\n",
" OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)\n",
"\n",
" #dictionary of pipeline options\n",
" options = {\n",
" 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),\n",
" 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),\n",
" 'job_name': 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S'),\n",
" 'project': PROJECT,\n",
" 'runner': RUNNER,\n",
" 'num_workers' : 4,\n",
" 'max_num_workers' : 5\n",
" }\n",
" #instantiate PipelineOptions object using options dictionary\n",
" opts = beam.pipeline.PipelineOptions(flags=[], **options)\n",
" #instantantiate Pipeline object using PipelineOptions\n",
" with beam.Pipeline(options=opts) as p:\n",
" for phase in ['train', 'valid']:\n",
" query = create_query(phase, EVERY_N) \n",
" outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))\n",
" (\n",
" p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))\n",
" | 'tocsv_{}'.format(phase) >> beam.Map(to_csv)\n",
" | 'write_{}'.format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))\n",
" )\n",
" print(\"Done\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's run pipeline locally. This takes upto <b>5 minutes</b>. You will see a message \"Done\" when it is done."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Launching Dataflow job preprocess-taxifeatures-201007-104302 ... hang on\n",
"Done\n"
]
}
],
"source": [
"preprocess(50*10000, 'DirectRunner') "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/train.csv-00000-of-00005\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/train.csv-00001-of-00005\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/train.csv-00002-of-00005\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/train.csv-00003-of-00005\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/train.csv-00004-of-00005\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/valid.csv-00000-of-00002\n",
"gs://qwiklabs-gcp-03-6f3d948638d3/taxifare/ch4/taxi_preproc/valid.csv-00001-of-00002\n"
]
}
],
"source": [
"%%bash\n",
"gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Run Beam pipeline on Cloud Dataflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run pipeline on cloud on a larger sample size."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then\n",
" gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following step will take <b>10-15 minutes.</b> Monitor job progress on the [Cloud Console in the Dataflow](https://console.cloud.google.com/dataflow) section.\n",
"__Note__: If the error occurred regarding enabling of `Dataflow API` then disable and re-enable the `Dataflow API` and re-run the below cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Launching Dataflow job preprocess-taxifeatures-201007-104302 ... hang on\n",
"Done\n"
]
}
],
"source": [
"preprocess(50*100, 'DataflowRunner') \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the job completes, observe the files created in Google Cloud Storage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"49169783 2020-10-07T11:11:48Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00000-of-00001\n",
" 116320 2020-10-07T11:06:21Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00000-of-00005\n",
" 108804 2020-10-07T11:06:21Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00001-of-00005\n",
" 116799 2020-10-07T11:06:21Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00002-of-00005\n",
" 114697 2020-10-07T11:06:21Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00003-of-00005\n",
" 116173 2020-10-07T11:06:21Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/train.csv-00004-of-00005\n",
" 24666705 2020-10-07T11:12:01Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/valid.csv-00000-of-00001\n",
" 114332 2020-10-07T11:06:20Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/valid.csv-00000-of-00002\n",
" 107689 2020-10-07T11:06:20Z gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/valid.csv-00001-of-00002\n",
" gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_preproc/tmp/\n",
"TOTAL: 9 objects, 74631302 bytes (71.17 MiB)\n"
]
}
],
"source": [
"%%bash\n",
"gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"11.3,Fri,13,-73.975687,40.76003,-73.999972,40.762227,2.0,2009-09-11 13:56:00.000000-73.975740.7640.7622-74\n",
"6.1,Wed,12,-73.979987,40.757182,-73.999998,40.748792,2.0,2009-12-16 12:26:39.000000-73.9840.757240.7488-74\n",
"21.3,Tue,17,-74.00001,40.72167,-73.933223,40.679975,2.0,2012-05-15 17:38:00.000000-7440.721740.68-73.9332\n",
"6.5,Sat,15,-73.992025,40.725997,-74.000012,40.72185,2.0,2012-08-18 15:13:00.000000-73.99240.72640.7219-74\n",
"7.0,Sun,21,-73.991645,40.74996,-74.000044,40.730599,2.0,2014-04-06 21:49:00.000000-73.991640.7540.7306-74\n",
"14.0,Tue,22,-73.9999771118164,40.72161102294922,-73.95999145507812,40.76264572143555,2.0,2015-06-09 \n",
"22:19:48.000000-7440.721640.7626-73.96\n",
"10.9,Tue,14,-73.999958,40.730595,-73.980769,40.763996,2.0,2009-01-13 14:35:50.000000-7440.730640.764-73.9808\n",
"10.9,Sat,18,-73.999975,40.717953,-73.99127,40.750233,2.0,2009-04-18 18:31:00.000000-7440.71840.7502-73.9913\n",
"5.7,Thu,1,-73.99998,40.738628,-73.981518,40.741005,2.0,2009-09-10 01:02:14.000000-7440.738640.741-73.9815\n",
"6.1,Sat,1,-73.999968,40.734777,-74.002785,40.752014,2.0,2010-02-20 01:25:52.000000-7440.734840.752-74.0028\n"
]
}
],
"source": [
"%%bash\n",
"#print first 10 lines of first shard of train.csv\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*\" | head"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Develop model with new inputs\n",
"\n",
"Download the first shard of the preprocessed data to enable local development."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if [ -d sample ]; then\n",
" rm -rf sample\n",
"fi\n",
"mkdir sample\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*\" > sample/train.csv\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*\" > sample/valid.csv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"grep -A 20 \"INPUT_COLUMNS =\" taxifare/trainer/model.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"grep -A 50 \"build_estimator\" taxifare/trainer/model.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"grep -A 15 \"add_engineered(\" taxifare/trainer/model.py"
]
},
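{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `grep` output above shows the actual definitions. For orientation only, the cell below is a condensed, hedged sketch of the same ideas: engineered distance features plus bucketized and crossed coordinate columns. The bucket boundaries, helper names, and hash sizes are illustrative assumptions, not necessarily what `taxifare/trainer/model.py` uses."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Illustrative sketch only -- the real definitions live in taxifare/trainer/model.py.\n",
"def add_engineered_sketch(features):\n",
"    # Distance-like features derived from the raw pickup/dropoff coordinates.\n",
"    features['latdiff'] = features['pickuplat'] - features['dropofflat']\n",
"    features['londiff'] = features['pickuplon'] - features['dropofflon']\n",
"    features['euclidean'] = tf.sqrt(features['latdiff']**2 + features['londiff']**2)\n",
"    return features\n",
"\n",
"# Bucketize latitude/longitude into a coarse grid, then cross the buckets so the wide\n",
"# part of the model can learn per-cell effects alongside the dense (deep) features.\n",
"latbuckets = np.linspace(38.0, 42.0, 16).tolist()\n",
"lonbuckets = np.linspace(-76.0, -72.0, 16).tolist()\n",
"b_plat = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('pickuplat'), latbuckets)\n",
"b_plon = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('pickuplon'), lonbuckets)\n",
"ploc = tf.feature_column.crossed_column([b_plat, b_plon], hash_bucket_size=16 * 16)\n",
"print(ploc)"
]
},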
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Using config:\n",
"..................................................................................................\n",
"..................................................................................................\n",
"..................................................................................................\n",
"INFO:tensorflow:Loss for final step: 184.59918.\n"
]
}
],
"source": [
"%%bash\n",
"rm -rf taxifare.tar.gz taxi_trained\n",
"export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n",
"python -m trainer.task \\\n",
" --train_data_paths=${PWD}/sample/train.csv \\\n",
" --eval_data_paths=${PWD}/sample/valid.csv \\\n",
" --output_dir=${PWD}/taxi_trained \\\n",
" --train_steps=10 \\\n",
" --job-dir=/tmp"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1602060307\n"
]
}
],
"source": [
"%%bash\n",
"ls taxi_trained/export/exporter/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use ```saved_model_cli``` to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the ```add_engineered``` call in the serving_input_fn."
]
},
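{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of that pattern (the real `serving_input_fn` lives in `taxifare/trainer/model.py`), a serving input function can declare placeholders for only the raw fields and then apply the engineered-feature logic to them, so the exported model computes `latdiff`, `londiff`, and `euclidean` itself at serving time. The placeholder names and dtypes below are assumptions based on the CSV schema used in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch of a serving_input_fn that reuses the engineered-feature logic.\n",
"# add_engineered_sketch is the illustrative helper defined a few cells above.\n",
"def serving_input_fn_sketch():\n",
"    raw_placeholders = {\n",
"        'dayofweek': tf.compat.v1.placeholder(tf.string, [None]),\n",
"        'hourofday': tf.compat.v1.placeholder(tf.int32, [None]),\n",
"        'pickuplon': tf.compat.v1.placeholder(tf.float32, [None]),\n",
"        'pickuplat': tf.compat.v1.placeholder(tf.float32, [None]),\n",
"        'dropofflon': tf.compat.v1.placeholder(tf.float32, [None]),\n",
"        'dropofflat': tf.compat.v1.placeholder(tf.float32, [None]),\n",
"        'passengers': tf.compat.v1.placeholder(tf.float32, [None]),\n",
"    }\n",
"    features = add_engineered_sketch(dict(raw_placeholders))\n",
"    return tf.estimator.export.ServingInputReceiver(features, raw_placeholders)"
]
},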
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)\n",
"saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Writing /tmp/test.json\n"
]
}
],
"source": [
"%%writefile /tmp/test.json\n",
"{\"dayofweek\": \"Sun\", \"hourofday\": 17, \"pickuplon\": -73.885262, \"pickuplat\": 40.773008, \"dropofflon\": -73.987232, \"dropofflat\": 40.732403, \"passengers\": 2}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"PREDICTIONS\n",
"[2.600170373916626]\n"
]
}
],
"source": [
"%%bash\n",
"model_dir=$(ls ${PWD}/taxi_trained/export/exporter)\n",
"gcloud ai-platform local predict \\\n",
" --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \\\n",
" --json-instances=/tmp/test.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Train on cloud\n",
"\n",
"This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the [Cloud Console, in the AI Platform](https://console.cloud.google.com/ai-platform/jobs) section and wait for the training job to complete.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text":[
"gs://qwiklabs-gcp-c8b7c0b514e76634/taxifare/ch4/taxi_trained us-central1 lab4a_201007_111601\n",
"jobId: lab4a_201007_111601 \n ",
"state: QUEUED\n",
"CommandException: 1 files/objects could not be removed. \n",
"Job [lab4a_201007_111601] submitted successfully.\n",
"Your job is still active. You may view the status of your job with the command \n",
" $ gcloud ai-platform jobs describe lab4a_201007_111601 \n",
"or continue streaming the logs with the command \n",
" $ gcloud ai-platform jobs stream-logs lab4a_201007_111601"
]
}
],
"source": [
"%%bash\n",
"OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained\n",
"JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)\n",
"echo $OUTDIR $REGION $JOBNAME\n",
"gsutil -m rm -rf $OUTDIR\n",
"gcloud ai-platform jobs submit training $JOBNAME \\\n",
" --region=$REGION \\\n",
" --module-name=trainer.task \\\n",
" --package-path=${PWD}/taxifare/trainer \\\n",
" --job-dir=$OUTDIR \\\n",
" --staging-bucket=gs://$BUCKET \\\n",
" --scale-tier=BASIC \\\n",
" --runtime-version 2.3 \\\n",
" --python-version 3.5 \\\n",
" -- \\\n",
" --train_data_paths=\"gs://$BUCKET/taxifare/ch4/taxi_preproc/train*\" \\\n",
" --eval_data_paths=\"gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*\" \\\n",
" --train_steps=5000 \\\n",
" --output_dir=$OUTDIR"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.\n",
"\n",
"<b>Use the Cloud Console link to monitor the job and wait till the job is done.</b>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}