{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Preprocessing for Machine Learning\n",
"\n",
"**Learning Objectives**\n",
"* Understand the different approaches for data preprocessing in developing ML models\n",
"* Use Dataflow to perform data preprocessing steps\n",
"\n",
"## Introduction\n",
"\n",
"In the previous notebook we achieved an RMSE of **3.85**. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.\n",
"\n",
"Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.\n",
"\n",
"One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations:\n",
"1. Instance-level transformations\n",
"2. Full-pass transformations\n",
"3. Time-windowed aggregations\n",
"\n",
"Cloud Dataflow can perform each of these types of operations and is particularly useful when performing computationally expensive operations as it is an autoscaling service for batch and streaming data processing pipelines. We'll say a few words about each of these below. For more information, have a look at this article about [data preprocessing for machine learning from Google Cloud](https://cloud.google.com/solutions/machine-learning/data-preprocessing-for-ml-with-tf-transform-pt1).\n",
"\n",
"**1. Instance-level transformations**\n",
"These are transformations which take place during training and prediction, looking only at values from a single data point. For example, they might include clipping the value of a feature, polynomially expand a feature, multiply two features, or compare two features to create a Boolean flag.\n",
"\n",
"It is necessary to apply the same transformations at training time and at prediction time. Failure to do this results in training/serving skew and will negatively affect the performance of the model.\n",
"\n",
"**2. Full-pass transformations**\n",
"These transformations occur during training, but occur as instance-level operations during prediction. That is, during training you must analyze the entirety of the training data to compute quantities such as maximum, minimum, mean or variance while at prediction time you need only use those values to rescale or normalize a single data point. \n",
"\n",
"A good example to keep in mind is standard scaling (z-score normalization) of features for training. You need to compute the mean and standard deviation of that feature across the whole training data set, thus it is called a full-pass transformation. At prediction time you use those previously computed values to appropriately normalize the new data point. Failure to do so results in training/serving skew.\n",
"\n",
"**3. Time-windowed aggregations**\n",
"These types of transformations occur during training and at prediction time. They involve creating a feature by summarizing real-time values by aggregating over some temporal window clause. For example, if we wanted our model to estimate the taxi trip time based on the traffic metrics for the route in the last 5 minutes, in the last 10 minutes or the last 30 minutes we would want to create a time-window to aggreagate these values. \n",
"\n",
"At prediction time these aggregations have to be computed in real-time from a data stream."
]
},
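{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the distinction concrete, here is a minimal sketch (separate from this lab's pipeline, assuming only NumPy is available) contrasting an instance-level transformation with a full-pass transformation. The feature values below are made up purely for illustration; the point is that the full-pass statistics are computed once over the training data and then reused, unchanged, on each new data point at prediction time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Hypothetical training values for a single numeric feature (illustration only)\n",
"train_values = np.array([12.0, 7.5, 30.2, 3.3, 18.9])\n",
"\n",
"# Instance-level transformation: needs only the single value being transformed\n",
"def clip_fare(value, max_value=100.0):\n",
"    return min(value, max_value)\n",
"\n",
"# Full-pass transformation: statistics require a pass over ALL training data ...\n",
"train_mean = train_values.mean()\n",
"train_std = train_values.std()\n",
"\n",
"def standardize(value, mean=train_mean, std=train_std):\n",
"    # ... but at prediction time we only apply the stored statistics to one value\n",
"    return (value - mean) / std\n",
"\n",
"print(clip_fare(250.0))              # instance-level\n",
"print(standardize(train_values[0]))  # full-pass statistics reused per instance"
]
},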
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set environment variables and load necessary libraries\n",
"\n",
"Apache Beam only works in Python 2 at the moment, so switch to the Python 2 kernel in the upper right hand side. Then execute the following cells to install the necessary libraries if they have not been installed already."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Ensure that we have the correct version of Apache Beam installed\n",
"!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After installation, restart the kernel by selecting **Kernel** > **Restart Kernel**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import apache_beam as beam\n",
"import shutil\n",
"import os\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, set the environment variables related to your GCP Project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PROJECT = \"cloud-training-demos\" # Replace with your PROJECT\n",
"BUCKET = \"cloud-training-bucket\" # Replace with your BUCKET\n",
"REGION = \"us-central1\" # Choose an available region for Cloud MLE\n",
"TFVERSION = \"1.13\" # TF version for CMLE to use"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"PROJECT\"] = PROJECT\n",
"os.environ[\"BUCKET\"] = BUCKET\n",
"os.environ[\"REGION\"] = REGION\n",
"os.environ[\"TFVERSION\"] = TFVERSION"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gcloud config set project $PROJECT\n",
"gcloud config set compute/region $REGION\n",
"\n",
"## ensure we predict locally with our current Python environment\n",
"gcloud config set ml_engine/local_python `which python`"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create data preprocessing job with Cloud Dataflow\n",
"\n",
"The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case we'd have to remember to repeat that prepreprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.\n",
"\n",
"It's worth noting that while we could read from [BQ directly from TensorFlow](https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the [GCP Console](https://console.cloud.google.com/dataflow) to view the status of the job. It will take several minutes for the preprocessing job to launch."
]
},
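{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough preview (tf.transform is covered properly in a later notebook), a minimal `preprocessing_fn` might look like the sketch below. The feature name is just an assumption for illustration; the point is that `tft.scale_to_z_score` is a full-pass transformation whose statistics tf.transform records for you, so the same scaling is applied at serving time without extra bookkeeping on your part.\n",
"\n",
"```python\n",
"# Sketch only -- not part of this lab's executable flow\n",
"import tensorflow_transform as tft\n",
"\n",
"def preprocessing_fn(inputs):\n",
"    outputs = dict(inputs)\n",
"    # Full-pass: tf.transform analyzes the training data for mean/stddev,\n",
"    # then applies the same scaling at prediction time\n",
"    outputs[\"pickuplat_scaled\"] = tft.scale_to_z_score(inputs[\"pickuplat\"])\n",
"    return outputs\n",
"```"
]
},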
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define our query and pipeline functions\n",
"\n",
"To start we'll copy over the `create_query` function we created in the `01_bigquery/c_extract_and_benchmark` notebook. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def create_query(phase, sample_size):\n",
" basequery = \"\"\"\n",
" SELECT\n",
" (tolls_amount + fare_amount) AS fare_amount,\n",
" EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,\n",
" EXTRACT(HOUR from pickup_datetime) AS hourofday,\n",
" pickup_longitude AS pickuplon,\n",
" pickup_latitude AS pickuplat,\n",
" dropoff_longitude AS dropofflon,\n",
" dropoff_latitude AS dropofflat\n",
" FROM\n",
" `nyc-tlc.yellow.trips`\n",
" WHERE\n",
" trip_distance > 0\n",
" AND fare_amount >= 2.5\n",
" AND pickup_longitude > -78\n",
" AND pickup_longitude < -70\n",
" AND dropoff_longitude > -78\n",
" AND dropoff_longitude < -70\n",
" AND pickup_latitude > 37\n",
" AND pickup_latitude < 45\n",
" AND dropoff_latitude > 37\n",
" AND dropoff_latitude < 45\n",
" AND passenger_count > 0\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1\n",
" \"\"\"\n",
"\n",
" if phase == 'TRAIN':\n",
" subsample = \"\"\"\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)\n",
" \"\"\"\n",
" elif phase == 'VALID':\n",
" subsample = \"\"\"\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)\n",
" \"\"\"\n",
" elif phase == 'TEST':\n",
" subsample = \"\"\"\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)\n",
" AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)\n",
" \"\"\"\n",
"\n",
" query = basequery + subsample\n",
" return query.replace(\"EVERY_N\", sample_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, we'll write the csv we create to a Cloud Storage bucket. So, we'll look to see that the location is empty, and if not clear out its contents so that it is."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then\n",
" gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a `to_csv` function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma separated string for each record"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def to_csv(rowdict):\n",
" \"\"\"\n",
" Arguments:\n",
" -rowdict: Dictionary. The beam bigquery reader returns a PCollection in\n",
" which each row is represented as a python dictionary\n",
" Returns:\n",
" -rowstring: a comma separated string representation of the record\n",
" \"\"\"\n",
" days = [\"null\", \"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\"]\n",
" CSV_COLUMNS = \"fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat\".split(',')\n",
" rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])\n",
" return rowstring"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our `to_csv` function above to make a comma separated string, then write to a file in Google Cloud Storage. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import datetime\n",
"\n",
"def preprocess(EVERY_N, RUNNER):\n",
" \"\"\"\n",
" Arguments:\n",
" -EVERY_N: Integer. Sample one out of every N rows from the full dataset.\n",
" Larger values will yield smaller sample\n",
" -RUNNER: \"DirectRunner\" or \"DataflowRunner\". Specfy to run the pipeline\n",
" locally or on Google Cloud respectively. \n",
" Side-effects:\n",
" -Creates and executes dataflow pipeline. \n",
" See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline\n",
" \"\"\"\n",
" job_name = \"preprocess-taxifeatures\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\")\n",
" print(\"Launching Dataflow job {} ... hang on\".format(job_name))\n",
" OUTPUT_DIR = \"gs://{0}/taxifare/ch4/taxi_preproc/\".format(BUCKET)\n",
"\n",
" #dictionary of pipeline options\n",
" options = {\n",
" \"staging_location\": os.path.join(OUTPUT_DIR, \"tmp\", \"staging\"),\n",
" \"temp_location\": os.path.join(OUTPUT_DIR, \"tmp\"),\n",
" \"job_name\": \"preprocess-taxifeatures\" + \"-\" + datetime.datetime.now().strftime(\"%y%m%d-%H%M%S\"),\n",
" \"project\": PROJECT,\n",
" \"runner\": RUNNER\n",
" }\n",
" \n",
" #instantiate PipelineOptions object using options dictionary\n",
" opts = beam.pipeline.PipelineOptions(flags = [], **options)\n",
"\n",
" #instantantiate Pipeline object using PipelineOptions\n",
" with beam.Pipeline(options=opts) as p:\n",
" for phase in [\"TRAIN\", \"VALID\", \"TEST\"]:\n",
" query = create_query(phase, EVERY_N)\n",
" outfile = os.path.join(OUTPUT_DIR, \"{}.csv\".format(phase))\n",
" (\n",
" p | \"read_{}\".format(phase) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))\n",
" | \"tocsv_{}\".format(phase) >> beam.Map(to_csv)\n",
" | \"write_{}\".format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))\n",
" )\n",
" print(\"Done\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the `RUNNER` variable as `DirectRunner`. To run our pipeline in the cloud, we set `RUNNER` to be `DataflowRunner`. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline. \n",
"\n",
"As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run Beam pipeline locally\n",
"\n",
"We'll start by testing our pipeline locally. This takes upto 5 minutes. You will see a message \"Done\" when it has finished."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"preprocess(\"50*10000\", \"DirectRunner\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run Beam pipeline on Cloud Dataflow¶\n",
"\n",
"Again, we'll clear out our bucket to GCS to ensure a fresh run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then\n",
" gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following step will take **15-20 minutes**. Monitor job progress on the Dataflow section of [GCP Console](https://console.cloud.google.com/dataflow). Note, you can change the first arugment to \"None\" to process the full dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"preprocess(\"50*100\", \"DataflowRunner\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** If your Dataflow job gets failed then re-run the above cell."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*\" | head"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Develop a model with new inputs\n",
"\n",
"We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called `sample` so we can develop locally first. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if [ -d sample ]; then\n",
" rm -rf sample\n",
"fi\n",
"mkdir sample\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*\" > sample/train.csv\n",
"gsutil cat \"gs://$BUCKET/taxifare/ch4/taxi_preproc/VALID.csv-00000-of-*\" > sample/valid.csv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To begin let's copy the `model.py` and `task.py` we developed in the previous notebooks here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"\n",
"MODELDIR=./taxifaremodel\n",
"\n",
"test -d $MODELDIR || mkdir $MODELDIR\n",
"cp -r ../../03_model_performance/labs/taxifaremodel/* $MODELDIR"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's have a look at the files contained within the `taxifaremodel` folder. Within `model.py` we see that `feature_cols` has three engineered features. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"grep -A 15 \"feature_cols =\" taxifaremodel/model.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also see the engineered features that are created by the `add_engineered_features` function here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"grep -A 5 \"add_engineered_features\" taxifaremodel/model.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about **5 minutes** to complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"rm -rf taxifare.tar.gz taxi_trained\n",
"export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare\n",
"python -m taxifaremodel.task \\\n",
" --train_data_path=${PWD}/sample/train.csv \\\n",
" --eval_data_path=${PWD}/sample/valid.csv \\\n",
" --output_dir=${PWD}/taxi_trained \\\n",
" --train_steps=10 \\\n",
" --job-dir=/tmp"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"ls -R taxi_trained/export"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use `saved_model_cli` to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)\n",
"saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test out prediciton with out model, we create a temporary json file containing the expected feature values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile /tmp/test.json\n",
"{\"dayofweek\": 0, \"hourofday\": 17, \"pickuplon\": -73.885262, \"pickuplat\": 40.773008, \"dropofflon\": -73.987232, \"dropofflat\": 40.732403}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"model_dir=$(ls ${PWD}/taxi_trained/export/exporter)\n",
"gcloud ml-engine local predict \\\n",
" --model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \\\n",
" --json-instances=/tmp/test.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train on the Cloud\n",
"\n",
"This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the [ML Engine section of Cloud Console](https://console.cloud.google.com/mlengine/jobs) and wait for the training job to complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained\n",
"JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)\n",
"echo $OUTDIR $REGION $JOBNAME\n",
"gsutil -m rm -rf $OUTDIR\n",
"gcloud ml-engine jobs submit training $JOBNAME \\\n",
" --region=$REGION \\\n",
" --module-name=taxifaremodel.task \\\n",
" --package-path=${PWD}/taxifaremodel \\\n",
" --job-dir=$OUTDIR \\\n",
" --staging-bucket=gs://$BUCKET \\\n",
" --scale-tier=BASIC \\\n",
" --runtime-version=$TFVERSION \\\n",
" -- \\\n",
" --train_data_path=\"gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*\" \\\n",
" --eval_data_path=\"gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*\" \\\n",
" --train_steps=5000 \\\n",
" --output_dir=$OUTDIR"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we can use the `saved_model_cli` to examine the exported signature."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)\n",
"saved_model_cli show --dir ${model_dir} --all"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And check out model's prediction with a local predict job on our test file. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)\n",
"gcloud ml-engine local predict \\\n",
" --model-dir=${model_dir} \\\n",
" --json-instances=/tmp/test.json"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hyperparameter tuning\n",
"\n",
"Recall the [hyper-parameter tuning notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/asl/courses/machine_learning/deepdive/03_model_performance/b_hyperparameter_tuning.ipynb). We can repeat the process there to decide the best parameters to use for model. Based on that run, I ended up choosing:\n",
"\n",
"- train_batch_size: 512\n",
"- hidden_units: \"64 64 64 8\"\n",
"\n",
"Let's now try a training job over a larger dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## (Optional) Run Cloud training on 2 million row dataset\n",
"\n",
"This run uses as input 2 million rows and takes ~20 minutes with 10 workers (STANDARD_1 pricing tier). The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). Because the Dataflow preprocessing takes about 15 minutes, we train here using csv files in a public bucket.\n",
"\n",
"When doing distributed training, use train_steps instead of num_epochs. The distributed workers don't know how many rows there are, but we can calculate train_steps = num_rows * num_epochs / train_batch_size. In this case, we have 2141023 * 100 / 512 = 418168 train steps."
]
},
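{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on that arithmetic, the cell below simply recomputes the step count from the row count, epoch count, and batch size stated above (it does not touch any data)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Recompute train_steps = num_rows * num_epochs / train_batch_size\n",
"num_rows = 2141023        # rows in the 2-million-row training set (as stated above)\n",
"num_epochs = 100\n",
"train_batch_size = 512\n",
"\n",
"train_steps = num_rows * num_epochs // train_batch_size\n",
"print(train_steps)        # approximately 418168"
]
},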
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then\n",
" gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/\n",
"fi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preprocess the entire dataset \n",
"preprocess(None, \"DataflowRunner\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"\n",
"WARNING -- this uses significant resources and is optional. Remove this line to run the block.\n",
"\n",
"OUTDIR=gs://${BUCKET}/taxifare/feateng2m\n",
"JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)\n",
"echo $OUTDIR $REGION $JOBNAME\n",
"TIER=STANDARD_1 \n",
"gsutil -m rm -rf $OUTDIR\n",
"gcloud ml-engine jobs submit training $JOBNAME \\\n",
" --region=$REGION \\\n",
" --module-name=taxifaremodel.task \\\n",
" --package-path=${PWD}/taxifaremodel \\\n",
" --job-dir=$OUTDIR \\\n",
" --staging-bucket=gs://$BUCKET \\\n",
" --scale-tier=$TIER \\\n",
" --runtime-version=$TFVERSION \\\n",
" -- \\\n",
" --train_data_path=\"gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*\" \\\n",
" --eval_data_path=\"gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*\" \\\n",
" --output_dir=$OUTDIR \\\n",
" --train_steps=418168 \\\n",
" --hidden_units=\"64,64,64,8\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright 2022 Google Inc.\n",
"Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\n",
"http://www.apache.org/licenses/LICENSE-2.0\n",
"Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}