{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1> Preprocessing using Dataflow </h1>\n",
"\n",
"This notebook illustrates:\n",
"<ol>\n",
"<li> Creating datasets for Machine Learning using Dataflow\n",
"</ol>\n",
"<p>\n",
"While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"pip install --user apache-beam[gcp]==2.16.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the command again if you are getting oauth2client error.\n",
"\n",
"Note: You may ignore the following responses in the cell output above:\n",
"\n",
"ERROR (in Red text) related to: witwidget-gpu, fairing\n",
"\n",
"WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client\n",
"\n",
"<b>Restart</b> the kernel before proceeding further.\n",
"\n",
"Make sure the Dataflow API is enabled by going to this [link](https://console.developers.google.com/apis/api/dataflow.googleapis.com). Ensure that you've installed Beam by importing it and printing the version number."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Ensure the right version of Tensorflow is installed.\n",
"!pip freeze | grep tensorflow==2.1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import apache_beam as beam\n",
"print(beam.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may receive a `UserWarning` about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# change these to try this notebook out\n",
"BUCKET = 'cloud-training-demos-ml'\n",
"PROJECT = 'cloud-training-demos'\n",
"REGION = 'us-central1'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ['BUCKET'] = BUCKET\n",
"os.environ['PROJECT'] = PROJECT\n",
"os.environ['REGION'] = REGION"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"if ! gsutil ls | grep -q gs://${BUCKET}/; then\n",
" gsutil mb -l ${REGION} gs://${BUCKET}\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Save the query from earlier </h2>\n",
"\n",
"The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that."
]
},
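{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `hashmonth` column is what makes the train/eval split repeatable: every row from the same year-month hashes to the same value, so a given month never drifts between splits across runs. The cell below is a minimal sketch of that idea in plain Python; it uses `hashlib` as a stand-in for BigQuery's `FARM_FINGERPRINT` (so the hash values differ), but the splitting logic is the same."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of hash-based splitting (hashlib stands in for FARM_FINGERPRINT).\n",
"# Rows from the same year-month always land in the same split, run after run.\n",
"import hashlib\n",
"\n",
"def assign_split(year, month, eval_bucket = 3, num_buckets = 4):\n",
"  h = int(hashlib.sha256('{}{}'.format(year, month).encode('utf-8')).hexdigest(), 16)\n",
"  return 'eval' if h % num_buckets == eval_bucket else 'train'\n",
"\n",
"for ym in [(2001, 1), (2001, 2), (2005, 12)]:\n",
"  print(ym, assign_split(*ym))"
]
},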
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create SQL query using natality data after the year 2000\n",
"query = \"\"\"\n",
"SELECT\n",
" weight_pounds,\n",
" is_male,\n",
" mother_age,\n",
" plurality,\n",
" gestation_weeks,\n",
" FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\n",
"FROM\n",
" publicdata.samples.natality\n",
"WHERE year > 2000\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Call BigQuery and examine in dataframe\n",
"from google.cloud import bigquery\n",
"df = bigquery.Client().query(query + \" LIMIT 100\").to_dataframe()\n",
"df.head()"
]
},
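{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can apply the same modulo-4 condition that the pipeline below will use to this 100-row sample. Roughly three quarters of the hashed months should fall into the training split, though the exact fraction on such a small sample will be noisy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mirror the pipeline's split condition: ABS(MOD(hashmonth, 4)) < 3 selects training rows.\n",
"# This is only a rough check on the 100-row sample pulled above.\n",
"is_train = df['hashmonth'].apply(lambda h: abs(h) % 4 < 3)\n",
"print('Training fraction in this sample: {:.2f}'.format(is_train.mean()))"
]
},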
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2> Create ML dataset using Dataflow </h2>\n",
"Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.\n",
"\n",
"Instead of using Beam/Dataflow, I had three other options:\n",
"\n",
"* Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!\n",
"* Read from BigQuery directly using TensorFlow.\n",
"* Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to \"allow large results\" and save the result into a CSV file on Google Cloud Storage. \n",
"\n",
"<p>\n",
"\n",
"However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.\n",
"\n",
"Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.\n",
"<p>\n",
"If you wish to continue without doing this step, you can copy my preprocessed output:\n",
"<pre>\n",
"gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/\n",
"</pre>"
]
},
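{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before launching the full job, here is a minimal local sketch of the pipeline shape used below (a source, a `FlatMap`, and an output step), run on the DirectRunner with in-memory data rather than BigQuery; the names such as `expand_row` are just for illustration. The key pattern to notice is that `FlatMap` can emit more than one output element per input; the real pipeline below uses this to write two CSV rows (with and without ultrasound information) for every BigQuery row."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal DirectRunner sketch of the source -> FlatMap -> output pattern used below,\n",
"# with an in-memory PCollection standing in for BigQuery and print standing in for the sink.\n",
"import apache_beam as beam\n",
"\n",
"def expand_row(row):\n",
"  # FlatMap may emit multiple elements per input; here, two variants per row.\n",
"  yield '{},with_ultrasound'.format(row)\n",
"  yield '{},no_ultrasound'.format(row)\n",
"\n",
"with beam.Pipeline('DirectRunner') as p:\n",
"  (p\n",
"   | 'create' >> beam.Create(['rowA', 'rowB'])\n",
"   | 'expand' >> beam.FlatMap(expand_row)\n",
"   | 'print' >> beam.Map(print))"
]
},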
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import datetime, os\n",
"\n",
"def to_csv(rowdict):\n",
" # Pull columns from BQ and create a line\n",
" import hashlib\n",
" import copy\n",
" CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')\n",
"\n",
" # Create synthetic data where we assume that no ultrasound has been performed\n",
" # and so we don't know sex of the baby. Let's assume that we can tell the difference\n",
" # between single and multiple, but that the errors rates in determining exact number\n",
" # is difficult in the absence of an ultrasound.\n",
" no_ultrasound = copy.deepcopy(rowdict)\n",
" w_ultrasound = copy.deepcopy(rowdict)\n",
"\n",
" no_ultrasound['is_male'] = 'Unknown'\n",
" if rowdict['plurality'] > 1:\n",
" no_ultrasound['plurality'] = 'Multiple(2+)'\n",
" else:\n",
" no_ultrasound['plurality'] = 'Single(1)'\n",
"\n",
" # Change the plurality column to strings\n",
" w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]\n",
"\n",
" # Write out two rows for each input row, one with ultrasound and one without\n",
" for result in [no_ultrasound, w_ultrasound]:\n",
" data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])\n",
" key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key\n",
" yield str('{},{}'.format(data, key))\n",
" \n",
"def preprocess(in_test_mode):\n",
" import shutil, os, subprocess\n",
" job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')\n",
"\n",
" if in_test_mode:\n",
" print('Launching local job ... hang on')\n",
" OUTPUT_DIR = './preproc'\n",
" shutil.rmtree(OUTPUT_DIR, ignore_errors=True)\n",
" os.makedirs(OUTPUT_DIR)\n",
" else:\n",
" print('Launching Dataflow job {} ... hang on'.format(job_name))\n",
" OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)\n",
" try:\n",
" subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())\n",
" except:\n",
" pass\n",
"\n",
" options = {\n",
" 'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),\n",
" 'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),\n",
" 'job_name': job_name,\n",
" 'project': PROJECT,\n",
" 'region': REGION,\n",
" 'teardown_policy': 'TEARDOWN_ALWAYS',\n",
" 'no_save_main_session': True,\n",
" 'num_workers': 4,\n",
" 'max_num_workers': 5\n",
" }\n",
" opts = beam.pipeline.PipelineOptions(flags = [], **options)\n",
" if in_test_mode:\n",
" RUNNER = 'DirectRunner'\n",
" else:\n",
" RUNNER = 'DataflowRunner'\n",
" p = beam.Pipeline(RUNNER, options = opts)\n",
" query = \"\"\"\n",
"SELECT\n",
" weight_pounds,\n",
" is_male,\n",
" mother_age,\n",
" plurality,\n",
" gestation_weeks,\n",
" FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth\n",
"FROM\n",
" publicdata.samples.natality\n",
"WHERE year > 2000\n",
"AND weight_pounds > 0\n",
"AND mother_age > 0\n",
"AND plurality > 0\n",
"AND gestation_weeks > 0\n",
"AND month > 0\n",
" \"\"\"\n",
"\n",
" if in_test_mode:\n",
" query = query + ' LIMIT 100' \n",
"\n",
" for step in ['train', 'eval']:\n",
" if step == 'train':\n",
" selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)\n",
" else:\n",
" selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)\n",
"\n",
" (p \n",
" | '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))\n",
" | '{}_csv'.format(step) >> beam.FlatMap(to_csv)\n",
" | '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))\n",
" )\n",
"\n",
" job = p.run()\n",
" if in_test_mode:\n",
" job.wait_until_finish()\n",
" print(\"Done!\")\n",
" \n",
"preprocess(in_test_mode = False)"
]
},
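{
"cell_type": "markdown",
"metadata": {},
"source": [
"While the Dataflow job runs, the `to_csv` transform can be exercised locally on a single hand-made row to confirm that each input row yields exactly two CSV lines, one simulating the no-ultrasound case and one with the full information. The values below are made up purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick local check of to_csv on a single hypothetical row (values are made up).\n",
"sample_row = {\n",
"  'weight_pounds': 7.5,\n",
"  'is_male': True,\n",
"  'mother_age': 29,\n",
"  'plurality': 1,\n",
"  'gestation_weeks': 39,\n",
"  'hashmonth': 1234567890\n",
"}\n",
"for line in to_csv(sample_row):\n",
"  print(line)"
]
},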
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*"
]
},
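{
"cell_type": "markdown",
"metadata": {},
"source": [
"To peek at the actual contents, the snippet below reads the first few lines of the first training shard using `tf.io.gfile` (any way of reading from Cloud Storage would do). It assumes the Dataflow job above has finished and produced files under `gs://BUCKET/babyweight/preproc/`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview a few lines of the first training shard; assumes the job above has finished.\n",
"import tensorflow as tf\n",
"\n",
"train_shards = tf.io.gfile.glob('gs://{}/babyweight/preproc/train.csv-00000*'.format(BUCKET))\n",
"with tf.io.gfile.GFile(train_shards[0]) as f:\n",
"  for _ in range(5):\n",
"    print(f.readline().strip())"
]
},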
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}