{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"<h1> 2c. Loading large datasets progressively with the tf.data.Dataset </h1>\n",
"\n",
"In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:\n",
"\n",
"1. Refactor the input to read data from disk progressively.\n",
"2. Refactor the feature creation so that it is not one-to-one with inputs.\n",
"\n",
"The Pandas function in the previous notebook first read the whole data into memory -- on a large dataset, this won't be an option."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Ensure the right version of Tensorflow is installed.\n",
"!pip freeze | grep tensorflow==2.5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"from google.cloud import bigquery\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import shutil\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"<h2> 1. Refactor the input </h2>\n",
"\n",
"Read data created in Lab1a, but this time make it more general, so that we can later handle large datasets. We use the Dataset API for this. It ensures that, as data gets delivered to the model in mini-batches, it is loaded from disk only when needed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']\n",
"DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]\n",
"\n",
"def read_dataset(filename, mode, batch_size = 512):\n",
" def decode_csv(row):\n",
" columns = tf.compat.v1.decode_csv(row, record_defaults = DEFAULTS)\n",
" features = dict(zip(CSV_COLUMNS, columns))\n",
" features.pop('key') # discard, not a real feature\n",
" label = features.pop('fare_amount') # remove label from features and store\n",
" return features, label\n",
"\n",
" # Create list of file names that match \"glob\" pattern (i.e. data_file_*.csv)\n",
" filenames_dataset = tf.data.Dataset.list_files(filename, shuffle=False)\n",
" # Read lines from text files\n",
" textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)\n",
" # Parse text lines as comma-separated values (CSV)\n",
" dataset = textlines_dataset.map(decode_csv)\n",
"\n",
" # Note:\n",
" # use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)\n",
" # use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)\n",
"\n",
" if mode == tf.estimator.ModeKeys.TRAIN:\n",
" num_epochs = None # loop indefinitely\n",
" dataset = dataset.shuffle(buffer_size = 10 * batch_size, seed=2)\n",
" else:\n",
" num_epochs = 1 # end-of-input after this\n",
"\n",
" dataset = dataset.repeat(num_epochs).batch(batch_size)\n",
"\n",
" return dataset\n",
"\n",
"def get_train_input_fn():\n",
" return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)\n",
"\n",
"def get_valid_input_fn():\n",
" return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)"
]
},
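{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check (a sketch, not required for the lab): pull a single mini-batch from the validation dataset and print a few values. This assumes taxi-valid.csv is present in the current directory, as in the earlier labs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: peek at one mini-batch from the validation dataset.\n",
"# Assumes taxi-valid.csv exists in the current directory (created in an earlier lab).\n",
"for features, label in get_valid_input_fn().take(1):\n",
"    print({name: values[:3].numpy() for name, values in features.items()})\n",
"    print(label[:3].numpy())"
]
},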
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"<h2> 2. Refactor the way features are created. </h2>\n",
"\n",
"For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"INPUT_COLUMNS = [\n",
" tf.feature_column.numeric_column('pickuplon'),\n",
" tf.feature_column.numeric_column('pickuplat'),\n",
" tf.feature_column.numeric_column('dropofflat'),\n",
" tf.feature_column.numeric_column('dropofflon'),\n",
" tf.feature_column.numeric_column('passengers'),\n",
"]\n",
"\n",
"def add_more_features(feats):\n",
" # Nothing to add (yet!)\n",
" return feats\n",
"\n",
"feature_cols = add_more_features(INPUT_COLUMNS)"
]
},
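{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see why this refactoring will pay off, here is a sketch (not used in the rest of this notebook) of how add_more_features could later return engineered columns that are not one-to-one with the raw inputs -- for example, a bucketized pickup latitude. The function name and bucket boundaries below are purely illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration only -- not used below. One way add_more_features could later\n",
"# break the one-to-one relationship between inputs and features:\n",
"# derive a bucketized column from the raw pickup latitude.\n",
"def add_more_features_sketch(feats):\n",
"    latbuckets = np.linspace(38.0, 42.0, 11).tolist()  # illustrative boundaries\n",
"    b_plat = tf.feature_column.bucketized_column(\n",
"        tf.feature_column.numeric_column('pickuplat'), latbuckets)\n",
"    return list(feats) + [b_plat]\n",
"\n",
"print(len(add_more_features_sketch(INPUT_COLUMNS)))"
]
},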
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"<h2> Create and train the model </h2>\n",
"\n",
"Note that we train for num_steps * batch_size examples."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)\n",
"OUTDIR = 'taxi_trained'\n",
"shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\n",
"model = tf.compat.v1.estimator.LinearRegressor(\n",
" feature_columns = feature_cols, model_dir = OUTDIR)\n",
"model.train(input_fn = get_train_input_fn, steps = 200)"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"<h3> Evaluate model </h3>\n",
"\n",
"As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"deletable": true,
"editable": true
},
"outputs": [],
"source": [
"metrics = model.evaluate(input_fn = get_valid_input_fn, steps = None)\n",
"print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))"
]
},
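{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can also look at a few raw predictions from the trained model. This is just a quick sketch: the canned LinearRegressor yields one dictionary per example, with the predicted fare under the 'predictions' key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: print a few raw predictions from the trained model (a sketch).\n",
"# The canned LinearRegressor yields dicts keyed by 'predictions'.\n",
"import itertools\n",
"predictions = model.predict(input_fn = get_valid_input_fn)\n",
"for pred in itertools.islice(predictions, 3):\n",
"    print(pred['predictions'][0])"
]
},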
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"## Challenge Exercise\n",
"\n",
"Create a neural network that is capable of finding the volume of a cylinder given the radius of its base (r) and its height (h). Assume that the radius and height of the cylinder are both in the range 0.5 to 2.0. Unlike in the challenge exercise for b_estimator.ipynb, assume that your measurements of r, h and V are all rounded off to the nearest 0.1. Simulate the necessary training dataset. This time, you will need a lot more data to get a good predictor.\n",
"\n",
"Hint (highlight to see):\n",
"<p style='color:white'>\n",
"Create random values for r and h and compute V. Then, round off r, h and V (i.e., the volume is computed from the true value of r and h; it's only your measurement that is rounded off). Your dataset will consist of the round values of r, h and V. Do this for both the training and evaluation datasets.\n",
"</p>\n",
"\n",
"Now modify the \"noise\" so that instead of just rounding off the value, there is up to a 10% error (uniformly distributed) in the measurement followed by rounding off."
]
},
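{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want a starting point for the data simulation described in the hint above, the cell below sketches one way to generate rounded measurements of r, h and V. The function name and dataset size are arbitrary choices; building and training the network is left to you."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch of the data simulation described in the hint above.\n",
"# The function name and dataset size are arbitrary choices, not part of the lab.\n",
"def make_cylinder_data(n):\n",
"    r = np.random.uniform(0.5, 2.0, n)\n",
"    h = np.random.uniform(0.5, 2.0, n)\n",
"    v = np.pi * r**2 * h  # true volume, computed before rounding\n",
"    # measurements are rounded off to the nearest 0.1\n",
"    return np.round(r, 1), np.round(h, 1), np.round(v, 1)\n",
"\n",
"r_train, h_train, v_train = make_cylinder_data(100000)\n",
"print(r_train[:3], h_train[:3], v_train[:3])"
]
},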
{
"cell_type": "markdown",
"metadata": {
"deletable": true,
"editable": true
},
"source": [
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}