{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1> Explore and create ML datasets </h1>\n",
"\n",
"In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.\n",
"\n",
"<div id=\"toc\"></div>\n",
"\n",
"Let's start off with the Python imports that we need."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Nny3m465gKkY",
"colab_type": "code",
"colab": {}
},
"source": [
"!pip install --user google-cloud-bigquery==1.25.0"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: Restart your kernel to use updated packages."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import bigquery\n",
"import seaborn as sns\n",
"import pandas as pd\n",
"import numpy as np\n",
"import shutil"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3> Extract sample data from BigQuery </h3>\n",
"\n",
"The dataset that we will use is <a href=\"https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips\">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.\n",
"\n",
"Write a SQL query to pick up the following fields\n",
"<pre>\n",
" pickup_datetime,\n",
" pickup_longitude, pickup_latitude, \n",
" dropoff_longitude, dropoff_latitude,\n",
" passenger_count,\n",
" trip_distance,\n",
" tolls_amount,\n",
" fare_amount,\n",
" total_amount\n",
"</pre>\n",
"from the dataset and explore a small part of the data. Make sure to pick a repeatable subset of the data so that if someone reruns this notebook, they will get the same results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rawdata = \"\"\"\n",
"SELECT\n",
" pickup_datetime,\n",
" pickup_longitude, pickup_latitude, \n",
" dropoff_longitude, dropoff_latitude,\n",
" passenger_count,\n",
" trip_distance,\n",
" tolls_amount,\n",
" fare_amount,\n",
" total_amount\n",
"FROM\n",
" `nyc-tlc.yellow.trips`\n",
"WHERE\n",
" ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query = rawdata.replace(\"EVERY_N\", \"100000\")\n",
"print(query)\n",
"trips = bigquery.Client().query(query).to_dataframe()\n",
"print(\"Total dataset is {} taxi rides\".format(len(trips)))\n",
"trips[:10]"
]
},
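{
"cell_type": "markdown",
"metadata": {},
"source": [
"Why is this sample repeatable? FARM_FINGERPRINT is a deterministic hash: the same pickup_datetime always yields the same fingerprint, so keeping the rows whose hash modulo 100,000 equals 1 returns the same ~1-in-100,000 sample on every rerun. Below is a minimal sketch of the same idea in pure Python -- it uses hashlib's MD5 as a stand-in for FARM_FINGERPRINT (an illustration-only assumption) and a smaller modulus so that a few selections are visible."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import hashlib\n",
"\n",
"def fingerprint(s):\n",
"    # Deterministic stand-in for BigQuery's FARM_FINGERPRINT (illustration only):\n",
"    # the same input string always maps to the same integer.\n",
"    return int(hashlib.md5(s.encode(\"utf-8\")).hexdigest(), 16)\n",
"\n",
"# Keeping rows whose hash % N == 1 selects a stable ~1/N sample:\n",
"# rerunning this cell always picks exactly the same timestamps.\n",
"timestamps = [\"2014-05-20 23:{:04d}\".format(i) for i in range(10000)]\n",
"sample = [ts for ts in timestamps if fingerprint(ts) % 1000 == 1]\n",
"print(\"Selected {} of {} timestamps; first few: {}\".format(\n",
"    len(sample), len(timestamps), sample[:3]))"
]
},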
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3> Exploring data </h3>\n",
"\n",
"Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ax = sns.regplot(x = \"trip_distance\", y = \"fare_amount\", ci = None, truncate = True, data = trips)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hmm ... do you see something wrong with the data that needs addressing?\n",
"\n",
"It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).\n",
"\n",
"What's up with the streaks at \\$45 and \\$50? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.\n",
"\n",
"Let's examine whether the toll amount is captured in the total amount."
]
},
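{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before changing the BigQuery query, here is a quick sketch of the cleanup rule applied to the in-memory sample, just to preview how much data it would discard; the authoritative filtering happens in the query itself later on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Preview the effect of the cleanup rule on the in-memory sample.\n",
"# The same conditions are applied in the BigQuery query below.\n",
"good = trips[(trips['trip_distance'] > 0) & (trips['fare_amount'] >= 2.5)]\n",
"print(\"Keeping {} of {} rides ({:.1f}%)\".format(\n",
"    len(good), len(trips), 100.0 * len(good) / len(trips)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's list a few rows to make sure the values look reasonable, and examine whether the toll amount is captured in the total amount."
]
},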
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tollrides = trips[trips['tolls_amount'] > 0]\n",
"tollrides[tollrides['pickup_datetime'] == '2014-05-20 23:09:00']"
]
},
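{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the in-memory sample, we can compute the implied tip: whatever remains of total_amount after subtracting fare_amount and tolls_amount."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Implied tip: whatever remains of the total after fare and tolls.\n",
"# Cash tips are typically not recorded, so this is often zero.\n",
"implied_tip = tollrides['total_amount'] - tollrides['fare_amount'] - tollrides['tolls_amount']\n",
"implied_tip.describe()"
]
},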
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.\n",
"\n",
"Let's also look at the distribution of values within the columns."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"trips.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hmm ... The min, max of longitude look strange.\n",
"\n",
"Finally, let's actually look at the start and end of a few of the trips."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def showrides(df, numlines):\n",
" import matplotlib.pyplot as plt\n",
" lats = []\n",
" lons = []\n",
" goodrows = df[df['pickup_longitude'] < -70]\n",
" for iter, row in goodrows[:numlines].iterrows():\n",
" lons.append(row['pickup_longitude'])\n",
" lons.append(row['dropoff_longitude'])\n",
" lons.append(None)\n",
" lats.append(row['pickup_latitude'])\n",
" lats.append(row['dropoff_latitude'])\n",
" lats.append(None)\n",
"\n",
" sns.set_style(\"darkgrid\")\n",
" plt.plot(lons, lats)\n",
"\n",
"showrides(trips, 10)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"showrides(tollrides, 10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you'd expect, rides that involve a toll are longer than the typical ride."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3> Quality control and other preprocessing </h3>\n",
"\n",
"We need to do some clean-up of the data:\n",
"<ol>\n",
"<li>New York city longitudes are around -74 and latitudes are around 41.</li>\n",
"<li>We shouldn't have zero passengers.</li>\n",
"<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>\n",
"<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>\n",
"<li>Discard the timestamp</li>\n",
"</ol>\n",
"\n",
"Let's change the BigQuery query appropriately. In production, we'll have to carry out the same preprocessing on the real-time input data. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def sample_between(a, b):\n",
" basequery = \"\"\"\n",
"SELECT\n",
" (tolls_amount + fare_amount) AS fare_amount,\n",
" pickup_longitude AS pickuplon,\n",
" pickup_latitude AS pickuplat,\n",
" dropoff_longitude AS dropofflon,\n",
" dropoff_latitude AS dropofflat,\n",
" passenger_count*1.0 AS passengers\n",
"FROM\n",
" `nyc-tlc.yellow.trips`\n",
"WHERE\n",
" trip_distance > 0\n",
" AND fare_amount >= 2.5\n",
" AND pickup_longitude > -78\n",
" AND pickup_longitude < -70\n",
" AND dropoff_longitude > -78\n",
" AND dropoff_longitude < -70\n",
" AND pickup_latitude > 37\n",
" AND pickup_latitude < 45\n",
" AND dropoff_latitude > 37\n",
" AND dropoff_latitude < 45\n",
" AND passenger_count > 0\n",
" \"\"\"\n",
" sampler = \"AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1\"\n",
" sampler2 = \"AND {0} >= {1}\\n AND {0} < {2}\".format(\n",
" \"ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100))\",\n",
" \"(EVERY_N * {})\".format(a), \"(EVERY_N * {})\".format(b)\n",
" )\n",
" return \"{}\\n{}\\n{}\".format(basequery, sampler, sampler2)\n",
"\n",
"def create_query(phase, EVERY_N):\n",
" \"\"\"Phase: train (70%) valid (15%) or test (15%)\"\"\"\n",
" query = \"\"\n",
" if phase == 'train':\n",
" # Training\n",
" query = sample_between(0, 70)\n",
" elif phase == 'valid':\n",
" # Validation\n",
" query = sample_between(70, 85)\n",
" else:\n",
" # Test\n",
" query = sample_between(85, 100)\n",
" return query.replace(\"EVERY_N\", str(EVERY_N))\n",
"\n",
"print(create_query('train', 100000))"
]
},
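{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see why the modulo-range trick yields disjoint, repeatable splits in roughly a 70%/15%/15% ratio, here is a simplified local simulation. It uses hashlib's MD5 as a stand-in for FARM_FINGERPRINT (an illustration-only assumption) and buckets keys by hash % 100, collapsing the two-stage sampling above into a single step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import hashlib\n",
"from collections import Counter\n",
"\n",
"def fingerprint(s):\n",
"    # Deterministic stand-in for FARM_FINGERPRINT (illustration only).\n",
"    return int(hashlib.md5(s.encode(\"utf-8\")).hexdigest(), 16)\n",
"\n",
"# Simplified version of the range rule in sample_between():\n",
"# bucket = hash % 100; train = [0, 70), valid = [70, 85), test = [85, 100).\n",
"counts = Counter()\n",
"for i in range(100000):\n",
"    bucket = fingerprint(\"key-{}\".format(i)) % 100\n",
"    if bucket < 70:\n",
"        counts['train'] += 1\n",
"    elif bucket < 85:\n",
"        counts['valid'] += 1\n",
"    else:\n",
"        counts['test'] += 1\n",
"print(counts)  # expect roughly 70% / 15% / 15%"
]
},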
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def to_csv(df, filename):\n",
" outdf = df.copy(deep = False)\n",
" outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key\n",
" # Reorder columns so that target is first column\n",
" cols = outdf.columns.tolist()\n",
" cols.remove('fare_amount')\n",
" cols.insert(0, 'fare_amount')\n",
" print(cols) # new order of columns\n",
" outdf = outdf[cols]\n",
" outdf.to_csv(filename, header = False, index_label = False, index = False)\n",
" print(\"Wrote {} to {}\".format(len(outdf), filename))\n",
"\n",
"for phase in ['train', 'valid', 'test']:\n",
" query = create_query(phase, 100000)\n",
" df = bigquery.Client().query(query).to_dataframe()\n",
" to_csv(df, 'taxi-{}.csv'.format(phase))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3> Verify that datasets exist </h3>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls -l *.csv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data."
]
},
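{
"cell_type": "markdown",
"metadata": {},
"source": [
"Counting rows gives a more direct check of the roughly 70%/15%/15% ratio:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!wc -l taxi-train.csv taxi-valid.csv taxi-test.csv"
]
},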
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!head taxi-train.csv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3> Benchmark </h3>\n",
"\n",
"Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.\n",
"\n",
"My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import shutil\n",
"\n",
"def distance_between(lat1, lon1, lat2, lon2):\n",
" # Haversine formula to compute distance \"as the crow flies\". Taxis can't fly of course.\n",
" dist = np.degrees(np.arccos(np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1)))) * 60 * 1.515 * 1.609344\n",
" return dist\n",
"\n",
"def estimate_distance(df):\n",
" return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])\n",
"\n",
"def compute_rmse(actual, predicted):\n",
" return np.sqrt(np.mean((actual - predicted)**2))\n",
"\n",
"def print_rmse(df, rate, name):\n",
" print(\"{1} RMSE = {0}\".format(compute_rmse(df['fare_amount'], rate * estimate_distance(df)), name))\n",
"\n",
"FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']\n",
"TARGET = 'fare_amount'\n",
"columns = list([TARGET])\n",
"columns.extend(FEATURES) # in CSV, target is the first column, after the features\n",
"columns.append('key')\n",
"df_train = pd.read_csv('taxi-train.csv', header = None, names = columns)\n",
"df_valid = pd.read_csv('taxi-valid.csv', header = None, names = columns)\n",
"df_test = pd.read_csv('taxi-test.csv', header = None, names = columns)\n",
"rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()\n",
"print(\"Rate = ${0}/km\".format(rate))\n",
"print_rmse(df_train, rate, 'Train')\n",
"print_rmse(df_valid, rate, 'Valid') \n",
"print_rmse(df_test, rate, 'Test') "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simple distance-based rule gives us a RMSE of <b>$9.35</b> on the validation dataset. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. You don't wnat to set a goal on the test dataset because you want to change the architecture of the network etc. to get the best validation error. Then, you can evaluate ONCE on the test data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Challenge Exercise\n",
"\n",
"Let's say that you want to predict whether a Stackoverflow question will be acceptably answered. Using this [public dataset of questions](https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_questions), create a machine learning dataset that you can use for classification.\n",
"<p>\n",
"What is a reasonable benchmark for this problem?\n",
"What features might be useful?\n",
"<p>\n",
"If you got the above easily, try this harder problem: you want to predict whether a question will be acceptably answered within 2 days. How would you create the dataset?\n",
"<p>\n",
"Hint (highlight to see):\n",
"<p style='color:white' linkstyle='color:white'> \n",
"You will need to do a SQL join with the table of [answers]( https://bigquery.cloud.google.com/table/bigquery-public-data:stackoverflow.posts_answers) to determine whether the answer was within 2 days.\n",
"</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright 2018 Google Inc.\n",
"Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\n",
"http://www.apache.org/licenses/LICENSE-2.0\n",
"Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}