training/sagemaker-automatic-model-tuning/hpo_xgboost_direct_marketing_sagemaker_APIs.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Direct Marketing with Amazon SageMaker XGBoost and Hyperparameter Tuning (SageMaker API)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_**Supervised Learning with Gradient Boosted Trees: A Binary Prediction Problem With Unbalanced Classes**_\n",
"\n",
"---\n",
"\n",
"---\n",
"\n",
"Kernel `Python 3 (Data Science)` works well with this notebook.\n",
"\n",
"## Contents\n",
"\n",
"1. [Background](#Background)\n",
"1. [Prepration](#Preparation)\n",
"1. [Data Downloading](#Data_Downloading)\n",
"1. [Data Transformation](#Data_Transformation)\n",
"1. [Setup Hyperparameter Tuning](#Setup_Hyperparameter_Tuning)\n",
"1. [Launch Hyperparameter Tuning](#Launch_Hyperparameter_Tuning)\n",
"1. [Analyze Hyperparameter Tuning Results](#Analyze_Hyperparameter_Tuning_Results)\n",
"1. [Deploy The Best Model](#Deploy_The_Best_Model)\n",
"\n",
"\n",
"---\n",
"\n",
"## Background\n",
"Direct marketing, either through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention is limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem.\n",
"\n",
"This notebook will train a model which can be used to predict if a customer will enroll for a term deposit at a bank, after one or more phone calls. Hyperparameter tuning will be used in order to try multiple hyperparameter settings and produce the best model.\n",
"\n",
"---\n",
"\n",
"## Preparation\n",
"\n",
"Let's start by specifying:\n",
"\n",
"- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as SageMaker training.\n",
"- The IAM role used to give training access to your data. See SageMaker documentation for how to create these."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip3 install -U sagemaker"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"isConfigCell": true
},
"outputs": [],
"source": [
"import sagemaker\n",
"import boto3\n",
"\n",
"import numpy as np # For matrix operations and numerical processing\n",
"import pandas as pd # For munging tabular data\n",
"from time import gmtime, strftime\n",
"import os\n",
"\n",
"region = boto3.Session().region_name\n",
"smclient = boto3.Session().client(\"sagemaker\")\n",
"\n",
"role = sagemaker.get_execution_role()\n",
"\n",
"bucket = sagemaker.Session().default_bucket()\n",
"prefix = \"sagemaker/DEMO-hpo-xgboost-dm\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Data_Downloading\n",
"Let's start by downloading the [direct marketing dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing) from UCI's ML Repository."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"!wget -N https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip\n",
"!unzip -o bank-additional.zip"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now lets read this into a Pandas data frame and take a look."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"data = pd.read_csv(\"./bank-additional/bank-additional-full.csv\", sep=\";\")\n",
"pd.set_option(\"display.max_columns\", 500) # Make sure we can see all of the columns\n",
"pd.set_option(\"display.max_rows\", 50) # Keep the output on one page\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's talk about the data. At a high level, we can see:\n",
"\n",
"* We have a little over 40K customer records, and 20 features for each customer\n",
"* The features are mixed; some numeric, some categorical\n",
"* The data appears to be sorted, at least by `time` and `contact`, maybe more\n",
"\n",
"_**Specifics on each of the features:**_\n",
"\n",
"*Demographics:*\n",
"* `age`: Customer's age (numeric)\n",
"* `job`: Type of job (categorical: 'admin.', 'services', ...)\n",
"* `marital`: Marital status (categorical: 'married', 'single', ...)\n",
"* `education`: Level of education (categorical: 'basic.4y', 'high.school', ...)\n",
"\n",
"*Past customer events:*\n",
"* `default`: Has credit in default? (categorical: 'no', 'unknown', ...)\n",
"* `housing`: Has housing loan? (categorical: 'no', 'yes', ...)\n",
"* `loan`: Has personal loan? (categorical: 'no', 'yes', ...)\n",
"\n",
"*Past direct marketing contacts:*\n",
"* `contact`: Contact communication type (categorical: 'cellular', 'telephone', ...)\n",
"* `month`: Last contact month of year (categorical: 'may', 'nov', ...)\n",
"* `day_of_week`: Last contact day of the week (categorical: 'mon', 'fri', ...)\n",
"* `duration`: Last contact duration, in seconds (numeric). Important note: If duration = 0 then `y` = 'no'.\n",
" \n",
"*Campaign information:*\n",
"* `campaign`: Number of contacts performed during this campaign and for this client (numeric, includes last contact)\n",
"* `pdays`: Number of days that passed by after the client was last contacted from a previous campaign (numeric)\n",
"* `previous`: Number of contacts performed before this campaign and for this client (numeric)\n",
"* `poutcome`: Outcome of the previous marketing campaign (categorical: 'nonexistent','success', ...)\n",
"\n",
"*External environment factors:*\n",
"* `emp.var.rate`: Employment variation rate - quarterly indicator (numeric)\n",
"* `cons.price.idx`: Consumer price index - monthly indicator (numeric)\n",
"* `cons.conf.idx`: Consumer confidence index - monthly indicator (numeric)\n",
"* `euribor3m`: Euribor 3 month rate - daily indicator (numeric)\n",
"* `nr.employed`: Number of employees - quarterly indicator (numeric)\n",
"\n",
"*Target variable:*\n",
"* `y`: Has the client subscribed a term deposit? (binary: 'yes','no')"
]
},
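{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before transforming anything, it is worth quantifying the class imbalance mentioned in the title. A quick look at the target distribution shows how rare positive responses are:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check the class balance of the target variable; positive responses ('yes') are rare\n",
"data[\"y\"].value_counts(normalize=True)"
]
},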
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data_Transformation\n",
"Cleaning up data is part of nearly every machine learning project. It arguably presents the biggest risk if done incorrectly and is one of the more subjective aspects in the process. Several common techniques include:\n",
"\n",
"* Handling missing values: Some machine learning algorithms are capable of handling missing values, but most would rather not. Options include:\n",
" * Removing observations with missing values: This works well if only a very small fraction of observations have incomplete information.\n",
" * Removing features with missing values: This works well if there are a small number of features which have a large number of missing values.\n",
" * Imputing missing values: Entire [books](https://www.amazon.com/Flexible-Imputation-Missing-Interdisciplinary-Statistics/dp/1439868247) have been written on this topic, but common choices are replacing the missing value with the mode or mean of that column's non-missing values.\n",
"* Converting categorical to numeric: The most common method is one hot encoding, which for each feature maps every distinct value of that column to its own feature which takes a value of 1 when the categorical feature is equal to that value, and 0 otherwise.\n",
"* Oddly distributed data: Although for non-linear models like Gradient Boosted Trees, this has very limited implications, parametric models like regression can produce wildly inaccurate estimates when fed highly skewed data. In some cases, simply taking the natural log of the features is sufficient to produce more normally distributed data. In others, bucketing values into discrete ranges is helpful. These buckets can then be treated as categorical variables and included in the model when one hot encoded.\n",
"* Handling more complicated data types: Mainpulating images, text, or data at varying grains.\n",
"\n",
"Luckily, some of these aspects have already been handled for us, and the algorithm we are showcasing tends to do well at handling sparse or oddly distributed data. Therefore, let's keep pre-processing simple."
]
},
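{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of two of the techniques above, the sketch below shows mean imputation and a log transform with pandas and numpy. It works on a copy, since this dataset needs neither step; it is an illustration only, not part of our pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: this dataset has no missing values, so we work on a copy\n",
"example = data.copy()\n",
"\n",
"# Mean imputation: replace any missing values in a numeric column with the column mean\n",
"example[\"age\"] = example[\"age\"].fillna(example[\"age\"].mean())\n",
"\n",
"# Log transform: log1p compresses the long right tail of a skewed count feature\n",
"example[\"log_campaign\"] = np.log1p(example[\"campaign\"])"
]
},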
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First of all, Many records have the value of \"999\" for pdays, number of days that passed by after a client was last contacted. It is very likely to be a magic number to represent that no contact was made before. Considering that, we create a new column called \"no_previous_contact\", then grant it value of \"1\" when pdays is 999 and \"0\" otherwise.\n",
"\n",
"In the \"job\" column, there are categories that mean the customer is not working, e.g., \"student\", \"retire\", and \"unemployed\". Since it is very likely whether or not a customer is working will affect his/her decision to enroll in the term deposit, we generate a new column to show whether the customer is working based on \"job\" column.\n",
"\n",
"Last but not the least, we convert categorical to numeric, as is suggested above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"data[\"no_previous_contact\"] = np.where(\n",
" data[\"pdays\"] == 999, 1, 0\n",
") # Indicator variable to capture when pdays takes a value of 999\n",
"data[\"not_working\"] = np.where(\n",
" np.in1d(data[\"job\"], [\"student\", \"retired\", \"unemployed\"]), 1, 0\n",
") # Indicator for individuals not actively employed\n",
"model_data = pd.get_dummies(data) # Convert categorical variables to sets of indicators\n",
"model_data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another question to ask yourself before building a model is whether certain features will add value in your final use case. For example, if your goal is to deliver the best prediction, then will you have access to that data at the moment of prediction? Knowing it's raining is highly predictive for umbrella sales, but forecasting weather far enough out to plan inventory on umbrellas is probably just as difficult as forecasting umbrella sales without knowledge of the weather. So, including this in your model may give you a false sense of precision.\n",
"\n",
"Following this logic, let's remove the economic features and `duration` from our data as they would need to be forecasted with high precision to use as inputs in future predictions.\n",
"\n",
"Even if we were to use values of the economic indicators from the previous quarter, this value is likely not as relevant for prospects contacted early in the next quarter as those contacted later on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"model_data = model_data.drop(\n",
" [\"duration\", \"emp.var.rate\", \"cons.price.idx\", \"cons.conf.idx\", \"euribor3m\", \"nr.employed\"],\n",
" axis=1,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll then split the dataset into training (70%), validation (20%), and test (10%) datasets and convert the datasets to the right format the algorithm expects. We will use training and validation datasets during training. Test dataset will be used to evaluate model performance after it is deployed to an endpoint.\n",
"\n",
"Amazon SageMaker's XGBoost algorithm expects data in the libSVM or CSV data format. For this example, we'll stick to CSV. Note that the first column must be the target variable and the CSV should not include headers. Also, notice that although repetitive it's easiest to do this after the train|validation|test split rather than before. This avoids any misalignment issues due to random reordering."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"train_data, validation_data, test_data = np.split(\n",
" model_data.sample(frac=1, random_state=1729),\n",
" [int(0.7 * len(model_data)), int(0.9 * len(model_data))],\n",
")\n",
"\n",
"pd.concat([train_data[\"y_yes\"], train_data.drop([\"y_no\", \"y_yes\"], axis=1)], axis=1).to_csv(\n",
" \"train.csv\", index=False, header=False\n",
")\n",
"pd.concat(\n",
" [validation_data[\"y_yes\"], validation_data.drop([\"y_no\", \"y_yes\"], axis=1)], axis=1\n",
").to_csv(\"validation.csv\", index=False, header=False)\n",
"pd.concat([test_data[\"y_yes\"], test_data.drop([\"y_no\", \"y_yes\"], axis=1)], axis=1).to_csv(\n",
" \"test.csv\", index=False, header=False\n",
")"
]
},
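{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the row counts of the three splits should reflect the 70/20/10 proportions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify the 70/20/10 split: the three parts should add up to the full dataset\n",
"print(len(train_data), len(validation_data), len(test_data), len(model_data))"
]
},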
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll copy the file to S3 for Amazon SageMaker training to pickup."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"boto3.Session().resource(\"s3\").Bucket(bucket).Object(\n",
" os.path.join(prefix, \"train/train.csv\")\n",
").upload_file(\"train.csv\")\n",
"boto3.Session().resource(\"s3\").Bucket(bucket).Object(\n",
" os.path.join(prefix, \"validation/validation.csv\")\n",
").upload_file(\"validation.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Setup_Hyperparameter_Tuning \n",
"*Note, with the default setting below, the hyperparameter tuning job can take about 30 minutes to complete.*\n",
"\n",
"Now that we have prepared the dataset, we are ready to train models. Before we do that, one thing to note is there are algorithm settings which are called \"hyperparameters\" that can dramtically affect the performance of the trained models. For example, XGBoost algorithm has dozens of hyperparameters and we need to pick the right values for those hyperparameters in order to achieve the desired model training results. Since which hyperparameter setting can lead to the best result depends on the dataset as well, it is almost impossible to pick the best hyperparameter setting without searching for it, and a good search algorithm can search for the best hyperparameter setting in an automated and effective way.\n",
"\n",
"We will use SageMaker hyperparameter tuning to automate the searching process effectively. Specifically, we specify a range, or a list of possible values in the case of categorical hyperparameters, for each of the hyperparameter that we plan to tune. SageMaker hyperparameter tuning will automatically launch multiple training jobs with different hyperparameter settings, evaluate results of those training jobs based on a predefined \"objective metric\", and select the hyperparameter settings for future attempts based on previous results. For each hyperparameter tuning job, we will give it a budget (max number of training jobs) and it will complete once that many training jobs have been executed.\n",
"\n",
"Now we configure the hyperparameter tuning job by defining a JSON object that specifies following information:\n",
"* The ranges of hyperparameters we want to tune\n",
"* Number of training jobs to run in total and how many training jobs should be run simultaneously. More parallel jobs will finish tuning sooner, but may sacrifice accuracy. We recommend you set the parallel jobs value to less than 10% of the total number of training jobs (we'll set it higher just for this example to keep it short).\n",
"* The objective metric that will be used to evaluate training results, in this example, we select *validation:auc* to be the objective metric and the goal is to maximize the value throughout the hyperparameter tuning process. One thing to note is the objective metric has to be among the metrics that are emitted by the algorithm during training. In this example, the built-in XGBoost algorithm emits a bunch of metrics and *validation:auc* is one of them. If you bring your own algorithm to SageMaker, then you need to make sure whatever objective metric you select, your algorithm actually emits it.\n",
"\n",
"We will tune four hyperparameters in this examples:\n",
"* *eta*: Step size shrinkage used in updates to prevent overfitting. After each boosting step, you can directly get the weights of new features. The eta parameter actually shrinks the feature weights to make the boosting process more conservative. \n",
"* *alpha*: L1 regularization term on weights. Increasing this value makes models more conservative. \n",
"* *min_child_weight*: Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, the building process gives up further partitioning. In linear regression models, this simply corresponds to a minimum number of instances needed in each node. The larger the algorithm, the more conservative it is. \n",
"* *max_depth*: Maximum depth of a tree. Increasing this value makes the model more complex and likely to be overfitted. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from time import gmtime, strftime, sleep\n",
"\n",
"tuning_job_name = \"xgboost-tuningjob-\" + strftime(\"%d-%H-%M-%S\", gmtime())\n",
"\n",
"print(tuning_job_name)\n",
"\n",
"tuning_job_config = {\n",
" \"ParameterRanges\": {\n",
" \"CategoricalParameterRanges\": [],\n",
" \"ContinuousParameterRanges\": [\n",
" {\n",
" \"MaxValue\": \"1\",\n",
" \"MinValue\": \"0\",\n",
" \"Name\": \"eta\",\n",
" },\n",
" {\n",
" \"MaxValue\": \"10\",\n",
" \"MinValue\": \"1\",\n",
" \"Name\": \"min_child_weight\",\n",
" },\n",
" {\n",
" \"MaxValue\": \"2\",\n",
" \"MinValue\": \"0\",\n",
" \"Name\": \"alpha\",\n",
" },\n",
" ],\n",
" \"IntegerParameterRanges\": [\n",
" {\n",
" \"MaxValue\": \"10\",\n",
" \"MinValue\": \"1\",\n",
" \"Name\": \"max_depth\",\n",
" }\n",
" ],\n",
" },\n",
" \"ResourceLimits\": {\"MaxNumberOfTrainingJobs\": 20, \"MaxParallelTrainingJobs\": 3},\n",
" \"Strategy\": \"Bayesian\",\n",
" \"HyperParameterTuningJobObjective\": {\"MetricName\": \"validation:auc\", \"Type\": \"Maximize\"},\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we configure the training jobs the hyperparameter tuning job will launch by defining a JSON object that specifies following information:\n",
"* The container image for the algorithm (XGBoost)\n",
"* The input configuration for the training and validation data\n",
"* Configuration for the output of the algorithm\n",
"* The values of any algorithm hyperparameters that are not tuned in the tuning job (StaticHyperparameters)\n",
"* The type and number of instances to use for the training jobs\n",
"* The stopping condition for the training jobs\n",
"\n",
"Again, since we are using built-in XGBoost algorithm here, it emits two predefined metrics: *validation:auc* and *train:auc*, and we elected to monitor *validation_auc* as you can see above. One thing to note is if you bring your own algorithm, your algorithm emits metrics by itself. In that case, you'll need to add a MetricDefinition object here to define the format of those metrics through regex, so that SageMaker knows how to extract those metrics."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sagemaker.image_uris import retrieve\n",
"\n",
"training_image = retrieve(framework=\"xgboost\", region=region, version=\"1.7-1\")\n",
"\n",
"s3_input_train = \"s3://{}/{}/train\".format(bucket, prefix)\n",
"s3_input_validation = \"s3://{}/{}/validation/\".format(bucket, prefix)\n",
"\n",
"training_job_definition = {\n",
" \"AlgorithmSpecification\": {\"TrainingImage\": training_image, \"TrainingInputMode\": \"File\"},\n",
" \"InputDataConfig\": [\n",
" {\n",
" \"ChannelName\": \"train\",\n",
" \"CompressionType\": \"None\",\n",
" \"ContentType\": \"csv\",\n",
" \"DataSource\": {\n",
" \"S3DataSource\": {\n",
" \"S3DataDistributionType\": \"FullyReplicated\",\n",
" \"S3DataType\": \"S3Prefix\",\n",
" \"S3Uri\": s3_input_train,\n",
" }\n",
" },\n",
" },\n",
" {\n",
" \"ChannelName\": \"validation\",\n",
" \"CompressionType\": \"None\",\n",
" \"ContentType\": \"csv\",\n",
" \"DataSource\": {\n",
" \"S3DataSource\": {\n",
" \"S3DataDistributionType\": \"FullyReplicated\",\n",
" \"S3DataType\": \"S3Prefix\",\n",
" \"S3Uri\": s3_input_validation,\n",
" }\n",
" },\n",
" },\n",
" ],\n",
" \"OutputDataConfig\": {\"S3OutputPath\": \"s3://{}/{}/output\".format(bucket, prefix)},\n",
" \"ResourceConfig\": {\"InstanceCount\": 1, \"InstanceType\": \"ml.m4.xlarge\", \"VolumeSizeInGB\": 10},\n",
" \"RoleArn\": role,\n",
" \"StaticHyperParameters\": {\n",
" \"eval_metric\": \"auc\",\n",
" \"num_round\": \"100\",\n",
" \"objective\": \"binary:logistic\",\n",
" \"rate_drop\": \"0.3\",\n",
" \"tweedie_variance_power\": \"1.4\",\n",
" },\n",
" \"StoppingCondition\": {\"MaxRuntimeInSeconds\": 43200},\n",
"}"
]
},
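{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here is a sketch of what a MetricDefinitions entry would look like inside the AlgorithmSpecification for a hypothetical custom container that prints lines like `validation-auc: 0.91` in its training log. It is not needed for the built-in XGBoost algorithm used in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: MetricDefinitions are needed for custom algorithm containers,\n",
"# not for the built-in XGBoost algorithm used in this notebook.\n",
"# The regex must capture the metric value from the training log as group 1.\n",
"custom_algorithm_specification = {\n",
"    \"TrainingImage\": \"<your-custom-image-uri>\",  # hypothetical custom container\n",
"    \"TrainingInputMode\": \"File\",\n",
"    \"MetricDefinitions\": [{\"Name\": \"validation:auc\", \"Regex\": \"validation-auc: ([0-9.]+)\"}],\n",
"}"
]
},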
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Launch_Hyperparameter_Tuning\n",
"Now we can launch a hyperparameter tuning job by calling create_hyper_parameter_tuning_job API. After the hyperparameter tuning job is created, we can go to SageMaker console to track the progress of the hyperparameter tuning job until it is completed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"smclient.create_hyper_parameter_tuning_job(\n",
" HyperParameterTuningJobName=tuning_job_name,\n",
" HyperParameterTuningJobConfig=tuning_job_config,\n",
" TrainingJobDefinition=training_job_definition,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"smclient.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)[\n",
" \"HyperParameterTuningJobStatus\"\n",
"]"
]
},
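{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rather than watching the console, you can also poll the API until the tuning job reaches a terminal state. A minimal sketch, reusing the `smclient` and `sleep` imports from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Poll the tuning job status every 60 seconds until it leaves the InProgress state\n",
"status = smclient.describe_hyper_parameter_tuning_job(\n",
"    HyperParameterTuningJobName=tuning_job_name\n",
")[\"HyperParameterTuningJobStatus\"]\n",
"while status == \"InProgress\":\n",
"    sleep(60)\n",
"    status = smclient.describe_hyper_parameter_tuning_job(\n",
"        HyperParameterTuningJobName=tuning_job_name\n",
"    )[\"HyperParameterTuningJobStatus\"]\n",
"    print(status)"
]
},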
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze tuning job results - after tuning job is completed\n",
"Please refer to \"HPO_Analyze_TuningJob_Results.ipynb\" to see example code to analyze the tuning job results."
]
},
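{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a quick in-notebook look, the SageMaker Python SDK can load all of the tuning job's training results into a data frame (this assumes the tuning job has completed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load every training job launched by the tuning job into a data frame\n",
"# and show the best results first (assumes the tuning job has completed)\n",
"from sagemaker.analytics import HyperParameterTuningJobAnalytics\n",
"\n",
"tuner_analytics = HyperParameterTuningJobAnalytics(tuning_job_name)\n",
"tuner_analytics.dataframe().sort_values(\"FinalObjectiveValue\", ascending=False).head()"
]
},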
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the best model\n",
"Now that we have got the best model, we can deploy it to an endpoint. Please refer to other SageMaker sample notebooks or SageMaker documentation to see how to deploy a model."
]
},
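{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sketch below outlines deployment with the low-level API: look up the best training job from the tuning job, create a Model from its artifacts, then create an endpoint config and an endpoint. It assumes the tuning job has completed successfully; the model, endpoint config, and endpoint names are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of deploying the tuning job's best model via the low-level API.\n",
"# Assumes the tuning job has completed successfully; names are illustrative.\n",
"best = smclient.describe_hyper_parameter_tuning_job(\n",
"    HyperParameterTuningJobName=tuning_job_name\n",
")[\"BestTrainingJob\"]\n",
"best_job = smclient.describe_training_job(TrainingJobName=best[\"TrainingJobName\"])\n",
"\n",
"model_name = tuning_job_name + \"-best-model\"\n",
"smclient.create_model(\n",
"    ModelName=model_name,\n",
"    ExecutionRoleArn=role,\n",
"    PrimaryContainer={\n",
"        \"Image\": training_image,\n",
"        \"ModelDataUrl\": best_job[\"ModelArtifacts\"][\"S3ModelArtifacts\"],\n",
"    },\n",
")\n",
"smclient.create_endpoint_config(\n",
"    EndpointConfigName=model_name,\n",
"    ProductionVariants=[\n",
"        {\n",
"            \"VariantName\": \"AllTraffic\",\n",
"            \"ModelName\": model_name,\n",
"            \"InstanceType\": \"ml.m4.xlarge\",\n",
"            \"InitialInstanceCount\": 1,\n",
"        }\n",
"    ],\n",
")\n",
"smclient.create_endpoint(EndpointName=model_name, EndpointConfigName=model_name)"
]
},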
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (Data Science)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-2:429704687514:image/datascience-1.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
},
"nbformat": 4,
"nbformat_minor": 4
}