{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hyperparameter Tuning with Amazon SageMaker and MXNet\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_**Creating a Hyperparameter Tuning Job for an MXNet Network**_\n",
"\n",
"---\n",
"\n",
"---\n",
"\n",
"\n",
"## Contents\n",
"\n",
"1. [Background](#Background)\n",
"1. [Setup](#Setup)\n",
"1. [Data](#Data)\n",
"1. [Code](#Code)\n",
"1. [Tune](#Train)\n",
"1. [Wrap-up](#Wrap-up)\n",
"\n",
"---\n",
"\n",
"## Background\n",
"\n",
"This example notebook focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using MXNet distributed training. It leverages SageMaker's hyperparameter tuning to kick off multiple training jobs with different hyperparameter combinations, to find the set with best model performance. This is an important step in the machine learning process as hyperparameter settings can have a large impact on model accuracy. In this example, we'll use the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) to create a hyperparameter tuning job for an MXNet estimator.\n",
"\n",
"---\n",
"\n",
"## Setup\n",
"\n",
"_This notebook was created and tested on an ml.m4.xlarge notebook instance._\n",
"\n",
"Let's start by specifying:\n",
"\n",
"- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the notebook instance, training, and hosting.\n",
"- The IAM role arn used to give training and hosting access to your data. See the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/using-identity-based-policies.html) for more details on creating these. Note, if a role not associated with the current notebook instance, or more than one role is required for training and/or hosting, please replace `sagemaker.get_execution_role()` with a the appropriate full IAM role arn string(s)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"isConfigCell": true
},
"outputs": [],
"source": [
"import sagemaker\n",
"\n",
"role = sagemaker.get_execution_role()"
]
},
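{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (the `bucket` and `prefix` names below are our own placeholders; this notebook reads training data from a shared public bucket, and the tuner writes model artifacts to the session's default bucket unless you override `output_path`), an S3 location could be specified like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default S3 bucket for this session; swap in your own bucket/prefix if preferred.\n",
"# The bucket should be in the same region as the notebook instance, training, and hosting.\n",
"bucket = sagemaker.Session().default_bucket()\n",
"prefix = \"DEMO-hpo-mxnet\"  # placeholder prefix for organizing output artifacts"
]
},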
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll import the Python libraries we'll need."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sagemaker\n",
"import boto3\n",
"from sagemaker.mxnet import MXNet\n",
"from sagemaker.tuner import (\n",
" IntegerParameter,\n",
" CategoricalParameter,\n",
" ContinuousParameter,\n",
" HyperparameterTuner,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Data\n",
"\n",
"The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). See [here](http://yann.lecun.com/exdb/mnist/) for more details on MNIST.\n",
"\n",
"For this example notebook we'll use a version of the dataset that's already been published in the desired format to a shared S3 bucket. Let's specify that location now."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"region = boto3.Session().region_name\n",
"train_data_location = \"s3://sagemaker-sample-data-{}/mxnet/mnist/train\".format(region)\n",
"test_data_location = \"s3://sagemaker-sample-data-{}/mxnet/mnist/test\".format(region)"
]
},
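{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check (a quick sketch using plain `boto3`; the exact key layout under `mxnet/mnist` is assumed from the paths above), we can list a few objects at the training data location:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Peek at the shared (read-only) sample-data bucket to confirm the data exists\n",
"s3 = boto3.client(\"s3\")\n",
"response = s3.list_objects_v2(\n",
"    Bucket=\"sagemaker-sample-data-{}\".format(region), Prefix=\"mxnet/mnist/train\", MaxKeys=5\n",
")\n",
"for obj in response.get(\"Contents\", []):\n",
"    print(obj[\"Key\"], obj[\"Size\"])"
]
},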
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Code\n",
"\n",
"To use SageMaker's pre-built MXNet containers, we need to pass in an MXNet script for the container to run. For our example, we'll define several functions, including:\n",
"- `load_data()` and `find_file()` which help bring in our MNIST dataset as NumPy arrays\n",
"- `build_graph()` which defines our neural network structure\n",
"- `train()` which is the main function that is run during each training job and calls the other functions in order to read in the dataset, create a neural network, and train it.\n",
"\n",
"There are also several functions for hosting which we won't define, like `input_fn()`, `output_fn()`, and `predict_fn()`. These will take on their default values as described [here](https://github.com/aws/sagemaker-python-sdk#model-serving), and are not important for the purpose of showcasing SageMaker's hyperparameter tuning."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat mnist.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we've specified and tested our training script to ensure it works, we can start our tuning job. Testing can be done in either local mode or using SageMaker training. Please see the [MXNet MNIST example notebooks](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb) for more detail."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Tune\n",
"\n",
"Similar to training a single MXNet job in SageMaker, we define our MXNet estimator passing in the MXNet script, IAM role, (per job) hardware configuration, and any hyperparameters we're not tuning."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimator = MXNet(\n",
" entry_point=\"mnist.py\",\n",
" role=role,\n",
" instance_count=1,\n",
" instance_type=\"ml.m4.xlarge\",\n",
" sagemaker_session=sagemaker.Session(),\n",
" py_version=\"py3\",\n",
" framework_version=\"1.4.1\",\n",
" base_job_name=\"DEMO-hpo-mxnet\",\n",
" hyperparameters={\"batch_size\": 100},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once we've defined our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.\n",
"- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`\n",
"- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`\n",
"- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`\n",
"\n",
"*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning `thresh` as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with possible values of 0.01, 0.1, 0.15, or 0.2.*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hyperparameter_ranges = {\n",
" \"optimizer\": CategoricalParameter([\"sgd\", \"Adam\"]),\n",
" \"learning_rate\": ContinuousParameter(0.01, 0.2),\n",
" \"num_epoch\": IntegerParameter(10, 50),\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we'll specify the objective metric that we'd like to tune and its definition. This includes the regular expression (Regex) needed to extract that metric from the CloudWatch logs of our training job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"objective_metric_name = \"Validation-accuracy\"\n",
"metric_definitions = [{\"Name\": \"Validation-accuracy\", \"Regex\": \"Validation-accuracy=([0-9\\\\.]+)\"}]"
]
},
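{
"cell_type": "markdown",
"metadata": {},
"source": [
"To sanity-check that regular expression locally before launching any jobs (the log line below is a made-up example of the `Validation-accuracy=...` lines the training script emits):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"# Hypothetical log line in the format the training script writes to CloudWatch\n",
"sample_line = \"Epoch[9] Train-accuracy=0.989000 Validation-accuracy=0.972300\"\n",
"match = re.search(\"Validation-accuracy=([0-9\\\\.]+)\", sample_line)\n",
"print(match.group(1))"
]
},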
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we'll create a `HyperparameterTuner` object, which we pass:\n",
"- The MXNet estimator we created above\n",
"- Our hyperparameter ranges\n",
"- Objective metric name and definition\n",
"- Number of training jobs to run in total and how many training jobs should be run simultaneously. More parallel jobs will finish tuning sooner, but may sacrifice accuracy. We recommend you set the parallel jobs value to less than 10% of the total number of training jobs (we'll set it higher just for this example to keep it short).\n",
"- Whether we should maximize or minimize our objective metric (we haven't specified here since it defaults to 'Maximize', which is what we want for validation accuracy)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tuner = HyperparameterTuner(\n",
" estimator,\n",
" objective_metric_name,\n",
" hyperparameter_ranges,\n",
" metric_definitions,\n",
" max_jobs=9,\n",
" max_parallel_jobs=3,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, we can start our tuning job by calling `.fit()` and passing in the S3 paths to our train and test datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tuner.fit({\"train\": train_data_location, \"test\": test_data_location})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's just run a quick check of the hyperparameter tuning jobs status to make sure it started successfully and is `InProgress`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"boto3.client(\"sagemaker\").describe_hyper_parameter_tuning_job(\n",
" HyperParameterTuningJobName=tuner.latest_tuning_job.job_name\n",
")[\"HyperParameterTuningJobStatus\"]"
]
},
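{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd rather block in this notebook until the job completes (entirely optional, since the tuning job runs server-side either way), the tuner object exposes a `wait()` helper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: uncomment to block until all of the tuning job's training jobs finish\n",
"# tuner.wait()"
]
},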
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Wrap-up\n",
"\n",
"Now that we've started our hyperparameter tuning job, it will run in the background and we can close this notebook. Once finished, we can use the [HPO Analysis notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) to determine which set of hyperparameters worked best.\n",
"\n",
"For more detail on Amazon SageMaker's Hyperparameter Tuning, please refer to the AWS documentation. "
]
},
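{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a lighter-weight alternative to the analysis notebook, once the tuning job has finished you can pull its results into a pandas DataFrame (a minimal sketch using the SDK's tuning-job analytics; it requires at least one completed training job, and column names may vary by SDK version):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One row per training job, sorted so the best objective value is first\n",
"results_df = tuner.analytics().dataframe()\n",
"results_df.sort_values(\"FinalObjectiveValue\", ascending=False).head()"
]
},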
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_python3",
"language": "python",
"name": "conda_python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
},
"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
},
"nbformat": 4,
"nbformat_minor": 2
}