{
"cells": [
{
"cell_type": "markdown",
"id": "54f4abf7",
"metadata": {},
"source": [
"# Distributed Data Parallel EfficientNet Training with PyTorch and SageMaker Distributed\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "54f4abf7",
"metadata": {},
"source": [
"\n",
"[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library (`smdistributed.dataparallel`) is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet.\n",
"\n",
"This notebook demonstrates how to use `smdistributed.dataparallel` with PyTorch on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train an EfficientNet model on a large image dataset such as [ImageNet](https://image-net.org/download.php) using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
"\n",
"The outline of steps is as follows:\n",
"\n",
"1. Stage the ImageNet dataset files on [Amazon S3](https://aws.amazon.com/s3/)\n",
"2. Create [Amazon FSx Lustre file-system](https://docs.aws.amazon.com/fsx/) and import data into the file-system from S3\n",
"3. Configure a data channel for training using Amazon FSx Lustre file-system.\n",
"3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)\n",
"5. Configure the estimator function options, like distribution strategy and hyperparameters.\n",
"7. Start training\n",
"\n",
"**NOTE:** With large training dataset such as ImageNet, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input file system for the SageMaker training job. FSx file input to SageMaker significantly cuts down training start up time on SageMaker because it avoids downloading the training data each time you start the training job (as done with S3 input for SageMaker training job) and provides good data read throughput.\n",
"\n",
"\n",
"**NOTE:** This example requires SageMaker Python SDK v2.X."
]
},
{
"cell_type": "markdown",
"id": "9f8e7c57",
"metadata": {},
"source": [
"## Amazon SageMaker Initialization\n",
"\n",
"Initialize the notebook instance. Get the AWS Region and a SageMaker execution role.\n",
"\n",
"### SageMaker role\n",
"\n",
"The following code cell defines `role` which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance. \n",
"\n",
"`role` must have permission to create a SageMaker training job and host a model. For granular policies you can use to grant these permissions, see [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you do not require fine-tuned permissions for this demo, you can use the IAM managed policy AmazonSageMakerFullAccess to complete this demo. \n",
"\n",
"As described above, since we will be using FSx, please make sure to attach `FSx Access` permission to this IAM role. If you do not require fine-tuned permissions for this demo, you can use the IAM managed policy AmazonFSxFullAccess to complete this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef899b94",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"! python3 -m pip install --upgrade sagemaker\n",
"import sagemaker\n",
"from sagemaker import get_execution_role\n",
"from sagemaker.estimator import Estimator\n",
"import boto3\n",
"\n",
"sagemaker_session = sagemaker.Session()\n",
"bucket = sagemaker_session.default_bucket()\n",
"\n",
"role = (\n",
" get_execution_role()\n",
") # provide a pre-existing role ARN as an alternative to creating a new role\n",
"role_name = role.split([\"/\"][-1])\n",
"print(f\"SageMaker Execution Role: {role}\")\n",
"print(f\"The name of the Execution role: {role_name[-1]}\")\n",
"\n",
"client = boto3.client(\"sts\")\n",
"account = client.get_caller_identity()[\"Account\"]\n",
"print(f\"AWS account: {account}\")\n",
"\n",
"session = boto3.session.Session()\n",
"region = session.region_name\n",
"print(f\"AWS region: {region}\")"
]
},
{
"cell_type": "markdown",
"id": "9c8502a8",
"metadata": {},
"source": [
"To verify that the role above has required permissions:\n",
"\n",
"1. Go to the [IAM console](https://console.aws.amazon.com/iam/home).\n",
"2. Select **Roles**.\n",
"3. Enter the role name in the search box to search for that role. \n",
"4. Select the role.\n",
"5. Use the **Permissions** tab to verify this role has required permissions attached."
]
},
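{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional programmatic check, you can also list the managed policies attached to the role with the IAM API. The following is a minimal sketch; it assumes the notebook role is allowed to call `iam:ListAttachedRolePolicies`, and it only covers managed policies (inline policies would need `list_role_policies`).\n",
"\n",
"```python\n",
"import boto3\n",
"\n",
"iam = boto3.client(\"iam\")\n",
"\n",
"# The role name is the last segment of the role ARN\n",
"attached = iam.list_attached_role_policies(RoleName=role.split(\"/\")[-1])\n",
"for policy in attached[\"AttachedPolicies\"]:\n",
"    print(policy[\"PolicyName\"])\n",
"```"
]
},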
{
"cell_type": "markdown",
"id": "283a7ea6",
"metadata": {},
"source": [
"## Preparing FSx Input for SageMaker\n",
"\n",
"Pre-requisite: [Create an Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).\n",
"\n",
"1. Download, prepare, and upload your training dataset on Amazon S3. Follow [step 2-5 from this guide to download and decompress the training images into a local directory](https://github.com/HerringForks/SMDDP-Examples/tree/main/pytorch/image_classification/efficientnet#quick-start-guide). Then, [upload the data to Amazon S3](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html).\n",
"2. Follow these [steps to create a FSx linked with your S3 bucket with training data](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step1.html). Make sure to add an endpoint to your VPC allowing S3 access.\n",
"\n",
"**Important Caveats**\n",
"\n",
"- You need to use the same `subnet`, `vpc` and `security group` used with FSx when launching the SageMaker notebook instance. The same configurations will be used by your SageMaker training job.\n",
"- Make sure you set the [appropriate inbound/output rules in the `security group`](https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html). Specifically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job.\n",
"- Make sure `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`.\n",
"\n",
"### Specify the FSx information for the training job\n",
"Go to your [FSx console](console.aws.amazon.com/fsx/) and access your recently created filesystem. In the **Summary** section, note the _File system ID_ and the _Mount name_. Also, in the **Network & security** tab, click on the Network Interface to display the _Subnet ID_ and _Security groups_. Use this information to complete the cell below and run it to configure the FSx input for your SageMaker training job."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "062ddf8f",
"metadata": {},
"outputs": [],
"source": [
"# Configure FSx Input for your SageMaker Training job\n",
"from sagemaker.inputs import FileSystemInput\n",
"\n",
"# FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'\n",
"file_system_id = \"<fsx_id>\"\n",
"# FSx path with your training data # Example: '/fsx_mount_name/imagenet'\n",
"file_system_directory_path = \"/<fsx_mount_name>/<path_to_data>\"\n",
"file_system_access_mode = \"rw\"\n",
"file_system_type = \"FSxLustre\"\n",
"train_fs = FileSystemInput(\n",
" file_system_id=file_system_id,\n",
" file_system_type=file_system_type,\n",
" directory_path=file_system_directory_path,\n",
" file_system_access_mode=file_system_access_mode,\n",
")\n",
"# Specify the training data channel using the FSx filesystem. This will be provided to the SageMaker training job later\n",
"data_channels = {\"train\": train_fs}\n",
"\n",
"# The following variables will be used later in the notebook\n",
"fsx_subnets = [\"<subnet_id>\"] # Should be the subnet used for FSx. Example: subnet-0f9XXXX\n",
"fsx_security_group_ids = [\n",
" \"<security_group_id>\"\n",
"] # Should be the security group used for FSx. sg-03ZZZZZZ"
]
},
{
"cell_type": "markdown",
"id": "06b81abd",
"metadata": {},
"source": [
"## Prepare SageMaker Training Images\n",
"\n",
"SageMaker by default uses the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) PyTorch training image. In this step, we use it as a base image and install additional dependencies required for training EfficientNet model.\n",
"\n",
"Run the cell below to indicate the account that hosts the DLC. Note that the account ID of the PyTorch DLC image might vary depending on AWS Regions. To look up the right DLC image URI with a corresponding account ID for your Region, see [Available Deep Learning Containers Images](https://github.com/aws/deep-learning-containers/blob/master/available_images.md)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "373bc431",
"metadata": {},
"outputs": [],
"source": [
"dlc_account_id = 763104351884 # By default, set the account ID used for most regions"
]
},
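{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, the SageMaker Python SDK can resolve the full DLC image URI (including the account ID) for your Region. The snippet below is a sketch only; the framework and Python versions are assumptions and should match the base image referenced in the Dockerfile.\n",
"\n",
"```python\n",
"import sagemaker\n",
"\n",
"# Look up the PyTorch training DLC URI for the current region\n",
"base_image_uri = sagemaker.image_uris.retrieve(\n",
"    framework=\"pytorch\",\n",
"    region=region,\n",
"    version=\"1.12.0\",  # assumed framework version\n",
"    py_version=\"py38\",\n",
"    image_scope=\"training\",\n",
"    instance_type=\"ml.p4d.24xlarge\",\n",
")\n",
"print(base_image_uri)\n",
"```"
]
},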
{
"cell_type": "markdown",
"id": "84b97204",
"metadata": {},
"source": [
"**Minimum PyTorch version**\n",
"\n",
"Note that the training script used in this notebook references the SageMaker distributed data parallel library as a backend for PyTorch. That capability was first introduced in the version 1.4.0 of the library, which was integrated with the support for PyTorch 1.10.2. Hence, the base DLC should use PT >= 1.10.2.\n",
"\n",
"Run the cells below to assign a name for your image and observe the dockerfile."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c6d9f76",
"metadata": {},
"outputs": [],
"source": [
"image = (\n",
" \"pt-smdataparallel-efficientnet-sagemaker\" # Example: pt-smdataparallel-efficientnet-sagemaker\n",
")\n",
"tag = \"latest\" # Example: latest"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac4af024",
"metadata": {},
"outputs": [],
"source": [
"!pygmentize ./Dockerfile"
]
},
{
"cell_type": "markdown",
"id": "46849267",
"metadata": {},
"source": [
"### Get the training script\n",
"In this [GitHub repository](https://github.com/HerringForks/SMDDP-Examples/tree/main/pytorch/image_classification), we have forked an EfficientNet example from [NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets) and modified the training script to work with SageMaker distributed data parallel library. Starting from v1.4.0, the library is available as a backend option for the PyTorch distributed package. Hence, only 2 changes are required to adapt any PyTorch DDP script. Those are:\n",
" \n",
"- To import the `smdistributed.dataparallel.torch.torch_smddp` module.\n",
" \n",
"- To use `\"smddp\"` as the backend when calling `dist.init_process_group`.\n",
" \n",
" Example:\n",
" \n",
"```python\n",
"import smdistributed.dataparallel.torch.torch_smddp\n",
"import torch.distributed as dist\n",
"\n",
"dist.init_process_group(backend='smddp')\n",
"```\n",
"Learn more about how to [modify a PyTorch training script to use SageMaker data parallel library](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt.html).\n",
"\n",
"\n",
"Run the cell below to clone the repository that contains the adaptation of EfficientNet with PyTorch-SMDataParallel"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24136cab",
"metadata": {},
"outputs": [],
"source": [
"# Note that the requirements file is removed as those dependencies are built into the docker container.\n",
"!rm -rf SMDDP-Examples\n",
"!git clone --recursive https://github.com/HerringForks/SMDDP-Examples.git && \\\n",
" rm SMDDP-Examples/pytorch/image_classification/requirements.txt"
]
},
{
"cell_type": "markdown",
"id": "72c147c3",
"metadata": {},
"source": [
"### Build the docker container and push it to ECR\n",
"\n",
"The last step to prepare the training image is to build the docker container and push it to [Amazon ECR](https://aws.amazon.com/ecr/). To do that, we provide with the following script that takes care of setting up Amazon ECR repository, handling permissions, building the docker container and pushing it to Amazon ECR."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c487dcf",
"metadata": {},
"outputs": [],
"source": [
"!pygmentize ./build_and_push.sh"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b637411",
"metadata": {},
"outputs": [],
"source": [
"! aws ecr get-login-password --region {region} | docker login --username AWS --password-stdin {dlc_account_id}.dkr.ecr.{region}.amazonaws.com\n",
"! chmod +x build_and_push.sh; bash build_and_push.sh {dlc_account_id} {region} {image} {tag}"
]
},
{
"cell_type": "markdown",
"id": "6f5af32a",
"metadata": {},
"source": [
"### Save the docker image name from Amazon ECR\n",
"Now that the docker image is in Amazon ECR, it is ready to be used in the training job. Save the name in a variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "668d3549",
"metadata": {},
"outputs": [],
"source": [
"docker_image = f\"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}\""
]
},
{
"cell_type": "markdown",
"id": "6bee2a31",
"metadata": {},
"source": [
"## Configure SageMaker PyTorch Estimator function options\n",
"\n",
"In the following code blocks, you can update the estimator function to use a different instance type, instance count, distribution strategy and hyperparameters. You're also passing an entry point to the training script you downloaded in previous steps.\n",
"\n",
"**Instance types**\n",
"\n",
"`smdistributed.dataparallel` supports model training on SageMaker with the following instance types only. For best performance, it is recommended you use an instance type that supports [Amazon Elastic Fabric Adapter (EFA)](https://aws.amazon.com/hpc/efa/).\n",
"\n",
"1. `ml.p3.16xlarge`\n",
"1. `ml.p3dn.24xlarge` [Recommended]\n",
"1. `ml.p4d.24xlarge` [Recommended]\n",
"\n",
"**Instance count**\n",
"\n",
"To get the best performance and the most out of `smdistributed.dataparallel`, you should use at least 2 instances, but you can also use 1 for testing this example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a572ab2",
"metadata": {},
"outputs": [],
"source": [
"instance_type = \"ml.p4d.24xlarge\" # Other supported instance type: ml.p3.16xlarge, ml.p3dn.24xlarge\n",
"instance_count = 2 # You can use 2, 4, 8, etc."
]
},
{
"cell_type": "markdown",
"id": "ff4a1c4f",
"metadata": {},
"source": [
"**Distribution strategy**\n",
"\n",
"Note that to use DDP mode, you need to update the `distribution` strategy, and set it to use `smdistributed dataparallel`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "387dcb5a",
"metadata": {},
"outputs": [],
"source": [
"dist_strategy = {\"smdistributed\": {\"dataparallel\": {\"enabled\": True}}}"
]
},
{
"cell_type": "markdown",
"id": "1a6dc2e5",
"metadata": {},
"source": [
"**Hyperparameters**\n",
"\n",
"The EfficientNet training script used in this notebook, provides an internal mechanism to select a set of default hyperparameters based on model, training mode, precision and platform. You can read more in the [default configuration section](https://github.com/HerringForks/SMDDP-Examples/tree/main/pytorch/image_classification/efficientnet#default-configuration).\n",
"If you want to override any of the values, you can specify the hyperparameters manually in the cell below. For example, you could add `\"epochs\": 10` to run only for 10 epochs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8e7ca33",
"metadata": {},
"outputs": [],
"source": [
"# Configure the hyperparameters\n",
"model_name = \"efficientnet-b4\" # Either efficientnet-b0 or efficientnet-b4\n",
"hyperparameters = {\n",
" \"model\": model_name,\n",
" \"mode\": \"benchmark_training\", # Use benchmark_training_short for a quick run with syntetic data\n",
" \"precision\": \"AMP\", # TF32 or AMP\n",
" \"platform\": \"P4D\",\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "bd629a7c",
"metadata": {},
"source": [
"**Configure metrics to be displayed for the training job**\n",
"\n",
"In this example, we show how to record a custom training throughput metric based on a regex expression. Learn more about [defining training metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "834a12a6",
"metadata": {},
"outputs": [],
"source": [
"metric_definitions = [\n",
" {\"Name\": \"train_throughput\", \"Regex\": \"train.compute_ips : (.*?) \"},\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "80f134ce",
"metadata": {},
"source": [
"**Create the estimator function and pass the parameters**\n",
"\n",
"Use all parameters from previous sections to configure the estimator function."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4758d36",
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.pytorch import PyTorch\n",
"\n",
"estimator = PyTorch(\n",
" entry_point=\"entry_point.py\",\n",
" role=role,\n",
" image_uri=docker_image,\n",
" source_dir=\".\",\n",
" instance_count=instance_count,\n",
" instance_type=instance_type,\n",
" py_version=\"py38\",\n",
" sagemaker_session=sagemaker_session,\n",
" hyperparameters=hyperparameters,\n",
" subnets=fsx_subnets,\n",
" security_group_ids=fsx_security_group_ids,\n",
" debugger_hook_config=False,\n",
" distribution=dist_strategy,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "239ee873",
"metadata": {},
"source": [
"## Start the SageMaker training job\n",
"The last step before launching the training job is to assign it a name. It is used as prefix to the SageMaker training job, so you can identify it easily in the [SageMaker console](console.aws.amazon.com/sagemaker/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9519f413",
"metadata": {},
"outputs": [],
"source": [
"job_name = f\"pt-smddp-{model_name}-{instance_count}p4d\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "693dfcaf",
"metadata": {},
"outputs": [],
"source": [
"# Submit SageMaker training job\n",
"estimator.fit(inputs=data_channels, job_name=job_name)"
]
},
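{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the training job completes, the throughput metric defined earlier can be retrieved programmatically. The following is a minimal sketch using the `DescribeTrainingJob` API; it assumes the job has finished and that the metric regex matched at least one log line.\n",
"\n",
"```python\n",
"import boto3\n",
"\n",
"sm_client = boto3.client(\"sagemaker\")\n",
"\n",
"# Print the final value of each metric recorded for the training job\n",
"desc = sm_client.describe_training_job(TrainingJobName=job_name)\n",
"for metric in desc.get(\"FinalMetricDataList\", []):\n",
"    print(metric[\"MetricName\"], metric[\"Value\"])\n",
"```"
]
},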
{
"cell_type": "markdown",
"id": "2eb41e8f",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4be7704d",
"metadata": {},
"outputs": [],
"source": [
"model_data = estimator.model_data\n",
"print(\"Storing {} as model_data\".format(model_data))\n",
"%store model_data"
]
},
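{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, deployment with the SageMaker Python SDK typically looks like the sketch below. This is a minimal illustration only; the `inference.py` entry point, framework version, and instance type are assumptions rather than part of this example, and the inference notebook covers the full workflow.\n",
"\n",
"```python\n",
"from sagemaker.pytorch import PyTorchModel\n",
"\n",
"# Hypothetical deployment sketch -- adjust the entry point, versions, and instance type\n",
"pytorch_model = PyTorchModel(\n",
"    model_data=model_data,\n",
"    role=role,\n",
"    entry_point=\"inference.py\",  # assumed custom inference handler\n",
"    framework_version=\"1.12\",\n",
"    py_version=\"py38\",\n",
")\n",
"predictor = pytorch_model.deploy(\n",
"    initial_instance_count=1, instance_type=\"ml.g4dn.xlarge\"\n",
")\n",
"```"
]
},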
{
"cell_type": "markdown",
"id": "c4a55ea2",
"metadata": {},
"source": [
"## Clean Up\n",
"\n",
"To avoid incurring unnecessary charges, follow these [steps to use the AWS Management Console to delete resources such as endpoints, notebook instances, S3 buckets, and CloudWatch logs](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_amazonei_pytorch_latest_p36",
"language": "python",
"name": "conda_amazonei_pytorch_latest_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}