{
"cells": [
{
"cell_type": "markdown",
"id": "db45f0c5",
"metadata": {},
"source": [
"# Distributed Data Parallel EfficientNet Training with TensorFlow2 and SageMaker Distributed\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "db45f0c5",
"metadata": {},
"source": [
"\n",
"[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library (`smdistributed.dataparallel`) is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet.\n",
"\n",
"This notebook demonstrates how to use `smdistributed.dataparallel` with TensorFlow(version 2.6.0) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train an EfficientNet model on a large image dataset such as [ImageNet](https://image-net.org/download.php) using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
"\n",
"The outline of steps is as follows:\n",
"\n",
"1. Stage the ImageNet dataset as a collection of [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) files on [Amazon S3](https://aws.amazon.com/s3/)\n",
"2. Create Amazon FSx Lustre file-system and import data into the file-system from S3\n",
"3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)\n",
"4. Configure data input channels for SageMaker\n",
"5. Configure hyper-prarameters\n",
"6. Define training metrics\n",
"7. Define training job, set distribution strategy to SMDataParallel and start training\n",
"\n",
"**NOTE:** With large training dataset such as ImageNet, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input file system for the SageMaker training job. FSx file input to SageMaker significantly cuts down training start up time on SageMaker because it avoids downloading the training data each time you start the training job (as done with S3 input for SageMaker training job) and provides good data read throughput.\n",
"\n",
"\n",
"**NOTE:** This example requires SageMaker Python SDK v2.X."
]
},
{
"cell_type": "markdown",
"id": "62efb0c1",
"metadata": {},
"source": [
"## Amazon SageMaker Initialization\n",
"\n",
"Initialize the notebook instance. Get the AWS Region and a SageMaker execution role.\n",
"\n",
"### SageMaker role\n",
"\n",
"The following code cell defines `role` which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance. \n",
"\n",
"`role` must have permission to create a SageMaker training job and host a model. For granular policies you can use to grant these permissions, see [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you do not require fine-tuned permissions for this demo, you can use the IAM managed policy AmazonSageMakerFullAccess to complete this demo. \n",
"\n",
"As described above, since we will be using FSx, please make sure to attach `FSx Access` permission to this IAM role. If you do not require fine-tuned permissions for this demo, you can use the IAM managed policy AmazonFSxFullAccess to complete this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94d8089f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"! python3 -m pip install --upgrade sagemaker\n",
"import sagemaker\n",
"from sagemaker import get_execution_role\n",
"from sagemaker.estimator import Estimator\n",
"import boto3\n",
"\n",
"sagemaker_session = sagemaker.Session()\n",
"bucket = sagemaker_session.default_bucket()\n",
"\n",
"role = (\n",
" get_execution_role()\n",
") # provide a pre-existing role ARN as an alternative to creating a new role\n",
"role_name = role.split([\"/\"][-1])\n",
"print(f\"SageMaker Execution Role: {role}\")\n",
"print(f\"The name of the Execution role: {role_name[-1]}\")\n",
"\n",
"client = boto3.client(\"sts\")\n",
"account = client.get_caller_identity()[\"Account\"]\n",
"print(f\"AWS account: {account}\")\n",
"\n",
"session = boto3.session.Session()\n",
"region = session.region_name\n",
"print(f\"AWS region: {region}\")"
]
},
{
"cell_type": "markdown",
"id": "76353ab4",
"metadata": {},
"source": [
"To verify that the role above has required permissions:\n",
"\n",
"1. Go to the IAM console: https://console.aws.amazon.com/iam/home.\n",
"2. Select **Roles**.\n",
"3. Enter the role name in the search box to search for that role. \n",
"4. Select the role.\n",
"5. Use the **Permissions** tab to verify this role has required permissions attached."
]
},
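{
"cell_type": "markdown",
"id": "1f2e3d4c",
"metadata": {},
"source": [
"You can also check the attached policies programmatically. The following cell is a minimal sketch that uses the IAM API through `boto3` to list the managed policies attached to the execution role; it assumes `role_name` from the initialization cell above and requires the `iam:ListAttachedRolePolicies` permission."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a3b4c5d",
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: list the managed policies attached to the execution role.\n",
"# Assumes `role_name` was set in the initialization cell above and that the\n",
"# caller has iam:ListAttachedRolePolicies permission.\n",
"iam = boto3.client(\"iam\")\n",
"attached = iam.list_attached_role_policies(RoleName=role_name)\n",
"for policy in attached[\"AttachedPolicies\"]:\n",
"    print(policy[\"PolicyName\"])"
]
},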
{
"cell_type": "markdown",
"id": "9836f45e",
"metadata": {},
"source": [
"## Prepare SageMaker Training Images\n",
"\n",
"1. SageMaker by default use the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) TensorFlow training image. In this step, we use it as a base image and install additional dependencies required for training EfficientNet model.\n",
"2. In this [GitHub repository](https://github.com/HerringForks/SMDDP-Examples/tree/main/tensorflow/efficientnet), we have forked an EfficientNet example from [NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/Classification/ConvNets/efficientnet) and adapted the training script to work with `smdistributed.dataparallel`.\n",
"\n",
"### Build and Push Docker Image to ECR\n",
"\n",
"Run the below command build the docker image and push it to ECR."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb5b7fa7",
"metadata": {},
"outputs": [],
"source": [
"image = \"<IMAGE_NAME>\" # Example: tf2-smdataparallel-efficientnet-sagemaker\n",
"tag = \"<IMAGE_TAG>\" # Example: latest"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce70c7c8",
"metadata": {},
"outputs": [],
"source": [
"!pygmentize ./Dockerfile"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c712784f",
"metadata": {},
"outputs": [],
"source": [
"!pygmentize ./build_and_push.sh"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f00e075",
"metadata": {},
"outputs": [],
"source": [
"# build and tag the image and upload it to ECR.\n",
"%%time\n",
"! aws ecr get-login-password --region {region} | docker login --username AWS --password-stdin 763104351884.dkr.ecr.{region}.amazonaws.com\n",
"! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag}"
]
},
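{
"cell_type": "markdown",
"id": "3c4d5e6f",
"metadata": {},
"source": [
"Optionally, confirm that the image was pushed successfully. The following cell is a sketch that assumes the repository is named after `image` and uses the `aws ecr describe-images` CLI command to list the pushed tags."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f7a",
"metadata": {},
"outputs": [],
"source": [
"# Optional check (assumes an ECR repository named after `image` exists in this region):\n",
"# list the image tags pushed to the repository.\n",
"! aws ecr describe-images --repository-name {image} --region {region} --query \"imageDetails[].imageTags\""
]
},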
{
"cell_type": "markdown",
"id": "ec8919bc",
"metadata": {},
"source": [
"## Preparing FSx Input for SageMaker\n",
"\n",
"1. Download, prepare, and upload your training dataset on Amazon S3. Follow these [steps to download and convert the ImageNet dataset to TFRecords format](https://github.com/kmonachopoulos/ImageNet-to-TFrecord).\n",
"2. Follow these [steps to create a FSx linked with your S3 bucket with training data](https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html). Make sure to add an endpoint to your VPC allowing S3 access.\n",
"3. Follow these [steps to configure your SageMaker training job to use FSx](https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/).\n",
"\n",
"### Important Caveats\n",
"\n",
"1. You need to use the same `subnet` and `vpc` and `security group` used with FSx when launching the SageMaker notebook instance. The same configurations will be used by your SageMaker training job.\n",
"2. Make sure you set the [appropriate inbound/output rules in the `security group`](https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html). Specifically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job.\n",
"3. Make sure `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`."
]
},
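{
"cell_type": "markdown",
"id": "5e6f7a8b",
"metadata": {},
"source": [
"Once the file system is created, you can verify that it is ready before training. The following cell is a minimal sketch using the FSx API through `boto3`; replace `<FSX_ID>` with your file system ID (the same value you assign to `file_system_id` later in this notebook)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7a8b9c",
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: check that the FSx for Lustre file system is available.\n",
"# Replace <FSX_ID> with your file system ID, e.g. 'fs-0bYYYYYY'.\n",
"fsx = boto3.client(\"fsx\")\n",
"response = fsx.describe_file_systems(FileSystemIds=[\"<FSX_ID>\"])\n",
"fs = response[\"FileSystems\"][0]\n",
"print(\"Lifecycle:\", fs[\"Lifecycle\"])  # should be AVAILABLE before training\n",
"print(\"Mount name:\", fs[\"LustreConfiguration\"][\"MountName\"])"
]
},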
{
"cell_type": "markdown",
"id": "8ace4f98",
"metadata": {},
"source": [
"## SageMaker TensorFlow Estimator function options\n",
"\n",
"In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You're also passing in the training script you reviewed in the previous cell.\n",
"\n",
"**Instance types**\n",
"\n",
"`smdistributed.dataparallel` supports model training on SageMaker with the following instance types only. For best performance, it is recommended you use an instance type that supports Amazon Elastic Fabric Adapter (ml.p3dn.24xlarge and ml.p4d.24xlarge).\n",
"\n",
"1. ml.p3.16xlarge\n",
"1. ml.p3dn.24xlarge [Recommended]\n",
"1. ml.p4d.24xlarge [Recommended]\n",
"\n",
"**Instance count**\n",
"\n",
"To get the best performance and the most out of `smdistributed.dataparallel`, you should use at least 2 instances, but you can also use 1 for testing this example.\n",
"\n",
"**Distribution strategy**\n",
"\n",
"Note that to use DDP mode, you need to update the `distribution` strategy, and set it to use `smdistributed dataparallel`.\n",
"\n",
"### Training script\n",
"\n",
"In this [GitHub repository](https://github.com/HerringForks/SMDDP-Examples/tree/main/tensorflow/efficientnet), we have made reference `smdistributed.dataparallel` TensorFlow EfficientNet training script available for your use. Clone the repository."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d2bbed39",
"metadata": {},
"outputs": [],
"source": [
"# Clone herring forks repository for reference implementation BERT with TensorFlow2-SMDataParallel\n",
"!rm -rf SMDDP-Examples\n",
"!git clone --recursive https://github.com/HerringForks/SMDDP-Examples.git"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1832a42e",
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.tensorflow import TensorFlow"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6123f021",
"metadata": {},
"outputs": [],
"source": [
"instance_type = \"ml.p4d.24xlarge\" # Other supported instance type: ml.p3.16xlarge, ml.p3dn.24xlarge\n",
"instance_count = 2 # You can use 2, 4, 8 etc.\n",
"docker_image = f\"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}\" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE\n",
"username = \"AWS\"\n",
"subnets = [\"<SUBNET_ID>\"] # Should be same as Subnet used for FSx. Example: subnet-0f9XXXX\n",
"security_group_ids = [\n",
" \"<SECURITY_GROUP_ID>\"\n",
"] # Should be same as Security group used for FSx. sg-03ZZZZZZ\n",
"job_name = \"smdataparallel-efficientnet-tf2-fsx-2p4d\" # This job name is used as prefix to the sagemaker training job. Makes it easy for your look for your training job in SageMaker Training job console.\n",
"file_system_id = \"<FSX_ID>\" # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61391941",
"metadata": {},
"outputs": [],
"source": [
"# Configure the hyper-parameters\n",
"hyperparameters = {\n",
" \"mode\": \"train\",\n",
" \"arch\": \"efficientnet-b4\",\n",
" \"use_amp\": \"\",\n",
" \"use_xla\": \"\",\n",
" \"augmenter_name\": \"autoaugment\",\n",
" \"weight_init\": \"fan_out\",\n",
" \"lr_decay\": \"cosine\",\n",
" \"max_epochs\": 5,\n",
" \"train_batch_size\": 64,\n",
" \"log_steps\": 10,\n",
" \"save_checkpoint_freq\": 10,\n",
" \"lr_init\": 0.005,\n",
" \"batch_norm\": \"syncbn\",\n",
" \"mixup_alpha\": 0.2,\n",
" \"weight_decay\": 5e-6,\n",
"}"
]
},
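{
"cell_type": "markdown",
"id": "7a8b9c0d",
"metadata": {},
"source": [
"SageMaker script mode passes each entry of this dictionary to the entry point as command-line arguments of the form `--key value`. The following cell is an illustrative sketch of the approximate argument list that `main.py` receives, not SageMaker's exact serialization code."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b9c0d1e",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: approximate the command line that script mode\n",
"# builds from the hyperparameters dictionary when it invokes main.py.\n",
"args = []\n",
"for key, value in hyperparameters.items():\n",
"    args += [f\"--{key}\", str(value)]\n",
"print(\"main.py \" + \" \".join(args))"
]
},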
{
"cell_type": "code",
"execution_count": null,
"id": "161d67cd",
"metadata": {},
"outputs": [],
"source": [
"# Configure metrics to be displayed for the training job\n",
"# In this example, we show how to record a custom training throughput metric\n",
"# Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html\n",
"metric_definitions = [\n",
" {\"Name\": \"train_throughput\", \"Regex\": \"examples/second : (.*?) \"},\n",
"]"
]
},
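{
"cell_type": "markdown",
"id": "9c0d1e2f",
"metadata": {},
"source": [
"SageMaker applies the regular expression above to the training job's log stream and publishes every captured value as a CloudWatch metric. The following cell is a small sketch that tests the regex against a hypothetical log line; the exact log format depends on the training script's output."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d1e2f3a",
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"\n",
"# Sketch: test the throughput regex against a hypothetical log line;\n",
"# the actual log format depends on the training script's output.\n",
"sample_log_line = \"examples/second : 1523.4 \"\n",
"match = re.search(metric_definitions[0][\"Regex\"], sample_log_line)\n",
"if match:\n",
"    print(\"Captured throughput:\", match.group(1))"
]
},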
{
"cell_type": "code",
"execution_count": null,
"id": "aad83355",
"metadata": {},
"outputs": [],
"source": [
"estimator = TensorFlow(\n",
" entry_point=\"main.py\",\n",
" role=role,\n",
" image_uri=docker_image,\n",
" source_dir=\"./SMDDP-Examples/tensorflow/efficientnet\",\n",
" instance_count=instance_count,\n",
" instance_type=instance_type,\n",
" framework_version=\"2.6\",\n",
" py_version=\"py38\",\n",
" sagemaker_session=sagemaker_session,\n",
" hyperparameters=hyperparameters,\n",
" subnets=subnets,\n",
" security_group_ids=security_group_ids,\n",
" debugger_hook_config=False,\n",
" # Training using SMDataParallel Distributed Training Framework\n",
" distribution={\"smdistributed\": {\"dataparallel\": {\"enabled\": True}}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9a7eafa",
"metadata": {},
"outputs": [],
"source": [
"# Configure FSx Input for your SageMaker Training job\n",
"\n",
"from sagemaker.inputs import FileSystemInput\n",
"\n",
"# YOUR_MOUNT_PATH_FOR_TRAINING_DATA # NOTE: '/fsx/' will be the root mount path. Example: '/fsx/efficientnet''''\n",
"file_system_directory_path = \"<FSX_DIRECTORY_PATH>\"\n",
"file_system_access_mode = \"rw\"\n",
"file_system_type = \"FSxLustre\"\n",
"train_fs = FileSystemInput(\n",
" file_system_id=file_system_id,\n",
" file_system_type=file_system_type,\n",
" directory_path=file_system_directory_path,\n",
" file_system_access_mode=file_system_access_mode,\n",
")\n",
"data_channels = {\"train\": train_fs}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e34fbd06",
"metadata": {},
"outputs": [],
"source": [
"# Submit SageMaker training job\n",
"estimator.fit(inputs=data_channels, job_name=job_name)"
]
},
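{
"cell_type": "markdown",
"id": "1e2f3a4b",
"metadata": {},
"source": [
"While the job runs, `estimator.fit` streams the training logs into this notebook. You can also poll the job status directly; the following cell is a sketch that uses the `DescribeTrainingJob` API through the session's SageMaker client."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f3a4b5c",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: poll the status of the submitted training job.\n",
"sm_client = sagemaker_session.sagemaker_client\n",
"description = sm_client.describe_training_job(TrainingJobName=job_name)\n",
"print(\"Status:\", description[\"TrainingJobStatus\"])\n",
"print(\"Secondary status:\", description[\"SecondaryStatus\"])"
]
},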
{
"cell_type": "markdown",
"id": "8ccab7b1",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now that you have trained a model using Amazon SageMaker's distributed library, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests by following this [blog post](https://aws.amazon.com/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker/). The following cell will store the model_data variable to be used for inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbaa9162",
"metadata": {},
"outputs": [],
"source": [
"model_data = estimator.model_data\n",
"print(\"Storing {} as model_data\".format(model_data))\n",
"%store model_data"
]
},
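{
"cell_type": "markdown",
"id": "3a4b5c6d",
"metadata": {},
"source": [
"The following cell is a sketch of the deployment step described above. The instance type and count are assumptions chosen for illustration, and the sketch assumes the training job saved the model in the TensorFlow SavedModel layout expected by TensorFlow Serving."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4b5c6d7e",
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.tensorflow import TensorFlowModel\n",
"\n",
"# Sketch only: host the trained model on a SageMaker endpoint.\n",
"# Instance type and count are illustrative assumptions.\n",
"model = TensorFlowModel(model_data=model_data, role=role, framework_version=\"2.6\")\n",
"predictor = model.deploy(initial_instance_count=1, instance_type=\"ml.g4dn.xlarge\")\n",
"\n",
"# Delete the endpoint when finished to avoid charges:\n",
"# predictor.delete_endpoint()"
]
},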
{
"cell_type": "markdown",
"id": "5963f464",
"metadata": {},
"source": [
"## Clean Up\n",
"\n",
"To avoid incurring unnecessary charges, follow these [steps to use the AWS Management Console to delete resources such as endpoints, notebook instances, S3 buckets, and CloudWatch logs](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}