{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Mask-RCNN Model Inference in Amazon SageMaker\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"This notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).\n",
"\n",
"To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import sagemaker\n",
"from sagemaker import get_execution_role\n",
"\n",
"role = (\n",
" get_execution_role()\n",
") # provide a pre-existing role ARN as an alternative to creating a new role\n",
"print(f\"SageMaker Execution Role:{role}\")\n",
"\n",
"session = boto3.session.Session()\n",
"aws_region = session.region_name\n",
"print(f\"AWS region:{aws_region}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build and Push Amazon SageMaker Serving Container Images\n",
"\n",
"For this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. \n",
"\n",
"Below, we have a choice of two different models for doing inference:\n",
"\n",
"1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)\n",
"\n",
"2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)\n",
"\n",
"It is recommended that you build and push both Amazon SageMaker <b>serving</b> container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container Image\n",
"\n",
"Use ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) <b>serving</b> container image to Amazon ECR. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat ./container-serving/build_tools/build_and_push.sh"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using your *AWS region* as argument, run the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"! ./container-serving/build_tools/build_and_push.sh {aws_region}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set ```tensorpack_image``` below to Amazon ECR URI of the <b>serving</b> image you pushed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tensorpack_image = # mask-rcnn-tensorpack-serving-sagemaker ECR URI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Build and Push AWS Samples Mask R-CNN Serving Container Image\n",
"Use ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) <b>serving</b> container image to Amazon ECR."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat ./container-serving-optimized/build_tools/build_and_push.sh"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using your *AWS region* as argument, run the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Set ```aws_samples_image``` below to Amazon ECR URI of the <b>serving</b> image you pushed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aws_samples_image = # mask-rcnn-tensorflow-serving-sagemaker ECR URI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select Serving Container Image\n",
"Above, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.\n",
"\n",
"Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes)\n",
"print(f'serving image: {serving_image}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Amazon SageMaker Session \n",
"Next, we create a SageMaker session."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sagemaker_session = sagemaker.session.Session(boto_session=session)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define Amazon SageMaker Model\n",
"Next, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"mask-rcnn-model-1\" # Name of the model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz\n",
"serving_container_def = {\n",
" 'Image': serving_image,\n",
" 'ModelDataUrl': s3_model_url,\n",
" 'Mode': 'SingleModel',\n",
" 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model',\n",
" 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101'\n",
" }\n",
"}\n",
"\n",
"create_model_response = sagemaker_session.create_model(name=model_name, \n",
" role=role, \n",
" container_defs=serving_container_def)\n",
"\n",
"print(create_model_response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint_config_name = f\"{model_name}-endpoint-config\"\n",
"print(endpoint_config_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"epc = sagemaker_session.create_endpoint_config(\n",
" name=endpoint_config_name,\n",
" model_name=model_name,\n",
" initial_instance_count=1,\n",
" instance_type=\"ml.g4dn.xlarge\",\n",
")\n",
"print(epc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"endpoint_name = f\"{model_name}-endpoint\"\n",
"print(endpoint_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ep = sagemaker_session.create_endpoint(\n",
" endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True\n",
")\n",
"print(ep)"
]
},
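{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check (the `wait=True` flag above already blocks until the endpoint is ready), we can describe the endpoint with the `boto3` SageMaker client and confirm that its status is `InService`. This is a minimal sketch that assumes nothing beyond the `endpoint_name` defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: the endpoint should report InService before we invoke it\n",
"sm_client = boto3.client(\"sagemaker\")\n",
"status = sm_client.describe_endpoint(EndpointName=endpoint_name)[\"EndpointStatus\"]\n",
"print(f\"Endpoint status: {status}\")"
]
},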
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. \n",
"\n",
"Next, we download [COCO 2017 Test images](http://cocodataset.org/#download)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!wget -O ./test2017.zip http://images.cocodataset.org/zips/test2017.zip"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We extract the downloaded COCO 2017 Test images to the home directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!unzip -q ./test2017.zip\n",
"!rm ./test2017.zip"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualization Helper Functions\n",
"Next, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. \n",
"\n",
"The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"\n",
"def rle_to_binary_mask(rle, img_shape):\n",
" value = 0\n",
" mask_array = []\n",
" for count in rle:\n",
" mask_array.extend([int(value)] * count)\n",
" value = (value + 1) % 2\n",
"\n",
" assert len(mask_array) == img_shape[0] * img_shape[1]\n",
" b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape)\n",
"\n",
" return b_mask"
]
},
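{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the helper, here is a minimal sketch that decodes the toy RLE `[1, 2, 3]` (one 0, then two 1s, then three 0s) into a `2x3` binary mask. The values are illustrative only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy RLE: one 0, two 1s, three 0s, reshaped row-wise to a 2x3 mask\n",
"toy_mask = rle_to_binary_mask([1, 2, 3], (2, 3))\n",
"print(toy_mask)\n",
"# expected:\n",
"# [[0 1 1]\n",
"#  [0 0 0]]"
]
},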
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we define a helper function for generating random colors for visualizing detection results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import colorsys\n",
"import random\n",
"\n",
"\n",
"def random_colors(N, bright=False):\n",
" brightness = 1.0 if bright else 0.7\n",
" hsv = [(i / N, 1, brightness) for i in range(N)]\n",
" colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))\n",
" random.shuffle(colors)\n",
" return colors"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def apply_mask(image, mask, color, alpha=0.5):\n",
" a_mask = np.stack([mask] * 3, axis=2).astype(np.int8)\n",
" for c in range(3):\n",
" image[:, :, c] = np.where(\n",
" mask == 1, image[:, :, c] * (1 - alpha) + alpha * color[c] * 255, image[:, :, c]\n",
" )\n",
" return image"
]
},
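{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the alpha blending at work, here is a minimal sketch that blends a single masked pixel of a tiny all-white image toward red. The toy image, mask, and color are illustrative only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Blend one masked pixel of a 2x2 white image toward red with the default alpha=0.5\n",
"toy_img = np.full((2, 2, 3), 255, dtype=np.float32)\n",
"toy_m = np.array([[1, 0], [0, 0]], dtype=np.uint8)\n",
"blended = apply_mask(toy_img, toy_m, color=(1.0, 0.0, 0.0))\n",
"print(blended[0, 0])  # masked pixel blends toward red: [255.0, 127.5, 127.5]\n",
"print(blended[1, 1])  # unmasked pixel is unchanged: [255.0, 255.0, 255.0]"
]
},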
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we define a helper function to show the applied detection results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"from matplotlib import patches\n",
"\n",
"\n",
"def show_detection_results(img=None, annotations=None):\n",
" \"\"\"\n",
" img: image numpy array\n",
" annotations: annotations array for image where each annotation is in COCO format\n",
" \"\"\"\n",
" num_annotations = len(annotations)\n",
" colors = random_colors(num_annotations)\n",
"\n",
" fig, ax = plt.subplots(figsize=(img.shape[1] // 50, img.shape[0] // 50))\n",
"\n",
" for i, a in enumerate(annotations):\n",
" segm = a[\"segmentation\"]\n",
"\n",
" img_shape = tuple(segm[\"size\"])\n",
" rle = segm[\"counts\"]\n",
" binary_image_mask = rle_to_binary_mask(rle, img_shape)\n",
"\n",
" bbox = a[\"bbox\"]\n",
" category_id = a[\"category_id\"]\n",
" category_name = a[\"category_name\"]\n",
"\n",
" # select color from random colors\n",
" color = colors[i]\n",
"\n",
" # Show bounding box\n",
" bbox_x, bbox_y, bbox_w, bbox_h = bbox\n",
"\n",
" box_patch = patches.Rectangle(\n",
" (bbox_x, bbox_y),\n",
" bbox_w,\n",
" bbox_h,\n",
" linewidth=1,\n",
" alpha=0.7,\n",
" linestyle=\"dashed\",\n",
" edgecolor=color,\n",
" facecolor=\"none\",\n",
" )\n",
" ax.add_patch(box_patch)\n",
" label = f\"{category_name}:{category_id}\"\n",
" ax.text(bbox_x, bbox_y + 8, label, color=\"w\", size=11, backgroundcolor=\"none\")\n",
"\n",
" # Show mask\n",
" img = apply_mask(img, binary_image_mask.astype(np.bool), color)\n",
"\n",
" ax.imshow(img.astype(int))\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize Detection Results\n",
"Next, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import random\n",
"\n",
"test2017_dir = os.path.join(\".\", \"test2017\")\n",
"img_id = random.choice(os.listdir(test2017_dir))\n",
"img_local_path = os.path.join(test2017_dir, img_id)\n",
"print(img_local_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we read the image and convert it from BGR color to RGB color format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"\n",
"img = cv2.imread(img_local_path, cv2.IMREAD_COLOR)\n",
"print(img.shape)\n",
"img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we show the image that we randomly selected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(figsize=(img.shape[1] // 50, img.shape[0] // 50))\n",
"ax.imshow(img.astype(int))\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.\n",
"\n",
"This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. The content of the POST request must conform to following JSON schema:\n",
"\n",
"`{ \n",
" \"img_id\": \"YourImageId\", \n",
" \"img_data\": \"Base64 encoded image file content, encoded as utf-8 string\" \n",
" }`\n",
"\n",
"The response of the POST request conforms to following JSON schema:\n",
"\n",
"`{ \n",
" \"annotations\": [ \n",
" {\n",
" \"bbox\": [X, Y, width, height], \n",
" \"category_id\": \"class id\", \n",
" \"category_name\": \"class name\", \n",
" \"segmentation\": { \"counts\": [ run-length-encoding, ], \"size\": [height, width]} \n",
" },\n",
" ]\n",
" }`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import base64\n",
"import json\n",
"\n",
"client = boto3.client(\"sagemaker-runtime\")\n",
"\n",
"with open(img_local_path, \"rb\") as image_file:\n",
" img_data = base64.b64encode(image_file.read())\n",
" data = {\"img_id\": img_id}\n",
" data[\"img_data\"] = img_data.decode(\"utf-8\")\n",
" body = json.dumps(data).encode(\"utf-8\")\n",
"\n",
"response = client.invoke_endpoint(\n",
" EndpointName=endpoint_name, ContentType=\"application/json\", Accept=\"application/json\", Body=body\n",
")\n",
"body = response[\"Body\"].read()\n",
"msg = body.decode(\"utf-8\")\n",
"data = json.loads(msg)\n",
"assert data is not None"
]
},
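{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before plotting, we can take a quick look at the parsed response. This is a minimal sketch that prints the category and bounding box of up to the first three detections, assuming the response follows the schema above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Peek at the parsed response; fields follow the response schema shown above\n",
"print(f\"detected {len(data['annotations'])} objects\")\n",
"for a in data[\"annotations\"][:3]:  # show at most the first three detections\n",
"    print(a[\"category_id\"], a[\"category_name\"], a[\"bbox\"])"
]
},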
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The response from the endpoint includes annotations for the detected objects in COCO annotations format. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we aplly all the detection results to the image. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"annotations = data[\"annotations\"]\n",
"show_detection_results(img, annotations)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Delete SageMaker Endpoint, Endpoint Config and Model\n",
"If you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section <b>Visualize Detection Results</b> and select another test image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sagemaker_session.delete_endpoint(endpoint_name=endpoint_name)\n",
"sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name)\n",
"sagemaker_session.delete_model(model_name=model_name)"
]
},
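{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, we can verify the cleanup by listing endpoints whose names contain our model name; an empty list confirms the endpoint was deleted. A minimal sketch using the `boto3` SageMaker client:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: an empty list confirms the endpoint was deleted\n",
"sm_client = boto3.client(\"sagemaker\")\n",
"print(sm_client.list_endpoints(NameContains=model_name)[\"Endpoints\"])"
]
},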
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_tensorflow_p36",
"language": "python",
"name": "conda_tensorflow_p36"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}