training/built-in-algorithms/Image-classification-lst-format-highlevel.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Image classification training with image format demo\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "1. [Introduction](#Introduction)\n", "2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)\n", " 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n", " 2. [Prepare the data](#Prepare-the-data)\n", "3. [Fine-tuning The Image Classification Model](#Fine-tuning-the-Image-classification-model)\n", " 1. [Training parameters](#Training-parameters)\n", " 2. [Start the training](#Start-the-training)\n", "4. [Inference](#Inference)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction\n", "\n", "Welcome to our end-to-end example of the image classification algorithm training with image format. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [Caltech-256 dataset](https://paperswithcode.com/dataset/caltech-256). \n", "\n", "To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites and Preprocessing\n", "\n", "### Permissions and environment variables\n", "\n", "Here we set up the linkage and authentication to AWS services. There are three parts to this:\n", "\n", "* The roles used to give learning and hosting access to your data. 
This will be obtained automatically from the role used to start the notebook.\n", "* The S3 bucket that you want to use for training and model data.\n", "* The Amazon SageMaker image classification Docker image, which need not be changed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install opencv-python\n", "!pip install opencv-python-headless\n", "!pip install mxnet\n", "!pip install --upgrade sagemaker" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "import sagemaker\n", "from sagemaker import get_execution_role\n", "\n", "role = get_execution_role()\n", "print(role)\n", "\n", "sess = sagemaker.Session()\n", "bucket = sess.default_bucket()\n", "prefix = \"ic-lstformat\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker import image_uris\n", "\n", "training_image = image_uris.retrieve(region=sess.boto_region_name, framework=\"image-classification\")\n", "print(training_image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Prepare the data\n", "The Caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category. \n", "\n", "The image classification algorithm accepts two input formats. The first is the [RecordIO format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) (content type: application/x-recordio) and the other is the [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) (content type: application/x-jpeg). Files for both of these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the lst format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import os\n", "import urllib.request\n", "\n", "\n", "def download(url):\n", "    filename = url.split(\"/\")[-1]\n", "    if not os.path.exists(filename):\n", "        urllib.request.urlretrieve(url, filename)\n", "\n", "\n", "# Caltech-256 image files\n", "s3 = boto3.client(\"s3\")\n", "s3.download_file(\n", "    \"sagemaker-sample-files\",\n", "    \"datasets/image/caltech-256/256_ObjectCategories.tar\",\n", "    \"256_ObjectCategories.tar\",\n", ")\n", "!tar -xf 256_ObjectCategories.tar --no-same-owner\n", "\n", "# Tool for creating lst file\n", "download(\"https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/im2rec.py\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "\n", "mkdir -p caltech_256_train_60\n", "for i in 256_ObjectCategories/*; do\n", "    c=`basename $i`\n", "    mkdir -p caltech_256_train_60/$c\n", "    for j in `ls $i/*.jpg | shuf | head -n 60`; do\n", "        mv $j caltech_256_train_60/$c/\n", "    done\n", "done\n", "\n", "python im2rec.py --list --recursive caltech-256-60-train caltech_256_train_60/\n", "python im2rec.py --list --recursive caltech-256-60-val 256_ObjectCategories/" ] },
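{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, we can count how many images ended up in each split before going further. The cell below is an illustrative sketch (the directory names assume the split cell above ran as-is); roughly 60 images per category should now sit in the training folder, with the remainder left for validation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "\n", "def count_jpgs(root):\n", "    # Walk the directory tree and count .jpg files\n", "    return sum(\n", "        len([f for f in files if f.lower().endswith(\".jpg\")])\n", "        for _, _, files in os.walk(root)\n", "    )\n", "\n", "\n", "# caltech_256_train_60 holds up to 60 images per class; 256_ObjectCategories keeps the rest\n", "print(\"train images     :\", count_jpgs(\"caltech_256_train_60\"))\n", "print(\"validation images:\", count_jpgs(\"256_ObjectCategories\"))" ] },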
{ "cell_type": "markdown", "metadata": {}, "source": [ "A .lst file is a tab-separated file with three columns that contains a list of image files. The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The image index in the first column should be unique across all of the images. Here we make an image list file using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool from MXNet. You can also create the .lst file in your own way. An example of a .lst file is shown below. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!head -n 3 ./caltech-256-60-train.lst > example.lst\n", "with open(\"example.lst\", \"r\") as f:\n", "    lst_content = f.read()\n", "print(lst_content)" ] },
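{ "cell_type": "markdown", "metadata": {}, "source": [ "If you build the .lst file yourself, a quick structural check can catch formatting problems before you launch a training job. The cell below is a small illustrative helper (`validate_lst` is not part of any SageMaker or MXNet API): it verifies that every row has three tab-separated columns, that the first two columns are numeric, and that image indices are unique." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import csv\n", "\n", "\n", "def validate_lst(path):\n", "    # Each row: image index (unique), class label index, relative image path\n", "    seen = set()\n", "    with open(path) as f:\n", "        for row in csv.reader(f, delimiter=\"\\t\"):\n", "            assert len(row) == 3, \"expected 3 columns, got {}\".format(row)\n", "            index, label, rel_path = row\n", "            float(index), float(label)  # first two columns must be numeric\n", "            assert index not in seen, \"duplicate image index: \" + index\n", "            seen.add(index)\n", "    print(\"{}: {} rows OK\".format(path, len(seen)))\n", "\n", "\n", "validate_lst(\"caltech-256-60-train.lst\")\n", "validate_lst(\"caltech-256-60-val.lst\")" ] },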
{ "cell_type": "markdown", "metadata": {}, "source": [ "When you bring your own image files for training, please ensure that the .lst file follows the same format as described above. To train with the lst format interface, you must pass lst files for both training and validation in the appropriate format. Once we have the data available in the correct format for training, the next step is to upload the images and .lst files to the S3 bucket." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Four channels: train, validation, train_lst, and validation_lst\n", "s3train = \"s3://{}/{}/train/\".format(bucket, prefix)\n", "s3validation = \"s3://{}/{}/validation/\".format(bucket, prefix)\n", "s3train_lst = \"s3://{}/{}/train_lst/\".format(bucket, prefix)\n", "s3validation_lst = \"s3://{}/{}/validation_lst/\".format(bucket, prefix)\n", "\n", "# Upload the image files to the train and validation channels\n", "!aws s3 cp caltech_256_train_60 $s3train --recursive --quiet\n", "!aws s3 cp 256_ObjectCategories $s3validation --recursive --quiet\n", "\n", "# Upload the lst files to the train_lst and validation_lst channels\n", "!aws s3 cp caltech-256-60-train.lst $s3train_lst --quiet\n", "!aws s3 cp caltech-256-60-val.lst $s3validation_lst --quiet" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have all the data stored in the S3 bucket. The image and lst files will be converted to RecordIO files internally by the image classification algorithm. But if you want to do the conversion yourself, the following cell shows how to do it using the [im2rec](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) tool. Note that this is just an example of creating RecordIO files. We are **_not_** using them for training in this notebook. More details on creating RecordIO files can be found in this [tutorial](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-val 256_ObjectCategories/\n", "python im2rec.py --resize 256 --quality 90 --num-thread 16 caltech-256-60-train caltech_256_train_60/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After you have created the RecordIO files, you can upload them to the train and validation channels for training. To train with the RecordIO format, you can follow \"[Image-classification-fulltraining.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-fulltraining.ipynb)\" and \"[Image-classification-transfer-learning.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning.ipynb)\". Again, we will **_not_** use the RecordIO files for training. The following sections will only show you how to train a model with images and list files." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before training the model, we need to set up the training parameters. The next section explains them in detail." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fine-tuning the Image Classification Model\n", "Now that we are done with all the setup that is needed, we are ready to train our image classifier. \n", "\n", "Training can be done by either calling SageMaker Training with a set of hyperparameter values to train with, or by leveraging SageMaker Automatic Model Tuning ([AMT](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)). AMT, also known as hyperparameter tuning (HPO), finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose.\n", "\n", "In this notebook, both methods are used for demonstration purposes, but the model created by the HPO job is the one that is deployed. You can instead choose to deploy the model created by the standalone training job by changing the variable `deploy_amt_model` below to False." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deploy_amt_model = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training with SageMaker Training\n", "\n", "To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job.\n", "\n", "#### Training parameters\n", "There are two kinds of parameters that need to be set for training. The first kind is the set of parameters for the training job itself. These include:\n", "\n", "* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings. \n", "* **Training instance type**: This indicates the type of machine on which to run the training. 
Typically, we use GPU instances for this training. \n", "* **Output path**: This is the S3 folder in which the training output is stored." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3_output_location = \"s3://{}/{}/output\".format(bucket, prefix)\n", "\n", "ic = sagemaker.estimator.Estimator(\n", "    training_image,\n", "    role,\n", "    instance_count=1,\n", "    instance_type=\"ml.p2.xlarge\",\n", "    volume_size=50,\n", "    max_run=360000,\n", "    input_mode=\"File\",\n", "    output_path=s3_output_location,\n", "    sagemaker_session=sess,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:\n", "\n", "* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values, such as 50 or 152, can be used.\n", "* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.\n", "* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual images.\n", "* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For Caltech, we use 257 because it has 256 object categories + 1 clutter class.\n", "* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the Caltech dataset with the current split (257 categories x 60 images).\n", "* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run. For example, with 2 hosts and mini_batch_size=128, each batch uses 256 samples.\n", "* **epochs**: Number of training epochs.\n", "* **learning_rate**: Learning rate for training.\n", "* **top_k**: Report the top-k accuracy during training.\n", "* **resize**: Resize the image before using it for training. The images are resized so that the shortest side has this length. If the parameter is not set, then the training data is used as-is without resizing.\n", "* **precision_dtype**: Training datatype precision (default: float32). 
If set to 'float16', training will be done in mixed-precision mode and will be faster than float32 mode.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "isConfigCell": true }, "outputs": [], "source": [ "ic.set_hyperparameters(\n", "    num_layers=18,\n", "    use_pretrained_model=1,\n", "    image_shape=\"3,224,224\",\n", "    num_classes=257,\n", "    mini_batch_size=128,\n", "    epochs=2,\n", "    learning_rate=0.01,\n", "    top_k=2,\n", "    num_training_samples=15420,\n", "    resize=256,\n", "    precision_dtype=\"float32\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Input data specification\n", "Set the data type and channels used for training." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_data = sagemaker.inputs.TrainingInput(\n", "    s3train,\n", "    distribution=\"FullyReplicated\",\n", "    content_type=\"application/jpeg\",\n", "    s3_data_type=\"S3Prefix\",\n", ")\n", "validation_data = sagemaker.inputs.TrainingInput(\n", "    s3validation,\n", "    distribution=\"FullyReplicated\",\n", "    content_type=\"application/jpeg\",\n", "    s3_data_type=\"S3Prefix\",\n", ")\n", "train_data_lst = sagemaker.inputs.TrainingInput(\n", "    s3train_lst,\n", "    distribution=\"FullyReplicated\",\n", "    content_type=\"application/jpeg\",\n", "    s3_data_type=\"S3Prefix\",\n", ")\n", "validation_data_lst = sagemaker.inputs.TrainingInput(\n", "    s3validation_lst,\n", "    distribution=\"FullyReplicated\",\n", "    content_type=\"application/jpeg\",\n", "    s3_data_type=\"S3Prefix\",\n", ")\n", "\n", "data_channels = {\n", "    \"train\": train_data,\n", "    \"validation\": validation_data,\n", "    \"train_lst\": train_data_lst,\n", "    \"validation_lst\": validation_data_lst,\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Start the training\n", "Start training by calling the fit method in the estimator." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic.fit(inputs=data_channels, logs=True)" ] },
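{ "cell_type": "markdown", "metadata": {}, "source": [ "Once the training job finishes, you can optionally pull its final metrics from the DescribeTrainingJob API. The cell below is a hedged sketch using boto3; it assumes the job launched by `ic.fit` above has completed, and simply prints whatever final metric values (such as train:accuracy and validation:accuracy) the algorithm emitted." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "\n", "sm_client = boto3.client(\"sagemaker\")\n", "\n", "# Describe the training job that the estimator just ran\n", "job_desc = sm_client.describe_training_job(TrainingJobName=ic.latest_training_job.name)\n", "\n", "# FinalMetricDataList holds the last reported value of each metric\n", "for metric in job_desc.get(\"FinalMetricDataList\", []):\n", "    print(metric[\"MetricName\"], \"=\", metric[\"Value\"])" ] },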
\n", "\n", "The tuning job will take 15 to 20 minutes to complete.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "from sagemaker.tuner import IntegerParameter, ContinuousParameter\n", "from sagemaker.tuner import HyperparameterTuner\n", "\n", "job_name = \"DEMO-ic-lst-\" + time.strftime(\"%Y-%m-%d-%H-%M-%S\", time.gmtime())\n", "print(\"Tuning job name: \", job_name)\n", "\n", "# Image Classification tunable hyper parameters can be found here https://docs.aws.amazon.com/sagemaker/latest/dg/IC-tuning.html\n", "hyperparameter_ranges = {\n", " \"beta_1\": ContinuousParameter(1e-6, 0.999, scaling_type=\"Auto\"),\n", " \"beta_2\": ContinuousParameter(1e-6, 0.999, scaling_type=\"Auto\"),\n", " \"eps\": ContinuousParameter(1e-8, 1.0, scaling_type=\"Auto\"),\n", " \"gamma\": ContinuousParameter(1e-8, 0.999, scaling_type=\"Auto\"),\n", " \"learning_rate\": ContinuousParameter(1e-6, 0.5, scaling_type=\"Auto\"),\n", " \"mini_batch_size\": IntegerParameter(8, 64, scaling_type=\"Auto\"),\n", " \"momentum\": ContinuousParameter(0.0, 0.999, scaling_type=\"Auto\"),\n", " \"weight_decay\": ContinuousParameter(0.0, 0.999, scaling_type=\"Auto\"),\n", "}\n", "\n", "# Increase the total number of training jobs run by AMT, for increased accuracy (and training time).\n", "max_jobs = 6\n", "# Change parallel training jobs run by AMT to reduce total training time, constrained by your account limits.\n", "# if max_jobs=max_parallel_jobs then Bayesian search turns to Random.\n", "max_parallel_jobs = 2\n", "\n", "\n", "hp_tuner = HyperparameterTuner(\n", " ic,\n", " \"validation:accuracy\",\n", " hyperparameter_ranges,\n", " max_jobs=max_jobs,\n", " max_parallel_jobs=max_parallel_jobs,\n", " objective_type=\"Maximize\",\n", ")\n", "\n", "# Launch a SageMaker Tuning job to search for the best hyperparameters\n", "hp_tuner.fit(inputs=data_channels, job_name=job_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Inference\n", "\n", "***\n", "\n", "A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the topic mixture representing a given document. You can deploy the created model by using the deploy method in the tuner or estimator." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic_classifier = (hp_tuner if deploy_amt_model else ic).deploy(\n", " initial_instance_count=1, instance_type=\"ml.m4.xlarge\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Download test image" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "file_name = \"/tmp/test.jpg\"\n", "s3.download_file(\n", " \"sagemaker-sample-files\",\n", " \"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg\",\n", " file_name,\n", ")\n", "\n", "# test image\n", "from IPython.display import Image\n", "\n", "Image(file_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import numpy as np\n", "from sagemaker.serializers import IdentitySerializer\n", "\n", "with open(file_name, \"rb\") as f:\n", " payload = f.read()\n", "\n", "ic_classifier.serializer = IdentitySerializer(\"image/jpeg\")\n", "result = json.loads(ic_classifier.predict(payload))\n", "# the result will output the probabilities for all classes\n", "# find the class with maximum probability and print the class index\n", "index = np.argmax(result)\n", "\n", "object_categories = [\n", " \"ak47\",\n", " \"american-flag\",\n", " \"backpack\",\n", " \"baseball-bat\",\n", " \"baseball-glove\",\n", " \"basketball-hoop\",\n", " \"bat\",\n", " \"bathtub\",\n", " \"bear\",\n", " \"beer-mug\",\n", " \"billiards\",\n", " \"binoculars\",\n", " \"birdbath\",\n", " \"blimp\",\n", " \"bonsai-101\",\n", " \"boom-box\",\n", " \"bowling-ball\",\n", " \"bowling-pin\",\n", " \"boxing-glove\",\n", " \"brain-101\",\n", " \"breadmaker\",\n", " \"buddha-101\",\n", " \"bulldozer\",\n", " \"butterfly\",\n", " \"cactus\",\n", " \"cake\",\n", " \"calculator\",\n", " \"camel\",\n", " \"cannon\",\n", " \"canoe\",\n", " \"car-tire\",\n", " \"cartman\",\n", " \"cd\",\n", " \"centipede\",\n", " \"cereal-box\",\n", " \"chandelier-101\",\n", " \"chess-board\",\n", " \"chimp\",\n", " \"chopsticks\",\n", " \"cockroach\",\n", " \"coffee-mug\",\n", " \"coffin\",\n", " \"coin\",\n", " \"comet\",\n", " \"computer-keyboard\",\n", " \"computer-monitor\",\n", " \"computer-mouse\",\n", " \"conch\",\n", " \"cormorant\",\n", " \"covered-wagon\",\n", " \"cowboy-hat\",\n", " \"crab-101\",\n", " \"desk-globe\",\n", " \"diamond-ring\",\n", " \"dice\",\n", " \"dog\",\n", " \"dolphin-101\",\n", " \"doorknob\",\n", " \"drinking-straw\",\n", " \"duck\",\n", " \"dumb-bell\",\n", " \"eiffel-tower\",\n", " \"electric-guitar-101\",\n", " \"elephant-101\",\n", " \"elk\",\n", " \"ewer-101\",\n", " \"eyeglasses\",\n", " \"fern\",\n", " \"fighter-jet\",\n", " \"fire-extinguisher\",\n", " \"fire-hydrant\",\n", " \"fire-truck\",\n", " \"fireworks\",\n", " \"flashlight\",\n", " \"floppy-disk\",\n", " \"football-helmet\",\n", " \"french-horn\",\n", " \"fried-egg\",\n", " \"frisbee\",\n", " \"frog\",\n", " \"frying-pan\",\n", " \"galaxy\",\n", " \"gas-pump\",\n", " \"giraffe\",\n", " \"goat\",\n", " \"golden-gate-bridge\",\n", " \"goldfish\",\n", " \"golf-ball\",\n", " \"goose\",\n", " \"gorilla\",\n", " \"grand-piano-101\",\n", " \"grapes\",\n", " \"grasshopper\",\n", " \"guitar-pick\",\n", " \"hamburger\",\n", " \"hammock\",\n", " \"harmonica\",\n", " \"harp\",\n", " \"harpsichord\",\n", " \"hawksbill-101\",\n", " \"head-phones\",\n", " \"helicopter-101\",\n", " \"hibiscus\",\n", " \"homer-simpson\",\n", " \"horse\",\n", " \"horseshoe-crab\",\n", " 
\"hot-air-balloon\",\n", " \"hot-dog\",\n", " \"hot-tub\",\n", " \"hourglass\",\n", " \"house-fly\",\n", " \"human-skeleton\",\n", " \"hummingbird\",\n", " \"ibis-101\",\n", " \"ice-cream-cone\",\n", " \"iguana\",\n", " \"ipod\",\n", " \"iris\",\n", " \"jesus-christ\",\n", " \"joy-stick\",\n", " \"kangaroo-101\",\n", " \"kayak\",\n", " \"ketch-101\",\n", " \"killer-whale\",\n", " \"knife\",\n", " \"ladder\",\n", " \"laptop-101\",\n", " \"lathe\",\n", " \"leopards-101\",\n", " \"license-plate\",\n", " \"lightbulb\",\n", " \"light-house\",\n", " \"lightning\",\n", " \"llama-101\",\n", " \"mailbox\",\n", " \"mandolin\",\n", " \"mars\",\n", " \"mattress\",\n", " \"megaphone\",\n", " \"menorah-101\",\n", " \"microscope\",\n", " \"microwave\",\n", " \"minaret\",\n", " \"minotaur\",\n", " \"motorbikes-101\",\n", " \"mountain-bike\",\n", " \"mushroom\",\n", " \"mussels\",\n", " \"necktie\",\n", " \"octopus\",\n", " \"ostrich\",\n", " \"owl\",\n", " \"palm-pilot\",\n", " \"palm-tree\",\n", " \"paperclip\",\n", " \"paper-shredder\",\n", " \"pci-card\",\n", " \"penguin\",\n", " \"people\",\n", " \"pez-dispenser\",\n", " \"photocopier\",\n", " \"picnic-table\",\n", " \"playing-card\",\n", " \"porcupine\",\n", " \"pram\",\n", " \"praying-mantis\",\n", " \"pyramid\",\n", " \"raccoon\",\n", " \"radio-telescope\",\n", " \"rainbow\",\n", " \"refrigerator\",\n", " \"revolver-101\",\n", " \"rifle\",\n", " \"rotary-phone\",\n", " \"roulette-wheel\",\n", " \"saddle\",\n", " \"saturn\",\n", " \"school-bus\",\n", " \"scorpion-101\",\n", " \"screwdriver\",\n", " \"segway\",\n", " \"self-propelled-lawn-mower\",\n", " \"sextant\",\n", " \"sheet-music\",\n", " \"skateboard\",\n", " \"skunk\",\n", " \"skyscraper\",\n", " \"smokestack\",\n", " \"snail\",\n", " \"snake\",\n", " \"sneaker\",\n", " \"snowmobile\",\n", " \"soccer-ball\",\n", " \"socks\",\n", " \"soda-can\",\n", " \"spaghetti\",\n", " \"speed-boat\",\n", " \"spider\",\n", " \"spoon\",\n", " \"stained-glass\",\n", " \"starfish-101\",\n", " \"steering-wheel\",\n", " \"stirrups\",\n", " \"sunflower-101\",\n", " \"superman\",\n", " \"sushi\",\n", " \"swan\",\n", " \"swiss-army-knife\",\n", " \"sword\",\n", " \"syringe\",\n", " \"tambourine\",\n", " \"teapot\",\n", " \"teddy-bear\",\n", " \"teepee\",\n", " \"telephone-box\",\n", " \"tennis-ball\",\n", " \"tennis-court\",\n", " \"tennis-racket\",\n", " \"theodolite\",\n", " \"toaster\",\n", " \"tomato\",\n", " \"tombstone\",\n", " \"top-hat\",\n", " \"touring-bike\",\n", " \"tower-pisa\",\n", " \"traffic-light\",\n", " \"treadmill\",\n", " \"triceratops\",\n", " \"tricycle\",\n", " \"trilobite-101\",\n", " \"tripod\",\n", " \"t-shirt\",\n", " \"tuning-fork\",\n", " \"tweezer\",\n", " \"umbrella-101\",\n", " \"unicorn\",\n", " \"vcr\",\n", " \"video-projector\",\n", " \"washing-machine\",\n", " \"watch-101\",\n", " \"waterfall\",\n", " \"watermelon\",\n", " \"welding-mask\",\n", " \"wheelbarrow\",\n", " \"windmill\",\n", " \"wine-bottle\",\n", " \"xylophone\",\n", " \"yarmulke\",\n", " \"yo-yo\",\n", " \"zebra\",\n", " \"airplanes-101\",\n", " \"car-side-101\",\n", " \"faces-easy-101\",\n", " \"greyhound\",\n", " \"tennis-shoes\",\n", " \"toad\",\n", " \"clutter\",\n", "]\n", "print(\"Result: label - \" + object_categories[index] + \", probability - \" + str(result[index]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Clean up\n", "\n", "When we're done with the endpoint, we can just delete it and the backing instances will be released." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ic_classifier.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. 
Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|imageclassification_caltech|Image-classification-lst-format-highlevel.ipynb)\n" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." }, "nbformat": 4, "nbformat_minor": 4 }