{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Train, hyperparameter tune, and deploy with TensorFlow\n",
"In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using the Azure Machine Learning (Azure ML) Python SDK v2.\n",
"\n",
"This example trains and registers a TensorFlow model to classify handwritten digits using a deep neural network (DNN). MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing number from 0 to 9. The goal is to create a multi-class classifier to identify the digit each image represents, and deploy it as a web service in Azure.\n",
"\n",
"Whether you're developing a TensorFlow model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs to build, deploy, version, and monitor production-grade models.\n",
"\n",
"## Requirements\n",
"In order to benefit from this article, you need to have:\n",
"* an Azure subscription. If you don't have an Azure subscription, [create a free account](https://aka.ms/AMLFree) before you begin.\n",
"* Run this code on either of these environments:\n",
" 1. an Azure Machine Learning compute instance - no downloads or installation necessary\n",
" * Complete the [Quickstart: Get started with Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/quickstart-create-resources) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.\n",
" * In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: * v2 > jobs > single-step > scikit-learn > train-hyperparameter-tune-deploy-with-sklearn* folder.\n",
" 1. your own Jupyter Notebook server\n",
" * [Install the Azure Machine Learning SDK v2](https://docs.microsoft.com/python/api/overview/azure/ml/installv2?view=azure-ml-py)\n",
" * [Create a workspace configuration file](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment#workspace)\n",
" 1. the following files\n",
" * the training script [tf_mnist.py](./src/tf_mnist.py)\n",
" * the scoring script [score.py](./src/score.py)\n",
" * the sample request file [sample-request.json](./request/sample-request.json)\n",
" \n",
" You can also find a completed [Jupyter Notebook version](./train-hyperparameter-tune-deploy-with-tensorflow.ipynb) of this guide on the GitHub samples page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connect to the workspace\n",
"\n",
"First, you'll need to connect to your Azure ML workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.\n",
"\n",
"We are using `DefaultAzureCredential` to get access to workspace. \n",
"`DefaultAzureCredential` should be capable of handling most Azure SDK authentication scenarios. \n",
"\n",
"Reference for more available credentials if it does not work for you: [configure credential example](../../configuration.ipynb), [azure-identity reference doc](https://docs.microsoft.com/python/api/azure-identity/azure.identity?view=azure-python)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "credential"
},
"outputs": [],
"source": [
"# Handle to the workspace\n",
"from azure.ai.ml import MLClient\n",
"\n",
"# Authentication package\n",
"from azure.identity import DefaultAzureCredential\n",
"\n",
"credential = DefaultAzureCredential()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to use a browser instead to login and authenticate, you can use the following code instead. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Handle to the workspace\n",
"# from azure.ai.ml import MLClient\n",
"\n",
"# Authentication package\n",
"# from azure.identity import InteractiveBrowserCredential\n",
"# credential = InteractiveBrowserCredential()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find subscription ID and resource group:\n",
"\n",
"1. In the upper right Azure Machine Learning Studio toolbar, select your workspace name.\n",
"1. Copy the value for Resource group and subsccription ID into the code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "ml_client"
},
"outputs": [],
"source": [
"# Get a handle to the workspace\n",
"ml_client = MLClient(\n",
" credential=credential,\n",
" subscription_id=\"<SUBSCRIPTION_ID>\",\n",
" resource_group_name=\"<RESOURCE_GROUP>\",\n",
" workspace_name=\"<AML_WORKSPACE_NAME>\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The result is a handler to the workspace that you'll use to manage other resources and jobs.\n",
"\n",
"> [!IMPORTANT]\n",
"> Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (in the notebook below, that will happen during compute creation)."
]
},
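{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you'd like to verify the connection right away, you can force an eager call. A minimal sanity check, assuming the workspace details above are filled in correctly, is to fetch the workspace itself:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: force the lazy client to make its first call now.\n",
"# This raises an error if the workspace details above are incorrect.\n",
"ws = ml_client.workspaces.get(ml_client.workspace_name)\n",
"print(ws.name, ws.location, ws.resource_group)"
]
},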
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a Compute Resource to run our job\n",
"\n",
"AzureML needs a compute resource for running a job. It can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.\n",
"\n",
"In this example, we provision a Linux [compute cluster](https://docs.microsoft.com/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python). See the [full list on VM sizes and prices](https://azure.microsoft.com/pricing/details/machine-learning/) .\n",
"\n",
"For this example we need a gpu cluster, let's pick a STANDARD_NC6 model and create an Azure ML Compute"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "cpu_compute_target"
},
"outputs": [],
"source": [
"from azure.ai.ml.entities import AmlCompute\n",
"\n",
"gpu_compute_target = \"gpu-cluster\"\n",
"\n",
"try:\n",
" # let's see if the compute target already exists\n",
" gpu_cluster = ml_client.compute.get(gpu_compute_target)\n",
" print(\n",
" f\"You already have a cluster named {gpu_compute_target}, we'll reuse it as is.\"\n",
" )\n",
"\n",
"except Exception:\n",
" print(\"Creating a new gpu compute target...\")\n",
"\n",
" # Let's create the Azure ML compute object with the intended parameters\n",
" gpu_cluster = AmlCompute(\n",
" # Name assigned to the compute cluster\n",
" name=\"gpu-cluster\",\n",
" # Azure ML Compute is the on-demand VM service\n",
" type=\"amlcompute\",\n",
" # VM Family\n",
" size=\"STANDARD_NC6s_v3\",\n",
" # Minimum running nodes when there is no job running\n",
" min_instances=0,\n",
" # Nodes in cluster\n",
" max_instances=4,\n",
" # How many seconds will the node running after the job termination\n",
" idle_time_before_scale_down=180,\n",
" # Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination\n",
" tier=\"Dedicated\",\n",
" )\n",
"\n",
" # Now, we pass the object to MLClient's create_or_update method\n",
" gpu_cluster = ml_client.begin_create_or_update(gpu_cluster).result()\n",
"\n",
"print(\n",
" f\"AMLCompute with name {gpu_cluster.name} is created, the compute size is {gpu_cluster.size}\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an environment for the job\n",
"\n",
"To run an AzureML job, you'll need an [environment](https://docs.microsoft.com/azure/machine-learning/concept-environments). An environment is the software runtime and libraries that you want installed on the compute where you’ll be training. It is similar to your python emvironment on your local machine.\n",
"\n",
"AzureML provides many curated or readymade environments which are useful for common training and inference scenarios. You can also create your own “custom” environments using a docker image, or a conda configuration \n",
"\n",
"In this example, you'll reuse the curated AzureML environment `AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu`. You will use the latest version of this environment using the `@latest` directive."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "curated_env_name"
},
"outputs": [],
"source": [
"curated_env_name = \"AzureML-tensorflow-2.12-cuda11@latest\""
]
},
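{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to inspect this curated environment before using it, you can look it up in the shared `azureml` registry where curated environments are published. The cell below is an optional sketch; it assumes this curated environment is available to your workspace's region."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: look up the curated environment in the shared 'azureml' registry.\n",
"# A sketch; assumes this curated environment is available in your region.\n",
"registry_client = MLClient(credential=credential, registry_name=\"azureml\")\n",
"env = registry_client.environments.get(\n",
"    name=\"AzureML-tensorflow-2.12-cuda11\", label=\"latest\"\n",
")\n",
"print(env.name, env.version)"
]
},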
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data for training\n",
"\n",
"You'll consume mnist data from an azure storage account. This data is sourced from [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/) and is copied on to azure storage.\n",
"\n",
"For more information about the MNIST dataset, please visit [Yan LeCun's website](http://yann.lecun.com/exdb/mnist/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "data_url"
},
"outputs": [],
"source": [
"web_path = \"wasbs://datasets@azuremlexamples.blob.core.windows.net/mnist/\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build the command job to train\n",
"\n",
"Now that you have all assets required to run your job, it's time to build the job itself, using the Azure ML Python SDK v2. We will be creating a `command` job.\n",
"\n",
"An AzureML `command` job is a resource that specifies all the details needed to execute your training code in the cloud: inputs and outputs, the type of hardware to use, software to install, and how to run your code. the `command` job contains information to execute a single command.\n",
"\n",
"## The training script\n",
"\n",
"We will use the training script - *tf_mnist.py* python file. This script handles the preprocessing of the data, splitting it into test and train data. It then consumes this data to train a model and return the output model. \n",
"\n",
"[MLFlow](https://mlflow.org/docs/latest/tracking.html) will be used to log the parameters and metrics during our pipeline run.\n",
"\n",
"In the training script `tf_mnist.py`, it creates a very simple DNN (deep neural network), with just 2 hidden layers. The input layer has 28 * 28 = 784 neurons, each representing a pixel in an image. The first hidden layer has 300 neurons, and the second hidden layer has 100 neurons. The output layer has 10 neurons, each representing a targeted label from 0 to 9.\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configure the Command\n",
"Now that you have a script that can perform the desired tasks, you'll use the general purpose **command** to run this script. \n",
"\n",
"* The inputs used in this command are the data location, batch size, number of neurons in first and second layer, learning rate\n",
"* Note that we are passing the webpath directly as an input\n",
"* Use the compute created earlier to run this command.\n",
"* Use the curated environment which was declared earlier. \n",
"* In this example, we are using the `UserIdentityConfiguration` to run the command, which means it will be using your identity to run this command and access the data from the blob\n",
"* Configure the command line action itself - in this case, the command is `python tf_mnist.py`. You can access the inputs/outputs in the command via the `${{ ... }}` notation.\n",
"* Configure some metadata like display name, experiment name etc. An experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure ML studio.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "job"
},
"outputs": [],
"source": [
"from azure.ai.ml import command\n",
"from azure.ai.ml import UserIdentityConfiguration\n",
"from azure.ai.ml import Input\n",
"\n",
"web_path = \"wasbs://datasets@azuremlexamples.blob.core.windows.net/mnist/\"\n",
"\n",
"job = command(\n",
" inputs=dict(\n",
" data_folder=Input(type=\"uri_folder\", path=web_path),\n",
" batch_size=64,\n",
" first_layer_neurons=256,\n",
" second_layer_neurons=128,\n",
" learning_rate=0.01,\n",
" ),\n",
" compute=gpu_compute_target,\n",
" environment=curated_env_name,\n",
" code=\"./src/\",\n",
" command=\"python tf_mnist.py --data-folder ${{inputs.data_folder}} --batch-size ${{inputs.batch_size}} --first-layer-neurons ${{inputs.first_layer_neurons}} --second-layer-neurons ${{inputs.second_layer_neurons}} --learning-rate ${{inputs.learning_rate}}\",\n",
" experiment_name=\"tf-dnn-image-classify\",\n",
" display_name=\"tensorflow-classify-mnist-digit-images-with-dnn\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Submit the job \n",
"\n",
"It's now time to submit the job to run in AzureML. This time you'll use `create_or_update` on `ml_client.jobs`.\n",
"\n",
"Once completed, the job will register a model in your workspace as a result of training. You can view the job in AzureML studio by clicking on the link in the output of the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "create_job"
},
"outputs": [],
"source": [
"ml_client.jobs.create_or_update(job)"
]
},
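{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can block the notebook until training finishes by streaming the job's logs, using the job handle captured above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# stream the job logs until the run completes (this blocks the notebook)\n",
"ml_client.jobs.stream(returned_job.name)"
]
},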
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What happens during job execution\n",
"As the job is executed, it goes through the following stages:\n",
"\n",
"* *Preparing*: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is used, the cached image backing that curated environment will be used.\n",
"* *Scaling*: The cluster attempts to scale up if the cluster requires more nodes to execute the run than are currently available.\n",
"* *Running*: All scripts in the `src` folder are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from stdout and the ./logs folder are streamed to the job history and can be used to monitor the job."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tune model hyperparameters\n",
"We have trained the model with one set of parameters, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's sweep capabilities.\n",
"\n",
"You will replace some of the parameters passed to the training job with special inputs from the `azure.ml.sweep` package – that way, you are defining the parameter space in which to search.\n",
"\n",
"Let's tune the `batch_size`, `first_layer_neurons`, `second_layer_neurons` and `learning_rate` parameters. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "job_for_sweep"
},
"outputs": [],
"source": [
"from azure.ai.ml.sweep import Choice, LogUniform\n",
"\n",
"# we will reuse the command_job created before. we call it as a function so that we can apply inputs\n",
"# we do not apply the 'iris_csv' input again -- we will just use what was already defined earlier\n",
"job_for_sweep = job(\n",
" batch_size=Choice(values=[32, 64, 128]),\n",
" first_layer_neurons=Choice(values=[16, 64, 128, 256, 512]),\n",
" second_layer_neurons=Choice(values=[16, 64, 256, 512]),\n",
" learning_rate=LogUniform(min_value=-6, max_value=-1),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then you configure sweep on the command job, with some sweep-specific parameters like the primary metric to watch and the sampling algorithm to use.\n",
"\n",
"In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, `validation_acc`.\n",
"\n",
"We will define an early termnination policy. The `BanditPolicy` basically states to check the job every 2 iterations. If the primary metric falls outside of the top 10% range, Azure ML will terminate the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "sweep_job"
},
"outputs": [],
"source": [
"from azure.ai.ml.sweep import BanditPolicy\n",
"\n",
"sweep_job = job_for_sweep.sweep(\n",
" compute=gpu_compute_target,\n",
" sampling_algorithm=\"random\",\n",
" primary_metric=\"validation_acc\",\n",
" goal=\"Maximize\",\n",
" max_total_trials=8,\n",
" max_concurrent_trials=4,\n",
" early_termination_policy=BanditPolicy(slack_factor=0.1, evaluation_interval=2),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can submit this job as before. This will now run a sweep job that sweeps over our train job."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "create_sweep_job"
},
"outputs": [],
"source": [
"returned_sweep_job = ml_client.create_or_update(sweep_job)\n",
"\n",
"# stream the output and wait until the job is finished\n",
"ml_client.jobs.stream(returned_sweep_job.name)\n",
"\n",
"# refresh the latest status of the job after streaming\n",
"returned_sweep_job = ml_client.jobs.get(name=returned_sweep_job.name)"
]
},
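{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sweep job object also carries a direct link to the run in Azure ML studio, which you can print for convenience:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a convenience link to the sweep job and its trials in the studio UI\n",
"print(f\"Sweep job in studio: {returned_sweep_job.studio_url}\")"
]
},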
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can monitor the job using the studio UI link presented when you run the job.\n",
"\n",
"## Find and register the best model\n",
"Once **all the runs complete**, you can find the run that produced the model with the highest accuracy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "model"
},
"outputs": [],
"source": [
"from azure.ai.ml.entities import Model\n",
"\n",
"if returned_sweep_job.status == \"Completed\":\n",
"\n",
" # First let us get the run which gave us the best result\n",
" best_run = returned_sweep_job.properties[\"best_child_run_id\"]\n",
"\n",
" # lets get the model from this run\n",
" model = Model(\n",
" # the script stores the model as \"model\"\n",
" path=\"azureml://jobs/{}/outputs/artifacts/paths/outputs/model/\".format(\n",
" best_run\n",
" ),\n",
" name=\"run-model-example\",\n",
" description=\"Model created from run.\",\n",
" type=\"custom_model\",\n",
" )\n",
"\n",
"else:\n",
" print(\n",
" \"Sweep job status: {}. Please wait until it completes\".format(\n",
" returned_sweep_job.status\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now register this model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "register_model"
},
"outputs": [],
"source": [
"registered_model = ml_client.models.create_or_update(model=model)"
]
},
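{
"cell_type": "markdown",
"metadata": {},
"source": [
"The registered model gets a name and an auto-incremented version, which you can use to reference it later:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# confirm the registration; name and version identify the model in the workspace\n",
"print(f\"Registered model: {registered_model.name} (version {registered_model.version})\")"
]
},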
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model as an online endpoint\n",
"\n",
"Now deploy your machine learning model as a web service in the Azure cloud, an [`online endpoint`](https://docs.microsoft.com/azure/machine-learning/concept-endpoints).\n",
"\n",
"To deploy a machine learning service, you usually need:\n",
"\n",
"* The model assets (file, metadata) that you want to deploy. You've already registered these assets in your training job.\n",
"* Some code to run as a service. The code executes the model on a given input request. This entry script receives data submitted to a deployed web service and passes it to the model, then returns the model's response to the client. The script is specific to your model. The entry script must understand the data that the model expects and returns. When using a MLFlow model, this script is automatically created for you."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a new online endpoint\n",
"\n",
"As a firsdt step, you need to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this article, you'll create a unique name using [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier#:~:text=A%20universally%20unique%20identifier%20(UUID,%2C%20for%20practical%20purposes%2C%20unique.)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "online_endpoint_name"
},
"outputs": [],
"source": [
"import uuid\n",
"\n",
"# Creating a unique name for the endpoint\n",
"online_endpoint_name = \"tff-dnn-endpoint-\" + str(uuid.uuid4())[:8]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "endpoint"
},
"outputs": [],
"source": [
"from azure.ai.ml.entities import (\n",
" ManagedOnlineEndpoint,\n",
" ManagedOnlineDeployment,\n",
" Model,\n",
" Environment,\n",
")\n",
"\n",
"# create an online endpoint\n",
"endpoint = ManagedOnlineEndpoint(\n",
" name=online_endpoint_name,\n",
" description=\"Classify handwritten digits using a deep neural network (DNN) using TensorFlow\",\n",
" auth_mode=\"key\",\n",
")\n",
"\n",
"endpoint = ml_client.begin_create_or_update(endpoint).result()\n",
"\n",
"print(f\"Endpint {endpoint.name} provisioning state: {endpoint.provisioning_state}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you've created an endpoint, you can retrieve it as below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "get_endpoint"
},
"outputs": [],
"source": [
"endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)\n",
"\n",
"print(\n",
" f'Endpint \"{endpoint.name}\" with provisioning state \"{endpoint.provisioning_state}\" is retrieved'\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy the model to the endpoint\n",
"\n",
"Once the endpoint is created, deploy the model with the entry script. Each endpoint can have multiple deployments and direct traffic to these deployments can be specified using rules. Here you'll create a single deployment that handles 100% of the incoming traffic. We have chosen a color name for the deployment, for example, *tff-blue*, *tff-green*, *tff-red* deployments, which is arbitrary.\n",
"\n",
"* Deploy the best version of the model which we registered earlier\n",
"* To score we will use the `score.py` file\n",
"* we will use the same curated environment we created earlier for inferencing as well\n",
"\n",
"> [!NOTE]\n",
"> Expect this deployment to take approximately 6 to 8 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "blue_deployment"
},
"outputs": [],
"source": [
"model = registered_model\n",
"\n",
"from azure.ai.ml.entities import CodeConfiguration\n",
"\n",
"# create an online deployment.\n",
"blue_deployment = ManagedOnlineDeployment(\n",
" name=\"tff-blue\",\n",
" endpoint_name=online_endpoint_name,\n",
" model=model,\n",
" code_configuration=CodeConfiguration(code=\"./src\", scoring_script=\"score.py\"),\n",
" environment=curated_env_name,\n",
" instance_type=\"Standard_DS3_v2\",\n",
" instance_count=1,\n",
")\n",
"\n",
"blue_deployment = ml_client.begin_create_or_update(blue_deployment).result()"
]
},
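{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, this single deployment should handle 100% of the incoming traffic. Route all of the endpoint's traffic to it, so that requests which don't name a deployment explicitly also reach *tff-blue*:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# route 100% of the endpoint's traffic to the blue deployment\n",
"endpoint.traffic = {\"tff-blue\": 100}\n",
"endpoint = ml_client.begin_create_or_update(endpoint).result()"
]
},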
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the deployed model\n",
"\n",
"Now that the model is deployed to the endpoint, you can run inference with it using the `invoke` on the endpoint. \n",
"\n",
"To test the endpoint we need some test data. Let us locally download the test data which we used in our training script."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "download_test_data"
},
"outputs": [],
"source": [
"import urllib.request\n",
"import os\n",
"\n",
"data_folder = os.path.join(os.getcwd(), \"data\")\n",
"os.makedirs(data_folder, exist_ok=True)\n",
"\n",
"urllib.request.urlretrieve(\n",
" \"https://azureopendatastorage.blob.core.windows.net/mnist/t10k-images-idx3-ubyte.gz\",\n",
" filename=os.path.join(data_folder, \"t10k-images-idx3-ubyte.gz\"),\n",
")\n",
"urllib.request.urlretrieve(\n",
" \"https://azureopendatastorage.blob.core.windows.net/mnist/t10k-labels-idx1-ubyte.gz\",\n",
" filename=os.path.join(data_folder, \"t10k-labels-idx1-ubyte.gz\"),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets create a function to load the downloaded files"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gzip\n",
"import struct\n",
"import numpy as np\n",
"\n",
"# load compressed MNIST gz files and return numpy arrays\n",
"def load_data(filename, label=False):\n",
" print(\"Filename:\", filename)\n",
" with gzip.open(filename) as gz:\n",
" struct.unpack(\"I\", gz.read(4))\n",
" n_items = struct.unpack(\">I\", gz.read(4))\n",
" if not label:\n",
" n_rows = struct.unpack(\">I\", gz.read(4))[0]\n",
" n_cols = struct.unpack(\">I\", gz.read(4))[0]\n",
" res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)\n",
" res = res.reshape(n_items[0], n_rows * n_cols)\n",
" else:\n",
" res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)\n",
" res = res.reshape(n_items[0], 1)\n",
" return res"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets load these into a test dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "load_test_data"
},
"outputs": [],
"source": [
"X_test = load_data(os.path.join(data_folder, \"t10k-images-idx3-ubyte.gz\"), False)\n",
"y_test = load_data(\n",
" os.path.join(data_folder, \"t10k-labels-idx1-ubyte.gz\"), True\n",
").reshape(-1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pick 30 random samples from the test set and write it to a json file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "generate_test_json"
},
"outputs": [],
"source": [
"import json\n",
"import numpy as np\n",
"\n",
"# find 30 random samples from test set\n",
"n = 30\n",
"sample_indices = np.random.permutation(X_test.shape[0])[0:n]\n",
"\n",
"test_samples = json.dumps({\"data\": X_test[sample_indices].tolist()})\n",
"\n",
"with open(\"request/sample-request.json\", \"w\") as outfile:\n",
" outfile.write(test_samples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets `invoke` the endpoint. After the invocation, we print the returned predictions and plot them along with the input images. Use red font color and inversed image (white on black) to highlight the misclassified samples. Note since the model accuracy is pretty high, you might have to run the below cell a few times before you can see a misclassified sample. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "invoke"
},
"outputs": [],
"source": [
"# # predict using the deployed model\n",
"result = ml_client.online_endpoints.invoke(\n",
" endpoint_name=online_endpoint_name,\n",
" request_file=\"./request/sample-request.json\",\n",
" deployment_name=\"tff-blue\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the invocation, we print the returned predictions and plot them along with the input images. Use red font color and inversed image (white on black) to highlight the misclassified samples. Note since the model accuracy is pretty high, you might have to run the below cell a few times before you can see a misclassified sample."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "test_invoke"
},
"outputs": [],
"source": [
"# compare actual value vs. the predicted values:\n",
"import matplotlib.pyplot as plt\n",
"\n",
"i = 0\n",
"plt.figure(figsize=(20, 1))\n",
"\n",
"for s in sample_indices:\n",
" plt.subplot(1, n, i + 1)\n",
" plt.axhline(\"\")\n",
" plt.axvline(\"\")\n",
"\n",
" # use different color for misclassified sample\n",
" font_color = \"red\" if y_test[s] != result[i] else \"black\"\n",
" clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys\n",
"\n",
" plt.text(x=10, y=-10, s=result[i], fontsize=18, color=font_color)\n",
" plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)\n",
"\n",
" i = i + 1\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clean up resources\n",
"\n",
"If you're not going to use the endpoint, delete it to stop using the resource. Make sure no other deployments are using an endpoint before you delete it.\n",
"\n",
"\n",
"> [!NOTE]\n",
"> Expect this step to take approximately 6 to 8 minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "delete_endpoint"
},
"outputs": [],
"source": [
"ml_client.online_endpoints.begin_delete(name=online_endpoint_name)"
]
}
],
"metadata": {
"description": {
"description": "Train, hyperparameter tune, and deploy a Tensorflow model to classify handwritten digits using a deep neural network (DNN)."
},
"kernelspec": {
"display_name": "Python 3.10 - SDK V2",
"language": "python",
"name": "python310-sdkv2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.12"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "27e3d0e72e4fca658b4cea21737d79da5e68f90d3ccf7f33207fcc73892eee38"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}