sdk/python/foundation-models/system/inference/image-classification/image-classification-online-endpoint.ipynb

{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Image Classification Inference using Online Endpoints\n", "\n", "This sample shows how to deploy `image-classification` type models to an online endpoint for inference.\n", "\n", "### Task\n", "`image-classification` tasks assign label(s) or class(es) to an image. There are two common types of `image-classification` tasks:\n", "\n", "* MultiClass: An image is categorized into exactly one of three or more classes.\n", "* MultiLabel: An image can be assigned more than one class.\n", " \n", "### Model\n", "Models that can perform the `image-classification` task are tagged with `image-classification`. We will use the `microsoft-beit-base-patch16-224-pt22k-ft22k` model in this notebook. If you opened this notebook from a specific model card, remember to replace the model name accordingly. If you don't find a model that suits your scenario or domain, you can discover and [import models from the HuggingFace hub](../../import/import_model_into_registry.ipynb) and then use them for inference.\n", "\n", "### Inference data\n", "We will use the [fridgeObjects](https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-classification/fridgeObjects.zip) dataset.\n", "\n", "\n", "### Outline\n", "1. Set up prerequisites\n", "2. Pick a model to deploy\n", "3. Prepare data for inference\n", "4. Deploy the model to an online endpoint for real-time inference\n", "5. Test the endpoint\n", "6. Clean up resources - delete the online endpoint" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Set up prerequisites\n", "* Install dependencies\n", "* Connect to AzureML Workspace. Learn more at [set up SDK authentication](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-setup-authentication?tabs=sdk). 
Replace `<AML_WORKSPACE_NAME>`, `<RESOURCE_GROUP>` and `<SUBSCRIPTION_ID>` below.\n", "* Connect to the `azureml` system registry" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml import MLClient\n", "from azure.identity import (\n", "    DefaultAzureCredential,\n", "    InteractiveBrowserCredential,\n", ")\n", "import time\n", "\n", "try:\n", "    credential = DefaultAzureCredential()\n", "    credential.get_token(\"https://management.azure.com/.default\")\n", "except Exception as ex:\n", "    credential = InteractiveBrowserCredential()\n", "\n", "try:\n", "    workspace_ml_client = MLClient.from_config(credential)\n", "    subscription_id = workspace_ml_client.subscription_id\n", "    resource_group = workspace_ml_client.resource_group_name\n", "    workspace_name = workspace_ml_client.workspace_name\n", "except Exception as ex:\n", "    print(ex)\n", "    # Enter details of your AML workspace\n", "    subscription_id = \"<SUBSCRIPTION_ID>\"\n", "    resource_group = \"<RESOURCE_GROUP>\"\n", "    workspace_name = \"<AML_WORKSPACE_NAME>\"\n", "workspace_ml_client = MLClient(\n", "    credential, subscription_id, resource_group, workspace_name\n", ")\n", "\n", "# The models, fine-tuning pipelines and environments are available in the AzureML system registry, \"azureml\"\n", "registry_ml_client = MLClient(\n", "    credential,\n", "    subscription_id,\n", "    resource_group,\n", "    registry_name=\"azureml\",\n", ")\n", "# Generate a unique timestamp that can be used for names and versions that need to be unique\n", "timestamp = str(int(time.time()))" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Pick a model to deploy\n", "\n", "Browse models in the Model Catalog in the AzureML Studio, filtering by the `image-classification` task. In this example, we use the `microsoft-beit-base-patch16-224-pt22k-ft22k` model. 
If you have opened this notebook for a different model, replace the model name accordingly. This is a pre-trained model and may not give correct predictions for your dataset. We strongly recommend fine-tuning this model on a downstream task before using it for predictions and inference. Please refer to the [multi-class classification fine-tuning notebook](../../finetune/image-classification/multiclass-classification/hftransformers-fridgeobjects-multiclass-classification.ipynb)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_name = \"microsoft-beit-base-patch16-224-pt22k-ft22k\"\n", "foundation_models = registry_ml_client.models.list(name=model_name)\n", "foundation_model = max(foundation_models, key=lambda x: int(x.version))\n", "print(\n", "    f\"\\n\\nUsing model name: {foundation_model.name}, version: {foundation_model.version}, id: {foundation_model.id} for inferencing\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Prepare data for inference\n", "\n", "We will use the [fridgeObjects](https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-classification/fridgeObjects.zip) dataset for the multi-class classification task. The fridge object dataset is stored in a directory. There are four different folders inside:\n", "- /water_bottle\n", "- /milk_bottle\n", "- /carton\n", "- /can\n", "\n", "This is the most common data format for multi-class image classification. Each folder title corresponds to the image label for the images contained inside. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import urllib.request\n", "from zipfile import ZipFile\n", "\n", "# Change to a different location if you prefer\n", "dataset_parent_dir = \"./data\"\n", "\n", "# Create data folder if it doesn't exist.\n", "os.makedirs(dataset_parent_dir, exist_ok=True)\n", "\n", "# Download data\n", "download_url = \"https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-classification/fridgeObjects.zip\"\n", "\n", "# Extract current dataset name from dataset URL\n", "dataset_name = os.path.split(download_url)[-1].split(\".\")[0]\n", "# Get dataset path for later use\n", "dataset_dir = os.path.join(dataset_parent_dir, dataset_name)\n", "\n", "# Get the data zip file path\n", "data_file = os.path.join(dataset_parent_dir, f\"{dataset_name}.zip\")\n", "\n", "# Download the dataset\n", "urllib.request.urlretrieve(download_url, filename=data_file)\n", "\n", "# Extract files\n", "with ZipFile(data_file, \"r\") as zip_ref:\n", "    print(\"extracting files...\")\n", "    zip_ref.extractall(path=dataset_parent_dir)\n", "    print(\"done\")\n", "# Delete zip file\n", "os.remove(data_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import Image\n", "\n", "sample_image = os.path.join(dataset_dir, \"milk_bottle\", \"99.jpg\")\n", "Image(filename=sample_image)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 4. Deploy the model to an online endpoint for real-time inference\n", "Online endpoints provide a durable REST API that can be used to integrate with applications that need to use the model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "from azure.ai.ml.entities import (\n", "    ManagedOnlineEndpoint,\n", "    ManagedOnlineDeployment,\n", "    OnlineRequestSettings,\n", ")\n", "\n", "# Endpoint names need to be unique in a region, hence using timestamp to create unique endpoint name\n", "timestamp = int(time.time())\n", "online_endpoint_name = \"hf-image-classif-\" + str(timestamp)\n", "# Create an online endpoint\n", "endpoint = ManagedOnlineEndpoint(\n", "    name=online_endpoint_name,\n", "    description=\"Online endpoint for \"\n", "    + foundation_model.name\n", "    + \", for image-classification task\",\n", "    auth_mode=\"key\",\n", ")\n", "workspace_ml_client.begin_create_or_update(endpoint).wait()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml.entities import OnlineRequestSettings, ProbeSettings\n", "\n", "deployment_name = \"hf-image-classif-mlflow-deploy\"\n", "\n", "print(foundation_model.id)\n", "print(online_endpoint_name)\n", "print(deployment_name)\n", "\n", "# Create a deployment\n", "demo_deployment = ManagedOnlineDeployment(\n", "    name=deployment_name,\n", "    endpoint_name=online_endpoint_name,\n", "    model=foundation_model.id,\n", "    instance_type=\"Standard_DS3_V2\",  # Use a GPU instance type like Standard_NC6s_v3 for faster inference\n", "    instance_count=1,\n", "    request_settings=OnlineRequestSettings(\n", "        max_concurrent_requests_per_instance=1,\n", "        request_timeout_ms=90000,\n", "        max_queue_wait_ms=500,\n", "    ),\n", "    liveness_probe=ProbeSettings(\n", "        failure_threshold=49,\n", "        success_threshold=1,\n", "        timeout=299,\n", "        period=180,\n", "        initial_delay=180,\n", "    ),\n", "    readiness_probe=ProbeSettings(\n", "        failure_threshold=10,\n", "        success_threshold=1,\n", "        timeout=10,\n", "        period=10,\n", "        initial_delay=10,\n", "    ),\n", ")\n", 
"workspace_ml_client.online_deployments.begin_create_or_update(demo_deployment).wait()\n", "endpoint.traffic = {deployment_name: 100}\n", "workspace_ml_client.begin_create_or_update(endpoint).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5. Test the endpoint\n", "\n", "We will fetch some sample data from the test dataset and submit it to the online endpoint for inference." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "demo_deployment = workspace_ml_client.online_deployments.get(\n", "    name=deployment_name,\n", "    endpoint_name=online_endpoint_name,\n", ")\n", "\n", "# Get the details of the online endpoint\n", "endpoint = workspace_ml_client.online_endpoints.get(name=online_endpoint_name)\n", "\n", "# Existing traffic details\n", "print(endpoint.traffic)\n", "\n", "# Get the scoring URI\n", "print(endpoint.scoring_uri)\n", "print(demo_deployment)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import base64\n", "import json\n", "\n", "sample_image = os.path.join(dataset_dir, \"milk_bottle\", \"99.jpg\")\n", "\n", "\n", "def read_image(image_path):\n", "    with open(image_path, \"rb\") as f:\n", "        return f.read()\n", "\n", "\n", "request_json = {\n", "    \"input_data\": [base64.b64encode(read_image(sample_image)).decode(\"utf-8\")]\n", "}\n", "\n", "# Save the request json to a file\n", "request_file_name = \"sample_request_data.json\"\n", "with open(request_file_name, \"w\") as request_file:\n", "    json.dump(request_json, request_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Score the sample_request_data.json file using the online endpoint invoke method\n", "response = workspace_ml_client.online_endpoints.invoke(\n", "    endpoint_name=online_endpoint_name,\n", "    deployment_name=demo_deployment.name,\n", "    request_file=request_file_name,\n", ")\n", "print(f\"raw response: 
{response}\\n\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 6. Clean up resources - delete the online endpoint\n", "Don't forget to delete the online endpoint; otherwise, you will leave the billing meter running for the compute used by the endpoint." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workspace_ml_client.online_endpoints.begin_delete(name=online_endpoint_name).wait()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }
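
Appendix: the scoring payload used in step 5 (a JSON object whose `input_data` key holds a list of base64-encoded image bytes) can be reproduced outside the notebook with the standard library alone. The sketch below is a minimal, offline illustration of that request format; `build_request_payload` is a hypothetical helper name, and the actual scoring call is omitted since it requires a live endpoint and key.

```python
import base64
import json


def build_request_payload(image_bytes: bytes) -> str:
    """Build the scoring payload this sample sends: a JSON object with
    "input_data" mapping to a list of base64-encoded image strings."""
    payload = {"input_data": [base64.b64encode(image_bytes).decode("utf-8")]}
    return json.dumps(payload)


# Placeholder bytes standing in for a real JPEG file on disk.
fake_image = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
body = build_request_payload(fake_image)

# The payload round-trips: decoding the base64 entry recovers the raw bytes.
decoded = base64.b64decode(json.loads(body)["input_data"][0])
assert decoded == fake_image
```

In the notebook itself, this string would be written to `sample_request_data.json` and passed to `online_endpoints.invoke` via `request_file`, as shown in step 5.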