sdk/python/foundation-models/system/finetune/video-multi-object-tracking/mmtracking-video-multi-object-tracking.ipynb

{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Video Multi-Object Tracking using MMTracking specific pipeline component\n", "\n", "This sample shows how to use `mmtracking_video_multi_object_tracking_pipeline` component from the `azureml` registry to fine tune a model for video multi-object tracking task using MOT17 tiny Dataset. We then deploy the fine tuned model to an online endpoint for real time inference.\n", "\n", "### Training data\n", "We will use the [MOT17 tiny](https://download.openmmlab.com/mmtracking/data/MOT17_tiny.zip) dataset.\n", "\n", "### Model\n", "We will use the `bytetrack-yolox-x-crowdhuman-mot17-private-half` model in this notebook. If you need to fine tune a model that is available on MmTracking model zoo, but not available in `azureml` system registry, you can either register the model and use the registered model or use the `model_name` parameter to instruct the components to pull the model directly from MMTracking model zoo.\n", "\n", "### Outline\n", "1. Install dependencies\n", "2. Setup pre-requisites such as compute\n", "3. Pick a model to fine tune\n", "4. Prepare dataset for finetuning the model\n", "5. Submit the fine tuning job using MMTracking specific video-multi-object-tracking component\n", "6. Review training and evaluation metrics\n", "7. Register the fine tuned model\n", "8. Deploy the fine tuned model for real time inference\n", "9. Test deployed end point\n", "10. Clean up resources" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Install dependencies\n", "Before starting off, if you are running the notebook on Azure Machine Learning Studio or running first time locally, you will need the following packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "! pip install azure-ai-ml>=1.23.1\n", "! pip install azure-identity==1.13.0" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Setup pre-requisites" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.1 Connect to Azure Machine Learning workspace\n", "\n", "Before we dive in the code, you'll need to connect to your workspace. The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning.\n", "\n", "We are using `DefaultAzureCredential` to get access to workspace. `DefaultAzureCredential` should be capable of handling most scenarios. If you want to learn more about other available credentials, go to [set up authentication doc](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-setup-authentication?tabs=sdk), [azure-identity reference doc](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity?view=azure-python).\n", "\n", "Replace `<AML_WORKSPACE_NAME>`, `<RESOURCE_GROUP>` and `<SUBSCRIPTION_ID>` with their respective values in the below cell." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml import MLClient\n", "from azure.identity import DefaultAzureCredential\n", "\n", "\n", "experiment_name = (\n", " \"AzureML-Train-Finetune-Vision-MOT-Samples\" # can rename to any valid name\n", ")\n", "\n", "credential = DefaultAzureCredential()\n", "workspace_ml_client = None\n", "try:\n", " workspace_ml_client = MLClient.from_config(credential)\n", " subscription_id = workspace_ml_client.subscription_id\n", " resource_group = workspace_ml_client.resource_group_name\n", " workspace_name = workspace_ml_client.workspace_name\n", "except Exception as ex:\n", " print(ex)\n", " # Enter details of your AML workspace\n", " subscription_id = \"<SUBSCRIPTION_ID>\"\n", " resource_group = \"<RESOURCE_GROUP>\"\n", " workspace_name = \"<AML_WORKSPACE_NAME>\"\n", "\n", "workspace_ml_client = MLClient(\n", " credential, subscription_id, resource_group, workspace_name\n", ")\n", "registry_ml_client = MLClient(\n", " credential,\n", " subscription_id,\n", " resource_group,\n", " registry_name=\"azureml\",\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.2 Create compute\n", "\n", "In order to finetune a model on Azure Machine Learning studio, you will need to create a compute resource first. **Creating a compute will take 3-4 minutes.** \n", "\n", "For additional references, see [Azure Machine Learning in a Day](https://github.com/Azure/azureml-examples/blob/main/tutorials/azureml-in-a-day/azureml-in-a-day.ipynb). " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "##### Create CPU compute for model selection component" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml.entities import AmlCompute\n", "from azure.core.exceptions import ResourceNotFoundError\n", "\n", "model_import_cluster_name = \"sample-model-import-cluster\"\n", "try:\n", " _ = workspace_ml_client.compute.get(model_import_cluster_name)\n", " print(\"Found existing compute target.\")\n", "except ResourceNotFoundError:\n", " print(\"Creating a new compute target...\")\n", " compute_config = AmlCompute(\n", " name=model_import_cluster_name,\n", " type=\"amlcompute\",\n", " size=\"Standard_D12_v2\",\n", " idle_time_before_scale_down=120,\n", " min_instances=0,\n", " max_instances=4,\n", " )\n", " workspace_ml_client.begin_create_or_update(compute_config).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "##### Create GPU compute for finetune component\n", "\n", "The list of GPU machines can be found [here](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes-gpu)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "finetune_cluster_name = \"sample-finetune-cluster-gpu\"\n", "\n", "try:\n", " _ = workspace_ml_client.compute.get(finetune_cluster_name)\n", " print(\"Found existing compute target.\")\n", "except ResourceNotFoundError:\n", " print(\"Creating a new compute target...\")\n", " compute_config = AmlCompute(\n", " name=finetune_cluster_name,\n", " type=\"amlcompute\",\n", " size=\"STANDARD_NC6s_v3\",\n", " idle_time_before_scale_down=120,\n", " min_instances=0,\n", " max_instances=4,\n", " )\n", " workspace_ml_client.begin_create_or_update(compute_config).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 3. 
Pick a foundation model to fine tune\n", "\n", "We will use the `bytetrack_yolox_x_crowdhuman_mot17-private-half` model in this notebook. If you need to fine tune a model that is available in the MMTracking model zoo but not in the `azureml` registry, you can either register the model and use the registered model, or use the `model_name` parameter to instruct the components to pull the model directly from the MMTracking model zoo.\n", "\n", "Currently we support the following tracking-by-detection models, ByteTrack and OCSort:\n", "\n", "| Model Name | Source |\n", "| :------------: | :-------: |\n", "| [bytetrack_yolox_x_crowdhuman_mot17-private-half](https://ml.azure.com/registries/azureml/models/bytetrack_yolox_x_crowdhuman_mot17-private-half/version/6) | azureml registry |\n", "| [ocsort_yolox_x_crowdhuman_mot17-private-half](https://ml.azure.com/registries/azureml/models/ocsort_yolox_x_crowdhuman_mot17-private-half/version/6) | azureml registry |\n", "| [Variants of bytetrack models from MMTracking](https://github.com/open-mmlab/mmtracking/tree/v0.14.0/configs/mot/bytetrack) | MMTracking |" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aml_registry_model_name = \"bytetrack_yolox_x_crowdhuman_mot17-private-half\"\n", "foundation_model = registry_ml_client.models.get(\n", "    name=aml_registry_model_name, label=\"latest\"\n", ")\n", "\n", "print(\n", "    f\"\\n\\nUsing model name: {foundation_model.name}, version: {foundation_model.version}, id: {foundation_model.id} for fine tuning\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 4. Prepare the dataset for fine-tuning the model\n", "\n", "We will use the [MOT17 tiny](https://download.openmmlab.com/mmtracking/data/MOT17_tiny.zip) dataset, a subset of the [MOT17 Challenge](https://motchallenge.net/data/MOT17/). It consists of two video sequences with a single class, `pedestrian`.\n", "\n", "\n", "#### 4.1 Download the Data\n", "We first download and unzip the data locally. By default, the data will be downloaded to the `./data` folder in the current directory. \n", "If you prefer to download the data to a different location, update `dataset_parent_dir = ...` in the following cell."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import urllib\n", "from zipfile import ZipFile\n", "\n", "# Change to a different location if you prefer\n", "dataset_parent_dir = \"data\"\n", "\n", "# create data folder if it doesnt exist.\n", "os.makedirs(dataset_parent_dir, exist_ok=True)\n", "\n", "# download data\n", "download_url = \"https://download.openmmlab.com/mmtracking/data/MOT17_tiny.zip\"\n", "\n", "# Extract current dataset name from dataset url\n", "dataset_name = os.path.split(download_url)[-1].split(\".\")[0]\n", "# Get dataset path for later use\n", "dataset_dir = os.path.join(dataset_parent_dir, dataset_name)\n", "\n", "# Get the data zip file path\n", "data_file = os.path.join(dataset_parent_dir, f\"{dataset_name}.zip\")\n", "\n", "# Download the dataset\n", "urllib.request.urlretrieve(download_url, filename=data_file)\n", "\n", "# extract files\n", "with ZipFile(data_file, \"r\") as zzip:\n", " print(\"extracting files...\")\n", " zzip.extractall(path=dataset_parent_dir)\n", " print(\"done\")\n", "# delete zip file\n", "os.remove(data_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install Pillow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from PIL import Image\n", "\n", "sample_image = os.path.join(dataset_dir, \"train/MOT17-02-FRCNN/img1/000001.jpg\")\n", "sample_image = Image.open(sample_image)\n", "print(sample_image.size)\n", "sample_image" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.2 Upload the images to Datastore through an AML Data asset (URI Folder)\n", "\n", "In order to use the data for training in Azure ML, we upload it to our default Azure Blob Storage of our Azure ML Workspace." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Uploading image files by creating a 'data asset URI FOLDER':\n", "\n", "from azure.ai.ml.entities import Data\n", "from azure.ai.ml import Input\n", "from azure.ai.ml.constants import AssetTypes\n", "\n", "my_data = Data(\n", " path=dataset_dir,\n", " type=AssetTypes.URI_FOLDER,\n", " description=f\"{dataset_name} dataset folder\",\n", " name=f\"{dataset_name}_sample_folder\",\n", ")\n", "\n", "uri_folder_data_asset = workspace_ml_client.data.create_or_update(my_data)\n", "# uri_folder_data_asset = workspace_ml_client.data.get(name=f\"{dataset_name}_sample_folder\", version=1)\n", "\n", "# or if the uri_folder was uploaded, we could get it with:\n", "# uri_folder_data_asset = workspace_ml_client.data.get(name = f\"{dataset_name}_sample_folder\", version=1)\n", "\n", "print(uri_folder_data_asset)\n", "print(\"\")\n", "print(\"Path to folder in Blob Storage:\")\n", "print(uri_folder_data_asset.path)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.3 Convert the downloaded data to JSONL\n", "\n", "In this example, the MOT17_tiny dataset is stored in a directory. There are two different folders inside `train` image folder, each denoting a different video:\n", "\n", "- train/MOT17-02-FRCNN\n", "- train/MOT17-04-FRCNN\n", "\n", "This is the most common data format for image object tracking. Inside each of video folder, video frames are sorted in sequence.\n", "\n", "Note that, in the above folders, videos are already parsed into image frames. 
If you have a video at hand, you can install [`ffmpeg`](https://ffmpeg.org/download.html) and run the following command:\n", "\n", "```\n", "mkdir video_name\n", "ffmpeg -i video_name.mp4 -vf fps=30 video_name/%06d.png\n", "```\n", "where `-i` specifies the input video, `-vf fps=30` sets the extraction rate to 30 frames per second (the most commonly used frame rate), and `%06d.png` produces zero-padded frame names.\n", "\n", "\n", "For documentation on preparing datasets beyond this notebook, please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).\n", "\n", "The following code block converts the original dataset to [CocoVid format](https://github.com/open-mmlab/mmtracking/blob/master/tests/data/demo_cocovid_data/ann.json). Most multi-object tracking datasets are available in CocoVid format.\n", "\n", "AzureML pipelines accept datasets in MLTable format. We will convert the MOT17_tiny dataset to CocoVid format and then convert the CocoVid annotations to MLTable." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !pip install numpy\n", "!python ./mot2coco.py -i {dataset_dir} -o {dataset_dir}/annotations --split-train --convert-det" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "\n", "\n", "# We'll copy each JSONL file within its related MLTable folder\n", "training_mltable_path = os.path.join(dataset_dir, \"training-mltable-folder\")\n", "validation_mltable_path = os.path.join(dataset_dir, \"validation-mltable-folder\")\n", "testing_mltable_path = os.path.join(dataset_dir, \"testing-mltable-folder\")\n", "\n", "# First, let's create the folders if they don't exist\n", "os.makedirs(training_mltable_path, exist_ok=True)\n", "os.makedirs(validation_mltable_path, exist_ok=True)\n", "os.makedirs(testing_mltable_path, exist_ok=True)\n", "\n", "train_annotations_file = os.path.join(training_mltable_path, \"train_annotations.jsonl\")\n", "validation_annotations_file = os.path.join(\n", "    validation_mltable_path, \"validation_annotations.jsonl\"\n", ")\n", "testing_annotations_file = os.path.join(\n", "    testing_mltable_path, \"testing_annotations.jsonl\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.4 Convert annotation file from COCO to JSONL\n", "AzureML recommends the MLTable format for datasets. In order to create an MLTable, we first need to convert the annotations to JSONL format. The following script will create two `.jsonl` files (one for training and one for validation) in the corresponding MLTable folders.\n", "\n", "The next step is to convert the CocoVid annotations to JSONL format, which is required by the MLTable creation step that follows. 
The JSONL schema is similar to the [object detection schema](https://learn.microsoft.com/en-us/azure/machine-learning/reference-automl-images-schema?view=azureml-api-2#object-detection), with additional `video_details` information and an `instance_id` field in each label.\n", "\n", "Note that for test JSONL creation, the `label` field is not required.\n", "\n", "    {\n", "        \"image_url\":\"azureml://subscriptions/<my-subscription-id>/resourcegroups/<my-resource-group>/workspaces/<my-workspace>/datastores/<my-datastore>/paths/<path_to_image>\",\n", "        \"image_details\":{\n", "            \"format\":\"image_format\",\n", "            \"width\":\"image_width\",\n", "            \"height\":\"image_height\"\n", "        },\n", "        \"video_details\": {\n", "            \"frame_id\": \"zero_based_frame_id(int)\",\n", "            \"video_name\": \"video_name\"\n", "        },\n", "        \"label\":[\n", "            {\n", "                \"label\":\"class_name_1\",\n", "                \"topX\":\"xmin/width\",\n", "                \"topY\":\"ymin/height\",\n", "                \"bottomX\":\"xmax/width\",\n", "                \"bottomY\":\"ymax/height\",\n", "                \"isCrowd\":\"isCrowd\",\n", "                \"instance_id\": \"instance_id\"\n", "            },\n", "            {\n", "                \"label\":\"class_name_2\",\n", "                \"topX\":\"xmin/width\",\n", "                \"topY\":\"ymin/height\",\n", "                \"bottomX\":\"xmax/width\",\n", "                \"bottomY\":\"ymax/height\",\n", "                \"instance_id\": \"instance_id\"\n", "            },\n", "            \"...\"\n", "        ]\n", "    }" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python cocovid2jsonl.py \\\n", "    --input_cocovid_file_path {dataset_dir}/annotations/half-train_cocoformat.json \\\n", "    --output_dir {training_mltable_path} \\\n", "    --output_file_name train_annotations.jsonl \\\n", "    --task_type ObjectTracking \\\n", "    --base_url {uri_folder_data_asset.path}train\n", "!python cocovid2jsonl.py \\\n", "    --input_cocovid_file_path {dataset_dir}/annotations/half-val_cocoformat.json \\\n", "    --output_dir {validation_mltable_path} \\\n", "    --output_file_name validation_annotations.jsonl \\\n", "    --task_type ObjectTracking \\\n", "    --base_url {uri_folder_data_asset.path}train" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.5 Create MLTable data input\n", "\n", "Create MLTable data input using the jsonl files created above.\n", "\n", "For documentation on creating your own MLTable assets for jobs beyond this notebook, please refer to the resources below:\n", "- [MLTable YAML Schema](https://learn.microsoft.com/en-us/azure/machine-learning/reference-yaml-mltable) - covers how to write MLTable YAML, which is required for each MLTable asset.\n", "- [Create MLTable data asset](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-data-assets?tabs=Python-SDK#create-a-mltable-data-asset) - covers how to create an MLTable data asset. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def create_ml_table_file(filename):\n", " \"\"\"Create ML Table definition\"\"\"\n", "\n", " return (\n", " \"paths:\\n\"\n", " \" - file: ./{0}\\n\"\n", " \"transformations:\\n\"\n", " \" - read_json_lines:\\n\"\n", " \" encoding: utf8\\n\"\n", " \" invalid_lines: error\\n\"\n", " \" include_path_column: false\\n\"\n", " \" - convert_column_types:\\n\"\n", " \" - columns: image_url\\n\"\n", " \" column_type: stream_info\"\n", " ).format(filename)\n", "\n", "\n", "def save_ml_table_file(output_path, mltable_file_contents):\n", " with open(os.path.join(output_path, \"MLTable\"), \"w\") as f:\n", " f.write(mltable_file_contents)\n", "\n", "\n", "# Create and save train mltable\n", "train_mltable_file_contents = create_ml_table_file(\n", " os.path.basename(train_annotations_file)\n", ")\n", "save_ml_table_file(training_mltable_path, train_mltable_file_contents)\n", "\n", "# Create and save validation mltable\n", "validation_mltable_file_contents = create_ml_table_file(\n", " os.path.basename(validation_annotations_file)\n", ")\n", "save_ml_table_file(validation_mltable_path, validation_mltable_file_contents)\n", "\n", "# Create and save testing mltable\n", "testing_mltable_file_contents = create_ml_table_file(\n", " os.path.basename(testing_annotations_file)\n", ")\n", "save_ml_table_file(testing_mltable_path, testing_mltable_file_contents)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5. Submit the fine tuning job using `mmtracking_video_multi_object_tracking_pipeline` component\n", " \n", "Create the job that uses the `mmtracking_video_multi_object_tracking_pipeline` component for `video-multi-object-tracking` tasks. Learn more in 5.2 about all the parameters supported for fine tuning." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.1 Receive component" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "FINETUNE_PIPELINE_COMPONENT_NAME = \"mmtracking_video_multi_object_tracking_pipeline\"\n", "pipeline_component_mmtracking_func = registry_ml_client.components.get(\n", " name=FINETUNE_PIPELINE_COMPONENT_NAME, label=\"latest\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.2 Create arguments to be passed to `mmtracking_video_multi_object_tracking_pipeline` component\n", "\n", "The `mmtracking_video_multi_object_tracking_pipeline` component consists of model selection and finetuning components. 
The detailed arguments for each component can be found in the following README files:\n", "- [Model Import Component](../../docs/component_docs/image_finetune/mmd_model_import_component.md)\n", "- [Finetune Component](../../docs/component_docs/image_finetune/mmd_finetune_component.md)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mmtracking_model_name = aml_registry_model_name\n", "pipeline_component_args = {\n", "    # # Model import args\n", "    \"model_family\": \"MmTrackingVideo\",\n", "    \"mlflow_model\": foundation_model.id,  # pass foundation_model.id; passing the foundation_model object itself raises UserErrorException: only path input is supported now but get: ...\n", "    # \"model_name\": mmtracking_model_name, # specify model_name instead of mlflow_model if you want to use a model from the MMTracking model zoo\n", "    # Finetune args\n", "    \"task_name\": \"video-multi-object-tracking\",\n", "    \"number_of_workers\": 8,\n", "    \"image_width\": sample_image.size[0],\n", "    \"image_height\": sample_image.size[1],\n", "    \"number_of_epochs\": 5,\n", "    # \"learning_rate\": 0.0001,\n", "    # \"metric_for_best_model\": \"MOTA\",\n", "    # \"extra_optim_args\": \"\",\n", "    # \"evaluation_strategy\": \"epoch\",\n", "    # \"evaluation_steps\": 500,\n", "    # \"logging_strategy\": \"epoch\",\n", "    # \"logging_steps\": 500,\n", "    # \"save_strategy\": \"epoch\",\n", "    # \"save_steps\": 500,\n", "    # \"save_total_limit\": -1,\n", "    # \"early_stopping\": False,\n", "    # \"early_stopping_patience\": 1,\n", "    # \"resume_from_checkpoint\": False,\n", "    # \"save_as_mlflow_model\": True,\n", "    # # Uncomment one or more lines below to provide specific values, if you wish to override the autoselected default values.\n", "    # \"max_steps\": -1,\n", "    # \"training_batch_size\": 4,  # note: validation_batch_size is not supported for the MOT task; it only allows a batch size of 1 to preserve the sequence order\n", "    # \"learning_rate_scheduler\": \"warmup_cosine\",\n", "    # \"warmup_steps\": 0,\n", "    # \"optimizer\": \"sgd\",\n", "    # \"weight_decay\": 0.0,\n", "    # \"gradient_accumulation_step\": 1,\n", "    # \"max_grad_norm\": 1.0,\n", "    # \"iou_threshold\": 0.5,\n", "    # \"box_score_threshold\": 0.3,\n", "    # \"precision\": \"32\",\n", "    # \"random_seed\": 42,\n", "    # The following parameters map to the dataset fields\n", "    # Uncomment one or more lines below to provide specific values, if you wish to override the autoselected default values.\n", "}\n", "\n", "# Ensure that the user provides only one of mlflow_model or model_name\n", "if (\n", "    pipeline_component_args.get(\"mlflow_model\") is None\n", "    and pipeline_component_args.get(\"model_name\") is None\n", "):\n", "    raise ValueError(\n", "        \"You must specify either mlflow_model or model_name for the model to finetune\"\n", "    )\n", "if (\n", "    pipeline_component_args.get(\"mlflow_model\") is not None\n", "    and pipeline_component_args.get(\"model_name\") is not None\n", "):\n", "    raise ValueError(\n", "        \"You must specify ONLY one of mlflow_model and model_name for the model to finetune\"\n", "    )\n", "elif (\n", "    pipeline_component_args.get(\"mlflow_model\") is None\n", "    and pipeline_component_args.get(\"model_name\") is not None\n", "):\n", "    use_model_name = mmtracking_model_name\n", "elif (\n", "    pipeline_component_args.get(\"mlflow_model\") is not None\n", "    and pipeline_component_args.get(\"model_name\") is None\n", "):\n", "    use_model_name = aml_registry_model_name\n", "print(f\"Finetuning model {use_model_name}\")" ] }, {
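"cell_type": "markdown", "metadata": {}, "source": [ "Optionally, you can cross-check the argument names used above against the inputs that the pipeline component actually exposes. The cell below is a minimal sketch; it assumes the component entity returned by `registry_ml_client.components.get` exposes an `inputs` mapping, as `azure-ai-ml` component entities do." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional: list the input names exposed by the pipeline component.\n", "# Assumes the fetched component entity provides an `inputs` mapping (azure-ai-ml behavior).\n", "for input_name in sorted(pipeline_component_mmtracking_func.inputs):\n", "    print(input_name)" ] }, {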
"attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.3 Utility function to create pipeline using `mmtracking_video_multi_object_tracking_pipeline` component" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml.dsl import pipeline\n", "from azure.ai.ml.entities import PipelineComponent\n", "from azure.ai.ml import Input\n", "from azure.ai.ml.constants import AssetTypes\n", "\n", "\n", "@pipeline()\n", "def create_pipeline_mmtracking():\n", " \"\"\"Create pipeline.\"\"\"\n", "\n", " mmtracking_pipeline_component: PipelineComponent = (\n", " pipeline_component_mmtracking_func(\n", " compute_model_import=model_import_cluster_name,\n", " compute_finetune=finetune_cluster_name,\n", " training_data=Input(type=AssetTypes.MLTABLE, path=training_mltable_path),\n", " validation_data=Input(\n", " type=AssetTypes.MLTABLE, path=validation_mltable_path\n", " ),\n", " **pipeline_component_args,\n", " )\n", " )\n", " return {\n", " # Map the output of the fine tuning job to the output of pipeline job so that we can easily register the fine tuned model. Registering the model is required to deploy the model to an online or batch endpoint.\n", " \"trained_model\": mmtracking_pipeline_component.outputs.mlflow_model_folder,\n", " }" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.4 Run the fine tuning job using `mmtracking_video_multi_object_tracking_pipeline` component" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mmtracking_pipeline_object = create_pipeline_mmtracking()\n", "\n", "# don't use cached results from previous jobs\n", "mmtracking_pipeline_object.settings.force_rerun = True\n", "\n", "# set continue on step failure to False\n", "mmtracking_pipeline_object.settings.continue_on_step_failure = False\n", "\n", "mmtracking_pipeline_object.display_name = (\n", " use_model_name + \"_mmtracking_pipeline_component_run_\" + \"mot\"\n", ")\n", "# Don't use cached results from previous jobs\n", "mmtracking_pipeline_object.settings.force_rerun = True\n", "\n", "print(\"Submitting pipeline\")\n", "\n", "mmtracking_pipeline_run = workspace_ml_client.jobs.create_or_update(\n", " mmtracking_pipeline_object, experiment_name=experiment_name\n", ")\n", "\n", "print(f\"Pipeline created. URL: {mmtracking_pipeline_run.studio_url}\")" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "workspace_ml_client.jobs.stream(mmtracking_pipeline_run.name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 6. Get metrics from finetune component\n", "\n", "The model training happens as part of the finetune component. Please follow below steps to extract validation metrics from the run." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "##### 6.1 Initialize MLFlow Client\n", "\n", "The models and artifacts can be accessed via the MLFlow interface.\n", "Initialize the MLFlow client here, and set the backend as Azure ML, via. 
the MLFlow tracking URI of the workspace.\n", "\n", "IMPORTANT - You need to have installed the latest MLFlow packages with:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install azureml-mlflow\n", "!pip install mlflow" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mlflow\n", "\n", "# Obtain the tracking URI from the workspace\n", "MLFLOW_TRACKING_URI = workspace_ml_client.workspaces.get(\n", "    name=workspace_ml_client.workspace_name\n", ").mlflow_tracking_uri\n", "\n", "print(MLFLOW_TRACKING_URI)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set the MLFLOW TRACKING URI\n", "mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)\n", "print(f\"\\nCurrent tracking uri: {mlflow.get_tracking_uri()}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mlflow.tracking.client import MlflowClient\n", "\n", "# Initialize MLFlow client\n", "mlflow_client = MlflowClient()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.2 Get the training run\n", "\n", "Fetch the training run from the above pipeline run. We will later use it to fetch the metrics." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Concat 'tags.mlflow.rootRunId=' and the pipeline run name in single quotes as the filter variable\n", "filter = \"tags.mlflow.rootRunId='\" + mmtracking_pipeline_run.name + \"'\"\n", "runs = mlflow.search_runs(\n", "    experiment_names=[experiment_name], filter_string=filter, output_format=\"list\"\n", ")\n", "\n", "# Get the training runs.\n", "# Using a hacky way till 'Bug 2320997: not able to show eval metrics in FT notebooks - mlflow client now showing display names' is fixed\n", "for run in runs:\n", "    # Check if run.data.metrics.epoch exists\n", "    if \"epoch\" in run.data.metrics:\n", "        training_run = run\n", "    # Else, check if run.data.metrics.MOTA exists\n", "    elif \"MOTA\" in run.data.metrics:\n", "        evaluation_run = run" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.3 Get training metrics\n", "\n", "Access the results (such as Models, Artifacts, Metrics) of a previously completed run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "pd.DataFrame(training_run.data.metrics, index=[0]).T" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 7. Register the fine tuned model with the workspace\n", "\n", "We will register the model from the output of the fine tuning job. This will track lineage between the fine tuned model and the fine tuning job. The fine tuning job, in turn, tracks lineage to the foundation model, data and training code."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "# Generating a unique timestamp that can be used for names and versions that need to be unique\n", "timestamp = str(int(time.time()))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml.entities import Model\n", "from azure.ai.ml.constants import AssetTypes\n", "\n", "# Check if the `trained_model` output is available\n", "print(\n", " f\"Pipeline job outputs: {workspace_ml_client.jobs.get(mmtracking_pipeline_run.name).outputs}\"\n", ")\n", "\n", "# Fetch the model from pipeline job output - not working, hence fetching from fine tune child job\n", "model_path_from_job = (\n", " f\"azureml://jobs/{mmtracking_pipeline_run.name}/outputs/trained_model\"\n", ")\n", "print(f\"Path to register model: {model_path_from_job}\")\n", "\n", "finetuned_model_name = f\"{use_model_name.replace('/', '-')}-mot17-tiny\"\n", "finetuned_model_description = f\"{use_model_name.replace('/', '-')} fine tuned model for mot17 tiny video-multi-object-tracking\"\n", "prepare_to_register_model = Model(\n", " path=model_path_from_job,\n", " type=AssetTypes.MLFLOW_MODEL,\n", " name=finetuned_model_name,\n", " version=timestamp, # Use timestamp as version to avoid version conflict\n", " description=finetuned_model_description,\n", ")\n", "print(f\"Prepare to register model: \\n{prepare_to_register_model}\")\n", "\n", "# Register the model from pipeline job output\n", "registered_model = workspace_ml_client.models.create_or_update(\n", " prepare_to_register_model\n", ")\n", "print(f\"Registered model: {registered_model}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 8. Deploy the fine tuned model to an online endpoint\n", "Online endpoints give a durable REST API that can be used to integrate with applications that need to use the model." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import datetime\n", "from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment\n", "\n", "# Endpoint names need to be unique in a region, hence using timestamp to create unique endpoint name\n", "online_endpoint_name = \"mmt-mot17tiny-\" + datetime.datetime.now().strftime(\"%m%d%H%M\")\n", "online_endpoint_description = f\"Online endpoint for {registered_model.name}, fine tuned model for mot17 tiny video-multi-object-tracking\"\n", "# Create an online endpoint\n", "endpoint = ManagedOnlineEndpoint(\n", " name=online_endpoint_name,\n", " description=online_endpoint_description,\n", " auth_mode=\"key\",\n", " tags={\"foo\": \"bar\"},\n", ")\n", "workspace_ml_client.begin_create_or_update(endpoint).result()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml.entities import OnlineRequestSettings, ProbeSettings\n", "\n", "deployment_name = \"mmt-mot17tiny-mlflow-deploy\"\n", "print(registered_model.id)\n", "print(online_endpoint_name)\n", "print(deployment_name)\n", "\n", "# Create a deployment\n", "demo_deployment = ManagedOnlineDeployment(\n", " name=deployment_name,\n", " endpoint_name=online_endpoint_name,\n", " model=registered_model.id,\n", " instance_type=\"Standard_NC6s_V3\",\n", " instance_count=1,\n", " request_settings=OnlineRequestSettings(\n", " max_concurrent_requests_per_instance=1,\n", " request_timeout_ms=90000,\n", " max_queue_wait_ms=500,\n", " ),\n", " liveness_probe=ProbeSettings(\n", " failure_threshold=49,\n", " success_threshold=1,\n", " timeout=299,\n", " period=200,\n", " initial_delay=180,\n", " ),\n", " readiness_probe=ProbeSettings(\n", " failure_threshold=10,\n", " success_threshold=1,\n", " timeout=10,\n", " period=10,\n", " initial_delay=10,\n", " ),\n", ")\n", "workspace_ml_client.online_deployments.begin_create_or_update(demo_deployment).wait()\n", "endpoint.traffic = {deployment_name: 100}\n", "workspace_ml_client.begin_create_or_update(endpoint).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 9. Test the endpoint with sample data\n", "\n", "We will fetch some sample data from the test dataset and submit to online endpoint for inference. We will then display the scored labels alongside the ground truth labels." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "demo_deployment = workspace_ml_client.online_deployments.get(\n", " name=deployment_name,\n", " endpoint_name=online_endpoint_name,\n", ")\n", "\n", "# Get the details for online endpoint\n", "endpoint = workspace_ml_client.online_endpoints.get(name=online_endpoint_name)\n", "\n", "# existing traffic details\n", "print(endpoint.traffic)\n", "# Get the scoring URI\n", "print(endpoint.scoring_uri)\n", "print(demo_deployment)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create request json\n", "import base64\n", "import json\n", "\n", "sample_video_link = \"https://github.com/open-mmlab/mmtracking/raw/master/demo/demo.mp4\"\n", "request_json = {\"input_data\": {\"columns\": [\"video\"], \"data\": [sample_video_link]}}\n", "request_file_name = \"sample_request_data.json\"\n", "with open(request_file_name, \"w\") as request_file:\n", " json.dump(request_json, request_file)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "resp = workspace_ml_client.online_endpoints.invoke(\n", " endpoint_name=online_endpoint_name,\n", " deployment_name=demo_deployment.name,\n", " request_file=request_file_name,\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "resp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Visualize tracking\n", "Now we can visualize the tracking in the video:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install opencv-python-headless\n", "!pip install mmcv-full==1.7.1" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import cv2\n", "import mmcv\n", "from time import sleep\n", "from PIL import Image\n", "from IPython.display import display, clear_output\n", "\n", "img_frames = mmcv.VideoReader(sample_video_link)\n", "predictions = json.loads(resp)\n", "assert len(img_frames) == len(predictions)\n", "\n", "\n", "def draw_bbox_on_image(img, track_bbox):\n", " x0, y0, x1, y1 = (\n", " track_bbox[\"topX\"],\n", " track_bbox[\"topY\"],\n", " track_bbox[\"bottomX\"],\n", " track_bbox[\"bottomY\"],\n", " )\n", " x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1)\n", " instance_id = track_bbox[\"instance_id\"]\n", " text = f\"ID: {instance_id}\"\n", " cv2.putText(img, text, (x0, y0), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 1)\n", " cv2.rectangle(img, (x0, y0), (x1, y1), color=(0, 0, 0), thickness=2)\n", "\n", "\n", "visualized_results = []\n", "for img, prediction in zip(img_frames, predictions):\n", " track_bboxes = prediction[\"track_bboxes\"]\n", " for track_bbox in track_bboxes:\n", " draw_bbox_on_image(img, track_bbox[\"box\"])\n", " visualized_results.append(img)\n", "\n", "fps = 10 # frames per second, for most videos fps=30, pls change it according to your video\n", "for img_array in visualized_results:\n", " display(Image.fromarray(img_array))\n", " sleep(1.0 / fps)\n", " clear_output(wait=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 10. Clean up resources - delete the online endpoint\n", "Don't forget to delete the online endpoint, else you will leave the billing meter running for the compute used by the endpoint." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "workspace_ml_client.online_endpoints.begin_delete(name=online_endpoint_name).wait()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }