{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visual Question Answering Inference using Online Endpoints\n",
"\n",
"This sample shows how to deploy `visual-question-answering` type models to an online endpoint for inference.\n",
"\n",
"### Task\n",
"`visual-question-answering` takes in images and for each image, generates a text/caption describing the image.\n",
"\n",
"### Model\n",
"Models that can perform the `visual-question-answering` task are tagged with `visual-question-answering`. We will use the `Salesforce/blip-vqa-base` model in this notebook. If you opened this notebook from a specific model card, remember to replace the specific model name.\n",
"\n",
"### Inference data\n",
"We will use the [odFridgeObjects](https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-object-detection/odFridgeObjects.zip) dataset.\n",
"\n",
"\n",
"### Outline\n",
"1. Setup pre-requisites\n",
"2. Pick a model to deploy\n",
"3. Prepare data for inference\n",
"4. Deploy the model to an online endpoint for real time inference\n",
"5. Test the endpoint\n",
"6. Clean up resources - delete the online endpoint"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Setup pre-requisites\n",
"* Install dependencies\n",
"* Connect to AzureML Workspace. Learn more at [set up SDK authentication](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-setup-authentication?tabs=sdk). Replace `<WORKSPACE_NAME>`, `<RESOURCE_GROUP>` and `<SUBSCRIPTION_ID>` below.\n",
"* Connect to `azureml` system registry"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.ai.ml import MLClient\n",
"from azure.identity import (\n",
" DefaultAzureCredential,\n",
" InteractiveBrowserCredential,\n",
")\n",
"import time\n",
"\n",
"try:\n",
" credential = DefaultAzureCredential()\n",
" credential.get_token(\"https://management.azure.com/.default\")\n",
"except Exception as ex:\n",
" credential = InteractiveBrowserCredential()\n",
"\n",
"try:\n",
" workspace_ml_client = MLClient.from_config(credential)\n",
" subscription_id = workspace_ml_client.subscription_id\n",
" resource_group = workspace_ml_client.resource_group_name\n",
" workspace_name = workspace_ml_client.workspace_name\n",
"except Exception as ex:\n",
" print(ex)\n",
" # Enter details of your AML workspace\n",
" subscription_id = \"<SUBSCRIPTION_ID>\"\n",
" resource_group = \"<RESOURCE_GROUP>\"\n",
" workspace_name = \"<WORKSPACE_NAME>\"\n",
"workspace_ml_client = MLClient(\n",
" credential, subscription_id, resource_group, workspace_name\n",
")\n",
"\n",
"# The models are available in the AzureML system registry, \"azureml\"\n",
"registry_ml_client = MLClient(\n",
" credential,\n",
" subscription_id,\n",
" resource_group,\n",
" registry_name=\"azureml\",\n",
")"
]
},
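{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick optional sanity check: print the workspace details the clients resolved to, so that a wrong configuration fails fast here rather than later at deployment time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check: confirm which workspace the notebook is wired to\n",
"print(f\"Subscription: {subscription_id}\")\n",
"print(f\"Resource group: {resource_group}\")\n",
"print(f\"Workspace: {workspace_name}\")"
]
},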
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Pick a model to deploy\n",
"\n",
"Browse models in the Model Catalog in the AzureML Studio, filtering by the `visual-question-answering` task. In this example, we use the `Salesforce-BLIP-vqa-base` model. If you have opened this notebook for a different model, replace the model name accordingly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"Salesforce-BLIP-vqa-base\"\n",
"\n",
"# Use model name below for BLIP-2\n",
"# model_name = \"Salesforce-BLIP-2-opt-2-7b-vqa\"\n",
"\n",
"foundation_model = registry_ml_client.models.get(name=model_name, label=\"latest\")\n",
"print(\n",
" f\"\\n\\nUsing model name: {foundation_model.name}, version: {foundation_model.version}, id: {foundation_model.id} for inferencing\"\n",
")"
]
},
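{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To browse alternatives programmatically instead of through the Model Catalog UI, you can list the registry models and filter by name. A minimal sketch, assuming `models.list()` is supported against a registry-scoped `MLClient` in your `azure-ai-ml` version; the substring filter is only a heuristic."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: browse registry models whose names hint at VQA support.\n",
"# Assumes models.list() works against a registry-scoped MLClient in your\n",
"# azure-ai-ml version; the substring filter is only a heuristic.\n",
"for registry_model in registry_ml_client.models.list():\n",
"    if \"vqa\" in registry_model.name.lower():\n",
"        print(registry_model.name)"
]
},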
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Prepare data for inference\n",
"\n",
"We will use the [odFridgeObjects](https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-object-detection/odFridgeObjects.zip) dataset for this image-to-text task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import urllib\n",
"from zipfile import ZipFile\n",
"\n",
"# Change to a different location if you prefer\n",
"dataset_parent_dir = \"./data\"\n",
"\n",
"# Create data folder if it doesnt exist.\n",
"os.makedirs(dataset_parent_dir, exist_ok=True)\n",
"\n",
"# Download data\n",
"download_url = \"https://automlsamplenotebookdata-adcuc7f7bqhhh8a4.b02.azurefd.net/image-object-detection/odFridgeObjects.zip\"\n",
"\n",
"# Extract current dataset name from dataset url\n",
"dataset_name = os.path.split(download_url)[-1].split(\".\")[0]\n",
"# Get dataset path for later use\n",
"dataset_dir = os.path.join(dataset_parent_dir, dataset_name)\n",
"\n",
"# Get the data zip file path\n",
"data_file = os.path.join(dataset_parent_dir, f\"{dataset_name}.zip\")\n",
"\n",
"# Download the dataset\n",
"urllib.request.urlretrieve(download_url, filename=data_file)\n",
"\n",
"# Extract files\n",
"with ZipFile(data_file, \"r\") as zip:\n",
" print(\"extracting files...\")\n",
" zip.extractall(path=dataset_parent_dir)\n",
" print(\"done\")\n",
"# Delete zip file\n",
"os.remove(data_file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image\n",
"\n",
"sample_image = os.path.join(dataset_dir, \"images\", \"99.jpg\")\n",
"Image(filename=sample_image)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. Deploy the model to an online endpoint for real time inference\n",
"Online endpoints give a durable REST API that can be used to integrate with applications that need to use the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"from azure.ai.ml.entities import (\n",
" ManagedOnlineEndpoint,\n",
" ManagedOnlineDeployment,\n",
")\n",
"\n",
"# Endpoint names need to be unique in a region, hence using timestamp to create unique endpoint name\n",
"timestamp = int(time.time())\n",
"online_endpoint_name = \"vqa-\" + str(timestamp)\n",
"# Create an online endpoint\n",
"endpoint = ManagedOnlineEndpoint(\n",
" name=online_endpoint_name,\n",
" description=\"Online endpoint for \"\n",
" + foundation_model.name\n",
" + \", for visual-question-answering task\",\n",
" auth_mode=\"key\",\n",
")\n",
"workspace_ml_client.begin_create_or_update(endpoint).wait()"
]
},
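{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, confirm that the endpoint provisioned successfully before creating a deployment; `provisioning_state` and `scoring_uri` are standard attributes on the endpoint object returned by the SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: verify the endpoint is up before deploying to it\n",
"endpoint = workspace_ml_client.online_endpoints.get(name=online_endpoint_name)\n",
"print(f\"Provisioning state: {endpoint.provisioning_state}\")\n",
"print(f\"Scoring URI: {endpoint.scoring_uri}\")"
]
},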
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azure.ai.ml.entities import OnlineRequestSettings, ProbeSettings\n",
"\n",
"deployment_name = \"vqa-mlflow-deploy\"\n",
"\n",
"# Create a deployment\n",
"demo_deployment = ManagedOnlineDeployment(\n",
" name=deployment_name,\n",
" endpoint_name=online_endpoint_name,\n",
" model=foundation_model.id,\n",
"    instance_type=\"Standard_DS5_v2\",  # Use a GPU instance type such as Standard_NC6s_v3 for faster inference\n",
" instance_count=1,\n",
" request_settings=OnlineRequestSettings(\n",
" max_concurrent_requests_per_instance=1,\n",
" request_timeout_ms=90000,\n",
" max_queue_wait_ms=500,\n",
" ),\n",
" liveness_probe=ProbeSettings(\n",
" failure_threshold=49,\n",
" success_threshold=1,\n",
" timeout=299,\n",
" period=180,\n",
" initial_delay=180,\n",
" ),\n",
" readiness_probe=ProbeSettings(\n",
" failure_threshold=10,\n",
" success_threshold=1,\n",
" timeout=10,\n",
" period=10,\n",
" initial_delay=10,\n",
" ),\n",
")\n",
"workspace_ml_client.online_deployments.begin_create_or_update(demo_deployment).wait()\n",
"endpoint.traffic = {deployment_name: 100}\n",
"workspace_ml_client.begin_create_or_update(endpoint).result()"
]
},
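{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If the deployment fails its probes or scoring requests error out, the container logs are usually the fastest way to diagnose the problem. A minimal sketch, assuming `get_logs` in your `azure-ai-ml` version accepts these parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: pull recent container logs to troubleshoot the deployment\n",
"logs = workspace_ml_client.online_deployments.get_logs(\n",
"    name=deployment_name,\n",
"    endpoint_name=online_endpoint_name,\n",
"    lines=50,  # number of trailing log lines to fetch\n",
")\n",
"print(logs)"
]
},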
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5. Test the endpoint\n",
"\n",
"We will fetch some sample data from the test dataset and submit to online endpoint for inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"import json\n",
"\n",
"sample_image_1 = os.path.join(dataset_dir, \"images\", \"99.jpg\")\n",
"sample_image_2 = os.path.join(dataset_dir, \"images\", \"1.jpg\")\n",
"\n",
"\n",
"def read_image(image_path):\n",
" with open(image_path, \"rb\") as f:\n",
" return f.read()\n",
"\n",
"\n",
"request_json = {\n",
" \"input_data\": {\n",
" \"columns\": [\"image\", \"text\"],\n",
" \"index\": [0, 1],\n",
" \"data\": [\n",
" [\n",
" base64.encodebytes(read_image(sample_image_1)).decode(\"utf-8\"),\n",
"            # For BLIP-2, append \"Answer:\" to the prompt below\n",
" \"Describe the beverage in the image?\",\n",
" ],\n",
" [\n",
" base64.encodebytes(read_image(sample_image_2)).decode(\"utf-8\"),\n",
"            # For BLIP-2, append \"Answer:\" to the prompt below\n",
" \"What are the drinks on the table?\",\n",
" ],\n",
" ],\n",
" }\n",
"}\n",
"\n",
"# Create request json\n",
"request_file_name = \"sample_request_data.json\"\n",
"with open(request_file_name, \"w\") as request_file:\n",
" json.dump(request_json, request_file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Score the sample_score.json file using the online endpoint with the azureml endpoint invoke method\n",
"response = workspace_ml_client.online_endpoints.invoke(\n",
" endpoint_name=online_endpoint_name,\n",
" deployment_name=demo_deployment.name,\n",
" request_file=request_file_name,\n",
")\n",
"\n",
"print(f\"raw response: {response}\\n\")"
]
},
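{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The raw response is a JSON string. The sketch below pairs each question with the model's answer; the `text` field name is an assumption based on the typical output schema of these registry models, so inspect the raw response if your model's output differs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"# Pair each question with the generated answer. The \"text\" key is an\n",
"# assumption based on the typical output of these registry models;\n",
"# inspect the raw response above if your model's schema differs.\n",
"questions = [row[1] for row in request_json[\"input_data\"][\"data\"]]\n",
"predictions = json.loads(response)\n",
"for question, prediction in zip(questions, predictions):\n",
"    answer = prediction.get(\"text\", prediction) if isinstance(prediction, dict) else prediction\n",
"    print(f\"Q: {question}\\nA: {answer}\")"
]
},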
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6. Clean up resources - delete the online endpoint\n",
"Don't forget to delete the online endpoint, else you will leave the billing meter running for the compute used by the endpoint."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"workspace_ml_client.online_endpoints.begin_delete(name=online_endpoint_name).wait()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "sdkv2",
"language": "python",
"name": "sdkv2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}