{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Deploy and score a machine learning model by using an online endpoint \n", "\n", "Learn how to use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.\n", "\n", "For more information, see [What are Azure Machine Learning endpoints?](https://docs.microsoft.com/azure/machine-learning/concept-endpoints)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).\n", "\n", "* Install and configure the [Python SDK v2](sdk/setup.sh).\n", "\n", "* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it.\n", "\n", "* You must have an Azure Machine Learning workspace. \n", "\n", "* To deploy locally, you must install Docker Engine on your local computer. We highly recommend this option, so it's easier to debug issues." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Connect to Azure Machine Learning Workspace\n", "\n", "The [workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run.\n", "\n", "## 1.1. Import the required libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import required libraries\n", "from azure.ai.ml import MLClient\n", "from azure.ai.ml.entities import (\n", " KubernetesOnlineEndpoint,\n", " KubernetesOnlineDeployment,\n", " Model,\n", " Environment,\n", " CodeConfiguration,\n", ")\n", "from azure.identity import DefaultAzureCredential\n", "from azure.ai.ml.entities._deployment.resource_requirements_settings import (\n", " ResourceRequirementsSettings,\n", ")\n", "from azure.ai.ml.entities._deployment.container_resource_settings import (\n", " ResourceSettings,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1.2. Configure workspace details and get a handle to the workspace\n", "\n", "To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We will use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. We use the default [interactive authentication](https://docs.microsoft.com/python/api/azure-identity/azure.identity.interactivebrowsercredential?view=azure-python) for this tutorial. More advanced connection methods can be found [here](https://docs.microsoft.com/python/api/azure-identity/azure.identity?view=azure-python)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# enter details of your AML workspace\n", "subscription_id = \"<SUBSCRIPTION_ID>\"\n", "resource_group = \"<RESOURCE_GROUP>\"\n", "workspace = \"<AML_WORKSPACE_NAME>\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# get a handle to the workspace\n", "ml_client = MLClient(\n", " DefaultAzureCredential(), subscription_id, resource_group, workspace\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy and debug locally by using local endpoints" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Note\n", "* To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.\n", "* Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Create local endpoint and deployment\n", "\n", "## 2.1 Create local endpoint\n", "\n", "The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:\n", "* Local endpoints *do not support* traffic rules, authentication, or probe settings.\n", "* Local endpoints support only one deployment per endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Creating a local endpoint\n", "import datetime\n", "\n", "local_endpoint_name = \"local-\" + datetime.datetime.now().strftime(\"%m%d%H%M%f\")\n", "\n", "# create an online endpoint\n", "endpoint = KubernetesOnlineEndpoint(\n", " name=local_endpoint_name, description=\"this is a sample local endpoint\"\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Install docker package in the current Jupyter kernel\n", "import sys\n", "\n", "!{sys.executable} -m pip install docker" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 2.2 Create local deployment\n", "\n", "The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:\n", "\n", "* Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.\n", "* The code that's required to score the model. In this case, we have a score.py file.\n", "* An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.\n", "* Settings to specify the instance type and scaling capacity.\n", "\n", "### Key aspects of deployment \n", "\n", "- `name` - Name of the deployment.\n", "- `endpoint_name` - Name of the endpoint to create the deployment under.\n", "- `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.\n", "- `environment` - The environment to use for the deployment. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Model(path=\"../model-1/model/sklearn_regression_model.pkl\")\n", "env = Environment(\n", "    conda_file=\"../model-1/environment/conda.yaml\",\n", "    image=\"mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\",\n", ")\n", "\n", "blue_deployment = KubernetesOnlineDeployment(\n", "    name=\"blue\",\n", "    endpoint_name=local_endpoint_name,\n", "    model=model,\n", "    environment=env,\n", "    code_configuration=CodeConfiguration(\n", "        code=\"../model-1/onlinescoring\", scoring_script=\"score.py\"\n", "    ),\n", "    instance_count=1,\n", "    resources=ResourceRequirementsSettings(\n", "        requests=ResourceSettings(\n", "            cpu=\"100m\",\n", "            memory=\"0.5Gi\",\n", "        ),\n", "    ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_deployments.begin_create_or_update(\n", "    deployment=blue_deployment, local=True\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Verify the local deployment succeeded\n", "\n", "## 3.1 Check the status to see whether the model was deployed without error" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_endpoints.get(name=local_endpoint_name, local=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.2 Get logs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_deployments.get_logs(\n", "    name=\"blue\", endpoint_name=local_endpoint_name, local=True, lines=50\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.3 Invoke the local endpoint\n", "Invoke the endpoint to score the model by using the convenience command `invoke` and passing request data that is stored in a JSON file." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_endpoints.invoke(\n", "    endpoint_name=local_endpoint_name,\n", "    request_file=\"../model-1/sample-request.json\",\n", "    local=True,\n", ")" ] },
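{ "cell_type": "markdown", "metadata": {}, "source": [ "As an alternative to `invoke`, you can send a request to the local endpoint's `scoring_uri` with any HTTP client. The following is a minimal sketch, assuming the local deployment above is running (recall that local endpoints don't enforce authentication)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: score the local endpoint over raw HTTP.\n", "import urllib.request\n", "\n", "local_ep = ml_client.online_endpoints.get(name=local_endpoint_name, local=True)\n", "\n", "with open(\"../model-1/sample-request.json\", \"rb\") as f:\n", "    body = f.read()\n", "\n", "req = urllib.request.Request(\n", "    local_ep.scoring_uri,\n", "    data=body,\n", "    headers={\"Content-Type\": \"application/json\"},\n", ")\n", "print(urllib.request.urlopen(req).read().decode(\"utf-8\"))" ] },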
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Configure Kubernetes cluster for machine learning\n", "Next, configure an Azure Kubernetes Service (AKS) or Azure Arc-enabled Kubernetes cluster for machine learning inference workloads.\n", "There are some prerequisites for the steps below; you can check them [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes).\n", "\n", "## 4.1 Connect an existing Kubernetes cluster to Azure Arc\n", "This step is optional for an [AKS cluster](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough).\n", "Follow this [guidance](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster) to connect your Kubernetes cluster to Azure Arc.\n", "\n", "## 4.2 Deploy Azure Machine Learning extension\n", "Depending on your network setup, Kubernetes distribution variant, and where your Kubernetes cluster is hosted (on-premises or in the cloud), choose one of the options to deploy the Azure Machine Learning extension and enable inferencing workloads on your Kubernetes cluster.\n", "Follow this [guidance](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes?tabs=studio#inferencing).\n", "\n", "## 4.3 Attach Arc Cluster\n", "You can use the studio, the Python SDK, or the CLI to attach a Kubernetes cluster to your Machine Learning workspace.\n", "The code below attaches an AKS cluster, whose resource type is `managedClusters`. For an Arc-connected cluster, it should be `connectedClusters`.\n", "Follow this [guidance](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-attach-arc-kubernetes?tabs=studio#attach-arc-cluster) for more details." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml import load_compute\n", "\n", "compute_name = \"<COMPUTE_NAME>\"\n", "\n", "# for an Arc-connected cluster, the resource_id should be something like '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/connectedClusters/<CLUSTER_NAME>'\n", "compute_params = [\n", "    {\"name\": compute_name},\n", "    {\"type\": \"kubernetes\"},\n", "    {\n", "        \"resource_id\": \"/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>\"\n", "    },\n", "]\n", "k8s_compute = load_compute(source=None, params_override=compute_params)\n", "\n", "# check whether a compute with this name already exists before creating it\n", "compute_list = {c.name: c.type for c in ml_client.compute.list()}\n", "\n", "if compute_name not in compute_list or compute_list[compute_name] != \"kubernetes\":\n", "    ml_client.begin_create_or_update(k8s_compute).result()\n", "else:\n", "    print(\"Compute already exists\")" ] },
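{ "cell_type": "markdown", "metadata": {}, "source": [ "Equivalently, you can construct the compute entity directly instead of loading it with `params_override`. The following is a sketch, assuming the `KubernetesCompute` entity from `azure.ai.ml.entities` accepts these arguments; the placeholders mirror the cell above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Equivalent sketch: build the compute entity directly.\n", "from azure.ai.ml.entities import KubernetesCompute\n", "\n", "k8s_compute = KubernetesCompute(\n", "    name=compute_name,\n", "    resource_id=\"/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>\",\n", ")" ] },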
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Creating a unique endpoint name with current datetime to avoid conflicts\n", "import datetime\n", "\n", "online_endpoint_name = \"k8s-endpoint-\" + datetime.datetime.now().strftime(\"%m%d%H%M%f\")\n", "\n", "# create an online endpoint\n", "endpoint = KubernetesOnlineEndpoint(\n", " name=online_endpoint_name,\n", " compute=\"<COMPUTE_NAME>\",\n", " description=\"this is a sample online endpoint\",\n", " auth_mode=\"key\",\n", " tags={\"foo\": \"bar\"},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.2 Create the endpoint\n", "\n", "Using the `MLClient` created earlier, we will now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.begin_create_or_update(endpoint).result()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.3 Configure online deployment\n", "\n", "A deployment is a set of resources required for hosting the model that does the actual inferencing. We will create a deployment for our endpoint using the `KubernetesOnlineDeployment` class." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = Model(path=\"../model-1/model/sklearn_regression_model.pkl\")\n", "env = Environment(\n", " conda_file=\"../model-1/environment/conda.yaml\",\n", " image=\"mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04\",\n", ")\n", "\n", "blue_deployment = KubernetesOnlineDeployment(\n", " name=\"blue\",\n", " endpoint_name=online_endpoint_name,\n", " model=model,\n", " environment=env,\n", " code_configuration=CodeConfiguration(\n", " code=\"../model-1/onlinescoring\", scoring_script=\"score.py\"\n", " ),\n", " instance_count=1,\n", " resources=ResourceRequirementsSettings(\n", " requests=ResourceSettings(\n", " cpu=\"100m\",\n", " memory=\"0.5Gi\",\n", " ),\n", " ),\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.4 Create the deployment\n", "\n", "Using the `MLClient` created earlier, we will now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.begin_create_or_update(blue_deployment).result()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# blue deployment takes 100 traffic\n", "endpoint.traffic = {\"blue\": 100}\n", "ml_client.begin_create_or_update(endpoint).result()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 6. Test the endpoint with sample data\n", "Using the `MLClient` created earlier, we will get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:\n", "- `endpoint_name` - Name of the endpoint\n", "- `request_file` - File with request data\n", "- `deployment_name` - Name of the specific deployment to test in an endpoint\n", "\n", "We will send a sample request using a [json](./model-1/sample-request.json) file. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# test the blue deployment with some sample data\n", "# comment this out as cluster under dev subscription can't be accessed from public internet.\n", "# ml_client.online_endpoints.invoke(\n", "# endpoint_name=online_endpoint_name,\n", "# deployment_name='blue',\n", "# request_file='../model-1/sample-request.json')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 7. Managing endpoints and deployments\n", "\n", "## 7.1 Get details of the endpoint" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the details for online endpoint\n", "endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)\n", "\n", "# existing traffic details\n", "print(endpoint.traffic)\n", "\n", "# Get the scoring URI\n", "print(endpoint.scoring_uri)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7.2 Get the logs for the new deployment\n", "Get the logs for the green deployment and verify as needed" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_deployments.get_logs(\n", " name=\"blue\", endpoint_name=online_endpoint_name, lines=50\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 8. Delete the endpoint\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client.online_endpoints.begin_delete(name=online_endpoint_name)" ] } ], "metadata": { "description": { "description": "Use an online endpoint to deploy your model, so you don't have to create and manage the underlying infrastructure" }, "interpreter": { "hash": "4b2636843c93e81a716550cfb0ebc30193495b504987b26827cfbf05b43a1104" }, "kernelspec": { "display_name": "Python 3.10 - SDK V2", "language": "python", "name": "python310-sdkv2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.12" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }