{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# How to operationalize a training pipeline with batch endpoints\n", "\n", "This example deploys a training pipeline that takes input training data (labeled) and produces a predictive model, along with the evaluation results and the transformations applied during preprocessing. The pipeline will use tabular data from the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) to train an XGBoost model. We use a data preprocessing component to preprocess the data before it is sent to the training component to fit and evaluate the model." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Connect to Azure Machine Learning Workspace\n", "\n", "The [workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run.\n", "\n", "### 1.1. Import the required libraries" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from azure.ai.ml import MLClient, Input\n", "from azure.ai.ml import load_component\n", "from azure.ai.ml.entities import (\n", " Data,\n", " BatchEndpoint,\n", " PipelineComponentBatchDeployment,\n", " AmlCompute,\n", " Environment,\n", ")\n", "from azure.ai.ml.constants import AssetTypes\n", "from azure.ai.ml.dsl import pipeline\n", "from azure.core.exceptions import ResourceExistsError\n", "from azure.identity import DefaultAzureCredential" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 1.2. Configure workspace details and get a handle to the workspace\n", "\n", "To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We will use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. We use the default [default azure authentication](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) for this tutorial. Check the [configuration notebook](../../jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "subscription_id = \"<SUBSCRIPTION_ID>\"\n", "resource_group = \"<RESOURCE_GROUP>\"\n", "workspace = \"<AML_WORKSPACE_NAME>\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client = MLClient(\n", " DefaultAzureCredential(), subscription_id, resource_group, workspace\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "If you are working in a Azure Machine Learning compute, you can simply:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ml_client = MLClient.from_config(DefaultAzureCredential())" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## 2. 
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_environment" }, "outputs": [], "source": [ "environment = Environment(\n", "    name=\"xgboost-sklearn-py38\",\n", "    description=\"An environment for models built with XGBoost and Scikit-learn.\",\n", "    image=\"mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest\",\n", "    conda_file=\"environment/conda.yml\",\n", ")" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "create_environment" }, "outputs": [], "source": [ "try:\n", "    ml_client.environments.create_or_update(environment)\n", "except ResourceExistsError:\n", "    pass" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2 Create a compute cluster" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "create_compute" }, "outputs": [], "source": [ "compute_name = \"batch-cluster\"\n", "if not any(filter(lambda m: m.name == compute_name, ml_client.compute.list())):\n", "    compute_cluster = AmlCompute(\n", "        name=compute_name,\n", "        description=\"Batch endpoints compute cluster\",\n", "        min_instances=0,\n", "        max_instances=5,\n", "    )\n", "    ml_client.begin_create_or_update(compute_cluster).result()" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2.3 Register the training data as a data asset" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_data_asset" }, "outputs": [], "source": [ "data_path = \"data/train\"\n", "dataset_name = \"heart-dataset-train\"\n", "\n", "heart_dataset_train = Data(\n", "    path=data_path,\n", "    type=AssetTypes.URI_FOLDER,\n", "    description=\"A training dataset for heart classification\",\n", "    name=dataset_name,\n", ")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Create the data asset:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "create_data_asset" }, "outputs": [], "source": [ "ml_client.data.create_or_update(heart_dataset_train)" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Let's get a reference to the new data asset:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "get_data_asset" }, "outputs": [], "source": [ "heart_dataset_train = ml_client.data.get(name=dataset_name, label=\"latest\")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2.4 Create the pipeline\n", "\n", "The pipeline we want to operationalize takes one input, the training data, and produces three outputs: the trained model, the evaluation results, and the data transformations applied during preprocessing. It is composed of two components:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "load_component" }, "outputs": [], "source": [ "prepare_data = load_component(source=\"components/prepare/prepare.yml\")\n", "train_xgb = load_component(source=\"components/train_xgb/train_xgb.yml\")" ] },
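{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The component YAML files are not reproduced in this notebook. As a rough sketch of what `components/prepare/prepare.yml` might look like (the name, inputs, and outputs below are inferred from how the component is used later in this notebook; the actual file may differ):\n", "\n", "```yaml\n", "$schema: https://azuremlschemas.azureedge.net/latest/commandComponent.schema.json\n", "name: uci_heart_prepare\n", "display_name: Prepare UCI heart dataset\n", "type: command\n", "inputs:\n", "  data:\n", "    type: uri_folder\n", "  categorical_encoding:\n", "    type: string\n", "    default: ordinal\n", "    optional: true\n", "  transformations:\n", "    type: uri_folder\n", "    optional: true\n", "outputs:\n", "  prepared_data:\n", "    type: uri_folder\n", "  transformations_output:\n", "    type: uri_folder\n", "code: .\n", "environment: azureml:xgboost-sklearn-py38@latest\n", "command: >-\n", "  python prepare.py\n", "  --data ${{inputs.data}}\n", "  $[[--categorical_encoding ${{inputs.categorical_encoding}}]]\n", "  $[[--transformations ${{inputs.transformations}}]]\n", "  --prepared_data ${{outputs.prepared_data}}\n", "  --transformations_output ${{outputs.transformations_output}}\n", "```" ] },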
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Construct the pipeline:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_pipeline" }, "outputs": [], "source": [ "@pipeline()\n", "def uci_heart_classifier_trainer(input_data: Input(type=AssetTypes.URI_FOLDER)):\n", "    prepared_data = prepare_data(data=input_data)\n", "    trained_model = train_xgb(\n", "        data=prepared_data.outputs.prepared_data,\n", "        target_column=\"target\",\n", "        register_best_model=False,\n", "        eval_size=0.3,\n", "    )\n", "\n", "    return {\n", "        \"model\": trained_model.outputs.model,\n", "        \"evaluation_results\": trained_model.outputs.evaluation_results,\n", "        \"transformations_output\": prepared_data.outputs.transformations_output,\n", "    }" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "> In this pipeline, the `transformations` input is not provided; therefore, the script will learn the transformation parameters from the input data." ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 2.5 Test the pipeline\n", "\n", "Let's test the pipeline with some sample data. To do that, we'll create a job using the pipeline and the `batch-cluster` compute cluster created previously." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_pipeline_job" }, "outputs": [], "source": [ "pipeline_job = uci_heart_classifier_trainer(\n", "    Input(type=\"uri_folder\", path=heart_dataset_train.id)\n", ")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now, we'll configure some run settings for the test:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_pipeline_job_defaults" }, "outputs": [], "source": [ "pipeline_job.settings.default_datastore = \"workspaceblobstore\"\n", "pipeline_job.settings.default_compute = \"batch-cluster\"" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Create the test job:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "test_pipeline" }, "outputs": [], "source": [ "pipeline_job_run = ml_client.jobs.create_or_update(\n", "    pipeline_job, experiment_name=\"uci-heart-train-pipeline\"\n", ")\n", "pipeline_job_run" ] },
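{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Optionally, you can wait for the test job to finish and stream its logs while it runs, using the same `ml_client.jobs.stream` call used later in this notebook:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Blocks until the test pipeline job completes, streaming its logs\n", "ml_client.jobs.stream(name=pipeline_job_run.name)" ] },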
{ "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## 3. Create the batch endpoint\n", "\n", "Batch endpoints are endpoints used to run batch inferencing on large volumes of data over a period of time. Batch endpoints receive pointers to data and run jobs asynchronously to process the data in parallel on compute clusters. Batch endpoints store outputs to a data store for further analysis.\n", "\n", "### 3.1 Configure the endpoint\n", "\n", "First, let's create the endpoint that is going to host the batch deployments. To ensure that our endpoint name is unique, let's create a random suffix to append to it.\n", "\n", "> In general, you won't need to use this technique; you'll use more meaningful names instead. If that's your case, skip the cell below that appends the random suffix." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "name_endpoint" }, "outputs": [], "source": [ "endpoint_name = \"uci-classifier-train\"" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "import string\n", "\n", "# Creating a unique endpoint name by including a random suffix\n", "allowed_chars = string.ascii_lowercase + string.digits\n", "endpoint_suffix = \"\".join(random.choice(allowed_chars) for x in range(5))\n", "endpoint_name = f\"{endpoint_name}-{endpoint_suffix}\"\n", "\n", "print(f\"Endpoint name: {endpoint_name}\")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Let's configure the endpoint:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "configure_endpoint", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "endpoint = BatchEndpoint(\n", "    name=endpoint_name,\n", "    description=\"An endpoint to perform training of the Heart Disease Data Set prediction task\",\n", ")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 Create the endpoint\n", "Using the `MLClient` created earlier, we will now create the endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "create_endpoint", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "ml_client.batch_endpoints.begin_create_or_update(endpoint).result()" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "You can see the endpoint as follows:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "query_endpoint" }, "outputs": [], "source": [ "endpoint = ml_client.batch_endpoints.get(name=endpoint_name)\n", "print(endpoint)" ] },
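{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "The returned `BatchEndpoint` object also exposes read-only properties such as the provisioning state and the scoring URI. As a quick sanity check (a sketch; the exact properties available may vary by SDK version):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect a couple of read-only properties of the endpoint\n", "print(f\"Provisioning state: {endpoint.provisioning_state}\")\n", "print(f\"Scoring URI: {endpoint.scoring_uri}\")" ] },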
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "build_pipeline_component", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "pipeline_component = ml_client.components.create_or_update(\n", " uci_heart_classifier_trainer().component\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 4.2 Configuring the deployment" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "configure_deployment", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "deployment = PipelineComponentBatchDeployment(\n", " name=\"uci-classifier-train-xgb\",\n", " description=\"A sample deployment that trains an XGBoost model for the UCI dataset.\",\n", " endpoint_name=endpoint.name,\n", " component=pipeline_component,\n", " settings={\"continue_on_step_failure\": False, \"default_compute\": compute_name},\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 4.3 Create the deployment\n", "Using the `MLClient` created earlier, we will now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "create_deployment", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "ml_client.batch_deployments.begin_create_or_update(deployment).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Once created, let's configure this new deployment as the default one:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "update_default_deployment" }, "outputs": [], "source": [ "endpoint = ml_client.batch_endpoints.get(endpoint_name)\n", "endpoint.defaults.deployment_name = deployment.name\n", "ml_client.batch_endpoints.begin_create_or_update(endpoint).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can see the endpoint URL as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"The default deployment is {endpoint.defaults.deployment_name}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### 4.6 Testing the deployment\n", "\n", "Once the deployment is created, it is ready to recieve jobs." 
] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.6.1 Run a batch job " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Define the input data asset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_inputs" }, "outputs": [], "source": [ "input_data = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_train.id)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "You can invoke the default deployment as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "invoke_deployment", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "job = ml_client.batch_endpoints.invoke(\n", " endpoint_name=endpoint.name, inputs={\"input_data\": input_data}\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "#### 4.6.2 Get the details of the invoked job\n", "\n", "Let us get details and logs of the invoked job:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "name": "get_job", "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "ml_client.jobs.get(job.name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can wait for the job to finish using the following code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "stream_job_logs" }, "outputs": [], "source": [ "ml_client.jobs.stream(name=job.name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.6.3 Access job outputs\n", "\n", "Once the job is completed, we can access some of its outputs. This pipeline produces the following outputs for its components:\n", "- `preprocess job`: output is `transformations_output`\n", "- `train job`: outputs are `model` and `evaluation_results`" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "We can download the outputs as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "download_outputs" }, "outputs": [], "source": [ "ml_client.jobs.download(\n", " name=job.name, download_path=\".\", output_name=\"transformations_output\"\n", ")\n", "ml_client.jobs.download(name=job.name, download_path=\".\", output_name=\"model\")\n", "ml_client.jobs.download(\n", " name=job.name, download_path=\".\", output_name=\"evaluation_results\"\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "In pipelines, each step is executed as child job. You can use `parent_job_name` to find all the child jobs from a given job:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "get_child_jobs" }, "outputs": [], "source": [ "pipeline_job_steps = {\n", " step.properties[\"azureml.moduleName\"]: step\n", " for step in ml_client.jobs.list(parent_job_name=job.name)\n", "}" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "This dictonary contains the module name as key, and the job as values. 
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "In pipelines, each step is executed as a child job. You can use `parent_job_name` to find all the child jobs of a given job:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "get_child_jobs" }, "outputs": [], "source": [ "pipeline_job_steps = {\n", "    step.properties[\"azureml.moduleName\"]: step\n", "    for step in ml_client.jobs.list(parent_job_name=job.name)\n", "}" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "This dictionary contains the module names as keys and the jobs as values, which makes them easier to work with:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "get_child_jobs_by_module" }, "outputs": [], "source": [ "preprocessing_job = pipeline_job_steps[\"uci_heart_prepare\"]\n", "train_job = pipeline_job_steps[\"uci_heart_train\"]" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Confirm the jobs' statuses using the following:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Preprocessing job: {preprocessing_job.status}\")\n", "print(f\"Training job: {train_job.status}\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can also access the outputs of each of those intermediate steps as we did for the pipeline job." ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Create a new deployment in the endpoint\n", "\n", "Endpoints can host multiple deployments at once, while keeping only one deployment as the default. Therefore, you can iterate over your different models, deploy and test them in your endpoint, and finally switch the default deployment to the one that works best for you.\n", "\n", "Let's change the way preprocessing is done in the pipeline to see if we get a model that performs better." ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.1 Change the pipeline configuration" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_nondefault_pipeline" }, "outputs": [], "source": [ "@pipeline()\n", "def uci_heart_classifier_onehot(input_data: Input(type=AssetTypes.URI_FOLDER)):\n", "    prepared_data = prepare_data(data=input_data, categorical_encoding=\"onehot\")\n", "    trained_model = train_xgb(\n", "        data=prepared_data.outputs.prepared_data,\n", "        target_column=\"target\",\n", "        register_best_model=False,\n", "        eval_size=0.3,\n", "    )\n", "\n", "    return {\n", "        \"model\": trained_model.outputs.model,\n", "        \"evaluation_results\": trained_model.outputs.evaluation_results,\n", "        \"transformations_output\": prepared_data.outputs.transformations_output,\n", "    }" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Build the pipeline component:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "build_nondefault_pipeline" }, "outputs": [], "source": [ "pipeline_component = uci_heart_classifier_onehot._pipeline_builder.build()" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.2 Configure a new deployment\n", "\n", "Now we can define the deployment:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "configure_nondefault_deployment" }, "outputs": [], "source": [ "deployment_onehot = PipelineComponentBatchDeployment(\n", "    name=\"uci-classifier-train-onehot\",\n", "    description=\"A sample deployment that trains an XGBoost model for the UCI dataset with one hot encoding of categorical variables.\",\n", "    endpoint_name=endpoint.name,\n", "    component=pipeline_component,\n", "    settings={\"continue_on_step_failure\": False, \"default_compute\": compute_name},\n", ")" ] },
{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.3 Create the deployment" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "name": "create_nondefault_deployment" }, "outputs": [], "source": [ "ml_client.batch_deployments.begin_create_or_update(deployment_onehot).result()" ] },
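{ "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "At this point, the endpoint hosts two deployments. You can list them to confirm (a quick sketch):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List all deployments currently hosted by the endpoint\n", "for deployment_item in ml_client.batch_deployments.list(endpoint_name=endpoint.name):\n", "    print(deployment_item.name)" ] },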
"attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.4 Test a non-default deployment" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "invoke_nondefault_deployment" }, "outputs": [], "source": [ "job = ml_client.batch_endpoints.invoke(\n", " endpoint_name=endpoint.name,\n", " deployment_name=deployment_onehot.name,\n", " inputs={\"input_data\": input_data},\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.5 Monitor the job" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "get_nondefault_job" }, "outputs": [], "source": [ "ml_client.jobs.get(name=job.name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "stream_nondefault_job_logs" }, "outputs": [], "source": [ "ml_client.jobs.stream(name=job.name)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.6 Configure the new deployment as the default one" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "endpoint = ml_client.batch_endpoints.get(endpoint.name)\n", "endpoint.defaults.deployment_name = deployment_onehot.name\n", "ml_client.batch_endpoints.begin_create_or_update(endpoint).result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### 5.7 Delete the old deployment" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"The old deployment name is: {deployment.name}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "delete_deployment" }, "outputs": [], "source": [ "ml_client.batch_deployments.begin_delete(\n", " name=deployment.name, endpoint_name=endpoint.name\n", ").result()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Clean up resources\n", "\n", "Clean-up the resources created. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "delete_endpoint" }, "outputs": [], "source": [ "ml_client.batch_endpoints.begin_delete(endpoint_name).result()" ] } ], "metadata": { "kernel_info": { "name": "amlv2" }, "kernelspec": { "display_name": "Python 3.10 - SDK v2", "language": "python", "name": "python310-sdkv2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.11" }, "nteract": { "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, "nbformat_minor": 1 }