{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AutoML: Train \"the best\" Regression model for the Hardware dataset.\n",
"\n",
"**Requirements** - In order to benefit from this tutorial, you will need:\n",
"- A basic understanding of Machine Learning\n",
"- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n",
"- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n",
"- A python environment\n",
"- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n",
"\n",
"**Learning Objectives** - By the end of this tutorial, you should be able to:\n",
"- Connect to your AML workspace from the Python SDK\n",
"- Create an `AutoML regression Job` with the 'regression()' factory-function.\n",
"- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML regression training job\n",
"- Obtaing the model and score predictions with it\n",
"\n",
"**Motivations** - This notebook explains how to setup and run an AutoML regression job. This is one of the nine ML-tasks supported by AutoML. Other ML-tasks are 'forecasting', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n",
"\n",
"In this notebook, we go over how you can use AutoML for training a Regression model. We will use the Hardware Performance dataset to train and deploy the model to use in inference scenarios. The Regression goal is to predict the performance of certain combinations of hardware parts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Connect to Azure Machine Learning Workspace\n",
"\n",
"The [workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-workspace) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section we will connect to the workspace in which the job will be run.\n",
"\n",
"## 1.1. Import the required libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1634852261599
}
},
"outputs": [],
"source": [
"# Import required libraries\n",
"from azure.identity import DefaultAzureCredential\n",
"from azure.ai.ml import MLClient\n",
"\n",
"from azure.ai.ml.constants import AssetTypes\n",
"from azure.ai.ml import automl\n",
"from azure.ai.ml import Input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.2. Configure workspace details and get a handle to the workspace\n",
"\n",
"To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We will use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. We use the [default azure authentication](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) for this tutorial. Check the [configuration notebook](../../configuration.ipynb) for more details on how to configure credentials and connect to a workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1634852261884
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"credential = DefaultAzureCredential()\n",
"ml_client = None\n",
"try:\n",
" ml_client = MLClient.from_config(credential)\n",
"except Exception as ex:\n",
" print(ex)\n",
" # Enter details of your AML workspace\n",
" subscription_id = \"<SUBSCRIPTION_ID>\"\n",
" resource_group = \"<RESOURCE_GROUP>\"\n",
" workspace = \"<AML_WORKSPACE_NAME>\"\n",
" ml_client = MLClient(credential, subscription_id, resource_group, workspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Show Azure ML Workspace information"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"workspace = ml_client.workspaces.get(name=ml_client.workspace_name)\n",
"\n",
"subscription_id = ml_client.subscription_id\n",
"resource_group = workspace.resource_group\n",
"workspace_name = ml_client.workspace_name\n",
"\n",
"output = {}\n",
"output[\"Workspace\"] = workspace_name\n",
"output[\"Subscription ID\"] = subscription_id\n",
"output[\"Resource Group\"] = resource_group\n",
"output[\"Location\"] = workspace.location\n",
"output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. MLTable with input Training Data\n",
"\n",
"## 2.1. Create MLTable data input\n",
"Please make use of the MLTable files present within the data folder at the same location (in the repo) as this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Training MLTable defined locally, with local data to be uploaded\n",
"my_training_data_input = Input(\n",
" type=AssetTypes.MLTABLE, path=\"./data/training-mltable-folder\"\n",
")\n",
"\n",
"# WITH REMOTE PATH If available already in the cloud/workspace-blob-store\n",
"# my_training_data_input = Input(type=AssetTypes.MLTABLE, path=\"azureml://datastores/workspaceblobstore/paths/my-regression-mltable\")"
]
},
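{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what the training data input points to, you can print the MLTable definition. This is an optional sketch; it assumes the definition file inside the data folder is named `MLTable`, which is the standard convention."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect the MLTable definition used as the training data input.\n",
"# Assumes the definition file is named 'MLTable' inside the data folder (standard convention).\n",
"with open(\"./data/training-mltable-folder/MLTable\") as mltable_file:\n",
"    print(mltable_file.read())"
]
},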
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4. Configure and run the AutoML Regression training job\n",
"In this section we will configure and run the AutoML regression job.\n",
"\n",
"## 4.1 Configure the job through the regression() factory function\n",
"\n",
"### regression() function parameters:\n",
"\n",
"The `regression()` factory function allows user to configure AutoML for the regression task for the most common scenarios with the following properties.\n",
"\n",
"- `target_column_name` - The name of the column to target for predictions. It must always be specified. This parameter is applicable to 'training_data', 'validation_data' and 'test_data'.\n",
"- `primary_metric` - The metric that AutoML will optimize for model selection.\n",
"- `training_data` - The data to be used for training. It should contain both training feature columns and a target column. Optionally, this data can be split for segregating a validation or test dataset. \n",
"You can use a registered MLTable in the workspace using the format '<mltable_name>:<version>' OR you can use a local file or folder as a MLTable. For e.g Input(mltable='my_mltable:1') OR Input(mltable=MLTable(local_path=\"./data\"))\n",
"The parameter 'training_data' must always be provided.\n",
"- `compute` - The compute on which the AutoML job will run. In this example we are using serverless compute. You can alternatively use a compute cluster as well. \n",
"- `name` - The name of the Job/Run. This is an optional property. If not specified, a random name will be generated.\n",
"- `experiment_name` - The name of the Experiment. An Experiment is like a folder with multiple runs in Azure ML Workspace that should be related to the same logical machine learning experiment.\n",
"\n",
"### set_limits() function parameters:\n",
"This is an optional configuration method to configure limits parameters such as timeouts. \n",
" \n",
"- `timeout_minutes` - Maximum amount of time in minutes that the whole AutoML job can take before the job terminates. This timeout includes setup, featurization and training runs but does not include the ensembling and model explainability runs at the end of the process since those actions need to happen once all the trials (children jobs) are done. If not specified, the default job's total timeout is 6 days (8,640 minutes). To specify a timeout less than or equal to 1 hour (60 minutes), make sure your dataset's size is not greater than 10,000,000 (rows times column) or an error results.\n",
"\n",
"- `trial_timeout_minutes` - Maximum time in minutes that each trial (child job) can run for before it terminates. If not specified, a value of 1 month or 43200 minutes is used.\n",
" \n",
"- `max_trials` - The maximum number of trials/runs each with a different combination of algorithm and hyperparameters to try during an AutoML job. If not specified, the default is 1000 trials. If using 'enable_early_termination' the number of trials used can be smaller.\n",
" \n",
"- `max_concurrent_trials` - Represents the maximum number of trials (children jobs) that would be executed in parallel. It's a good practice to match this number with the number of nodes your cluster.\n",
" \n",
"- `enable_early_termination` - Whether to enable early termination if the score is not improving in the short term. \n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"# general job parameters\n",
"max_trials = 5\n",
"exp_name = \"dpv2-regression-experiment\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1634852262026
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"name": "regression-configuration",
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"# Create the AutoML regression job with the related factory-function.\n",
"\n",
"regression_job = automl.regression(\n",
" experiment_name=exp_name,\n",
" training_data=my_training_data_input,\n",
" target_column_name=\"ERP\",\n",
" primary_metric=\"R2Score\",\n",
" n_cross_validations=5,\n",
" enable_model_explainability=True,\n",
" tags={\"my_custom_tag\": \"My custom value\"},\n",
")\n",
"\n",
"# Limits are all optional\n",
"regression_job.set_limits(\n",
" timeout_minutes=600,\n",
" trial_timeout_minutes=20,\n",
" max_trials=max_trials,\n",
" # max_concurrent_trials = 4,\n",
" # max_cores_per_trial: -1,\n",
" enable_early_termination=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Configure custom featurization\n",
"# You can skip this cell to run AutoML using automatic featurization\n",
"from azure.ai.ml.automl import ColumnTransformer\n",
"\n",
"transformer_params = {\n",
" \"imputer\": [\n",
" ColumnTransformer(fields=[\"CACH\"], parameters={\"strategy\": \"most_frequent\"}),\n",
" ColumnTransformer(fields=[\"PRP\"], parameters={\"strategy\": \"most_frequent\"}),\n",
" ],\n",
"}\n",
"regression_job.set_featurization(\n",
" mode=\"custom\",\n",
" transformer_params=transformer_params,\n",
" blocked_transformers=[\"LabelEncoder\"],\n",
" column_name_and_types={\"CHMIN\": \"Categorical\"},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.2 Run the Command\n",
"Using the `MLClient` created earlier, we will now run this Command in the workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"gather": {
"logged": 1634852267930
},
"jupyter": {
"outputs_hidden": false,
"source_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
},
"outputs": [],
"source": [
"# Submit the AutoML job\n",
"returned_job = ml_client.jobs.create_or_update(\n",
" regression_job\n",
") # submit the job to the backend\n",
"\n",
"print(f\"Created job: {returned_job}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Wait until the AutoML job is finished\n",
"ml_client.jobs.stream(returned_job.name) waits until the specified job is finished"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Wait for job to complete and stream updates\n",
"ml_client.jobs.stream(returned_job.name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get a URL for the status of the job\n",
"returned_job.services[\"Studio\"].endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(returned_job.name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 5. Retrieve the Best Trial (Best Model's trial/run)\n",
"Use the MLFLowClient to access the results (such as Models, Artifacts, Metrics) of a previously completed AutoML Trial."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize MLFlow Client\n",
"The models and artifacts that are produced by AutoML can be accessed via the MLFlow interface. \n",
"Initialize the MLFlow client here, and set the backend as Azure ML, via. the MLFlow Client.\n",
"\n",
"*IMPORTANT*, you need to have installed the latest MLFlow packages with:\n",
"\n",
" pip install azureml-mlflow\n",
"\n",
" pip install mlflow"
]
},
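{
"cell_type": "markdown",
"metadata": {},
"source": [
"If these packages are not installed in your environment yet, you can install them from a notebook cell as sketched below (uncomment the line and restart the kernel after installing)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: install the MLflow packages used below, then restart the kernel.\n",
"# %pip install azureml-mlflow mlflow"
]
},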
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Obtain the tracking URI for MLFlow"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import mlflow\n",
"\n",
"# Obtain the tracking URL from MLClient\n",
"MLFLOW_TRACKING_URI = ml_client.workspaces.get(\n",
" name=ml_client.workspace_name\n",
").mlflow_tracking_uri\n",
"\n",
"print(MLFLOW_TRACKING_URI)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Set the MLFLOW TRACKING URI\n",
"\n",
"mlflow.set_tracking_uri(MLFLOW_TRACKING_URI)\n",
"\n",
"print(\"\\nCurrent tracking uri: {}\".format(mlflow.get_tracking_uri()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from mlflow.tracking.client import MlflowClient\n",
"from mlflow.artifacts import download_artifacts\n",
"\n",
"# Initialize MLFlow client\n",
"mlflow_client = MlflowClient()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get the AutoML parent Job"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"job_name = returned_job.name\n",
"\n",
"# Example if providing an specific Job name/ID\n",
"# job_name = \"b4e95546-0aa1-448e-9ad6-002e3207b4fc\"\n",
"\n",
"# Get the parent run\n",
"mlflow_parent_run = mlflow_client.get_run(job_name)\n",
"\n",
"print(\"Parent Run: \")\n",
"print(mlflow_parent_run)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print parent run tags. 'automl_best_child_run_id' tag should be there.\n",
"print(mlflow_parent_run.data.tags)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get the AutoML best child run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the best model's child run\n",
"\n",
"best_child_run_id = mlflow_parent_run.data.tags[\"automl_best_child_run_id\"]\n",
"print(\"Found best child run id: \", best_child_run_id)\n",
"\n",
"best_run = mlflow_client.get_run(best_child_run_id)\n",
"\n",
"print(\"Best child run: \")\n",
"print(best_run)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get best model run's metrics\n",
"\n",
"Access the results (such as Models, Artifacts, Metrics) of a previously completed AutoML Run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_run.data.metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the best model locally\n",
"\n",
"Access the results (such as Models, Artifacts, Metrics) of a previously completed AutoML Run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Create local folder\n",
"local_dir = \"./artifact_downloads\"\n",
"if not os.path.exists(local_dir):\n",
" os.mkdir(local_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Download run's artifacts/outputs\n",
"local_path = download_artifacts(\n",
" run_id=best_run.info.run_id, artifact_path=\"outputs\", dst_path=local_dir\n",
")\n",
"print(\"Artifacts downloaded in: {}\".format(local_path))\n",
"print(\"Artifacts: {}\".format(os.listdir(local_path)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the contents of the MLFlow model folder\n",
"os.listdir(\"./artifact_downloads/outputs/mlflow-model\")"
]
},
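{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `conda.yaml` file in the MLflow model folder lists the dependencies the model expects at inference time. The cell below is a small sketch that simply prints it, assuming the standard MLflow model layout shown in the listing above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print the dependency list that the MLflow model expects (standard MLflow model layout).\n",
"with open(\"./artifact_downloads/outputs/mlflow-model/conda.yaml\") as conda_file:\n",
"    print(conda_file.read())"
]
},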
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 6. Register Best Model and Deploy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.1 Create managed online endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import required libraries\n",
"from azure.ai.ml.entities import (\n",
" ManagedOnlineEndpoint,\n",
" ManagedOnlineDeployment,\n",
" Model,\n",
" Environment,\n",
" CodeConfiguration,\n",
" ProbeSettings,\n",
")\n",
"from azure.ai.ml.constants import ModelType"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Creating a unique endpoint name with current datetime to avoid conflicts\n",
"import datetime\n",
"\n",
"online_endpoint_name = \"regression-\" + datetime.datetime.now().strftime(\"%m%d%H%M%f\")\n",
"\n",
"# create an online endpoint\n",
"endpoint = ManagedOnlineEndpoint(\n",
" name=online_endpoint_name,\n",
" description=\"this is a sample online endpoint for mlflow model\",\n",
" auth_mode=\"key\",\n",
" tags={\"foo\": \"bar\"},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_client.begin_create_or_update(endpoint).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.2 Register best model and deploy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Register model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"hardware-performance-model\"\n",
"model = Model(\n",
" path=f\"azureml://jobs/{best_run.info.run_id}/outputs/artifacts/outputs/mlflow-model/\",\n",
" name=model_name,\n",
" description=\"my sample regression model\",\n",
" type=AssetTypes.MLFLOW_MODEL,\n",
")\n",
"\n",
"# for downloaded file\n",
"# model = Model(path=\"artifact_downloads/outputs/model.pkl\", name=model_name)\n",
"\n",
"registered_model = ml_client.models.create_or_update(model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"registered_model.id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deploy"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"deployment = ManagedOnlineDeployment(\n",
" name=\"hardware-performance-deploy\",\n",
" endpoint_name=online_endpoint_name,\n",
" model=registered_model.id,\n",
" instance_type=\"Standard_DS3_V2\",\n",
" instance_count=1,\n",
" liveness_probe=ProbeSettings(\n",
" failure_threshold=30,\n",
" success_threshold=1,\n",
" timeout=2,\n",
" period=10,\n",
" initial_delay=2000,\n",
" ),\n",
" readiness_probe=ProbeSettings(\n",
" failure_threshold=10,\n",
" success_threshold=1,\n",
" timeout=10,\n",
" period=10,\n",
" initial_delay=2000,\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_client.online_deployments.begin_create_or_update(deployment).result()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# hardware performance deployment to take 100% traffic\n",
"endpoint.traffic = {\"hardware-performance-deploy\": 100}\n",
"ml_client.begin_create_or_update(endpoint).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Test the deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# test the blue deployment with some sample data\n",
"import pandas as pd\n",
"\n",
"test_data = pd.read_csv(\"./data/training-mltable-folder/training-machine-dataset.csv\")\n",
"\n",
"test_data = test_data.drop(\"ERP\", axis=1)\n",
"\n",
"test_data_json = test_data.to_json(orient=\"records\", indent=4)\n",
"data = (\n",
" '{ \\\n",
" \"input_data\": {\"data\": '\n",
" + test_data_json\n",
" + \"}}\"\n",
")\n",
"\n",
"request_file_name = \"sample-request-hardware-performance.json\"\n",
"\n",
"with open(request_file_name, \"w\") as request_file:\n",
" request_file.write(data)\n",
"\n",
"# test the blue deployment with some sample data\n",
"ml_client.online_endpoints.invoke(\n",
" endpoint_name=online_endpoint_name,\n",
" deployment_name=\"hardware-performance-deploy\",\n",
" request_file=\"sample-request-hardware-performance.json\",\n",
")"
]
},
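{
"cell_type": "markdown",
"metadata": {},
"source": [
"Equivalently, the request payload can be built with the `json` module instead of string concatenation. The sketch below produces the same input_data/data structure and reuses `test_data` and `request_file_name` from the cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Alternative sketch: build the same request payload with the json module\n",
"# instead of string concatenation. Reuses test_data and request_file_name\n",
"# from the previous cell.\n",
"import json\n",
"\n",
"request_payload = {\n",
"    \"input_data\": {\"data\": json.loads(test_data.to_json(orient=\"records\"))}\n",
"}\n",
"\n",
"with open(request_file_name, \"w\") as request_file:\n",
"    json.dump(request_payload, request_file, indent=4)"
]
},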
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# wait and delete endpoint\n",
"import time\n",
"\n",
"time.sleep(60)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get endpoint details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Get the details for online endpoint\n",
"endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)\n",
"\n",
"# existing traffic details\n",
"print(endpoint.traffic)\n",
"\n",
"# Get the scoring URI\n",
"print(endpoint.scoring_uri)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Delete the deployment and endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ml_client.online_endpoints.begin_delete(name=online_endpoint_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next Step: Load the best model and try predictions\n",
"\n",
"Loading the models locally assume that you are running the notebook in an environment compatible with the model. The list of dependencies that is expected by the model is specified in the MLFlow model produced by AutoML (in the 'conda.yaml' file within the mlflow-model folder).\n",
"\n",
"Since the AutoML model was trained remotelly in a different environment with different dependencies to your current local conda environment where you are running this notebook, if you want to load the model you have several options:\n",
"\n",
"1. A recommended way to locally load the model in memory and try predictions is to create a new/clean conda environment with the dependencies specified in the conda.yaml file within the MLFlow model's folder, then use MLFlow to load the model and call .predict() as explained in the notebook **mlflow-model-local-inference-test.ipynb** in this same folder.\n",
"\n",
"2. You can install all the packages/dependencies specified in conda.yaml into your current conda environment you used for using Azure ML SDK and AutoML. MLflow SDK also have a method to install the dependencies in the current environment. However, this option could have risks of package version conflicts depending on what's installed in your current environment.\n",
"\n",
"3. You can also use: mlflow models serve -m 'xxxxxxx'"
]
},
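{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch of trying local predictions. It assumes that the environment you run it in already satisfies the dependencies listed in conda.yaml and that the artifacts were downloaded to `./artifact_downloads` earlier in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: load the downloaded MLflow model and score a few rows locally.\n",
"# Assumes this environment satisfies the dependencies in the model's conda.yaml\n",
"# and that the artifacts were downloaded to ./artifact_downloads earlier.\n",
"import mlflow.pyfunc\n",
"import pandas as pd\n",
"\n",
"local_model = mlflow.pyfunc.load_model(\"./artifact_downloads/outputs/mlflow-model\")\n",
"\n",
"sample_data = pd.read_csv(\"./data/training-mltable-folder/training-machine-dataset.csv\")\n",
"sample_data = sample_data.drop(\"ERP\", axis=1).head(5)\n",
"\n",
"print(local_model.predict(sample_data))"
]
},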
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next Steps\n",
"You can see further examples of other AutoML tasks such as Image-Classification, Image-Object-Detection, NLP-Text-Classification, Time-Series-Forcasting, etc."
]
}
],
"metadata": {
"kernel_info": {
"name": "python3-azureml"
},
"kernelspec": {
"display_name": "Python 3.10 - SDK V2",
"language": "python",
"name": "python310-sdkv2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.11"
},
"microsoft": {
"host": {
"AzureML": {
"notebookHasBeenCompleted": true
}
}
},
"nteract": {
"version": "nteract-front-end@1.0.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}