training/built-in-algorithms/Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb (704 lines of code) (raw):

{ "cells": [ { "cell_type": "markdown", "id": "9140105a", "metadata": {}, "source": [ "# Tabular regression with Amazon SageMaker XGBoost and Scikit-learn Linear Learner algorithm" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n", "\n", "![This us-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-2/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "da569839", "metadata": {}, "source": [ "---\n", "This notebook demonstrates the use of Amazon SageMaker\u2019s implementation of the [XGBoost](https://xgboost.readthedocs.io/en/latest/) and [scikit-learn Linear Learner](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression) algorithm to train and host a tabular regression model. Tabular regression is the task of analyzing the relationship between predictor variables and a response variable in a structured or relational data.\n", "\n", "In this notebook, we demonstrate two use cases of tabular regression models:\n", "\n", "* How to train a tabular model on an example dataset to do regression.\n", "* How to use the trained tabular model to perform inference, i.e., predicting new samples.\n", "\n", "Note: This notebook was tested in Amazon SageMaker Studio on ml.t3.medium instance with Python 3 (Data Science) kernel.\n", "\n", "---" ] }, { "cell_type": "markdown", "id": "299305b0", "metadata": {}, "source": [ "1. [Set Up](#1.-Set-Up)\n", "2. [Train A Tabular Model on Abalone Dataset](#2.-Train-a-Tabular-Model-on-Abalone-Dataset)\n", " * [Retrieve Training Artifacts](#2.1.-Retrieve-Training-Artifacts)\n", " * [Set Training Parameters](#2.2.-Set-Training-Parameters)\n", " * [Train with Automatic Model Tuning](#2.3.-Train-with-Automatic-Model-Tuning) \n", " * [Start Training](#2.4.-Start-Training) \n", "3. [Deploy and Run Inference on the Trained Tabular Model](#3.-Deploy-and-Run-Inference-on-the-Trained-Tabular-Model)\n", "4. [Evaluate the Prediction Results Returned from the Endpoint](#4.-Evaluate-the-Prediction-Results-Returned-from-the-Endpoint)" ] }, { "cell_type": "markdown", "id": "5abb73c6", "metadata": {}, "source": [ "## 1. Set Up\n" ] }, { "cell_type": "markdown", "id": "acd0b206", "metadata": {}, "source": [ "---\n", "Before executing the notebook, there are some initial steps required for setup. This notebook requires latest version of sagemaker and ipywidgets.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "e38e8992", "metadata": {}, "outputs": [], "source": [ "!pip install sagemaker ipywidgets --upgrade --quiet" ] }, { "cell_type": "markdown", "id": "7b86c006", "metadata": {}, "source": [ "\n", "---\n", "To train and host on Amazon SageMaker, we need to setup and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has necessary permissions, including access to your data in S3.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "65ba69dc", "metadata": {}, "outputs": [], "source": [ "import sagemaker, boto3, json\n", "from sagemaker import get_execution_role\n", "\n", "aws_role = get_execution_role()\n", "aws_region = boto3.Session().region_name\n", "sess = sagemaker.Session()" ] }, { "cell_type": "markdown", "id": "b487add4", "metadata": {}, "source": [ "## 2. Train a Tabular Model on Abalone Dataset\n", "\n", "---\n", "In this demonstration, we will train a tabular algorithm on the [Abalone](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) dataset. The dataset contains examples of eight physical measurements such as length, diameter, and height to predict the age of abalone.\n", "Among the eight physical measurements (features), there are one categorical feature and seven numerical features. Abalone dataset is downloaded from [LIBSVM](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html).\n", "\n", "Below is the table of the first 5 examples in the Abalone dataset. Note. Since [XGBoost](https://xgboost.readthedocs.io/en/latest/) and [Linear Learner](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression) cannot take categorical features as input, the categorical feature with number of classes as three has been one-hot encoded into columns\n", "from ```Feature_1``` to ```Feature_3``` as shown below.\n", "\n", "| Target | Feature_0 | Feature_1 | Feature_2 | Feature_3 | Feature_4 | Feature_5 | Feature_6 | Feature_7 | Feature_8 | Feature_9 |\n", "|:------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|\n", "| 11 | 1.0 | 0.0 | 0.0 | 0.585 | 0.455 | 0.150 | 0.9870 | 0.4355 | 0.2075 | 0.3100 |\n", "| 5 | 0.0 | 0.0 | 1.0 | 0.325 | 0.245 | 0.075 | 0.1495 | 0.0605 | 0.0330 | 0.0450 |\n", "| 9 | 0.0 | 0.0 | 1.0 | 0.580 | 0.420 | 0.140 | 0.7010 | 0.3285 | 0.1020 | 0.2255 |\n", "| 12 | 0.0 | 1.0 | 0.0 | 0.480 | 0.380 | 0.145 | 0.5900 | 0.2320 | 0.1410 | 0.2300 |\n", "| 11 | 0.0 | 1.0 | 0.0 | 0.440 | 0.355 | 0.115 | 0.4150 | 0.1585 | 0.0925 | 0.1310 |\n", "\n", "If you want to bring your own dataset, below are the instructions on how the training data should be formatted as input to the model.\n", "\n", "A S3 path should contain two sub-directories 'train/' and 'validation/' (optional). Each sub-directory contains a 'data.csv' file (The ABALONE dataset used in this example has been prepared and saved in `training_dataset_s3_path` shown below).\n", "* The 'data.csv' files under sub-directory 'train/' and 'validation/' are for training and validation, respectively. The validation data is used to compute a validation score at the end of each boosting iteration. An early stopping is applied when the validation score stops improving. If the validation data is not provided, a 20% of training data is randomly sampled to serve as the validation data. \n", "\n", ">Note. For [Linear Learner](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression), the validation data is used to compute a validation score at the end of entire training process. There is no early stopping in the training process for scikit-learn linear model. If the validation data is not provided, the corresponding validation scores are not computed. \n", "\n", "* The first column of the 'data.csv' should have the corresponding target variable. The rest of other columns should have the corresponding predictor variables (features). \n", "* For the training and validation data, all the categorical features must be encoded as numerical values through encoding techniques such as the label encoding and one-hot encoding.\n", "\n", "Citations:\n", "\n", "- Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science\n" ] }, { "cell_type": "markdown", "id": "71de24e8", "metadata": {}, "source": [ "### 2.1. Retrieve Training Artifacts\n", "\n", "___\n", "\n", "Here, we retrieve the training docker container, the training algorithm source, and the tabular algorithm. Note that model_version=\"*\" fetches the latest model.\n", "\n", "For the training algorithm, we have two choices in this demonstration.\n", "* [XGBoost](https://xgboost.readthedocs.io/en/latest/): To use this algorithm, specify `train_model_id` as `xgboost-regression-model` in the cell below.\n", "* [Linear Learner](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression): To use this algorithm, specify `train_model_id` as `sklearn-regression-linear` in the cell below.\n", "\n", "\n", "Note. [LightGBM](https://lightgbm.readthedocs.io/en/latest/) (`train_model_id: lightgbm-regression-model`), [CatBoost](https://catboost.ai/en/docs/) (`train_model_id:catboost-regression-model`), [TabTransformer](https://arxiv.org/abs/2012.06678) (`train_model_id: pytorch-regression-model`), and [AutoGluon Tabular](https://auto.gluon.ai/stable/tutorials/tabular_prediction/index.html) (`train_model_id: autogluon-regression-ensemble`) are the other choices in the tabular regression category. Since they have different input-format requirements, please check separate notebooks `lightgbm_catboost_tabular/Amazon_Tabular_Regression_LightGBM_CatBoost.ipynb`, `tabtransformer_tabular/Amazon_Tabular_Regression_TabTransformer.ipynb`, and `autogluon_tabular/Amazon_Tabular_Regression_AutoGluon.ipynb` for details.\n", "\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "4d6b0910", "metadata": {}, "outputs": [], "source": [ "from sagemaker import image_uris, model_uris, script_uris\n", "\n", "\n", "train_model_id, train_model_version, train_scope = \"xgboost-regression-model\", \"*\", \"training\"\n", "training_instance_type = \"ml.m5.xlarge\"\n", "\n", "# Retrieve the docker image\n", "train_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None,\n", " model_id=train_model_id,\n", " model_version=train_model_version,\n", " image_scope=train_scope,\n", " instance_type=training_instance_type,\n", ")\n", "# Retrieve the training script\n", "train_source_uri = script_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, script_scope=train_scope\n", ")\n", "# Retrieve the pre-trained model tarball to further fine-tune\n", "train_model_uri = model_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, model_scope=train_scope\n", ")" ] }, { "cell_type": "markdown", "id": "db5cdb5a", "metadata": {}, "source": [ "### 2.2. Set Training Parameters\n", "\n", "---\n", "Now that we are done with all the setup that is needed, we are ready to train our tabular algorithm. To begin, let us create a [``sageMaker.estimator.Estimator``](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job. \n", "\n", "There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include: (i) Training data path. This is S3 folder in which the input data is stored, (ii) Output path: This the s3 folder in which the training output is stored. (iii) Training instance type: This indicates the type of machine on which to run the training.\n", "\n", "The second set of parameters are algorithm specific training hyper-parameters. \n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "58a3544a", "metadata": {}, "outputs": [], "source": [ "# Sample training data is available in this bucket\n", "training_data_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "training_data_prefix = \"training-datasets/tabular_regressonehot/\"\n", "\n", "training_dataset_s3_path = f\"s3://{training_data_bucket}/{training_data_prefix}\"\n", "\n", "output_bucket = sess.default_bucket()\n", "output_prefix = \"jumpstart-example-tabular-training\"\n", "\n", "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\"" ] }, { "cell_type": "markdown", "id": "12dee919", "metadata": {}, "source": [ "---\n", "For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "f5a77491", "metadata": {}, "outputs": [], "source": [ "from sagemaker import hyperparameters\n", "\n", "# Retrieve the default hyper-parameters for fine-tuning the model\n", "hyperparameters = hyperparameters.retrieve_default(\n", " model_id=train_model_id, model_version=train_model_version\n", ")\n", "\n", "# [Optional] Override default hyperparameters with custom values\n", "hyperparameters[\"num_boost_round\"] = \"500\"\n", "print(hyperparameters)" ] }, { "cell_type": "markdown", "id": "a5f66d86", "metadata": {}, "source": [ "### 2.3. Train with Automatic Model Tuning\n", "\n", "\n", "Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. We will use a HyperparameterTuner object to interact with Amazon SageMaker hyperparameter tuning APIs. " ] }, { "cell_type": "code", "execution_count": null, "id": "106a1647", "metadata": {}, "outputs": [], "source": [ "from sagemaker.tuner import ContinuousParameter, IntegerParameter, HyperparameterTuner\n", "\n", "use_amt = True\n", "\n", "if train_model_id == \"xgboost-regression-model\":\n", " hyperparameter_ranges = {\n", " \"num_round\": IntegerParameter(1000, 10000),\n", " \"lambda\": IntegerParameter(1, 50),\n", " \"colsample_bytree\": ContinuousParameter(0, 1),\n", " \"alpha\": IntegerParameter(1, 50),\n", " \"subsample\": ContinuousParameter(0, 1),\n", " \"min_child_weight\": IntegerParameter(1, 50),\n", " \"max_delta_step\": IntegerParameter(1, 50),\n", " \"gamma\": IntegerParameter(1, 50),\n", " \"eta\": ContinuousParameter(0.01, 1, scaling_type=\"Logarithmic\"),\n", " \"max_depth\": IntegerParameter(1, 10),\n", " }\n", "\n", "\n", "elif train_model_id == \"sklearn-regression-linear\":\n", " hyperparameter_ranges = {\n", " \"alpha\": ContinuousParameter(1e-6, 1, scaling_type=\"Logarithmic\"),\n", " \"tol\": ContinuousParameter(1e-6, 1e-1, scaling_type=\"Logarithmic\"),\n", " \"l1_ratio\": ContinuousParameter(0, 1),\n", " }" ] }, { "cell_type": "markdown", "id": "68940cf4", "metadata": {}, "source": [ "### 2.4. Start Training" ] }, { "cell_type": "markdown", "id": "66546a7c", "metadata": {}, "source": [ "---\n", "We start by creating the estimator object with all the required assets and then launch the training job.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "b339a2e3", "metadata": {}, "outputs": [], "source": [ "from sagemaker.estimator import Estimator\n", "from sagemaker.utils import name_from_base\n", "\n", "training_job_name = name_from_base(f\"jumpstart-{train_model_id}-training\")\n", "\n", "# Create SageMaker Estimator instance\n", "tabular_estimator = Estimator(\n", " role=aws_role,\n", " image_uri=train_image_uri,\n", " source_dir=train_source_uri,\n", " model_uri=train_model_uri,\n", " entry_point=\"transfer_learning.py\",\n", " instance_count=1,\n", " instance_type=training_instance_type,\n", " max_run=360000,\n", " hyperparameters=hyperparameters,\n", " output_path=s3_output_location,\n", ")\n", "\n", "if use_amt:\n", " if train_model_id == \"xgboost-regression-model\":\n", "\n", " tuner = HyperparameterTuner(\n", " tabular_estimator,\n", " \"validation:rmse\",\n", " hyperparameter_ranges,\n", " max_jobs=10,\n", " max_parallel_jobs=2,\n", " objective_type=\"Minimize\",\n", " base_tuning_job_name=training_job_name,\n", " )\n", "\n", " elif train_model_id == \"sklearn-regression-linear\":\n", "\n", " tuner = HyperparameterTuner(\n", " tabular_estimator,\n", " \"mse\",\n", " hyperparameter_ranges,\n", " [{\"Name\": \"mse\", \"Regex\": \"mean_squared_error: ([0-9\\\\.]+)\"}],\n", " max_jobs=10,\n", " max_parallel_jobs=2,\n", " objective_type=\"Minimize\",\n", " base_tuning_job_name=training_job_name,\n", " )\n", "\n", " tuner.fit({\"training\": training_dataset_s3_path}, logs=True)\n", "else:\n", " # Launch a SageMaker Training job by passing s3 path of the training data\n", " tabular_estimator.fit(\n", " {\"training\": training_dataset_s3_path}, logs=True, job_name=training_job_name\n", " )" ] }, { "cell_type": "markdown", "id": "dfe90806", "metadata": {}, "source": [ "## 3. Deploy and Run Inference on the Trained Tabular Model\n", "\n", "---\n", "\n", "In this section, you learn how to query an existing endpoint and make predictions of the examples you input. For each example, the model will output a numerical value to estimate the corresponding target value.\n", "\n", "We start by retrieving the artifacts and deploy the `tabular_estimator` that we trained.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "8331130e", "metadata": {}, "outputs": [], "source": [ "inference_instance_type = \"ml.m5.large\"\n", "\n", "# Retrieve the inference docker container uri\n", "deploy_image_uri = image_uris.retrieve(\n", " region=None,\n", " framework=None,\n", " image_scope=\"inference\",\n", " model_id=train_model_id,\n", " model_version=train_model_version,\n", " instance_type=inference_instance_type,\n", ")\n", "# Retrieve the inference script uri\n", "deploy_source_uri = script_uris.retrieve(\n", " model_id=train_model_id, model_version=train_model_version, script_scope=\"inference\"\n", ")\n", "\n", "endpoint_name = name_from_base(f\"jumpstart-example-{train_model_id}-\")\n", "\n", "predictor = (tuner if use_amt else tabular_estimator).deploy(\n", " initial_instance_count=1,\n", " instance_type=inference_instance_type,\n", " entry_point=\"inference.py\",\n", " image_uri=deploy_image_uri,\n", " source_dir=deploy_source_uri,\n", " endpoint_name=endpoint_name,\n", " enable_network_isolation=True,\n", ")" ] }, { "cell_type": "markdown", "id": "c15e7fd5", "metadata": {}, "source": [ "---\n", "Next, we download a hold-out Abalone test data from the S3 bucket for inference.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "d4946404", "metadata": {}, "outputs": [], "source": [ "jumpstart_assets_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n", "test_data_prefix = \"training-datasets/tabular_regressonehot/test\"\n", "test_data_file_name = \"data.csv\"\n", "\n", "boto3.client(\"s3\").download_file(\n", " jumpstart_assets_bucket, f\"{test_data_prefix}/{test_data_file_name}\", test_data_file_name\n", ")" ] }, { "cell_type": "markdown", "id": "33f41c9d", "metadata": {}, "source": [ "---\n", "Next, we read the Abalone test data into pandas data frame, prepare the ground truth target and predicting features to send into the endpoint.\n", "\n", "Below is the screenshot of the first 5 examples in the Abalone test set. All of the test examples with features\n", "from ```Feature_1``` to ```Feature_10``` are sent into the deployed model \n", "to get model predictions, to estimate the ground truth ```Target``` column. \n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "188e1523", "metadata": {}, "outputs": [], "source": [ "newline, bold, unbold = \"\\n\", \"\\033[1m\", \"\\033[0m\"\n", "\n", "import numpy as np\n", "import pandas as pd\n", "from sklearn.metrics import mean_absolute_error\n", "from sklearn.metrics import mean_squared_error\n", "from sklearn.metrics import r2_score\n", "import matplotlib.pyplot as plt\n", "\n", "# read the data\n", "test_data = pd.read_csv(test_data_file_name, header=None)\n", "test_data.columns = [\"Target\"] + [f\"Feature_{i}\" for i in range(1, test_data.shape[1])]\n", "\n", "num_examples, num_columns = test_data.shape\n", "print(\n", " f\"{bold}The test dataset contains {num_examples} examples and {num_columns} columns.{unbold}\\n\"\n", ")\n", "\n", "# prepare the ground truth target and predicting features to send into the endpoint.\n", "ground_truth_label, features = test_data.iloc[:, :1], test_data.iloc[:, 1:]\n", "\n", "print(\n", " f\"{bold}The first 5 observations of the test data: {unbold}\"\n", ") # Feature_1 is the categorical variables and rest of other features are numeric variables.\n", "test_data.head(5)" ] }, { "cell_type": "markdown", "id": "2ebfd6a1", "metadata": {}, "source": [ "---\n", "The following code queries the endpoint you have created to get the prediction for each test example. \n", "The `query_endpoint()` function returns a array-like of shape (num_examples, ).\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "0030cff2", "metadata": {}, "outputs": [], "source": [ "content_type = \"text/csv\"\n", "\n", "\n", "def query_endpoint(encoded_tabular_data):\n", " client = boto3.client(\"runtime.sagemaker\")\n", " response = client.invoke_endpoint(\n", " EndpointName=endpoint_name, ContentType=content_type, Body=encoded_tabular_data\n", " )\n", " return response\n", "\n", "\n", "def parse_resonse(query_response):\n", " predictions = json.loads(query_response[\"Body\"].read())\n", " return np.array(predictions[\"prediction\"])\n", "\n", "\n", "query_response = query_endpoint(features.to_csv(header=False, index=False).encode(\"utf-8\"))\n", "model_predictions = parse_resonse(query_response)" ] }, { "cell_type": "markdown", "id": "96e25d97", "metadata": {}, "source": [ "## 4. Evaluate the Prediction Results Returned from the Endpoint\n", "\n", "---\n", "We evaluate the predictions results returned from the endpoint by following two ways.\n", "\n", "* Visualize the prediction results by a residual plot to compare the model predictions and ground truth targets.\n", "\n", "* Measure the prediction results quantitatively.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "49d1a715", "metadata": {}, "outputs": [], "source": [ "# Visualization: a residual plot to compare the model predictions and ground truth targets. For each example, the residual value\n", "# is the subtraction between the prediction and ground truth target.\n", "# We can see that the points in the residual plot are randomly dispersed around the horizontal axis y = 0,\n", "# which indicates the fitted regression model is appropriate for the Abalone data\n", "\n", "residuals = ground_truth_label.values[:, 0] - model_predictions\n", "plt.scatter(model_predictions, residuals, color=\"blue\", s=40)\n", "plt.hlines(y=0, xmin=4, xmax=18)\n", "plt.xlabel(\"Predicted Values\", fontsize=18)\n", "plt.ylabel(\"Residuals\", fontsize=18)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "dd35c08a", "metadata": {}, "outputs": [], "source": [ "# Evaluate the model predictions quantitatively.\n", "eval_r2_score = r2_score(ground_truth_label.values, model_predictions)\n", "eval_mse_score = mean_squared_error(ground_truth_label.values, model_predictions)\n", "eval_mae_score = mean_absolute_error(ground_truth_label.values, model_predictions)\n", "print(\n", " f\"{bold}Evaluation result on test data{unbold}:{newline}\"\n", " f\"{bold}{r2_score.__name__}{unbold}: {eval_r2_score}{newline}\"\n", " f\"{bold}{mean_squared_error.__name__}{unbold}: {eval_mse_score}{newline}\"\n", " f\"{bold}{mean_absolute_error.__name__}{unbold}: {eval_mae_score}{newline}\"\n", ")" ] }, { "cell_type": "markdown", "id": "c135ec15", "metadata": {}, "source": [ "---\n", "Next, we delete the endpoint corresponding to the finetuned model.\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": null, "id": "fb4d7bc0", "metadata": {}, "outputs": [], "source": [ "# Delete the SageMaker endpoint and the attached resources\n", "predictor.delete_model()\n", "predictor.delete_endpoint()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook CI Test Results\n", "\n", "This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n", "\n", "![This us-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This us-east-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-east-2/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This us-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/us-west-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ca-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ca-central-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This sa-east-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/sa-east-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This eu-west-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This eu-west-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-2/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This eu-west-3 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-west-3/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This eu-central-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-central-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This eu-north-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/eu-north-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ap-southeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ap-southeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-southeast-2/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ap-northeast-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ap-northeast-2 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-northeast-2/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n", "\n", "![This ap-south-1 badge failed to load. Check your device's internet connectivity, otherwise the service is currently unavailable](https://h75twx4l60.execute-api.us-west-2.amazonaws.com/sagemaker-nb/ap-south-1/introduction_to_amazon_algorithms|xgboost_linear_learner_tabular|Amazon_Tabular_Regression_XGBoost_LinearLearner.ipynb)\n" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 5 }