sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training and tracking an XGBoost classifier with MLflow\n", "\n", "This notebook demonstrates how to use MLflow for tracking experiment using MLflow in Azure ML. We will consider the [Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/heart+disease). This database contains 76 attributes, but we will be using a subset of 14 of them. The \"goal\" field refers to the presence of heart disease in the patient. It is integer valued from 0 (no presence) to 4. In this example we will concentrated on simply attempting to distinguish presence (values 1,2,3,4) from absence (value 0)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Ensure you have the dependencies for this notebook\n", "%pip install -r xgboost_classification_mlflow.txt" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import warnings\n", "\n", "warnings.simplefilter(\"ignore\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Configuring the experiment" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's get started. It's always a good idea to start by configuring the name of the experiment we are working with in MLflow. Experiments allows you to organize runs in a comprehensive way so you can compare different experiment's runs with different parameters and configuration. MLflow configures the default experiment named \"Default\" but you can change this name." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mlflow\n", "\n", "mlflow.set_experiment(experiment_name=\"heart-condition-classifier\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Exploring the data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "file_url = \"https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv\"\n", "df = pd.read_csv(file_url)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, some of the variables are categorical. To make it simpler for our model to handle these values, let's use their encoded values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"thal\"] = df[\"thal\"].astype(\"category\").cat.codes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The encoded values looks then as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df[\"thal\"].unique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's split our dataset in train and test, so we can assess the performance of the model without overfitting the dataset." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(\n", " df.drop(\"target\", axis=1), df[\"target\"], test_size=0.3\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training a model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are going to use autologging capabilities in MLflow to track parameters and metrics:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mlflow.xgboost.autolog()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a simple classifier and train it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from xgboost import XGBClassifier\n", "\n", "model = XGBClassifier(use_label_encoder=False, eval_metric=\"logloss\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As soon as the `train` method is executed, MLflow will stat a run in Azure ML to start tracking the experiment's run. However, it is always a good idea to start the run manually so you have the run ID at hand quickly. This is not required though.\n", "\n", "> Important: When running training routines in Azure ML as jobs, you don't need to start or end the run in your training code as it is automatically done for you by Azure ML." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run = mlflow.start_run()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Logging extra metrics\n", "\n", "Autolog capabilities in XGBoost will log metrics like validation loss, however, it won't log any specific metric in a classification problem. In this case, we are going to pay closer attention to our ability to detect heart condition while avoiding a type II error as much as possible. 
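{ "cell_type": "markdown", "metadata": {}, "source": [ "Because we started the run manually, we can confirm at any point that it is still active and retrieve its ID (an optional sanity check, not required for tracking):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# mlflow.active_run() returns the run started above (or None if no run is active).\n", "print(mlflow.active_run().info.run_id)" ] },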
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Logging extra metrics\n", "\n", "Autologging for XGBoost will log metrics like the validation loss; however, it won't log any classification-specific metrics. In this case, we are going to pay closer attention to our ability to detect a heart condition while avoiding type II errors (false negatives) as much as possible. To calculate the metrics, we are going to use our test dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred = model.predict(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score, recall_score\n", "\n", "accuracy = accuracy_score(y_test, y_pred)\n", "recall = recall_score(y_test, y_pred)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\n", "print(\"Recall: %.2f%%\" % (recall * 100.0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploring the experiment with MLflow\n", "\n", "Let's first end the experiment run so we can review it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mlflow.end_run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see what has been logged, we can query the run again:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "run = mlflow.get_run(run.info.run_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore the parameters that got logged:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(data=[run.data.params], index=[\"Value\"]).T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore the metric values:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(data=[run.data.metrics], index=[\"Value\"]).T" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Notice how the metrics calculated with Scikit-Learn were automatically tracked for us; none of them were manually added to the run. Also, MLflow uses naming conventions, including the variables' names, to help understand what was logged: `X_test` was added to the metric names, meaning that they were computed on the testing split of the dataset." ] },
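{ "cell_type": "markdown", "metadata": {}, "source": [ "You can also query all the runs in the experiment as a pandas DataFrame, which is handy for comparing metrics across runs. A minimal sketch using `mlflow.search_runs` (the `experiment_names` argument requires a recent MLflow version, and the available columns depend on what was logged):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Each row is a run; logged metrics and params appear as extra columns.\n", "runs = mlflow.search_runs(experiment_names=[\"heart-condition-classifier\"])\n", "runs[[\"run_id\", \"status\", \"start_time\"]].head()" ] },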
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore the artifacts that got logged in the run. This requires using the MLflow client:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "client = mlflow.tracking.MlflowClient()\n", "client.list_artifacts(run_id=run.info.run_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see in this example, four artifacts are available in the run:\n", "\n", "* `feature_importance_weight.json` -> the feature importance of the model we created.\n", "* `feature_importance_weight.png` -> a plot of the feature importance mentioned above, stored as an image.\n", "* `metric_info.json` -> a JSON representation of all the metrics captured by XGBoost.\n", "* `model` -> the path where the model is stored. Note that this artifact is a directory.\n", "\n", "You can download any artifact using the method `download_artifacts`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "file_path = mlflow.artifacts.download_artifacts(\n", " run_id=run.info.run_id, artifact_path=\"feature_importance_weight.png\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the artifact is an image, we can display it in the following way:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import matplotlib.image as img\n", "\n", "image = img.imread(file_path)\n", "plt.imshow(image)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading the model back\n", "\n", "`autolog` has also logged the model for us. Let's load it back:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classifier = mlflow.xgboost.load_model(f\"runs:/{run.info.run_id}/model\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the type returned by this method is an XGBoost classifier:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(classifier)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can get predictions back from the model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "classifier.predict(X_test)" ] },
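{ "cell_type": "markdown", "metadata": {}, "source": [ "Since this is a binary classification problem, a confusion matrix is a quick way to see how the reloaded model trades false positives against false negatives (the type II errors we care about). A minimal sketch with Scikit-Learn:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix\n", "\n", "# Rows are the true classes, columns the predicted ones; the bottom-left\n", "# cell counts false negatives (patients with a condition we missed).\n", "confusion_matrix(y_test, classifier.predict(X_test))" ] },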
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Logging models with preprocessing\n", "\n", "As can be seen, MLflow automatically logs models for you, but sometimes you need to log a different model, especially when you are doing preprocessing. In this example we did some categorical encoding, so our model right now expects the values of the column `thal` to be integers, not strings.\n", "\n", "To remove that requirement, we can create a `Pipeline` object with Scikit-Learn and log that model instead of the one automatically logged for us. Let's see how:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Reload the dataset\n", "df = pd.read_csv(file_url)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(\n", " df.drop(\"target\", axis=1), df[\"target\"], test_size=0.3\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Using an encoder" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's use an `OrdinalEncoder` instead of the categorical types:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.preprocessing import OrdinalEncoder" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We only need to transform the column `thal`. A convenient way to do this is by applying a `ColumnTransformer` to that column; the remaining columns will be passed through to the model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.compose import ColumnTransformer\n", "from xgboost import XGBClassifier\n", "\n", "encoder = ColumnTransformer(\n", " [\n", " (\n", " \"cat_encoding\",\n", " OrdinalEncoder(\n", " categories=\"auto\",\n", " encoded_missing_value=np.nan,\n", " ),\n", " [\"thal\"],\n", " )\n", " ],\n", " remainder=\"passthrough\",\n", " verbose_feature_names_out=False,\n", ")\n", "\n", "model = XGBClassifier(use_label_encoder=False, eval_metric=\"logloss\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline = Pipeline(steps=[(\"encoding\", encoder), (\"model\", model)])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can log this model in our run. Since this is a Scikit-Learn object, we will log it using the `sklearn` flavor instead of `xgboost`. Let's create a new complete run so we can see the difference." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Signatures\n", "\n", "One extra thing we need to take care of is the expected signature of the model. Signatures are used by MLflow to know what type of inputs a given model expects, which allows the model builder to be explicit about the expected types. In the first model we logged, all inputs needed to be numeric, including the column `thal`. However, our new pipeline can encode these values automatically, so we can take `thal` values in string format." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mlflow.models import infer_signature\n", "\n", "signature = infer_signature(X_test, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see the signature:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "signature" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Logging the pipeline model\n", "\n", "Now, it's time to fit our entire pipeline and log it inside the run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with mlflow.start_run() as run:\n", " pipeline.fit(X_train, y_train)\n", " mlflow.sklearn.log_model(pipeline, artifact_path=\"pipeline\", signature=signature)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> **Note:** You are not required to log the pipeline model manually, as you can also turn on `mlflow.sklearn.autolog()`. If you do that, the model will automatically be logged for you by the Scikit-Learn integration with MLflow. However, we have preferred to do it this way to show the different approaches and to be explicit about logging a pipeline." ] },
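{ "cell_type": "markdown", "metadata": {}, "source": [ "Thanks to the signature, the model can also be consumed through MLflow's generic `pyfunc` flavor, which validates the input columns and types before predicting (a sketch, assuming the `run` object from the cell above):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load the same artifact through the framework-agnostic pyfunc flavor.\n", "pyfunc_model = mlflow.pyfunc.load_model(f\"runs:/{run.info.run_id}/pipeline\")\n", "pyfunc_model.predict(X_test)" ] },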
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we explore try to get this model back now:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_model = mlflow.sklearn.load_model(f\"runs:/{run.info.run_id}/pipeline\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check the type of what's returned" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(pipeline_model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see how we can submit data in with categorical columns:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline_model.predict(X_test)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.10 - SDK V2", "language": "python", "name": "python310-sdkv2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 4 }