{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ur8xi4C7S06n"
},
"outputs": [],
"source": [
"# Copyright 2021 Google LLC\n",
"#\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the Lice`nse is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0bdfa24a80ed"
},
"source": [
"Starting on September 15, 2024, you can only customize classification, entity extraction, and sentiment analysis models by moving to Vertex AI Gemini prompts and tuning. Training or updating models for Vertex AI AutoML for Text classification, entity extraction, and sentiment analysis objectives will no longer be available. You can continue using existing Vertex AI AutoML Text objectives until June 15, 2025. For more information about how Gemini offers enhanced user experience through improved prompting capabilities, see \n",
"[Introduction to tuning](https://cloud.google.com/vertex-ai/generative-ai/docs/models/tune-gemini-overview)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0d2298941703"
},
"source": [
"# Vertex AI: Create, train, and deploy an AutoML text classification model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JAPoU8Sm5E6e"
},
"source": [
"<table align=\"left\">\n",
"\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl-text-classification.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/automl-text-classification.ipynb\">\n",
" <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n",
" View on GitHub\n",
" </a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/official/automl/automl-text-classification.ipynb\">\n",
" <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n",
" Open in Vertex AI Workbench\n",
" </a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1adb10a59bc3"
},
"source": [
"## Overview\n",
"\n",
"This notebook walks you through the major phases of building and using an AutoML text classification model on [Vertex AI](https://cloud.google.com/vertex-ai/docs/). \n",
"\n",
"Learn more about [Classification for text data](https://cloud.google.com/vertex-ai/docs/training-overview#classification_for_text)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9b9824ae2c91"
},
"source": [
"### Objective\n",
"\n",
"In this tutorial, you learn how to use AutoML to train a text classification model.\n",
"\n",
"This tutorial uses the following Google Cloud ML services:\n",
"\n",
"- AutoML training\n",
"- Vertex AI model resource\n",
"\n",
"The steps performed include:\n",
"\n",
"* Create a Vertex AI dataset.\n",
"* Train an AutoML text classification model resource.\n",
"* Obtain the evaluation metrics for the model resource.\n",
"* Create an endpoint resource.\n",
"* Deploy the model resource to the endpoint resource.\n",
"* Make an online prediction\n",
"* Make a batch prediction"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f67c62885df4"
},
"source": [
"### Dataset\n",
"\n",
"In this notebook, you use the \"Happy Moments\" sample dataset to train a model. The resulting model classifies happy moments into categores that reflect the causes of happiness. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0259a7ce8120"
},
"source": [
"### Costs\n",
"\n",
"This tutorial uses billable components of Google Cloud:\n",
"\n",
"* Vertex AI training and serving\n",
"* Cloud Storage\n",
"\n",
"Learn about [Vertex AI\n",
"pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage\n",
"pricing](https://cloud.google.com/storage/pricing), and use the [Pricing\n",
"Calculator](https://cloud.google.com/products/calculator/)\n",
"to generate a cost estimate based on your projected usage"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "db52a0a61fca"
},
"source": [
"### Installation\n",
"\n",
"Install the following packages for executing this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b75757581291"
},
"outputs": [],
"source": [
"# install packages\n",
"! pip3 install --upgrade --quiet google-cloud-aiplatform \\\n",
" google-cloud-storage \\\n",
" jsonlines "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e9255e3b156f"
},
"source": [
"### Colab Only: Uncomment the following cell to restart the kernel"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0c0b2427998a"
},
"outputs": [],
"source": [
"# Automatically restart kernel after installs so that your environment can access the new packages\n",
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "435b8e413535"
},
"source": [
"### Before you begin\n",
"\n",
"#### Set your project ID\n",
"\n",
"**If you don't know your project ID**, try the following:\n",
"- Run `gcloud config list`\n",
"- Run `gcloud projects list`\n",
"- See the support page: [Locate the project ID](https://support.google.com/googleapi/answer/7014113)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "be175254a715"
},
"outputs": [],
"source": [
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n",
"\n",
"# set the project id\n",
"! gcloud config set project $PROJECT_ID"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2e6b8b324ce1"
},
"source": [
"#### Region\n",
"\n",
"You can also change the `REGION` variable used by Vertex AI. \n",
"Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ae43d96c4b1b"
},
"outputs": [],
"source": [
"REGION = \"[your-region]\" # @param {type: \"string\"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6c43a8673066"
},
"source": [
"### Authenticate your Google Cloud account\n",
"\n",
"Depending on your Jupyter environment, you may have to manually authenticate. Follow the relevant instructions below.\n",
"\n",
"**1. Vertex AI Workbench** \n",
"- Do nothing since you're already authenticated.\n",
"\n",
"**2. Local JupyterLab Instance,** uncomment and run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fbc9cd30cc4b"
},
"outputs": [],
"source": [
"# ! gcloud auth login"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cd0da2c26879"
},
"source": [
"**3. Colab,** uncomment and run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a336a05c6149"
},
"outputs": [],
"source": [
"# from google.colab import auth\n",
"# auth.authenticate_user()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0461097edfa5"
},
"source": [
"**4. Service Account or other**\n",
"- See all the authentication options here: [Google Cloud Platform Jupyter Notebook Authentication Guide](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_authentication_guide.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e5755d1a554f"
},
"source": [
"### Create a Cloud Storage bucket\n",
"\n",
"Create a storage bucket to store intermediate artifacts such as datasets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d2de92accb67"
},
"outputs": [],
"source": [
"BUCKET_NAME = f\"your-bucket-name-{PROJECT_ID}-unique\" # @param {type:\"string\"}\n",
"BUCKET_URI = f\"gs://{BUCKET_NAME}\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b72bfdf29dae"
},
"source": [
"**If your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "a4453435d115"
},
"outputs": [],
"source": [
"! gsutil mb -l {REGION} {BUCKET_URI}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "93d685084cf2"
},
"source": [
"### Import libraries and define constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "152013538e59"
},
"outputs": [],
"source": [
"import jsonlines\n",
"from google.cloud import aiplatform, storage\n",
"from google.cloud.aiplatform import jobs"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "03101a4492f3"
},
"source": [
"### Initialize Vertex AI \n",
"\n",
"Initialize the Vertex AI SDK for Python for your project."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "740cd5c67c79"
},
"outputs": [],
"source": [
"aiplatform.init(project=PROJECT_ID, location=REGION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "32c971919605"
},
"source": [
"## Create a dataset resource and import your data\n",
"\n",
"The notebook uses the 'Happy Moments' dataset for demonstration purposes. You can change it to another text classification dataset that [conforms to the data preparation requirements](https://cloud.google.com/vertex-ai/docs/datasets/prepare-text#classification).\n",
"\n",
"Using the Python SDK, create a dataset and import the dataset in one call to `TextDataset.create()`, as shown in the following cell.\n",
"\n",
"Creating and importing data is a long running operation. This next step can take a while. The `create()` method waits for the operation to complete, outputting statements as the operation progresses. The statements contain the full name of the dataset used in the following section.\n",
"\n",
"**Note**: You can close the noteboook while waiting for this operation to complete. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d35b8b6b94ae"
},
"outputs": [],
"source": [
"# Use a timestamp to ensure unique resources\n",
"src_uris = \"gs://cloud-ml-data/NL-classification/happiness.csv\"\n",
"display_name = \"e2e-text-dataset-unique\"\n",
"\n",
"text_dataset = aiplatform.TextDataset.create(\n",
" display_name=display_name,\n",
" gcs_source=src_uris,\n",
" import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,\n",
" sync=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "68f10356cab9"
},
"source": [
"## Train your text classification model\n",
"\n",
"Now you can begin training your model. Training the model is a two part process:\n",
"\n",
"1. **Define the training job.** You must provide a display name and the type of training you want when defining the training job.\n",
"2. **Run the training job.** When you run the training job, you need to supply a reference to the dataset to use for training. You can also configure the data split percentages.\n",
"\n",
"You don't need to specify [data splits](https://cloud.google.com/vertex-ai/docs/general/ml-use). The training job has a default setting of training 80%/ testing 10%/ validate 10% if you don't provide values.\n",
"\n",
"To train your model, you call `AutoMLTextTrainingJob.run()` as shown in the following snippets. The method returns a reference to your new model object.\n",
"\n",
"As with importing data into the dataset, training your model can take a substantial amount of time. The client library prints out operation status messages while the training pipeline operation processes. You must wait for the training process to complete before you can get the resource name and ID of your new model, which is required for model evaluation and model deployment.\n",
"\n",
"**Note**: You can close the notebook while waiting for the operation to complete."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0aa0f01805ea"
},
"outputs": [],
"source": [
"# Define the training job\n",
"training_job_display_name = \"e2e-text-training-job-unique\"\n",
"job = aiplatform.AutoMLTextTrainingJob(\n",
" display_name=training_job_display_name,\n",
" prediction_type=\"classification\",\n",
" multi_label=False,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1ec60baf2c51"
},
"outputs": [],
"source": [
"model_display_name = \"e2e-text-classification-model-unique\"\n",
"\n",
"# Run the training job\n",
"model = job.run(\n",
" dataset=text_dataset,\n",
" model_display_name=model_display_name,\n",
" training_fraction_split=0.1,\n",
" validation_fraction_split=0.1,\n",
" test_fraction_split=0.1,\n",
" sync=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "caaa3f32b12e"
},
"source": [
"## Review model evaluation scores\n",
"\n",
"After your model training has finished, you can review the evaluation scores for it using the `list_model_evaluations()` method. This method returns an iterator for each evaluation slice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "b0bb6be8621a"
},
"outputs": [],
"source": [
"model_evaluations = model.list_model_evaluations()\n",
"\n",
"for model_evaluation in model_evaluations:\n",
" print(model_evaluation.to_dict())"
]
},
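{
"cell_type": "markdown",
"metadata": {
"id": "9f1c2b3a4d5e"
},
"source": [
"Each evaluation's `to_dict()` output nests its scores under a `metrics` key. As a minimal sketch, the following cell pulls out a single headline metric; the `auPrc` field name is an assumption based on the AutoML classification metrics schema, so verify it against the dictionaries printed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8e2d3c4b5a6f"
},
"outputs": [],
"source": [
"# A minimal sketch: extract a headline metric from the first evaluation.\n",
"# The \"auPrc\" field name is an assumption; check the printed output above.\n",
"evaluations = list(model.list_model_evaluations())\n",
"if evaluations:\n",
"    metrics = evaluations[0].to_dict().get(\"metrics\", {})\n",
"    print(f\"Area under the precision-recall curve (auPrc): {metrics.get('auPrc')}\")"
]
},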
{
"cell_type": "markdown",
"metadata": {
"id": "b5dbe4dbaa60"
},
"source": [
"## Deploy your text classification model\n",
"\n",
"Once your model has completed training, you must deploy it to an _endpoint_ to get online predictions from the model. When you deploy the model to an endpoint, a copy of the model is made on the endpoint with a new resource name and display name.\n",
"\n",
"You can deploy multiple models to the same endpoint and split traffic between the various models assigned to the endpoint. However, you must deploy one model at a time to the endpoint. To change the traffic split percentages, you must assign new values on your second (and subsequent) models each time you deploy a new model.\n",
"\n",
"The following code block demonstrates how to deploy a model. The code snippet relies on the Python SDK to create a new endpoint for deployment. The call to `modely.deploy()` returns a reference to an endpoint object--you need this reference for online predictions in the next section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "19bc4a55ccfe"
},
"outputs": [],
"source": [
"deployed_model_display_name = \"e2e-deployed-text-classification-model-unique\"\n",
"\n",
"endpoint = model.deploy(\n",
" deployed_model_display_name=deployed_model_display_name, sync=True\n",
")"
]
},
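{
"cell_type": "markdown",
"metadata": {
"id": "7a3b4c5d6e1f"
},
"source": [
"If you later deploy a second model to the same endpoint, you can split traffic between the deployed models with the `traffic_percentage` parameter of `deploy()`. The following commented-out cell is an illustrative sketch only: `other_model` stands in for a hypothetical second model resource that isn't created in this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6b4c5d6e7f2a"
},
"outputs": [],
"source": [
"# Illustrative sketch only: `other_model` is a hypothetical second model.\n",
"# Deploying it with traffic_percentage=20 routes 20% of requests to the\n",
"# new model and leaves 80% on the previously deployed model.\n",
"# endpoint = other_model.deploy(\n",
"#     endpoint=endpoint,\n",
"#     deployed_model_display_name=\"second-model\",\n",
"#     traffic_percentage=20,\n",
"#     sync=True,\n",
"# )"
]
},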
{
"cell_type": "markdown",
"metadata": {
"id": "351a6e8be3a5"
},
"source": [
"## Get online predictions from your model\n",
"\n",
"Now that you have your endpoint, you can get online predictions from the text classification model. To get the online prediction, you send a prediction request to your endpoint."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "953b333fc0fc"
},
"outputs": [],
"source": [
"content = \"I got a high score on my math final!\"\n",
"\n",
"response = endpoint.predict(instances=[{\"content\": content}])\n",
"\n",
"for prediction_ in response.predictions:\n",
" ids = prediction_[\"ids\"]\n",
" display_names = prediction_[\"displayNames\"]\n",
" confidence_scores = prediction_[\"confidences\"]\n",
" for count, id in enumerate(ids):\n",
" print(f\"Prediction ID: {id}\")\n",
" print(f\"Prediction display name: {display_names[count]}\")\n",
" print(f\"Prediction confidence score: {confidence_scores[count]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f18811cd0477"
},
"source": [
"## Get batch predictions from your model\n",
"\n",
"You can get batch predictions from a text classification model without deploying it. You must first format all of your prediction instances (prediction input) in JSONL format and store the JSONL file in a Google Cloud storage bucket. You must also provide a Google Cloud storage bucket to hold your prediction output.\n",
"\n",
"To start, you must first create your predictions input file in JSONL format. Each line in the JSONL document needs to be formatted as follows:\n",
"\n",
"```\n",
"{ \"content\": \"gs://sourcebucket/datasets/texts/source_text.txt\", \"mimeType\": \"text/plain\"}\n",
"```\n",
"\n",
"The `content` field in the JSON structure must be a Google Cloud Storage URI to another document that contains the text input for prediction.\n",
"[See the documentation for more information.](https://cloud.google.com/ai-platform-unified/docs/predictions/batch-predictions#text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e4b838cbcd99"
},
"outputs": [],
"source": [
"instances = [\n",
" \"We hiked through the woods and up the hill to the ice caves\",\n",
" \"My kitten is so cute\",\n",
"]\n",
"input_file_name = \"batch-prediction-input.jsonl\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "76ac422ab8dd"
},
"source": [
"For batch prediction, supply the following:\n",
"\n",
"+ All of your prediction instances as individual files on Google Cloud Storage, as TXT files for your instances.\n",
"+ A JSONL file that lists the URIs of all your prediction instances.\n",
"+ A Cloud Storage bucket to hold the output from batch prediction.\n",
"\n",
"For this tutorial, the following cells create a new Storage bucket, upload individual prediction instances as text files to the bucket, and then create the JSONL file with the URIs of your prediction instances."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8b7cabbb86ad"
},
"outputs": [],
"source": [
"# Instantiate the Storage client and create the new bucket\n",
"# from google.cloud import storage\n",
"storage_client = storage.Client()\n",
"bucket = storage_client.get_bucket(BUCKET_NAME)\n",
"# Iterate over the prediction instances, creating a new TXT file\n",
"# for each.\n",
"input_file_data = []\n",
"for count, instance in enumerate(instances):\n",
" instance_name = f\"input_{count}.txt\"\n",
" instance_file_uri = f\"{BUCKET_URI}/{instance_name}\"\n",
" # Add the data to store in the JSONL input file.\n",
" tmp_data = {\"content\": instance_file_uri, \"mimeType\": \"text/plain\"}\n",
" input_file_data.append(tmp_data)\n",
"\n",
" # Create the new instance file\n",
" blob = bucket.blob(instance_name)\n",
" blob.upload_from_string(instance)\n",
"\n",
"input_str = \"\\n\".join([str(d) for d in input_file_data])\n",
"file_blob = bucket.blob(f\"{input_file_name}\")\n",
"file_blob.upload_from_string(input_str)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "31c262320610"
},
"source": [
"Now that you have the bucket with the prediction instances ready, you can send a batch prediction https://storage.googleapis.com/upload/storage/v1/b/gs://vertex-ai-devaip-20220728004429/o?uploadType=multipartequest to Vertex AI. When you send a request to the service, you must provide the URI of your JSONL file and your output bucket, including the `gs://` protocols.\n",
"\n",
"With the Python SDK, you can create a batch prediction job by calling `Model.batch_predict()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f5ab2139d52d"
},
"outputs": [],
"source": [
"job_display_name = \"e2e-text-classification-batch-prediction-job\"\n",
"# model = aiplatform.Model(model_name=model.name)\n",
"batch_prediction_job = model.batch_predict(\n",
" job_display_name=job_display_name,\n",
" gcs_source=f\"{BUCKET_URI}/{input_file_name}\",\n",
" gcs_destination_prefix=f\"{BUCKET_URI}/output\",\n",
" sync=True,\n",
")\n",
"batch_prediction_job_name = batch_prediction_job.resource_name"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "11503f2e08a2"
},
"source": [
"Once the batch prediction job completes, the Python SDK prints out the resource name of the batch prediction job in the format `projects/[PROJECT_ID]/locations/[LOCATION]/batchPredictionJobs/[BATCH_PREDICTION_JOB_ID]`. You can query the Vertex AI service for the status of the batch prediction job using its ID.\n",
"\n",
"The following code snippet demonstrates how to create an instance of the `BatchPredictionJob` class to review its status. Note that you need the full resource name printed out from the Python SDK for this snippet.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cd014de40e2f"
},
"source": [
"## Batch prediction job"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bf6e614723ed"
},
"outputs": [],
"source": [
"batch_job = jobs.BatchPredictionJob(batch_prediction_job_name)\n",
"print(f\"Batch prediction job state: {str(batch_job.state)}\")"
]
},
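{
"cell_type": "markdown",
"metadata": {
"id": "5c6d7e8f9a3b"
},
"source": [
"Because this tutorial created the batch prediction job with `sync=True`, the job has already finished by this point. If you create the job with `sync=False`, or check it from another session, you can poll the job until it reaches a terminal state. The following cell is a minimal sketch that uses the `JobState` enum from the `aiplatform.gapic` module."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4d7e8f9a0b4c"
},
"outputs": [],
"source": [
"import time\n",
"\n",
"# Poll the batch prediction job until it reaches a terminal state.\n",
"terminal_states = (\n",
"    aiplatform.gapic.JobState.JOB_STATE_SUCCEEDED,\n",
"    aiplatform.gapic.JobState.JOB_STATE_FAILED,\n",
"    aiplatform.gapic.JobState.JOB_STATE_CANCELLED,\n",
")\n",
"while batch_job.state not in terminal_states:\n",
"    print(f\"Job state: {batch_job.state}; checking again in 60 seconds...\")\n",
"    time.sleep(60)\n",
"print(f\"Final job state: {batch_job.state}\")"
]
},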
{
"cell_type": "markdown",
"metadata": {
"id": "1f9a12dadf6f"
},
"source": [
"After the batch job has completed, you can view the results of the job in your output Storage bucket. You might want to first list all of the files in your output bucket to find the URI of the output file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8ff1ec03205c"
},
"outputs": [],
"source": [
"BUCKET_OUTPUT = f\"{BUCKET_URI}/output\"\n",
"\n",
"! gsutil ls -a $BUCKET_OUTPUT"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "52f3f8af2e41"
},
"source": [
"The output from the batch prediction job should be contained in a folder (or _prefix_) that includes the name of the batch prediction job plus a time stamp for when it was created.\n",
"\n",
"For example, if your batch prediction job name is `my-job` and your bucket name is `my-bucket`, the URI of the folder containing your output might look like the following:\n",
"\n",
"```\n",
"gs://my-bucket/output/prediction-my-job-2021-06-04T19:54:25.889262Z/\n",
"```\n",
"\n",
"To read the batch prediction results, you must download the file locally and open the file. The next cell copies all of the files in the `BUCKET_OUTPUT_FOLDER` into a local folder."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4bb16e040942"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"RESULTS_DIRECTORY = \"prediction_results\"\n",
"RESULTS_DIRECTORY_FULL = f\"{RESULTS_DIRECTORY}/output\"\n",
"\n",
"# Create missing directories\n",
"os.makedirs(RESULTS_DIRECTORY, exist_ok=True)\n",
"\n",
"# Get the Cloud Storage paths for each result\n",
"! gsutil -m cp -r $BUCKET_OUTPUT $RESULTS_DIRECTORY\n",
"\n",
"# Get most recently modified directory\n",
"latest_directory = max(\n",
" (\n",
" os.path.join(RESULTS_DIRECTORY_FULL, d)\n",
" for d in os.listdir(RESULTS_DIRECTORY_FULL)\n",
" ),\n",
" key=os.path.getmtime,\n",
")\n",
"\n",
"print(f\"Local results folder: {latest_directory}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e375109b7e40"
},
"source": [
"## Review results"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f406e1e4d5ec"
},
"source": [
"With all of the results files downloaded locally, you can open them and read the results. In this tutorial, you use the [`jsonlines`](https://jsonlines.readthedocs.io/en/latest/) library to read the output results.\n",
"\n",
"The following cell opens up the JSONL output file and then prints the predictions for each instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "91d7f2a74a7c"
},
"outputs": [],
"source": [
"# Get downloaded results in directory\n",
"results_files = []\n",
"for dirpath, _, files in os.walk(latest_directory):\n",
" for file in files:\n",
" if file.find(\"predictions\") >= 0:\n",
" results_files.append(os.path.join(dirpath, file))\n",
"\n",
"\n",
"# Consolidate all the results into a list\n",
"results = []\n",
"for results_file in results_files:\n",
" # Open each result\n",
" with jsonlines.open(results_file) as reader:\n",
" for result in reader.iter(type=dict, skip_invalid=True):\n",
" instance = result[\"instance\"]\n",
" prediction = result[\"prediction\"]\n",
" print(f\"\\ninstance: {instance['content']}\")\n",
" for key, output in prediction.items():\n",
" print(f\"\\n{key}: {output}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "af3874f08502"
},
"source": [
"## Cleaning up\n",
"\n",
"To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n",
"\n",
"Otherwise, you can delete the individual resources you created in this tutorial:\n",
"\n",
"* Dataset\n",
"* Training job\n",
"* Model\n",
"* Endpoint\n",
"* Batch prediction\n",
"* Batch prediction bucket"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "adce73b48b72"
},
"outputs": [],
"source": [
"delete_bucket = False\n",
"\n",
"if delete_bucket or os.getenv(\"IS_TESTING\"):\n",
" ! gsutil rm -r $BUCKET_URI\n",
"\n",
"# Delete batch\n",
"batch_job.delete()\n",
"\n",
"# Undeploy endpoint\n",
"endpoint.undeploy_all()\n",
"\n",
"# `force` parameter ensures that models are undeployed before deletion\n",
"endpoint.delete()\n",
"\n",
"# Delete model\n",
"model.delete()\n",
"\n",
"# Delete text dataset\n",
"text_dataset.delete()\n",
"\n",
"# Delete training job\n",
"job.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fa6a8c434c79"
},
"source": [
"## Next steps\n",
"\n",
"After completing this tutorial, see the following documentation pages to learn more about Vertex AI:\n",
"\n",
"* [Preparing text training data](https://cloud.google.com/vertex-ai/docs/training-overview#text_data)\n",
"* [Training an AutoML model using the API](https://cloud.google.com/vertex-ai/docs/training-overview#automl)\n",
"* [Evaluating AutoML models](https://cloud.google.com/vertex-ai/docs/training-overview#automl)\n",
"* [Deploying a model using ther Vertex AI API](https://cloud.google.com/vertex-ai/docs/predictions/overview#model_deployment)\n",
"* [Getting online predictions from AutoML models](https://cloud.google.com/vertex-ai/docs/predictions/overview#model_deployment)\n",
"* [Getting batch predictions](https://cloud.google.com/vertex-ai/docs/predictions/overview#batch_predictions)"
]
}
],
"metadata": {
"colab": {
"name": "automl-text-classification.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}