09_deploying/09d_bytes.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 72
},
"id": "hiQ6zAoYhyaA",
"outputId": "0acee878-1207-42c3-9bee-a594acd44365"
},
"outputs": [
{
"data": {
"text/markdown": [
"\n",
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Handling image bytes&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F09_deploying%2F09d_bytes.ipynb&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F09_deploying%2F09d_bytes.ipynb\">\n",
" <img src=\"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png\"/> Run in AI Platform Notebook</a>\n",
" </td>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/09_deploying/09d_bytes.ipynb\">\n",
" <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/09_deploying/09d_bytes.ipynb\">\n",
" <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/09_deploying/09d_bytes.ipynb\">\n",
" <img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
" </td>\n",
"</table>\n"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from IPython.display import Markdown as md\n",
"\n",
"### change to reflect your notebook\n",
"_nb_loc = \"09_deploying/09d_bytes.ipynb\"\n",
"_nb_title = \"Handling image bytes\"\n",
"\n",
"### no need to change any of this\n",
"_nb_safeloc = _nb_loc.replace('/', '%2F')\n",
"md(\"\"\"\n",
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F{2}\">\n",
" <img src=\"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png\"/> Run in AI Platform Notebook</a>\n",
" </td>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}\">\n",
" <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}\">\n",
" <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
" </td>\n",
" <td>\n",
" <a href=\"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}\">\n",
" <img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
" </td>\n",
"</table>\n",
"\"\"\".format(_nb_loc, _nb_title, _nb_safeloc))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a8HQYsAtC0Fv"
},
"source": [
"# Handling image bytes\n",
"\n",
"In this notebook, we start from the checkpoints of an already trained and saved model (as in Chapter 7).\n",
"For convenience, we have put this model in a public bucket in gs://practical-ml-vision-book-data/flowers_5_trained\n",
"\n",
"What we want to do is to directly handle bytes over the wire. That ways clients will not have to put their\n",
"images on Google Cloud Storage."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5UOm2etrwYCs"
},
"source": [
"## Enable GPU and set up helper functions\n",
"\n",
"This notebook and pretty much every other notebook in this repository\n",
"will run faster if you are using a GPU.\n",
"On Colab:\n",
"- Navigate to Edit→Notebook Settings\n",
"- Select GPU from the Hardware Accelerator drop-down\n",
"\n",
"On Cloud AI Platform Notebooks:\n",
"- Navigate to https://console.cloud.google.com/ai-platform/notebooks\n",
"- Create an instance with a GPU or select your instance and add a GPU\n",
"\n",
"Next, we'll confirm that we can connect to the GPU with tensorflow:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ugGJcxKAwhc2",
"outputId": "8e946159-46cf-4aba-f53e-622e9ea8adee"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"print('TensorFlow version' + tf.version.VERSION)\n",
"print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!'))\n",
"print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices(\"GPU\"))))\n",
"device_name = tf.test.gpu_device_name()\n",
"if device_name != '/device:GPU:0':\n",
" raise SystemError('GPU device not found')\n",
"print('Found GPU at: {}'.format(device_name))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Read from checkpoints.\n",
"\n",
"We start from *the checkpoints* not the saved model because we want the full model\n",
"not just the signatures."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model: \"flower_classification\"\n",
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"random/center_crop (RandomCr (None, 224, 224, 3) 0 \n",
"_________________________________________________________________\n",
"random_lr_flip/none (RandomF (None, 224, 224, 3) 0 \n",
"_________________________________________________________________\n",
"mobilenet_embedding (KerasLa (None, 1280) 2257984 \n",
"_________________________________________________________________\n",
"dense_hidden (Dense) (None, 32) 40992 \n",
"_________________________________________________________________\n",
"flower_prob (Dense) (None, 5) 165 \n",
"=================================================================\n",
"Total params: 2,299,141\n",
"Trainable params: 2,265,029\n",
"Non-trainable params: 34,112\n",
"_________________________________________________________________\n",
"None\n"
]
}
],
"source": [
"import os\n",
"import shutil\n",
"import tensorflow as tf\n",
"\n",
"CHECK_POINT_DIR='gs://practical-ml-vision-book-data/flowers_5_trained/chkpts'\n",
"model = tf.keras.models.load_model(CHECK_POINT_DIR)\n",
"print(model.summary())"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"IMG_HEIGHT = 345\n",
"IMG_WIDTH = 345\n",
"IMG_CHANNELS = 3\n",
"CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split()\n",
"\n",
"def read_from_jpegfile(filename):\n",
" img_bytes = tf.io.read_file(filename)\n",
" return img_bytes\n",
" \n",
"def preprocess(img_bytes):\n",
" img = tf.image.decode_jpeg(img_bytes, channels=IMG_CHANNELS)\n",
" img = tf.image.convert_image_dtype(img, tf.float32)\n",
" return tf.image.resize_with_pad(img, IMG_HEIGHT, IMG_WIDTH)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0.3507376 0.3983379 0.02309519 0.07595135 0.15187794]]\n",
"[[3.1782882e-05 9.9996090e-01 5.1874702e-07 3.2268999e-06 3.5444552e-06]]\n",
"[[9.9471879e-01 3.5855272e-03 2.1374140e-05 1.5876008e-03 8.6639280e-05]]\n",
"[[1.5454909e-03 2.2907292e-04 3.6099207e-02 3.1195192e-03 9.5900667e-01]]\n",
"[[4.7941930e-06 3.9310632e-07 5.8220904e-02 9.1497981e-07 9.4177294e-01]]\n"
]
}
],
"source": [
"filenames = [\n",
" 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',\n",
" 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',\n",
" 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg',\n",
" 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/tulips/8733586143_3139db6e9e_n.jpg',\n",
" 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg'\n",
"]\n",
"for filename in filenames:\n",
" img_bytes = read_from_jpegfile(filename)\n",
" img = preprocess(img_bytes)\n",
" img = tf.expand_dims(img, axis=0)\n",
" pred = model.predict(img)\n",
" print(pred)"
]
},
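{
"cell_type": "markdown",
"metadata": {},
"source": [
"The raw probability vectors above are hard to read. As a quick sanity check, here is a minimal\n",
"sketch (reusing the `CLASS_NAMES` and `filenames` defined above) that maps each prediction to its\n",
"most likely class name:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Map each prediction to its most likely class name and probability.\n",
"for filename in filenames:\n",
"    img = preprocess(read_from_jpegfile(filename))\n",
"    pred = model.predict(tf.expand_dims(img, axis=0))[0]  # probabilities for the 5 classes\n",
"    idx = tf.math.argmax(pred).numpy()\n",
"    print('{} -> {} (p={:.3f})'.format(\n",
"        os.path.basename(filename), CLASS_NAMES[idx], pred[idx]))"
]
},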
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Export signature that will handle bytes from client"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: export/flowers_model3/assets\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: export/flowers_model3/assets\n"
]
}
],
"source": [
"@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])\n",
"def predict_bytes(img_bytes):\n",
" input_images = tf.map_fn(\n",
" preprocess,\n",
" img_bytes,\n",
" fn_output_signature=tf.float32\n",
" )\n",
" batch_pred = model(input_images) # same as model.predict()\n",
" top_prob = tf.math.reduce_max(batch_pred, axis=[1])\n",
" pred_label_index = tf.math.argmax(batch_pred, axis=1)\n",
" pred_label = tf.gather(tf.convert_to_tensor(CLASS_NAMES), pred_label_index)\n",
" return {\n",
" 'probability': top_prob,\n",
" 'flower_type_int': pred_label_index,\n",
" 'flower_type_str': pred_label\n",
" }\n",
"\n",
"@tf.function(input_signature=[tf.TensorSpec([None,], dtype=tf.string)])\n",
"def predict_filename(filenames):\n",
" img_bytes = tf.map_fn(\n",
" tf.io.read_file,\n",
" filenames\n",
" )\n",
" result = predict_bytes(img_bytes)\n",
" result['filename'] = filenames\n",
" return result\n",
"\n",
"shutil.rmtree('export', ignore_errors=True)\n",
"os.mkdir('export')\n",
"model.save('export/flowers_model3',\n",
" signatures={\n",
" 'serving_default': predict_filename,\n",
" 'from_bytes': predict_bytes\n",
" })"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:\n",
"SignatureDef key: \"__saved_model_init_op\"\n",
"SignatureDef key: \"from_bytes\"\n",
"SignatureDef key: \"serving_default\"\n"
]
}
],
"source": [
"!saved_model_cli show --tag_set serve --dir export/flowers_model3"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The given SavedModel SignatureDef contains the following input(s):\n",
" inputs['filenames'] tensor_info:\n",
" dtype: DT_STRING\n",
" shape: (-1)\n",
" name: serving_default_filenames:0\n",
"The given SavedModel SignatureDef contains the following output(s):\n",
" outputs['filename'] tensor_info:\n",
" dtype: DT_STRING\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall_1:0\n",
" outputs['flower_type_int'] tensor_info:\n",
" dtype: DT_INT64\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall_1:1\n",
" outputs['flower_type_str'] tensor_info:\n",
" dtype: DT_STRING\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall_1:2\n",
" outputs['probability'] tensor_info:\n",
" dtype: DT_FLOAT\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall_1:3\n",
"Method name is: tensorflow/serving/predict\n"
]
}
],
"source": [
"!saved_model_cli show --tag_set serve --dir export/flowers_model3 --signature_def serving_default"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The given SavedModel SignatureDef contains the following input(s):\n",
" inputs['img_bytes'] tensor_info:\n",
" dtype: DT_STRING\n",
" shape: (-1)\n",
" name: from_bytes_img_bytes:0\n",
"The given SavedModel SignatureDef contains the following output(s):\n",
" outputs['flower_type_int'] tensor_info:\n",
" dtype: DT_INT64\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall:0\n",
" outputs['flower_type_str'] tensor_info:\n",
" dtype: DT_STRING\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall:1\n",
" outputs['probability'] tensor_info:\n",
" dtype: DT_FLOAT\n",
" shape: (-1)\n",
" name: StatefulPartitionedCall:2\n",
"Method name is: tensorflow/serving/predict\n"
]
}
],
"source": [
"!saved_model_cli show --tag_set serve --dir export/flowers_model3 --signature_def from_bytes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Send img bytes over the wire\n",
"\n",
"No need for intermediate file on GCS. Note that we are simply using Python's file reading method."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Copying gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg...\n",
"/ [1 files][ 19.4 KiB/ 19.4 KiB] \n",
"Operation completed over 1 objects/19.4 KiB. \n"
]
}
],
"source": [
"!gsutil cp gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg /tmp/test.jpg"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'probability': <tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.9947188], dtype=float32)>, 'flower_type_str': <tf.Tensor: shape=(1,), dtype=string, numpy=array([b'daisy'], dtype=object)>, 'flower_type_int': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>}\n"
]
}
],
"source": [
"with open('/tmp/test.jpg', 'rb') as ifp:\n",
" img_bytes = ifp.read()\n",
" serving_fn = tf.keras.models.load_model('./export/flowers_model3').signatures['from_bytes']\n",
" pred = serving_fn(tf.convert_to_tensor([img_bytes]))\n",
" print(pred)"
]
},
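{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can verify the filename-based `serving_default` signature the same way. This is a minimal\n",
"sketch that reuses one of the public GCS images from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Call the filename-based signature on a public GCS image.\n",
"serving_fn = tf.keras.models.load_model('./export/flowers_model3').signatures['serving_default']\n",
"pred = serving_fn(tf.convert_to_tensor([\n",
"    'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg'\n",
"]))\n",
"print(pred)"
]
},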
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy bytes-handling model to CAIP"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Copying file://./export/flowers_model3/variables/variables.data-00000-of-00001 [Content-Type=application/octet-stream]...\n",
"Copying file://./export/flowers_model3/variables/variables.index [Content-Type=application/octet-stream]...\n",
"Copying file://./export/flowers_model3/saved_model.pb [Content-Type=application/octet-stream]...\n",
"/ [3/3 files][ 10.7 MiB/ 10.7 MiB] 100% Done \n",
"Operation completed over 3 objects/10.7 MiB. \n"
]
}
],
"source": [
"%%bash\n",
"BUCKET=\"ai-analytics-solutions-mlvisionbook\" # CHANGE\n",
"gsutil -m cp -r ./export/flowers_model3 gs://${BUCKET}/flowers_model3"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deploying model bytes\n",
"Creating bytes endpoint now.\n",
"The endpoint_id is 7318683646011899904\n",
"Uploading bytes model now.\n",
"The model_id is 2990680423643742208\n",
"Deploying model now\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Waiting for operation [1561614649575604224]...\n",
".....done.\n",
"Created Vertex AI endpoint: projects/563535018348/locations/us-central1/endpoints/7318683646011899904.\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Waiting for operation [8091834109262823424]...\n",
".....done.\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Using endpoint [https://us-central1-aiplatform.googleapis.com/]\n",
"Waiting for operation [3867457658789298176]...\n",
"........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\n",
"Deployed a model to the endpoint 7318683646011899904. Id of the deployed model: 6992243041771716608.\n"
]
}
],
"source": [
"%%bash\n",
"BUCKET=\"ai-analytics-solutions-mlvisionbook\" # CHANGE\n",
"./vertex_deploy.sh \\\n",
"--endpoint_name=bytes \\\n",
"--model_name=bytes \\\n",
"--model_location=gs://${BUCKET}/flowers_model3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## IMPORTANT: CHANGE THIS CELL\n",
"\n",
"Note the endpoint ID and deployed model ID above. Set it in the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# CHANGE THESE TO REFLECT WHERE YOU DEPLOYED THE MODEL\n",
"import os\n",
"os.environ['ENDPOINT_ID'] = '7318683646011899904' # CHANGE\n",
"os.environ['MODEL_ID'] = '6992243041771716608' # CHANGE\n",
"os.environ['PROJECT'] = 'ai-analytics-solutions' # CHANGE\n",
"os.environ['BUCKET'] = 'ai-analytics-solutions-mlvisionbook' # CHANGE\n",
"os.environ['REGION'] = 'us-central1' # CHANGE"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Copying gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg...\n",
"/ [1 files][ 19.4 KiB/ 19.4 KiB] \n",
"Operation completed over 1 objects/19.4 KiB. \n",
"Copying gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg...\n",
"/ [1 files][ 34.6 KiB/ 34.6 KiB] \n",
"Operation completed over 1 objects/34.6 KiB. \n"
]
}
],
"source": [
"%%bash\n",
"gsutil cp gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg /tmp/test1.jpg\n",
"gsutil cp gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/tulips/8713397358_0505cc0176_n.jpg /tmp/test2.jpg"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note how we pass the base-64 encoded data"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"b'{\\n \"error\": {\\n \"code\": 400,\\n \"message\": \"Invalid JSON payload received. Unknown name \\\\\"signature_name\\\\\": Cannot find field.\",\\n \"status\": \"INVALID_ARGUMENT\",\\n \"details\": [\\n {\\n \"@type\": \"type.googleapis.com/google.rpc.BadRequest\",\\n \"fieldViolations\": [\\n {\\n \"description\": \"Invalid JSON payload received. Unknown name \\\\\"signature_name\\\\\": Cannot find field.\"\\n }\\n ]\\n }\\n ]\\n }\\n}\\n'\n"
]
}
],
"source": [
"# Invoke from Python.\n",
"import base64\n",
"import json\n",
"from oauth2client.client import GoogleCredentials\n",
"import requests\n",
"\n",
"PROJECT = \"ai-analytics-solutions\" # CHANGE\n",
"REGION = \"us-central1\" # make sure you have GPU/TPU quota in this region\n",
"ENDPOINT_ID = \"7318683646011899904\"\n",
"\n",
"def b64encode(filename):\n",
" with open(filename, 'rb') as ifp:\n",
" img_bytes = ifp.read()\n",
" return base64.b64encode(img_bytes)\n",
"\n",
"token = GoogleCredentials.get_application_default().get_access_token().access_token\n",
"api = \"https://{}-aiplatform.googleapis.com/v1/projects/{}/locations/{}/endpoints/{}:predict\".format(\n",
" REGION, PROJECT, REGION, ENDPOINT_ID)\n",
"headers = {\"Authorization\": \"Bearer \" + token }\n",
"data = {\n",
" \"signature_name\": \"from_bytes\", # currently bugged\n",
" \"instances\": [\n",
" {\n",
" \"img_bytes\": {\"b64\": b64encode('/tmp/test1.jpg')}\n",
" },\n",
" {\n",
" \"img_bytes\": {\"b64\": b64encode('/tmp/test2.jpg')}\n",
" },\n",
" ]\n",
"}\n",
"response = requests.post(api, json=data, headers=headers)\n",
"print(response.content)"
]
},
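{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an alternative to hand-rolled REST calls, the `google-cloud-aiplatform` client library can invoke\n",
"the endpoint. This is a sketch, assuming the library is installed and the IDs set above are valid;\n",
"it exercises `serving_default` (filenames), since the predict API does not accept a signature name:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import aiplatform\n",
"\n",
"aiplatform.init(project=PROJECT, location=REGION)\n",
"endpoint = aiplatform.Endpoint(ENDPOINT_ID)\n",
"# One instance per image; the key matches the signature's 'filenames' input (assumed row format).\n",
"response = endpoint.predict(instances=[{\n",
"    'filenames': 'gs://practical-ml-vision-book-data/flowers_5_jpeg/flower_photos/daisy/9299302012_958c70564c_n.jpg'\n",
"}])\n",
"print(response.predictions)"
]
},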
{
"cell_type": "markdown",
"metadata": {
"id": "Duu8mX3iXANE"
},
"source": [
"## License\n",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [
"5UOm2etrwYCs"
],
"name": "03a_transfer_learning.ipynb",
"provenance": [],
"toc_visible": true
},
"environment": {
"kernel": "python3",
"name": "tf2-gpu.2-6.m87",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-6:m87"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}