diff --git a/introduction_to_amazon_algorithms/image_classification_tensorflow/Amazon_TensorFlow_Image_Classification.ipynb b/introduction_to_amazon_algorithms/image_classification_tensorflow/Amazon_TensorFlow_Image_Classification.ipynb
new file mode 100644
index 0000000000..40f68adf54
--- /dev/null
+++ b/introduction_to_amazon_algorithms/image_classification_tensorflow/Amazon_TensorFlow_Image_Classification.ipynb
@@ -0,0 +1,816 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "c5b50b55",
+ "metadata": {},
+ "source": [
+ "# Introduction to SageMaker TensorFlow - Image Classification"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e718cb54",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "Welcome to [Amazon SageMaker Built-in Algorithms](https://sagemaker.readthedocs.io/en/stable/algorithms/index.html)! You can use SageMaker Built-in algorithms to solve many Machine Learning tasks through [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html). You can also use these algorithms through one-click in SageMaker Studio via [JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html).\n",
+ "\n",
+ "In this demo notebook, we demonstrate how to use the TensorFlow Image Classification algorithm. Image Classification refers to classifying an image to one of the class labels of the training dataset. We demonstrate two use cases of TensorFlow Image Classification models:\n",
+ "\n",
+ "* How to use a model pre-trained on ImageNet dataset to classify an image. [ImageNetLabels](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).\n",
+ "* How to fine-tune a pre-trained model to a custom dataset, and then run inference on the fine-tuned model.\n",
+ "\n",
+ "Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7d1939f5",
+ "metadata": {},
+ "source": [
+ "1. [Set Up](#1.-Set-Up)\n",
+ "2. [Select a pre-trained model](#2.-Select-a-pre-trained-model)\n",
+ "3. [Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model)\n",
+ " * [Retrieve Artifacts & Deploy an Endpoint](#3.1.-Retrieve-Artifacts-&-Deploy-an-Endpoint)\n",
+ " * [Download example images for inference](#3.2.-Download-example-images-for-inference)\n",
+ " * [Query endpoint and parse response](#3.3.-Query-endpoint-and-parse-response)\n",
+ " * [Clean up the endpoint](#3.4.-Clean-up-the-endpoint)\n",
+ "4. [Fine-tune the pre-trained model on a custom dataset](#4.-Fine-tune-the-pre-trained-model-on-a-custome-dataset)\n",
+ " * [Retrieve Training artifacts](#4.1.-Retrieve-Training-artifacts)\n",
+ " * [Set Training parameters](#4.2.-Set-Training-parameters)\n",
+ " * [Train with Automatic Model Tuning (HPO)](#AMT)\n",
+ " * [Start Training](#4.4.-Start-Training)\n",
+ " * [Deploy & run Inference on the fine-tuned model](#4.5.-Deploy-&-run-Inference-on-the-fine-tuned-model)\n",
+ " * [Incrementally train the fine-tuned model](#4.6.-Incrementally-train-the-fine-tuned-model)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f9f252e3",
+ "metadata": {},
+ "source": [
+ "## 1. Set Up\n",
+ "***\n",
+ "Before executing the notebook, there are some initial steps required for setup. This notebook requires latest version of sagemaker and ipywidgets.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e45065a1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!pip install sagemaker ipywidgets --upgrade --quiet"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fe18f520",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "To train and host on Amazon Sagemaker, we need to setup and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has necessary permissions, including access to your data in S3. \n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "343deffb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import sagemaker, boto3, json\n",
+ "from sagemaker.session import Session\n",
+ "\n",
+ "sagemaker_session = Session()\n",
+ "aws_role = sagemaker_session.get_caller_identity_arn()\n",
+ "aws_region = boto3.Session().region_name\n",
+ "sess = sagemaker.Session()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1a88d949",
+ "metadata": {},
+ "source": [
+ "## 2. Select a pre-trained model\n",
+ "***\n",
+ "You can continue with the default model, or can choose a different model from the dropdown generated upon running the next cell. A complete list of SageMaker pre-trained models can also be accessed at [Sagemaker pre-trained Models](https://sagemaker.readthedocs.io/en/stable/doc_utils/pretrainedmodels.html#).\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "41357d17",
+ "metadata": {
+ "jumpStartAlterations": [
+ "modelIdVersion"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "model_id, model_version = \"tensorflow-ic-imagenet-mobilenet-v2-100-224-classification-4\", \"*\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ec35e3e5",
+ "metadata": {},
+ "source": [
+ "***\n",
+ "[Optional] Select a different Sagemaker pre-trained model. Here, we download the model_manifest file from the Built-In Algorithms s3 bucket, filter-out all the Image Classification models and select a model for inference.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cb0807c3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import IPython\n",
+ "from ipywidgets import Dropdown\n",
+ "from sagemaker.jumpstart.notebook_utils import list_jumpstart_models\n",
+ "from sagemaker.jumpstart.filters import And\n",
+ "\n",
+ "# Retrieves all TensorFlow Image Classification models made available by SageMaker Built-In Algorithms.\n",
+ "filter_value = And(\"task == ic\", \"framework == tensorflow\")\n",
+ "ic_models = list_jumpstart_models(filter=filter_value)\n",
+ "\n",
+ "# display the model-ids in a dropdown, for user to select a model.\n",
+ "dropdown = Dropdown(\n",
+ " options=ic_models,\n",
+ " value=model_id,\n",
+ " description=\"SageMaker Built-In TensorFlow Image Classification Models:\",\n",
+ " style={\"description_width\": \"initial\"},\n",
+ " layout={\"width\": \"max-content\"},\n",
+ ")\n",
+ "display(IPython.display.Markdown(\"## Select a SageMaker pre-trained model from the dropdown below\"))\n",
+ "display(dropdown)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "39939c07",
+ "metadata": {},
+ "source": [
+ "## 3. Run inference on the pre-trained model\n",
+ "***\n",
+ "Using SageMaker, we can perform inference on the pre-trained model, even without fine-tuning it first on a custom dataset. For this example, that means on an input image, predicting the [class label from one of the 1000 classes of the ImageNet dataset](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "429361b2",
+ "metadata": {},
+ "source": [
+ "### 3.1. Retrieve Artifacts & Deploy an Endpoint\n",
+ "***\n",
+ "We retrieve the deploy_image_uri, deploy_source_uri, and base_model_uri for the pre-trained model. To host the pre-trained base-model, we create an instance of [`sagemaker.model.Model`](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3175cd6c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sagemaker import image_uris, model_uris, script_uris\n",
+ "from sagemaker.model import Model\n",
+ "from sagemaker.predictor import Predictor\n",
+ "from sagemaker.utils import name_from_base\n",
+ "\n",
+ "# model_version=\"*\" fetches the latest version of the model.\n",
+ "infer_model_id, infer_model_version = dropdown.value, \"*\"\n",
+ "\n",
+ "endpoint_name = name_from_base(f\"jumpstart-example-{infer_model_id}\")\n",
+ "\n",
+ "inference_instance_type = \"ml.p2.xlarge\"\n",
+ "\n",
+ "# Retrieve the inference docker container uri.\n",
+ "deploy_image_uri = image_uris.retrieve(\n",
+ " region=None,\n",
+ " framework=None,\n",
+ " image_scope=\"inference\",\n",
+ " model_id=infer_model_id,\n",
+ " model_version=infer_model_version,\n",
+ " instance_type=inference_instance_type,\n",
+ ")\n",
+ "# Retrieve the inference script uri.\n",
+ "deploy_source_uri = script_uris.retrieve(\n",
+ " model_id=infer_model_id, model_version=infer_model_version, script_scope=\"inference\"\n",
+ ")\n",
+ "# Retrieve the base model uri.\n",
+ "base_model_uri = model_uris.retrieve(\n",
+ " model_id=infer_model_id, model_version=infer_model_version, model_scope=\"inference\"\n",
+ ")\n",
+ "# Create the SageMaker model instance. Note that we need to pass Predictor class when we deploy model through Model class,\n",
+ "# for being able to run inference through the sagemaker API.\n",
+ "model = Model(\n",
+ " image_uri=deploy_image_uri,\n",
+ " source_dir=deploy_source_uri,\n",
+ " model_data=base_model_uri,\n",
+ " entry_point=\"inference.py\",\n",
+ " role=aws_role,\n",
+ " predictor_cls=Predictor,\n",
+ " name=endpoint_name,\n",
+ ")\n",
+ "# deploy the Model.\n",
+ "base_model_predictor = model.deploy(\n",
+ " initial_instance_count=1,\n",
+ " instance_type=inference_instance_type,\n",
+ " endpoint_name=endpoint_name,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bb880a6d",
+ "metadata": {},
+ "source": [
+ "### 3.2. Download example images for inference\n",
+ "***\n",
+ "We download example images from a public S3 bucket.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bc773407",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "s3_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n",
+ "key_prefix = \"inference-notebook-assets\"\n",
+ "\n",
+ "\n",
+ "def download_from_s3(images):\n",
+ " for filename, image_key in images.items():\n",
+ " boto3.client(\"s3\").download_file(s3_bucket, f\"{key_prefix}/{image_key}\", filename)\n",
+ "\n",
+ "\n",
+ "images = {\"img1.jpg\": \"cat.jpg\", \"img2.jpg\": \"dog.jpg\"}\n",
+ "download_from_s3(images)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7ff3fb64",
+ "metadata": {},
+ "source": [
+ "### 3.3. Query endpoint and parse response\n",
+ "***\n",
+ "Input to the endpoint is a single image in binary format. Response from the endpoint is a dictionary containing the top-1 predicted class label, and a list of class probabilities.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e6627767",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.core.display import HTML\n",
+ "\n",
+ "\n",
+ "def predict_top_k_labels(probabilities, labels, k):\n",
+ " topk_prediction_ids = sorted(\n",
+ " range(len(probabilities)), key=lambda index: probabilities[index], reverse=True\n",
+ " )[:k]\n",
+ " topk_class_labels = \", \".join([labels[id] for id in topk_prediction_ids])\n",
+ " return topk_class_labels\n",
+ "\n",
+ "\n",
+ "for image_filename in images.keys():\n",
+ " with open(image_filename, \"rb\") as file:\n",
+ " img = file.read()\n",
+ " query_response = base_model_predictor.predict(\n",
+ " img, {\"ContentType\": \"application/x-image\", \"Accept\": \"application/json;verbose\"}\n",
+ " )\n",
+ " model_predictions = json.loads(query_response)\n",
+ " labels, probabilities = model_predictions[\"labels\"], model_predictions[\"probabilities\"]\n",
+ " top5_class_labels = predict_top_k_labels(probabilities, labels, 5)\n",
+ " display(\n",
+ " HTML(\n",
+ " f''\n",
+ " f\"Top-5 predictions: {top5_class_labels} \"\n",
+ " )\n",
+ " )"
+ ]
+ },
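+ {
+ "cell_type": "markdown",
+ "id": "d41a77c2",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "[Optional] `Predictor.predict` wraps the low-level SageMaker runtime API. As an illustration only (a minimal sketch, not required by the rest of this notebook), the same endpoint can also be queried directly through `boto3`, using the same `ContentType` and `Accept` headers as above.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a3f51b88",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# [Optional] A minimal sketch: query the endpoint through the low-level runtime client.\n",
+ "runtime_client = boto3.client(\"sagemaker-runtime\")\n",
+ "\n",
+ "with open(\"img1.jpg\", \"rb\") as file:\n",
+ "    payload = file.read()\n",
+ "\n",
+ "response = runtime_client.invoke_endpoint(\n",
+ "    EndpointName=endpoint_name,\n",
+ "    ContentType=\"application/x-image\",\n",
+ "    Accept=\"application/json;verbose\",\n",
+ "    Body=payload,\n",
+ ")\n",
+ "raw_predictions = json.loads(response[\"Body\"].read())\n",
+ "print(raw_predictions[\"predicted_label\"])"
+ ]
+ },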
+ {
+ "cell_type": "markdown",
+ "id": "08b7fa6f",
+ "metadata": {},
+ "source": [
+ "### 3.4. Clean up the endpoint"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "36c93e25",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Delete the SageMaker endpoint and the attached resources\n",
+ "base_model_predictor.delete_model()\n",
+ "base_model_predictor.delete_endpoint()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cf1c3ed4",
+ "metadata": {},
+ "source": [
+ "## 4. Fine-tune the pre-trained model on a custom dataset\n",
+ "***\n",
+ "Previously, we saw how to run inference on a pre-trained model. Next, we discuss how a model can be fine-tuned to a custom dataset with any number of classes. \n",
+ "\n",
+ "The model available for fine-tuning attaches a classification layer to the corresponding feature extractor model available on TensorFlow/PyTorch hub, and initializes the layer parameters to random values. The output dimension of the classification layer is determined based on the number of classes in the input data. The fine-tuning step fine-tunes the model parameters. The objective is to minimize classification error on the input data. The model returned by fine-tuning can be further deployed for inference. Below are the instructions for how the training data should be formatted for input to the model.\n",
+ "\n",
+ "- **Input:** A directory with as many sub-directories as the number of classes. \n",
+ " - Each sub-directory should have images belonging to that class in .jpg format. \n",
+ "- **Output:** A trained model that can be deployed for inference. \n",
+ " - A label mapping file is saved along with the trained model file on the s3 bucket. \n",
+ " \n",
+ "The input directory should look like below if the training data contains images from two classes: roses and dandelion. The s3 path should look like `s3://bucket_name/input_directory/`. Note the trailing `/` is required. The names of the folders and 'roses', 'dandelion', and the .jpg filenames can be anything. The label mapping file that is saved along with the trained model on the s3 bucket maps the folder names 'roses' and 'dandelion' to the indices in the list of class probabilities the model outputs. The mapping follows alphabetical ordering of the folder names. In the example below, index 0 in the model output list would correspond to 'dandelion' and index 1 would correspond to 'roses'.\n",
+ "\n",
+ " input_directory\n",
+ " |--roses\n",
+ " |--abc.jpg\n",
+ " |--def.jpg\n",
+ " |--dandelion\n",
+ " |--ghi.jpg\n",
+ " |--jkl.jpg\n",
+ "\n",
+ "We provide tf_flowers dataset as a default dataset for fine-tuning the model. tf_flower comprises images of five types of flowers. The dataset has been downloaded from [TensorFlow](https://www.tensorflow.org/datasets/catalog/tf_flowers) under [Apache 2.0 License](https://jumpstart-cache-prod-us-west-2.s3-us-west-2.amazonaws.com/licenses/Apache-License/LICENSE-2.0.txt).\n",
+ "***"
+ ]
+ },
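+ {
+ "cell_type": "markdown",
+ "id": "f2c6a903",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "The cell below is a minimal, illustrative sketch of the two points above; it is not executed by the training job, whose actual logic lives in `transfer_learning.py`. The commented-out Keras snippet shows the general pattern of attaching a randomly initialized classification layer to a TensorFlow Hub feature extractor (the Hub URL is an example), and the runnable part shows how the alphabetical ordering of the class folders maps output indices to labels.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b7e09d15",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative sketch of the fine-tuning setup (requires tensorflow and tensorflow_hub):\n",
+ "# import tensorflow as tf\n",
+ "# import tensorflow_hub as hub\n",
+ "#\n",
+ "# num_classes = 2  # e.g., 'dandelion' and 'roses'\n",
+ "# model = tf.keras.Sequential(\n",
+ "#     [\n",
+ "#         hub.KerasLayer(\n",
+ "#             \"https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4\",\n",
+ "#             trainable=True,  # also fine-tune the feature extractor parameters\n",
+ "#         ),\n",
+ "#         tf.keras.layers.Dense(num_classes),  # randomly initialized classification layer\n",
+ "#     ]\n",
+ "# )\n",
+ "\n",
+ "# The label mapping follows alphabetical ordering of the class folder names:\n",
+ "sorted_class_folders = sorted([\"roses\", \"dandelion\"])  # -> ['dandelion', 'roses']\n",
+ "example_probabilities = [0.9, 0.1]  # hypothetical model output\n",
+ "predicted_index = max(\n",
+ "    range(len(example_probabilities)), key=example_probabilities.__getitem__\n",
+ ")\n",
+ "print(sorted_class_folders[predicted_index])  # -> dandelion"
+ ]
+ },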
+ {
+ "cell_type": "markdown",
+ "id": "d38e20c6",
+ "metadata": {},
+ "source": [
+ "### 4.1. Retrieve Training artifacts\n",
+ "***\n",
+ "Here, for the selected model, we retrieve the training docker container, the training algorithm source, the pre-trained base model, and a python dictionary of the training hyper-parameters that the algorithm accepts with their default values. Note that the model_version=\"*\" fetches the lates model. Also, we do need to specify the training_instance_type to fetch train_image_uri.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e7ef93bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sagemaker import image_uris, model_uris, script_uris, hyperparameters\n",
+ "\n",
+ "model_id, model_version = dropdown.value, \"*\"\n",
+ "training_instance_type = \"ml.p3.2xlarge\"\n",
+ "\n",
+ "# Retrieve the docker image\n",
+ "train_image_uri = image_uris.retrieve(\n",
+ " region=None,\n",
+ " framework=None,\n",
+ " model_id=model_id,\n",
+ " model_version=model_version,\n",
+ " image_scope=\"training\",\n",
+ " instance_type=training_instance_type,\n",
+ ")\n",
+ "# Retrieve the training script\n",
+ "train_source_uri = script_uris.retrieve(\n",
+ " model_id=model_id, model_version=model_version, script_scope=\"training\"\n",
+ ")\n",
+ "# Retrieve the pre-trained model tarball to further fine-tune\n",
+ "train_model_uri = model_uris.retrieve(\n",
+ " model_id=model_id, model_version=model_version, model_scope=\"training\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "522d8fa6",
+ "metadata": {},
+ "source": [
+ "### 4.2. Set Training parameters\n",
+ "***\n",
+ "Now that we are done with all the setup that is needed, we are ready to fine-tune our Image Classification model. To begin, let us create a [``sageMaker.estimator.Estimator``](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job. \n",
+ "\n",
+ "There are two kinds of parameters that need to be set for training. \n",
+ "\n",
+ "The first one are the parameters for the training job. These include: (i) Training data path. This is S3 folder in which the input data is stored, (ii) Output path: This the s3 folder in which the training output is stored. (iii) Training instance type: This indicates the type of machine on which to run the training. Typically, we use GPU instances for these training. We defined the training instance type above to fetch the correct train_image_uri. \n",
+ "\n",
+ "The second set of parameters are algorithm specific training hyper-parameters.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d2b1f26a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sample training data is available in this bucket\n",
+ "training_data_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n",
+ "training_data_prefix = \"training-datasets/tf_flowers/\"\n",
+ "\n",
+ "training_dataset_s3_path = f\"s3://{training_data_bucket}/{training_data_prefix}\"\n",
+ "\n",
+ "output_bucket = sess.default_bucket()\n",
+ "output_prefix = \"jumpstart-example-ic-training\"\n",
+ "\n",
+ "s3_output_location = f\"s3://{output_bucket}/{output_prefix}/output\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "abf366a1",
+ "metadata": {},
+ "source": [
+ "***\n",
+ "For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2ce3e271",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sagemaker import hyperparameters\n",
+ "\n",
+ "# Retrieve the default hyper-parameters for fine-tuning the model\n",
+ "hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)\n",
+ "\n",
+ "# [Optional] Override default hyperparameters with custom values\n",
+ "hyperparameters[\"epochs\"] = \"5\"\n",
+ "print(hyperparameters)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ccb5352",
+ "metadata": {},
+ "source": [
+ "### 4.3. Train with Automatic Model Tuning ([HPO](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)) \n",
+ "***\n",
+ "Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. We will use a [HyperparameterTuner](https://sagemaker.readthedocs.io/en/stable/api/training/tuner.html) object to interact with Amazon SageMaker hyperparameter tuning APIs.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "812e2197",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sagemaker.tuner import ContinuousParameter\n",
+ "\n",
+ "# Use AMT for tuning and selecting the best model\n",
+ "use_amt = False\n",
+ "\n",
+ "# Define objective metric per framework, based on which the best model will be selected.\n",
+ "metric_definitions_per_model = {\n",
+ " \"tensorflow\": {\n",
+ " \"metrics\": [{\"Name\": \"val_accuracy\", \"Regex\": \"val_accuracy: ([0-9\\\\.]+)\"}],\n",
+ " \"type\": \"Maximize\",\n",
+ " },\n",
+ " \"pytorch\": {\n",
+ " \"metrics\": [{\"Name\": \"val_accuracy\", \"Regex\": \"val Acc: ([0-9\\\\.]+)\"}],\n",
+ " \"type\": \"Maximize\",\n",
+ " },\n",
+ "}\n",
+ "\n",
+ "# You can select from the hyperparameters supported by the model, and configure ranges of values to be searched for training the optimal model.(https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-ranges.html)\n",
+ "hyperparameter_ranges = {\n",
+ " \"adam-learning-rate\": ContinuousParameter(0.0001, 0.1, scaling_type=\"Logarithmic\")\n",
+ "}\n",
+ "\n",
+ "# Increase the total number of training jobs run by AMT, for increased accuracy (and training time).\n",
+ "max_jobs = 6\n",
+ "# Change parallel training jobs run by AMT to reduce total training time, constrained by your account limits.\n",
+ "# if max_jobs=max_parallel_jobs then Bayesian search turns to Random.\n",
+ "max_parallel_jobs = 2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "59a61921",
+ "metadata": {},
+ "source": [
+ "### 4.4. Start Training\n",
+ "***\n",
+ "We start by creating the estimator object with all the required assets and then launch the training job.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f3b68607",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sagemaker.estimator import Estimator\n",
+ "from sagemaker.utils import name_from_base\n",
+ "from sagemaker.tuner import HyperparameterTuner\n",
+ "\n",
+ "training_job_name = name_from_base(f\"jumpstart-example-{model_id}-transfer-learning\")\n",
+ "\n",
+ "# Create SageMaker Estimator instance\n",
+ "ic_estimator = Estimator(\n",
+ " role=aws_role,\n",
+ " image_uri=train_image_uri,\n",
+ " source_dir=train_source_uri,\n",
+ " model_uri=train_model_uri,\n",
+ " entry_point=\"transfer_learning.py\",\n",
+ " instance_count=1,\n",
+ " instance_type=training_instance_type,\n",
+ " max_run=360000,\n",
+ " hyperparameters=hyperparameters,\n",
+ " output_path=s3_output_location,\n",
+ " base_job_name=training_job_name,\n",
+ ")\n",
+ "\n",
+ "if use_amt:\n",
+ " metric_definitions = next(\n",
+ " value for key, value in metric_definitions_per_model.items() if model_id.startswith(key)\n",
+ " )\n",
+ "\n",
+ " hp_tuner = HyperparameterTuner(\n",
+ " ic_estimator,\n",
+ " metric_definitions[\"metrics\"][0][\"Name\"],\n",
+ " hyperparameter_ranges,\n",
+ " metric_definitions[\"metrics\"],\n",
+ " max_jobs=max_jobs,\n",
+ " max_parallel_jobs=max_parallel_jobs,\n",
+ " objective_type=metric_definitions[\"type\"],\n",
+ " base_tuning_job_name=training_job_name,\n",
+ " )\n",
+ "\n",
+ " # Launch a SageMaker Tuning job to search for the best hyperparameters\n",
+ " hp_tuner.fit({\"training\": training_dataset_s3_path})\n",
+ "else:\n",
+ " # Launch a SageMaker Training job by passing s3 path of the training data\n",
+ " ic_estimator.fit({\"training\": training_dataset_s3_path}, logs=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "90304e0c",
+ "metadata": {},
+ "source": [
+ "## 4.5. Deploy & run Inference on the fine-tuned model\n",
+ "***\n",
+ "A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label of an image. We follow the same steps as in the [Section 3 - Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model). We start by retrieving the artifacts for deploying an endpoint. However, instead of base_predictor, we deploy the `ic_estimator` that we fine-tuned.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1e1b318a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "inference_instance_type = \"ml.p2.xlarge\"\n",
+ "\n",
+ "# Retrieve the inference docker container uri\n",
+ "deploy_image_uri = image_uris.retrieve(\n",
+ " region=None,\n",
+ " framework=None,\n",
+ " image_scope=\"inference\",\n",
+ " model_id=model_id,\n",
+ " model_version=model_version,\n",
+ " instance_type=inference_instance_type,\n",
+ ")\n",
+ "# Retrieve the inference script uri\n",
+ "deploy_source_uri = script_uris.retrieve(\n",
+ " model_id=model_id, model_version=model_version, script_scope=\"inference\"\n",
+ ")\n",
+ "\n",
+ "endpoint_name = name_from_base(f\"jumpstart-example-FT-{model_id}-\")\n",
+ "\n",
+ "# Use the estimator from the previous step to deploy to a SageMaker endpoint\n",
+ "finetuned_predictor = (hp_tuner if use_amt else ic_estimator).deploy(\n",
+ " initial_instance_count=1,\n",
+ " instance_type=inference_instance_type,\n",
+ " entry_point=\"inference.py\",\n",
+ " image_uri=deploy_image_uri,\n",
+ " source_dir=deploy_source_uri,\n",
+ " endpoint_name=endpoint_name,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1680c7b9",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "Next, we download example images of a rose and a sunflower from the S3 bucket for inference.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f0a8d503",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "s3_bucket = f\"jumpstart-cache-prod-{aws_region}\"\n",
+ "key_prefix = \"training-datasets/tf_flowers\"\n",
+ "\n",
+ "\n",
+ "def download_from_s3(images):\n",
+ " for filename, image_key in images.items():\n",
+ " boto3.client(\"s3\").download_file(s3_bucket, f\"{key_prefix}/{image_key}\", filename)\n",
+ "\n",
+ "\n",
+ "flower_images = {\n",
+ " \"img1.jpg\": \"roses/10503217854_e66a804309.jpg\",\n",
+ " \"img2.jpg\": \"sunflowers/1008566138_6927679c8a.jpg\",\n",
+ "}\n",
+ "download_from_s3(flower_images)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "006165b6",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "Next, we query the fine-tuned model, parse the response and display the predictions.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1bf49f4d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.core.display import HTML\n",
+ "\n",
+ "for image_filename in flower_images.keys():\n",
+ " with open(image_filename, \"rb\") as file:\n",
+ " img = file.read()\n",
+ " query_response = finetuned_predictor.predict(\n",
+ " img, {\"ContentType\": \"application/x-image\", \"Accept\": \"application/json;verbose\"}\n",
+ " )\n",
+ " model_predictions = json.loads(query_response)\n",
+ " predicted_label = model_predictions[\"predicted_label\"]\n",
+ " display(\n",
+ " HTML(\n",
+ " f''\n",
+ " f\"Predicted Label: {predicted_label}\"\n",
+ " )\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cda81d4d",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "Next, we clean up the deployed endpoint.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7f58f448",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Delete the SageMaker endpoint and the attached resources\n",
+ "finetuned_predictor.delete_model()\n",
+ "finetuned_predictor.delete_endpoint()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c70b96df",
+ "metadata": {},
+ "source": [
+ "## 4.6. Incrementally train the fine-tuned model\n",
+ "\n",
+ "***\n",
+ "Incremental training allows you to train a new model using an expanded dataset that contains an underlying pattern that was not accounted for in the previous training and which resulted in poor model performance. You can use the artifacts from an existing model and use an expanded dataset to train a new model. Incremental training saves both time and resources as you don’t need to retrain a model from scratch.\n",
+ "\n",
+ "One may use any dataset (old or new) as long as the dataset format remain the same (set of classes). Incremental training step is similar to the finetuning step discussed above with the following difference: In fine-tuning above, we start with a pre-trained model whereas in incremental training, we start with an existing fine-tuned model.\n",
+ "***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b716544",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Identify the previously trained model path based on the output location where artifacts are stored previously and the training job name.\n",
+ "\n",
+ "if use_amt: # If using amt, select the model for the best training job.\n",
+ " sage_client = boto3.Session().client(\"sagemaker\")\n",
+ " tuning_job_result = sage_client.describe_hyper_parameter_tuning_job(\n",
+ " HyperParameterTuningJobName=hp_tuner._current_job_name\n",
+ " )\n",
+ " last_training_job_name = tuning_job_result[\"BestTrainingJob\"][\"TrainingJobName\"]\n",
+ "else:\n",
+ " last_training_job_name = ic_estimator._current_job_name\n",
+ "\n",
+ "last_trained_model_path = f\"{s3_output_location}/{last_training_job_name}/output/model.tar.gz\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0f2b7c2a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "incremental_train_output_prefix = \"jumpstart-example-ic-incremental-training\"\n",
+ "\n",
+ "incremental_s3_output_location = f\"s3://{output_bucket}/{incremental_train_output_prefix}/output\"\n",
+ "\n",
+ "incremental_training_job_name = name_from_base(f\"jumpstart-example-{model_id}-incremental-training\")\n",
+ "\n",
+ "incremental_train_estimator = Estimator(\n",
+ " role=aws_role,\n",
+ " image_uri=train_image_uri,\n",
+ " source_dir=train_source_uri,\n",
+ " model_uri=last_trained_model_path,\n",
+ " entry_point=\"transfer_learning.py\",\n",
+ " instance_count=1,\n",
+ " instance_type=training_instance_type,\n",
+ " max_run=360000,\n",
+ " hyperparameters=hyperparameters,\n",
+ " output_path=incremental_s3_output_location,\n",
+ " base_job_name=incremental_training_job_name,\n",
+ ")\n",
+ "\n",
+ "incremental_train_estimator.fit({\"training\": training_dataset_s3_path}, logs=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a54aa7a5",
+ "metadata": {},
+ "source": [
+ "Once trained, we can use the same steps as in [Deploy & run Inference on the fine-tuned model](#4.5.-Deploy-&-run-Inference-on-the-fine-tuned-model) to deploy the model."
+ ]
+ },
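+ {
+ "cell_type": "markdown",
+ "id": "e5d82c41",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "As a minimal sketch of those steps (assuming the inference artifacts `deploy_image_uri` and `deploy_source_uri` retrieved in Section 4.5 are still in scope), the incrementally trained estimator can be deployed as follows.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c8a31f6e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch: deploy the incrementally trained model, mirroring Section 4.5.\n",
+ "incremental_endpoint_name = name_from_base(f\"jumpstart-example-IT-{model_id}-\")\n",
+ "\n",
+ "incremental_predictor = incremental_train_estimator.deploy(\n",
+ "    initial_instance_count=1,\n",
+ "    instance_type=inference_instance_type,\n",
+ "    entry_point=\"inference.py\",\n",
+ "    image_uri=deploy_image_uri,\n",
+ "    source_dir=deploy_source_uri,\n",
+ "    endpoint_name=incremental_endpoint_name,\n",
+ ")\n",
+ "\n",
+ "# Query the endpoint as in Section 4.5, then clean up when done:\n",
+ "# incremental_predictor.delete_model()\n",
+ "# incremental_predictor.delete_endpoint()"
+ ]
+ }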
+ ],
+ "metadata": {
+ "instance_type": "ml.t3.medium",
+ "kernelspec": {
+ "display_name": "conda_python3",
+ "language": "python",
+ "name": "conda_python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
\ No newline at end of file
diff --git a/introduction_to_amazon_algorithms/image_classification_tensorflow/README.md b/introduction_to_amazon_algorithms/image_classification_tensorflow/README.md
new file mode 100644
index 0000000000..600c76de0f
--- /dev/null
+++ b/introduction_to_amazon_algorithms/image_classification_tensorflow/README.md
@@ -0,0 +1,2 @@
+### SageMaker TensorFlow Image Classification Training & Deployment
+The notebook `Amazon_TensorFlow_Image_Classification.ipynb` demonstrates how to fine-tune and deploy a pre-trained image classification model using the SageMaker API. It shows how to select a pre-trained TensorFlow image classification model and fine-tune it on an example dataset containing raw .jpg/.png images, while varying training hyperparameters such as learning rate, batch size, and number of epochs. Automatic Model Tuning (AMT) is used to search for the best hyperparameters. Once training is complete, the notebook shows how to host the trained model for inference. It also shows how to host the pre-trained model as is, without first fine-tuning it.