diff --git a/README.md b/README.md
index edbc3c21e1..0ce3ac3958 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,6 @@ These examples provide a gentle introduction to machine learning concepts as the
- [Targeted Direct Marketing](introduction_to_applying_machine_learning/xgboost_direct_marketing) predicts potential customers that are most likely to convert based on customer and aggregate level metrics, using Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost).
- [Predicting Customer Churn](introduction_to_applying_machine_learning/xgboost_customer_churn) uses customer interaction and service usage data to find those most likely to churn, and then walks through the cost/benefit trade-offs of providing retention incentives. This uses Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost) to create a highly predictive model.
-- [Time-series Forecasting](introduction_to_applying_machine_learning/linear_time_series_forecast) generates a forecast for topline product demand using Amazon SageMaker's Linear Learner algorithm.
- [Cancer Prediction](introduction_to_applying_machine_learning/breast_cancer_prediction) predicts breast cancer based on features derived from images, using Amazon SageMaker's Linear Learner algorithm.
- [Ensembling](introduction_to_applying_machine_learning/ensemble_modeling) predicts income using two Amazon SageMaker models to show the advantages of ensembling.
- [Video Game Sales](introduction_to_applying_machine_learning/video_game_sales) develops a binary prediction model for the success of video games based on review scores.
diff --git a/advanced_functionality/pytorch_extending_our_containers/pytorch_extending_our_containers.ipynb b/advanced_functionality/pytorch_extending_our_containers/pytorch_extending_our_containers.ipynb
index 113ba85f1f..c9a7717378 100644
--- a/advanced_functionality/pytorch_extending_our_containers/pytorch_extending_our_containers.ipynb
+++ b/advanced_functionality/pytorch_extending_our_containers/pytorch_extending_our_containers.ipynb
@@ -77,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
+ "## Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
"\n",
"### An overview of Docker\n",
"\n",
@@ -517,7 +517,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 2: Training and Hosting your Algorithm in Amazon SageMaker\n",
+ "## Part 2: Training and Hosting your Algorithm in Amazon SageMaker\n",
"Once you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above.\n",
"\n",
"## Set up the environment\n",
@@ -684,7 +684,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Reference\n",
+ "## Reference\n",
"- [How Amazon SageMaker interacts with your Docker container for training](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html)\n",
"- [How Amazon SageMaker interacts with your Docker container for inference](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html)\n",
"- [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html)\n",
diff --git a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
index 0e91a813ae..9fab0f8d5b 100644
--- a/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
+++ b/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
@@ -77,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
+ "## Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
"\n",
"### An overview of Docker\n",
"\n",
@@ -303,7 +303,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 2: Using your Algorithm in Amazon SageMaker\n",
+ "## Part 2: Using your Algorithm in Amazon SageMaker\n",
"\n",
"Once you have your container packaged, you can use it to train models and use the model for hosting or batch transforms. Let's do that with the algorithm we made above.\n",
"\n",
diff --git a/advanced_functionality/search/ml_experiment_management_using_search.ipynb b/advanced_functionality/search/ml_experiment_management_using_search.ipynb
index c7834e34a3..8bfdfd8db6 100644
--- a/advanced_functionality/search/ml_experiment_management_using_search.ipynb
+++ b/advanced_functionality/search/ml_experiment_management_using_search.ipynb
@@ -35,7 +35,7 @@
"tags": []
},
"source": [
- "# Introduction\n",
+ "## Introduction\n",
"\n",
"Welcome to our example introducing Amazon SageMaker Search! Amazon SageMaker Search lets you quickly find and evaluate the most relevant model training runs from potentially hundreds and thousands of your Amazon SageMaker model training jobs.\n",
"Developing a machine learning model requires continuous experimentation, trying new learning algorithms and tuning hyper parameters, all the while observing the impact of such changes on model performance and accuracy. This iterative exercise often leads to explosion of hundreds of model training experiments and model versions, slowing down the convergence and discovery of “winning” model. In addition, the information explosion makes it very hard down the line to trace back the lineage of a model version i.e. the unique combination of datasets, algorithms and parameters that brewed that model in the first place. \n",
@@ -292,7 +292,7 @@
"tags": []
},
"source": [
- "# Training the linear model\n",
+ "## Training the linear model\n",
"Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. First, let's specify our algorithm container. More details on algorithm containers can be found in [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html)."
]
},
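As a minimal sketch of what this container lookup typically looks like with the v2 SageMaker Python SDK (the Linear Learner algorithm name matches the model trained in this notebook; treat the exact call as an assumption if you are on an older SDK):

```python
import boto3
from sagemaker import image_uris

# Resolve the built-in Linear Learner image for the current region;
# "1" is the stable version label for first-party algorithm images.
region = boto3.Session().region_name
container = image_uris.retrieve(framework="linear-learner", region=region, version="1")
print(container)
```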
@@ -457,7 +457,7 @@
"tags": []
},
"source": [
- "# Use Amazon SageMaker Search to organize and evaluate experiments\n",
+ "## Use Amazon SageMaker Search to organize and evaluate experiments\n",
"Usually you will experiment with tuning multiple hyperparameters or even try new learning algorithms and training datasets resulting in potentially hundreds of model training runs and model versions. However, for the sake of simplicity, we are only tuning mini_batch_size in this example, trying only three different values resulting in as many model versions. Now we will use [Search](https://docs.aws.amazon.com/sagemaker/latest/dg/search.html) to **group together** the three model training runs and **evaluate** the best performing model by ranking and comparing them on a metric of our choice. \n",
"\n",
"**For grouping** the relevant model training runs together, we will search the model training jobs by the unique label or tag that we have been using as a tracking label to track our experiments. \n",
@@ -556,7 +556,7 @@
"tags": []
},
"source": [
- "# Set up hosting for the model\n",
+ "## Set up hosting for the model\n",
"Now that we've found our best performing model (in this example the one with mini_batch_size=100), we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow out to make predictions (or inference) from the model dyanamically."
]
},
@@ -601,7 +601,7 @@
"tags": []
},
"source": [
- "# Tracing the lineage of a model starting from an endpoint\n",
+ "## Tracing the lineage of a model starting from an endpoint\n",
"Now we will present an example of how you can use the Amazon SageMaker Search to trace the antecedents of a model deployed at an endpoint i.e. unique combination of algorithms, datasets, and parameters that brewed the model in first place."
]
},
diff --git a/advanced_functionality/tensorflow_bring_your_own/tensorflow_bring_your_own.ipynb b/advanced_functionality/tensorflow_bring_your_own/tensorflow_bring_your_own.ipynb
index 5a34064b57..16899555fc 100644
--- a/advanced_functionality/tensorflow_bring_your_own/tensorflow_bring_your_own.ipynb
+++ b/advanced_functionality/tensorflow_bring_your_own/tensorflow_bring_your_own.ipynb
@@ -72,7 +72,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
+ "## Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
"\n",
"### An overview of Docker\n",
"\n",
@@ -449,7 +449,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 2: Training and Hosting your Algorithm in Amazon SageMaker\n",
+ "## Part 2: Training and Hosting your Algorithm in Amazon SageMaker\n",
"Once you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above.\n",
"\n",
"## Set up the environment\n",
@@ -608,7 +608,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Reference\n",
+ "## Reference\n",
"- [How Amazon SageMaker interacts with your Docker container for training](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html)\n",
"- [How Amazon SageMaker interacts with your Docker container for inference](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html)\n",
"- [CIFAR-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html)\n",
diff --git a/autopilot/custom-feature-selection/Feature_selection_autopilot.ipynb b/autopilot/custom-feature-selection/Feature_selection_autopilot.ipynb
index 45aa1b4801..f0beb3326e 100644
--- a/autopilot/custom-feature-selection/Feature_selection_autopilot.ipynb
+++ b/autopilot/custom-feature-selection/Feature_selection_autopilot.ipynb
@@ -10,7 +10,7 @@
"In some cases, customer wants to have the flexibility to bring custom data processing code to SageMaker Autopilot. For example, customer might have datasets with large number of independent variables. Customer would like to have a custom feature selection step to remove irrelevant variables first. The resulted smaller dataset is then used to launch SageMaker Autopilot job. Customer would also like to include both the custom processing code and models from SageMaker Autopilot for easy deployment—either on a real-time endpoint or for batch processing. We will demonstrate how to achieve this in this notebook. \n",
"\n",
"\n",
- "### Table of contents\n",
+ "## Table of contents\n",
"* [Setup](#setup)\n",
" * [Generate dataset](#data_gene)\n",
" * [Upload data to S3](#upload)\n",
@@ -29,7 +29,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Setup "
+ "## Setup "
]
},
{
@@ -139,7 +139,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Feature Selection \n",
+ "## Feature Selection \n",
"\n",
"We use Scikit-learn on Sagemaker `SKLearn` Estimator with a feature selection script as an entry point. The script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:\n",
"\n",
@@ -439,7 +439,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Autopilot "
+ "## Autopilot "
]
},
{
@@ -652,7 +652,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Serial Inference Pipeline that combines feature selection and autopilot \n"
+ "## Serial Inference Pipeline that combines feature selection and autopilot \n"
]
},
{
diff --git a/aws_marketplace/creating_marketplace_products/algorithms/Bring_Your_Own-Creating_Algorithm_and_Model_Package.ipynb b/aws_marketplace/creating_marketplace_products/algorithms/Bring_Your_Own-Creating_Algorithm_and_Model_Package.ipynb
index 7b830dc953..f2b627bfba 100644
--- a/aws_marketplace/creating_marketplace_products/algorithms/Bring_Your_Own-Creating_Algorithm_and_Model_Package.ipynb
+++ b/aws_marketplace/creating_marketplace_products/algorithms/Bring_Your_Own-Creating_Algorithm_and_Model_Package.ipynb
@@ -77,7 +77,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
+ "## Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker\n",
"\n",
"### An overview of Docker\n",
"\n",
@@ -286,7 +286,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance\n",
+ "### Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance\n",
"\n",
"While you're first packaging an algorithm use with Amazon SageMaker, you probably want to test it yourself to make sure it's working right. In the directory `container/local_test`, there is a framework for doing this. It includes three shell scripts for running and using the container and a directory structure that mimics the one outlined above.\n",
"\n",
@@ -303,11 +303,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 2: Training, Batch Inference and Hosting your Algorithm in Amazon SageMaker\n",
+ "## Part 2: Training, Batch Inference and Hosting your Algorithm in Amazon SageMaker\n",
"\n",
"Once you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above.\n",
"\n",
- "## Set up the environment\n",
+ "### Set up the environment\n",
"\n",
"Here we specify a bucket to use and the role that will be used for working with Amazon SageMaker."
]
@@ -333,7 +333,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Create the session\n",
+ "### Create the session\n",
"\n",
"The session remembers our connection parameters to Amazon SageMaker. We'll use it to perform all of our SageMaker operations."
]
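As a minimal sketch of that cell, assuming the v2 SageMaker Python SDK:

```python
import sagemaker

# The Session captures region and credentials, and exposes a default S3
# bucket that later cells can use for data and model artifacts.
sess = sagemaker.Session()
print(sess.boto_region_name)
print(sess.default_bucket())
```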
@@ -353,7 +353,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Upload the data for training\n",
+ "### Upload the data for training\n",
"\n",
"When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using some the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included. \n",
"\n",
@@ -376,7 +376,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Create an estimator and fit the model\n",
+ "### Create an estimator and fit the model\n",
"\n",
"In order to use Amazon SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:\n",
"\n",
@@ -422,7 +422,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Batch Transform Job\n",
+ "### Batch Transform Job\n",
"\n",
"Now let's use the model built to run a batch inference job and verify it works.\n"
]
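A sketch of that batch inference run, reusing the fitted `estimator` from the sketch above; the S3 paths are hypothetical:

```python
# Create a Transformer from the fitted estimator and run it over a CSV input
# in S3, splitting the file into one record per line.
transformer = estimator.transformer(
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path="s3://my-bucket/byo-algorithm/batch-output",  # hypothetical
)
transformer.transform(
    "s3://my-bucket/byo-algorithm/batch-input/iris_test.csv",  # hypothetical
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()
```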
@@ -431,7 +431,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Batch Transform Input Preparation\n",
+ "#### Batch Transform Input Preparation\n",
"\n",
"The snippet below is removing the \"label\" column (column indexed at 0) and retaining the rest to be batch transform's input. \n",
"\n",
@@ -463,7 +463,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Run Batch Transform\n",
+ "#### Run Batch Transform\n",
"\n",
"Now that our batch transform input is setup, we run the transformation job next"
]
@@ -485,7 +485,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Inspect the Batch Transform Output in S3"
+ "##### Inspect the Batch Transform Output in S3"
]
},
{
@@ -511,7 +511,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Deploy the model\n",
+ "### Deploy the model\n",
"\n",
"Deploying the model to Amazon SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count, instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint."
]
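A sketch of that `deploy` call, again reusing the fitted `estimator` from the earlier sketch; the CSV serializer matches the payload format used here, and the sample row is a made-up Iris record:

```python
from sagemaker.serializers import CSVSerializer

# Stand up a real-time endpoint behind the fitted model and send one
# feature row to it.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    serializer=CSVSerializer(),
)
print(predictor.predict("5.1,3.5,1.4,0.2"))  # hypothetical Iris measurements
```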
@@ -532,7 +532,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Choose some data and use it for a prediction\n",
+ "#### Choose some data and use it for a prediction\n",
"\n",
"In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works."
]
@@ -576,7 +576,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Cleanup Endpoint\n",
+ "#### Cleanup Endpoint\n",
"\n",
"When you're done with the endpoint, you'll want to clean it up."
]
@@ -594,7 +594,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 3 - Package your resources as an Amazon SageMaker Algorithm\n",
+ "## Part 3 - Package your resources as an Amazon SageMaker Algorithm\n",
"(If you looking to sell a pretrained model (ModelPackage), please skip to Part 4.)\n",
"\n",
"Now that you have verified that the algorithm code works for training, live inference and batch inference in the above sections, you can start packaging it up as an Amazon SageMaker Algorithm."
@@ -615,7 +615,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Algorithm Definition\n",
+ "### Algorithm Definition\n",
"\n",
"SageMaker Algorithm is comprised of 2 parts:\n",
"\n",
@@ -741,7 +741,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Putting it all together\n",
+ "### Putting it all together\n",
"\n",
"Now we put all the pieces together in the next cell and create an Amazon SageMaker Algorithm"
]
@@ -777,7 +777,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Describe the algorithm\n",
+ "#### Describe the algorithm\n",
"\n",
"The next cell describes the Algorithm and waits until it reaches a terminal state (Completed or Failed)"
]
@@ -805,11 +805,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 4 - Package your resources as an Amazon SageMaker ModelPackage\n",
+ "## Part 4 - Package your resources as an Amazon SageMaker ModelPackage\n",
"\n",
"In this section, we will see how you can package your artifacts (ECR image and the trained artifact from your previous training job) into a ModelPackage. Once you complete this, you can list your product as a pretrained model in the AWS Marketplace.\n",
"\n",
- "## Model Package Definition\n",
+ "### Model Package Definition\n",
"A Model Package is a reusable model artifacts abstraction that packages all ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with an optional model weights location.\n"
]
},
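A minimal sketch of those two ingredients via the boto3 API; the package name, image URI, and model location are placeholders:

```python
import boto3

sm = boto3.client("sagemaker")

# The inference specification names the serving image and, optionally, the
# trained model artifact it should load.
sm.create_model_package(
    ModelPackageName="my-example-model-package",  # hypothetical
    InferenceSpecification={
        "Containers": [
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-byo-algorithm:latest",
                "ModelDataUrl": "s3://my-bucket/byo-algorithm/output/model.tar.gz",
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m4.xlarge"],
        "SupportedTransformInstanceTypes": ["ml.m4.xlarge"],
    },
)
```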
@@ -892,7 +892,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Putting it all together\n",
+ "### Putting it all together\n",
"\n",
"Now we put all the pieces together in the next cell and create an Amazon SageMaker Model Package."
]
@@ -951,7 +951,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Debugging Creation Issues\n",
+ "### Debugging Creation Issues\n",
"\n",
"Entity creation typically never fails in the synchronous path. However, the validation process can fail for many reasons. If the above Algorithm creation fails, you can investigate the cause for the failure by looking at the \"AlgorithmStatusDetails\" field in the Algorithm object or \"ModelPackageStatusDetails\" field in the ModelPackage object. You can also look for the Training Jobs / Transform Jobs created in your account as part of our validation and inspect their logs for more hints on what went wrong. \n",
"\n",
@@ -963,7 +963,7 @@
"metadata": {},
"source": [
"\n",
- "## List on AWS Marketplace\n",
+ "### List on AWS Marketplace\n",
"\n",
"Next, please go back to the Amazon SageMaker console, click on \"Algorithms\" (or \"Model Packages\") and you'll find the entity you created above. If it was successfully created and validated, you should be able to select the entity and \"Publish new ML Marketplace listing\" from SageMaker console.\n",
""
diff --git a/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm/Sample_Notebook_Template/title_of_your_product-Algorithm.ipynb b/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm/Sample_Notebook_Template/title_of_your_product-Algorithm.ipynb
index d8cada109e..6ea9505492 100644
--- a/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm/Sample_Notebook_Template/title_of_your_product-Algorithm.ipynb
+++ b/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm/Sample_Notebook_Template/title_of_your_product-Algorithm.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Train, tune, and deploy a custom ML model using For Seller to update: Title_of_your_ML Algorithm Algorithm from AWS Marketplace \n",
+ "# Train, tune, and deploy a custom ML model using For Seller to update: Title_of_your_ML Algorithm Algorithm from AWS Marketplace \n",
"\n",
"\n",
" For Seller to update: Add overview of the algorithm here \n",
@@ -15,7 +15,7 @@
"\n",
"> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.\n",
"\n",
- "#### Pre-requisites:\n",
+ "## Pre-requisites\n",
"1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n",
"1. Ensure that IAM role used has **AmazonSageMakerFullAccess**\n",
"1. Some hands-on experience using [Amazon SageMaker](https://aws.amazon.com/sagemaker/).\n",
@@ -26,7 +26,7 @@
" 1. **aws-marketplace:Subscribe** \n",
" 2. or your AWS account has a subscription to For Seller to update:[Title_of_your_algorithm](Provide link to your marketplace listing of your product). \n",
"\n",
- "#### Contents:\n",
+ "## Contents\n",
"1. [Subscribe to the algorithm](#1.-Subscribe-to-the-algorithm)\n",
"1. [Prepare dataset](#2.-Prepare-dataset)\n",
"\t1. [Dataset format expected by the algorithm](#A.-Dataset-format-expected-by-the-algorithm)\n",
@@ -52,7 +52,7 @@
"\t1. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional))\n",
"\n",
"\n",
- "#### Usage instructions\n",
+ "## Usage instructions\n",
"You can run this notebook one cell at a time (By using Shift+Enter for running a cell)."
]
},
@@ -60,7 +60,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 1. Subscribe to the algorithm"
+ "## 1. Subscribe to the algorithm"
]
},
{
@@ -87,7 +87,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 2. Prepare dataset"
+ "## 2. Prepare dataset"
]
},
{
@@ -123,7 +123,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Dataset format expected by the algorithm"
+ "### A. Dataset format expected by the algorithm"
]
},
{
@@ -149,7 +149,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Configure and visualize train and test dataset"
+ "### B. Configure and visualize train and test dataset"
]
},
{
@@ -211,7 +211,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Upload datasets to Amazon S3"
+ "### C. Upload datasets to Amazon S3"
]
},
{
@@ -386,7 +386,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 4: Deploy model and verify results"
+ "## 4: Deploy model and verify results"
]
},
{
@@ -425,7 +425,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Deploy trained model"
+ "### A. Deploy trained model"
]
},
{
@@ -450,7 +450,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Create input payload"
+ "### B. Create input payload"
]
},
{
@@ -494,7 +494,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Perform real-time inference"
+ "### C. Perform real-time inference"
]
},
{
@@ -522,7 +522,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### D. Visualize output"
+ "### D. Visualize output"
]
},
{
@@ -543,7 +543,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### E. Calculate relevant metrics"
+ "### E. Calculate relevant metrics"
]
},
{
@@ -574,7 +574,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### F. Delete the endpoint"
+ "### F. Delete the endpoint"
]
},
{
@@ -605,7 +605,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 5: Tune your model! (optional)"
+ "## 5: Tune your model! (optional)"
]
},
{
@@ -626,7 +626,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Tuning Guidelines"
+ "### A. Tuning Guidelines"
]
},
{
@@ -645,7 +645,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Define Tuning configuration"
+ "### B. Define Tuning configuration"
]
},
{
@@ -703,7 +703,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Run a model tuning job"
+ "### C. Run a model tuning job"
]
},
{
@@ -766,7 +766,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 6. Perform Batch inference"
+ "## 6. Perform Batch inference"
]
},
{
@@ -821,14 +821,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 7. Clean-up"
+ "## 7. Clean-up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Delete the model"
+ "### A. Delete the model"
]
},
{
@@ -844,7 +844,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Unsubscribe to the listing (optional)"
+ "### B. Unsubscribe to the listing (optional)"
]
},
{
diff --git a/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb b/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb
index f4f7e7cff4..517d47bac8 100644
--- a/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb
+++ b/aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb
@@ -4,16 +4,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Deploy For Seller to update: Title_of_your_ML Model Model Package from AWS Marketplace \n",
+ "# Deploy For Seller to update: Title_of_your_ML Model Model Package from AWS Marketplace \n",
"\n",
"\n",
- "## For Seller to update: Add overview of the ML Model here \n",
+ " For Seller to update: Add overview of the ML Model here \n",
"\n",
"This sample notebook shows you how to deploy For Seller to update:[Title_of_your_ML Model](Provide link to your marketplace listing of your product) using Amazon SageMaker.\n",
"\n",
"> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.\n",
"\n",
- "#### Pre-requisites:\n",
+ "## Pre-requisites:\n",
"1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n",
"1. Ensure that IAM role used has **AmazonSageMakerFullAccess**\n",
"1. To deploy this ML model successfully, ensure that:\n",
@@ -23,7 +23,7 @@
" 1. **aws-marketplace:Subscribe** \n",
" 2. or your AWS account has a subscription to For Seller to update:[Title_of_your_ML Model](Provide link to your marketplace listing of your product). If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n",
"\n",
- "#### Contents:\n",
+ "## Contents:\n",
"1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n",
"2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference)\n",
" 1. [Create an endpoint](#A.-Create-an-endpoint)\n",
@@ -37,7 +37,7 @@
" 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional))\n",
" \n",
"\n",
- "#### Usage instructions\n",
+ "## Usage instructions\n",
"You can run this notebook one cell at a time (By using Shift+Enter for running a cell)."
]
},
@@ -45,7 +45,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 1. Subscribe to the model package"
+ "## 1. Subscribe to the model package"
]
},
{
@@ -114,7 +114,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 2. Create an endpoint and perform real-time inference"
+ "## 2. Create an endpoint and perform real-time inference"
]
},
{
@@ -154,7 +154,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Create an endpoint"
+ "### A. Create an endpoint"
]
},
{
@@ -183,7 +183,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Create input payload"
+ "### B. Create input payload"
]
},
{
@@ -227,7 +227,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Perform real-time inference"
+ "### C. Perform real-time inference"
]
},
{
@@ -250,7 +250,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### D. Visualize output"
+ "### D. Visualize output"
]
},
{
@@ -287,7 +287,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### E. Delete the endpoint"
+ "### E. Delete the endpoint"
]
},
{
@@ -311,7 +311,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 3. Perform batch inference"
+ "## 3. Perform batch inference"
]
},
{
@@ -369,14 +369,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 4. Clean-up"
+ "## 4. Clean-up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Delete the model"
+ "### A. Delete the model"
]
},
{
@@ -392,7 +392,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Unsubscribe to the listing (optional)"
+ "### B. Unsubscribe to the listing (optional)"
]
},
{
diff --git a/aws_marketplace/using-open-source-model-packages/pytorch-ic-model/using-image-classification-models.ipynb b/aws_marketplace/using-open-source-model-packages/pytorch-ic-model/using-image-classification-models.ipynb
index 675f47f47a..79c30735a8 100644
--- a/aws_marketplace/using-open-source-model-packages/pytorch-ic-model/using-image-classification-models.ipynb
+++ b/aws_marketplace/using-open-source-model-packages/pytorch-ic-model/using-image-classification-models.ipynb
@@ -10,7 +10,7 @@
"\n",
"This notebook is compatible only with those image classification model packages which this notebook is linked to.\n",
"\n",
- "#### Pre-requisites:\n",
+ "## Pre-requisites:\n",
"1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n",
"1. Ensure that IAM role used has **AmazonSageMakerFullAccess**\n",
"1. To deploy this ML model successfully, ensure that:\n",
@@ -20,7 +20,7 @@
" 1. **aws-marketplace:Subscribe** \n",
" 2. or your AWS account has a subscription to this image classification model. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n",
"\n",
- "#### Contents:\n",
+ "## Contents\n",
"1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n",
"2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference)\n",
" 1. [Create an endpoint](#A.-Create-an-endpoint)\n",
@@ -34,7 +34,7 @@
" 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional))\n",
" \n",
"\n",
- "#### Usage instructions\n",
+ "## Usage instructions\n",
"You can run this notebook one cell at a time (By using Shift+Enter for running a cell).\n",
"\n",
"**Note** - This notebook requires you to follow instructions and specify values for parameters, as instructed."
@@ -44,7 +44,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 1. Subscribe to the model package"
+ "## 1. Subscribe to the model package"
]
},
{
@@ -107,7 +107,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 2. Create an endpoint and perform real-time inference"
+ "## 2. Create an endpoint and perform real-time inference"
]
},
{
@@ -151,7 +151,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Create an endpoint"
+ "### A. Create an endpoint"
]
},
{
@@ -180,7 +180,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Prepare input file for performing real-time inference\n",
+ "### B. Prepare input file for performing real-time inference\n",
"In this step, we will download class_id_to_label_mapping from S3 bucket. The mapping files has been downloaded from [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)."
]
},
@@ -240,7 +240,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Query endpoint that you have created with the opened images"
+ "### C. Query endpoint that you have created with the opened images"
]
},
{
@@ -286,7 +286,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### D. Delete the endpoint"
+ "### D. Delete the endpoint"
]
},
{
@@ -310,7 +310,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 3. Perform batch inference"
+ "## 3. Perform batch inference"
]
},
{
@@ -386,14 +386,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 4. Clean-up"
+ "## 4. Clean-up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Delete the model"
+ "### A. Delete the model"
]
},
{
@@ -409,7 +409,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Unsubscribe to the listing (optional)"
+ "### B. Unsubscribe to the listing (optional)"
]
},
{
diff --git a/aws_marketplace/using_data/image_classification_with_shutterstock_image_datasets/image-classification-with-shutterstock-datasets.ipynb b/aws_marketplace/using_data/image_classification_with_shutterstock_image_datasets/image-classification-with-shutterstock-datasets.ipynb
index 5aaa01fb43..ae6a1a7528 100644
--- a/aws_marketplace/using_data/image_classification_with_shutterstock_image_datasets/image-classification-with-shutterstock-datasets.ipynb
+++ b/aws_marketplace/using_data/image_classification_with_shutterstock_image_datasets/image-classification-with-shutterstock-datasets.ipynb
@@ -10,7 +10,7 @@
{
"cell_type": "markdown",
"source": [
- "# Introduction\n",
+ "## Introduction\n",
"\n",
"This example of **multi-label image classification** trains the **Amazon SageMaker 1P image classification algorithm**. We will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new multi-label dataset. The pre-trained model will be fine-tuned using the [Free Sample: Images & Metadata of “Whole Foods” Shoppers dataset from Shutterstock’s Image Datasets](https://aws.amazon.com/marketplace/pp/prodview-y6xuddt42fmbu?ref_=srh_res_product_title). \n",
"\n",
@@ -408,7 +408,7 @@
{
"cell_type": "markdown",
"source": [
- "# Inference\n",
+ "## Inference\n",
"\n",
"### Step 9: Deploy the Model for Inference\n",
"\n",
diff --git a/aws_marketplace/using_model_packages/amazon_augmented_ai_with_aws_marketplace_ml_models/amazon_augmented_ai_with_aws_marketplace_ml_models.ipynb b/aws_marketplace/using_model_packages/amazon_augmented_ai_with_aws_marketplace_ml_models/amazon_augmented_ai_with_aws_marketplace_ml_models.ipynb
index cb3ed05361..ed9945b15e 100644
--- a/aws_marketplace/using_model_packages/amazon_augmented_ai_with_aws_marketplace_ml_models/amazon_augmented_ai_with_aws_marketplace_ml_models.ipynb
+++ b/aws_marketplace/using_model_packages/amazon_augmented_ai_with_aws_marketplace_ml_models/amazon_augmented_ai_with_aws_marketplace_ml_models.ipynb
@@ -138,7 +138,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Setup Variables, Bucket and Paths"
+ "### Setup Variables, Bucket and Paths"
]
},
{
@@ -234,7 +234,7 @@
"\n",
"This will bring you back to the Private tab under labeling workforces, where you can view and manage your private teams and workers.\n",
"\n",
- "### **IMPORTANT: After you have created your workteam, from the Team summary section copy the value of the ARN and uncomment and replace `` below:**"
+ "**IMPORTANT: After you have created your workteam, from the Team summary section copy the value of the ARN and uncomment and replace `` below:**"
]
},
{
diff --git a/aws_marketplace/using_model_packages/auto_insurance/automating_auto_insurance_claim_processing.ipynb b/aws_marketplace/using_model_packages/auto_insurance/automating_auto_insurance_claim_processing.ipynb
index 2cd37c1bd2..db17f4f390 100644
--- a/aws_marketplace/using_model_packages/auto_insurance/automating_auto_insurance_claim_processing.ipynb
+++ b/aws_marketplace/using_model_packages/auto_insurance/automating_auto_insurance_claim_processing.ipynb
@@ -4,12 +4,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Goal: Automate Auto Insurance Claim Processing Using Pre-trained Models \n",
+ "# Goal: Automate Auto Insurance Claim Processing Using Pre-trained Models \n",
"Auto insurance claim process requires extracting metadata from images and performing validations to ensure that the claim is not fraudulent. This sample notebook shows how third party pre-trained machine learning models can be used to extract such metadata from images.\n",
"\n",
"This notebook uses [Vehicle Damage Inspection](https://aws.amazon.com/marketplace/pp/Persistent-Systems-Vehicle-Damage-Inspection/prodview-xhj66rbazm6oe) model to identify the type of damage and [Deep Vision vehicle recognition](https://aws.amazon.com/marketplace/pp/prodview-a7wgrolhu54ts?qid=1558356141251&sr=0-4&ref_=srh_res_product_title) to identify the make, model, year, and bounding box of the car. This notebook also shows how to use the bounding box to extract license information from the using [Amazon Rekognition](https://aws.amazon.com/rekognition/).\n",
"\n",
- "### Pre-requisites:\n",
+ "## Pre-requisites\n",
"This sample notebook requires subscription to following pre-trained machine learning model packages from AWS Marketplace:\n",
"\n",
"1. [Vehicle Damage Inspection](https://aws.amazon.com/marketplace/pp/Persistent-Systems-Vehicle-Damage-Inspection/prodview-xhj66rbazm6oe)\n",
@@ -33,7 +33,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Set up environment and view a sample image\n",
+ "## Set up environment and view a sample image\n",
"\n",
"In this section, we will import necessary libraries and define variables such as an S3 bucket, an IAM role, and SageMaker session to be used."
]
@@ -311,7 +311,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Step 3. Extract labels from the picture (optional)\n",
+ "## Step 3: Extract labels from the picture (optional)\n",
"\n",
"Let us use the car image extracted from the original image for extracting license information using [Amazon Rekognition](https://aws.amazon.com/rekognition/).\n",
"\n",
@@ -392,7 +392,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 5. Cleanup "
+ "## 5: Cleanup "
]
},
{
diff --git a/aws_marketplace/using_model_packages/creative-writing-using-gpt-2-text-generation/creative-writing-using-gpt-2-text-generation.ipynb b/aws_marketplace/using_model_packages/creative-writing-using-gpt-2-text-generation/creative-writing-using-gpt-2-text-generation.ipynb
index 3f0a251237..8580887093 100644
--- a/aws_marketplace/using_model_packages/creative-writing-using-gpt-2-text-generation/creative-writing-using-gpt-2-text-generation.ipynb
+++ b/aws_marketplace/using_model_packages/creative-writing-using-gpt-2-text-generation/creative-writing-using-gpt-2-text-generation.ipynb
@@ -22,7 +22,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Overview:\n",
+ "## Overview\n",
"In [step 1](#Step-1:-Determine-input-prompt-and-visualize-word-dependencies) of this notebook, you will determine an input prompt that will be used to condition the GPT-2 model for text generation. You will also [visualize attention](#Step-1.1-Introduction-to-attention) mechanism of GPT-2 model. In [step 2](#Step-2:-Use-an-ML-model-to-generate-text-based-on-prompt), you will create the model from an AWS Marketplace subscription, and deploy to an Amazon SageMaker endpoint. In [step 3](#Step-3:-Explore-use-cases-and-model-parameters), you will explore text generation use cases with various model parameter settings. In [step 4](#Step-4:-Use-Amazon-SageMaker-batch-transform), you will perform inference asynchronously using SageMaker batch transform instead of the endpoint. In [Step 5](#Step-5:-Next-steps) you will find additional models to explore and experiment with."
]
},
@@ -30,7 +30,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Contents:\n",
+ "## Contents\n",
"* [Pre-requisites](#Pre-requisites)\n",
"* [Step 1: Determine input prompt and visualize word dependencies](#Step-1:-Determine-input-prompt-and-visualize-word-dependencies)\n",
" * [Step 1.1 Introduction to attention](#Step-1.1-Introduction-to-attention)\n",
@@ -54,7 +54,7 @@
" * [Step 5.1: Additional resources](#Step-5.1:-Additional-resources)\n",
" * [Step 5.2: Cancel AWS Marketplace subscription](#Step-5.2:-Cancel-AWS-Marketplace-subscription)\n",
"\n",
- "#### Usage instructions\n",
+ "## Usage instructions\n",
"You can run this notebook one cell at a time (By using Shift+Enter for running a cell)."
]
},
@@ -170,7 +170,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 1: Determine input prompt and visualize word dependencies"
+ "## Step 1: Determine input prompt and visualize word dependencies"
]
},
{
@@ -186,7 +186,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 1.1 Introduction to attention\n",
+ "### Step 1.1 Introduction to attention\n",
"\n",
"[Self-Attention mechanism](https://arxiv.org/abs/1706.03762) is one of the key components for Transformers architectures, including GPT-2. It helps to relate different positions of a specific sequence of tokens in order to compute contextual representation of the sequence.\n",
"\n",
@@ -219,7 +219,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 1.2 Specify input prompt \n",
+ "### Step 1.2 Specify input prompt \n",
"\n",
"You can experiment with different prompts and see what contextual dependencies exist in your own examples. "
]
@@ -237,7 +237,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 1.3 Visualize attention mechanism\n",
+ "### Step 1.3 Visualize attention mechanism\n",
"\n",
"In this step, let's call BertViz package to produce attention visualization of our input."
]
@@ -303,7 +303,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 2: Use an ML model to generate text based on prompt\n",
+ "## Step 2: Use an ML model to generate text based on prompt\n",
"\n",
"Because you utilize [GPT-2 XL - Text generation](https://aws.amazon.com/marketplace/pp/prodview-cdujckyfypprg) algorithm from AWS Marketplace - all you need to do to start using it - is to deploy it as an inference endpoint in your account. Alternatively, we can use SageMaker Batch Transformation to run inference on batch payloads. \n",
"\n",
@@ -355,7 +355,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 2.1: Specify model arn from AWS Marketplace subscription"
+ "### Step 2.1: Specify model arn from AWS Marketplace subscription"
]
},
{
@@ -380,7 +380,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 2.2: Create model from model package and deploy to endpoint"
+ "### Step 2.2: Create model from model package and deploy to endpoint"
]
},
{
@@ -406,7 +406,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 3: Explore use cases and model parameters"
+ "## Step 3: Explore use cases and model parameters"
]
},
{
@@ -445,7 +445,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 3.1: Use case 1: Assisted writing of prose"
+ "### Step 3.1: Use case 1: Assisted writing of prose"
]
},
{
@@ -511,7 +511,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 3.2: Use case 2: Autonomous authoring of poem"
+ "### Step 3.2: Use case 2: Autonomous authoring of poem"
]
},
{
@@ -619,7 +619,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 3.3: Additional Use-Cases"
+ "### Step 3.3: Additional Use-Cases"
]
},
{
@@ -812,7 +812,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 3.4: Delete Amazon SageMaker endpoint"
+ "### Step 3.4: Delete Amazon SageMaker endpoint"
]
},
{
@@ -836,7 +836,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 4: Use Amazon SageMaker batch transform"
+ "## Step 4: Use Amazon SageMaker batch transform"
]
},
{
@@ -854,7 +854,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 4.1: Create input file for batch transform job"
+ "### Step 4.1: Create input file for batch transform job"
]
},
{
@@ -886,7 +886,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 4.2: Upload file to S3"
+ "### Step 4.2: Upload file to S3"
]
},
{
@@ -904,7 +904,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 4.3: Execute the batch transform job"
+ "### Step 4.3: Execute the batch transform job"
]
},
{
@@ -948,7 +948,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 4.4: Visualize output"
+ "### Step 4.4: Visualize output"
]
},
{
@@ -983,7 +983,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 4.5: Delete the model"
+ "### Step 4.5: Delete the model"
]
},
{
@@ -999,14 +999,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 5: Next steps"
+ "## Step 5: Next steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 5.1: Additional resources"
+ "### Step 5.1: Additional resources"
]
},
{
@@ -1025,7 +1025,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### Step 5.2: Cancel AWS Marketplace subscription"
+ "### Step 5.2: Cancel AWS Marketplace subscription"
]
},
{
diff --git a/aws_marketplace/using_model_packages/generic_sample_notebook/A_generic_sample_notebook_to_perform_inference_on_ML_model_packages_from_AWS_Marketplace.ipynb b/aws_marketplace/using_model_packages/generic_sample_notebook/A_generic_sample_notebook_to_perform_inference_on_ML_model_packages_from_AWS_Marketplace.ipynb
index 621db718e4..1f118c91a3 100644
--- a/aws_marketplace/using_model_packages/generic_sample_notebook/A_generic_sample_notebook_to_perform_inference_on_ML_model_packages_from_AWS_Marketplace.ipynb
+++ b/aws_marketplace/using_model_packages/generic_sample_notebook/A_generic_sample_notebook_to_perform_inference_on_ML_model_packages_from_AWS_Marketplace.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Deploy and perform inference on ML Model packages from AWS Marketplace.\n",
+ "# Deploy and perform inference on ML Model packages from AWS Marketplace\n",
"\n",
"There are two simple ways to try/deploy [ML model packages from AWS Marketplace](https://aws.amazon.com/marketplace/search/results?page=1&filters=FulfillmentOptionType%2CSageMaker::ResourceType&FulfillmentOptionType=SageMaker&SageMaker::ResourceType=ModelPackage), either using AWS console to deploy an ML model package (see [this blog](https://aws.amazon.com/blogs/machine-learning/adding-ai-to-your-applications-with-ready-to-use-models-from-aws-marketplace/)) or via code written typically in a Jupyter notebook. Many listings have a high-quality sample Jupyter notebooks provided by the seller itself, usually, these sample notebooks are linked to the AWS Marketplace listing (E.g. [Source Separation](https://aws.amazon.com/marketplace/pp/prodview-23n4vi2zw67we?qid=1579739476471&sr=0-1&ref_=srh_res_product_title)), If a sample notebook exists, try it out. \n",
"\n",
@@ -14,7 +14,7 @@
"\n",
"> **Note**:If you are facing technical issues while trying an ML model package from AWS Marketplace and need help, please open a support ticket or write to the team on aws-mp-bd-ml@amazon.com for additional assistance.\n",
"\n",
- "#### Pre-requisites:\n",
+ "## Pre-requisites\n",
"1. Open this notebook from an Amazon SageMaker Notebook instance.\n",
"1. Ensure that Amazon SageMaker notebook instance used has IAMExecutionRole with **AmazonSageMakerFullAccess**\n",
"1. Your IAM role has these three permisions - **aws-marketplace:ViewSubscriptions**, **aws-marketplace:Unsubscribe**, **aws-marketplace:Subscribe** and you have authority to make AWS Marketplace subscriptions in the AWS account used.\n",
@@ -23,7 +23,7 @@
"\n",
"\n",
"\n",
- "#### Additional Resources:\n",
+ "## Additional Resources\n",
"**Background on Model Packages**:\n",
"1. An ML model can be created from a Model Package, to know how, see [Use a Model Package to Create a Model](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-mkt-model-pkg-model.html). \n",
"2. An ML Model accepts data and generates predictions.\n",
@@ -39,7 +39,7 @@
"* For a Jupyter notebook of the sample solution for **Automating auto insurance claim processing workflow** outlined in [this re:Mars session](https://www.youtube.com/watch?v=GkKZt0s_ku0), see [amazon-sagemaker-examples/aws-marketplace](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/using_model_packages/auto_insurance) GitHub repository.\n",
"* For a Jupyter notebook of the sample solution for **Improving workplace safety solution** outlined in [this re:Invent session](https://www.youtube.com/watch?v=iLOXaWpK6ag), see [amazon-sagemaker-examples/aws-marketplace](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/aws_marketplace/using_model_packages/improving_industrial_workplace_safety) GitHub repository.\n",
"\n",
- "#### Contents:\n",
+ "## Contents\n",
"1. [Subscribe to the model package](#Subscribe-to-the-model-package)\n",
" 1. [Identify compatible instance-type](#A.-Identify-compatible-instance-type)\n",
" 2. [Identify content-type](#B.-Identify-content_type)\n",
@@ -57,7 +57,7 @@
"4. [Delete the model](#4.-Delete-the-model)\n",
"5. [Unsubscribe to the model package](#Unsubscribe-to-the-model-package)\n",
"\n",
- "#### Usage instructions\n",
+ "## Usage instructions\n",
"You can run this notebook one cell at a time (By using Shift+Enter for running a cell)."
]
},
@@ -112,7 +112,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 1. Subscribe to the model package"
+ "## 1. Subscribe to the model package"
]
},
{
@@ -140,7 +140,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Identify compatible instance-type\n",
+ "### A. Identify compatible instance-type\n",
"\n",
"1. On the listing, Under **Pricing Information**, you will see **software pricing** for **real-time inference** as well as **batch-transform usage** for specific instance-types. \n",
"\n",
@@ -168,7 +168,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Identify content_type\n",
+ "### B. Identify content_type\n",
"You need to specify input content-type and payload while performing inference on the model. In this sub-section you will identify input content type that is accepted by the model you wish to try. "
]
},
@@ -214,7 +214,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Specify model-package-arn\n",
+ "### C. Specify model-package-arn\n",
"A model-package-arn is a unique identifier for each ML model package from AWS Marketplace within a chosen region."
]
},
@@ -249,7 +249,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 2. Create an Endpoint and perform real-time inference."
+ "## 2. Create an Endpoint and perform real-time inference."
]
},
{
@@ -275,7 +275,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### A. Create an Endpoint"
+ "### A. Create an Endpoint"
]
},
{
@@ -304,7 +304,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### B. Create input payload"
+ "### B. Create input payload"
]
},
{
@@ -667,7 +667,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Perform Real-time inference"
+ "### C. Perform Real-time inference"
]
},
{
@@ -739,7 +739,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### D. Visualize output"
+ "### D. Visualize output"
]
},
{
@@ -765,7 +765,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### E. Delete the endpoint"
+ "### E. Delete the endpoint"
]
},
{
@@ -789,7 +789,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 3. Perform Batch inference"
+ "## 3. Perform Batch inference"
]
},
{
@@ -838,7 +838,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "#### C. Visualize output"
+ "Visualize output"
]
},
{
@@ -879,7 +879,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 4. Delete the model"
+ "## 4. Delete the model"
]
},
{
@@ -902,7 +902,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 5. Cleanup "
+ "## 5. Cleanup "
]
},
{
diff --git a/aws_marketplace/using_model_packages/improving_industrial_workplace_safety/improving_industrial_workplace_safety.ipynb b/aws_marketplace/using_model_packages/improving_industrial_workplace_safety/improving_industrial_workplace_safety.ipynb
index e1ee2da146..4eea15d3ff 100644
--- a/aws_marketplace/using_model_packages/improving_industrial_workplace_safety/improving_industrial_workplace_safety.ipynb
+++ b/aws_marketplace/using_model_packages/improving_industrial_workplace_safety/improving_industrial_workplace_safety.ipynb
@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Demonstrating Industrial Workplace Safety using Pre-trained Machine Learning Models\n",
+ "# Demonstrating Industrial Workplace Safety using Pre-trained Machine Learning Models\n",
"\n",
- "### Introduction\n",
+ "## Introduction\n",
"\n",
"This sample notebook shows how to use pre-trained model packages from [AWS Marketplace](https://aws.amazon.com/marketplace/search/results?page=1&filters=FulfillmentOptionType&FulfillmentOptionType=SageMaker&ref_=mlmp_gitdemo_indust) to detect industrial workspace safety related object labels, such as hard-hat, personal protective equipment, construction machinery, and construction worker in an image. The notebook also shows an approach to perform inference on a video by taking snapshots from the video file to generate an activity/status log. At the end of this you will become familiar on steps to integrate inferences from pre-trained models into your application. This notebook is intended for demonstration, we highly recommend you to evaluate the accuracy of machine learning models to see if they meet your expectations.\n",
"\n",
"\n",
- "### Pre-requisites:\n",
+ "## Pre-requisites\n",
"This sample notebook requires you to subscribe to pre-trained machine learning model packages. Follow the following steps to subscribe to the listings:\n",
"\n",
"1. Open the following model package product detail pages, in separate tabs, in your web browser. \n",
@@ -180,7 +180,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 1.2.1: View construction site image"
+ "#### Step 1.2.1: View construction site image"
]
},
{
@@ -215,7 +215,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 1.2.2: View an image with a worker and a person at a workplace"
+ "#### Step 1.2.2: View an image with a worker and a person at a workplace"
]
},
{
@@ -250,7 +250,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Step 1.2.3: View an image with an excavator and a truck at work"
+ "#### Step 1.2.3: View an image with an excavator and a truck at work"
]
},
{
diff --git a/aws_sagemaker_studio/index.rst b/aws_sagemaker_studio/index.rst
index 76fb05a827..6ddcfa683a 100644
--- a/aws_sagemaker_studio/index.rst
+++ b/aws_sagemaker_studio/index.rst
@@ -41,7 +41,12 @@ Model compilation with Neo
.. toctree::
:maxdepth: 1
- sagemaker_neo_compilation_jobs/deploy_tensorflow_model_on_Inf1_instance/tensorflow_distributed_mnist_neo_inf1_studio
+ sagemaker_neo_compilation_jobs/gluoncv_ssd_mobilenet_studio/gluoncv_ssd_mobilenet_neo_studio
+ sagemaker_neo_compilation_jobs/imageclassification_caltech/Image-classification-fulltraining-highlevel-neo-studio
+ sagemaker_neo_compilation_jobs/pytorch_torchvision/pytorch_torchvision_neo_studio
+ sagemaker_neo_compilation_jobs/pytorch_vgg19_bn/pytorch-vgg19-bn-studio
+ sagemaker_neo_compilation_jobs/tensorflow_unet/sagemaker-neo-tf-unet
+ sagemaker_neo_compilation_jobs/xgboost_customer_churn/xgboost_customer_churn_neo_studio
Bring your own container to Studio
diff --git a/aws_sagemaker_studio/sagemaker_studio_image_build/xgboost_bring_your_own/Batch_Transform_BYO_XGB.ipynb b/aws_sagemaker_studio/sagemaker_studio_image_build/xgboost_bring_your_own/Batch_Transform_BYO_XGB.ipynb
index b32999e4f1..06761b7ad3 100644
--- a/aws_sagemaker_studio/sagemaker_studio_image_build/xgboost_bring_your_own/Batch_Transform_BYO_XGB.ipynb
+++ b/aws_sagemaker_studio/sagemaker_studio_image_build/xgboost_bring_your_own/Batch_Transform_BYO_XGB.ipynb
@@ -617,7 +617,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 2: Building the Container and Training the model\n",
+ "## Part 2: Building the Container and Training the model\n",
"\n",
"\n",
"### Step 5: Set up SageMaker Experiments"
@@ -814,7 +814,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Part 3: Using the trained model for inference\n",
+ "## Part 3: Using the trained model for inference\n",
"\n",
"### Step 8: Inference using Batch Transform\n",
"\n",
diff --git a/ground_truth_labeling_jobs/3d_point_cloud_input_data_processing/3D-point-cloud-input-data-processing.ipynb b/ground_truth_labeling_jobs/3d_point_cloud_input_data_processing/3D-point-cloud-input-data-processing.ipynb
index fd9b8395fb..7933722b7c 100644
--- a/ground_truth_labeling_jobs/3d_point_cloud_input_data_processing/3D-point-cloud-input-data-processing.ipynb
+++ b/ground_truth_labeling_jobs/3d_point_cloud_input_data_processing/3D-point-cloud-input-data-processing.ipynb
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Create a 3D Point Cloud Labeling Job with Amazon SageMaker Ground Truth\n",
+ "# Create a 3D Point Cloud Labeling Job for Object Tracking with Amazon SageMaker Ground Truth\n",
"\n",
"\n",
"This notebook will demonstrate how you can pre-process your 3D point cloud input data to create an [object tracking labeling job](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-tracking.html) and include sensor and camera data for sensor fusion. \n",
@@ -1008,7 +1008,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Acknowledgments\n",
+ "## Acknowledgments\n",
"\n",
"We would like to thank the KITTI team for letting us use this dataset to demonstrate how to prepare your 3D point cloud data for use in SageMaker Ground Truth."
]
diff --git a/ground_truth_labeling_jobs/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning.ipynb b/ground_truth_labeling_jobs/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning.ipynb
index 687e5e0ecc..7974c54331 100644
--- a/ground_truth_labeling_jobs/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning.ipynb
+++ b/ground_truth_labeling_jobs/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning/bring_your_own_model_for_sagemaker_labeling_workflows_with_active_learning.ipynb
@@ -1,5 +1,12 @@
{
"cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Create an Active Learning Workflow using Amazon SageMaker Ground Truth"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {
@@ -52,7 +59,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -108,7 +114,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -125,7 +130,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -160,7 +164,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -180,7 +183,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -197,7 +199,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -228,7 +229,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -278,7 +278,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -310,7 +309,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -361,7 +359,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -417,7 +414,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -458,7 +454,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -502,7 +497,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -536,7 +530,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -601,7 +594,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -702,7 +694,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -744,7 +735,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -808,7 +798,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -850,7 +839,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -911,7 +899,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -999,7 +986,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
@@ -1038,7 +1024,6 @@
"execution_count": null,
"metadata": {
"button": false,
- "collapsed": true,
"deletable": true,
"new_sheet": false,
"run_control": {
diff --git a/ground_truth_labeling_jobs/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification.ipynb b/ground_truth_labeling_jobs/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification.ipynb
index 5026338ccb..3d5fc4e7a4 100644
--- a/ground_truth_labeling_jobs/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification.ipynb
+++ b/ground_truth_labeling_jobs/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification/from_unlabeled_data_to_deployed_machine_learning_model_ground_truth_demo_image_classification.ipynb
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Run a Ground Truth labeling job\n",
+ "## Run a Ground Truth labeling job\n",
"**This section should take about 3h to complete.**\n",
"\n",
"We will first run a labeling job. This involves several steps: collecting the images we want labeled, specifying the possible label categories, creating instructions, and writing a labeling job specification. In addition, we highly recommend to run a (free) mock job using a private workforce before you submit any job to the public workforce. This notebook will explain how to do that as an optional step. Without using a private workforce, this section until completion of your labeling job should take about 3h. However, this may vary depending on the availability of the public annotation workforce.\n",
@@ -786,7 +786,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Analyze Ground Truth labeling job results\n",
+ "## Analyze Ground Truth labeling job results\n",
"**This section should take about 20min to complete.**\n",
"\n",
"After the job finishes running (**make sure `sagemaker_client.describe_labeling_job` shows the job is complete!**), it is time to analyze the results. The plots in the [Monitor job progress](#Monitor-job-progress) section form part of the analysis. In this section, we will gain additional insights into the results, all contained in the `output manifest`. You can find the location of the output manifest under `AWS Console > SageMaker > Labeling Jobs > [name of your job]`. We will obtain it programmatically in the cell below.\n",
@@ -1095,7 +1095,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Compare Ground Truth results to known, pre-labeled data\n",
+ "## Compare Ground Truth results to known, pre-labeled data\n",
"**This section should take about 5 minutes to complete.**\n",
"\n",
"Sometimes (for example, when benchmarking the system) we have an alternative set of data labels available. \n",
@@ -1275,7 +1275,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Train an image classifier using Ground Truth labels\n",
+ "## Train an image classifier using Ground Truth labels\n",
"At this stage, we have fully labeled our dataset and we can train a machine learning model to classify images based on the categories we previously defined. We'll do so using the **augmented manifest** output of our labeling job - no additional file translation or manipulation required! For a more complete description of the augmented manifest, see our other [example notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/ground_truth_labeling_jobs/object_detection_augmented_manifest_training/object_detection_augmented_manifest_training.ipynb).\n",
"\n",
"**NOTE:** Training neural networks to high accuracy often requires a careful choice of hyperparameters. In this case, we hand-picked hyperparameters that work reasonably well for this dataset. The neural net should have accuracy of about **60% if you're using 100 datapoints, and over 95% if you're using 1000 datapoints.**. To train neural networks on novel data, consider using [SageMaker's model tuning / hyperparameter optimization algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-how-it-works.html).\n",
@@ -1434,7 +1434,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Deploy the Model \n",
+ "## Deploy the Model \n",
"\n",
"Now that we've fully labeled our dataset and have a trained model, we want to use the model to perform inference.\n",
"\n",
@@ -1736,7 +1736,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Review\n",
+ "## Review\n",
"\n",
"We covered a lot of ground in this notebook! Let's recap what we accomplished. First we started with an unlabeled dataset (technically, the dataset was previously labeled by the authors of the dataset, but we discarded the original labels for the purposes of this demonstration). Next, we created a SageMake Ground Truth labeling job and generated new labels for all of the images in our dataset. Then we split this file into a training set and a validation set and trained a SageMaker image classification model. Finally, we created a hosted model endpoint and used it to make a live prediction for a held-out image in the original dataset."
]
diff --git a/ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb b/ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb
index 340ba351d9..3d535ed7df 100644
--- a/ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb
+++ b/ground_truth_labeling_jobs/ground_truth_object_detection_tutorial/object_detection_tutorial.ipynb
@@ -36,7 +36,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Introduction\n",
+ "## Introduction\n",
"\n",
"This sample notebook takes you through an end-to-end workflow to demonstrate the functionality of SageMaker Ground Truth. We'll start with an unlabeled image data set, acquire bounding boxes for objects in the images using SageMaker Ground Truth, analyze the results, train an object detector, host the resulting model, and, finally, use it to make predictions. Before you begin, we highly recommend you start a Ground Truth labeling job through the AWS Console first to familiarize yourself with the workflow. The AWS Console offers less flexibility than the API, but is simple to use.\n",
"\n",
@@ -114,7 +114,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Run a Ground Truth labeling job\n",
+ "## Run a Ground Truth labeling job\n",
"\n",
"**This section should take about 4 hours to complete.**\n",
"\n",
@@ -760,7 +760,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Analyze Ground Truth labeling job results\n",
+ "## Analyze Ground Truth labeling job results\n",
"**This section should take about 20 minutes to complete.**\n",
"\n",
"Once the job has finished, we can analyze the results. Evaluate the following cell and verify the output is `'Completed'` before continuing."
@@ -1083,7 +1083,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Compare Ground Truth results to standard labels\n",
+ "## Compare Ground Truth results to standard labels\n",
"\n",
"**This section should take about 5 minutes to complete.**\n",
"\n",
@@ -1366,7 +1366,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Train an object detection model using Ground Truth labels\n",
+ "## Train an object detection model using Ground Truth labels\n",
"At this stage, we have fully labeled our dataset and we can train a machine learning model to perform object detection. We'll do so using the **augmented manifest** output of our labeling job - no additional file translation or manipulation required! For a more complete description of the augmented manifest, see our other [example notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/ground_truth_labeling_jobs/object_detection_augmented_manifest_training/object_detection_augmented_manifest_training.ipynb).\n",
"\n",
"**NOTE:** Object detection is a complex task, and training neural networks to high accuracy requires large datasets and careful hyperparameter tuning. The following cells illustrate how to train a neural network using a Ground Truth output augmented manifest, and how to interpret the results. However, we shouldn't expect a network trained on 100 or 1000 images to do a phenomenal job on unseen images!\n",
@@ -1644,7 +1644,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Deploy the Model \n",
+ "## Deploy the Model \n",
"\n",
"Now that we've fully labeled our dataset and have a trained model, we want to use the model to perform inference.\n",
"\n",
@@ -2024,7 +2024,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Review\n",
+ "## Review\n",
"\n",
"We covered a lot of ground in this notebook! Let's recap what we accomplished. First we started with an unlabeled dataset (technically, the dataset was previously labeled by the authors of the dataset, but we discarded the original labels for the purposes of this demonstration). Next, we created a SageMake Ground Truth labeling job and generated new labels for all of the images in our dataset. Then we split this file into a training set and a validation set and trained a SageMaker object detection model. Next, we trained a new model using these Ground Truth results and submitted a batch job to label a held-out image from the original dataset. Finally, we created a hosted model endpoint and used it to make a live prediction for the same held-out image."
]
diff --git a/ground_truth_labeling_jobs/pretrained_model/pretrained_model_labeling_tutorial.ipynb b/ground_truth_labeling_jobs/pretrained_model/pretrained_model_labeling_tutorial.ipynb
index 8bd14bdcad..39ee393ffd 100644
--- a/ground_truth_labeling_jobs/pretrained_model/pretrained_model_labeling_tutorial.ipynb
+++ b/ground_truth_labeling_jobs/pretrained_model/pretrained_model_labeling_tutorial.ipynb
@@ -17,7 +17,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Introduction\n",
+ "## Introduction\n",
"\n",
"SageMaker Ground Truth is a fully managed service for labeling datasets for machine learning applications. Ground Truth allows you to start a labeling job with a pre-trained model, which is a great way to accelerate the labeling process. If you have a machine learning model that already encodes some domain knowledge about your dataset, you can use it to \"jump start\" Ground Truth's auto-labeling process. \n",
"\n",
@@ -98,7 +98,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Iteration #1: Create Initial Labeling Job\n",
+ "## Iteration #1: Create Initial Labeling Job\n",
"\n",
"## Setup\n",
"\n",
@@ -752,7 +752,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Iteration #2: Labeling Job with Pre-Trained Model \n",
+ "## Iteration #2: Labeling Job with Pre-Trained Model \n",
"\n",
"Now we'll use the model trained during the first labeling job to help label the second subset of our original dataset."
]
@@ -1066,7 +1066,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Iteration #3: Second Data Subset Without Pre-Trained Model \n",
+ "## Iteration #3: Second Data Subset Without Pre-Trained Model \n",
"\n",
"This time, we'll create a new labeling job using the second subset of the data (the one we just used in the previous labeling job), but we'll start it without the pre-trained model. In the previous step, we saw some significant improvements in cost and labeling time by leveraging a pre-trained model, but some of the differences might be due to the fact that the first and second labeling jobs used different datasets. This third labeling job will provide a more fair comparison, since it is identical to the second labeling job without the pre-trained model specification."
]
@@ -1358,7 +1358,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Conclusion \n",
+ "## Conclusion \n",
"\n",
"This marks the conclusion of our sample notebook demonstrating the use of pre-trained models to accelerate labeling jobs. Let's review what we covered.\n",
"\n",
diff --git a/introduction_to_amazon_algorithms/blazingtext_hosting_pretrained_fasttext/blazingtext_hosting_pretrained_fasttext.ipynb b/introduction_to_amazon_algorithms/blazingtext_hosting_pretrained_fasttext/blazingtext_hosting_pretrained_fasttext.ipynb
index ab48daed3b..9e6f7a4285 100644
--- a/introduction_to_amazon_algorithms/blazingtext_hosting_pretrained_fasttext/blazingtext_hosting_pretrained_fasttext.ipynb
+++ b/introduction_to_amazon_algorithms/blazingtext_hosting_pretrained_fasttext/blazingtext_hosting_pretrained_fasttext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "white-commander",
+ "id": "74cd3ea9",
"metadata": {
"papermill": {
"duration": 0.008283,
@@ -14,14 +14,14 @@
"tags": []
},
"source": [
- "## Introduction\n",
+ "# Hosting and Deployment of Pre-Trained Text Models using SageMaker Endpoint and BlazingText\n",
"\n",
"In this notebook, we demonstrate how BlazingText supports hosting of pre-trained Text Classification and Word2Vec models [FastText models](https://fasttext.cc/docs/en/english-vectors.html). BlazingText is a GPU accelerated version of FastText. FastText is a shallow Neural Network model used to perform both word embedding generation (unsupervised) and text classification (supervised). BlazingText uses custom CUDA kernels to accelerate the training process of FastText but the underlying algorithm is same for both the algorithms. Therefore, if you have a model trained with FastText or if one of the pre-trained models made available by FastText team is sufficient for your use case, then you can take advantage of Hosting support for BlazingText to setup SageMaker endpoints for realtime predictions using FastText models. It can help you avoid to train with BlazingText algorithm if your use-case is covered by the pre-trained models available from FastText."
]
},
{
"cell_type": "markdown",
- "id": "chemical-peace",
+ "id": "753d08e4",
"metadata": {
"papermill": {
"duration": 0.008171,
@@ -39,7 +39,7 @@
{
"cell_type": "code",
"execution_count": 1,
- "id": "formal-manchester",
+ "id": "ddfc6e8b",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:02.061346Z",
@@ -87,7 +87,7 @@
{
"cell_type": "code",
"execution_count": 2,
- "id": "enabling-compiler",
+ "id": "1440f9bd",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:03.260540Z",
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": 3,
- "id": "atmospheric-keeping",
+ "id": "06506a0d",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:03.282145Z",
@@ -145,7 +145,7 @@
},
{
"cell_type": "markdown",
- "id": "polish-expert",
+ "id": "b5774b11",
"metadata": {
"papermill": {
"duration": 0.008723,
@@ -164,7 +164,7 @@
},
{
"cell_type": "markdown",
- "id": "destroyed-purpose",
+ "id": "27ea5a86",
"metadata": {
"papermill": {
"duration": 0.008652,
@@ -184,7 +184,7 @@
{
"cell_type": "code",
"execution_count": 4,
- "id": "environmental-athens",
+ "id": "ad9ca233",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:03.347130Z",
@@ -226,7 +226,7 @@
},
{
"cell_type": "markdown",
- "id": "reduced-nothing",
+ "id": "a45ce5b7",
"metadata": {
"papermill": {
"duration": 0.011146,
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": 5,
- "id": "finite-answer",
+ "id": "5bd274dc",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:05.342791Z",
@@ -278,7 +278,7 @@
},
{
"cell_type": "markdown",
- "id": "occupied-orchestra",
+ "id": "03dc86d1",
"metadata": {
"papermill": {
"duration": 0.011522,
@@ -290,7 +290,7 @@
"tags": []
},
"source": [
- "## Creating SageMaker Inference Endpoint\n",
+ "### Creating SageMaker Inference Endpoint\n",
"\n",
"Next we'll create a SageMaker inference endpoint with the BlazingText container. This endpoint will be compatible with the pre-trained models available from FastText and can be used for inference directly without any modification. The inference endpoint works with content-type of `application/json`."
]
@@ -298,7 +298,7 @@
{
"cell_type": "code",
"execution_count": 6,
- "id": "industrial-bookmark",
+ "id": "d706f181",
"metadata": {
"execution": {
"iopub.execute_input": "2021-06-03T00:15:14.167375Z",
@@ -343,7 +343,7 @@
},
{
"cell_type": "markdown",
- "id": "ranging-forty",
+ "id": "7ee9fff3",
"metadata": {
"papermill": {
"duration": null,
@@ -361,7 +361,7 @@
{
"cell_type": "code",
"execution_count": 7,
- "id": "alone-catholic",
+ "id": "a36e838c",
"metadata": {
"papermill": {
"duration": null,
@@ -386,7 +386,7 @@
{
"cell_type": "code",
"execution_count": 8,
- "id": "settled-longitude",
+ "id": "d294b310",
"metadata": {
"papermill": {
"duration": null,
@@ -413,7 +413,7 @@
},
{
"cell_type": "markdown",
- "id": "mechanical-circular",
+ "id": "165bc8e1",
"metadata": {
"papermill": {
"duration": null,
@@ -431,7 +431,7 @@
{
"cell_type": "code",
"execution_count": 9,
- "id": "close-scope",
+ "id": "ab6aaab6",
"metadata": {
"papermill": {
"duration": null,
@@ -465,7 +465,7 @@
},
{
"cell_type": "markdown",
- "id": "alike-contractor",
+ "id": "8ee099c1",
"metadata": {
"papermill": {
"duration": null,
@@ -484,7 +484,7 @@
{
"cell_type": "code",
"execution_count": 10,
- "id": "respected-engineer",
+ "id": "6aa4cc70",
"metadata": {
"papermill": {
"duration": null,
@@ -502,7 +502,7 @@
},
{
"cell_type": "markdown",
- "id": "regular-gothic",
+ "id": "3a645cc5",
"metadata": {
"papermill": {
"duration": null,
diff --git a/introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb b/introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb
index d9f9a3c0b9..3de0ca1d54 100644
--- a/introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb
+++ b/introduction_to_amazon_algorithms/xgboost_mnist/xgboost_mnist.ipynb
@@ -436,7 +436,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Set up hosting for the model\n",
+ "## Set up hosting for the model\n",
"In order to set up hosting, we have to import the model from training to hosting. The step below demonstrated hosting the model generated from the distributed training job. Same steps can be followed to host the model obtained from the single machine job. \n",
"\n",
"### Import model into hosting\n",
diff --git a/introduction_to_applying_machine_learning/README.md b/introduction_to_applying_machine_learning/README.md
index e39c21994d..9b40687dc6 100644
--- a/introduction_to_applying_machine_learning/README.md
+++ b/introduction_to_applying_machine_learning/README.md
@@ -5,7 +5,6 @@
These examples provide a gentle introduction to machine learning concepts as they are applied in practical use cases across a variety of sectors.
- [Predicting Customer Churn](xgboost_customer_churn) uses customer interaction and service usage data to find those most likely to churn, and then walks through the cost/benefit trade-offs of providing retention incentives. This uses Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost) to create a highly predictive model.
-- [Time-series Forecasting](linear_time_series_forecast) generates a forecast for topline product demand using Amazon SageMaker's Linear Learner algorithm.
- [Cancer Prediction](breast_cancer_prediction) predicts Breast Cancer based on features derived from images, using SageMaker's Linear Learner.
- [Ensembling](ensemble_modeling) predicts income using two Amazon SageMaker models to show the advantages in ensembling.
- [Video Game Sales](video_game_sales) develops a binary prediction model for the success of video games based on review scores.
diff --git a/sagemaker-python-sdk/tensorflow_serving_using_elastic_inference_with_your_own_model/tensorflow_serving_pretrained_model_elastic_inference.ipynb b/sagemaker-python-sdk/tensorflow_serving_using_elastic_inference_with_your_own_model/tensorflow_serving_pretrained_model_elastic_inference.ipynb
index a51d5cc5fb..decc9aeebd 100644
--- a/sagemaker-python-sdk/tensorflow_serving_using_elastic_inference_with_your_own_model/tensorflow_serving_pretrained_model_elastic_inference.ipynb
+++ b/sagemaker-python-sdk/tensorflow_serving_using_elastic_inference_with_your_own_model/tensorflow_serving_pretrained_model_elastic_inference.ipynb
@@ -114,7 +114,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Deploy the trained Model to an Endpoint with an attached EI accelerator\n",
+ "## Deploy the trained Model to an Endpoint with an attached EI accelerator\n",
"\n",
"The `deploy()` method creates an endpoint which serves prediction requests in real-time.\n",
"\n",
@@ -148,7 +148,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Invoke the Endpoint to get inferences\n",
+ "## Invoke the Endpoint to get inferences\n",
"\n",
"Invoking prediction:"
]
@@ -175,7 +175,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Delete the Endpoint\n",
+ "## Delete the Endpoint\n",
"\n",
"After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it."
]
diff --git a/training/algorithms.rst b/training/algorithms.rst
index 2a8941bfc5..10e68afe50 100644
--- a/training/algorithms.rst
+++ b/training/algorithms.rst
@@ -105,15 +105,6 @@ deepar
../introduction_to_amazon_algorithms/deepar_electricity/DeepAR-Electricity
../introduction_to_applying_machine_learning/deepar_chicago_traffic_violations/deepar_chicago_traffic_violations
-
-linear_learner
---------------
-
-.. toctree::
- :maxdepth: 1
-
- ../introduction_to_applying_machine_learning/linear_time_series_forecast/linear_time_series_forecast
-
Supervised learning algorithms
====================================
diff --git a/training/distributed_training/index.rst b/training/distributed_training/index.rst
index 2b1859f2b1..947fd261c6 100644
--- a/training/distributed_training/index.rst
+++ b/training/distributed_training/index.rst
@@ -94,7 +94,7 @@ Use MPI on SageMaker
.. toctree::
:maxdepth: 1
- mpi_on_sagemaker/intro
+ mpi_on_sagemaker/intro/mpi_demo
.. _pytorch-distributed:
diff --git a/training/frameworks.rst b/training/frameworks.rst
index e61f3ce11d..44510922e9 100644
--- a/training/frameworks.rst
+++ b/training/frameworks.rst
@@ -47,7 +47,6 @@ PyTorch
../frameworks/pytorch/get_started_mnist_train
../frameworks/pytorch/get_started_mnist_deploy
../sagemaker-python-sdk/pytorch_lstm_word_language_model/pytorch_rnn
- ../sagemaker-python-sdk/pytorch_lstm_word_language_model/pytorch_rnn
R