Commit

fix advanced_functionality/search/ml_experiment_management_using_search.ipynb
EC2 Default User committed Apr 8, 2022
1 parent 5525f98 commit c664b2a
Showing 1 changed file with 5 additions and 5 deletions.
@@ -35,7 +35,7 @@
"tags": []
},
"source": [
"# Introduction\n",
"## Introduction\n",
"\n",
"Welcome to our example introducing Amazon SageMaker Search! Amazon SageMaker Search lets you quickly find and evaluate the most relevant model training runs from potentially hundreds and thousands of your Amazon SageMaker model training jobs.\n",
"Developing a machine learning model requires continuous experimentation, trying new learning algorithms and tuning hyper parameters, all the while observing the impact of such changes on model performance and accuracy. This iterative exercise often leads to explosion of hundreds of model training experiments and model versions, slowing down the convergence and discovery of “winning” model. In addition, the information explosion makes it very hard down the line to trace back the lineage of a model version i.e. the unique combination of datasets, algorithms and parameters that brewed that model in the first place. \n",
@@ -292,7 +292,7 @@
"tags": []
},
"source": [
"# Training the linear model\n",
"## Training the linear model\n",
"Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. First, let's specify our algorithm container. More details on algorithm containers can be found in [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html)."
]
},
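The cell above only describes this step in prose; as a point of reference, here is a minimal sketch (not the notebook's actual code) of looking up the built-in Linear Learner container and configuring a training job with the SageMaker Python SDK v2. The bucket, prefix, instance type, and hyperparameter values are illustrative placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.image_uris import retrieve

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes this runs inside a SageMaker notebook

# Look up the regional registry path of the built-in Linear Learner container.
container = retrieve("linear-learner", session.boto_region_name)

linear = Estimator(
    container,
    role=role,
    instance_count=1,
    instance_type="ml.c5.xlarge",
    output_path="s3://my-bucket/linear-mnist/output",  # placeholder bucket/prefix
    sagemaker_session=session,
)

# Illustrative hyperparameters; mini_batch_size is the value varied across runs in this example.
linear.set_hyperparameters(
    feature_dim=784,
    predictor_type="binary_classifier",
    mini_batch_size=100,
)
# linear.fit({"train": "s3://my-bucket/linear-mnist/train"})  # placeholder training channel
```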
@@ -457,7 +457,7 @@
"tags": []
},
"source": [
"# Use Amazon SageMaker Search to organize and evaluate experiments\n",
"## Use Amazon SageMaker Search to organize and evaluate experiments\n",
"Usually you will experiment with tuning multiple hyperparameters or even try new learning algorithms and training datasets resulting in potentially hundreds of model training runs and model versions. However, for the sake of simplicity, we are only tuning mini_batch_size in this example, trying only three different values resulting in as many model versions. Now we will use [Search](https://docs.aws.amazon.com/sagemaker/latest/dg/search.html) to **group together** the three model training runs and **evaluate** the best performing model by ranking and comparing them on a metric of our choice. \n",
"\n",
"**For grouping** the relevant model training runs together, we will search the model training jobs by the unique label or tag that we have been using as a tracking label to track our experiments. \n",
@@ -556,7 +556,7 @@
"tags": []
},
"source": [
"# Set up hosting for the model\n",
"## Set up hosting for the model\n",
"Now that we've found our best performing model (in this example the one with mini_batch_size=100), we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow out to make predictions (or inference) from the model dyanamically."
]
},
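For the deployment step above, a minimal sketch of wrapping the winning job's artifacts in a SageMaker `Model` and standing up a real-time endpoint is shown below; the artifact path and endpoint name are placeholders, and the notebook itself may deploy directly from the estimator instead.

```python
import sagemaker
from sagemaker.image_uris import retrieve
from sagemaker.model import Model

session = sagemaker.Session()
role = sagemaker.get_execution_role()
container = retrieve("linear-learner", session.boto_region_name)

# Artifacts of the best training job found via Search (placeholder path).
best_job_artifacts = "s3://my-bucket/linear-mnist/output/best-job/output/model.tar.gz"

model = Model(
    image_uri=container,
    model_data=best_job_artifacts,
    role=role,
    sagemaker_session=session,
)

# Create a real-time hosted endpoint serving the model.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="linear-mnist-best-model",  # hypothetical endpoint name
)
```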
@@ -601,7 +601,7 @@
"tags": []
},
"source": [
"# Tracing the lineage of a model starting from an endpoint\n",
"## Tracing the lineage of a model starting from an endpoint\n",
"Now we will present an example of how you can use the Amazon SageMaker Search to trace the antecedents of a model deployed at an endpoint i.e. unique combination of algorithms, datasets, and parameters that brewed the model in first place."
]
},
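A minimal sketch of that lineage trace with boto3 follows: walk from the endpoint to its model artifacts, then search for the training job whose output matches those artifacts. The endpoint name is a placeholder, and the searchable field names are assumptions based on the SageMaker Search documentation.

```python
import boto3

smclient = boto3.client("sagemaker")
endpoint_name = "linear-mnist-best-model"  # hypothetical endpoint name

# Endpoint -> endpoint config -> model -> model artifacts in S3.
endpoint = smclient.describe_endpoint(EndpointName=endpoint_name)
config = smclient.describe_endpoint_config(EndpointConfigName=endpoint["EndpointConfigName"])
model_name = config["ProductionVariants"][0]["ModelName"]
model = smclient.describe_model(ModelName=model_name)
artifacts = model["PrimaryContainer"]["ModelDataUrl"]

# Model artifacts -> training job: search for the job that produced these artifacts.
results = smclient.search(
    Resource="TrainingJob",
    SearchExpression={
        "Filters": [
            {"Name": "ModelArtifacts.S3ModelArtifacts", "Operator": "Equals", "Value": artifacts}
        ]
    },
)

for item in results["Results"]:
    job = item["TrainingJob"]
    # The training job record carries the lineage: algorithm image, input datasets, hyperparameters.
    print(job["TrainingJobName"])
    print(job["AlgorithmSpecification"]["TrainingImage"])
    print(job["InputDataConfig"], job["HyperParameters"])
```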
