Fixing grammar and spelling
Lokiiiiii committed Jun 24, 2022
1 parent 607c7ff commit 8f21b1a
Showing 1 changed file with 7 additions and 7 deletions.
@@ -18,7 +18,7 @@
 "2. [Working with the Caltech-256 dataset](#Working-with-the-Caltech-256-dataset)\n",
 " 1. [Installation](#Installation)\n",
 " 2. [SageMaker environment](#SageMaker-environment)\n",
-"3. [How effective is SageMaker Training Compiler ?](#How-effective-is-SageMaker-Training-Compiler-?)\n",
+"3. [How effective is SageMaker Training Compiler?](#How-effective-is-SageMaker-Training-Compiler-?)\n",
 " 1. [SageMaker Training Job](#SageMaker-Training-Job)\n",
 " 2. [Training Setup](#Training-Setup)\n",
 " 3. [Experimenting with Native TensorFlow](#Experimenting-with-Native-TensorFlow)\n",
@@ -167,7 +167,7 @@
 "id": "0b4decc2",
 "metadata": {},
 "source": [
-"## How effective is SageMaker Training Compiler ?\n",
+"## How effective is SageMaker Training Compiler?\n",
 "\n",
 "The effectiveness of SageMaker Training Compiler depends on the model architecture, model size, input shape, and the training loop. Please refer to our [Best Practices](https://docs.aws.amazon.com/sagemaker/latest/dg/training-compiler-tips-pitfalls.html) documentation to understand how to get the most out of your training job using SageMaker Training Compiler. In this section, we will compare and contrast a training job with and without SageMaker Training Compiler.\n",
 "\n",
@@ -258,9 +258,9 @@
 "\n",
 "We can limit the number of training jobs spawned concurrently in the ```max_parallel_jobs``` argument and limit the total number of training jobs spawned in the ```max_jobs``` argument.\n",
 "\n",
-"For more information regarding SageMaker Hyperparameter Tuner refer to [the documentation]()\n",
+"For more information regarding SageMaker Hyperparameter Tuner refer to [Perform Automatic Model Tuning with SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html)\n",
 "\n",
-"In the example below, we are trying to find the best batch size between 32 and 80 that will result in the smallest possible epoch latency, by launching 40 training jobs, 10 at a time. The range for batch sizes is our best guess. You can always [reuse and restart a tuning job with an extended range](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-warm-start.html).\n"
+"In the example below, we are trying to find the best batch size between 32 and 80 that will result in the smallest possible epoch latency, by launching 40 training jobs, 10 at a time. The range for batch sizes is our best guess. You can always reuse and restart a tuning job with an extended range, as explained in [Run a Warm Start Hyperparameter Tuning Job](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-warm-start.html).\n"
 ]
},
{
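For readers following along, here is a minimal sketch of the tuner this hunk describes: batch sizes 32 to 80, 40 jobs total, 10 at a time, minimizing epoch latency. The estimator name, the objective metric name, and its regex are assumptions; the notebook passes its real values in through `**tuner_args`, which this diff does not show.

```python
from sagemaker.tuner import HyperparameterTuner, IntegerParameter

# Sketch of the native-TensorFlow tuning job described above.
# `native_estimator` and the metric definition are assumed, not from this commit.
native_tuner = HyperparameterTuner(
    estimator=native_estimator,
    objective_metric_name="epoch_latency",
    objective_type="Minimize",
    metric_definitions=[
        {"Name": "epoch_latency", "Regex": "epoch_latency: ([0-9.]+)"}
    ],
    hyperparameter_ranges={"BATCH_SIZE": IntegerParameter(32, 80, "Linear")},
    max_jobs=40,           # total training jobs spawned
    max_parallel_jobs=10,  # jobs running concurrently
)

# Launch asynchronously, as the notebook does.
native_tuner.fit(inputs=destn, wait=False)
```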
@@ -298,7 +298,7 @@
 " **tuner_args,\n",
 ")\n",
 "\n",
-"# Start the tuning job with the specified input da\n",
+"# Start the tuning job with the specified input data\n",
 "native_tuner.fit(inputs=destn, wait=False)\n",
 "\n",
 "# Save the name of the tuning job\n",
@@ -319,7 +319,7 @@
 " }\n",
 "```\n",
 "\n",
-"This can restrict the search space to just 6 training jobs as opposed to 40 !"
+"This can restrict the search space to just 6 training jobs as opposed to 40!"
 ]
},
{
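The configuration that achieves the 6-job restriction is elided from this diff. One way to cap a search at exactly six jobs, offered here only as a hedged sketch and not necessarily what the notebook does, is a categorical range with six candidate values and a matching `max_jobs`:

```python
from sagemaker.tuner import CategoricalParameter

# Hypothetical: six candidate batch sizes means at most six training jobs
# when paired with max_jobs=6. The specific values here are assumptions.
restricted_ranges = {"BATCH_SIZE": CategoricalParameter([32, 40, 48, 56, 64, 72])}
```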
@@ -364,7 +364,7 @@
 "source": [
 "from sagemaker.tuner import HyperparameterTuner, IntegerParameter\n",
 "\n",
-"# Define the tunung job\n",
+"# Define the tuning job\n",
 "optimized_tuner = HyperparameterTuner(\n",
 " estimator=optimized_estimator,\n",
 " hyperparameter_ranges={\"BATCH_SIZE\": IntegerParameter(20, 60, \"Linear\")},\n",
