
Commit

Adding explanation for decrease in batch size when tuning with Training Compiler (#3484)
Lokiiiiii authored Jun 29, 2022
1 parent a84c2c8 commit 62de6a1
Showing 1 changed file with 1 addition and 1 deletion.
@@ -329,7 +329,7 @@
"source": [
"### Experimenting with Optimized TensorFlow\n",
"\n",
"Compilation through SageMaker Training Compiler changes the memory footprint of the model. Most commonly, this manifests as a reduction in memory utilization and a consequent increase in the largest batch size that can fit on the GPU. In the example below we will find the new batch size with SageMaker Training Compiler enabled and the resultant latency per epoch.\n",
"Compilation through SageMaker Training Compiler changes the memory footprint of the model. Most commonly, this manifests as a reduction in memory utilization and a consequent increase in the largest batch size that can fit on the GPU. But in some cases, the compiler intelligently promotes caching which leads to increased memory utilization and a consequent decrease in the largest batch size that can fit on the GPU. In the example below we will find the new batch size with SageMaker Training Compiler enabled and the resultant latency per epoch.\n",
"\n",
"**Note:** We recommend you to turn the SageMaker Debugger's profiling and debugging tools off when you use compilation to avoid additional overheads."
]
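
As a rough illustration of the workflow this notebook cell describes, the sketch below shows how a SageMaker TensorFlow estimator might enable Training Compiler while turning off SageMaker Debugger's profiling and debugging hooks. The entry script name, instance type, framework version, and re-tuned batch size are assumptions for the example, not values taken from the commit.

```python
# Minimal sketch, assuming a hypothetical entry script "train.py" and an
# illustrative re-tuned batch size; adjust values for your own setup.
from sagemaker import get_execution_role
from sagemaker.tensorflow import TensorFlow, TrainingCompilerConfig

estimator = TensorFlow(
    entry_point="train.py",                   # hypothetical training script
    role=get_execution_role(),
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.9.1",
    py_version="py39",
    hyperparameters={"batch_size": 48, "epochs": 10},  # batch size found by re-tuning (illustrative)
    compiler_config=TrainingCompilerConfig(),          # enable SageMaker Training Compiler
    disable_profiler=True,                    # turn off SageMaker Debugger profiling ...
    debugger_hook_config=False,               # ... and debugging hooks to avoid extra overhead
)

estimator.fit()
```

Comparing the per-epoch latency of this compiled run against a baseline run without `compiler_config` (and with its own largest workable batch size) is how the notebook measures the effect of compilation.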
