
Refactor the Debugger detect_stalled_training_job_and_stop.ipynb notebook #1592

Merged
merged 25 commits on Oct 8, 2020
Changes from 16 commits
Commits
25 commits
b162b00
publish BYOC with Debugger notebook
mchoi8739 Aug 27, 2020
19e0df9
some test change
mchoi8739 Aug 27, 2020
e4acede
revert the kernel names in the metadata
mchoi8739 Aug 27, 2020
f1db9c5
fix typos
mchoi8739 Aug 27, 2020
4679ff0
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Aug 31, 2020
76931e2
incorporate feedback
mchoi8739 Aug 31, 2020
19f617f
incorporate comments
mchoi8739 Aug 31, 2020
e19f33f
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Sep 11, 2020
08198d9
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Sep 17, 2020
115fa60
pin to pysdk v1
mchoi8739 Sep 25, 2020
cb7707d
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Sep 25, 2020
cc12ef8
remove installation output logs
mchoi8739 Sep 25, 2020
b47ba73
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Sep 25, 2020
879e1fb
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Oct 6, 2020
9706e37
refactor the stalled training job notebook
mchoi8739 Oct 6, 2020
76b93cf
remove unnecessary module imports / minor fix
mchoi8739 Oct 6, 2020
a431d51
incorporate feedback
mchoi8739 Oct 6, 2020
b012054
minor fix
mchoi8739 Oct 6, 2020
50d0d8f
fix typo
mchoi8739 Oct 6, 2020
a2d61b8
minor fix
mchoi8739 Oct 6, 2020
ff111f7
fix unfinished sentence
mchoi8739 Oct 6, 2020
8b2585d
incorporate feedback
mchoi8739 Oct 7, 2020
5a8018d
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Oct 7, 2020
050499e
minor fix
mchoi8739 Oct 7, 2020
5cca0a4
Merge branch 'master' of https://github.com/awslabs/amazon-sagemaker-…
mchoi8739 Oct 7, 2020
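In rough outline, the refactor in this PR swaps the hand-written `base_config` dictionary for the `rule_configs.stalled_training_rule()` helper and moves the `losses` collection into `collections_to_save`. The sketch below shows the approximate request-level shape this produces; field names follow the SageMaker `CreateTrainingJob` API, and all values are illustrative placeholders rather than output captured from the notebook.

```python
# Rough, request-level sketch of what the refactored notebook configures.
# Field names follow the SageMaker CreateTrainingJob API; values here are
# illustrative placeholders, not taken verbatim from a notebook run.
rule_configuration = {
    "RuleConfigurationName": "StalledTrainingRule",
    "RuleParameters": {
        "rule_to_invoke": "StalledTrainingRule",
        "threshold": "120",                      # seconds of inactivity before the rule fires
        "stop_training_on_fire": "True",
        "training_job_name_prefix": "smdebug-stalled-demo-1601000000",
    },
}

# The `losses` collection now travels via collections_to_save, which lands in
# the debug hook configuration rather than in the rule configuration itself.
hook_collection_configurations = [
    {"CollectionName": "losses", "CollectionParameters": {"save_interval": "500"}},
]

print(rule_configuration["RuleParameters"]["threshold"])  # 120
```

Note that every rule parameter value is a string; SageMaker passes rule parameters through the API as string key-value pairs.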
@@ -4,30 +4,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Detect stalled training and stop training job using debugger rule\n",
"# Detect Stalled Training and Stop Training Job Using SageMaker Debugger Rule\n",
" \n",
"This notebook shows you how to use the `StalledTrainingRule` built-in rule. This rule can take an action to stop your training job when it detects inactivity in your training job for a certain time period. This functionality helps you monitor the training job status and avoid redundant resource usage.\n",
"\n",
"In this notebook, we'll show you how you can use StalledTrainingRule rule which can take action like stopping your training job when it finds that there has been no update in training job for certain threshold duration.\n",
"## How `StalledTrainingRule` Works\n",
"\n",
"## How does StalledTrainingRule works?\n",
"Amazon Sagemaker Debugger captures tensors that you want to watch from training jobs on [AWS Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) or your local machine. If you use one of the Debugger-integrated Deep Learning Containers, you don't need to make any changes to your training script to use the functionality of built-in rules. For information about Debugger-supported SageMaker frameworks and versions, see [Debugger-supported framework versions for zero script change](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md#zero-script-change). \n",
"\n",
"Amazon Sagemaker debugger automatically captures tensors from training job which use AWS DLC(tensorflow, pytorch, mxnet, xgboost)[refer doc for supported versions](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md#zero-script-change). StalledTrainingRule keeps watching on emission of tensors like loss. The execution happens outside of training containers. It is evident that if training job is running good and is not stalled it is expected to emit loss and metrics tensors at frequent intervals. If Rule doesn't find new tensors being emitted from training job for threshold period of time, it takes automatic action to issue StopTrainingJob.\n",
"\n",
"#### With no changes to your training script\n",
"If you use one of the SageMaker provided [Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html). [Refer doc for supported framework versions](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md#zero-script-change), then you don't need to make any changes to your training script for activating this rule. Loss tensors will automatically be captured and monitored by the rule.\n",
"\n",
"You can also emit tensors periodically by using [save scalar api of hook](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md#common-hook-api) . \n",
"\n",
"Also look at example how to use save_scalar api [here](https://github.com/awslabs/sagemaker-debugger/blob/master/examples/tensorflow2/scripts/tf_keras_fit_non_eager.py#L42)"
"The Debugger `StalledTrainingRule` watches tensor updates from your training job. If the rule doesn't find new tensors written to the default S3 URI for a threshold period of time, it takes an action to trigger the `StopTrainingJob` API operation. The following code cells set up a SageMaker TensorFlow estimator with the Debugger `StalledTrainingRule` to watch the `losses` pre-built tensor collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"! pip install -q sagemaker"
"### Import SageMaker Python SDK"
]
},
{
@@ -36,21 +28,16 @@
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import os\n",
"import sagemaker\n",
"from sagemaker.tensorflow import TensorFlow\n",
"print(sagemaker.__version__)"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"from sagemaker.debugger import Rule, DebuggerHookConfig, TensorBoardOutputConfig, CollectionConfig\n",
"import smdebug_rulesconfig as rule_configs"
"### Import SageMaker Debugger classes for rule configuration"
]
},
{
@@ -59,24 +46,21 @@
"metadata": {},
"outputs": [],
"source": [
"# define the entrypoint script\n",
"# Below script has 5 minutes sleep, we will create a stalledTrainingRule with 3 minutes of threshold.\n",
"entrypoint_script='src/simple_stalled_training.py'\n",
"\n",
"# these hyperparameters ensure that vanishing gradient will trigger for our tensorflow mnist script\n",
"hyperparameters = {\n",
" \"num_epochs\": \"10\",\n",
" \"lr\": \"10.00\"\n",
"}"
"from sagemaker.debugger import Rule, CollectionConfig, rule_configs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create unique training job prefix\n",
"We will create unique training job name prefix. this prefix would be passed to StalledTrainingRule to identify which training job, rule should take action on once the stalled training rule condition is met.\n",
"Note that, this prefix needs to be unique. If rule doesn't find exactly one job with provided prefix, it will fallback to safe mode and not take action of stop training job. Rule will still emit a cloudwatch event if the rule condition is met. To see details about cloud watch event, check [here](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-debugger/tensorflow_action_on_rule/tf-mnist-stop-training-job.ipynb). "
"### Create a unique training job prefix\n",
"A unique prefix must be specified for `StalledTrainingRule` to identify the exact training job that you want to monitor and stop when the rule detects a stalled training issue.\n",
"If multiple training jobs share the same prefix, the rule may react to the wrong training job. If the rule cannot find exactly one training job with the provided prefix, it falls back to safe mode and does not stop the training job.\n",
"\n",
"The following code cell includes:\n",
"* a code line to create a unique `base_job_name_prefix`\n",
"* a stalled training job rule configuration object\n",
"* a SageMaker TensorFlow estimator configuration with the Debugger `rules` parameter to run the built-in rule"
]
},
{
@@ -85,56 +69,43 @@
"metadata": {},
"outputs": [],
"source": [
"# Append current time to your training job name to generate a unique base_job_name_prefix\n",
"import time\n",
"print(int(time.time()))\n",
"# Note that SageMaker appends the date to your training job name and truncates the provided name to 39 characters. So, we will make\n",
"# sure that we use fewer than 39 characters in the prefix below. Appending the time provides a unique ID.\n",
"base_job_name_prefix= 'smdebug-stalled-demo-' + str(int(time.time()))\n",
"base_job_name_prefix = base_job_name_prefix[:34]\n",
"print(base_job_name_prefix)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stalled_training_job_rule = Rule.sagemaker(\n",
" base_config={\n",
" 'DebugRuleConfiguration': {\n",
" 'RuleConfigurationName': 'StalledTrainingRule', \n",
" 'RuleParameters': {'rule_to_invoke': 'StalledTrainingRule'}\n",
" }\n",
" },\n",
" rule_parameters={\n",
" 'threshold': '120',\n",
" 'training_job_name_prefix': base_job_name_prefix,\n",
" 'stop_training_on_fire' : 'True'\n",
" }, \n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Configure a StalledTrainingRule rule parameter object\n",
"stalled_training_job_rule = [\n",
" Rule.sagemaker(\n",
" base_config=rule_configs.stalled_training_rule(),\n",
" rule_parameters={\n",
" \"threshold\": \"120\", \n",
" \"stop_training_on_fire\": \"True\",\n",
" \"training_job_name_prefix\": base_job_name_prefix\n",
" },\n",
" collections_to_save=[ \n",
" CollectionConfig(\n",
" name=\"losses\", \n",
" parameters={\n",
" \"save_interval\": \"500\"\n",
" } \n",
" )\n",
" ]\n",
" )\n",
"]\n",
"\n",
"# Configure a SageMaker TensorFlow estimator\n",
"estimator = TensorFlow(\n",
" role=sagemaker.get_execution_role(),\n",
" base_job_name=base_job_name_prefix,\n",
" train_instance_count=1,\n",
" train_instance_type='ml.m5.4xlarge',\n",
" entry_point=entrypoint_script,\n",
" #source_dir = 'src',\n",
" entry_point='src/simple_stalled_training.py', # This sample script forces the training job to sleep for 10 minutes\n",
" framework_version='1.15.0',\n",
" py_version='py3',\n",
" train_max_run=3600,\n",
" script_mode=True,\n",
" ## New parameter\n",
" rules = [stalled_training_job_rule]\n",
")\n"
" ## Debugger-specific parameter\n",
" rules = stalled_training_job_rule\n",
")"
]
},
{
@@ -143,12 +114,7 @@
"metadata": {},
"outputs": [],
"source": [
"# After calling fit, SageMaker will spin off 1 training job and 1 rule job for you\n",
"# The rule evaluation status(es) will be visible in the training logs\n",
"# at regular intervals\n",
"# wait=False makes this a fire and forget function. To stream the logs in the notebook leave this out\n",
"\n",
"estimator.fit(wait=True)"
"estimator.fit(wait=False)"
]
},
{
@@ -157,20 +123,17 @@
"source": [
"## Monitoring\n",
"\n",
"SageMaker kicked off rule evaluation job `StalledTrainingRule` as specified in the estimator. \n",
"Given that we've stalled our training script for 10 minutes such that `StalledTrainingRule` is bound to fire and take action StopTrainingJob, we should expect to see the `TrainingJobStatus` as\n",
"`Stopped` once the `RuleEvaluationStatus` for `StalledTrainingRule` changes to `IssuesFound`"
"Once you execute the `estimator.fit()` API, SageMaker initiates a training job in the background, and Debugger initiates a `StalledTrainingRule` rule evaluation job in parallel.\n",
"Because the training script has a couple of lines of code at the end that force a stalled training job for 10 minutes, the `RuleEvaluationStatus` for `StalledTrainingRule` changes to `IssuesFound` in 2 minutes and triggers the `StopTrainingJob` API. The following code cells track the `TrainingJobStatus` until the `SecondaryStatus` returns `Stopped` or `Completed`."
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": [
"# rule job summary gives you the summary of the rule evaluations. You might have to run it over \n",
"# a few times before you start to see all values populated/changing\n",
"estimator.latest_training_job.rule_job_summary()"
"### Print the training job name\n",
"\n",
"The following cell outputs the name and current status of the training job running in the background."
]
},
{
@@ -179,38 +142,21 @@
"metadata": {},
"outputs": [],
"source": [
"# This utility gives the link to monitor the CW event\n",
"def _get_rule_job_name(training_job_name, rule_configuration_name, rule_job_arn):\n",
" \"\"\"Helper function to get the rule job name\"\"\"\n",
" return \"{}-{}-{}\".format(\n",
" training_job_name[:26], rule_configuration_name[:26], rule_job_arn[-8:]\n",
" )\n",
" \n",
"def _get_cw_url_for_rule_job(rule_job_name, region):\n",
" return \"https://{}.console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix\".format(region, region, rule_job_name)\n",
"\n",
"job_name = estimator.latest_training_job.name\n",
"print('Training job name: {}'.format(job_name))\n",
"\n",
"def get_rule_jobs_cw_urls(estimator):\n",
" region = boto3.Session().region_name\n",
" training_job = estimator.latest_training_job\n",
" training_job_name = training_job.describe()[\"TrainingJobName\"]\n",
" rule_eval_statuses = training_job.describe()[\"DebugRuleEvaluationStatuses\"]\n",
" \n",
" result={}\n",
" for status in rule_eval_statuses:\n",
" if status.get(\"RuleEvaluationJobArn\", None) is not None:\n",
" rule_job_name = _get_rule_job_name(training_job_name, status[\"RuleConfigurationName\"], status[\"RuleEvaluationJobArn\"])\n",
" result[status[\"RuleConfigurationName\"]] = _get_cw_url_for_rule_job(rule_job_name, region)\n",
" return result\n",
"client = estimator.sagemaker_session.sagemaker_client\n",
"\n",
"get_rule_jobs_cw_urls(estimator)"
"description = client.describe_training_job(TrainingJobName=job_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After running the last two cells over and until `VanishingGradient` reports `IssuesFound`, we'll attempt to describe the `TrainingJobStatus` for our training job."
"### Output the current job status\n",
"\n",
"The following cell tracks the status of the training job until the `SecondaryStatus` changes to `Stopped` or `Completed`. While training, Debugger collects output tensors from the training job and monitors the training job with the rule. "
]
},
{
@@ -219,24 +165,44 @@
"metadata": {},
"outputs": [],
"source": [
"estimator.latest_training_job.describe()[\"TrainingJobStatus\"]"
"import time\n",
"\n",
"if description['TrainingJobStatus'] != 'Completed':\n",
" while description['SecondaryStatus'] not in {'Stopped', 'Completed'}:\n",
" description = client.describe_training_job(TrainingJobName=job_name)\n",
" primary_status = description['TrainingJobStatus']\n",
" secondary_status = description['SecondaryStatus']\n",
" print('Current job status: [PrimaryStatus: {}, SecondaryStatus: {}] | {} Rule Evaluation Status: {}'\n",
" .format(primary_status, secondary_status, \n",
" estimator.latest_training_job.rule_job_summary()[0][\"RuleConfigurationName\"],\n",
" estimator.latest_training_job.rule_job_summary()[0][\"RuleEvaluationStatus\"]\n",
" )\n",
" )\n",
" time.sleep(15)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Result\n",
"## Conclusion\n",
"\n",
"This notebook attempted to show a very simple setup of how you can use CloudWatch events for your training job to take action on rule evaluation status changes. Learn more about Amazon SageMaker Debugger in the [GitHub Documentation](https://github.com/awslabs/sagemaker-debugger)."
"This notebook showed how you can use the Debugger `StalledTrainingRule` built-in rule for your training job to take action on rule evaluation status changes. To find more information about Debugger, see the [Amazon SageMaker Debugger developer guide](https://docs.aws.amazon.com/sagemaker/latest/dg/train-debugger.html) and the [smdebug GitHub documentation](https://github.com/awslabs/sagemaker-debugger)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "conda_tensorflow_p36",
"language": "python",
"name": "python3"
"name": "conda_tensorflow_p36"
},
"language_info": {
"codemirror_mode": {
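As a standalone complement to the diff, the prefix and rule-parameter logic from the refactored notebook can be sketched in plain Python. The helper name `build_stalled_rule_parameters` and its default values are illustrative (not part of the SageMaker SDK); only the parameter keys `threshold`, `stop_training_on_fire`, and `training_job_name_prefix` come from the notebook.

```python
import time


def build_stalled_rule_parameters(base_job_name_prefix, threshold_seconds=120):
    """Assemble the rule_parameters dict for Debugger's StalledTrainingRule.

    SageMaker passes rule parameters through as strings, so every value is
    stringified here. (Helper name and defaults are illustrative.)
    """
    return {
        "threshold": str(threshold_seconds),
        "stop_training_on_fire": "True",
        "training_job_name_prefix": base_job_name_prefix,
    }


# SageMaker appends a timestamp to the base job name and limits the total
# name length, so the notebook truncates the prefix to 34 characters.
base_job_name_prefix = ("smdebug-stalled-demo-" + str(int(time.time())))[:34]
params = build_stalled_rule_parameters(base_job_name_prefix)

print(len(base_job_name_prefix) <= 34)  # True
print(params["threshold"])              # 120
```

The resulting dict maps directly onto the `rule_parameters` argument of `Rule.sagemaker` shown in the diff above.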