This repository has been archived by the owner on May 25, 2022. It is now read-only.

[RDST-111] keep tensorboard directory clean during s3 sync #3

Merged (2 commits) on Jan 24, 2018
27 changes: 26 additions & 1 deletion src/sagemaker/tensorflow/estimator.py
@@ -38,6 +38,7 @@ def __init__(self, estimator, logdir=None):
threading.Thread.__init__(self)
self.event = threading.Event()
self.estimator = estimator
self.aws_sync_dir = tempfile.mkdtemp()
Review comment:
My gut says this should be self._aws_sync_dir, but it's not clear to me why some fields are considered public while others aren't, so it probably doesn't matter.

self.logdir = logdir or tempfile.mkdtemp()

@staticmethod
@@ -47,6 +48,29 @@ def _cmd_exists(cmd):
for path in os.environ["PATH"].split(os.pathsep)
)

@staticmethod
def _sync_directories(from_directory, to_directory):
        """Sync to_directory with from_directory by overwriting each file in
        to_directory with its new contents. Why do this? Because TensorBoard
        picks up the temp files that `aws s3 sync` creates and then stops
        reading the correct tfevents files.
Review comment:
Is there a relevant TensorBoard issue we could include here? I'm just curious if that could be an easier reference for the future in understanding this and knowing if/when it's fixed, versus using git blame to track down this discussion.

This is the closest one I came across, though it's slightly different: tensorflow/tensorboard#70

Author reply:

Yeah I still think it's probably related to tensorflow/tensorboard#349. I'll put that in the docstring.


Args:
from_directory (str): The directory with updated files.
to_directory (str): The directory to be synced.
"""
for root, dirs, files in os.walk(from_directory):
Review comment:
I think you'll want to check to see if to_directory exists before this loop, then create it if it doesn't. From what I can tell, running aws s3 sync src_dir dst_dir will create dst_dir for you, but it looks like we're only attempting to create subdirectories of to_directory rather than the directory itself.

to_root = root.replace(from_directory, to_directory)
for directory in dirs:
to_child_dir = os.path.join(to_root, directory)
if not os.path.exists(to_child_dir):
os.mkdir(to_child_dir)
for fname in files:
from_file = os.path.join(root, fname)
to_file = os.path.join(to_root, fname)
with open(from_file, 'rb') as a, open(to_file, 'wb') as b:
b.write(a.read())
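
As a standalone illustration of this whole-file copy approach, the sketch below is a hypothetical adaptation (names and the usage scaffolding are invented, not the PR's code): it adds the destination-root guard the review comment above asks about, since `os.walk` only yields subdirectories of the source, never the destination root itself.

```python
import os
import tempfile


def sync_directories(from_directory, to_directory):
    # Create the destination root first (the guard suggested in review);
    # the walk below only creates subdirectories of to_directory.
    if not os.path.exists(to_directory):
        os.makedirs(to_directory)
    for root, dirs, files in os.walk(from_directory):
        to_root = root.replace(from_directory, to_directory)
        for directory in dirs:
            to_child_dir = os.path.join(to_root, directory)
            if not os.path.exists(to_child_dir):
                os.mkdir(to_child_dir)
        for fname in files:
            from_file = os.path.join(root, fname)
            to_file = os.path.join(to_root, fname)
            # Whole-file overwrite: the reader of to_directory never
            # sees the partial temp files that `aws s3 sync` would leave.
            with open(from_file, 'rb') as a, open(to_file, 'wb') as b:
                b.write(a.read())


# Usage: mirror a staging directory into a logdir that does not exist yet.
src = tempfile.mkdtemp()
os.mkdir(os.path.join(src, 'run1'))
with open(os.path.join(src, 'run1', 'events.out.tfevents'), 'wb') as f:
    f.write(b'payload')
dst = os.path.join(tempfile.mkdtemp(), 'logdir')
sync_directories(src, dst)
with open(os.path.join(dst, 'run1', 'events.out.tfevents'), 'rb') as f:
    print(f.read())  # b'payload'
```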

def validate_requirements(self):
"""Ensure that TensorBoard and the AWS CLI are installed.

@@ -98,8 +122,9 @@ def run(self):
while not self.estimator.checkpoint_path:
self.event.wait(1)
while not self.event.is_set():
-                args = ['aws', 's3', 'sync', self.estimator.checkpoint_path, self.logdir]
+                args = ['aws', 's3', 'sync', self.estimator.checkpoint_path, self.aws_sync_dir]
subprocess.call(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
self._sync_directories(self.aws_sync_dir, self.logdir)
self.event.wait(10)
tensorboard_process.terminate()
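
The two-stage structure of this loop (pull from S3 into a staging directory, then whole-file copy into the logdir) can be sketched in isolation. The class and callback names below are hypothetical, and the S3 download and directory copy are replaced by injected stubs, so this shows only the control flow, not the PR's actual implementation:

```python
import threading


class SyncLoop(threading.Thread):
    """Hypothetical sketch of the run() loop: alternate a remote pull
    into a staging dir with a whole-file copy into the logdir."""

    def __init__(self, pull_fn, copy_fn, interval=0.01):
        threading.Thread.__init__(self)
        self.event = threading.Event()
        self.pull_fn = pull_fn    # stands in for the `aws s3 sync` subprocess call
        self.copy_fn = copy_fn    # stands in for _sync_directories
        self.interval = interval

    def run(self):
        while not self.event.is_set():
            self.pull_fn()        # stage 1: download into the staging dir
            self.copy_fn()        # stage 2: overwrite files in the logdir
            self.event.wait(self.interval)

    def stop(self):
        self.event.set()


# Usage: record one pull/copy cycle, then stop the loop from copy_fn
# so the test is deterministic.
calls = []
loop = SyncLoop(lambda: calls.append('pull'),
                lambda: (calls.append('copy'), loop.stop()))
loop.start()
loop.join()
print(calls)  # ['pull', 'copy']
```

Stopping via an `Event` rather than a bare flag means the final `wait(interval)` returns immediately once `stop()` is called, so shutdown is prompt even with a long sync interval.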
