
Training from warm start with multiple GPUs #251

Merged
merged 12 commits into master from ohinds-multi_gpu_warm_start on Aug 18, 2023

Conversation

ohinds
Contributor

@ohinds commented Jul 21, 2023

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Summary

Enable training from a warm start. The TensorFlow distribution strategy is now maintained as a member variable of the estimator, so a model can be loaded and then trained under the same strategy object.

Resolves #239
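
A minimal sketch of the pattern described above, for illustration only: the `Estimator` class, method names, and compile settings here are hypothetical and do not reflect the actual nobrainer API. The point is that the strategy is created once, stored on the instance, and reused for both model loading and training.

```python
import tensorflow as tf


class Estimator:
    """Illustrative wrapper that keeps the distribution strategy as a member."""

    def __init__(self, multi_gpu=False):
        # Create the distribution strategy once and keep it on the instance
        # so every later call reuses the same strategy object.
        self.strategy = (
            tf.distribute.MirroredStrategy()
            if multi_gpu
            else tf.distribute.get_strategy()  # default (no-op) strategy
        )
        self.model = None

    def load(self, model_path):
        # Load the saved model inside the stored strategy's scope so its
        # variables are created as distributed (mirrored) variables.
        with self.strategy.scope():
            self.model = tf.keras.models.load_model(model_path, compile=False)
            self.model.compile(optimizer="adam", loss="mse")

    def fit(self, dataset, epochs=1):
        # Training runs under the same strategy the model was loaded with,
        # avoiding the strategy mismatch that broke warm-start training
        # when multi_gpu=True.
        return self.model.fit(dataset, epochs=epochs)
```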

Checklist

  • I have added tests to cover my changes
  • I have updated documentation (if necessary)

Acknowledgment

  • I acknowledge that this contribution will be available under the Apache 2 license.

@satra satra merged commit dc5461c into master Aug 18, 2023
@hvgazula hvgazula deleted the ohinds-multi_gpu_warm_start branch March 8, 2024 15:58
Development

Successfully merging this pull request may close these issues.

Training a model with warm start fails using multi_gpu=True