feat: add support for distributed custom training #66
Conversation
LGTM + tests pass, added a note about the default machine_type
""" | ||
|
||
replica_count: int = 0 | ||
machine_type: str = "n1-standard-2" |
`n1-standard-2` is supported in Deploy but not for Training; `n1-standard-4` is currently the lowest common denominator.
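For concreteness, a minimal sketch of the suggested fix. Only the two field names and the new default come from this thread; the `NamedTuple` wrapper and the class name are illustrative assumptions, not the PR's actual code:

```python
from typing import NamedTuple


class _MachineSpec(NamedTuple):
    """Illustrative per-pool machine spec; field names mirror the diff above."""

    # Per the review comment: n1-standard-4 is the lowest machine type
    # supported by both Training and Deploy, so it is a safer default
    # than n1-standard-2.
    replica_count: int = 0
    machine_type: str = "n1-standard-4"
```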
Thanks! That's good to know. I was working off the machine_resource.MachineSpec proto comments.
Done.
Adds support for chief-worker distributed training to custom training. If `replica_count` > 1, one replica is provisioned as the chief and the remainder are provisioned as workers (see the sketch below).
The `_DistributedTrainingSpec` can also support more custom provisioning. We will have a follow-up PR once we decide how to expose more custom provisioning on the API surface.
Note: if the library upgrades to Python 3.7, we should switch the Spec classes to dataclasses.
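For illustration, a minimal sketch of the chief-worker split described above. The helper name, the `NamedTuple` layout, and the return shape are assumptions, not the SDK's actual API surface (that is deferred to the follow-up PR):

```python
from typing import List, NamedTuple


class _MachineSpec(NamedTuple):
    """Illustrative per-pool spec, mirroring the sketch earlier in this thread."""

    replica_count: int = 0
    machine_type: str = "n1-standard-4"


def chief_worker_pools(replica_count: int, machine_type: str) -> List[_MachineSpec]:
    """Split replica_count into one chief pool plus a worker pool.

    Hypothetical helper for illustration only.
    """
    if replica_count < 1:
        return []
    pools = [_MachineSpec(replica_count=1, machine_type=machine_type)]  # chief
    if replica_count > 1:
        # Everything beyond the chief is provisioned as workers.
        pools.append(
            _MachineSpec(replica_count=replica_count - 1, machine_type=machine_type)
        )
    return pools
```

For example, `chief_worker_pools(4, "n1-standard-4")` yields a chief pool of one replica and a worker pool of three. `NamedTuple` is used here only as a stand-in; per the note above, the Spec classes could become dataclasses once Python 3.7 is the floor.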
Fixes b/172369809 🦕