feat: add seq2seq forecasting training job #1196
Conversation
Overall LGTM, just one minor comment.
LGTM, thanks!
Force-pushed from 7f16202 to e16cf4e.
Force-pushed from 22098ca to 2cf47c9.
```python
        self._optimization_objective = optimization_objective
        self._additional_experiments = []

    def run(
```
Can you just remove the `run` override and defer to super?
If we remove this, Sphinx will not generate documentation for the `run()` function. It can pull the docstring from the parent class (tested with `nox -s docs`), but I couldn't find a way to make it document the function signature other than by writing it out manually.
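For context, sphinx.ext.autodoc has an `inherited-members` option that controls whether base-class methods appear in a subclass's docs at all; a minimal `conf.py` sketch (whether this resolves the signature-rendering issue in this repo's docs build is untested here):

```python
# conf.py -- sketch only, assuming a stock sphinx.ext.autodoc setup.
extensions = ["sphinx.ext.autodoc"]

# Ask autodoc to document members inherited from base classes, so run()
# is listed on subclasses that don't override it.
autodoc_default_options = {
    "members": True,
    "inherited-members": True,
}
```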
Hm, that's quite annoying. Let me ask the SDK team for more solutions.
It looks like you can simplify both `AutoMLForecastingTrainingJob` and `SequenceToSequencePlusForecastingTrainingJob` by removing all methods, since they are exactly the same as the super's implementation.

We also need to add system tests for these. See `tests/system/aiplatform/test_e2e_tabular.py` for an example.

Regarding my last comment, you can simplify to:
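A plausible sketch of that simplification, assuming each subclass reduces to the two values the PR description calls out (`model_type` and `training_task_definition`); the attribute names and schema URI below are illustrative, not copied from the repo:

```python
# Illustrative sketch: with the shared logic on _ForecastingTrainingJob,
# each concrete job only declares what differs between model types.
class SequenceToSequencePlusForecastingTrainingJob(_ForecastingTrainingJob):
    """Trains a seq2seq-plus forecasting model."""

    _model_type = "Seq2Seq"  # assumed value
    _training_task_definition = (
        "gs://google-cloud-aiplatform/schema/trainingjob/definition/"
        "seq2seq_plus_time_series_forecasting_1.0.0.yaml"  # illustrative URI
    )
```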
I ran the unit tests and it works fine. Otherwise, everything else looks pretty good.
Thanks for the review @ivanmkc! Quick question: how will simplifying the way you suggested impact the documentation? Is there a way to check/ensure that there will be no impact? For example, does the tool used to generate the docs have an option to document functions inherited from `super()`?
Force-pushed from 32fb0d4 to 7577c31.
LGTM if system tests pass.
Moves AutoMLForecasting training logic into a base class so we can reuse it for other forecasting models. Adds tests for a future seq2seq training job.
Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

- [x] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/python-aiplatform/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)

Fixes b/229909845 🦕

---

Adds a `SequenceToSequencePlusForecastingTrainingJob` to training jobs. This job has the exact same signature as `AutoMLForecastingTrainingJob`, but we are creating a separate job in case the two models diverge in the future.

The logic for `AutoMLForecastingTrainingJob` has been moved to a new abstract base class, `_ForecastingTrainingJob`. The only things that differ between the seq2seq and AutoML training jobs that extend it are the `model_type` and `training_task_definition`.
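For a sense of how the new job would be used, a minimal usage sketch (this assumes the class is exported at the package root like `AutoMLForecastingTrainingJob`, and that the `run()` arguments mirror that job's signature; project, dataset, and column values are placeholders):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Per this PR, the seq2seq job has the same signature as
# AutoMLForecastingTrainingJob.
job = aiplatform.SequenceToSequencePlusForecastingTrainingJob(
    display_name="seq2seq-forecasting",
    optimization_objective="minimize-rmse",
)

# Placeholder: an existing Vertex AI time-series dataset resource ID.
dataset = aiplatform.TimeSeriesDataset("1234567890")

model = job.run(
    dataset=dataset,
    target_column="sales",  # placeholder column names below
    time_column="date",
    time_series_identifier_column="store_id",
    unavailable_at_forecast_columns=["sales"],
    available_at_forecast_columns=["date"],
    forecast_horizon=30,
    data_granularity_unit="day",
    data_granularity_count=1,
)
```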