feat: support model monitoring for batch prediction in Vertex SDK #1570
Conversation
        Optional. The objective config for model monitoring. Passing this parameter enables
        monitoring on the model associated with this batch prediction job.
    model_monitoring_alert_config (aiplatform.model_monitoring.EmailAlertConfig):
        Optional. Configures how model monitoring alerts are sent to the user. Right now
Are these docstrings copied from the source at https://github.com/googleapis/googleapis/tree/master/google/cloud/aiplatform? Will we be able to remember to update this when/if alert types other than email become supported?
They're not directly copied from GAPIC. But Jing's team also confirmed that there are no plans for additional alert configs.
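For context, here is a minimal sketch of how the new batch prediction monitoring parameters might be used once this PR lands. The parameter name model_monitoring_alert_config comes from the docstring above; model_monitoring_objective_config, the project, bucket, model resource name, and threshold values are illustrative placeholders, and the surface may change before GA.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Email is currently the only supported alert type.
alert_config = model_monitoring.EmailAlertConfig(user_emails=["user@example.com"])

# Detect skew between a training table and the batch prediction input.
objective_config = model_monitoring.ObjectiveConfig(
    skew_detection_config=model_monitoring.SkewDetectionConfig(
        data_source="bq://my-project.my_dataset.training_table",  # placeholder training data
        target_field="churned",                                   # placeholder label column
        skew_thresholds={"age": 0.3},                             # placeholder feature threshold
    )
)

job = aiplatform.BatchPredictionJob.create(
    job_display_name="bp-job-with-monitoring",
    model_name="projects/123/locations/us-central1/models/456",  # placeholder model resource
    gcs_source="gs://my-bucket/input.jsonl",
    gcs_destination_prefix="gs://my-bucket/output/",
    model_monitoring_objective_config=objective_config,  # enables monitoring for this job
    model_monitoring_alert_config=alert_config,
)
```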
google/cloud/aiplatform/jobs.py
@@ -702,6 +759,11 @@ def create(
     sync=sync,
     create_request_timeout=create_request_timeout,
 )
+# TODO: remove temporary re-import statements once model monitoring for batch prediction is GA
Since there are several of these throughout the PR, perhaps add a tracking bug number so the TODO comments look like # TODO(b/.....): remove..., with a common bug number.
buganizer component created & assigned
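For illustration, the suggested TODO format (with the actual tracking bug number left elided) would look like:

```python
# TODO(b/...): remove temporary re-import statements once model monitoring
# for batch prediction is GA.
```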
@@ -17,9 +17,16 @@

 from typing import Optional, List
 from google.cloud.aiplatform_v1.types import (
-    model_monitoring as gca_model_monitoring,
+    model_monitoring as gca_model_monitoring_v1,
Should we use compat for these imports?
compat by default imports from the GA version of GAPIC, unless DEFAULT_VERSION is set to v1beta1. I tried to see if the import aliases would index into the correct version by simply setting DEFAULT_VERSION = 'v1beta1' and then switching it back to v1 on an ad-hoc basis, but it doesn't dynamically index the way I was hoping for. I think it's because the symbol table isn't automatically re-written unless we explicitly re-import. So that's why I imported both the v1 and v1beta1 versions explicitly.
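A sketch of the explicit dual-version import pattern described here; the v1beta1 alias name and the helper function are illustrative, not necessarily what the PR uses:

```python
from google.cloud.aiplatform_v1.types import (
    model_monitoring as gca_model_monitoring_v1,
)
from google.cloud.aiplatform_v1beta1.types import (
    model_monitoring as gca_model_monitoring_v1beta1,
)


def _monitoring_module(use_v1beta1: bool = False):
    """Pick the GAPIC model_monitoring module at the call site instead of
    relying on compat's DEFAULT_VERSION being rewritten at runtime."""
    return gca_model_monitoring_v1beta1 if use_v1beta1 else gca_model_monitoring_v1
```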
-    def as_proto(self):
+    # TODO: remove config_for_bp parameter when model monitoring for batch prediction is GA
+    def as_proto(self, config_for_bp: Optional[bool] = False):
         """Returns EmailAlertConfig as a proto message."""
This typing annotation should be just config_for_bp: bool = False.
updated
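For reference, a sketch of the signature with the corrected annotation (body elided):

```python
def as_proto(self, config_for_bp: bool = False):
    """Returns EmailAlertConfig as a proto message."""
    # A default of False means the argument is never None, so a plain bool
    # annotation is sufficient; Optional[bool] is not needed.
    ...
```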
-    def as_proto(self):
+    # TODO: remove config_for_bp parameter when model monitoring for batch prediction is feature complete and in GA
+    def as_proto(self, config_for_bp: Optional[bool] = False):
         """Returns _ObjectiveConfig as a proto message."""
Same comment as above about the typing annotation here.
updated
    io as gca_io,
    model_monitoring as gca_model_monitoring,
)

# constants used for testing
USER_EMAIL = ""
MODEL_NAME = "churn"
Is this used as the model's display_name? If so, it might be better to call it MODEL_DISPLAY_NAME, since "model name" could be understood as its full resource name.
If this is indeed intended as the display name of a resource used only for testing, please prefix it with "temp".
I updated the variable names to make them more specific. But the extra prefixing is unnecessary because the actual display names are created by _make_display_name, implemented in e2e_base.py, which appends a prefix set by the class (in this case the prefix is 'temp_e2e_model_monitoring_test_').
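A rough sketch of the behavior being described; the real helper in e2e_base.py may differ in detail:

```python
import uuid

_PREFIX = "temp_e2e_model_monitoring_test"  # per-class prefix, as described above


def _make_display_name(key: str) -> str:
    """Build a unique, temp-prefixed display name for a test resource."""
    return f"{_PREFIX}_{key}_{uuid.uuid4()}"
```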
Thanks. In that case, can we rename this further and call it something like MODEL_DISPLAYNAME_KEY, to prevent the next engineer reading this from misinterpreting it as the actual model display name? More generally, what's the meaning of "churn" in the context of testing?
I've updated the variable name. Regarding the actual value string, "churn" is just shorthand used to reference the BQ dataset. The same dataset and pre-trained model were used in the example notebook on the Cloud SDK documentation page and in a separate example for BQML.
Approving from MLMD, as this is a no-op on the MLMD side.