feat: SDK support for model monitoring #1249
Conversation
class RandomSampleConfig(_SamplingStrategy):
    def __init__(self, sample_rate: Optional[float] = 1):
Please change this or clarify the behavior when sample_rate is None.
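For illustration, a minimal sketch of one way to make the None case explicit; the base-class stub and the validation logic here are assumptions for the example, not the PR's actual implementation:

```python
from typing import Optional


class _SamplingStrategy:
    """Stub standing in for the PR's sampling-strategy base class."""


class RandomSampleConfig(_SamplingStrategy):
    def __init__(self, sample_rate: Optional[float] = 1):
        # Hypothetical guard: fail fast on None instead of passing it
        # through to the service with unspecified behavior.
        if sample_rate is None:
            raise ValueError("sample_rate must be a float in (0, 1], not None")
        if not 0 < sample_rate <= 1:
            raise ValueError(f"sample_rate must be in (0, 1], got {sample_rate}")
        self.sample_rate = sample_rate
```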
return (
    gca_model_deployment_monitoring_job.ModelDeploymentMonitoringScheduleConfig(
        monitor_interval=duration_pb2.Duration(
            seconds=self.monitor_interval * 3600
        )
    )
)
This conversion can be surprising. If the original schedule config (defined in the service protocol) expresses this in seconds, why not keep using seconds (and rename the variable to something like monitor_interval_seconds) instead of integer hours? How would the user express "every 10 minutes" using the ScheduleConfig class here?
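For reference, since the service protocol expresses the interval in seconds, "every 10 minutes" at the protocol level would simply be a 600-second Duration (this snippet shows only the raw proto, not the SDK wrapper):

```python
from google.protobuf import duration_pb2

# "Every 10 minutes" expressed directly against the service protocol.
# As discussed below, the service currently rounds this up to a whole hour.
interval = duration_pb2.Duration(seconds=10 * 60)
```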
Model monitoring only supports hourly schedules. Even if the user specifies something like 1.6, it will be rounded up to the next hour.
So under the original protocol, even if the user passes something like seconds = 3500, it will get rounded up to 3600 behind the scenes.
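A small illustration of the rounding behavior described above; the helper function is hypothetical and simply mimics the service-side ceiling to whole hours:

```python
import math

from google.protobuf import duration_pb2


def to_monitor_interval(hours: float) -> duration_pb2.Duration:
    # Hypothetical helper mirroring the service behavior: fractional
    # hours are rounded up to the next whole hour before conversion.
    return duration_pb2.Duration(seconds=math.ceil(hours) * 3600)


print(to_monitor_interval(1.6))  # seconds: 7200 (1.6 hours -> 2 hours)
print(to_monitor_interval(1.0))  # seconds: 3600
```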
Given the current behavior of the service (rounding to hours), this makes sense. Do we know that the service will not be updated to support a more fine-grained monitor interval? If that happens, do we have an easy path for updating the library to support it?
I have synced with @qijing93 offline, and there is no additional support planned for fine-grained monitor intervals.
This patch only adds the model monitoring implementation for models deployed to an endpoint; the batch prediction use case will be addressed separately in future PRs.
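For illustration, a rough sketch of how the classes discussed in this review might be wired together for an endpoint-deployed model; the create() parameters and module path are assumptions pieced together from the discussion above, not the PR's confirmed surface:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring  # assumed module path

aiplatform.init(project="my-project", location="us-central1")

# Sample 80% of prediction requests; run the monitoring pipeline hourly.
sampling = model_monitoring.RandomSampleConfig(sample_rate=0.8)
schedule = model_monitoring.ScheduleConfig(monitor_interval=1)  # whole hours

# Hypothetical wiring: other required configs (e.g. monitoring objectives)
# are omitted for brevity.
job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="example-monitoring-job",
    endpoint="projects/123/locations/us-central1/endpoints/456",  # placeholder
    logging_sampling_strategy=sampling,
    schedule_config=schedule,
)
```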
To-do list:
Fixes b/231988321 🦕