feat(ddm): Add support for blocking metrics #63598
Conversation
🔍 Existing Issues For Review: Your pull request is modifying functions with the following pre-existing issues:
📄 File: src/sentry/relay/config/__init__.py
Codecov Report: Attention.
Additional details and impacted files:

@@           Coverage Diff            @@
##           master   #63598    +/-  ##
==========================================
- Coverage   81.38%   81.37%   -0.01%
==========================================
  Files        5221     5224       +3
  Lines      231753   231924     +171
  Branches    40066    40111      +45
==========================================
+ Hits       188603   188734     +131
- Misses      37344    37369      +25
- Partials     5806     5821      +15
publish_status = {"GET": ApiPublishStatus.EXPERIMENTAL, "PATCH": ApiPublishStatus.EXPERIMENTAL}
owner = ApiOwner.TELEMETRY_EXPERIENCE

def get(self, request: Request, organization) -> Response:
Moved the metrics/meta endpoint here as well, to prepare for a future refactor.
OrganizationMetricsEndpoint.as_view(),
name="sentry-api-0-organization-metrics-index",
),
re_path(
    r"^(?P<organization_slug>[^/]+)/metrics/meta/$",
This is the previous endpoint, which is kept for backwards compatibility.
try:
    blocked_metrics_payload = json.loads(json_payload)
except ValueError:
    if repair:
Repairing on write ensures we don't end up with the system in a broken state that would require a separate PR to unbreak.
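The repair-on-write behavior under discussion can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation; the function name and signature are hypothetical:

```python
import json


def load_blocked_metrics(raw_payload: str, repair: bool = False) -> dict:
    """Parse the JSON payload of blocked metrics stored in an option.

    With repair=True, a corrupted payload is replaced with an empty one
    (repair-on-write), at the cost of losing the previous settings;
    otherwise the corruption is surfaced to the caller.
    """
    try:
        return json.loads(raw_payload)
    except ValueError:
        # json.JSONDecodeError is a subclass of ValueError.
        if repair:
            # Start from a clean slate so the next write succeeds.
            return {}
        raise
```

The trade-off is exactly the one debated below: repairing silently discards the old (already unreadable) settings, while raising keeps the broken payload around until someone fixes it manually.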
I do understand that this is a real possibility since anyone else could theoretically write to this field, but I do not see a value in trying to "repair" it by deleting the key completely. Raising an exception and opening an issue on Sentry would be enough IMO.
I can remove this behavior, but if something is corrupted, we either delete it or reverse-engineer its contents and make a PR to untangle it programmatically.
My main reason for repairing is that if a user's config gets corrupted, they will start ingesting all of their blocked metrics again with no way to re-block them (they would have to wait for our PR). With repair-on-write, they can fix the problem immediately, with the downside of losing their previous settings. That loss is negligible: if the JSON is corrupted, the stored settings are unlikely to contain valuable data anyway.
return BlockedMetrics(metrics=blocked_metrics)._merge_blocked_metrics()

def _merge_blocked_metrics(self) -> "BlockedMetrics":
We run merging on both writes and reads. This assumes the options data can be overridden by someone else; as long as the data is in the right format, we will merge it.
We need merging because of the tags?
Nope, we need merging in general, since we don't want to return duplicated data to the frontend.
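The deduplication being discussed could look like the sketch below. The actual `_merge_blocked_metrics` body is not shown in the diff, so the names and shapes here are hypothetical; the point is only that duplicate entries for the same MRI get collapsed into one:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BlockedMetric:
    metric_mri: str
    project_ids: frozenset


def merge_blocked_metrics(entries):
    """Collapse duplicate entries for the same MRI into a single entry,
    unioning the project ids the metric is blocked for."""
    merged = {}
    for entry in entries:
        existing = merged.get(entry.metric_mri, frozenset())
        merged[entry.metric_mri] = existing | entry.project_ids
    return [BlockedMetric(mri, ids) for mri, ids in merged.items()]
```

Running this on every read and write makes the stored payload self-healing with respect to duplicates, regardless of who last wrote the option.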
type=parsed_mri.entity,
project_ids=project_ids,
blocked_for_projects = []
if (blocked := blocked_metrics.get(metric_mri)) is not None:
There is a lot of manipulation going on here, but if we have to merge blocked and non-blocked metrics, we have no other way.
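The merge of blocked and non-blocked metrics shown in the snippet above can be sketched as follows (a simplified illustration with hypothetical names, not the PR's exact code):

```python
def annotate_with_blocking(stored_mris, blocked_metrics):
    """Attach per-project blocking info to each stored metric, so blocked
    and non-blocked metrics can be returned together from the endpoint.

    stored_mris:     mapping of metric MRI -> project ids that stored it
    blocked_metrics: mapping of metric MRI -> project ids it is blocked for
    """
    results = []
    for metric_mri, project_ids in stored_mris.items():
        blocked_for_projects = []
        if (blocked := blocked_metrics.get(metric_mri)) is not None:
            blocked_for_projects = sorted(blocked)
        results.append(
            {
                "mri": metric_mri,
                "project_ids": project_ids,
                "blocked_for_projects": blocked_for_projects,
            }
        )
    return results
```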
@@ -2904,7 +2904,7 @@ def build_and_store_session(
        self.store_session(self.build_session(**kwargs))

-class OrganizationMetricMetaIntegrationTestCase(MetricsAPIBaseTestCase):
+class OrganizationMetricsIntegrationTestCase(MetricsAPIBaseTestCase):
driveby: Renamed for consistency.
for metric_mri, project_ids in stored_mris.items():
stored_metrics = get_stored_metrics_of_projects(projects, use_case_id)
driveby: Removed for consistency.
Looks fine overall; left a couple of comments. Also, as mentioned before, I think we should try to keep refactors to a minimum during feature implementation.
@dataclass(frozen=True)
class BlockedMetric:
    metric_mri: str
nit:
-    metric_mri: str
+    mri: str
@@ -1,5 +1,7 @@
 from __future__ import annotations

+from sentry.sentry_metrics.visibility import get_blocked_metrics
nit: do we plan on putting anything else in this module? If not, then I would call it blocking or metrics_blocking, or would flatten it out by putting the exports and errors in metrics_blocking.py.
Yes, we are: all the deletion logic will go there too. It's called visibility precisely because that is a generic term both concerns have in common.
This PR adds the ability to store blocked metrics in Sentry options and to return those blocked metrics from the metrics/meta endpoint. A follow-up PR will emit the metrics configuration for Relay; that was left out of this PR to avoid adding extra complexity and surface area for errors.