
feat(profiling): add namespace for profile metrics #3229

Merged: 6 commits merged into master on Mar 7, 2024

Conversation

@viglia (Contributor) commented Mar 6, 2024

Sentry PR that added the profiles use case: https://github.com/getsentry/sentry/pull/65152/files
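
For context, a hedged sketch of what adding a use case on the Sentry side roughly amounts to (this is not the actual diff of that PR; the surrounding enum members and values are illustrative assumptions):

# Hypothetical sketch: registering a "profiles" use case ID for the
# generic-metrics pipeline. Existing members shown for illustration only.
from enum import Enum

class UseCaseID(Enum):
    TRANSACTIONS = "transactions"
    SPANS = "spans"
    PROFILES = "profiles"  # new use case for profile metrics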

@viglia viglia requested a review from a team as a code owner March 6, 2024 18:51
@viglia viglia self-assigned this Mar 6, 2024
@jjbayer (Member) left a comment


It might be worth adding an integration test for this, similar to this one:

# Module-level imports used by the test:
import json
from datetime import datetime, timezone

def test_metrics(mini_sentry, relay):
    relay = relay(mini_sentry, options=TEST_CONFIG)
    project_id = 42
    mini_sentry.add_basic_project_config(project_id)
    timestamp = int(datetime.now(tz=timezone.utc).timestamp())
    metrics_payload = (
        f"transactions/foo:42|c|T{timestamp}\ntransactions/bar:17|c|T{timestamp}"
    )
    relay.send_metrics(project_id, metrics_payload)
    envelope = mini_sentry.captured_events.get(timeout=3)
    assert len(envelope.items) == 1
    metrics_item = envelope.items[0]
    assert metrics_item.type == "metric_buckets"
    received_metrics = json.loads(metrics_item.get_bytes().decode())
    received_metrics = sorted(received_metrics, key=lambda x: x["name"])
    assert received_metrics == [
        {
            "timestamp": timestamp,
            "width": 1,
            "name": "c:transactions/bar@none",
            "value": 17.0,
            "type": "c",
        },
        {
            "timestamp": timestamp,
            "width": 1,
            "name": "c:transactions/foo@none",
            "value": 42.0,
            "type": "c",
        },
    ]
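
For the new profiles namespace, a hedged sketch of what such a test could look like (the profiles/ prefix and the resulting MRI c:profiles/foo@none follow the pattern above; the test name and values are assumptions, not part of this PR):

# Hypothetical sketch, assuming profile metrics use the same statsd-style
# payload and MRI naming convention as the transactions namespace above.
def test_profile_metrics(mini_sentry, relay):
    relay = relay(mini_sentry, options=TEST_CONFIG)
    project_id = 42
    mini_sentry.add_basic_project_config(project_id)
    timestamp = int(datetime.now(tz=timezone.utc).timestamp())

    # Counter metric in the new "profiles" namespace.
    relay.send_metrics(project_id, f"profiles/foo:42|c|T{timestamp}")

    envelope = mini_sentry.captured_events.get(timeout=3)
    assert len(envelope.items) == 1
    metrics_item = envelope.items[0]
    assert metrics_item.type == "metric_buckets"

    received_metrics = json.loads(metrics_item.get_bytes().decode())
    assert received_metrics == [
        {
            "timestamp": timestamp,
            "width": 1,
            "name": "c:profiles/foo@none",
            "value": 42.0,
            "type": "c",
        },
    ]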

@Dav1dde (Member) left a comment

We need to make sure every other piece of the pipeline already supports the namespace!

Are Sentry and the consumers already able to process the profile metrics without crashing?

@viglia (Contributor, Author) commented Mar 7, 2024

@Dav1dde I followed this doc and added the use case as suggested ~2 weeks ago in this PR

P.S. The doc makes no mention of other consumer-specific changes. If you're aware of other changes documented elsewhere, please link them here and I'll follow up.

@Dav1dde (Member) commented Mar 7, 2024

@viglia I am just a bit paranoid: once we add it to Relay, anyone can ingest these messages on any of our instances.

I don't know what needs to be added to Sentry, but it seems like no changes were made to rate and cardinality limits, which should eventually be added.

@iker-barriocanal (Contributor) left a comment

Could we link in the description to the PRs/code where the namespace has been added in sentry/snuba/etc.? Thanks in advance!

@viglia viglia requested a review from Dav1dde March 7, 2024 11:24
@viglia (Contributor, Author) commented Mar 7, 2024

> @viglia I am just a bit paranoid: once we add it to Relay, anyone can ingest these messages on any of our instances.
>
> I don't know what needs to be added to Sentry, but it seems like no changes were made to rate and cardinality limits, which should eventually be added.

@Dav1dde keep in mind that setting those limits is optional.

By default, a database writes quota is provisioned by the following Sentry options. They are shared among all use case IDs that don't have a dedicated write-limit option name, and a default cardinality limit applies in the same way:

sentry-metrics.writes-limiter.limits.generic-metrics.global
sentry-metrics.writes-limiter.limits.generic-metrics.per-org

So we have default rate-limiting values, both globally and per-org.

Should we need customized values, we'll only find out over time; we can then make changes to enforce different limits.
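
As a hedged sketch of how such an override might look once we do need it (the option names come from the doc; the quota schema and numbers are assumptions, not something this PR configures):

# Hypothetical example for a self-hosted sentry.conf.py; the quota fields
# (window_seconds / granularity_seconds / limit) and their values are assumptions.
SENTRY_OPTIONS["sentry-metrics.writes-limiter.limits.generic-metrics.global"] = [
    {"window_seconds": 10, "granularity_seconds": 1, "limit": 100_000},
]
SENTRY_OPTIONS["sentry-metrics.writes-limiter.limits.generic-metrics.per-org"] = [
    {"window_seconds": 10, "granularity_seconds": 1, "limit": 1_000},
]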

@Dav1dde (Member) left a comment

> @Dav1dde keep in mind that setting those limits is optional.

Maybe, but these settings exist to protect our infrastructure, and in the case of an incident the necessary code in Sentry to even configure the limits is missing.

These two configs you pointed out are not the correct ones.

I urge you to follow up on these things, this is incident territory.

I don't want to block you; the Relay changes are fine, just please drop the error. The rest I don't want to worry about.

@viglia (Contributor, Author) commented Mar 7, 2024

> Maybe, but these settings exist to protect our infrastructure, and in the case of an incident the necessary code in Sentry to even configure the limits is missing.

Yes, that's why there are default cardinality & write limits, shared among use cases that don't specify custom ones.

> These two configs you pointed out are not the correct ones.

Before merging I'll follow up with sns to double-check everything, but the options I shared come directly from the doc that outlines how to add a new use case: https://www.notion.so/sentry/Creating-a-New-Use-Case-ID-for-the-Generic-Metric-Pipeline-Locally-d9bdeaac34c14ea88f1ff0f9dbd6e90a?pvs=4#e0e24ab1731340c0aef682eefe8112b3

@viglia viglia merged commit 912a697 into master Mar 7, 2024
20 checks passed
@viglia viglia deleted the viglia/feat/add-profiles-metrics-namespace branch March 7, 2024 16:15