feat(replay): add kafka config for new ingest-feedbacks consumer #66169
Conversation
@@ -269,6 +269,14 @@ def ingest_events_options() -> list[click.Option]:
            "consumer_type": "events",
        },
    },
    "ingest-feedbacks": {
        "topic": settings.KAFKA_INGEST_FEEDBACKS,
        "strategy_factory": "sentry.ingest.consumer.factory.IngestStrategyFactory",
Each consumer should have its own strategy. The ingest strategy is already doing far more than it should and we should not add additional logic to it.
Any tips/references for how to write a new strategy? In ingest/consumer/factory.py, right?
The directory structure in sentry is a bit misleading, but the ingest module really just refers to the errors/transactions ingest consumer today, not the rest of the pipelines. I would suggest keeping it either entirely separate, or inside a new module like ingest.user_feedback. See https://getsentry.github.io/arroyo/ for docs on how to write a strategy.
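For illustration, a minimal strategy factory following the Arroyo pattern linked above could look roughly like the sketch below. The class name, module placement, and the process_feedback_message helper are hypothetical and not part of this PR:

```python
from collections.abc import Mapping

from arroyo.backends.kafka import KafkaPayload
from arroyo.processing.strategies import (
    CommitOffsets,
    ProcessingStrategy,
    ProcessingStrategyFactory,
    RunTask,
)
from arroyo.types import Commit, Message, Partition


def process_feedback_message(message: Message[KafkaPayload]) -> None:
    # Hypothetical: deserialize the feedback payload and hand it to the
    # existing feedback-saving logic.
    ...


class FeedbackStrategyFactory(ProcessingStrategyFactory[KafkaPayload]):
    def create_with_partitions(
        self,
        commit: Commit,
        partitions: Mapping[Partition, int],
    ) -> ProcessingStrategy[KafkaPayload]:
        # Process each message, then commit offsets once processing succeeds.
        return RunTask(
            function=process_feedback_message,
            next_step=CommitOffsets(commit),
        )
```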
@lynnagara We're not adding additional logic to it. This behavior already exists in that consumer. We're just moving it to a new topic and new consumer infrastructure. I agree we should split out this product into its own consumer. Can that be handled in a follow-up PR?
Ah makes sense, I thought it was totally new functionality.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff            @@
##           master   #66169     +/-   ##
==========================================
+ Coverage   84.27%   84.29%   +0.01%
==========================================
  Files        5304     5304
  Lines      237045   237075      +30
  Branches    41026    41031       +5
==========================================
+ Hits       199776   199834      +58
+ Misses      37050    37022      -28
  Partials      219      219
"strategy_factory": "sentry.ingest.consumer.factory.IngestStrategyFactory", | ||
"click_options": ingest_events_options(), | ||
"static_args": { | ||
"consumer_type": "events", |
I think you might want to make this a new type. We don't really want to run all the events logic and configuration (even if they share the bulk of the code in the initial implementation), and I'm guessing this will diverge in the future?
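To sketch what that could look like, the consumer entry might point at its own factory and consumer type. The factory path and the "feedbacks" consumer type below are hypothetical names, not something this PR defines:

```python
"ingest-feedbacks": {
    "topic": settings.KAFKA_INGEST_FEEDBACKS,
    # Hypothetical dedicated factory rather than reusing IngestStrategyFactory.
    "strategy_factory": "sentry.feedback.consumers.factory.FeedbackStrategyFactory",
    "click_options": ingest_events_options(),
    "static_args": {
        # Hypothetical new consumer type so feedback no longer runs the events path.
        "consumer_type": "feedbacks",
    },
},
```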
…re flag (#66257) removes flag declaration and all refs getsentry/team-replay#387
Was using TSResult here, which is wrong: it's a dict with data, or a dict of dicts if it's a group-by. The presence of the data key should be indicative. Fixes SENTRY-2V84
### Summary
If there are any problems with the split decision, this will cause it to always re-run the code and save the new decision.
<img width="1035" alt="Screenshot 2024-03-03 at 9 36 22 PM" src="https://github.com/getsentry/sentry/assets/60121741/860a6370-b1aa-41dc-b5c8-b62b9778ffa5"> --------- Co-authored-by: Abdullah Khan <[email protected]>
This PR adds the current user to the header of the replay in the issue details page: Before: ![Screenshot 2024-03-04 at 1 15 37 PM](https://github.com/getsentry/sentry/assets/8533851/614047aa-e0da-4520-91d0-5d71f22f8cd9) After: ![Screenshot 2024-03-04 at 1 15 31 PM](https://github.com/getsentry/sentry/assets/8533851/4592b55d-ac83-4448-8af5-06988dfc5aa1)
…or environment (#66265). These constraints are causing migrations to fail in getsentry, where we've set up the router to move these tables to a separate database.
We need to support deleted fields on Django model schemas for a number of self-hosted releases (currently 2). This means that even if we delete a field on HEAD, we still need to make the import script aware that the field could exist in imported JSON, and pre-process that JSON in such a way that the field is removed in cases where the JSON was generated by an old version that still contained the field. There is no purpose in testing this code standalone. It passes tests in #65923, which is the first actual use of this shimming mechanism.
Just moving 3 consumers as a first step; will migrate the rest as a follow-up.
failure_issue_threshold will never be falsey because of this condition here [0] [0]: https://github.com/getsentry/sentry/blob/11e039354141a8487776be19d7535d0da2ee53ce/src/sentry/monitors/logic/mark_failed.py#L44-L45
🚨 Warning: This pull request contains Frontend and Backend changes! It's discouraged to make changes to Sentry's Frontend and Backend in a single pull request. The Frontend and Backend are not atomically deployed. If the changes are interdependent of each other, they must be separated into two pull requests and be made forward or backwards compatible, such that the Backend or Frontend can be safely deployed independently. Have questions? Please ask in the
my bad 😂. reopened at #66484
Linked to #66100 and https://getsentry.atlassian.net/browse/OPS-5317
Sentry-kafka-schemas PR (merge second?)
May have to wait on:
#66288
#66381