feat(monitors): Add basic ingestion via Kafka #45079
Conversation
params = json.loads(message.payload.value)
current_datetime = to_datetime(params["date_updated"])

# TODO: Same as MonitorCheckInDetailsEndpoint.put. Keep in sync or factor out.
@evanpurkhiser The update code is partially duplicated with the cron details endpoint. I can factor this out into a special update function on MonitorCheckIn. Any preference?
Let's keep in sync, we can factor out later.
We're also working on environment support right now (#42788), so this code is changing a lot anyway.
If this code is changing a lot, then we should factor it out sooner rather than later. I'll leave it as-is, and you can refactor after merging. Please just make sure it doesn't diverge too much, so we don't go hunting bugs in the wrong places :)
I agree. I wanted to move monitors into a top-level Django app where we can nicely consolidate logic, but I haven't had time to do that quite yet. I think we should have some reusable stuff soon. What I am most interested in is just having everything talking to each other; we can swap out what this piece does easily enough.
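A minimal sketch of what the factored-out update helper discussed above could look like. The `MonitorCheckIn` dataclass and field names here are stand-ins for illustration, not the actual Sentry model or schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


# Stand-in for the Django model; the real MonitorCheckIn lives in Sentry's codebase.
@dataclass
class MonitorCheckIn:
    status: str
    duration: Optional[int] = None  # milliseconds
    date_updated: Optional[datetime] = None

    def update_from_params(self, params: dict) -> None:
        """Apply a check-in update payload.

        Intended to be shared by the PUT endpoint and the Kafka consumer,
        so the two code paths cannot diverge.
        """
        if "status" in params:
            self.status = params["status"]
        if "duration" in params:
            self.duration = params["duration"]
        self.date_updated = datetime.now(timezone.utc)


checkin = MonitorCheckIn(status="in_progress")
checkin.update_from_params({"status": "ok", "duration": 1500})
```

With a helper like this, the endpoint and the consumer both call one method, and only the transport-specific parsing differs between them.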
def process_message(message: Message[KafkaPayload]) -> None:
    params = json.loads(message.payload.value)
We'll delegate all validation to Relay and assume these are validated payloads. In particular, the Serializer is no longer needed if Relay handles all the mutating endpoints.
We're going with a msgpack wrapper for the payload in Relay, similar to Replays and Profiles; this PR needs to be updated accordingly.
This will need to merge alongside https://github.com/getsentry/getsentry/pull/9675
def process_message(message: Message[KafkaPayload]) -> None:
    wrapper = msgpack.unpackb(message.payload.value)

    params = json.loads(wrapper["payload"])
Is there any way we could add types for this btw?
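One lightweight way to add types here is a `TypedDict` for the decoded check-in payload. This is only a sketch: the field names are assumptions for illustration, not Sentry's actual protocol, and the plain dict below stands in for the result of `msgpack.unpackb` so the example stays self-contained:

```python
import json
from typing import TypedDict


class CheckInPayload(TypedDict, total=False):
    """Hypothetical shape of the inner JSON check-in payload."""
    check_in_id: str
    status: str
    duration: float  # seconds, per the Relay/Kafka protocol


def parse_wrapper(wrapper: dict) -> CheckInPayload:
    # In the consumer, `wrapper` would come from msgpack.unpackb(message.payload.value).
    params: CheckInPayload = json.loads(wrapper["payload"])
    return params


params = parse_wrapper(
    {"payload": '{"check_in_id": "abc", "status": "ok", "duration": 1.5}'}
)
```

`TypedDict` only helps static checkers (mypy); it adds no runtime validation, which is consistent with delegating validation to Relay.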
Adds an ingest consumer, similar to Replays and Profiles, that reads from the Kafka topic `ingest-monitors` and updates check-ins directly in Postgres. The consumer runs automatically in devserver and can be started via `sentry run ingest-monitors`. It supports creating and updating check-ins with a status and duration; creating monitors is not supported yet. The consumer closely follows the logic of check-in creation, but handles disabled monitors differently than the Update (PUT) endpoint.
Follow-up to #45079. Stores durations correctly when they are specified by the client: the Relay and Kafka protocols specify them in seconds, but the consumer was missing the conversion to integer milliseconds.
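The fix described above amounts to a single unit conversion at ingest time. A minimal sketch (the function name is invented for illustration):

```python
def duration_ms(duration_seconds: float) -> int:
    """Convert a protocol duration (float seconds, as sent by Relay/Kafka)
    to the integer milliseconds stored on the check-in model."""
    return int(duration_seconds * 1000)


# e.g. a client-reported duration of 1.5 seconds is stored as 1500 ms
stored = duration_ms(1.5)
```

Doing the conversion in the consumer keeps the wire protocol in seconds (matching Relay) while the database column stays in milliseconds.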
See getsentry/relay#1886 for Relay.
Requires https://github.com/getsentry/ops/pull/6226
Requires https://github.com/getsentry/getsentry/pull/9675