tests: roachtest that creates overload when doing intent resolution #108916
Conversation
Ignore the first commit -- it is from #108873.
Timeout: time.Hour,
Benchmark: true,
Tags: registry.Tags(`weekly`),
// Second node is solely for Prometheus.
We don't need to, BTW - even running Prometheus on the same node, or not at all, is OK. These runs happen on GCE, and we now have that global Prometheus instance that knows how to scrape every endpoint.
Owner: registry.OwnerAdmissionControl,
Timeout: time.Hour,
Benchmark: true,
Tags: registry.Tags(`weekly`),
Run nightly for a while to shake out flakes before reducing to weekly frequency?
TFTR!
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @DarrylWong, @herkolategan, and @irfansharif)
pkg/cmd/roachtest/tests/admission_control_intent_resolution.go
line 39 at r2 (raw file):
Previously, irfansharif (irfan sharif) wrote…
Run nightly for a while to shake out flakes before reducing to weekly frequency?
Done
pkg/cmd/roachtest/tests/admission_control_intent_resolution.go
line 40 at r2 (raw file):
Previously, irfansharif (irfan sharif) wrote…
We don't need to, BTW - even running Prometheus on the same node, or not at all, is OK. These runs happen on GCE, and we now have that global Prometheus instance that knows how to scrape every endpoint.
I added this only because someone could do --cloud aws. I've left this unchanged.
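To make the thread above easier to follow, here is a minimal sketch of the kind of registration being discussed, assuming the roachtest registry API as it existed around this PR. The package path, function name, test name, and Run body are illustrative assumptions; only the spec fields quoted in the review (Owner, Timeout, Benchmark, Tags) come from the diff itself.

```go
package tests

import (
	"context"
	"time"

	"github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster"
	"github.com/cockroachdb/cockroach/pkg/cmd/roachtest/registry"
	"github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test"
)

// registerIntentResolutionOverload sketches the registration discussed in the
// review thread above; identifiers are illustrative, not copied from the PR.
func registerIntentResolutionOverload(r registry.Registry) {
	r.Add(registry.TestSpec{
		// Assumed name; the real test lives in
		// admission_control_intent_resolution.go.
		Name:      "admission-control/intent-resolution",
		Owner:     registry.OwnerAdmissionControl,
		Timeout:   time.Hour,
		Benchmark: true,
		// Left untagged so the test runs in the nightly suite while flakes
		// are shaken out; Tags: registry.Tags(`weekly`) can be reintroduced
		// once it has proven stable.
		//
		// The second node is solely for Prometheus, which matters when the
		// test is run with --cloud aws, where the global GCE Prometheus
		// instance does not scrape the cluster.
		Cluster: r.MakeClusterSpec(2),
		Run: func(ctx context.Context, t test.Test, c cluster.Cluster) {
			// Test body elided: it would drive a workload that generates
			// overload during intent resolution.
		},
	})
}
```

If the weekly tag is reintroduced later, the test would drop back out of the default nightly selection and run only at the weekly cadence discussed above.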
Force-pushed from ee2f24d to d973f6a.
Informs cockroachdb#97108
Epic: CRDB-25458
Release note: None
Force-pushed from d973f6a to 8f401f6.
bors r=irfansharif
Build succeeded.
Informs #97108
Epic: CRDB-25458
Release note: None