client/requestbatcher: fix panic when only batch is sent due to size #34837
Conversation
hold on, added some junk to this commit by accident
force-pushed from 282d292 to a8b3a3f
Updated, PTAL
force-pushed from a8b3a3f to c05ce92
Reviewed 3 of 3 files at r1.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @ajwerner)
pkg/internal/client/requestbatcher/batcher.go, line 235 at r1 (raw file):
func (b *RequestBatcher) run(ctx context.Context) {
    var deadline time.Time
    var timer timeutil.Timer
We can still use the zero value here and below.
pkg/internal/client/requestbatcher/batcher.go, line 247 at r1 (raw file):
already sent due to size constraints
We might add other triggers for batches to be sent, so I'd generalize this to something like "already sent before the timer fired".
This PR fixes a bug discovered during the implementation of cockroachdb#34803 whereby a nil batch may be sent due to a stale timer: the timer corresponds to the sole outstanding batch, which was already sent because of size constraints, and no logic canceled the timer. This condition occurs when there is one outstanding batch for a single range and it gets filled according to size constraints. Hopefully this case is rare: when a batch fills due to size limits there is likely also traffic to other ranges, and thus other outstanding batches. This is a pretty egregious bug. While fixing it I encountered another gotcha with regard to the timeutil.Timer.Stop method, for which I added additional comments.
Release note: None
force-pushed from c05ce92 to 4d82429
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/internal/client/requestbatcher/batcher.go, line 235 at r1 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
We can still use the zero value here and below.
I'm not sure that's true. See here. The Stop method puts a pointer to the timer into the pool, so if I overwrite the value, its address will still be in the pool.
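(Editorial illustration, not the actual timeutil code: a minimal, self-contained sketch of the hazard described above. The pooledTimer type, its Stop method, and timerPool are hypothetical stand-ins. The point is that once Stop hands the receiver's address to a sync.Pool, overwriting the struct with its zero value does not remove it from the pool, so the same memory can later be handed out while still in use.)

package main

import (
	"fmt"
	"sync"
	"time"
)

// pooledTimer is a hypothetical stand-in for a timer type whose Stop
// method releases the receiver into a sync.Pool.
type pooledTimer struct {
	deadline time.Time
}

var timerPool = sync.Pool{
	New: func() interface{} { return &pooledTimer{} },
}

// Stop puts the receiver's address back into the pool. After this call the
// caller must stop using *t; the pool still holds a pointer to it.
func (t *pooledTimer) Stop() {
	timerPool.Put(t)
}

func main() {
	t := timerPool.Get().(*pooledTimer)
	t.deadline = time.Now().Add(time.Second)
	t.Stop() // t's address is now in the pool.

	// Overwriting the value with its zero value does not help: the pool
	// still points at this same memory, so a later Get can hand it to
	// another user while this code is still writing to it.
	*t = pooledTimer{}
	t.deadline = time.Now().Add(time.Minute)

	u := timerPool.Get().(*pooledTimer) // may return the same address as t
	fmt.Println(u == t)                 // typically true: shared, racy state
}

This is why simply reusing the zero value, as suggested above, doesn't straightforwardly apply once the timer has been handed to the pool.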
pkg/internal/client/requestbatcher/batcher.go, line 247 at r1 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
already sent due to size constraints
We might add other triggers for batches to be sent, so I'd generalize this to something like "already sent before the timer fired".
Done.
bors r+
Build failed
bors r+
34837: client/requestbatcher: fix panic when only batch is sent due to size r=ajwerner a=ajwerner

This PR fixes a bug discovered during the implementation of #34803 whereby a nil batch may be sent due to a stale timer: the timer corresponds to the sole outstanding batch, which was already sent because of size constraints, and no logic canceled the timer. This condition occurs when there is one outstanding batch for a single range and it gets filled according to size constraints. Hopefully this case is rare: when a batch fills due to size limits there is likely also traffic to other ranges, and thus other outstanding batches. This is a pretty egregious bug. While fixing it I encountered another gotcha with regard to the timeutil.Timer.Stop method, for which I added additional comments.

Release note: None

Co-authored-by: Andrew Werner <[email protected]>
Build succeeded
This PR fixes a bug discovered during the implementation of #34803 whereby a
nil batch may be sent due to a stale timer: the timer corresponds to the sole
outstanding batch, which was already sent because of size constraints, and no
logic canceled the timer. This condition occurs when there is one outstanding
batch for a single range and it gets filled according to size constraints.
Hopefully this case is rare: when a batch fills due to size limits there is
likely also traffic to other ranges, and thus other outstanding batches. This
is a pretty egregious bug. While fixing it I encountered another gotcha with
regard to the timeutil.Timer.Stop method, for which I added additional
comments.

Release note: None
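(Editorial illustration to make the failure mode concrete: a rough, self-contained sketch of the pattern described above. The batch type, the pending variable, and the sendBatch helper are hypothetical stand-ins, not the actual requestbatcher code.)

package main

import (
	"fmt"
	"time"
)

// batch is a hypothetical stand-in for a pending batch with a send deadline.
type batch struct {
	deadline time.Time
	requests int
}

func sendBatch(b *batch) {
	fmt.Printf("sending batch of %d requests\n", b.requests)
}

func main() {
	// A single timer is armed for the sole outstanding batch.
	pending := &batch{deadline: time.Now().Add(50 * time.Millisecond), requests: 10}
	timer := time.NewTimer(time.Until(pending.deadline))

	// The batch hits its size limit and is sent before the timer fires,
	// but nothing cancels the timer.
	sendBatch(pending)
	pending = nil

	<-timer.C
	// Without this guard, the loop would try to send a nil batch and panic:
	// the only batch may have already been sent before the timer fired.
	if pending == nil {
		fmt.Println("timer fired, but the only batch was already sent; nothing to do")
		return
	}
	sendBatch(pending)
}

The actual fix lives in the batcher's run loop (see the review exchange above about batches "already sent before the timer fired"); this sketch only illustrates the shape of the hazard and the guard.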