clientv3/integration: add TestBalancerUnderServerShutdownImmutable #8779
Conversation
```go
donec := make(chan struct{})
go func() {
	defer close(donec)
	for i := 0; i < 3; i++ {
```
Why retry 3 times? And why do this in a goroutine, instead of issuing the Get after the termination?
Wouldn't it be better in a goroutine if we want to test the range-request retry logic? Also, not every range request overlaps with the node shutting down, so we try 3 times.
Or we could send the range request right after Terminate, without the sleep?
I think the goal here is simple: test whether the balancer switches endpoints when a machine is shut down. We should not mix retry and balancer switching together unless needed.
Got it. I will simplify the tests.
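For reference, a minimal sketch of the simplified flow (an assumed shape, not lifted from the PR): terminate the pinned member, then issue a single Get with a generous timeout instead of retrying in a loop.

```go
// Sketch only: clus and cli are assumed to be the test cluster and the
// client pinned to eps[0] set up earlier in the test.
clus.Members[0].Terminate(t)

// One Get with a generous timeout; the balancer is expected to switch
// to a live endpoint within this window.
cctx, ccancel := context.WithTimeout(context.Background(), 7*time.Second)
_, err = cli.Get(cctx, "foo")
ccancel()
if err != nil {
	t.Fatalf("expected Get to succeed after balancer switch, got %v", err)
}
```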
```go
select {
case <-donec:
case <-time.After(2 * timeout):
```
This timeout is smaller than the one on line 187, and line 187 can be triggered 3 times, so the maximum retry timeout is 21s.
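To spell out the arithmetic (using the 7-second timeout passed to the helper further down): three retries in the goroutine can take up to 3 × 7s = 21s, while the select guard above waits only 2 × 7s = 14s, so the time.After case can fire while the goroutine is still legitimately retrying.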
(force-pushed from 73d6900 to 7100a26)
/cc @jpbetz
```go
// pin eps[0]
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[0]}})
if err != nil {
	t.Fatal(err)
```
Add context to the error, e.g. `"failed to create client: %v", err`.
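Something along these lines (a sketch of the suggested change, not the final wording):

```go
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{eps[0]}})
if err != nil {
	// include context so a CI failure is easier to trace back
	t.Fatalf("failed to create client: %v", err)
}
```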
```go
err = op(cli, cctx)
ccancel()
if err != nil {
	t.Fatal(err)
```
Add context to the error here as well. Also, this should be t.Errorf.
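For example (a sketch; the exact message is up to the author):

```go
err = op(cli, cctx)
ccancel()
if err != nil {
	// t.Errorf marks the test as failed but lets the rest of the test run
	t.Errorf("op failed with pinned endpoint %q: %v", eps[0], err)
}
```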
```go
defer cli.Close()

// wait for eps[0] to be pinned
waitPinReady(t, cli)
```
Does waitPinReady return an error?
It calls t.Fatal inside waitPinReady.
We should probably rename it to mustWaitPinReady later.
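A sketch of what the rename could look like; the body here is illustrative, assuming the helper just probes with a Get and fails the test on error (the must- prefix signaling that it calls t.Fatal rather than returning an error):

```go
// mustWaitPinReady blocks until the client has pinned an endpoint,
// failing the test if the probe request errors out.
// (Illustrative body; the real helper lives in the integration tests.)
func mustWaitPinReady(t *testing.T, cli *clientv3.Client) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "foo"); err != nil {
		t.Fatal(err)
	}
}
```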
Signed-off-by: Gyu-Ho Lee <[email protected]>
lgtm if CI passes
```go
testBalancerUnderServerShutdownImmutable(t, func(cli *clientv3.Client, ctx context.Context) error {
	_, err := cli.Get(ctx, "foo")
	return err
}, 7*time.Second) // give enough time for leader election, balancer switch
```
As @jpbetz mentioned in another PR (#8711), we should define the leader-election timeout as a const somewhere, and the normal waiting timeout too, so this could be rewritten as electionWait + retryWait.
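A sketch of what those constants might look like (names and values are illustrative, following the electionWait + retryWait idea above, not taken from the codebase):

```go
// Illustrative constants; the real values would need to match the test
// cluster's election timeout and the balancer's switch/retry behavior.
const (
	electionWait = 5 * time.Second // time for a new leader to be elected
	retryWait    = 2 * time.Second // time for the balancer to switch endpoints
)
```

The call above could then read `testBalancerUnderServerShutdownImmutable(t, getFunc, electionWait+retryWait)`.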