file: Fix race in closing #31
Conversation
Thanks for working on this change! This is definitely something we should take... see my other comments on a few tweaks that we need before merging this.
alreadyClosing := f.closing
f.closing = true
f.closingLock.Unlock()
if !alreadyClosing {
If alreadyClosing is true, then closeHandle() will no longer block waiting for all IO to complete... Of course this may be better than the current state of things, but I think it might be better to add a wait group or channel to wait for the goroutine that is performing the close.
I would also suggest changing the mutex to just be an atomic.SwapUint32() call in this path, and an atomic.LoadUint32() call in isClosing(). We don't need the extra synchronization of the mutex, since we are already relying on ordering between the wg.Add(1) call and the read of closing; the atomic makes this ordering contract more explicit and correct without relying on mutual exclusion.
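The atomic-flag approach suggested above could be sketched roughly as follows. This is a simplified stand-in, not the actual win32File implementation: the type name file, the wg field, and the printed message are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type file struct {
	closing uint32         // accessed only via sync/atomic
	wg      sync.WaitGroup // tracks in-flight IO operations
}

func (f *file) isClosing() bool {
	return atomic.LoadUint32(&f.closing) != 0
}

func (f *file) closeHandle() {
	// SwapUint32 returns the previous value, so exactly one caller
	// observes 0 and becomes responsible for performing the close.
	if atomic.SwapUint32(&f.closing, 1) == 0 {
		f.wg.Wait() // wait for outstanding IO before releasing the handle
		fmt.Println("handle closed")
	}
}

func main() {
	f := &file{}
	var callers sync.WaitGroup
	for i := 0; i < 4; i++ {
		callers.Add(1)
		go func() {
			defer callers.Done()
			f.closeHandle() // only one goroutine actually closes
		}()
	}
	callers.Wait()
	fmt.Println(f.isClosing()) // prints true
}
```

Note that, as the comment above points out, callers that lose the swap return immediately rather than waiting for the close to finish; a wait group or channel would be needed to give them blocking semantics.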
If alreadyClosing is true, that implies that closeHandle() was called from another goroutine. This is the same behavior as today, since the existing code sets f.closing = true before waiting.
Does closeHandle() need to wait when called from multiple goroutines concurrently (does each call to closeHandle() need to wait)?
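If every caller does need to wait, one possible shape (a hypothetical sketch, not code from this PR; the file type, done channel, and field names are illustrative) is to have the first closer close a done channel once IO has drained, so that concurrent closers block on it:

```go
package main

import (
	"fmt"
	"sync"
)

type file struct {
	mu     sync.Mutex
	closed bool
	done   chan struct{} // closed once the handle is fully released
	wg     sync.WaitGroup
}

func newFile() *file { return &file{done: make(chan struct{})} }

func (f *file) closeHandle() {
	f.mu.Lock()
	if f.closed {
		f.mu.Unlock()
		<-f.done // later callers wait for the first closer to finish
		return
	}
	f.closed = true
	f.mu.Unlock()

	f.wg.Wait()   // wait for outstanding IO; mutex is not held here
	close(f.done) // release any concurrently blocked closers
	fmt.Println("closed once")
}

func main() {
	f := newFile()
	var callers sync.WaitGroup
	for i := 0; i < 3; i++ {
		callers.Add(1)
		go func() {
			defer callers.Done()
			f.closeHandle()
		}()
	}
	callers.Wait() // "closed once" is printed exactly once
}
```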
Force-pushed from 4cdbfb8 to db46d9b.
@jstarks I've pushed a new commit that fixes an error in my previous one. As I mentioned, I looked a bit at removing the
@jstarks @jhowardmsft ping?
One for @jstarks.
@samuelkarp Looking back, it seems @jstarks suggested some changes. Are you able to address those? Thanks.
@jhowardmsft I looked at @jstarks's suggestion and it didn't seem to make the race detector happy (unless I misunderstood what he was asking?), see #31 (comment).
We've been hitting a panic here as well. Stack trace attached.
Didn't fix the panic.
This was fixed by #59. Closing.
Looks like microsoft/go-winio#31 has been fixed, let's try.
Hello!
It looks like there's a race condition created in (*win32File).closeHandle() where multiple concurrent goroutines are accessing the (*win32File).closing field. This shows up in the Go race detector. This change adds a lock around the closing field to remove the race. Thanks!
Sam
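The lock-around-the-flag fix the description proposes could be sketched like this (a minimal sketch with simplified stand-in names; the real win32File carries more state than a single flag):

```go
package main

import (
	"fmt"
	"sync"
)

type file struct {
	closingLock sync.Mutex
	closing     bool
}

// isClosing reads the flag under the lock, so the race detector
// sees a happens-before edge with writers.
func (f *file) isClosing() bool {
	f.closingLock.Lock()
	defer f.closingLock.Unlock()
	return f.closing
}

// closeHandle sets the flag under the same lock.
func (f *file) closeHandle() {
	f.closingLock.Lock()
	f.closing = true
	f.closingLock.Unlock()
}

func main() {
	f := &file{}
	var wg sync.WaitGroup
	// Concurrent readers and writers no longer race under `go test -race`.
	for i := 0; i < 4; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); f.closeHandle() }()
		go func() { defer wg.Done(); _ = f.isClosing() }()
	}
	wg.Wait()
	fmt.Println(f.isClosing()) // prints true
}
```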