
More places might need memory barriers on AArch64 #13055

Closed
HertzDevil opened this issue Feb 9, 2023 · 0 comments · Fixed by #14272
Labels
kind:bug A bug in the code. Does not apply to documentation, specs, etc. platform:aarch64 topic:multithreading topic:stdlib:concurrency

Comments

@HertzDevil
Contributor

#13010 covers just one place where memory barriers are needed on AArch64. The following spec fails randomly on an M2 when -Dpreview_mt is provided:

it "works with multiple threads" do
  x = 0
  mutex = Mutex.new
  fibers = 10.times.map do
    spawn do
      100.times do
        mutex.synchronize { x += 1 }
      end
    end
  end.to_a
  fibers.each do |f|
    wait_until_finished f
  end
  x.should eq(1000)
end

This is probably due to Mutex using not just one but two Atomics directly. In particular, #unlock calls @state.lazy_set, which performs a plain, non-atomic store with no ordering guarantees.
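For context, `Atomic#lazy_set` compiles to a plain store, while `Atomic#set` performs an atomic store (sequentially consistent by default). A minimal sketch of why that distinction matters for an unlock path (`ToyLock` is a hypothetical illustration, not the stdlib Mutex):

```crystal
# ToyLock: a hypothetical spinlock contrasting the two store flavors.
class ToyLock
  def initialize
    @state = Atomic(Int32).new(0)
  end

  def lock
    # `swap` is a full atomic read-modify-write, safe on all targets.
    while @state.swap(1) != 0
    end
  end

  def locked?
    @state.get == 1
  end

  # On x86-64 a plain store already has release semantics at the
  # hardware level, so this happens to work; on AArch64 the CPU may
  # reorder it with earlier writes, so another core can observe the
  # lock as free before it observes the data the lock protected.
  def unlock_unsafe_on_weak_memory
    @state.lazy_set(0)
  end

  # An atomic store keeps the writes made under the lock ordered
  # before the release.
  def unlock_safe
    @state.set(0)
  end
end

lock = ToyLock.new
lock.lock
lock.unlock_safe
```

Both unlock variants look identical in single-threaded tests and on x86-64, which is why this class of bug only surfaces randomly on AArch64 under -Dpreview_mt.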

Some other places that look suspicious:

  • The recently added Crystal::AtomicSemaphore, introduced in Add Process.on_interrupt #13034; as of now the type is used only on x86-64 Windows
  • Crystal::RWLock, which also calls Atomic#lazy_set
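Assuming the Atomic memory orderings added in Crystal 1.5, one plausible shape for a fix in these places is to replace the plain store with an explicit release store. This is a sketch of the idea, not the actual change from #14272:

```crystal
state = Atomic(Int32).new(1) # 1 = held, 0 = free

# A plain store (`state.lazy_set(0)`) lets the compiler and CPU sink
# earlier writes past it. A release store guarantees that every write
# made while the lock was held is visible to another core before that
# core can observe the lock as free.
state.set(0, :release)
state.get # => 0
```

A release store is cheaper than the default sequentially consistent one on AArch64 (it maps to `stlr` rather than a store plus a full barrier), which is why lock implementations typically prefer it for the unlock path.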