Segmentation fault when sending -QUIT signal to unicorn workers #152
Comments
Same for me. (Attached: Ruby 2.5.5 segfault crash report with machine register context, other runtime information, and a process memory map for /home/user/.rvm/rubies/ruby-2.5.5/bin/ruby.) |
You have to make sure you dispose of all mini_racer contexts prior to forking; in fact, best practice IMO would be just not to create any contexts in the unicorn master at all. v8 simply does not fork right. I am very surprised, though, that disposing does not work. |
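For reference, a minimal sketch of how that advice maps onto a unicorn config; the `config/unicorn.rb` location is an assumption, while the dispose call itself comes from later in this thread and the mini_racer README:

```ruby
# config/unicorn.rb (assumed location)
# Dispose every live MiniRacer context in the master before each worker
# is forked, so no V8 state is carried across the fork boundary.
before_fork do |server, worker|
  if defined?(MiniRacer::Context)
    require 'objspace'
    ObjectSpace.each_object(MiniRacer::Context, &:dispose)
  end
end
```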
We just ran into the same issue with puma. Should it be enough to add this to puma.rb?

```ruby
before_fork do
  require 'objspace'
  ObjectSpace.each_object(MiniRacer::Context, &:dispose)
end
```

Because somehow it seems that we are getting the Segmentation Fault anyway. And how does Discourse solve this? I can't see anything about it there. It is even happening when trying to start delayed_job (click to see the stacktrace).
We use
PS: Thank you so much for your work! |
Sort of; this will break existing contexts in the master process, so as long as you are not using mini_racer in the master this will be safe, and it will certainly fix any segfaulting.
|
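A minimal sketch of the "don't create contexts in the master" approach described above, assuming a lazily initialized, per-process context; `JsRuntime` is a hypothetical helper name, not something used in this thread:

```ruby
# Hypothetical helper: the MiniRacer context is created on first use,
# so a unicorn/puma master that never evaluates JS never owns a context
# and there is nothing to dispose of before forking.
module JsRuntime
  def self.context
    @context ||= MiniRacer::Context.new
  end

  def self.eval(source)
    context.eval(source)
  end
end
```

Workers that do evaluate JS each build their own context after the fork, which sidesteps the V8 fork problem entirely.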
But how can I control whether this is used in the master process or not? So I'm very open to any advice on how to debug or even fix this. PS: Maybe it would help if the |
@SamSaffron We also have this issue, but only when running unit tests... And the |
@crevete Hi! I'm sorry you're having issues. I'm not sure I understand what you mean by this part?
Since you say that the issue happens only when running tests, it sounds to me like you're using a parallel test runner that is trying to "copy" the v8 context across a fork. The solution would be to figure out where your test suite is booting up and add the dispose call in its equivalent of a before_fork hook. |
@nightpool Thanks for your response! FYI, we are using minitest and factory_bot_rails for unit testing. |
@crevete are you using minitest's parallel test runner? If so, you might want to do something like this:

```ruby
Minitest.before_parallel_fork do
  ObjectSpace.each_object(MiniRacer::Context) { |c| c.dispose }
end
```
|
No, we don't use the parallel test runner. |
Well, are you using system tests where a webserver is spawned (e.g. unicorn or puma)? Otherwise I can't help you. My problem isn't solved either. |
In a unicorn environment you must dispose of all MiniRacer contexts prior to forking,
OR
avoid creating any contexts in the unicorn master at all.
Today we swapped over our rails app from therubyracer v0.12.3 (using libv8 v3.16.14.19) to mini_racer v0.2.8 (using libv8 v7.3.492.27.1). Once we did this, we noticed our unicorn workers started to segfault when we sent them a -QUIT signal, and all of our workers would segfault when sending -QUIT to the master (this happens whenever unicorn restarts). The segfaults only seem to occur when a unicorn worker is shutting down, not when it is serving traffic. We don't see segfaults anywhere else in our app (e.g. sidekiq, rake commands, rails consoles, etc.).
We have also tried several "tweaks" for unicorn mentioned in your README, specifically:

```ruby
ObjectSpace.each_object(MiniRacer::Context, &:dispose) if defined?(MiniRacer::Context)
MiniRacer::Platform.set_flags! :noconcurrent_recompilation, :noconcurrent_sweeping
```
However, our unicorn workers still segfault with or without these tweaks.
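A minimal sketch of where the `set_flags!` tweak needs to live, assuming a Rails initializer (the file path is an assumption, not taken from this report); mini_racer requires platform flags to be set before the first context is created:

```ruby
# config/initializers/mini_racer.rb (assumed location)
# Platform flags need to be applied before any MiniRacer::Context exists;
# once the V8 platform is initialized, they can no longer be changed.
MiniRacer::Platform.set_flags!(:noconcurrent_recompilation, :noconcurrent_sweeping)
```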
I've attached the output from our unicorn log (unicorn.log.txt) from when one of the workers segfaults. It's not super helpful, so I installed a "beta" version of libv8 instead (7.3.492.27.0beta1) and enabled core dumps. I then loaded the coredump in gdb and ran "where", which produced this output:
It's not really clear to me whether this is a mini_racer issue or a libv8 issue.
Any help getting this resolved would be greatly appreciated. Let me know if you think I should open a ticket on libv8's project instead. Thanks!
unicorn.log.txt