Cassandra error from kong.core.cluster
#966
Comments
BTW, I see the same error when running under Kong's internal process manager, via…
…and these errors from a different function in cluster.lua:
Hi @mars, the first log you are seeing is not an error but an INFO entry. The second log is weird; lua-cassandra should print nicer error messages than this.
Globals initialized in the init phase should already be available from the other contexts. Otherwise, I believe lua-cassandra was never intended to be used inside of `init_worker_by_lua*`.
Hi @thibaultcha! Thanks for taking the time to look into this. I do see lots of log entries like:
So, I've been trying to diagnose further by exposing…
That is correct. This is one thing that really annoys/disgusts me too. There are big, long-term plans to refactor this as well as the plugin definition. We have brainstormed a few ideas to make plugins more robust and, most of all, to not depend on upvalues such as the Kong ones or…
In another experiment, I just converted the Kong codebase away from those globals. With that in place, I still see the same error:
Here's a branch to demo defining things that way. On that branch, when running Kong, I see:
Reading the docs for these Nginx Lua directives: co-sockets are not supported in these contexts. It looks like the solution might be to define the…
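For context, the usual OpenResty workaround for that restriction is to push the socket work into a zero-delay `ngx.timer.at` callback, which runs in a context where cosockets are allowed again. A minimal sketch (my illustration only, not Kong code; the host/port are placeholders):

```lua
-- Runs from init_worker_by_lua*, where ngx.socket.tcp and ngx.sleep are
-- not allowed. Defer the work to a zero-delay timer: the timer callback
-- runs in a context where cosockets are permitted.
local ok, err = ngx.timer.at(0, function(premature)
  if premature then
    return -- the worker is shutting down; skip the work
  end

  local sock = ngx.socket.tcp() -- allowed here, inside the timer callback
  sock:settimeout(1000)

  -- placeholder address standing in for a Cassandra contact point
  local connected, cerr = sock:connect("127.0.0.1", 9042)
  if not connected then
    ngx.log(ngx.ERR, "could not connect: ", cerr)
    return
  end

  sock:close()
end)

if not ok then
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```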
Thank you for taking the time to investigate this! I find it very valuable that you dedicate time to look into Kong. I did not know lua-cassandra was even being used in there, and curiously I have never encountered this error myself, which is either very strange or means I simply missed it. Yes, like I said, lua-cassandra was never meant to be working in `init_worker_by_lua*`.
That could be one solution too, maybe a more short-term one, since ideally those globals should be nuked anyway (#709 is a good place to talk about this, although since then we have a clearer idea of how this should be accomplished and would like to PoC it someday).
I just tried that fix, which resulted in a new mutex problem from…
I see @thibaultcha making progress on lua-resty-socket & lua-cassandra late last night 😲 So, I'm going to jump back out of this 🐰 hole.
I don't see how doing so will solve the problem: since those values are globals anyway, they are already available in all the contexts and in the timers (regardless of how wrong that is). There is one thing I still don't understand, though: I am really unable to replicate this and am not seeing any such error, especially since…
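To illustrate that point with a toy example (mine, not Kong code): a plain Lua global assigned during `init_by_lua*` (which runs in the master process) is inherited by the workers when they fork, so it remains visible from `init_worker_by_lua*` and from timer callbacks.

```lua
-- Toy example, not Kong code.

-- In init_by_lua* (nginx master process), assign a global:
configuration = { cluster_listen = "0.0.0.0:9946" } -- hypothetical value

-- In init_worker_by_lua* (each worker, after fork), the global is inherited:
assert(configuration, "global `configuration` should be visible here")

-- ...and it is visible inside timer callbacks as well:
local ok, err = ngx.timer.at(0, function(premature)
  if premature then return end
  ngx.log(ngx.NOTICE, "cluster_listen = ", configuration.cluster_listen)
end)
if not ok then
  ngx.log(ngx.ERR, "failed to create timer: ", err)
end
```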
I see the error running Kong 0.6.1 with both:
The only place I have a Kong cluster running is in a Heroku Private Space. I've been trying to reproduce locally, but would need to set up a development cluster on OS X, and I have not yet figured that out. Do you have any notes on how you all work with clustering locally/in development?
I finally figured out how to get a second Kong running locally in a cluster. I now confirm that this error does not occur when running locally on OS X with either Mashape/kong 0.6.1 or the fork mars/kong. I guess there's something not-quite-right about the build on Heroku. I'll close this in favor of a new issue in that repo. Thanks for your patience.
I've been seeing these errors coming out of Nginx:
(Edited for clarity: removed a misleading INFO log entry.)
This seems to indicate that the calls from `Kong.init_worker()` are missing the global `configuration` set in `Kong.init()`. So, I tried adding the global `configuration` to `init_worker()`, and now I see a much more descriptive error:

A few preliminary conclusions:

- `kong.core.cluster` relying on globals `configuration` & `dao`
- `sleep` & `tcp_sock` are not allowed in `init_worker_by_lua*`
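To make the moving parts concrete, here is a rough sketch of the shape of that wiring (my approximation, not Kong's actual code; `load_configuration` and `build_dao` are hypothetical stand-ins):

```lua
-- Rough sketch only; `load_configuration` and `build_dao` are hypothetical
-- stand-ins for whatever Kong actually does in these phases.

-- Kong.init() runs from init_by_lua* in the nginx master process:
local function init()
  configuration = load_configuration() -- assigns the GLOBAL `configuration`
  dao = build_dao(configuration)       -- assigns the GLOBAL `dao`
end

-- Kong.init_worker() runs from init_worker_by_lua* in every worker:
local function init_worker()
  -- kong.core.cluster expects the globals `configuration` and `dao` here,
  -- and any datastore keepalive it schedules cannot call ngx.sleep or
  -- ngx.socket.tcp directly in this phase; that work belongs in ngx.timer.at.
  assert(configuration and dao, "globals missing in init_worker()")
end
```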
As @thibaultcha & @thefosk know, I'm operating Kong on an experimental fork using an external supervisor (#928), so this issue may be self-inflicted, or it may be a canary for a latent issue.