panic: runtime error: invalid memory address or nil pointer dereference / [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xeb3c26] #2051
All pods simple-cluster-rep1* terminated and respawned, thanks to m3db-operator.
Is there something wrong with the disks?
Set replicationFactor=1.
Set replicationFactor=2. Enjoy the magic of m3db-operator...
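In m3db-operator terms, that repro is a one-field change on the cluster spec. A minimal sketch, assuming the resource is named `simple-cluster` (the pod names suggest it) and that the CRD field is `spec.replicationFactor`; both are assumptions, not taken from the report:

```shell
# Merge patch that raises replicationFactor from 1 to 2
# (resource name and field path are assumptions)
kubectl patch m3dbcluster simple-cluster --type merge \
  -p '{"spec":{"replicationFactor":2}}'
```

The operator then reconciles the cluster toward the new replication factor, which is what triggers the pod churn described below.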
OK, so I guess the problem is somehow related to the disk.
(Forgot to say that, as of now, all 6 pods have build revision de4bc64.)
@ymettier unfortunately this is an issue introduced by a recent change. Looks like in the future we'll do more select testing with other users before merging to master, as it seems more people are pinned to master than we expected. Here's the fix; it'll go into master in the next hour or two. Apologies, and thank you for the detailed bug report, greatly appreciated.
Np, thanks a lot, that sounds good.
Today I tested the M3 database with m3nsch, and a similar error was reported at the end of the run. May I ask what happened? The error occurs when I initialize with m3nsch_client, and whatever the arguments are, the same error is reported. I pulled the master branch of m3 today, then compiled and built m3nsch. Below is the error output; all three nodes report the same error.
INIT
OUTPUT
Disclaimer
I'm new to M3DB and I'm just testing it.
This is a bug report that I will not be able to follow up on. If you think it is useless, feel free to close it.
I have no sensitive data. And I guess that I will remove everything and restart from scratch.
Context
Deploying in Kubernetes thanks to m3db-operator
Step 1: My config
I'm running with this config:
I have 2 pods running:
Both are running build revision 638ecfb.
This is my initial deployment, so there is no data from any previous deployment that could explain the bug.
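The config itself was lost in the issue formatting. As an illustration only, a minimal M3DBCluster manifest with replicationFactor=1 might look roughly like the sketch below; every value is a placeholder assumption, not the reporter's actual config:

```shell
# Write an illustrative M3DBCluster manifest; all values are placeholders
cat > /tmp/simple-cluster.yaml <<'EOF'
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: simple-cluster
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 1
  numberOfShards: 256
  isolationGroups:
    - name: group1
      numInstances: 1
EOF
# Against a live cluster this would be applied with:
#   kubectl apply -f /tmp/simple-cluster.yaml
grep 'replicationFactor' /tmp/simple-cluster.yaml
```

Scaling, as described in the comments above, would then amount to editing `replicationFactor` and letting the operator reconcile.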
Now let's play...
Step 2: Scaling
I update my config:
I have 3 pods that spawn (one after the other):
Notice: no simple-cluster-rep1-2.
All new pods are running build revision de4bc64.
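To watch a scale-up like this happen pod by pod, something along these lines works; the label selector is an assumption about how m3db-operator tags its pods, so check the actual labels first:

```shell
# Inspect the labels the operator actually applies
kubectl get pods --show-labels

# Then watch the new pods appear one after the other
# (label key below is an assumption, not taken from the report)
kubectl get pods -l operator.m3db.io/cluster=simple-cluster -w
```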
Step 3: Upgrade the first old pod
The pod terminates and a new one spawns, with build revision de4bc64.
I wait for more than 5 minutes and see simple-cluster-rep1-2 spawn (build revision de4bc64) with no action on my part. Thanks, m3db-operator.
OK, all is running fine. I still have one old pod.
Step 4: Upgrade the last pod
The pod terminates and a new one spawns, with build revision de4bc64.
My Grafana says that I have 5 bootstrapped nodes and 1 bootstrapping, but I've already been waiting for more than 5 minutes and nothing happens.
Fine. Not fine: error, then CrashLoopBackOff.
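When a pod lands in CrashLoopBackOff like this, the panic trace usually lives in the previous container run, not the current one. A sketch, with the pod name assumed from the naming scheme in this report:

```shell
# Logs from the crashed (previous) container, where the panic trace lives
# (pod name is an assumption; substitute the actual crashing pod)
kubectl logs simple-cluster-rep1-2 --previous

# Restart count, last state, and exit code of the crashing container
kubectl describe pod simple-cluster-rep1-2
```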
I hope you can find the origin of this crash.
As I said at the beginning, if you cannot do anything with this bug report, feel free to close it.