[0.9.6.1 tsm1] panic: unexpected fault address #5283
Comments
I have restarted the database and it's working OK.
This may have been fixed by #5264, which is in the latest nightlies.
Also, can you put the full trace of the panic in a gist?
Attached is the log file, also available as a gist:
@ivanscattergood @dswarbrick Are you able to run one of the nightlies compiled with go 1.4.2 and see if this panic goes away?
@jwilder I've built golang 1.4.3 from source packages that are still lurking in Debian's repos, and built InfluxDB from git master (7ccbbec as of writing). It's starting up now (which seems to take about 30 minutes as it wades through every shard, but that's an issue for another day). Due to the nature of the bug, I won't know if it's resolved until 24 hours from now.
@dswarbrick Great. Could you log an issue for the startup and attach a copy of your startup log? I'm aware of some slow parts in startup, but would like to see what is taking so long on your system.
If anyone else would like to try to recreate this issue, I've generated Go 1.4.2 packages from master here:
@jwilder My golang 1.4 build did not panic or get oom-killed, but as you can see from the attached graphs, things still went quite pear-shaped after it had been running for 24h.
@dswarbrick I'm curious whether you still see that pear-shaped graph each night. It may be that because the server kept crashing, no full compactions had run, and that pear-shaped hump was just the first time all of them succeeded.
@jwilder Here is another screenshot of my dashboard. Unfortunately Grafana seems to mess up the CPU graphs when zoomed right out, but you can see on the left of the load average graph the pear-shaped hump, which was the golang 1.4 build. At 16:00 on 1/9 I restarted InfluxDB, using 0.10.0-nightly-72c6a51 from the repos (i.e., a golang 1.5 build). Interestingly, the load average is noticeably higher with the golang 1.5 build (but still quite acceptable). Since installing 0.10.0-nightly-72c6a51, there have been a few moments, at 24h intervals since starting, where it looked like it was getting ready for its daily spin cycle, but the load subsided again each time. Also no crazy disk utilization spikes, no oom-kills, and no panics. Incidentally, I took your advice and started using second-precision timestamps, which resulted in visibly lower network traffic around 03:00 on 1/10. Yesterday's shard was also 7.9 GB instead of the usual 11 GB.
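Switching to second precision only requires posting line protocol with `precision=s` and a Unix-seconds timestamp. Here is a minimal Go sketch against the HTTP write endpoint, assuming a local server and placeholder database and measurement names (`mydb`, `cpu_load`), not the schema used above:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	// One point in line protocol. With precision=s the trailing timestamp is
	// interpreted as Unix seconds, so stored timestamps are whole seconds.
	// "cpu_load", "server01" and the field "value" are placeholder names.
	point := fmt.Sprintf("cpu_load,host=server01 value=0.64 %d", time.Now().Unix())

	// "mydb" and the local address are placeholders for the real target.
	url := "http://localhost:8086/write?db=mydb&precision=s"
	resp, err := http.Post(url, "text/plain", strings.NewReader(point))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		fmt.Println("unexpected status:", resp.Status)
	}
}
```

Coarser timestamps carry less variation per point, which presumably helps the time column compress better and would account for the smaller shards.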
@dswarbrick Great. #5331 will be switching the builds back to 1.4.3.
Hi
I have been running an instance of 0.9.6.1 using tsm1 since 27th December; it crashed this morning just after midnight.
At midnight I run a summarisation routine which inserts aggregate data for the day (I have tried using continuous queries for this without much success), so I assume this is the reason for the crash.
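Such a summarisation routine typically boils down to a GROUP BY time(1d) aggregate over the previous day, with the resulting rows written back as summary points. The following is only a minimal Go sketch of the query half against the HTTP /query endpoint, with placeholder names (`mydb`, `requests`, field `value`) standing in for the real schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Minimal shape of an InfluxDB /query JSON response; only the fields used here.
type queryResponse struct {
	Results []struct {
		Series []struct {
			Name    string          `json:"name"`
			Columns []string        `json:"columns"`
			Values  [][]interface{} `json:"values"`
		} `json:"series"`
	} `json:"results"`
}

func main() {
	// "mydb", "requests" and the field "value" are placeholders for the real schema.
	q := "SELECT mean(value) FROM requests WHERE time >= now() - 1d GROUP BY time(1d)"

	params := url.Values{}
	params.Set("db", "mydb")
	params.Set("q", q)

	resp, err := http.Get("http://localhost:8086/query?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var qr queryResponse
	if err := json.NewDecoder(resp.Body).Decode(&qr); err != nil {
		panic(err)
	}

	// Each row is [time, mean]; a real routine would write these back as
	// points in a summary measurement via the /write endpoint.
	for _, res := range qr.Results {
		for _, s := range res.Series {
			for _, row := range s.Values {
				fmt.Println(s.Name, row)
			}
		}
	}
}
```

The resulting rows would then be posted back through the /write endpoint, as in the earlier write sketch.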
I am restarting the database now to see if it recovers.
Here is the output from the log: