AssertionError crash at ServerResponse.resOnFinish #2639
I was fiddling around with my server on DigitalOcean and got this screen when checking out their embedded terminal. Edit: total RAM of the server is 512 MB.
Did you try with other versions, like io.js 2.5.0?
No, this just happened on a live instance. I had been running 3.2 on a different instance until yesterday, but that machine had 1 GB of RAM. I will try upgrading the server and see if it happens again.
Yeah, people have been posting and, so far, so good. `top` indicates about 400 MB of free RAM, so I really do suspect the issue was RAM causing trouble down the road.
Actually, it still crashes, just not as often, and the application stops.
Crashed with 3.3.0, trying with 2.5.0. Edit: so far, 2.5.0 is going through the same scenarios that caused 3.x to crash, and there have been absolutely zero failures.
Update: someone pointed out that DigitalOcean doesn't give you swap memory. So it might be possible the crashes were caused by a RAM peak?
We see the same error with Node 4.0 on our RHEL 6 machines. We haven't seen this issue with io.js 2.5.0.
This seems to be related to #1411. @indutny Any idea what's happening here? Stack trace:
I'm running into the same issue with node 4.1.0 using just Express. Looks like the socket.io folks are seeing this too: socketio/socket.io#2236 😭 Definitely not RAM: 16 GB here.
I may have fixed it with an `npm update`.
I'm running into the same issue with io.js 3.0.0 :( Any update on this? @fasiha how is it going with `npm update`?
@simplier Node 4.1.0 produces the assertion failure less often after `npm update`.
I'm running into the same issue every time I send a server request with node 4.1.0, and also after running `npm update`.
I tried to debug this a bit, and I can see that when
No luck with io.js 3.x; went back to io.js 2.5 and it works fine.
Has anyone who is facing this issue tried to create a minimal test case? I tried to build one, but I can't reproduce the error, as I don't yet understand what is actually causing it.
@arthurschreiber No idea, I just figured the error happens under heavier load.
@mrseth no, it sometimes happens on the very first request I make to express, immediately after starting node 😩
@fasiha in my case it always happens on the first request ... I had to downgrade to keep developing.
Here are the dependencies in my `package.json`:
Perhaps if we get enough of these dependency lists, taking the intersection of them all will identify the problem? I'm on Mac.
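That intersection is easy to script. A minimal sketch, assuming each report is the `dependencies` object from a `package.json` (the helper name and sample data below are illustrative, not real reports from this thread):

```js
// Intersect dependency names across several package.json objects.
function intersectDeps(pkgs) {
  return pkgs
    .map((pkg) => new Set(Object.keys(pkg.dependencies || {})))
    .reduce((acc, names) => new Set([...acc].filter((n) => names.has(n))));
}

// Illustrative data only:
const a = { dependencies: { express: '4.13.3', lodash: '3.10.1' } };
const b = { dependencies: { express: '4.13.3', async: '1.4.2' } };
console.log(intersectDeps([a, b])); // logs a Set containing only 'express'
```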
@fasiha Are you running an HTTP or HTTPS server? Can you try disabling different modules and see what combination still triggers the error on your side? In our application, this error only happens under high load, and so far I've not found a way to reproduce it. 😞
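For readers trying to reproduce this today: the trigger turned out to be pipelined requests on a single connection with responses ending asynchronously. A minimal sketch in the spirit of the regression test that later landed with the fix (the request count and delay here are arbitrary):

```js
const http = require('http');
const net = require('net');

const COUNT = 10;
let received = 0;

const server = http.createServer((req, res) => {
  // One TCP connection is all we need; stop accepting new ones.
  if (received++ === 0)
    server.close();

  res.writeHead(200);
  res.write('data');
  // Ending on a later tick lets responses finish "out of step"
  // with the order of the pipelined requests.
  setTimeout(() => res.end(), Math.floor(Math.random() * 100));
});

server.listen(0, function() {
  const socket = net.connect(this.address().port);
  // Pipeline COUNT requests in a single write.
  socket.write('GET / HTTP/1.0\r\n\r\n'.repeat(COUNT));
  socket.resume();
});
```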
Trying to debug this further. I added an express route that basically patches the
The output is quite interesting:
I don't really understand how this can happen. It looks like the socket never got attached, or that it tries to detach the socket twice.
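The patch used for this debugging wasn't captured above, but instrumentation of that kind can be sketched by wrapping the internal (undocumented, version-dependent) `assignSocket`/`detachSocket` methods on `http.ServerResponse`:

```js
const http = require('http');

// Log every socket attach/detach on server responses, to see whether a
// response never gets a socket, or gets detached twice.
['assignSocket', 'detachSocket'].forEach((name) => {
  const original = http.ServerResponse.prototype[name];
  http.ServerResponse.prototype[name] = function(socket) {
    console.log('%s called; connection already set: %s', name, !!this.connection);
    return original.call(this, socket);
  };
});
```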
@arthurschreiber maybe stack corruption in the C++ code?
@indutny @bnoordhuis Any idea what could be wrong? Any idea how we could better debug this?
Relevant PR: #3059
Just in case: it will be merged as soon as two things happen:
So if you get a chance, please test this patch and let us know if it fixes the bug for you!
Now it does not crash, but it does not load at all... Before, the behaviour was different: if node didn't crash, some requests took exactly 30 seconds (in the iOS network monitor) but only a few ms on the server... Now, after the patch, a request keeps loading for a few minutes; I can watch in the network monitor which request it is, but it is not quite OK...
@mareksrom does it hang if you replace
Before this patch it didn't... I added some logs; I will try and let you know...
@mareksrom argh... stupid me. Try this one:

```diff
diff --git a/lib/_http_outgoing.js b/lib/_http_outgoing.js
index 0522ecb..da54247 100644
--- a/lib/_http_outgoing.js
+++ b/lib/_http_outgoing.js
@@ -71,6 +71,7 @@ function OutgoingMessage() {
   this._header = null;
   this._headers = null;
   this._headerNames = {};
+  this._prefinish = false;
 }
 
 util.inherits(OutgoingMessage, Stream);
@@ -526,7 +527,14 @@ OutgoingMessage.prototype.end = function(data, encoding, callback) {
   var self = this;
 
   function finish() {
-    self.emit('finish');
+    // NOTE: It is very important to wait for `prefinish` event.
+    // It is emitted only when the response becomes active in the pipeline.
+    // The ordering of requests/responses in `_http_server.js` is dependent
+    // on this, doing it in different order will result in crash.
+    if (self._prefinish)
+      self.emit('finish');
+    else
+      self.on('prefinish', finish);
   }
 
   if (typeof callback === 'function')
@@ -586,6 +594,7 @@ OutgoingMessage.prototype.end = function(data, encoding, callback) {
 
 OutgoingMessage.prototype._finish = function() {
   assert(this.connection);
+  this._prefinish = true;
   this.emit('prefinish');
 };
```
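The fix above is an instance of a general event-ordering pattern: defer one event until a prerequisite event has fired, with a flag covering the case where it already has. A standalone sketch of just that pattern (class and method names here are illustrative, not node's):

```js
const EventEmitter = require('events').EventEmitter;

class DeferredFinish extends EventEmitter {
  constructor() {
    super();
    this._prefinished = false;
    this.on('prefinish', () => { this._prefinished = true; });
  }

  // Emit 'finish' immediately if 'prefinish' already fired,
  // otherwise wait for it: the same shape as the patch above.
  finish() {
    if (this._prefinished)
      this.emit('finish');
    else
      this.once('prefinish', () => this.emit('finish'));
  }
}

const d = new DeferredFinish();
d.on('finish', () => console.log('finish fired after prefinish'));
d.finish();           // too early: deferred
d.emit('prefinish');  // now 'finish' fires
```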
I put the second patch aside, and here is what's going on:
The log is:
It is of course followed by a crash like in the previous tries... I'm pretty new to this (I'm looking at node's http server code for the first time), but why do we expect responses to finish in the same order in which their requests were put into the incoming queue? If that ordering is necessary, then I think we need to put resOnFinish into some queue and call it only after all previous requests in the incoming queue have passed resOnFinish... @indutny is this idea right? I can try to prepare a patch, but I'm not sure the order is really necessary. If it is, then in my opinion this is really badly written and the prefinish vs. finish change is only a side effect... If not, then the assert is a bad idea...
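Background on why the order matters: HTTP/1.1 pipelining requires responses on one connection to be sent in the same order as their requests, so `_http_server.js` keeps parsed requests in a FIFO queue. A simplified illustration of the invariant the failing assertion guards (not node's actual code):

```js
const assert = require('assert');

const incoming = [];  // requests parsed on one connection, in arrival order

function onRequestParsed(req) {
  incoming.push(req);
}

// Runs when a response finishes. Pipelined responses must complete
// head-of-queue first; a response finishing out of order trips the
// assertion, which is the crash this issue is about.
function resOnFinish(req) {
  assert.strictEqual(incoming[0], req);
  incoming.shift();
}
```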
@indutny I just tried the following test case, based on the test in your PR:
I see the following stack trace:
This is different from what others are reporting, but the same as what we see in our test environments.
My last comment was not wise; of course it has to be in order :) I think I know where the problem is... just trying...
I can't replicate it on my local machine. It only fails on the QA server, which is exposed.
I have already tracked down the problem and I am working on a patch. It already works, but some requests still keep loading, so some testing needs to be done. I am almost finished; expect me to publish it this evening or tomorrow. In some cases the finish event is being fired even before the socket was assigned...
@indutny I proposed a _http_outgoing.js change with the bug fix (this is my first time here, so I don't know if I should do it this way :). The problem was that when the response was ended and reached the `_send('', ...)` hack before the connection was assigned to it, the finish callback fired immediately, before anything was sent, when it should have waited for the response to be activated and flushed... I hope it is correct; it solved my test case, and after spending a few hours in the http code today I hope I understand it quite well. The response-ending code is worth refactoring, I think... I left one assert in there for a case that should not happen: in `_writeRaw`, if the data were empty, the connection assigned, and something was in the output buffer, the behaviour would not be correct. I don't know if that can happen...
@mareksrom thanks for the insight! I don't exactly agree with the fix, though, and I think we can do it better. I'll comment on that PR.
Yeah, of course we can. As I wrote, this is only a fix; I didn't want to make the bigger changes that I think should be done here... I don't know anything about your guidelines and so on. I was just unhappy about node falling down every few minutes, and it was a good reason to look a bit under the covers... :)
@indutny @mareksrom sorry, I'm a little unclear on the status of your patches: should we be testing them, or are they both in flux? Apologies for being dense… I ask because @arthurschreiber seems to have experienced an assertion failure with @indutny's patch.
I believe @mareksrom's patch is the closest to being merged; either way, we'll have a release out this week with a fix in it, as it seems the cause has now been fully diagnosed.
🎉 so happy! Thanks everyone 🍻
Please upgrade to v4.1.2 and confirm that this problem has been resolved.
I hit it only very intermittently, so you will need someone who can reproduce it at will to confirm. I have already upgraded my server, but it doesn't get enough traffic.
Hi, I started noticing the problem, searched for info, found this issue, updated to 4.1.2, and now the problem is gone. Thanks!
It used to crash often while developing on Windows (http-server with a lot of static requests); it looks stable now, thanks.
looks stable!!!! 👍
Thanks
Changes to `stream_base.cc` are required to support empty writes.

Fixes CVE-2015-7384, nodejs#3138
Fix: nodejs#2639
PR-URL: nodejs#3128
thx!
Running io.js 3.2 installed from source on a CentOS 7 machine; here are the logs:
http://pastebin.com/ix0Ke5zS