Error handling request while parsing request using flask and gunicorn #818

Closed
shashank9487 opened this issue Jul 14, 2014 · 22 comments

@shashank9487

I'm getting these errors in my error log file. What do they mean? Are all my requests being dropped?

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/async.py", line 116, in handle_request
    raise StopIteration()
StopIteration
2014-07-14 08:30:29 [1833] [ERROR] Error handling request
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/async.py", line 116, in handle_request
    raise StopIteration()
StopIteration

(the same traceback repeats for worker PIDs 1919, 1702 and 1859)

@tilgovi
Collaborator

tilgovi commented Jul 14, 2014

What version of gunicorn?

@shashank9487
Author

I am using gunicorn==19.0.0.

@tilgovi
Collaborator

tilgovi commented Jul 15, 2014

@shashank9487 is this your problem? #790
Is it fixed on master for you?

@shashank9487
Author

I tried this fix, but now I'm facing another error:

2014-07-15 09:12:49 [19872] [ERROR] Error handling request
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/async.py", line 108, in handle_request
    resp.write(item)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/http/wsgi.py", line 342, in write
    util.write(self.sock, arg, self.chunked)
  File "/usr/local/lib/python2.7/dist-packages/gunicorn/util.py", line 301, in write
    sock.sendall(data)
  File "/usr/local/lib/python2.7/dist-packages/gevent/socket.py", line 458, in sendall
    data_sent += self.send(_get_memory(data, data_sent), flags)
  File "/usr/local/lib/python2.7/dist-packages/gevent/socket.py", line 435, in send
    return sock.send(data, flags)
error: [Errno 32] Broken pipe

The StopIteration exception is handled now, but my requests still return errors...

@dsoprea

dsoprea commented Jul 15, 2014

This might be related to this "Error handling request" Gunicorn/gevent bug: pallets/flask#1115. I had guessed it was a Flask problem, since I'd used Gevent with Gunicorn on web.py in the past and had not gotten the error. Thoughts?

@romabysen
Contributor

I'm seeing the same thing, but only when Gunicorn 19.0 is behind nginx. I have Gunicorn/Flask/Gevent instances on Heroku that do not have this problem; it only started happening after upgrading to 19.0. The same thing happens with eventlet workers, but not with sync.
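
For anyone trying to reproduce: a minimal setup along these lines should hit the same code path (the file and app names here are illustrative, not from my actual deployment):

# app.py -- minimal Flask app; run it behind nginx with:
#   gunicorn -k gevent -w 3 -b 127.0.0.1:8080 app:app
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"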

@romabysen
Contributor

Hmm... a slight correction: it also happens on Heroku. I just didn't see it because I wasn't running with "--error-logfile -".

@devries

devries commented Jul 23, 2014

This is not just Flask; I get it when using Bottle too. My setup is gunicorn 19.0 behind nginx running a Bottle app. I also verified that the same app runs without the exception when it is not proxied behind nginx.

My nginx configuration is:

upstream bottlecluster {
  server bottle:8080;
}

server {
  ...
  location / {
    proxy_pass        http://bottlecluster;
    proxy_redirect    off;
    proxy_set_header  Host $host;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

@devries

devries commented Jul 23, 2014

I was able to resolve the problem by adding the line:

proxy_set_header  Connection "";

in my nginx proxy configuration.

@tilgovi
Collaborator

tilgovi commented Jul 23, 2014

@devries wouldn't that break keep-alive?

@devries

devries commented Jul 23, 2014

It would, unless you also set:

proxy_http_version 1.1;
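
Putting the two together with the configuration from my earlier comment, the proxy block looks like this (a sketch; the elided server lines are unchanged):

upstream bottlecluster {
  server bottle:8080;
}

server {
  ...
  location / {
    proxy_http_version 1.1;
    proxy_pass        http://bottlecluster;
    proxy_redirect    off;
    proxy_set_header  Host $host;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  Connection "";
  }
}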

@benoitc benoitc added this to the R19.1 milestone Jul 25, 2014
@benoitc
Owner

benoitc commented Jul 26, 2014

@devries can you reproduce it on latest master?

@benoitc
Owner

benoitc commented Jul 26, 2014

fixed in f41f86c

@benoitc benoitc closed this as completed Jul 26, 2014
@devries

devries commented Jul 27, 2014

I have verified that the fix works for me. Thank you, great work!

I am getting "[1] [INFO] 3 workers" in the logs approximately every 30 seconds now.

@romabysen
Contributor

I can confirm that this fixes it for me too.

@benoitc
Owner

benoitc commented Jul 28, 2014

@devries @romabysen thanks for the feedback!

@moodh

moodh commented Jul 30, 2014

Any reason why "X Workers" is sent as INFO instead of DEBUG? :)

@tilgovi
Collaborator

tilgovi commented Jul 30, 2014

I think it was added for the statsd work. It seemed acceptable that one might wish to monitor the number of workers but not get all the debug logs. But maybe we can just log it when it changes or something. Not sure how that works for statsd metrics. cc @alq666

@alq666
Contributor

alq666 commented Jul 30, 2014

Indeed, it was added to be able to track the number of workers as a metric via statsd. If we change it to DEBUG, it won't be published as a metric (as things currently stand).

If you think this is too much logging, we could move that message to DEBUG, as long as I rework the treatment of debug messages in statsd.py.
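
For context, the coupling works roughly like this: the statsd instrumentation piggybacks on logging calls, so a metric is only published when the corresponding log call actually fires at an emitted level. A rough sketch of the idea using the third-party statsd package (the function and metric names are illustrative, not gunicorn's actual internals):

import logging

from statsd import StatsClient  # pip install statsd

log = logging.getLogger("gunicorn.error")
statsd = StatsClient("localhost", 8125)

def report_workers(count):
    # The log line and the gauge are emitted together, so silencing
    # the log call (e.g. moving it to DEBUG under an INFO-level
    # logger) would also silence the metric.
    log.info("%d workers", count)
    statsd.gauge("gunicorn.workers", count)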

@tilgovi
Collaborator

tilgovi commented Jul 30, 2014

Or maybe we should just log whenever the number of workers changes? It's great to track the worker count but it probably should not stand in for a heartbeat.

@benoitc
Owner

benoitc commented Jul 30, 2014

+1

@alq666
Contributor

alq666 commented Jul 30, 2014

I created a separate issue to track this: #834
