log.py's StdErrWrapper swallows fatal stacktraces #163

Closed
jpotter opened this issue May 28, 2014 · 7 comments

jpotter commented May 28, 2014

When there's a Python error in the locust file, the stack trace is swallowed by the StdErrWrapper.

In main.py, the code imports the locust file:
imported = __import__(os.path.splitext(locustfile)[0])
without a try/except block.

One possibility is to wrap this in a try/except and log the stack trace there, but the try block would have to explicitly list things like IOError, KeyError, etc., since those aren't reliably caught by an unqualified "except" clause. The other possibility is to not wrap stderr at all, and leave anything printed to sys.stderr to be handled by whatever supervisor is managing the process.
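
A minimal sketch of the first option, using a hypothetical load_locustfile helper rather than locust's actual code; the point is just to catch whatever the import raises and log the full traceback before exiting:

import logging
import os
import sys
import traceback

logger = logging.getLogger(__name__)

def load_locustfile(locustfile):
    # Hypothetical wrapper around the __import__ call in main.py: log the
    # full traceback of any import-time failure instead of letting the
    # wrapped stderr swallow it, then exit with a non-zero status.
    try:
        return __import__(os.path.splitext(locustfile)[0])
    except Exception:
        logger.error("Failed to import %s:\n%s", locustfile, traceback.format_exc())
        sys.exit(1)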

This is a real annoyance: any time there's an issue in any of our libraries, there's no way to see the stack trace; locust just instantly exits with status 1 and no info as to the cause.

What's the suggested way to fix this?

Thanks,
Jeff

jpotter (Author) commented May 29, 2014

It looks like this may actually be an issue with an import from our locust file that then sets up logging again -- the answer seems to be to use the logger in our locust file instead of printing to stdout/stderr.

jpotter (Author) commented May 29, 2014

I've managed to reduce this to a very simple case.

Starting up locust like so:

virtualenv/bin/locust --clients 50 --hatch-rate 5 --num-request 100 --no-web --host https://localhost:8443 --locustfile /Users/jeff/dev/locust_tester.py MainLocust

In locust_tester.py, we have:

import logging

logger = logging.getLogger()
logger.critical("Loading locust_tester")  # So we can see in locust's logs that we actually started
import logging.config
logging.config.fileConfig("/Users/jeff/dev/logging.conf")
logger.critical("Loading locust_tester 2")  # This one never shows up

This outputs:

[2014-05-29 12:59:23,496] Jeffs-Laptop.local/CRITICAL/root: Loading locust_tester

The contents of /Users/jeff/dev/logging.conf:

[loggers]
keys = root

[handlers]
keys = consoleHandler

[formatters]
keys = ourFormatter

[formatter_ourFormatter]
format=%(asctime)s %(process)d %(name)s %(levelname)s - %(message)s
datefmt=%m-%d %H:%M:%S

[logger_root]
level = NOTSET
handlers = consoleHandler

[handler_consoleHandler]
class=StreamHandler
formatter=ourFormatter
args=(sys.stderr,)

I feel like I must be missing something very basic here. I think the issue is that args=(sys.stderr,) doesn't work as expected, because locust has replaced sys.stderr with its StdErrWrapper.

Suggestions or thoughts?

jpotter (Author) commented May 29, 2014

Setting args to "args=(sys.__stderr__,)" actually makes this work, which is a pretty good indication that assigning StdErrWrapper to sys.stderr is causing the problem. I don't think using sys.__stderr__ is a good solution, though.
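
For anyone following along, here's a tiny standalone illustration (not locust code) of why sys.__stderr__ still works after sys.stderr has been replaced. fileConfig evaluates the args= expression at configuration time, so the handler gets whatever object sys.stderr points at right then:

import sys

class SwallowingWrapper:
    # Stand-in for locust's StdErrWrapper: writes go somewhere else
    # (in locust's case into its own logging setup), not to the terminal.
    def write(self, message):
        pass
    def flush(self):
        pass

sys.stderr = SwallowingWrapper()

print("never seen", file=sys.stderr)         # goes into the wrapper
print("still visible", file=sys.__stderr__)  # the original stream is untouched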

heyman (Member) commented Jun 24, 2014

Hey, sorry for taking so long to reply to this. TBH, Python logging has always been confusing to me. The current solution in Locust wraps stdout and stderr so that it can catch print statements, write to a log file when the --logfile option is used, and format messages with timestamps. I'm not sure whether this can be done some other way without wrapping stdout and stderr, or if it's just a bad idea altogether?
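
For readers unfamiliar with the pattern, here is a rough sketch of what such a wrapper can look like; this is illustrative only, not locust's actual StdErrWrapper:

import logging
import sys

class StreamToLogger:
    # Forwards anything written to the "stream" into a logger, which is how
    # plain print output can pick up timestamps and end up in a --logfile.
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        message = message.rstrip()
        if message:
            self.logger.log(self.level, message)

    def flush(self):
        pass  # the real logging handlers handle flushing themselves

# Handlers are given the *original* stream before the swap, otherwise
# writes would loop back into the wrapper.
logging.basicConfig(stream=sys.__stderr__, level=logging.INFO,
                    format="[%(asctime)s] %(levelname)s/%(name)s: %(message)s")

sys.stdout = StreamToLogger(logging.getLogger("stdout"), logging.INFO)
sys.stderr = StreamToLogger(logging.getLogger("stderr"), logging.ERROR)

print("hello")  # now comes out as a formatted log record on the real stderr

Once sys.stderr has been swapped out like this, any handler configured later from the current value of sys.stderr writes into the wrapper rather than the real stream, which is the interaction hit above with fileConfig.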

As I understand it, the problem arises when trying to manually configure logging within your test files? Perhaps we could add an option that prevents Locust from running its logging setup code and instead leaves it up to the user?

jpotter (Author) commented Jun 24, 2014

You're not alone in being confused by Python's logging. I find it... cumbersome?... too.

The issue, I think, is that you're replacing sys.stderr with an object that isn't fully compatible with all use cases -- e.g. when Python's logging tries to write to stderr after you've changed it, the writes get completely swallowed and never appear anywhere.

Not being a Python logging expert, I'll leave it to others to suggest how to correctly configure it. I don't think you'd normally wrap stderr the way you're doing, though.

If you want to maintain the current way you manage logging data, then yes, I think adding a --no-log-wrap or --skip-log-handler option would at least give a work-around for users who don't want to fork and edit the code in this scenario.

-Jeff

Exide (Contributor) commented Jun 18, 2018

@heyman @jpotter This issue is fairly old, but I ran into it as well and submitted #738 to sidestep the setup_logging call with a CLI option.

heyman (Member) commented Oct 20, 2019

This should now be avoidable by using the --skip-log-setup option merged in #738.
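
For reference, a possible invocation combining the flag with the earlier example in this thread (paths and the MainLocust class name are taken from the commands above; other flags may differ between locust versions):

virtualenv/bin/locust --skip-log-setup --no-web --host https://localhost:8443 --locustfile /Users/jeff/dev/locust_tester.py MainLocust

With --skip-log-setup, the setup_logging call (and with it the stdout/stderr wrapping) is skipped, so the logging.config.fileConfig(...) call in locust_tester.py should behave the way it would in any other Python process.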

heyman closed this as completed Oct 20, 2019