web.run_app: cleanup handlers not called if an error occurs before run_forever #1959
Comments
I did some experimentation here: https://github.com/cecton/aiohttp/compare/sigterm-handling...cecton:runapp-better-context?expand=1

I made contexts and re-indented the code... nothing big despite the diff. I also tried to preserve the order of the original `finally`. The test in that commit fails when run against the current code in master:

```python
def test_startup_cleanup_signals_even_on_failure(loop, mocker):
    skip_if_no_dict(loop)

    setattr(loop, 'create_server', raise_expected_exception)
    loop.call_later(0.05, loop.stop)

    app = web.Application()
    mocker.spy(app, 'startup')
    mocker.spy(app, 'cleanup')

    with pytest.raises(ExpectedException):
        web.run_app(app, loop=loop)

    app.startup.assert_called_once_with()
    app.cleanup.assert_called_once_with()
```

https://github.com/cecton/aiohttp/blob/runapp-better-context/tests/test_run_app.py#L575
Would you create a PR?
Fixed by #1982
Long story short
The current implementation of web.run_app does not wrap the whole server creation in a try...except; it wraps only the run_forever part. This means that if an error occurs between those two points (e.g. the port is already taken), the cleanup handlers are not called even though the startup handlers have already run.
Typically, a task created in a startup handler is never cancelled and awaited, because the cleanup handler that would do so is never called.
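To make the failure mode concrete, here is a minimal sketch of the control flow in question (a hypothetical stand-in with invented names, not the actual aiohttp code). With the fix, server creation lives inside the try block, so the finally clause runs even when binding the port fails:

```python
import asyncio


def run_app_sketch(startup, create_server, run_forever, cleanup):
    # Simplified stand-in for web.run_app's control flow (hypothetical
    # helper, not the real aiohttp implementation).
    loop = asyncio.new_event_loop()
    loop.run_until_complete(startup())
    try:
        # An error here (e.g. "port already in use") still reaches finally.
        loop.run_until_complete(create_server())
        run_forever()
    finally:
        # Cleanup handlers run even if create_server raised.
        loop.run_until_complete(cleanup())
        loop.close()
```

In the unfixed code, the equivalent of `create_server` is called outside the try, so an exception there propagates before the finally block that invokes cleanup ever exists.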
Expected behaviour
If an error occurs during the server creation, the cleanup handlers still need to be called.
Actual behaviour
If the process ends in this state, asyncio shows the warning "Task was destroyed but it is pending!":
Steps to reproduce
Run two instances of the following script:
However... I could reproduce this issue only by using asyncio.Queue. For some reason, a simple asyncio.sleep or an asynchronous HTTP request does not trigger the "Task was destroyed but it is pending!" warning. In fact, it shows nothing at all.
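The reproduction script itself is not captured above; the following is a hypothetical reconstruction based on the description (a consumer task reading from an asyncio.Queue, started on startup and cancelled on cleanup), not the original script. The `consumer` name, the `--serve` flag, and the port number are assumptions:

```python
import asyncio
import sys

from aiohttp import web


async def consumer(queue):
    # Blocks on the queue forever; the cleanup handler must cancel it,
    # otherwise asyncio reports "Task was destroyed but it is pending!".
    while True:
        print(await queue.get())


async def on_startup(app):
    app['queue'] = asyncio.Queue()
    app['consumer'] = asyncio.ensure_future(consumer(app['queue']))


async def on_cleanup(app):
    app['consumer'].cancel()
    try:
        await app['consumer']
    except asyncio.CancelledError:
        pass


app = web.Application()
app.on_startup.append(on_startup)
app.on_cleanup.append(on_cleanup)

if __name__ == '__main__' and '--serve' in sys.argv:
    # Run `python repro.py --serve` in two terminals: the second instance
    # fails to bind the port; without the fix, on_cleanup never runs and
    # the pending consumer task is destroyed.
    web.run_app(app, port=8080)
```

The second instance hits the "port already taken" error between startup and run_forever, which is exactly the window where cleanup is skipped.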
Your environment
Python 3.6, Arch Linux