test(api): Replace pytest-aiohttp with pytest-asyncio for running async tests #9981
Conversation
@@ -9,3 +9,4 @@ markers =
     ot2_only: Test only functions using the OT2 hardware
     ot3_only: Test only functions the OT3 hardware
 addopts = --color=yes
+asyncio_mode = auto
See https://github.com/pytest-dev/pytest-asyncio#modes.
We alternatively could have used strict mode, but it would have created a lot of boilerplate.
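For reference, a minimal sketch of the difference (the test names here are made up, not code from this PR): in strict mode, every async test and fixture needs an explicit marker, while auto mode applies it for us.

import pytest

# Strict mode: every async test needs the marker
# (and async fixtures need @pytest_asyncio.fixture).
@pytest.mark.asyncio
async def test_with_explicit_marker() -> None:
    ...

# Auto mode (asyncio_mode = auto): no marker needed;
# pytest-asyncio collects any `async def` test automatically.
async def test_with_auto_mode() -> None:
    ...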
Codecov Report
@@           Coverage Diff            @@
##              edge    #9981   +/-  ##
=======================================
  Coverage    74.93%   74.93%
=======================================
  Files         2077     2077
  Lines        54807    54807
  Branches      5527     5527
=======================================
  Hits         41069    41069
  Misses       12642    12642
  Partials      1096     1096
Flags with carried forward coverage won't be shown.
@pytest.fixture(autouse=True)
def asyncio_loop_exception_handler(loop):
    def exception_handler(loop, context):
        pytest.fail(str(context))

    loop.set_exception_handler(exception_handler)
    yield
    loop.set_exception_handler(None)
I believe the intent of this fixture was to fail any test when the subject accidentally created an orphan task, and that task raised an uncaught exception.
I couldn't figure out a robust way to reimplement this, since we no longer have the `loop` fixture.
Note that it would not be sufficient to do:
@pytest.fixture(autouse=True)
async def asyncio_loop_exception_handler():
    loop = asyncio.get_running_loop()

    def exception_handler(loop, context):
        pytest.fail(str(context))

    loop.set_exception_handler(exception_handler)
    yield
    loop.set_exception_handler(None)
For 2 reasons:
- We can't choose exactly when this fixture runs (as far as I can tell). So, if it happens to run after an orphan task raises an exception, this exception handler won't catch it.
- This fixture only makes sense if we restore the old exception handler after the event loop closes. Otherwise, we can't be sure our exception handler has been around to catch everything there is to catch. So, inherently, this can't run inside the event loop. It needs to be something supervisory, "around" the event loop.
(Actually, now that I'm thinking about this, these might have been problems with the fixture under pytest-aiohttp all along.)
Honestly, I'm not too bummed about removing this? It's nice to catch these errors if we can, but really, we should never be creating orphan tasks in the first place. And the easiest way to enforce that is often by enforcing good structural practices, rather than tests. Sort of like how we know it's important to use `with open(...) as f:` to avoid leaking file descriptors, because the tests can't always catch that kind of bug easily. We can also look into cranking up the warnings, though.
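One possible shape for "cranking up the warnings," sketched against the [pytest] config shown above (this is an assumption, not something this PR does): turn the "coroutine '...' was never awaited" RuntimeWarning into a failure. Note this catches un-awaited coroutines, not orphan tasks whose exceptions only ever reach the loop's exception handler, so it's a partial substitute at best.

# Hypothetical addition to the pytest config above:
# fail when a coroutine is created but never awaited.
filterwarnings =
    error:coroutine:RuntimeWarning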
Holy crap, the test speedup is wild. I had no idea this plugin was causing such harm to our test suite. My `make -C api test` times are:
- edge: 3m42s
- This branch: 1m35s (more than twice as fast)

A couple minor comments below, but the reasoning for going with `pytest-asyncio` over `anyio` all checks out, and is consistent with our actual needs (which do not include attempting to run on a `trio` loop or anything like that). The `auto` mode is nice and seems cleanest for test suite maintenance.

There are a few places where we could use the `event_loop` fixture instead of `asyncio.get_running_loop()`. I think I'm more in favor of the latter (as this PR is written), but I'm unaware of the implications one way or another. There are enough open issues about `event_loop` in the pytest-asyncio repo to make me nervous about using it.

Reading through this PR, I also think at some point we should re-evaluate all these explicit `loop` parameters spread across our codebase and remove most, if not all, of them.
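For concreteness, a minimal sketch of the two options being compared (the fixture names here are invented for illustration):

import asyncio
import pytest

# Option A (as this PR is written): grab the loop inside the async fixture.
@pytest.fixture
async def subject_a():
    loop = asyncio.get_running_loop()
    return loop

# Option B: request pytest-asyncio's event_loop fixture explicitly.
@pytest.fixture
async def subject_b(event_loop: asyncio.AbstractEventLoop):
    return event_loop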
     return ProtocolContext(
         implementation=ProtocolContextImplementation(sync_hardware=hardware.sync),
-        loop=loop,
+        loop=asyncio.get_running_loop(),
Not sure how I feel about this, but seems like we could use the `event_loop` fixture instead.
"""The test subject.""" | ||
return AsyncSerial( | ||
serial=mock_serial, | ||
executor=ThreadPoolExecutor(), | ||
loop=loop, | ||
loop=asyncio.get_running_loop(), |
Could use `event_loop` (but still, not sure if we should).
 @pytest.fixture
-def subject(mock_serial_port: AsyncMock, ack: str) -> SerialConnection:
+async def subject(mock_serial_port: AsyncMock, ack: str) -> SerialConnection:
     """Create the test subject."""
     SerialConnection.RETRY_WAIT_TIME = 0  # type: ignore[attr-defined]
     return SerialConnection(
Perhaps we should use the async `SerialConnection.create` factory method instead, to make this more obvious? The creation of an `asyncio.Lock` in `__init__` seems a little problematic.
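Roughly what that suggestion might look like; the arguments to `SerialConnection.create` are elided here because they aren't shown in this diff, so treat this as a shape, not a drop-in change:

@pytest.fixture
async def subject(mock_serial_port: AsyncMock, ack: str) -> SerialConnection:
    """Create the test subject via the async factory instead of __init__."""
    SerialConnection.RETRY_WAIT_TIME = 0  # type: ignore[attr-defined]
    # Awaiting the factory makes it explicit that construction needs a
    # running event loop (e.g. for the asyncio.Lock).
    return await SerialConnection.create(...)  # arguments elided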
@@ -12,8 +12,10 @@ def mock_serial_connection() -> AsyncMock:
     return AsyncMock(spec=AsyncSerial)


+# Async because SmoothieConnection.__init__() needs an event loop,
Same comment about using `create`, if possible.
     emulation_app: Iterator[None],
     emulator_settings: Settings,
 ) -> AsyncIterator[MagDeck]:
     module = await MagDeck.build(
         port=f"socket://127.0.0.1:{emulator_settings.magdeck_proxy.driver_port}",
         execution_manager=AsyncMock(),
         usb_port=USBPort(name="", port_number=1, device_path="", hub=1),
-        loop=loop,
+        loop=asyncio.get_running_loop(),
I was going to suggest omitting the `loop` parameter, but it looks like `AbstractModule.__init__` will call `asyncio.get_event_loop` instead of `get_running_loop`. So the code as you've written it seems like the best move here.

Calling this out since `get_event_loop` may spin up a new event loop, which seems bad and is probably worth a TODO or ticket at some point. See `src/opentrons/hardware_control/util.py::use_or_initialize_loop` for the offending code.
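As an aside, a small illustration of the hazard (not code from this repo):

import asyncio

async def inside_the_test() -> None:
    # Always returns the loop the test is actually running on.
    loop = asyncio.get_running_loop()
    print(loop)

def outside_any_loop() -> None:
    # May create or return a loop *other* than the one pytest-asyncio runs
    # the test on. get_running_loop() would raise RuntimeError here instead,
    # surfacing the mistake immediately.
    loop = asyncio.get_event_loop()
    print(loop)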
+# Async because ProtocolContext.__init__() needs an event loop,
+# so this fixture needs to run in an event loop.
I wonder if we should make this more obvious by passing in `loop=asyncio.get_running_loop()`.
Yeah this is an interesting question that I struggled with. And it applies to async API design in general, not just tests.
I’ll ask about this in Slack.
Love the speedup, makes total sense.
Yeah, this was a happy surprise for me, too. :)
I totally missed that that fixture even existed. Thanks for the pointer. I also don't have an informed opinion on whether we should be using it or not.
💯
🙌
* module instances clean themselves up
Overview
Our test framework, `pytest`, does not natively support `async` test functions or fixtures. Various 3rd-party plugins take up that responsibility. For historical reasons, the plugin that we've been using is `pytest-aiohttp`, which isn't a good fit for us because we're not even using the `aiohttp` web server in this project.

This PR replaces `pytest-aiohttp` with `pytest-asyncio`, a much smaller plugin from the pytest devs that's specifically meant for running `async` tests.

Benefits to this include:
- No more `loop` boilerplate arguments for async fixtures.
- A smaller dependency tree. (`pytest-aiohttp` pulled in the whole `aiohttp` web server library.)

This addresses half of #8176, with the other half being `robot-server`. See #8176 (comment) for why I went with `pytest-asyncio` instead of `anyio`.
Changelog
- Remove the `pytest-aiohttp` dev dependency and replace it with `pytest-asyncio`.
- `pytest-aiohttp`'s magic `loop` fixture now no longer exists. If a fixture or test function was requesting it, remove it. If it was actually using it, replace that usage with `asyncio.get_running_loop()`. (See the sketch after this list.)
- Some plain `def` fixtures secretly needed to be running in an event loop, even though they never `await` anything. Make them `async def` to help `pytest-asyncio` pick them up.
- Remove the `asyncio_loop_exception_handler` fixture, which I couldn't port.
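To make the `loop` fixture change concrete, here is a hedged before/after sketch; `Subject` is a stand-in name, not a real class in this codebase:

import asyncio
import pytest

# Before, under pytest-aiohttp: the plugin injects a magic `loop` fixture.
@pytest.fixture
def subject(loop):
    return Subject(loop=loop)

# After, under pytest-asyncio: make the fixture async and ask asyncio directly.
@pytest.fixture
async def subject():
    return Subject(loop=asyncio.get_running_loop())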
Review requests
Run `make -C api teardown && make -C api setup`, and then run some `pytest` commands. Make sure they keep reporting what they were reporting before. In particular, make sure results don't change between running the whole suite and running individual files or functions.
Risk assessment
I think low. Async stuff in pytest is inherently a huge mess, so there's some risk because of that, but, I mean...can this be any worse than pytest-aiohttp?