feat(api): Do not enqueue json commands on protocol load #14759
Ignoring queued commands for a moment, this doesn't look like it matches the existing implementation based on QueueWorker + get_next_to_execute(). This does not have:

- Special handling of RunStoppedError to break instead of raising an error
- Yielding to the event loop on each command. See "pick_up_tip() errors in a Python protocol" (#14753) for how I'm doing this for Python protocols. If we're making JSON behave more like Python now, maybe this should work the same way.
- "intent": "setup" commands

Do we need to handle those things, or are they already handled elsewhere in some way that I'm missing?
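For reference, the break-on-RunStoppedError handling mentioned above can be sketched roughly like this. This is a simplified, synchronous illustration with hypothetical helper names, not the actual Opentrons QueueWorker code:

```python
# Simplified sketch (not the actual Opentrons implementation) of the
# "break instead of raise" pattern described above: a stop is a normal
# way for the command stream to end, not a failure to propagate.

class RunStoppedError(Exception):
    """Signals that the run has been stopped and no more commands follow."""

def drain_commands(get_next_to_execute, execute):
    """Execute commands until the source reports the run has stopped."""
    while True:
        try:
            command = get_next_to_execute()
        except RunStoppedError:
            break  # clean exit: the stop is expected, not an error
        execute(command)

# Toy usage: a command source that "stops" after two commands.
commands = iter(["home", "aspirate"])
executed = []

def next_command():
    try:
        return next(commands)
    except StopIteration:
        raise RunStoppedError() from None

drain_commands(next_command, executed.append)
print(executed)  # prints ['home', 'aspirate']
```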
Setup commands are added to the queue in PE but are being executed after the initial home commands instead of at the end of the run. Adding prioritization will happen in a follow-up PR.

Will add logic in this PR.

I do not think we need this because add_and_execute_command is async and add_command is sync, but I will test this to make sure we are not blocking anything.
We do need it, unfortunately, for the reasons described in this comment in the old implementation:

opentrons/api/src/opentrons/protocol_engine/execution/queue_worker.py, lines 80 to 83 in 9322657

Unlike JavaScript, Python's await isn't guaranteed to yield to the event loop. It will only do so when it calls something that goes back to the event loop, like network I/O or an asyncio.sleep().

Other than that, that all makes sense, thanks!
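The await behavior described here is easy to demonstrate in isolation. This is a generic asyncio sketch, unrelated to the Opentrons codebase:

```python
import asyncio

# Demonstrates that `await` only yields to the event loop when the awaited
# thing itself reaches a yield point (I/O, a future, asyncio.sleep(), ...).

log = []

async def background():
    log.append("background ran")

async def compute_only():
    # A coroutine with no I/O and no sleep: awaiting it runs it to
    # completion synchronously, without ever returning to the event loop.
    return 2 + 2

async def main():
    asyncio.create_task(background())
    await compute_only()    # no yield point: background() has NOT run yet
    log.append("after compute_only")
    await asyncio.sleep(0)  # explicit yield point: background() runs now
    log.append("after sleep(0)")

asyncio.run(main())
print(log)  # prints ['after compute_only', 'background ran', 'after sleep(0)']
```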
I see. I tested this by adding a background task that prints, then time.sleep(3), and was able to see the printing sequence while executing the commands. So is this test not sufficient? @SyntaxColoring @sfoster1
I don't think we need the await because, crucially, what we're talking about here is in a different async stack than the lines you linked to, @SyntaxColoring. This PR does not replace or even touch the execution queue worker.

The way the engine works, just so we agree, is that there are fundamentally at least three async stacks:

- the execution stack, which is controlled by the execution queue linked above, is owned by the engine, and awaits command implementations;
- the runner task-queue stack, which is what is changed here, is owned by the runner, and (now) dispatches commands via add_and_execute_command;
- and the stack that calls the runner's primary interface, which is, I think, owned by the server's run data manager.

The execution stack, as you mention, definitely needs an explicit yield in case it's running a bunch of commands that have no yield points... and it has one: it's in what you linked above, which is still what's used and which this PR does not touch.

The server stack does not need an explicit yield as long as the runner uses the task queue, since in that case it does await self._task_queue.join(), which awaits a future, which is a yield point.

The task queue stack (which is what's changed here) does not need an explicit yield if it's using something that eventually uses wait_for, because wait_for awaits an asyncio.Event(), which is a yield point.
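That join()-on-an-Event pattern can be illustrated with a minimal toy task queue. The names and shape here are hypothetical and deliberately simplified, not the real runner's TaskQueue:

```python
import asyncio

# Minimal sketch of the pattern described above: the caller awaits join(),
# which waits on an asyncio.Event; awaiting the Event is a yield point, so
# the caller is suspended while the queued work runs in a worker task.

class TaskQueue:
    def __init__(self):
        self._run_func = None
        self._done = asyncio.Event()

    def set_run_func(self, func):
        self._run_func = func

    def start(self):
        async def _worker():
            await self._run_func()
            self._done.set()
        asyncio.create_task(_worker())

    async def join(self):
        # Yield point: suspends the caller until the worker sets the event.
        await self._done.wait()

order = []

async def main():
    queue = TaskQueue()

    async def run_protocol():
        order.append("commands executed")

    queue.set_run_func(run_protocol)
    queue.start()
    await queue.join()
    order.append("join returned")

asyncio.run(main())
print(order)  # prints ['commands executed', 'join returned']
```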
Ah, that makes sense, thanks. I missed that this was still going through the QueueWorker under the hood.
Which means I was also wrong about this: per the documentation of ProtocolEngine.add_and_execute(), if the run is stopped, it actually doesn't raise a RunStoppedError; it returns the command still queued.
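A toy stand-in can make that contract concrete. These are hypothetical, heavily simplified classes for illustration only; the authoritative behavior is whatever ProtocolEngine.add_and_execute()'s documentation says:

```python
import asyncio
from dataclasses import dataclass

# Toy stand-in for the contract described above (NOT the real Opentrons
# ProtocolEngine): once the run has been stopped, add_and_execute() does
# not raise RunStoppedError; it resolves with the command left "queued".

@dataclass
class Command:
    name: str
    status: str  # "queued" or "succeeded"

class FakeProtocolEngine:
    def __init__(self):
        self._stopped = False

    def stop(self):
        self._stopped = True

    async def add_and_execute(self, name: str) -> Command:
        if self._stopped:
            # No exception: the caller gets the command back, still queued.
            return Command(name, "queued")
        return Command(name, "succeeded")

async def main():
    engine = FakeProtocolEngine()
    before = await engine.add_and_execute("aspirate")
    engine.stop()
    after = await engine.add_and_execute("dispense")
    return before.status, after.status

statuses = asyncio.run(main())
print(statuses)  # prints ('succeeded', 'queued')
```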
@SyntaxColoring
I had a feeling, but I wanted to prove it through a test I am trying to add :-)
@SyntaxColoring I added e2e tests to prove that Python and JSON protocols now behave the same when stopping a protocol mid-run.
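The behavior such tests assert can be sketched with a toy runner loop. This is generic asyncio code, not the actual e2e test suite: a stop observed between commands ends the run cleanly and leaves the remaining commands unexecuted, for either protocol type.

```python
import asyncio

# Toy sketch of stopping a protocol mid-run (not the actual e2e tests):
# the runner checks a stop signal between commands, so a stop leaves the
# remaining commands unexecuted instead of raising an error.

async def run_commands(commands, stopped: asyncio.Event):
    executed = []
    for command in commands:
        if stopped.is_set():
            break  # stop mid-run: the rest of the commands stay queued
        executed.append(command)
        await asyncio.sleep(0)  # yield so a stop request can be observed
    return executed

async def main():
    stopped = asyncio.Event()
    commands = ["home", "pickUpTip", "aspirate", "dropTip"]
    # Simulate an external stop request arriving right after the first
    # command yields back to the event loop.
    asyncio.get_running_loop().call_soon(stopped.set)
    return await run_commands(commands, stopped)

executed = asyncio.run(main())
print(executed)  # prints ['home']
```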