The main task idea is really really wrong! #24
goodboy pushed a commit that referenced this issue on Aug 2, 2018:
Stop worrying about a "main task" in each actor and instead add an additional `ActorNursery.run_in_actor()` method which wraps calls to create an actor and run a lone RPC task inside it. Note this adjusts the public API of `ActorNursery.start_actor()` to drop its `main` kwarg.

The dirty deats of making this possible:

- each spawned RPC task is now tracked with a specific cancel scope such that when the actor is cancelled all ongoing responders are cancelled before any IPC/channel machinery is closed (turns out that spawning new actors from `outlive_main=True` actors was probably borked before finally getting this working).
- make each initial RPC response be a packet which describes the `functype` (eg. `{'functype': 'asyncfunction'}`) allowing for async calls/submissions by client actors (this was required to make `run_in_actor()` work - `Portal._submit()` is the new async method).
- hooray we can stop faking "main task" results for daemon actors
- add better handling/raising of internal errors caught in the bowels of the `Actor` itself.
- drop the rpc spawning nursery; just use the `Actor._root_nursery`
- only wait on `_no_more_peers` if there are existing peer channels that are actually still connected.
- an `ActorNursery.__aexit__()` now implicitly waits on `Portal.result()` on close for each `run_in_actor()` spawned actor.
- handle cancelling partial started actors which haven't yet connected back to the parent

Resolves #24
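For context, here is a rough sketch of how the adjusted public API might be used; the `tractor.open_nursery()` entry point, the argument order and the kwarg names are assumptions for illustration, not verified signatures:

```python
import tractor

async def double(x):
    return x * 2

async def main():
    async with tractor.open_nursery() as nursery:
        # before this change a lone task was smuggled in via `main=...`:
        #   portal = await nursery.start_actor('worker', main=double)

        # now `start_actor()` only spawns a long-lived daemon-style actor...
        daemon_portal = await nursery.start_actor('worker')

        # ...while `run_in_actor()` is the "spawn an actor + run one task"
        # shortcut; its final result is awaited when the nursery block exits
        portal = await nursery.run_in_actor('one-shot', double, x=21)
```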
After an in depth discussion with @vodik surrounding #11, it's so obvious now that the `main` kwarg to `ActorNursery.start_actor()` is just plain stupid and wrong. The idea of a *main task* is really just: start an actor, submit a single function/task to that actor, and then cancel the actor once its lone task is complete. So instead of this inverted approach (making an actor with a main task the lower level API, thus severely complicating the internal implementation of `Actor._async_main()`), we agreed the nursery should instead get an `ActorNursery.run_in_actor()` method which internally is simply a wrapper around that sequence (see the sketch below). When the nursery block ends this actor is waited upon for its final result, much like is done implicitly in the core today.
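A minimal sketch of what that wrapper could boil down to internally; the `rpc_module_paths` kwarg, the `_submit()` argument order and the module lookup are assumptions:

```python
class ActorNursery:
    # sketched as if added to the existing nursery type
    ...

    async def run_in_actor(self, name, fn, **kwargs):
        # spawn a plain actor exposing the module that defines `fn`
        portal = await self.start_actor(
            name, rpc_module_paths=[fn.__module__])
        # kick off the lone task without blocking on its final value;
        # `Portal._submit()` is the new async submission method
        await portal._submit(fn.__module__, fn.__name__, kwargs)
        # `ActorNursery.__aexit__()` later waits on `portal.result()`
        # and then cancels/joins the actor
        return portal
```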
Couple notes:
- `Portal` will need to learn a *completed* state that signals the far end task is done (in the case of an async generator this happens when either end closes, i.e. `.aclose()`); a `trio.Event` seems like the right primitive for this (see the sketch after these notes).
- Note this also avoids the confusion around how the `main=my_func` module in the remote actor gets figured out (hint: it doesn't, since this was relying on `multiprocessing.forkserver`'s internal pickling stuff to make it happen). This lets us entirely drop the idea of `Portal.result()` afaik, which in turn means we lose all future-like junk, just like `trio`, and a bunch of implementation stuff should be severely simplified!
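On the first note, here is a purely illustrative shape for that completed state; the attribute and method names are made up, not existing `Portal` internals:

```python
import trio

class Portal:
    # hypothetical sketch: track far-end task completion with a trio.Event
    def __init__(self, channel):
        self.channel = channel
        self._result = None
        self._completed = trio.Event()  # set once the far end task is done

    def _set_result(self, value):
        # called by the message loop when the final response (or async
        # generator close) arrives from the remote actor
        self._result = value
        self._completed.set()

    async def result(self):
        # future-like: wait only until the far end signals completion
        await self._completed.wait()
        return self._result
```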
Hmm, on second thought I'm not sure `Portal.result()` should get dropped, otherwise it gets tricky to figure out how you can call `run_in_actor()` without blocking. Maybe it's ok for it to look somewhat like the `Future` returned from `loop.run_in_executor()` in `asyncio`? This also allows additional `portal.run()` calls to be made to that same actor while the nursery block is open.
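To make the `run_in_executor()` analogy concrete, the non-blocking flow could look roughly like this; again just a sketch, with `portal.run()`'s `(module, func_name, **kwargs)` shape and the other names assumed:

```python
import tractor

async def double(x):
    return x * 2

async def main():
    async with tractor.open_nursery() as nursery:
        # hands back a portal right away, much like `loop.run_in_executor()`
        # returns a `Future` in asyncio
        portal = await nursery.run_in_actor('cruncher', double, x=10)

        # the same actor can still service extra calls while the
        # nursery block is open
        forty_two = await portal.run(__name__, 'double', x=21)

        # explicitly await the lone task's final value; otherwise the
        # nursery does this implicitly on exit
        twenty = await portal.result()
```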