Explicit tasks cancel #383
Conversation
Also, instead of waiting for the tasks to close one by one, why not cancel them all and then wait? e.g.

```python
tasks = [self._task_reconnect, self._task_shutdown_entities, self._task_connect]
for task in tasks:
    task.cancel()
await asyncio.gather(*tasks)
```
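A slightly fleshed-out sketch of that suggestion, assuming the three task attributes may still be `None` before the first connect; the `cancel_and_wait` helper name and the `return_exceptions=True` flag are my additions, so the collected `CancelledError`s don't propagate into the caller:

```python
import asyncio

async def cancel_and_wait(*tasks: "asyncio.Task | None") -> None:
    """Cancel every pending task first, then wait for all of them to finish."""
    pending = [t for t in tasks if t is not None and not t.done()]
    for task in pending:
        task.cancel()
    # return_exceptions=True collects each child's CancelledError instead of
    # re-raising it in this coroutine.
    await asyncio.gather(*pending, return_exceptions=True)
```

Inside `close()` this would be called as `await cancel_and_wait(self._task_reconnect, self._task_shutdown_entities, self._task_connect)`.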
How would it deal with …?

Actually, they work almost instantly. No real performance to gain.
Now I believe it is done. I even remembered the disconnected sub-devices that remove themselves from their gateways! I think the goal of this PR is met.
Here is what I'm currently testing. I'm against making … I can't provide proof that the check I added …
Testing BLE device detach-attach, when LocalTuya self-restarted due to the local key changing for the detached sub-device, I got … in the log. From the previous records I see that the sub-device finished updating its local key while the other devices were closing. Then it should sleep for …

So, I'm still curious when a sleep is cancelled and when it is not... I'd return the …

This is a minor issue that needs more investigation and, I believe, does not prevent merging the existing changes. Made a fix based on this test (forgotten …).
With commit 08ec9e1 the problem with the detached BLE sub-device is gone! So, I've done the same with …

Now it looks and acts well enough for my taste.
From my understanding, this occurs in `while` loops, so usually when creating a while loop the whole body should be wrapped inside a `try` block. I think this is good now, thank you 😸
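A generic sketch of that pattern (not the actual LocalTuya code): the loop body, including its `await asyncio.sleep()`, sits inside a `try` block so the task can react to `task.cancel()` and stop cleanly:

```python
import asyncio

async def reconnect_loop(interval: float) -> None:
    """Background loop that can be stopped cleanly with task.cancel()."""
    while True:
        try:
            # Any await is a cancellation point; the sleep is where a
            # cancel() request is usually delivered.
            await asyncio.sleep(interval)
            # ... attempt the reconnect here ...
        except asyncio.CancelledError:
            # Clean up if needed, then leave the loop so the task finishes.
            # Swallowing CancelledError ends the task "normally"; re-raise
            # instead if callers check task.cancelled().
            break

async def main() -> None:
    task = asyncio.create_task(reconnect_loop(5.0))
    await asyncio.sleep(0.1)   # let the loop start
    task.cancel()
    await asyncio.gather(task, return_exceptions=True)

asyncio.run(main())
```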
* Merged abort connection w/ pending tasks.
* Added cancelled asyncio in pytuya connect.
* Renamed subdevice_state var/func.
* abort_connect() shall be called at the end: revert #383 abort interface close before sub-devices
* subdevice_state_updated calls
* Prevent out of order commands
* Better serialization of writes
* Check for _is_closing before going to sleep
I've started this set of changes when I realized that `TuyaDevice._call_on_close` grows permanently. Consider a WiFi battery-powered T&H sensor that wakes up and sends measurements every 20 minutes, 3 times per hour. If it is configured with a sleep time of 6 hours (21600 seconds), it generates 3 `_shutdown_entities` tasks per hour, 72 items in `TuyaDevice._call_on_close` per day. Add one `_new_entity_handler` per successful connect and one `_async_reconnect` per re-connect, and you get 3x72=216 items per day from one sensor alone, not counting other devices that may disconnect and re-connect. This is a memory leak.

Moreover, with these numbers, after 6 hours 18 `_shutdown_entities` async tasks have piled up in the task queue, and from then on the queue always has at least 18 tasks to serve, from only one sensor. This is a waste of queue resources.
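To make the growth concrete, here is an illustrative sketch (not the actual LocalTuya code; `LeakyDevice` and `on_wakeup` are made-up names) of a callback list that is only ever appended to:

```python
import asyncio

class LeakyDevice:
    """Illustration only: the old pattern where _call_on_close is only appended to."""

    def __init__(self) -> None:
        self._call_on_close = []

    async def _shutdown_entities(self, sleep_time: float) -> None:
        await asyncio.sleep(sleep_time)          # stand-in for the real shutdown logic

    def on_wakeup(self, sleep_time: float = 21600) -> None:
        # Called from within the running event loop on every sensor wake-up:
        # each call schedules a new 6-hour task and appends one more callback.
        # Nothing is ever removed, so 3 wake-ups/hour -> 72 entries per day.
        task = asyncio.create_task(self._shutdown_entities(sleep_time))
        self._call_on_close.append(task.cancel)
```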
When I implemented one `TuyaDevice` member per async task (`_task_reconnect`, `_task_shutdown_entities`, `_unsub_new_entity`) instead of `_call_on_close`, to explicitly stop them in `close()`, I found out that `cancel()` does not actually stop a task which calls `await asyncio.sleep()`! `asyncio.CancelledError` is raised for a task only if `await asyncio.sleep()` is called in a `try` block, directly or indirectly. It means that, for low-power devices, `_shutdown_entities` never stopped, and `_async_reconnect` stopped with a delay, during LocalTuya restart or shutdown. This could cause unexpected misbehavior. Probably, this was the reason why the original #363 code didn't work for you. So, `self._sleep()` was implemented and tested to confirm that now all the tasks stop instantly.
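The `self._sleep()` implementation itself isn't shown in this thread; a minimal sketch of one way such an interruptible sleep could be built, using an `asyncio.Event` that `close()` sets so every sleeping task wakes up immediately (the `_closing_event` name and loop structure are illustrative, not necessarily what the PR uses):

```python
import asyncio
from contextlib import suppress

class Device:
    def __init__(self) -> None:
        self._closing_event = asyncio.Event()

    async def _sleep(self, seconds: float) -> bool:
        """Sleep up to `seconds`; return False as soon as close() was requested."""
        with suppress(asyncio.TimeoutError):
            await asyncio.wait_for(self._closing_event.wait(), timeout=seconds)
        return not self._closing_event.is_set()

    async def _async_reconnect(self) -> None:
        while await self._sleep(20 * 60):    # wake every 20 minutes until closing
            pass                             # ... try to reconnect here ...

    async def close(self) -> None:
        self._closing_event.set()            # wakes every pending _sleep() at once
```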
Please remember that all the async tasks are now running in the same loop, meaning, e.g., that while `close()` is running, the other tasks in the queue hang on their `await` statements. But when `close()` itself executes an `await`, any other task may continue running until either its end or another `await`. That's why I've inserted several `and not self._is_closing` checks into `_make_connection()`, to avoid redundant work while waiting for the `_task_connect` to stop. Alternatively, `except asyncio.CancelledError: return` could be added to each and every `try` block, but it would take more lines of code.
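As an illustration of those guards (a sketch, not the actual `_make_connection()` body; the awaited steps are stand-ins), each step after an `await` re-checks the flag so that a `close()` that ran in the meantime makes the connect routine bail out early:

```python
import asyncio

class Device:
    """Illustrative only; the steps below are placeholders, not the real LocalTuya API."""

    def __init__(self) -> None:
        self._is_closing = False

    async def _make_connection(self) -> None:
        await asyncio.sleep(0)        # stand-in for "connect to the device"
        if self._is_closing:          # close() may have run during the await
            return
        await asyncio.sleep(0)        # stand-in for "fetch initial status"
        if self._is_closing:
            return
        # ... safe to register entities, start the heartbeat, etc. ...

    async def close(self) -> None:
        self._is_closing = True       # makes a pending _make_connection() bail out
```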
I've renamed `_connect_task` -> `_task_connect` to have uniform names.

Frankly, I never saw `_new_entity_handler` being called, and I don't quite understand the conditions for it to be called. But I believe it's enough to set it only once, like the other HASS callbacks.