
Analyze additional places where there is timer handle churn #8613

Closed
bdraco opened this issue Aug 6, 2024 · 10 comments
@bdraco (Member) commented Aug 6, 2024:

> Yep, thinking about it more, this is part of the low-level API that most users shouldn't interact with. So, I guess we should be dealing with this limitation.

Other places that might be improved:

> client_proto.py:read_timeout: We cancel and reschedule every time data is received.

I think it might be worth doing. It fires far less often than websockets though. Need to do some more profiling with this one.

> client_ws.py: Do we need to deal with _pong_response_cb too? Cancelled and rescheduled on each heartbeat send.

This is a rare case where it's almost never rescheduled, since we expect it to respond right away or be closed. It's probably not worth optimizing.

> web_protocol.py:.keep_alive(): Should we cancel if the val is already the same and a handle already exists?

This one doesn't seem to get rescheduled frequently when I looked at the profile.

> Maybe some issues in connector.py if there are many connections that are getting closed.

cleanup_closed could use some improvements. Something to investigate for sure.

> Also, helpers.py:TimeoutHandle is only used once in client.py and seems like something that could be replaced by asyncio.timeout(). We'd need to use asyncio.Timeout.reschedule() in the resp.connection callback.

That looks like a good opportunity for cleanup there (a sketch of the asyncio.timeout() approach follows below).

Originally posted by @bdraco in #8608 (comment)
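
A minimal sketch of the asyncio.timeout() idea above (requires Python 3.11+). This is not aiohttp's actual client.py code; `do_request` and the callback wiring are illustrative assumptions:

```python
import asyncio

# Illustrative sketch only: shows the asyncio.timeout() +
# Timeout.reschedule() pattern that could replace helpers.TimeoutHandle.
# `do_request` and `on_connection` are hypothetical stand-ins, not
# aiohttp's real client.py internals.
async def fetch_with_timeout(do_request, total_timeout: float) -> None:
    async with asyncio.timeout(total_timeout) as tm:
        def on_connection() -> None:
            # Analogue of the resp.connection callback mentioned above:
            # push the absolute deadline out once the connection exists.
            loop = asyncio.get_running_loop()
            tm.reschedule(loop.time() + total_timeout)

        await do_request(on_connection)
```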

@Dreamsorcerer (Member):

> web_protocol.py:.keep_alive(): Should we cancel if the val is already the same and a handle already exists?
>
> This one doesn't seem to get rescheduled frequently when I looked at the profile.

For this one, I just figured it'd be a one-line fix to add an if statement, which could reduce the impact even if it's not hit often.
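
For illustration, the guard could look roughly like this; the attribute names below are assumptions based on this discussion, not verbatim web_protocol.py code (see #8662 for the real change):

```python
import asyncio
from typing import Optional

class KeepAliveSketch:
    """Rough sketch of the early-return guard; attribute names are
    assumed from the discussion, not copied from web_protocol.py."""

    def __init__(self) -> None:
        self._keepalive = False
        self._keepalive_handle: Optional[asyncio.TimerHandle] = None

    def keep_alive(self, val: bool) -> None:
        if self._keepalive == val and self._keepalive_handle is not None:
            # Same value and a timer already scheduled: skip the
            # cancel/reschedule churn entirely.
            return
        self._keepalive = val
        if self._keepalive_handle is not None:
            self._keepalive_handle.cancel()
            self._keepalive_handle = None
```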

@bdraco (Member, Author) commented Aug 7, 2024:

> cleanup_closed could use some improvements. Something to investigate for sure.

This one is heavy; we should make it a priority. It might just be because of the 2.0-second interval.

@bdraco (Member, Author) commented Aug 8, 2024:

The cleanup_closed case is because it runs every 2.0 s. HA changed it to 60.0 s, but there are a lot of places where libraries don't use HA's subclassed connector, so we still see it there. 2.0 s might be too frequent, but I'm not sure how safe it is to increase for the general use case.
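
For reference, the HA-style override looks roughly like this; it assumes the interval lives in the private class attribute `_cleanup_closed_period`, which is an internal detail and may differ between aiohttp versions:

```python
import aiohttp

class LongerCleanupConnector(aiohttp.TCPConnector):
    # Assumption: the 2.0 s interval is the private class attribute
    # `_cleanup_closed_period`; being internal, it may move or be
    # renamed between aiohttp versions.
    _cleanup_closed_period = 60.0

async def make_session() -> aiohttp.ClientSession:
    # Libraries that construct their own TCPConnector still get the
    # default 2.0 s interval, which is why the churn shows up anyway.
    return aiohttp.ClientSession(connector=LongerCleanupConnector())
```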

@bdraco (Member, Author) commented Aug 8, 2024:

> web_protocol.py:.keep_alive(): Should we cancel if the val is already the same and a handle already exists?
>
> This one doesn't seem to get rescheduled frequently when I looked at the profile.
>
> For this one, I just figured it'd be a one-line fix to add an if statement, which could reduce the impact even if it's not hit often.

Testing a solution in #8662

@bdraco (Member, Author) commented Aug 9, 2024:

I didn't see client_proto.py:_reschedule_timeout ever schedule a timer because it looks like everything in HA is using a total timeout and not a read timeout.

It's going to churn if there is a read timeout, but that seems like it's going to be a rarer case.
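
One generic way to avoid per-chunk churn (a sketch of the general technique, not aiohttp's actual _reschedule_timeout) is to record the last-activity time on each chunk and let a single timer re-arm itself when it fires:

```python
import asyncio
from typing import Callable, Optional

class LazyReadTimeout:
    """Generic sketch, not aiohttp's _reschedule_timeout: instead of
    cancelling and rescheduling a TimerHandle on every chunk, remember
    when data last arrived and let the timer check that when it fires."""

    def __init__(
        self,
        loop: asyncio.AbstractEventLoop,
        timeout: float,
        on_timeout: Callable[[], None],
    ) -> None:
        self._loop = loop
        self._timeout = timeout
        self._on_timeout = on_timeout
        self._last_activity = loop.time()
        self._handle: Optional[asyncio.TimerHandle] = None

    def data_received(self) -> None:
        # Cheap per-chunk bookkeeping: no timer cancel/reschedule here.
        self._last_activity = self._loop.time()
        if self._handle is None:
            self._handle = self._loop.call_later(self._timeout, self._check)

    def _check(self) -> None:
        idle = self._loop.time() - self._last_activity
        if idle >= self._timeout:
            self._handle = None
            self._on_timeout()
        else:
            # Data arrived since the timer was armed: re-arm for the
            # remaining idle window instead of firing.
            self._handle = self._loop.call_later(self._timeout - idle, self._check)
```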

@Dreamsorcerer (Member):

> It's going to churn if there is a read timeout, but that seems like it's going to be a rarer case.

As long as it's not affected by sock_connect timeout, which we're trying to add by default: https://github.com/aio-libs/aiohttp/pull/7368/files#diff-f334e752b4894ef951105572ab8b195aeb8db90eb6e48b1dfbd9a01da4c854f5R166

@bdraco (Member, Author) commented Aug 9, 2024:

> It's going to churn if there is a read timeout, but that seems like it's going to be a rarer case.
>
> As long as it's not affected by sock_connect timeout, which we're trying to add by default: #7368 (files)

Looks like that won't affect it, AFAICT.

@Dreamsorcerer (Member):

@bdraco Probably a good idea to edit the original message into a checklist, so we can see which have already been done?

@bdraco (Member, Author) commented Oct 1, 2024:

The only thing left of substance is cleanup_closed.

It probably doesn't need to run every 2s.

Home Assistant overrides the value to every 60 s, and that has worked out fine.

@bdraco (Member, Author) commented Oct 29, 2024:

It looks like we can get rid of cleanup_closed (conditionally) now that it's fixed upstream in CPython.

see #9590

That was the last one here.
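
A rough sketch of what "conditionally" could look like; the version bound below is a placeholder, not the actual cut-off from #9590 or the CPython fix:

```python
import sys

# Placeholder bound: the real cut-off comes from the CPython fix
# referenced in #9590 and is not reproduced here.
NEEDS_CLEANUP_CLOSED = sys.version_info < (3, 13)

def effective_cleanup_closed(requested: bool) -> bool:
    # Only honour enable_cleanup_closed on interpreters that still
    # need the workaround; elsewhere it can be skipped entirely.
    return requested and NEEDS_CLEANUP_CLOSED
```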
