
Choose addr cached based on a round robin strategy #1836

Merged
merged 1 commit into aio-libs:master on Apr 28, 2017

Conversation

pfreixes
Contributor

At each attempt to open a new connection, the addrs related to that
specific host will be retrieved using a round robin strategy. As a
result all hosts resolved by the DNS query will be used.

What do these changes do?

Enable the use of all addresses resolved for a specific host name. The current implementation only falls back to the next address if and only if the first one fails. Taking into account that the DNS entries returned carry no weights, all addresses must be eligible.
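The rotation described above can be sketched in a few lines. This is a hypothetical illustration of the round robin strategy, not aiohttp's actual `connector.py` code; the class name and method are made up for the example:

```python
class RoundRobinAddrs:
    """Rotate through the addresses resolved for a host so that every
    address in the DNS answer gets used over time, not only the first
    one (hypothetical sketch of the strategy described above)."""

    def __init__(self, addrs):
        self._addrs = list(addrs)
        self._next = 0  # index of the address to try first on the next attempt

    def next_addrs(self):
        # Return the full address list, rotated so each connection
        # attempt starts from a different address; later entries still
        # serve as fallbacks if the first one fails.
        n = len(self._addrs)
        rotated = [self._addrs[(self._next + i) % n] for i in range(n)]
        self._next = (self._next + 1) % n
        return rotated
```

Each call still yields the whole list (so failover to the remaining addresses keeps working), but the starting point advances on every attempt, spreading connections across all resolved hosts.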

Are there changes in behavior for the user?

No

Related issue number

Checklist

  • I think the code is well written
  • Unit tests for the changes exist
  • Documentation reflects the changes
  • If you provide code modification, please add yourself to CONTRIBUTORS.txt
    • The format is <Name> <Surname>.
    • Please keep alphabetical order, the file is sorted by names.
  • Add a new entry to CHANGES.rst
    • Choose any open position to avoid merge conflicts with other PRs.
    • Add a link to the issue you are fixing (if any) using #issue_number format at the end of changelog message. Use Pull Request number if there are no issues for PR or PR covers the issue only partially.

@codecov-io

codecov-io commented Apr 23, 2017

Codecov Report

Merging #1836 into master will increase coverage by <.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1836      +/-   ##
==========================================
+ Coverage   97.23%   97.24%   +<.01%     
==========================================
  Files          37       37              
  Lines        7499     7523      +24     
  Branches     1301     1303       +2     
==========================================
+ Hits         7292     7316      +24     
  Misses         87       87              
  Partials      120      120
Impacted Files Coverage Δ
aiohttp/connector.py 97.57% <100%> (+0.13%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6a7e6bc...3da3a01. Read the comment docs.

@pfreixes
Contributor Author

Any feedback about this, @fafhrd91?

@fafhrd91
Member

lgtm.

@kxepal any comments?

@fafhrd91
Member

@asvetlov comments?

@kxepal
Member

kxepal commented Apr 27, 2017

Looks good since it's a totally private thing, but I wonder whether we need something like this rather than suggesting people install a local DNS cache service instead, like dnsmasq, systemd-resolved, or whatever. Windows has its own OS-level DNS cache. A double-caching problem may not be good.

@kxepal
Member

kxepal commented Apr 27, 2017

I can tell you a story about the Hadoop namenode, which uses a sort of local DNS cache as well, and how that turns things into hell when you need to migrate some nodes to another zone.

@asvetlov
Member

Looks good.
People would want to use the option on early development stages.
What bothers me is that the new ctor parameters are missing from http://aiohttp.readthedocs.io/en/stable/client_reference.html#tcpconnector

@pfreixes would you fix it?
It could be done by another PR

@fafhrd91 fafhrd91 merged commit e1adb3b into aio-libs:master Apr 28, 2017
@fafhrd91
Member

@pfreixes thanks!

@pfreixes
Contributor Author

pfreixes commented Apr 28, 2017

Hi, first of all thanks to all of you for the review, @fafhrd91 @asvetlov and @kxepal.

Regarding the constructor param in the TCPConnector: remember that this is a replacement for the old cache system, which had no way to use all addrs during the happy path. Therefore, IMHO `use_dns_cache` is in fact the related param to enable or disable this feature. Or were you thinking of another case?

@kxepal The idea behind this is to continue the other work in [1]: with a low enough expiration for the DNS table and the chance to use all addrs resolved for a host, you are in the best position to consume services fronted by LBs that use DNS as a way to scale up or down. Take as an example the ELBs orchestrated by AWS that you use in your daily work; they behave as explained.

There are examples [2] showing how software takes time to react to a change within the expiration latency; IMHO the app layer has to be ready to live in an environment where resolutions can change quickly.

[1] #1819
[2] http://discourse.haproxy.org/t/trouble-with-dns-resolvers-sticking-to-single-ip-address/146/4
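The scenario above — a short-lived DNS table that re-resolves once its entries expire, so the client follows DNS-driven load balancers — can be sketched as a small TTL-bounded cache. This is a minimal illustration of the idea, not aiohttp's implementation; the `resolve` callable and class name are assumptions for the example:

```python
import time


class TTLDnsCache:
    """Cache the addresses resolved for a host for at most `ttl`
    seconds, then re-resolve, so changes published via DNS (e.g. by a
    load balancer scaling up or down) are picked up quickly.
    `resolve` is any callable taking a host and returning a list of
    addresses (a hypothetical helper standing in for getaddrinfo)."""

    def __init__(self, resolve, ttl=10.0, clock=time.monotonic):
        self._resolve = resolve
        self._ttl = ttl
        self._clock = clock
        self._cache = {}  # host -> (expires_at, addrs)

    def addrs(self, host):
        now = self._clock()
        entry = self._cache.get(host)
        if entry is None or now >= entry[0]:
            # Missing or expired: resolve again and refresh the entry.
            addrs = list(self._resolve(host))
            self._cache[host] = (now + self._ttl, addrs)
            return addrs
        return entry[1]
```

Combined with round-robin selection over the cached list, an expired entry means the next connection attempt sees the fresh set of addresses, which is exactly the behaviour a DNS-based LB relies on.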

@kxepal
Member

kxepal commented Apr 28, 2017

@pfreixes
Oddly (or fortunately?), I don't use and have never used AWS, but I sense there is a real problem that this could solve in some cases. Thank you for the explanation! This is clearer to me now.

@pfreixes pfreixes mentioned this pull request Sep 21, 2017
@lock

lock bot commented Oct 29, 2019

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.

If you feel like there are important points made in this discussion,
please include those excerpts in that new issue.

@lock lock bot added the outdated label Oct 29, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Oct 29, 2019