
dns.lookup blocks filesystem I/O #8436

Closed
ottobonn opened this issue Sep 7, 2016 · 47 comments
Labels
confirmed-bug Issues with confirmed bugs. dns Issues and PRs related to the dns subsystem. libuv Issues and PRs related to the libuv dependency or the uv binding.

Comments

ottobonn commented Sep 7, 2016

  • Version: v4.5.0
  • Platform: Linux jessie 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux
  • Subsystem: dns

Hi all,

I'm posting this issue as a continuation of an issue originally opened against node-serialport: serialport/node-serialport#797

On networks with slow DNS responses, or where DNS requests time out and fail, we observe that the blocking getaddrinfo calls issued by dns.lookup saturate Node's libuv threadpool and delay our serialport and filesystem I/O.

It looks like this issue has come up many times before. The most pertinent example I could find is nodejs/node-v0.x-archive#2868, which outlines exactly this issue. However, it was closed in favor of nodejs/node-v0.x-archive#8475, in which the API of net.connect was changed to allow a user-specified DNS resolution handler.

The original proposal of issue 2868, which included a change to dns.lookup, seems to have been lost in the consolidation into issue 8475. In our case, we'd like to be able to use OS-level DNS facilities like a local DNS cache, so dns.resolve is not an equivalent substitute for dns.lookup. Furthermore, our application uses multiple high-level modules that wrap REST APIs with their own request modules, so changing every call to net.connect to use a custom DNS function is not feasible.

There is a relevant open issue against libuv on the matter at libuv/libuv#203. Across various issue reports, I've seen proposals in libuv to use getaddrinfo_a, to segregate the threadpools, or to cap the number of threads DNS requests can use, so that DNS requests don't starve all other I/O.

Unfortunately, although dns.lookup's use of the threadpool is documented at https://nodejs.org/api/dns.html#dns_implementation_considerations, its behavior is not intuitive given that other network requests are truly asynchronous, and it has resulted in many downstream issues, including our apparent issue with serialport.
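
A minimal sketch that makes the contention visible (the host names are hypothetical; any names your resolver answers slowly, or times out on, will do):

'use strict';
const dns = require('dns');
const fs = require('fs');

// dns.lookup() runs getaddrinfo() on libuv's threadpool (4 threads by
// default), so a handful of slow lookups can occupy every worker.
for (let i = 0; i < 8; i++) {
  dns.lookup('host-' + i + '.example.invalid', function () {});
}

// The event loop stays free, but this fs call needs a pool thread too.
const start = Date.now();
fs.readFile(__filename, function () {
  // Against a slow or unresponsive resolver this prints seconds, not
  // milliseconds, even though the event loop itself is idle.
  console.log('readFile completed after ' + (Date.now() - start) + ' ms');
});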

@mscdex mscdex added the dns Issues and PRs related to the dns subsystem. label Sep 7, 2016
@jasnell jasnell added the confirmed-bug Issues with confirmed bugs. label Sep 7, 2016
jasnell (Member) commented Sep 7, 2016

Yes, unfortunately this has been a known issue for a while, but there's no clear solution just yet outside of a massive (and non-backwards-compatible) refactoring of the dns module in core. It's something that needs to get done, for sure, but it's going to take a while. I'm going to mark this as a confirmed bug so that it stays on the radar.

ottobonn (Author) commented Sep 8, 2016

Thanks for the prompt response.

Just to clarify, I'm referring to the fact that the underlying implementation of dns.lookup is based on a blocking call to getaddrinfo. Could you elaborate on why the dns module would need refactoring? Do you mean the JavaScript layer needs refactoring, or the implementation of the dns module's low-level interaction with libuv? When you say it wouldn't be backwards-compatible, are you referring to the JS API or the internals?

jasnell (Member) commented Sep 8, 2016

A little of both, really. There may be ways of tweaking the current implementation to improve this particular issue without breaking things, but doing so would largely just be a band-aid. There are several issues with the current implementation that make it less than desirable.

owenallenaz commented Dec 20, 2016

Upvoting this issue. This recently hit one of our applications in a very non-intuitive way. The node process we run acts as a monitor for hundreds of host names. We ping the servers using request, but the monitor process was running out of memory and crashing. From our tests the Node event loop wasn't blocked, because setImmediate fired promptly, yet fs calls would never complete.

After roughly 10 hours of searching the internet for how outgoing requests could cause fs to block, I finally found this thread. Based on it, and the dns docs, it appears that under the hood the dns requests are being delayed, filling the libuv threadpool. We had a setInterval in place that logged the state of the system; since each fs.writeFile hung forever, the scope of that function stayed in memory forever (because the writeFile callback is never executed).

The other complicating factor is that the http package has a setTimeout call, but it doesn't cover the dns.lookup. So if I put a setTimeout of 10ms on an http.request(), it will not time out even if the dns.lookup takes 5000ms. This makes the bug very unintuitive: a user can have an http request with a timeout that doesn't fire even though the request takes longer than the timeout (since dns.lookup isn't included in that timeout).

So while this behavior might be known, the results of it are very, very unintuitive and difficult to debug. The idea that a dns failure can block the file system, even though the event loop is clear, is pretty arcane. That being said, what are the ways to keep dns delays from causing widespread failures? Pre-resolve the IP using dns.resolve and pass the IP along with a Host header to things like http.request()? Alter the local dns setup to time out faster or cache better? Pray that dns never goes down? Ask the developers of packages like request to bake such a system into their tools?

Is there a list of the specific configuration files that differ between dns.lookup and dns.resolve? The docs mention /etc/hosts and allude to other files, but what are they? From my tests, our locally running dnsmasq is still hit even when I use dns.resolve.

Any guidance is appreciated.
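
One way to cover the timeout gap described above is to enforce a wall-clock deadline around the whole request, lookup included. A hedged sketch (the helper name is ours, not a Node API; req.abort() is the era-appropriate cancellation):

'use strict';
const http = require('http');

// Wrap http.get with a deadline that also covers the dns.lookup phase,
// which req.setTimeout() does not.
function getWithDeadline(options, ms, callback) {
  let done = false;
  const req = http.get(options, function (res) {
    if (done) return;
    done = true;
    clearTimeout(timer);
    callback(null, res);
  });
  const timer = setTimeout(function () {
    if (done) return;
    done = true;
    req.abort(); // works even while the lookup is still pending
    callback(new Error('request exceeded ' + ms + ' ms (including DNS)'));
  }, ms);
  req.on('error', function (err) {
    if (done) return; // e.g. the socket error that follows abort()
    done = true;
    clearTimeout(timer);
    callback(err);
  });
}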

sam-github (Contributor) commented:

> The other complicating factor is that the http package has a setTimeout call, but it doesn't cover the dns.lookup.

Can you report that as a bug, please? That should be fixable.

> Pre-resolve the IP using dns.resolve and pass the IP along with a Host header to things like http.request()?

Yes, that will keep node out of the thread pool and on the main thread, solving your contention problem.

Local DNS setup can help with the problem of just getting faster DNS responses.

Increasing the threadpool size from 4 to a higher number can also help: set UV_THREADPOOL_SIZE=X in the environment.
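
A hedged sketch of that pre-resolution approach (error handling kept minimal; the Host header preserves virtual-host routing):

'use strict';
const dns = require('dns');
const http = require('http');

// Resolve with c-ares first (dns.resolve4 does not use the libuv
// threadpool), then connect to the address while keeping the original
// name in the Host header.
function getViaResolve(hostname, path, callback) {
  dns.resolve4(hostname, function (err, addresses) {
    if (err) return callback(err);
    const req = http.get({
      host: addresses[0],          // connect to the resolved address
      path: path,
      headers: { Host: hostname }, // keep the name for the server
    }, function (res) { callback(null, res); });
    req.on('error', callback);
  });
}

For the threadpool route, the size is fixed at startup, e.g. UV_THREADPOOL_SIZE=64 node app.js.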

sam-github (Contributor) commented:

Btw, there's no need to increase the threadpool size if you can avoid using the threadpool, but fs does use the pool, so it's still worth looking at.

owenallenaz commented Jan 12, 2017

Is there an objection to exposing a setting on http and https that would allow someone to specify whether to use lookup or resolve? Currently there is a family argument where we can specify whether we want IPv4 or IPv6 results; my request feels like it is in the same vein. Is there harm in allowing something like lookupMethod : "lookup" || "resolve", where passing resolve uses the proper resolve4 or resolve6 according to the family argument? This avoids backward-compatibility issues and allows user-land modules like request to piggyback and expose the setting to their users.

Yes, the behavior I seek can be accomplished with things like the createConnection argument on http.request, but it's a fair amount of work for something that feels pretty simple and pretty ubiquitous. I think if most people knew that their dns requests were being processed synchronously, it would be a bigger issue than it is, and it really only rears its head when a major dns provider starts timing out and suddenly your node app comes to a screeching halt because of synchronous calls (which is why we avoid sync calls in the first place!).

sam-github (Contributor) commented:

dns calls are no more synchronous than fs calls; neither is synchronous. They do, however, contend for the thread pool.

What is happening with DNS provider timeouts? Is the thread in the (small) pool taking so long that new lookups are blocked, waiting for a free thread?

Has pre-resolving the hostnames not worked for you?

There has been some interest in having a pluggable resolver, but no one has stepped up to do the work yet. Note that user-land modules like request can already implement options to resolve with dns.resolve instead of dns.lookup.

owenallenaz commented Jan 13, 2017

@sam-github The problem arose for us when our DNS server started taking 5s+ to respond, due to issues with the server itself. The node process was responsible for heartbeating thousands of different host names via http requests every 5 minutes. When the dns server slowed, the dns.lookup calls quickly took up all 4 members of the thread pool. Since the node event loop was still free, our setInterval-based file logging would fire but never reach the fs callback (due to the full thread pool), causing a process-crashing memory leak. The idea that DNS lookups can cause a memory leak via file-system backups is really, really unintuitive. Generally dns is instant, and we've since updated our local dns configuration to query multiple servers in parallel via dnsmasq to further prevent this problem.

I have a reproduction of the fs lock-up here:

https://gist.github.com/owenallenaz/fc3d8b24cdaa4ec04cc9dd0bf8ab485f

In that case, using dns.lookup (the default for http.request), I can see that fs.readFile stalls for an entire 4s. If I change this to dns.resolve, this behavior does not occur.

The main reason I bring this up here: if someone does not care about hosts-file interactions, it would seem to me that the default behavior really should be dns.resolve rather than dns.lookup. Obviously we can't change that for backwards-compat reasons, but simplifying the switch to a setting seems worthwhile, and it could then be leveraged downstream by user-land packages. Pre-resolving did work for us; it's just a fair amount of code for something that feels as if it should almost be default behavior. If we were to submit a branch adding such a setting, would it be considered, or is it still something you'd want in user-land?

sam-github (Contributor) commented Jan 13, 2017

Thanks for the war story; it's good for them to be heard. I'm a bit surprised you don't also have a memory leak in the dns.resolve() case. Don't you end up with the dns.resolve() calls taking much longer than necessary, and requests piling up? Or do they just pile up to a higher-than-expected but still tolerable steady-state memory usage, as opposed to the fs log output, which piled up until out-of-memory?

Btw, arguably, async itself is unintuitive (at least, lots of people have had difficulty wrapping their head around it), so general statements about how some node API was found unintuitive to some person aren't really compelling reasons to make the API more complex.

I find the docs deficient: they have no description of what is async-on-epoll and what is async-in-the-thread-pool, and I intend to fix that. I think clearly documenting that, as well as UV_THREADPOOL_SIZE, is a better fix for the "unintuitive" than adding yet another knob to an already complex HTTP API (and the same knob would need to be added to the net and tls APIs too, of course, since they also resolve host names).

All PRs are considered, of course. But you want a predictor of how positive people would feel about it before wasting your time, understandable.

Personally, I'd not be inclined toward it; here is why.

function get(host, ...) {
  dns.resolve(host, function (err, addresses) {
    http.get(addresses[0], ...);
  });
}

vs

function get(host, ...) {
  http.get(host, {
    hostLookup: 'resolve'
  }, ...);
}

The first looks like functional composition to me: nice APIs that work well together. The second just looks like an ugly hack.

IMHO, of course.

Also, an options knob that changes which underlying dns.* function is used doesn't allow more interesting usage, like making the dns implementation completely user-selectable.

This particular suggestion is probably awful, but as a straw-man, I'd be more inclined to something like

function get(host, ...) {
  http.get(host, {dns: myDns}, ...);
}

Where myDns (and the built-in dns) would be required to have a host2addr() method. Then you could completely replace dns with your own implementation, or just do myDns = {host2addr: dns.resolve}, or even dns.host2addr = dns.resolve; to change it globally.

Basically, I'd prefer to see an approach to a more pluggable DNS that does more than what can already be achieved by functional composition; otherwise, just document that node APIs can be used together to get specific results.

owenallenaz commented:

Sounds good. I'll move my nagging up the chain to request to see if they're interested at that level. If this thread sees more traffic, we can re-evaluate. As it stands right now, I think it's fair to say the current system is entirely pluggable, thanks to the ability to pass http.request({ createConnection : customFn }), which can then pass through to net.createConnection({ lookup : customLookupFn }). That pathway does allow a custom dns lookup method to be used as you describe (not well documented, but existing; see the sketch below). It's just a fair amount of code per call, so putting it in a library somewhere is wise.
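
A hedged sketch of that pathway (createConnection and lookup are the documented option names; the resolver logic and target host are illustrative):

'use strict';
const dns = require('dns');
const http = require('http');
const net = require('net');

// A dns.lookup-compatible function backed by c-ares: the signature is
// (hostname, options, callback) with callback(err, address, family).
function resolveLookup(hostname, options, callback) {
  dns.resolve4(hostname, function (err, addresses) {
    if (err) return callback(err);
    callback(null, addresses[0], 4);
  });
}

// With no agent set and createConnection given, http uses it directly.
http.get({
  host: 'example.com', // hypothetical target
  createConnection: function (opts) {
    // Keep getaddrinfo (and the libuv threadpool) out of the picture.
    return net.createConnection(Object.assign({}, opts, { lookup: resolveLookup }));
  },
}, function (res) {
  console.log('status:', res.statusCode);
});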

To your question: I do end up with dns.resolve calls piling up, but only briefly, as the whole load of 1000 queries clears in 5s. What I don't end up with is fs.writeFile calls piling up and holding their string data in memory forever; the application's setInterval logging mechanics still work because the thread pool still has room, even when the resolve calls are timing out.

With dns.lookup it can only run 4 requests in parallel, so 1000 calls means 1000 calls / 4 in parallel * 5 seconds per batch === 20 minutes of complete thread pool lock-out. With the thread pool locked up, each setInterval-triggered fs.writeFile holds its string forever, or at least for 20 minutes, and by then it has attempted to queue up 4 more writes before even one of them flushes. It would require a massive increase in the thread pool count to make a dent in that, because of the problematic blocking behavior.

RobertDiebels commented Feb 7, 2017

Hi guys,

Good to see there is still work going on around this issue. I'd just like to re-affirm that it is still relevant. I submitted an issue to Karma not too long ago, which they investigated and traced to this bug.

They mentioned this issue: nodejs/node-v0.x-archive#25489 as a reference. Unfortunately it is in the archives (and I don't think anyone is tracking it). It is also assigned to someone who hasn't committed to node for a few months.

So I'm partially commenting here so that I have something to track concerning my Karma issue: karma-runner/karma#2050

EDIT: I'll read the entire thread later today (it's morning here and I have to go to work soon). I read something about using dns.resolve instead of dns.lookup; it might be worth mentioning that in the Karma issue.

sam-github (Contributor) commented:

@RobertDiebels nodejs/node-v0.x-archive#25489 isn't at all related to this issue.

RobertDiebels commented:

@sam-github my bad. This issue, nodejs/node-v0.x-archive#25338, gave me the impression that it was; it is linked to the one from before, nodejs/node-v0.x-archive#25489.

bminer (Contributor) commented Mar 27, 2017

Why not have multiple thread pools? File system I/O has nothing to do with networking/DNS anyway.

This has been an ongoing issue for years. In our application, we had to patch dns.lookup and force it to use only 1 thread in the pool. We do this by queuing dns.lookup calls and caching successful lookups for a short period of time (5 minutes worked for our app). If the queue gets too large (e.g. due to connectivity issues or DNS problems), the dns.lookup method immediately starts returning errors until things stabilize again. It doesn't solve the problem, but at least file-system operations are left unaffected.

I should also mention that dns.resolve was not an acceptable solution in our application either. We needed to use getaddrinfo since that uses /etc/resolv.conf, etc.

Edit: As an improvement to our workaround, one could always fall back to dns.resolve when the dns.lookup queue gets too big.
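
A hedged sketch of that patching approach (names and limits are illustrative): serialize dns.lookup so it occupies at most one pool thread, cache successes briefly, and fail fast once the queue backs up.

'use strict';
const dns = require('dns');

const CACHE_TTL = 5 * 60 * 1000; // 5 minutes, per the comment above
const MAX_QUEUE = 100;           // illustrative threshold
const cache = new Map();         // hostname -> { address, family, expires }
const queue = [];
let busy = false;

function cachedLookup(hostname, callback) {
  const hit = cache.get(hostname);
  if (hit && hit.expires > Date.now()) {
    return process.nextTick(callback, null, hit.address, hit.family);
  }
  if (queue.length >= MAX_QUEUE) {
    // Fail fast instead of letting lookups pile onto the threadpool.
    return process.nextTick(callback, new Error('DNS queue overflow'));
  }
  queue.push({ hostname: hostname, callback: callback });
  drain();
}

function drain() {
  if (busy || queue.length === 0) return;
  busy = true;
  const job = queue.shift();
  dns.lookup(job.hostname, function (err, address, family) {
    busy = false;
    if (!err) {
      cache.set(job.hostname, { address: address, family: family, expires: Date.now() + CACHE_TTL });
    }
    job.callback(err, address, family);
    drain(); // at most one lookup in flight at a time
  });
}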

sam-github (Contributor) commented:

@bminer why not always use dns.resolve()? Do you need to be able to look up names in local mDNS, /etc/hosts, or using other non-DNS resolution techniques? Did increasing the threadpool size not work for you?

The problem with multiple pools is that it raises the question of "how many". All C++ code using libuv to queue work off the main thread uses the same pool and potentially contends for those threads (including fs, dns.lookup, crypto.random, SQL native DB drivers, etc.). Should each one have its own pool? That makes it hard to control the overall amount of concurrency; the number of threads in use could explode as the number of pools does, without a decent way to manage them overall. And what about fs.read() and fs.write(): should they each have their own pools? Is it just dns.lookup() that is a special case?

bminer (Contributor) commented Mar 29, 2017

Interesting point. After thinking about this for a bit, I think dns.lookup is a special case. Most of the other work blocks a thread in proportion to the "size" of the request. In other words, reading a very large file takes a long time, and application responsiveness can be improved by reading smaller chunks.

But dns.lookup is supposed to be quite fast in the average case. Nevertheless, when connection issues occur, you could be waiting several seconds for a response. Just a few concurrent dns.lookup calls can lock the entire thread pool, even though the OS is not doing much work.

My suggestion is to give dns.lookup its own thread pool, defaulting to 1 or 2 threads. Everyone else can share another thread pool, defaulting to 4 threads.

Another idea is to allow the creation of multiple thread pools and assigning different tasks to different pools. Perhaps the default configuration is a single thread pool with 4 threads -- all blocking activity uses that pool. But, the end-user could customize things to allow dns.lookup to use its own pool.

sam-github (Contributor) commented:

Fair enough, it could be done, at some cost in time, by someone sufficiently motivated. War stories may provide that motivation.

You didn't answer my question: why not always use dns.resolve()? It uses c-ares, which does not go through the uv work queue, so it doesn't interact with the uv thread pool. Most people who care about this can use dns.resolve() (though admittedly that doesn't "work out of the box").

jorangreef (Contributor) commented Mar 29, 2017

This is how it could work:

  1. The current threadpool used by Node and native bindings should be increased to 128 threads by default, or to MAX_IO_THREADPOOL_SIZE or MAX_THREADPOOL_SIZE if defined. Most of these threads are used for IO and are not "run hot", so context switching is less of an issue. This should be called the "IO" threadpool, and it should be the default threadpool for Node and native bindings.

  2. A new "CPU" threadpool should be added. This should be sized by default to the number of cores minus 1 (leaving a core free for the IO threadpool). The size of the CPU threadpool could also be explicitly set by the user through a MAX_CPU_THREADPOOL_SIZE env variable at startup. CPU-intensive code which is "run hot" can then start to take advantage of the CPU threadpool.

This design should be amenable to CPU pinning although I don't have experience with that.

Calls such as crypto.randomBytes() and crypto.pbkdf2() could be migrated to the CPU threadpool.

NAN could expose an option to have a Nan::AsyncQueueWorker indicate that it should execute in the CPU threadpool.

By default, unless indicated, everything should run in the IO threadpool.

See this and this regarding 128 threads for the IO threadpool. @bnoordhuis's comment in the second link could be handled by migrating some hot fs.stat calls to the CPU threadpool (after passing a heuristic).

bminer (Contributor) commented Mar 29, 2017

@sam-github Sorry, I didn't answer your question. We needed to use dns.lookup because it uses the OS's getaddrinfo routine; we use custom DNS servers controlled by resolvconf. dns.resolve might have worked, but I also didn't want to perform DNS resolution every time I make an HTTP request, especially when most of the time it is to the same host over and over. I feel like dns.lookup has more predictable behavior and is more performant.

@jorangreef - How does your suggestion solve the problem at hand? The idea is to separate DNS into its own thread pool to avoid I/O thread starvation when DNS/networking problems crop up. It raises the question: will 128 threads be enough? One can easily build up 128 pending DNS requests when handling dozens of requests per second.

bminer (Contributor) commented Mar 29, 2017

(war story follows)... I should also disclose that we bumped into this problem because we have PCs, connected to the Internet via cellular modems, running a Node process. Cell-modem connections are iffy at best, and connections drop a lot. Whenever the connection was lost, the entire Node application became unresponsive for a few minutes. This problem was very difficult to debug. Eventually we just patched dns.lookup, and things have been running smoothly ever since.

When using Node on a server, 99.99% of the time you probably don't care about DNS sharing the thread pool with other I/O, but when you're using Node on PCs where DNS can regularly fail, this issue matters quite a lot.

jorangreef (Contributor) commented:

> How does your suggestion solve the problem at hand? The idea is to separate DNS into its own thread pool to avoid I/O thread starvation when DNS/networking problems crop up.

I'm not suggesting that Node attempt to solve every application's cellular modem issue perfectly. Your example could equally be reframed along many dimensions, leading to an explosion in the number of threadpools required. For example, say you're using 32 hard drives in a server, and one drive goes bad and blocks requests, blocking all 128 threads (or all 1024 threads, if that were possible, etc.). Your logic of giving DNS its own threadpool would similarly suggest a separate threadpool per failure domain, i.e. one threadpool per hard drive?

> One can easily build up 128 pending DNS requests when handling dozens of requests per second.

It's better than the default of 4 threads. With 128 threads, you've given it enough concurrency; at that point there's probably something wrong with your network, which would likely overflow any threadpool you throw at it. That kind of thing should be monitored and handled with a queue in your application, before requests hit the IO threadpool. When you see the queue overflowing, your application should do something about it, rather than encouraging the queue to keep growing (by throwing more threads at it). Perhaps Node could help here with a maxConcurrentDNSRequests limit and a callback to notify you when DNS requests start getting dropped by Node before they hit the network.

So I disagree that Node should have a DNS-only threadpool as you suggest.

But there is a problem with Node's threadpool design.

The basic issue in all the recent threadpool discussions has been that the current threadpool design conflates CPU-intensive tasks with IO-intensive tasks, and uses a default of 4 threads, which is good for CPU-intensive tasks but bad for IO-intensive tasks. What you want for CPU-intensive tasks such as pbkdf2() or fs.stat() hitting the filesystem cache is a thread per core, to minimize context switching; what you want for IO-intensive tasks such as slow DNS requests is many threads, to get enough concurrency. So you want MAX_THREADPOOL_SIZE=4 for CPU-intensive tasks but MAX_THREADPOOL_SIZE=128 for IO-intensive tasks.

Two threadpools, one for CPU-intensive tasks and one for IO-intensive tasks, would solve the majority of problems people have with Node's threadpool, allowing classification of tasks and pinning, as well as reasonable concurrency for IO tasks (the default of 4 threads is not reasonable), without conflating the two kinds of work.

vkurchatkin (Contributor) commented:

@jorangreef it would be even better to make the IO pool dynamically sized.

bminer (Contributor) commented Mar 29, 2017

@jorangreef - Fair enough. I agree with what you're saying. 128 threads for I/O seems like a lot, but... I agree with your proposal of a separate CPU threadpool with one thread per CPU core. I wonder: how much RAM is consumed by a sleeping thread? Anyway, the sizes of both threadpools should be configurable by the user via an environment variable or something.

I also like your suggestion of having Node limit the number of concurrent dns.lookup requests by default, although this value should be configurable by the user and even disabled if necessary. This solution seems a bit more elegant than a separate threadpool for DNS.

sam-github (Contributor) commented:

@bminer I thought dns.resolve()/c-ares used resolv.conf, and so would only query a local caching DNS proxy if so configured, which sounds like what you want. Did you confirm that it does not use resolv.conf? I'll have to check up on that; the docs are woefully brief.

@jorangreef I like your idea of separating pools by purpose (blocking vs. cpu-busy), but I also think for most use-cases it's the same as "increase the threadpool size to 128", since other than the two crypto RNG APIs you listed, I'm hard pressed to think of a time I've seen an addon delegate CPU-intensive work to the UV threadpool. The pool is almost entirely used to make (potentially) blocking calls, and the threadpool size can already be increased by anybody who wants to with UV_THREADPOOL_SIZE.

Btw, I'm neither for nor against increasing the pool size, but it might cause poorer performance for some workloads and help others. It's hard to get good feedback on this, which I think is why it was just left user-configurable. I'm not sure how much evidence would be needed to show the change is usually beneficial. It might be worth bringing up as a LEP: https://github.com/libuv/leps

refack (Contributor) commented Aug 21, 2018

Reopening until libuv/libuv#1845 lands in node

@refack refack reopened this Aug 21, 2018
@refack refack added the libuv Issues and PRs related to the libuv dependency or the uv binding. label Aug 21, 2018
gireeshpunathil (Member) commented Sep 25, 2018

this (libuv/libuv#1845) has landed in node through #22997, closing

addaleax (Member) commented:

@gireeshpunathil Do you have any idea how one could write a regression test in Node.js for this?

gireeshpunathil (Member) commented:

@addaleax - sure, let me think about it and, if possible, write one; keep this open till then so that I don't forget.

gireeshpunathil added a commit to gireeshpunathil/node that referenced this issue Dec 30, 2018
Validate that massive dns lookups do not block filesystem I/O
(or any fast I/O for that matter).
Prior to libuv/libuv#1845, a few back-to-back dns
lookups were sufficient to engage libuv threadpool workers in a blocking
manner, throttling other work items that need the pool. This test acts
as a regression test for the same.

Start slow and fast I/Os together, and make sure the fast I/O completes
within 1/100th of the time taken by the slow I/O.

Refs: libuv/libuv#1845
Refs: nodejs#8436

PR-URL: nodejs#23099
Reviewed-By: Sakthipriyan Vairamani <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
gireeshpunathil (Member) commented:

A regression test is in place (54fa59c); closing.
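
For reference, a rough sketch of the shape of such a test (illustrative only; the actual test landed in 54fa59c via #23099 and differs in detail, and the slow host names here are hypothetical):

'use strict';
const assert = require('assert');
const dns = require('dns');
const fs = require('fs');

// Kick off "slow" work (dns lookups against a slow resolver) and "fast"
// work (a small file read) together.
const start = Date.now();
let slowMs;
let fastMs;
let pending = 32;
for (let i = 0; i < 32; i++) {
  dns.lookup('name-' + i + '.example.invalid', function () {
    if (--pending === 0) slowMs = Date.now() - start;
  });
}

fs.readFile(__filename, function () {
  fastMs = Date.now() - start;
});

process.on('exit', function () {
  // After libuv/libuv#1845, fast I/O should not be throttled behind the
  // slow lookups; require it to finish within 1/100th of the slow time.
  assert.ok(fastMs < slowMs / 100,
            'fast I/O took ' + fastMs + ' ms vs ' + slowMs + ' ms');
});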

targos pushed a commit that referenced this issue Jan 1, 2019
refack pushed a commit to refack/node that referenced this issue Jan 14, 2019
BethGriggs pushed a commit that referenced this issue Apr 17, 2019
BethGriggs pushed a commit that referenced this issue Apr 28, 2019
BethGriggs pushed a commit that referenced this issue May 10, 2019
MylesBorins pushed a commit that referenced this issue May 16, 2019
Slayer95 referenced this issue in smogon/pokemon-showdown Jul 9, 2019
This helps prevent DNS poisoning attacks if the platform supports DNSSEC, since dns.resolve4 uses c-ares, which doesn't support DNSSEC.