
fs: improve readFile performance #27063

Closed

Conversation

BridgeAR (Member) commented Apr 3, 2019

This increases the maximum buffer size per read to 256kb when using
`fs.readFile`, which significantly improves read performance for
bigger files.

Refs: #25741
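
In sketch form, the change concerns how many bytes each individual read() requests. A minimal illustration (names other than `kReadFileBufferLength` are made up for this sketch; see the PR diff for the real code):

```js
// Illustrative sketch only, not the actual patch.
const kReadFileUnknownBufferLength = 8 * 1024; // size unknown: stay small
const kReadFileMaxChunkSize = 256 * 1024;      // size known: read big chunks

// How many bytes the next read() should request.
function nextChunkSize(totalSize, bytesRead) {
  if (totalSize === 0) // stat failed or non-regular file, so size is unknown
    return kReadFileUnknownBufferLength;
  return Math.min(kReadFileMaxChunkSize, totalSize - bytesRead);
}
```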

Benchmark (512kb chunk size):

                                                confidence improvement accuracy (*)    (**)   (***)
fs/readfile.js concurrent=10 len=1024 dur=5                   -3.29 %       ±4.88%  ±6.48%  ±8.40%
fs/readfile.js concurrent=10 len=16777216 dur=5       ***    592.66 %      ±18.04% ±24.13% ±31.68%
fs/readfile.js concurrent=1 len=1024 dur=5                     2.18 %       ±4.41%  ±5.84%  ±7.57%
fs/readfile.js concurrent=1 len=16777216 dur=5        ***    356.82 %      ±25.96% ±34.72% ±45.57%
Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines

BridgeAR added the performance label Apr 3, 2019
nodejs-github-bot added the fs label Apr 3, 2019

mscdex (Contributor) commented Apr 3, 2019

Couldn't we just change `kReadFileBufferLength` instead?

BridgeAR (Member, Author) commented Apr 3, 2019

> Couldn't we just change `kReadFileBufferLength` instead?

That would not have the expected behavior: `kReadFileBufferLength` is now only used in case the stat call fails and we do not know the file size. In that case, it's likely best to be conservative and not pre-allocate a big buffer.

mscdex (Contributor) commented Apr 3, 2019

Maybe we should have two constants then, since `kReadFileBufferLength` would no longer be completely accurate?

sam-github (Contributor) commented

> That would not have the expected behavior: `kReadFileBufferLength` is now only used in case the stat call fails and we do not know the file size. In that case, it's likely best to be conservative and not pre-allocate a big buffer.

It would have the behaviour I'd expect from the commit description! :-)

Why is this more conservative? What is safer? If we are going to read 1,000K, and we know it, with this change it will be read in larger 256K chunks. But if we don't know that it will be 1,000K, it will be read in the older, smaller 8K chunks. Maybe there is a good reason for this, but it's not immediately obvious.

I'm actually not clear on why there are two branches at all. It seems like the branch where the size isn't known would work for all cases. AFAICT the only optimization is that if the file happens to be exactly a multiple of kReadFileBufferLength, and .size is known, one extra read() (which would just return 0, meaning EOF) is avoided. Is that the entire point? But if .size came from a stat() call, then we always pay the stat() price and almost never get the "1 less read" bonus, so it doesn't seem a net win.

.... OK, I'm not deleting the above, but I think I just realized why there are two branches. The .size !== 0 branch uses pread() in libuv, which doesn't change the file position, but that won't work if an fd was passed and the fd isn't seekable (it's a stream or pipe, like stdin). It's trying to make fs.readFile(A_FD) not change the current file position. I think a comment to this effect, if my guess is right, would be useful.

Still not sure why the larger buf size isn't used in all cases.
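
For readers following along, a small sketch of the distinction sam-github describes, using the public fs.read() API: when position is a number, libuv performs a positioned read and the fd's offset is left untouched; when it is null, the read starts at the current offset and advances it. Buffer sizes here are arbitrary.

```js
'use strict';
const fs = require('fs');

const fd = fs.openSync(__filename, 'r');
const a = Buffer.alloc(32);
const b = Buffer.alloc(32);

// Positioned read (pread(2) underneath): only works on seekable fds
// and leaves the fd's file position untouched.
fs.read(fd, a, 0, a.length, 0, (err, n) => {
  if (err) throw err;
  // Sequential read (position === null): starts at the current file
  // position and advances it, the only mode that works for
  // non-seekable fds such as pipes or process.stdin.
  fs.read(fd, b, 0, b.length, null, (err2, m) => {
    if (err2) throw err2;
    // Both reads saw the same bytes, because the positioned read
    // did not move the file position.
    console.log(n === m && a.equals(b)); // true
    fs.closeSync(fd);
  });
});
```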

BridgeAR (Member, Author) commented Apr 4, 2019

@mscdex

> Maybe we should have two constants then, since `kReadFileBufferLength` would no longer be completely accurate?

I added another commit that also refactors the code a tiny bit to make it simpler. I also added the constant as suggested.

@sam-github

> I think a comment to this effect, if my guess is right, would be useful.

There is a comment in one branch that checks for `size === 0`. Should I add another one for this specific case?


Benchmark: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/323/


jasnell (Member) commented Apr 4, 2019

There's a slightly different approach we could take on this by either:

a. Providing an option to set the read chunk size... e.g. fs.readFile(fn, { readSize: 10000 }, () => {}), or...

b. Providing an option that is effectively a preallocated buffer to fill on each read... fs.readFile(fn, { readBuf: Buffer.alloc(10000) }, () => {})

Option a is likely the far better choice and allows users to performance tune on their own.
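
For illustration, usage under option (a) might have looked like the following; the `readSize` option is hypothetical, was only proposed in this thread, and was never added to `fs.readFile`:

```js
const fs = require('fs');

// Hypothetical API sketch: `readSize` does not exist in Node.js.
// It would let the caller pick the per-read chunk size directly.
fs.readFile('./data.bin', { readSize: 256 * 1024 }, (err, data) => {
  if (err) throw err;
  console.log(`read ${data.length} bytes in 256kb chunks`);
});
```

The appeal of (a) is that it needs no buffer-ownership rules; the caller only picks a chunk size.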

BridgeAR (Member, Author) commented Apr 4, 2019

I increased the value to 512kb and also doubled the value for unknown file sizes.

256kb already shows significant improvements:

                                                confidence improvement accuracy (*)    (**)   (***)
fs/readfile.js concurrent=10 len=1024 dur=5                    0.61 %       ±4.89%  ±6.48%  ±8.38%
fs/readfile.js concurrent=10 len=16777216 dur=5       ***    504.18 %      ±10.37% ±13.83% ±18.07%
fs/readfile.js concurrent=1 len=1024 dur=5                    -0.86 %       ±4.08%  ±5.40%  ±6.99%
fs/readfile.js concurrent=1 len=16777216 dur=5        ***    285.81 %      ±17.78% ±23.72% ±31.01%

@jasnell I believe we should raise this limit with or without the option. So adding the option could be done in a separate PR and this is a quick win.

zbjornson (Contributor) commented

This branch, master...zbjornson:chunksizeopt, has a chunkSize option like @jasnell's (a), if it's useful. I was using it for benchmarking.

However, @davisjam and I actually just had a nice chat and agreed that partitioning may be causing more harm than good (in part due to the increased risk of OOMs), and were both open to removing partitioning and instead adding a prominent doc warning.

BridgeAR (Member, Author) commented Apr 4, 2019

@jasnell

> b. Providing an option that is effectively a preallocated buffer to fill on each read... fs.readFile(fn, { readBuf: Buffer.alloc(10000) }, () => {})

So the buffer would be as big as the whole file? We would have to guard against wrong sizes (and then potentially still manage multiple buffers), and what would be the benefit of (b) over (a)?

jasnell (Member) commented Apr 4, 2019

As I said, option a is likely better than option b ;-)

sam-github (Contributor) commented

> There is a comment in one branch that checks for `size === 0`.

If you mean

> Unknown size, just read until we don't get bytes.

That says what, but not why. Since you can read until you don't get bytes whether you know the size or not, without some context it's quite mysterious why the code doesn't just always use the size === 0 branch. I happen to remember the kerfuffle about using fds instead of file names, so I think I guessed why, but only after writing a paragraph about how I didn't understand. It shouldn't take insider knowledge to understand this.

This was not introduced in this PR, so consider it a nit. I can fix it, if you want.
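
A comment along these lines, based on sam-github's hypothesis above (suggested wording only, not what landed), would capture the why:

```js
// Unknown size, so read until read() reports 0 bytes (EOF). This
// branch exists because the fd may not be seekable at all (it can be
// a pipe, a character device, or stdin handed in by the caller), and
// positioned reads, which the known-size branch relies on, fail on
// non-seekable fds.
```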

sam-github (Contributor) left a comment

The effect of this is to make the buffer size different when reading from pipes vs. from files (technically, for "seekable" vs "non-seekable" fds). I don't understand why both cases wouldn't want the same performance enhancement.

BridgeAR (Member, Author) commented Apr 4, 2019

@sam-github

This is not about whether an fd is used but about the file type.

We know the file size for regular files (S_IFREG type) but not for other file types. So if we know the file size, we allocate the full buffer in advance and then read into it in chunks. For other file types we allocate a small buffer, because over-allocating would cost memory and performance, and we create a new buffer on each read until we find the end of the file. At the end we concatenate those buffers.

See https://github.com/nodejs/node/pull/27063/files#diff-9a205ef7ee967ee32efee02e58b3482dR265
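
In code, the described strategy looks roughly like this synchronous sketch (simplified and illustrative; the real implementation is asynchronous and lives in the linked diff, and the chunk sizes here are just the values discussed):

```js
const fs = require('fs');

// Simplified synchronous sketch of the two strategies described above.
function readFileSketch(path) {
  const fd = fs.openSync(path, 'r');
  try {
    const st = fs.fstatSync(fd);
    if (st.isFile() && st.size > 0) {
      // Regular file: the size is trustworthy, so allocate the whole
      // result up front and fill it in large chunks.
      const buffer = Buffer.allocUnsafe(st.size);
      let pos = 0;
      while (pos < st.size) {
        const want = Math.min(512 * 1024, st.size - pos);
        const n = fs.readSync(fd, buffer, pos, want, null);
        if (n === 0) break; // file shrank under us: unexpected EOF
        pos += n;
      }
      return buffer.subarray(0, pos);
    }
    // Any other file type (pipe, socket, character device, ...): the
    // reported size means nothing, so read small chunks into fresh
    // buffers and concatenate them once we hit EOF.
    const chunks = [];
    for (;;) {
      const chunk = Buffer.allocUnsafe(8 * 1024);
      const n = fs.readSync(fd, chunk, 0, chunk.length, null);
      if (n === 0) return Buffer.concat(chunks);
      chunks.push(chunk.subarray(0, n));
    }
  } finally {
    fs.closeSync(fd);
  }
}

console.log(readFileSketch(__filename).length);
```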

mcollina (Member) left a comment

LGTM

BridgeAR (Member, Author) commented Apr 5, 2019

@nodejs/fs this could use some further reviews.

@sam-github @jasnell @zbjornson I suggest landing this, no matter what the eventual solution to improve the current situation might be. Are you good with that?


mcollina (Member) left a comment

LGTM. However I’d make both of them 512KB.

BridgeAR (Member, Author) commented Apr 5, 2019

@mcollina

> LGTM. However I’d make both of them 512KB.

I guess it won't make a huge difference, as files without a regular file type are not common, but it will definitely take significantly longer to allocate 512kb than the current 8kb, and if we over-allocate, we also do more work and use more memory than we should. What about using 64kb or 128kb for unknown file sizes? I guess with that size we cover a lot more files and also improve the performance significantly.
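
As a rough illustration of that allocation cost (a toy measurement; exact numbers vary by machine and Node version):

```js
// Time 10,000 allocations at each size discussed in the thread.
// The point is only that per-read allocation cost grows with size.
for (const size of [8 * 1024, 64 * 1024, 128 * 1024, 512 * 1024]) {
  const label = `allocUnsafe ${size / 1024}kb x 10000`;
  console.time(label);
  for (let i = 0; i < 10000; i++) Buffer.allocUnsafe(size);
  console.timeEnd(label);
}
```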

mcollina (Member) commented Apr 5, 2019

> I guess it won't make a huge difference, as files without a regular file type are not common, but it will definitely take significantly longer to allocate 512kb than the current 8kb, and if we over-allocate, we also do more work and use more memory than we should. What about using 64kb or 128kb for unknown file sizes? I guess with that size we cover a lot more files and also improve the performance significantly.

Mine is not a blocker. I'm ok with it as it is. Maybe you can add a comment with the above?

BridgeAR (Member, Author) commented Apr 5, 2019

I added a comment to highlight the difference and updated the buffer size to 64kb for unknown file sizes.

BridgeAR added the author ready label Apr 5, 2019

davisjam (Contributor) left a comment

LGTM. This should give a big performance boost while still "taking turns" on the threadpool.
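
That turn-taking is observable with a toy experiment: pin the threadpool to one thread and submit a small read after a huge one. The file names and sizes below are arbitrary, and the env var must be set before the pool is first used; the exact ordering still depends on timing and platform.

```js
// Must be set before the first async fs call creates the threadpool.
process.env.UV_THREADPOOL_SIZE = '1';
const fs = require('fs');

// Arbitrary scratch files for the demonstration.
fs.writeFileSync('big.tmp', Buffer.alloc(64 * 1024 * 1024));
fs.writeFileSync('small.tmp', Buffer.alloc(1024));

fs.readFile('big.tmp', () => console.log('big done'));
fs.readFile('small.tmp', () => console.log('small done'));
// Because the big read is split into many chunked reads, the small
// read can be scheduled between two of its chunks, so "small done"
// typically prints first despite being submitted second.
```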

BridgeAR added a commit that referenced this pull request Apr 7, 2019
This increases the maximum buffer size per read to 512kb when using
`fs.readFile`. This is important to improve the read performance for
bigger files.

PR-URL: #27063
Refs: #25741
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Jamie Davis <[email protected]>
BridgeAR (Member, Author) commented Apr 7, 2019

Landed in eb2d416 🎉

Thanks for the reviews!

Labels
  • author ready: PRs that have at least one approval, no pending requests for changes, and a CI started.
  • fs: Issues and PRs related to the fs subsystem / file system.
  • performance: Issues and PRs related to the performance of Node.js.
9 participants