
http: server code refactor #6533

Merged
merged 8 commits on Dec 29, 2016

Conversation

mscdex
Contributor

@mscdex mscdex commented May 2, 2016

Checklist
  • tests and code linting pass
  • the commit message follows commit guidelines
Affected core subsystem(s)
  • http
  • streams
Description of change

This PR consists of a refactor of http server-related code as well as various cleanups and minor optimizations in the http and writable stream modules.

Here are some benchmark results with these changes:

http/simple.js type=bytes length=4 chunks=0 c=50: ./node: 25895 ./node-http-before: 21780 ........ 18.90%
http/simple.js type=bytes length=4 chunks=0 c=500: ./node: 24965 ./node-http-before: 21956 ....... 13.70%
http/simple.js type=bytes length=4 chunks=1 c=50: ./node: 21823 ./node-http-before: 20029 ......... 8.96%
http/simple.js type=bytes length=4 chunks=1 c=500: ./node: 21403 ./node-http-before: 17624 ....... 21.44%
http/simple.js type=bytes length=4 chunks=4 c=50: ./node: 1197.3 ./node-http-before: 1197.1 ....... 0.01%
http/simple.js type=bytes length=4 chunks=4 c=500: ./node: 11844 ./node-http-before: 11817 ........ 0.22%
http/simple.js type=bytes length=1024 chunks=0 c=50: ./node: 22803 ./node-http-before: 21585 ...... 5.65%
http/simple.js type=bytes length=1024 chunks=0 c=500: ./node: 21938 ./node-http-before: 20275 ..... 8.20%
http/simple.js type=bytes length=1024 chunks=1 c=50: ./node: 18782 ./node-http-before: 17787 ...... 5.59%
http/simple.js type=bytes length=1024 chunks=1 c=500: ./node: 17992 ./node-http-before: 16139 .... 11.48%
http/simple.js type=bytes length=1024 chunks=4 c=50: ./node: 1197.8 ./node-http-before: 1195.8 .... 0.17%
http/simple.js type=bytes length=1024 chunks=4 c=500: ./node: 11656 ./node-http-before: 11675 .... -0.17%
http/simple.js type=bytes length=102400 chunks=0 c=50: ./node: 1487.3 ./node-http-before: 1466.3 .. 1.44%
http/simple.js type=bytes length=102400 chunks=0 c=500: ./node: 1494.2 ./node-http-before: 1470.5 . 1.61%
http/simple.js type=bytes length=102400 chunks=1 c=50: ./node: 1125.4 ./node-http-before: 1132.1 . -0.60%
http/simple.js type=bytes length=102400 chunks=1 c=500: ./node: 1084.6 ./node-http-before: 1079.8 . 0.44%
http/simple.js type=bytes length=102400 chunks=4 c=50: ./node: 4260.3 ./node-http-before: 4279.6 . -0.45%
http/simple.js type=bytes length=102400 chunks=4 c=500: ./node: 4241.2 ./node-http-before: 4148.5 . 2.23%
http/simple.js type=buffer length=4 chunks=0 c=50: ./node: 21961 ./node-http-before: 20022 ........ 9.68%
http/simple.js type=buffer length=4 chunks=0 c=500: ./node: 20901 ./node-http-before: 19318 ....... 8.20%
http/simple.js type=buffer length=4 chunks=1 c=50: ./node: 20613 ./node-http-before: 19061 ........ 8.15%
http/simple.js type=buffer length=4 chunks=1 c=500: ./node: 19639 ./node-http-before: 18314 ....... 7.23%
http/simple.js type=buffer length=4 chunks=4 c=50: ./node: 17262 ./node-http-before: 15158 ....... 13.89%
http/simple.js type=buffer length=4 chunks=4 c=500: ./node: 16812 ./node-http-before: 15758 ....... 6.69%
http/simple.js type=buffer length=1024 chunks=0 c=50: ./node: 21184 ./node-http-before: 20191 ..... 4.92%
http/simple.js type=buffer length=1024 chunks=0 c=500: ./node: 18821 ./node-http-before: 18476 .... 1.87%
http/simple.js type=buffer length=1024 chunks=1 c=50: ./node: 19632 ./node-http-before: 16735 .... 17.31%
http/simple.js type=buffer length=1024 chunks=1 c=500: ./node: 18979 ./node-http-before: 17733 .... 7.02%
http/simple.js type=buffer length=1024 chunks=4 c=50: ./node: 14943 ./node-http-before: 14621 ..... 2.20%
http/simple.js type=buffer length=1024 chunks=4 c=500: ./node: 16062 ./node-http-before: 15061 .... 6.65%
http/simple.js type=buffer length=102400 chunks=0 c=50: ./node: 17115 ./node-http-before: 15099 .. 13.35%
http/simple.js type=buffer length=102400 chunks=0 c=500: ./node: 16807 ./node-http-before: 15763 .. 6.62%
http/simple.js type=buffer length=102400 chunks=1 c=50: ./node: 14429 ./node-http-before: 13821 ... 4.39%
http/simple.js type=buffer length=102400 chunks=1 c=500: ./node: 15486 ./node-http-before: 13636 . 13.57%
http/simple.js type=buffer length=102400 chunks=4 c=50: ./node: 14031 ./node-http-before: 12521 .. 12.06%
http/simple.js type=buffer length=102400 chunks=4 c=500: ./node: 14034 ./node-http-before: 13554 .. 3.54%

The most controversial changes here are probably the ones in the writable stream module, because of:

  • var -> const changes
    • Thinking about the readable-stream module in particular here, which would see problems with older versions of node (e.g. v0.10), unless these changes were specifically not pulled in there (is this even feasible, despite older node versions going unmaintained at the end of this year?).
  • passing of the stream to write() callbacks as a second argument
    • This may be considered a semver-major change, as it may or may not affect modules that use (directly or indirectly) arguments inside callbacks passed to write()
    • The reason for this particular change is that (AFAICT) it is the only way to keep V8 from having to constantly recompile/reoptimize the write() callback used in OutgoingMessage.prototype.end(), since writable streams do not call their write() callbacks within the context of the stream (e.g. they use cb() instead of cb.call(stream)).

/cc @nodejs/http

@mscdex mscdex added wip Issues and PRs that are still a work in progress. http Issues or PRs related to the http subsystem. performance Issues and PRs related to the performance of Node.js. labels May 2, 2016
@mscdex mscdex changed the title Http server refactor http server code refactor May 2, 2016
@mscdex mscdex changed the title http server code refactor http: server code refactor May 2, 2016
@jasnell
Member

jasnell commented May 2, 2016

First off, thank you for doing this. This module needed some love badly. Glad to see it getting some attention.

The readable-stream changes are certainly a bit concerning. Let's see what @nodejs/streams has to say.

@mscdex
Contributor Author

mscdex commented May 29, 2016

/cc @nodejs/collaborators @nodejs/streams Any comments, especially on the controversial changes?

const req = parser.incoming;
debug('SERVER upgrade or connect', req.method);

if (!d)
Member

I'd find it more readable if this were on the same line or wrapped in {}s.

@benjamingr
Member

I've attempted reading it, but I'm not really sure; would you mind outlining the changes I should be looking at and explaining them?

Alternatively, I would split commits of things like var -> const and naming function expressions into a separate commit, since it makes it hard for me to distill the parts that changed logic vs. cosmetic changes. I'm aware that the rest of core might not feel that way. Never mind, I see you already did this; I was fooled by GH.

  • 3d6c0b7 LGTM
  • 1e824be I'll have to read again, but would appreciate more detail in the commit message. I read the code and I didn't notice any errors but I don't really understand all of its logic.
  • 131c596 LGTM (nit)
  • 1bab396 left a question, but the code itself looks fine.
  • 08d8a87 LGTM with a test request
  • 37a652c LGTM

@mscdex
Contributor Author

mscdex commented May 29, 2016

@benjamingr I'm not sure what in particular (in 1e824be) you're looking for in the commit message?

}
return valid;
return true;
}
Member

There might be a slight benefit in having a single return point. Not sure if it is visible here or not.

Member

Why?

Member

It's very low level and was definitely valid for some old versions of V8. It is due to how V8 "compiles" this code: basically, having a single return statement allows V8 to produce "faster" code.

Not sure if it still applies. @trevnorris can definitely explain it better than I do.
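For reference, a minimal sketch of the two shapes being discussed; whether the single-exit form still optimizes better on modern V8 is uncertain, as noted above, so this only illustrates the stylistic difference.

```javascript
'use strict';

// Multiple exit points: return as soon as the answer is known.
function checkMulti(list) {
  for (var i = 0; i < list.length; i++) {
    if (list[i] < 0) return false;
  }
  return true;
}

// Single exit point: track the result and return once at the end,
// the shape the review comment suggests may have helped older V8.
function checkSingle(list) {
  var valid = true;
  for (var i = 0; i < list.length; i++) {
    if (list[i] < 0) {
      valid = false;
      break;
    }
  }
  return valid;
}
```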

@mcollina mcollina added the stream Issues and PRs related to the stream subsystem. label May 30, 2016
@mcollina
Member

mcollina commented May 30, 2016

This is amazing work @mscdex! Some comments on your questions:

  • var -> const changes
    • Thinking about the readable-stream module in particular here, which would see problems with older versions of node (e.g. v0.10), unless these changes were specifically not pulled in there (is this even feasible, despite older node versions going unmaintained at the end of this year?).

No problem, readable-stream is already transpiled (from nodejs/readable-stream#186, 2.0.5). Go ahead and use all of the ES6 features you like. cc @calvinmetcalf.

  • passing of the stream to write() callbacks as a second argument
    • This may be considered a semver-major change, as it may or may not affect modules that use (directly or indirectly) arguments inside callbacks passed to write()
    • The reason for this particular change is that (AFAICT) it is the only way to keep V8 from having to constantly recompile/reoptimize the write() callback used in OutgoingMessage.prototype.end(), since writable streams do not call their write() callbacks within the context of the stream (e.g. they use cb() instead of cb.call(stream)).

This is not correct. See the technique I used in https://github.com/nodejs/node/blob/master/lib/_stream_writable.js#L394, in particular https://github.com/nodejs/node/blob/master/lib/_stream_writable.js#L511-L530. Yes, it is really, really ugly, but it gets the job done. A module that does the same trick is https://github.com/mcollina/reusify. The overhead slightly reduces the benefit, but it's better than an API change.

I would prefer to change the interface so that cb is called within the context of the stream. This might involve some more work; I would encourage you to do two PRs, one with the stream change and one with the HTTP one. It's a bit too much to reason about here. It might also affect fs and net differently.

On the other hand, I do not see a compatibility problem in either case. I need this feature in a gazillion other places, and I do not see any way this could break, as currently it's not set. I do not see a compatibility issue because this is used internally, and currently there is no public API to run an HTTP server on top of an arbitrary stream.
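The object-reuse trick referenced above (reusify implements a similar idea) can be sketched generically; the names here are illustrative, not the module's actual API. Recycling a small pool of state objects keeps their hidden class stable, so V8 can keep the code that touches them optimized.

```javascript
'use strict';

// A tiny free-list pool: released objects are linked together and
// handed back out instead of being reallocated.
function createPool(Ctor) {
  let head = new Ctor();
  return {
    get() {
      const current = head;
      head = current.next || new Ctor();
      current.next = null;
      return current;
    },
    release(obj) {
      obj.next = head;
      head = obj;
    }
  };
}

// Hypothetical per-write state object, analogous to CorkedRequest.
function WriteReq() {
  this.callback = null;
  this.next = null;
}

const pool = createPool(WriteReq);
const req = pool.get();
req.callback = () => {};
// ... use req, then reset and recycle it so V8 sees a stable shape:
req.callback = null;
pool.release(req);
```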

@mscdex
Contributor Author

mscdex commented May 30, 2016

@mcollina Well actually CorkedRequest.finish() is another method that continually gets recompiled/reoptimized because it's not a prototype method, but that's a separate issue ;-)

I haven't done any benchmarks yet, but I'm not sure if adding the overhead of cb.call() (vs cb()) is a worthwhile tradeoff just to benefit the http module, since it will slow down all other stream users?

@mcollina
Member

@mcollina Well actually CorkedRequest.finish() is another method that continually gets recompiled/reoptimized because it's not a prototype method, but that's a separate issue ;-)

Not really: if clearBuffer is not called multiple times synchronously, only two of those should exist for each stream. This covers 99.999% of use cases (uncork in HTTP is wrapped in process.nextTick), so it should get optimized pretty heavily. If not, then we should have a look.

I haven't done any benchmarks yet, but I'm not sure if adding the overhead of cb.call() (vs cb()) is a worthwhile tradeoff just to benefit the http module, since it will slow down all other stream users?

It's already used everywhere in the EventEmitter interface, and it is used quite a bit for each chunk. I don't think it will cause a significant decrease. Note that most users do not use the callback on write(); they rely on pipe(), which does not pass a callback.

@mscdex
Contributor Author

mscdex commented May 30, 2016

@mcollina RE: CorkedRequest.finish(), it gets recompiled/reoptimized for every http connection. This is very noticeable when you run the http benchmarks and you have --trace-opt, etc. turned on.

@mcollina
Member

@mcollina RE: CorkedRequest.finish(), it gets recompiled/reoptimized for every http connection. This is very noticeable when you run the http benchmarks and you have --trace-opt, etc. turned on.

Yes! Before #4354 it was reallocated whenever we did _writev.
Any idea that solves the problem without having to do those gymnastics would help.

Again, I really think we should consider splitting this into two PRs, one stream-related and one HTTP-related. It will simplify benchmarking/reviewing.

@mscdex
Contributor Author

mscdex commented Dec 20, 2016

Alright, I've completely redone this PR now.

Things to note:

  • To avoid the constant re-optimization of some anonymous functions, I have opted to instead use fn.bind() which now actually performs well in master/node v7. This means drastic changes to the stream API are no longer necessary. However, I think it's probably best that the @nodejs/streams team not pull in the streams changes from this PR to readable-stream because most users will not be on node v7, so the usage of fn.bind() in there will make things slower for them.

  • When validating http headers/tokens, I've decided to optimize for the common case of character codes <= 255, so there will be a perf decrease for characters whose code is > 255. This is because the lookup table is only 256 entries in size, so an access at an out-of-range index (which yields undefined) is slow. I could add a "char > 255" check before the table lookup, which still achieves a net perf increase for the common case, but it's nowhere near as fast as without the extra check.

  • Performance increase is not as large as it was initially, I suspect that was because of perf improvements made since this PR was first submitted. Here are the results I currently get (up to 7%):

    http/simple.js c=50 chunks=0 length=1024 type="buffer" benchmarker="wrk"         5.09 %         *** 2.321680e-15
    http/simple.js c=50 chunks=0 length=1024 type="bytes" benchmarker="wrk"          6.08 %         *** 5.745435e-15
    http/simple.js c=50 chunks=0 length=102400 type="buffer" benchmarker="wrk"       5.12 %         *** 2.096909e-17
    http/simple.js c=50 chunks=0 length=102400 type="bytes" benchmarker="wrk"       -0.35 %             3.260978e-01
    http/simple.js c=50 chunks=0 length=4 type="buffer" benchmarker="wrk"            5.52 %         *** 1.277371e-17
    http/simple.js c=50 chunks=0 length=4 type="bytes" benchmarker="wrk"             6.15 %         *** 2.696716e-17
    http/simple.js c=50 chunks=1 length=1024 type="buffer" benchmarker="wrk"         5.12 %         *** 2.151779e-14
    http/simple.js c=50 chunks=1 length=1024 type="bytes" benchmarker="wrk"          4.39 %         *** 1.425773e-15
    http/simple.js c=50 chunks=1 length=102400 type="buffer" benchmarker="wrk"       4.59 %         *** 2.692188e-15
    http/simple.js c=50 chunks=1 length=102400 type="bytes" benchmarker="wrk"        0.30 %             3.754454e-01
    http/simple.js c=50 chunks=1 length=4 type="buffer" benchmarker="wrk"            5.04 %         *** 7.873866e-14
    http/simple.js c=50 chunks=1 length=4 type="bytes" benchmarker="wrk"             4.85 %         *** 3.014973e-12
    http/simple.js c=50 chunks=4 length=1024 type="buffer" benchmarker="wrk"         3.86 %         *** 4.730619e-12
    http/simple.js c=50 chunks=4 length=1024 type="bytes" benchmarker="wrk"         -0.03 %             7.544566e-01
    http/simple.js c=50 chunks=4 length=102400 type="buffer" benchmarker="wrk"       2.30 %         *** 4.291004e-06
    http/simple.js c=50 chunks=4 length=102400 type="bytes" benchmarker="wrk"        2.61 %          ** 7.671801e-03
    http/simple.js c=50 chunks=4 length=4 type="buffer" benchmarker="wrk"            3.85 %         *** 2.555875e-13
    http/simple.js c=50 chunks=4 length=4 type="bytes" benchmarker="wrk"            -0.02 %             8.485765e-01
    http/simple.js c=500 chunks=0 length=1024 type="buffer" benchmarker="wrk"        4.72 %         *** 5.100754e-12
    http/simple.js c=500 chunks=0 length=1024 type="bytes" benchmarker="wrk"         7.10 %         *** 4.305934e-17
    http/simple.js c=500 chunks=0 length=102400 type="buffer" benchmarker="wrk"      4.67 %         *** 3.232866e-10
    http/simple.js c=500 chunks=0 length=102400 type="bytes" benchmarker="wrk"       1.03 %           * 1.166409e-02
    http/simple.js c=500 chunks=0 length=4 type="buffer" benchmarker="wrk"           5.62 %         *** 9.720317e-13
    http/simple.js c=500 chunks=0 length=4 type="bytes" benchmarker="wrk"            6.67 %         *** 1.426534e-16
    http/simple.js c=500 chunks=1 length=1024 type="buffer" benchmarker="wrk"        4.10 %         *** 4.870958e-12
    http/simple.js c=500 chunks=1 length=1024 type="bytes" benchmarker="wrk"         5.09 %         *** 2.455658e-15
    http/simple.js c=500 chunks=1 length=102400 type="buffer" benchmarker="wrk"      5.10 %         *** 6.170185e-15
    http/simple.js c=500 chunks=1 length=102400 type="bytes" benchmarker="wrk"       0.45 %           * 3.542224e-02
    http/simple.js c=500 chunks=1 length=4 type="buffer" benchmarker="wrk"           4.15 %         *** 4.757659e-14
    http/simple.js c=500 chunks=1 length=4 type="bytes" benchmarker="wrk"            4.85 %         *** 6.287268e-11
    http/simple.js c=500 chunks=4 length=1024 type="buffer" benchmarker="wrk"        5.41 %         *** 9.070252e-15
    http/simple.js c=500 chunks=4 length=1024 type="bytes" benchmarker="wrk"         3.75 %         *** 2.073711e-11
    http/simple.js c=500 chunks=4 length=102400 type="buffer" benchmarker="wrk"      4.87 %         *** 1.669781e-11
    http/simple.js c=500 chunks=4 length=102400 type="bytes" benchmarker="wrk"       1.49 %          ** 9.449434e-03
    http/simple.js c=500 chunks=4 length=4 type="buffer" benchmarker="wrk"           5.67 %         *** 1.159900e-18
    http/simple.js c=500 chunks=4 length=4 type="bytes" benchmarker="wrk"            3.75 %         *** 7.230132e-09
    

CI: https://ci.nodejs.org/job/node-test-pull-request/5493/
EDIT: The one CI failure on Linux is unrelated (timers) and there is an existing issue about that particular test.

@mscdex mscdex removed the wip Issues and PRs that are still a work in progress. label Dec 20, 2016
Member

@mcollina mcollina left a comment


Can you submit the Streams changes as a separate PR? I don't think it's the right time to merge those; we should do a more thorough overhaul of them when the next LTS comes out.
At the moment, we do not have the infrastructure to pull in or skip specific commits.

evanlucas pushed a commit that referenced this pull request Jan 3, 2017
PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
The new table-based lookups perform significantly better for the
common cases (checking latin1 characters).

PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
Since at least V8 5.4, using function.bind() is now fast enough to
use to avoid recompiling/reoptimizing the same anonymous functions.
These changes especially impact http servers.

PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
evanlucas pushed a commit that referenced this pull request Jan 3, 2017
This commit uses instanceof instead of Array.isArray() for faster
type checking and avoids calling Object.keys() when the headers are
stored as a 2D array instead of a plain object.

PR-URL: #6533
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Fedor Indutny <[email protected]>
Reviewed-By: Benjamin Gruenbaum <[email protected]>
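The checks described in that commit message can be sketched as below; the function name is made up for illustration, not the actual lib/_http_outgoing.js code.

```javascript
'use strict';

function countHeaders(headers) {
  if (headers instanceof Array) {
    // Headers stored as a 2D array, e.g.
    // [['Content-Type', 'text/plain'], ['Connection', 'close']]:
    // no Object.keys() allocation needed, just the length.
    return headers.length;
  }
  // Plain-object form still pays for the keys array.
  return Object.keys(headers).length;
}
```

Note that `instanceof Array` fails for arrays from another realm, which `Array.isArray()` handles; that is acceptable for internal core code where the values never cross realms, and it is why the swap is a perf win rather than a general recommendation.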
evanlucas added a commit that referenced this pull request Jan 3, 2017
Notable changes:

* buffer:
  - Improve performance of Buffer allocation by ~11% (Brian White) #10443
  - Improve performance of Buffer.from() by ~50% (Brian White) #10443
* events: Improve performance of EventEmitter.once() by ~27% (Brian White) #10445
* http: Improve performance of http server by ~7% (Brian White) #6533
evanlucas added a commit that referenced this pull request Jan 4, 2017
Notable changes:

* buffer:
  - Improve performance of Buffer allocation by ~11% (Brian White) #10443
  - Improve performance of Buffer.from() by ~50% (Brian White) #10443
* events: Improve performance of EventEmitter.once() by ~27% (Brian White) #10445
* fs: Allow passing Uint8Array to fs methods where Buffers are supported. (Anna Henningsen) #10382
* http: Improve performance of http server by ~7% (Brian White) #6533
* npm: Upgrade to v4.0.5 (Kat Marchán) #10330

PR-URL: #10589
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 // ... 255
];


Do you have any reference material on the choice of int vs. bool here? Just interested.

Contributor Author

Performance-wise? Not really. At the very least it's more compact. I wouldn't be surprised though if V8 internally treated booleans as SMIs with values of 0 or 1.
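A minimal sketch of the 256-entry lookup-table validation discussed in this thread; the names are illustrative and the table contents are only an approximation of the RFC 7230 token (tchar) set, not the PR's exact table. Codes > 255 fall outside the table, read `undefined`, and reject more slowly, which is the trade-off described above.

```javascript
'use strict';

// 1 = valid HTTP token character, 0 = invalid.
const validTokenChars = new Array(256).fill(0);
for (let i = 0; i < 256; i++) {
  // Letters, digits, and the tchar symbols are valid token characters.
  if (/[!#$%&'*+\-.^_`|~0-9A-Za-z]/.test(String.fromCharCode(i))) {
    validTokenChars[i] = 1;
  }
}

function isValidToken(str) {
  for (let i = 0; i < str.length; i++) {
    // charCodeAt > 255 indexes past the table, yielding undefined
    // (falsy), so non-latin1 input is rejected on the slow path.
    if (!validTokenChars[str.charCodeAt(i)]) return false;
  }
  return str.length > 0;
}
```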

imyller added a commit to imyller/meta-nodejs that referenced this pull request Mar 2, 2017
    Notable changes:

    * buffer:
      - Improve performance of Buffer allocation by ~11% (Brian White) nodejs/node#10443
      - Improve performance of Buffer.from() by ~50% (Brian White) nodejs/node#10443
    * events: Improve performance of EventEmitter.once() by ~27% (Brian White) nodejs/node#10445
    * fs: Allow passing Uint8Array to fs methods where Buffers are supported. (Anna Henningsen) nodejs/node#10382
    * http: Improve performance of http server by ~7% (Brian White) nodejs/node#6533
    * npm: Upgrade to v4.0.5 (Kat Marchán) nodejs/node#10330

    PR-URL: nodejs/node#10589

Signed-off-by: Ilkka Myller <[email protected]>
Labels
http Issues or PRs related to the http subsystem. performance Issues and PRs related to the performance of Node.js. stream Issues and PRs related to the stream subsystem.

9 participants