
Bad Performance with large number of watchers #3589

Closed
thomassuckow opened this issue Jun 15, 2018 · 22 comments
Labels
🚧 in-triage Issue currently being triaged

Comments

@thomassuckow

Intended outcome:
Having a large number of queries should not block the JavaScript runtime.

Actual outcome:
Processing a GraphQL response takes > 300ms.

How to reproduce the issue:

  • Have a large number (~26) of React components mounted with the same graphql query (a minimal sketch follows).
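
For illustration, a minimal reproduction along these lines might look like the following (the query, field names, and component names are made up, not taken from the actual app):

```jsx
// Hypothetical repro: many components sharing the same query via the
// react-apollo 2.x graphql() HOC. Query and names are illustrative only.
import React from 'react';
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

const ITEMS_QUERY = gql`
  query Items {
    items {
      id
      name
    }
  }
`;

const Item = graphql(ITEMS_QUERY)(({ data }) =>
  data.loading ? <span>loading…</span> : <span>{data.items.length}</span>
);

// Each mounted <Item /> registers another watcher on the same query,
// so every cache broadcast has to notify all of them.
const Repro = () => (
  <div>
    {Array.from({ length: 26 }, (_, i) => <Item key={i} />)}
  </div>
);

export default Repro;
```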

Versions
System:
OS: macOS High Sierra 10.13.5
Binaries:
Node: 10.0.0 - /usr/local/bin/node
Yarn: 1.6.0 - /usr/local/bin/yarn
npm: 5.6.0 - /usr/local/bin/npm
Browsers:
Chrome: 67.0.3396.87
Firefox: 59.0.2
Safari: 11.1.1
npmPackages:
apollo-cache-inmemory: ^1.1.12 => 1.2.1
apollo-client: ^2.2.8 => 2.3.1
react-apollo: ^2.1.3 => 2.1.4

apollo-client/react-apollo perform a large amount of work when there are multiple listeners to a query.

screen shot 2018-06-14 at 17 19 17

This has significant performance implications: the page appears "laggy" when interacted with. This cost could be hidden behind the loading time, since the actual page update is cheap. Normally I would batch the updates in a requestAnimationFrame (and in fact do that in a custom apollo-link). However, there is no well-documented place to hook into where this work is happening. It appears the cache.write method is being invoked each time, and that could be made asynchronous, but there is no documentation about the assumptions a cache provider can rely on, or whether hacking a custom cache in front of the inmemory cache would be safe.
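
For reference, a rough sketch of the kind of batching link mentioned above might look like this (illustrative only, not the actual link from the app; it assumes the apollo-link 1.x ApolloLink/Observable APIs):

```js
// Rough sketch of a requestAnimationFrame batching link. Results are handed
// to Apollo on the next animation frame instead of synchronously, so the
// cache write and broadcast happen just before paint; multiple responses
// arriving in the same frame all get processed together at that point.
import { ApolloLink, Observable } from 'apollo-link';

const rafBatchLink = new ApolloLink((operation, forward) =>
  new Observable(observer => {
    const subscription = forward(operation).subscribe({
      next: result => {
        // Defer delivery (and therefore the cache write + broadcast)
        // out of the network callback and into the next animation frame.
        requestAnimationFrame(() => observer.next(result));
      },
      error: error => observer.error(error),
      complete: () => requestAnimationFrame(() => observer.complete()),
    });
    return () => subscription.unsubscribe();
  })
);

export default rafBatchLink;
```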

screen shot 2018-06-14 at 17 22 29

@thomassuckow
Author

Here is a contrived demo

https://codesandbox.io/s/ozpqoyw46

I had to increase the number of components with the same query to 100. My guess is that performance is also a function of how many queries you already have loaded, and since this example doesn't have many, it takes more components to make the problem show up.

screen shot 2018-06-15 at 07 22 03

@benjamn
Member

benjamn commented Jun 15, 2018

Would you mind trying out a beta version of apollo-cache-inmemory?

npm install [email protected]

See here for more details: #3394

@thomassuckow
Author

I am seeing a noticeable 2x improvement in performance. It is much more tolerable.

I am seeing an unusual warning though:

../node_modules/optimism/lib/local.js
15:14-32 Critical dependency: the request of a dependency is an expression
warnings @ index.js
onmessage @ socket.js:41
EventTarget.dispatchEvent @ sockjs.js:170
(anonymous) @ sockjs.js:883
SockJS._transportMessage @ sockjs.js:881
EventEmitter.emit @ sockjs.js:86
WebSocketTransport.ws.onmessage @ sockjs.js:2957
../node_modules/optimism/lib/local.js
Module not found: Error: Can't resolve 'fibers' in 'node_modules/optimism/lib'
warnings @ index.js
onmessage @ socket.js:41
EventTarget.dispatchEvent @ sockjs.js:170
(anonymous) @ sockjs.js:883
SockJS._transportMessage @ sockjs.js:881
EventEmitter.emit @ sockjs.js:86
WebSocketTransport.ws.onmessage @ sockjs.js:2957

My guess is that is expected... given the hack in the optimism library that tries to load fibers.
I think there should probably still be a way to break up the execution of the broadcast writes, but maybe that is what you are trying to achieve with the fibers?
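
(Side note: if the warning itself is bothersome, webpack's IgnorePlugin can usually suppress an optional dynamic require like this. This is just a sketch against a webpack 3 config, and it assumes the fibers require in optimism is guarded by a try/catch, as optional dependencies usually are.)

```js
// Sketch only: tell webpack to skip bundling the optional 'fibers' require.
const webpack = require('webpack');

module.exports = {
  // ...rest of the existing webpack 3 configuration...
  plugins: [
    new webpack.IgnorePlugin(/^fibers$/),
  ],
};
```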

For the sake of completeness, here is the performance information from my trials:

[email protected], development build

Total: 410ms
Broadcast Writes: 6-10ms
Broadcast occurrences: ~42

screen shot 2018-06-18 at 07 51 48

screen shot 2018-06-18 at 07 53 23

[email protected], production build

Total: 360ms
Broadcast Writes: 6-8ms
Broadcast occurrences: ~43

screen shot 2018-06-18 at 08 05 25

Note: the flamegraph is "sampled", so some methods appear more than once despite only being invoked once (e.g. _processCallbacks).

[email protected], development build

Total: 837ms
Broadcast Writes: 16-22ms
Broadcast occurrences: ~43

screen shot 2018-06-18 at 08 25 40

screen shot 2018-06-18 at 08 26 40

[email protected], production build

Total: 624ms
Broadcast Writes: 13-15ms
Broadcast occurrences: ~41

screen shot 2018-06-18 at 08 20 37

@benjamn
Member

benjamn commented Jun 18, 2018

@thomassuckow What bundling tool is giving you those warnings?

@thomassuckow
Author

Webpack 3.12.0

@benjamn
Member

benjamn commented Jun 20, 2018

While you can safely ignore those warnings, I think I have a plan to silence them.

@goldo

goldo commented Jun 21, 2018

Thanks a lot, looking forward to this release! We have the same problem!

@thomassuckow
Author

@benjamn Would simply installing the fibers library alter the behaviour of the library, or is that something that should only be enabled when React has fibers? And if so, why is it not just a dependency?

@goldo

goldo commented Jun 26, 2018

Any news on this? :) I also have the warnings.

@goldo

goldo commented Jun 28, 2018

We tried this PR in our app in production, and we got some browser crashes :( We rolled back to 1.1.9.

@nevir
Contributor

nevir commented Jul 6, 2018

@thomassuckow I'd be curious to see how https://github.com/convoyinc/apollo-cache-hermes stacks up for your case (assuming it supports all the functionality you rely on)

@thomassuckow
Author

@nevir When I tried apollo-cache-hermes before posting this, it just crashed in a ball of flames.

In my testing I believe I only had basic queries, though we do use a custom apollo-link without subscription support.

@nevir
Contributor

nevir commented Jul 9, 2018 via email

@thomassuckow
Author

thomassuckow commented Jul 9, 2018

https://codesandbox.io/s/k0mj92nx9o has my sandbox attempt with hermes that works.

@nevir
Contributor

nevir commented Jul 9, 2018

Interesting thing about this example: apollo-client is triggering a cache write for every observer (even though only a single request is made)

@thomassuckow
Author

0.5.28-alpha was the hermes version I had tried before and it didn't work. The current 0.6.2 version seems to work well with the exception of convoyinc/apollo-cache-hermes#208.

This is much less scientific than before, as the data has changed (so the GraphQL queries are different) and hermes appears to keep my animation-frame hack from behaving ideally.

Still, it appears to be faster, with React being the bigger bottleneck overall in development mode, though there is still room for improvement:

[email protected], development build
Total time: 190ms

screen shot 2018-07-09 at 09 53 21

[email protected], production build
Total time: 80ms

screen shot 2018-07-09 at 10 04 47

I'd probably switch to hermes today if not for the devtools not working. That would be the death of my other developers.

@nevir
Contributor

nevir commented Jul 9, 2018

0.5.28-alpha was the hermes version I had tried before and it didn't work. The current 0.6.2 version seems to work well with the exception of convoyinc/apollo-cache-hermes#208.

Awesome! You had likely encountered convoyinc/apollo-cache-hermes#242 in the previous attempt then


Re: dev tools; that's unfortunately probably going to be a fair bit of work :(

The "quick 'n dirty" approach is likely to have Hermes emit events that make it appear like apollo-cache-inmemory to the dev tools. Long term is probably to spec out some cache-agnostic way of describing cache state w/o exposing internals


convoyinc/apollo-cache-hermes#333 is probably another one that blocks many folk (workin' on it!)

@thomassuckow
Author

thomassuckow commented Jul 9, 2018

Streamlining cache updating when there is a single request and many observers would help apollo regardless of the cache mechanism and make how long processing a response takes much more predictable.
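
Conceptually, the per-response flow could look something like this (an idea sketch only, not apollo-client's actual internals):

```js
// Idea sketch only: write the response into the cache once, then satisfy
// every observer from a single cache read, instead of performing one cache
// write per observer.
function deliverResult(cache, observers, query, variables, data) {
  cache.writeQuery({ query, variables, data }); // one write per response
  const fromCache = cache.readQuery({ query, variables }); // shared read
  observers.forEach(observer => observer.next({ data: fromCache }));
}
```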

I'm actually kinda surprised the devtools reach so deep into the cache implementation. One of these days I need to build the devtools from scratch. They are buggy as hell anyway.

@nevir
Contributor

nevir commented Jul 9, 2018

Streamlining cache updating when there is a single request and many observers would help apollo regardless of the cache mechanism and make how long processing a response takes much more predictable.

Yeah, this seems like a worthy fix regardless; and will likely give an order of magnitude improvement for this particular case (for all caches)

I'm actually kinda surprised the devtools reach so deep into the cache implementation. One of these days I need to build the devtools from scratch. They are buggy as hell anyway.

Last time I looked at them, they were directly consuming the serialized form of inmemory's state (which is particularly problematic with hermes due to cyclic object references, heh)

@thomassuckow
Author

Not knowing a lot about making a custom apollo-cache, one would think it would simply contain the queries and variables of all current queries that shouldn't be evicted (i.e. are in use on the page), plus a graph representing data loaded from the server, so it can avoid making requests for data already retrieved by a different query. The API the devtools would use to traverse that graph might be interesting, but if you assume a user will traverse it one level at a time, you could just expose an API that does exactly that and be oblivious to the cycles issue.
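
Something along these lines, purely as a sketch (the entity shape and the __ref marker are made up here, not inmemory or hermes internals):

```js
// Sketch of a level-at-a-time inspection API. Because the devtools only ever
// ask for one node's scalar fields plus the ids of its outgoing edges, cycles
// in the underlying graph never have to be serialized.
function inspectNode(entities, id) {
  const node = entities[id];
  if (!node) return undefined;

  const scalars = {};
  const edges = {};
  Object.keys(node).forEach(field => {
    const value = node[field];
    if (value && value.__ref) {
      // The devtools would follow up with inspectNode(entities, value.__ref)
      // only when the user expands this field.
      edges[field] = value.__ref;
    } else {
      scalars[field] = value;
    }
  });

  return { id, scalars, edges };
}
```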

@nevir
Contributor

nevir commented Jul 9, 2018

Yeah, it'd be ideal to have a traversal protocol of some form! That'd be a big win for people that cache a lot of data w/ inmemory, as well

I think the current approach was made for pragmatic reasons (also, back when the dev tools were originally built, there was no notion of pluggable caches). Needs some love, 'tis all :)

@hwillson hwillson added the 🚧 in-triage Issue currently being triaged label Jul 9, 2019
@hwillson
Member

hwillson commented Jul 9, 2019

I believe everything directly actionable has been addressed in this thread, so I'll close this off. Let us know otherwise - thanks!

@hwillson hwillson closed this as completed Jul 9, 2019
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 16, 2023