I did some initial profiling to get a rough sense of the compute performance. These are raw notes; it would be good to analyze them further and write them up or add them to the README.
### Test case
There's some data from an initial test case I worked on, but most of the data is from what I called test case #2. The test case starts with 1187 actions already committed to the log (or run through the data layer, for redux). Then it starts the app proper and, for 3 seconds, performs a randomly selected action every 10ms, including adding new todos with random text. This yields around 1382 actions. The test measures the time taken in the `compute` and `render` portions of the app, based on timings grabbed with `performance.now()`. All profiling is done on my relatively new MacBook Pro, but I'd love to figure out how to profile easily on a phone. There's a variation of the same test case that runs for 30 seconds instead, which yields around 3021 actions.
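For reference, this is roughly the shape of the harness (a sketch, not the actual test code; `pickRandomAction`, `store`, and `renderTodoApp` are stand-in names):

```js
// Rough sketch of the harness: fire a randomly selected action every 10ms
// for 3 seconds, timing the compute and render portions separately with
// performance.now().
const timings = { compute: 0, render: 0 };

function timed(bucket, fn) {
  const start = performance.now();
  fn();
  timings[bucket] += performance.now() - start;
}

const interval = setInterval(() => {
  // pickRandomAction() is a hypothetical helper returning e.g. an add-todo
  // (with random text), toggle, or remove action.
  const action = pickRandomAction();
  timed('compute', () => store.dispatch(action));
  timed('render', () => renderTodoApp(store.getState()));
}, 10);

setTimeout(() => {
  clearInterval(interval);
  console.log('accumulated ms:', timings);
}, 3000);
```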
For the test cases, I tried swapping a few different pieces in and out. These are mostly different optimizers, which improve the efficiency of performing computation, and renderers, which improve the efficiency of updating the DOM. There's also some initial data about a compaction strategy for bounding the size of the log.
#### redux as baseline
##### 3 seconds
This is the timeline for a single run with redux. Keep in mind there is noise and actual randomness in these tests and this is just a single run.
##### 30 seconds
On a 30-second run of the same test case, this is what we see. It looks like there are some DOM nodes leaking and unable to be GCed; I haven't figured out why yet. When I saw this initially, running on larger time scales would let these get GCed, but that wasn't the case when trying again now, so I'm not sure why they're still retained.
#### memosnap+raf
##### 3 seconds
This is the timeline for a single "memosnap+raf" run:
##### 30 seconds
The DOM node leak shows up here on the longer run as well, so that's definitely something to look into. You can see there's a lot more GC see-sawing than with redux, but I was surprised that the memory usage after a GC run isn't much higher.
#### memosnap+precompute
##### 3 seconds
This is the timeline for the "memosnap" optimizer and the "precompute" renderer:
##### 30 seconds
The DOM node leak is still there:
### Comparing performance of different strategies, and with redux
For test case #2, I did seven runs of a few different configurations and then graphed how they came out. I also added some instrumentation to redux to see how this compared against it as a baseline. I was a bit surprised by how the results came out, so I'll look over them again (which is why I'm not summarizing here), and I'd love more eyes on this. The graph showing the TL;DR and the raw dataset are here: https://docs.google.com/a/twitter.com/spreadsheets/d/1_j-exUs3XjqjXh4Xa4D7nDspH5vaRlj9_dEKmB7iGCM/pubhtml.
### Next steps
For memory usage, `performance.memory.usedJSHeapSize` wouldn't change during my tests (presumably it's allocating larger chunks and the test just didn't fill the buffer). But more confusingly, the number shown in the Timeline view didn't match the number I'd see when taking a heap snapshot in the Profiles view. So I need to learn how to read those better, and if anyone knows how and can help, that would be awesome.
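One thing to try next time is sampling the heap explicitly around the run (a sketch, not what the test currently does). The API is Chrome-only and non-standard, and I believe the values are bucketed and only refreshed periodically unless Chrome is launched with `--enable-precise-memory-info`, which might be why the number never moved during a short run:

```js
// Chrome-only, non-standard API: performance.memory exposes usedJSHeapSize,
// totalJSHeapSize, and jsHeapSizeLimit. Without --enable-precise-memory-info
// the values are quantized and updated infrequently, so short runs may show
// no change at all.
function sampleHeap(label) {
  if (window.performance && performance.memory) {
    console.log(label,
      'used:', performance.memory.usedJSHeapSize,
      'total:', performance.memory.totalJSHeapSize);
  }
}

sampleHeap('before run');
// ... run the test ...
sampleHeap('after run');
```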
I also don't know what's going on with Chrome and what work it's doing that's preventing everything from running at 60fps. It's that magic outlined box that, from what I remember, means Chrome is doing work but isn't able to introspect and tell you exactly what kind of work it is.
It'd be awesome to get feedback on how to improve the test case here. It succeeds in throwing work at the system, but it's not entirely realistic in its rate of updates and frequency of DOM changes.
It'd be interesting to see if using `setState` instead of `forceUpdate` in `PrecomputeReactRenderer` would allow for invalidating a component but then still short-circuiting rendering and reconciliation further down (versus forcing the entire render with `forceUpdate` now). This might work with loggit exposing a `shouldComponentUpdate` method and components implementing the hook and calling it. That might improve rendering performance further.
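As a strawman for what the component side could look like (the `loggit.shouldComponentUpdate` hook here is hypothetical, it doesn't exist yet; this just shows the shape):

```js
// Hypothetical sketch: loggit doesn't expose this hook today.
class TodoItem extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // Ask a (hypothetical) loggit hook whether the computed data backing this
    // component actually changed. Returning false lets React skip rendering
    // and reconciliation below this point, which forceUpdate can't do.
    return loggit.shouldComponentUpdate(this, nextProps, nextState);
  }

  render() {
    return <li className={this.props.todo.done ? 'done' : ''}>{this.props.todo.text}</li>;
  }
}
```

The renderer would then trigger invalidation through `setState`, so `shouldComponentUpdate` gets consulted, rather than `forceUpdate`, which bypasses it on that component.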
Swap in immutable data structures.
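Related to the previous point: with immutable data, those `shouldComponentUpdate` checks can become plain reference comparisons instead of deep equality walks. A tiny illustration with Immutable.js (just an example library, nothing is wired into loggit yet):

```js
// Illustration with Immutable.js: every update returns a new reference, so
// "did this subtree change?" is a cheap === check.
const { List, Map } = require('immutable');

const before = List([Map({ id: 1, text: 'buy milk', done: false })]);
const after = before.setIn([0, 'done'], true);

console.log(before === after);               // false -> state changed, re-render
console.log(before.get(0) === after.get(0)); // false -> this todo changed
```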