
Use a slice instead of a persistent list for temporary accumulation of lists #1660

Merged
merged 1 commit into master from acc-is-slice on Jul 11, 2016

Conversation

@bboreham (Collaborator) commented Jul 11, 2016

Reduce garbage created in Merge() and Add() from O(n) to O(1) by using a slice with enough capacity to hold the temporary head-portion of the list accumulated while walking the list to insert element(s).

(EDIT: except in the case where all merged data is newer, Merge() discards at least one of the input lists, so in that sense it still creates O(n) garbage. I've reduced the internally-created garbage.)
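For readers unfamiliar with the pattern, here is a minimal sketch of the idea in Go. It is not the actual Scope code: the node type and insertSorted function are illustrative, and Scope's real lists carry metric samples rather than ints.

```go
package main

// node is a cell of an immutable (persistent) singly-linked list:
// existing nodes are shared between versions, so an insert must
// rebuild the head-portion up to the insertion point.
type node struct {
	value int
	next  *node
}

// insertSorted inserts v into an ascending list, accumulating the
// walked head-portion in a slice. The slice is a single allocation
// (amortised O(1) garbage) instead of one garbage cons cell per
// element walked.
func insertSorted(head *node, v int) *node {
	acc := make([]*node, 0, 16) // capacity hint: enough for typical head-portions
	cur := head
	for cur != nil && cur.value < v {
		acc = append(acc, cur)
		cur = cur.next
	}
	// Splice the new element in front of the untouched tail.
	result := &node{value: v, next: cur}
	// Replay the slice backwards to rebuild the head-portion. These
	// new nodes are inherent to a persistent list and are still built.
	for i := len(acc) - 1; i >= 0; i-- {
		result = &node{value: acc[i].value, next: result}
	}
	return result
}
```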

Use a slice instead of a persistent list for temporary accumulation of lists

Avoids creating O(n) garbage by using a slice with enough capacity to
hold the temporary head-portion of the list accumulated while walking
the list to insert element(s).
@tomwilkie (Contributor) commented:
Nice! I can't see any problems with this.

@2opremio (Contributor) commented Jul 11, 2016

Contributes to #1010

Thanks @bboreham!

I have measured the impact this would have when using Scope to monitor our service (6 probes) but, interestingly, it doesn't look very high. Here are the results of an -alloc_objects -focus 'Metric' heap profile of the probe and of the app:

[Image: probe_alloc_objects_metric]

[Image: app_alloc_objects_metric]

In what scenario did this make a considerable difference? Do you happen to have allocation profiles?

@2opremio (Contributor) commented Jul 11, 2016

I've just realized that we are running Scope on the service with --probe.processes=false, which probably explains the difference. I will try again without this.

(No wonder I was surprised by the low CPU consumption).

If I get a similar profile, it will tell us that most of the allocations come from the metrics of the process topology.

@2opremio (Contributor) commented:
After re-enabling the process topology and connecting the browser, I get closer behavior, but not as close as I expected:

Here's the App's -alloc_objects heap profile (with 4 connected probes in the dev environment):

[Image: app_alloc_objects]

And here's the same profile focusing on revCons (-focus 'revCons'):

[Image: app_alloc_objects_revcons]

So, this would reduce object allocation by ~5%.

Anyway, the change LGTM. @bboreham mentioned offline that this change was made after profiling the object allocation of https://github.com/weaveworks/weavedemo; I will take a look at that next.

@bboreham (Collaborator, Author) commented:
The expected reduction is about half the allocations seen through revCons. We save the accumulator list, but still have to build a new list.
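For contrast, here is a sketch of the pattern the PR replaces, reusing the illustrative node type from the sketch above (again hypothetical names, not the actual Scope code). The reversed accumulator plays the role of revCons: it allocates one cons cell per element walked, and the rebuild then allocates each head node a second time, which is why removing the accumulator saves roughly half of these allocations while the rebuilt head remains.

```go
// insertSortedOld shows the replaced pattern (illustrative only).
// Each element walked allocates an accumulator cell, and the rebuild
// allocates each head node again: roughly 2k allocations for an
// insert at depth k, versus about k + 1 with the slice accumulator.
func insertSortedOld(head *node, v int) *node {
	var acc *node // reversed accumulator: one garbage cell per element walked
	cur := head
	for cur != nil && cur.value < v {
		acc = &node{value: cur.value, next: acc}
		cur = cur.next
	}
	result := &node{value: v, next: cur}
	// Replay the reversed accumulator onto the result (the "revCons"
	// step), rebuilding the head-portion a second time.
	for acc != nil {
		result = &node{value: acc.value, next: result}
		acc = acc.next
	}
	return result
}
```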

@2opremio 2opremio merged commit f0edc26 into master Jul 11, 2016
@2opremio 2opremio deleted the acc-is-slice branch July 11, 2016 16:55