
runtime: provide a mechanism for a global kill switch with constant memory overhead #11872

Closed
jmarantz opened this issue Jul 2, 2020 · 10 comments
Labels: question (Questions that are neither investigations, bugs, nor enhancements), stale (stalebot believes this issue/PR has not been touched recently)

Comments

jmarantz (Contributor) commented Jul 2, 2020

In #11252 there's a new RuntimeDouble config in api/envoy/config/cluster/v3/cluster.proto, and even without any other C++ changes this results in 256 extra bytes consumed per cluster. For 10k clusters that's about 2.56 MB, which isn't going to make or break us, but it seems like a high cost to pay for a switch.

I'm wondering if this usage matches intent. Should that config be on a less-replicated structure? What's the best practice?
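
For context, a RuntimeDouble pairs a default value with a runtime key that can override it; the lookup pattern is roughly the following (an illustrative sketch with made-up class names, not Envoy's actual interfaces):

```cpp
// Hypothetical sketch (not Envoy's actual code) of how a RuntimeDouble-style
// knob is usually resolved: the config carries a default plus a runtime key,
// and the effective value is looked up against a runtime snapshot.
#include <optional>
#include <string>
#include <unordered_map>

// Stand-in for envoy.config.core.v3.RuntimeDouble.
struct RuntimeDoubleConfig {
  double default_value = 0.0;
  std::string runtime_key;  // e.g. "my_feature.some_cluster.fraction" (made up)
};

// Stand-in for a runtime snapshot: key/value overrides held by the process.
class RuntimeSnapshot {
 public:
  std::optional<double> getDouble(const std::string& key) const {
    auto it = overrides_.find(key);
    if (it == overrides_.end()) return std::nullopt;
    return it->second;
  }
  void set(std::string key, double value) { overrides_[std::move(key)] = value; }

 private:
  std::unordered_map<std::string, double> overrides_;
};

// The effective value is the runtime override if present, else the default.
double resolve(const RuntimeDoubleConfig& cfg, const RuntimeSnapshot& snapshot) {
  return snapshot.getDouble(cfg.runtime_key).value_or(cfg.default_value);
}
```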

@mattklein123
@gkleiman

jmarantz added the question label Jul 2, 2020
alyssawilk (Contributor) commented:
My solution would be to replace the per-thread-snapshot runtime with global google flags, but some people around here aren't fans of globals (cough@jmarantz-and-@mattklein123cough).
I find per-cluster runtime overrides kinda overkill, and I'll comment that on the PR in question.
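
For concreteness, the kind of thing I mean by a global flag looks roughly like this (a sketch using Abseil flags; the flag name is made up):

```cpp
// Rough sketch of the "global flag" approach: one process-wide kill switch,
// O(1) memory regardless of cluster count. The flag name is illustrative only.
#include "absl/flags/flag.h"

ABSL_FLAG(bool, enable_new_cluster_feature, true,
          "Process-wide kill switch for the new behavior.");

// Call sites check the global flag instead of a per-cluster runtime override.
bool newClusterFeatureEnabled() {
  return absl::GetFlag(FLAGS_enable_new_cluster_feature);
}
```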

alyssawilk removed their assignment Jul 6, 2020
jmarantz (Contributor, Author) commented Jul 6, 2020

While I'm not a fan of statics as a way to implement per-process state, I'm not against per-process state itself; it's just a question of how it gets injected, initialized, and cleaned up.

And there is already a Runtime singleton in which to put such state, so it seems plausible to have one of the bits it controls be per-process.
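
Very roughly, something like this (an illustrative sketch, not the actual Runtime interfaces):

```cpp
// Loose sketch of per-process state owned by a runtime-style singleton, so the
// bit is injected and cleaned up with the singleton rather than living in a
// bare static. Names are hypothetical, not Envoy's Runtime classes.
#include <atomic>
#include <memory>

class ProcessWideFlags {
 public:
  void setKillSwitch(bool enabled) {
    kill_switch_.store(enabled, std::memory_order_relaxed);
  }
  bool killSwitchEnabled() const {
    return kill_switch_.load(std::memory_order_relaxed);
  }

 private:
  std::atomic<bool> kill_switch_{false};
};

// The runtime loader owns the process-wide flags; tests can construct their
// own instance instead of fighting a global.
class RuntimeLoader {
 public:
  RuntimeLoader() : flags_(std::make_unique<ProcessWideFlags>()) {}
  ProcessWideFlags& processWideFlags() { return *flags_; }

 private:
  std::unique_ptr<ProcessWideFlags> flags_;
};
```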

alyssawilk (Contributor) commented:
Either way it'd be a lot of memory if you want a per-cluster option. We'd save a factor of (#threads), but we'd still be burning O(#clusters), which is significantly larger. I think if you don't want largish memory in O(#clusters), you just don't do per-cluster overrides that allocate string memory :-P
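
Rough numbers on the two factors, assuming the 256 bytes from above, say 32 worker threads, and the 10k clusters mentioned earlier: a per-thread copy of one such value is on the order of 32 × 256 B ≈ 8 KB, while a per-cluster copy is 10,000 × 256 B ≈ 2.5 MB.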

jmarantz (Contributor, Author) commented Jul 6, 2020

Agreed. So should we suggest on the PR just to put the runtime override on a different structure? Which structure?

alyssawilk (Contributor) commented:
I'd argue that CDS is sufficiently reloadable that you can just have a fixed config option. If they really want runtime, hopefully Harvey, as API guru, can suggest a spot.

mattklein123 (Member) commented:
Per @jmarantz, I have no objection to global state, but I still think we should use runtime for it. For the config knob in question it does seem like it should be per cluster, though I'm confused why it takes 256 bytes vs. 8 bytes for a unique_ptr.
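
To make the 8-byte version concrete, a hypothetical layout would be something like the following, where clusters that never set the knob pay only for a null pointer:

```cpp
// Hypothetical sketch of the unique_ptr alternative: the optional knob is only
// allocated when the field is actually set, so the common case (knob unset)
// costs one pointer per cluster. Names are illustrative.
#include <memory>
#include <string>

struct RuntimeDoubleSetting {
  double default_value = 0.0;
  std::string runtime_key;
};

struct ClusterInfo {
  // ... other per-cluster state ...
  std::unique_ptr<RuntimeDoubleSetting> optional_knob;  // nullptr when unset
};
```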

jmarantz (Contributor, Author) commented:
The 256-byte overhead appears to be due to protobuf data structures, which we keep alive in the running system rather than pulling into STL or whatever. This observation was based on an experiment by @gkleiman where he removed all the C++ changes in #11252 and left only the protobuf changes.

I am guessing it would be an xDS perf win to pull the data into C++ structures but that's quite a big whale to kick across the beach :)
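
Roughly this pattern, copying what we need out of the proto at config-load time and dropping the message, rather than keeping it alive (illustrative sketch only, with stand-in names):

```cpp
// Illustrative sketch: translate the proto config into a compact C++ struct at
// cluster-load time and discard the proto, instead of retaining the full
// protobuf message (and its internal bookkeeping) for the life of the cluster.
#include <string>

// Stand-in for the generated proto accessors.
struct ClusterProto {
  bool has_knob() const { return has_knob_; }
  double knob_default_value() const { return knob_default_value_; }
  const std::string& knob_runtime_key() const { return knob_runtime_key_; }

  bool has_knob_ = false;
  double knob_default_value_ = 0.0;
  std::string knob_runtime_key_;
};

// Compact per-cluster representation: only the bytes we actually need.
struct ClusterKnob {
  double default_value = 0.0;
  std::string runtime_key;
};

ClusterKnob translate(const ClusterProto& proto) {
  ClusterKnob knob;
  if (proto.has_knob()) {
    knob.default_value = proto.knob_default_value();
    knob.runtime_key = proto.knob_runtime_key();
  }
  return knob;  // the proto can be discarded after config load
}
```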

Regarding the global-kill-switch: @gkleiman indicated there was a desire at Lyft to have this particular knob actually be per-cluster.

mattklein123 (Member) commented:
OK, yes, let's please move the other PR forward and leave some TODOs to revisit this. I think per-cluster makes sense there.

stale bot commented Aug 29, 2020

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

stale bot added the stale label Aug 29, 2020
stale bot commented Sep 6, 2020

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

stale bot closed this as completed Sep 6, 2020