Adding priority levels to Envoy endpoints (not yet used for LB) #2088
Conversation
Signed-off-by: Alyssa Wilk <[email protected]>
@htuch @mattklein123 can you take a look? If the basic approach is good I'll fix up tests and ping back.
Yes, I will look this morning. Sorry for the delay.
@alyssawilk I didn't review in detail but did a quick skim. At an interface level this makes sense to me so I would say let's complete this PR. I think it's good to split the LB changes into another PR if that is possible since this is going to be complicated to do a full review on.
source/common/upstream/eds.cc
Outdated
@@ -42,7 +42,7 @@ EdsClusterImpl::EdsClusterImpl(const envoy::api::v2::Cluster& cluster, Runtime::
 void EdsClusterImpl::startPreInit() { subscription_->start({cluster_name_}, *this); }

 void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
-  std::vector<HostSharedPtr> new_hosts;
+  std::vector<std::vector<HostSharedPtr>> new_hosts(4); // Large enough to avoid most resizes.
Fixed by changing around the memory model (which I needed to do in case there were priority levels with no updates)
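For illustration, a minimal sketch of that reworked model (hypothetical helper and names, not the actual patch): hosts are collected into one slot per priority, and a slot left null marks a priority level that received no update, so it can be skipped when applying the config.

#include <cstdint>
#include <memory>
#include <vector>

// One slot per priority level; a null slot means that priority received no
// update in this config push and can be skipped later.
std::vector<std::unique_ptr<std::vector<HostSharedPtr>>> new_hosts_per_priority;

void addNewHost(uint32_t priority, HostSharedPtr host) {
  if (new_hosts_per_priority.size() <= priority) {
    new_hosts_per_priority.resize(priority + 1); // New slots default to null.
  }
  if (new_hosts_per_priority[priority] == nullptr) {
    new_hosts_per_priority[priority] = std::make_unique<std::vector<HostSharedPtr>>();
  }
  new_hosts_per_priority[priority]->push_back(std::move(host));
}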
priority_set_.addMemberUpdateCb([this](uint32_t priority,
                                       const std::vector<HostSharedPtr>& hosts_added,
                                       const std::vector<HostSharedPtr>& hosts_removed) {
  // FIXME(alyssawilk) talk to Matt about how to change stats in prod.
IMO it's probably better to just keep the current stats, and also have per-priority stats? (Perhaps only per-priority if there are > 1 priorities).
SG. I'll leave these as-is, and we can add per-priority when/if folks want 'em.
We may be able to get away with just having priority in LB reports, and I've added a TODO for that.
Signed-off-by: Alyssa Wilk <[email protected]>
Force-pushed from d121445 to 8ae3131.
Signed-off-by: Alyssa Wilk <[email protected]>
Some early feedback.
include/envoy/upstream/upstream.h
Outdated
@@ -119,19 +119,22 @@ class Host : virtual public HostDescription {
 typedef std::shared_ptr<const Host> HostConstSharedPtr;

 /**
- * Base host set interface. This is used both for clusters, as well as per thread/worker host sets
- * used during routing/forwarding.
+ * Base host set interface. This contains all of the endpoints for a given LocalityLbEndpoints
Nit: the war on two spaces continues...
include/envoy/upstream/upstream.h
Outdated
virtual void updateHosts(
    std::shared_ptr<const std::vector<HostSharedPtr>> hosts,
    std::shared_ptr<const std::vector<HostSharedPtr>> healthy_hosts,
    std::shared_ptr<const std::vector<std::vector<HostSharedPtr>>> hosts_per_locality,
I think (?) some typedefs might make this more readable.
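For illustration only, typedefs along these lines (hypothetical names; the fourth parameter is an assumption, since the quote above is truncated) would tighten that signature:

// Hypothetical typedefs; the names are illustrative, not from the patch.
typedef std::vector<HostSharedPtr> HostVector;
typedef std::shared_ptr<const HostVector> HostVectorConstSharedPtr;
typedef std::vector<HostVector> HostListsPerLocality;
typedef std::shared_ptr<const HostListsPerLocality> HostListsConstSharedPtr;

// The declaration above could then read:
virtual void updateHosts(HostVectorConstSharedPtr hosts,
                         HostVectorConstSharedPtr healthy_hosts,
                         HostListsConstSharedPtr hosts_per_locality,
                         HostListsConstSharedPtr healthy_hosts_per_locality) PURE;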
 *
 * @param hosts supplies the (usually new) list of hosts in the host set.
 * @param healthy hosts supplies the subset of hosts which are healthy.
 * @param hosts_per_locality supplies the hosts subdivided by locality.
At a high level, without diving into implementation, it seems redundant to track and supply both (healthy) hosts and (healthy) hosts per locality; one of these (presumably the latter) should be sufficient. I know this is an extant thing, just wanted to start a thread to discuss this with you and @mattklein123.
Agree, but as it turns out we track locality inconsistently (I think the rule is only if there are multiple localities and one is local and it's configured?) so it's quite non-trivial to combine the two.
I wonder if the better move might be to always just have the "locality style" group and if we don't care to pick by locality it's just in one big group. I know the healthy host is used for some things (mainly size for determining if it's a global panic, but size could be latched) but I can't remember if the full host list is ever used when splitting up by locality.
Previously this was hidden inside the implementation and is split out like this for perf reasons (basically compute once and then use everywhere). @alyssawilk do we need to expose this at the interface layer? I agree with @htuch that at the interface level it's not great.
It's that, or doing a bunch of casts, since the generic priority set impl is handing out containers of HostSets not HostSet Impls. And because they're complex types (it's super handy to iterate over the vector) I believe we can't just override.
If we prefer casts in upstream_impl.cc I'm happy to do that.
I haven't thought through all the details but if you think this is the way to go it's fine with me. I can take a more complete look through this tomorrow if that would help. Sorry for the delay.
include/envoy/upstream/upstream.h
Outdated
/**
 * @return the host sets for this priority. If no such host set exists, it will be created.
 */
virtual HostSet& getHostSet(uint32_t priority) PURE;
Do we ever remove host sets at a given priority?
Nope. I thought that was more of an implementation detail though, so that's commented in upstream_impl.h.
 * Install a callback that will be invoked when any of the HostSets in the PrioritySet changes.
 * This includes when a new HostSet is created.
 *
 * @param callback supplies the callback to invoke.
Nit: document @return
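e.g., a documented sketch, assuming the codebase's usual Common::CallbackHandle pattern for removable callbacks:

/**
 * Install a callback that will be invoked when any of the HostSets in the
 * PrioritySet changes. This includes when a new HostSet is created.
 *
 * @param callback supplies the callback to invoke.
 * @return a handle which can be used to remove the callback; the handle is
 *         only valid for the lifetime of the PrioritySet.
 */
virtual Common::CallbackHandle* addMemberUpdateCb(MemberUpdateCb callback) const PURE;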
include/envoy/upstream/upstream.h
Outdated
 * Returns the host sets for this priority set, ordered by priority.
 * The first element in the vector is the host set for priority 0, and so on.
 *
 * @return the host sets for this priority set.
Nit: include return type in @return.
Done.
We're fairly inconsistent on this point, and I think it makes it a lot uglier, but shrug.
source/common/upstream/eds.cc
Outdated
@@ -59,20 +60,42 @@ void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
        cluster_load_assignment.cluster_name()));
  }
  for (const auto& locality_lb_endpoint : cluster_load_assignment.endpoints()) {
    uint32_t priority = locality_lb_endpoint.priority().value();
Nit: const
source/common/upstream/eds.cc
Outdated
@@ -59,20 +60,42 @@ void EdsClusterImpl::onConfigUpdate(const ResourceVector& resources) {
        cluster_load_assignment.cluster_name()));
  }
  for (const auto& locality_lb_endpoint : cluster_load_assignment.endpoints()) {
    uint32_t priority = locality_lb_endpoint.priority().value();
    if (new_hosts.size() <= priority) {
      new_hosts.resize(priority + 1);
Would it make sense to make this structure a std::map, i.e. ordered, to support sparse priority use? I'm thinking not, since probably folks will use dense priorities in practice, but throwing it out there.
Yeah, as commented in the EDS proto, you can't skip priorities. We don't have config validation, so I just make sure the code works regardless, but reorganizing the data structures to optimize for skipped priorities seems silly.
rq_success += host->stats().rq_success_.latch();
rq_error += host->stats().rq_error_.latch();
rq_active += host->stats().rq_active_.value();
for (auto& host_set : cluster.prioritySet().hostSetsPerPriority()) {
I think we should be merging across all priorities for a given locality here.
Really? I thought we should report stats per-priority (and I have a TODO for that somewhere).
If you prefer merging, comment and I'll remove the TODO and merge.
As discussed IRL, it would provide greatest flexibility to do it per-priority as you plan. We should do something in the interim to avoid surprises here, since the code as implemented is just going to provide multiple reports when priorities are used for the same locality, with no way to distinguish them (other than implied ordering). I'd suggest merging for now and adding priorities later.
We could do that. Alternately, if you don't object, I can just report load for priority group 0 only, with a TODO to add the stats reporting before I fix the load balancers.
Without the load balancers being priority-aware it's impossible for other priority levels to receive load, and that's a faster workaround until we do the API change and real load reporting (which is trivial).
Yep, priority 0 only and a TODO is fine for this PR, thanks.
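A sketch of that interim behavior (hypothetical placement, reusing the stat loop quoted above but restricted to priority 0):

// Interim: aggregate load-report stats from priority 0 only.
// TODO: per-priority load reports once the load balancers are priority-aware.
const auto& host_sets = cluster.prioritySet().hostSetsPerPriority();
if (!host_sets.empty()) {
  for (const auto& host : host_sets[0]->hosts()) {
    rq_success += host->stats().rq_success_.latch();
    rq_error += host->stats().rq_error_.latch();
    rq_active += host->stats().rq_active_.value();
  }
}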
source/common/upstream/subset_lb.h
Outdated
@@ -71,7 +71,7 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups
   LoadBalancerPtr lb_;
 };

-  // Implements HostSet::MemberUpdateCb
+  // called by HostSet::MemberUpdateCb
Nit: s/called/Called/
source/common/upstream/subset_lb.h
Outdated
@@ -94,7 +94,9 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups

   SubsetMetadata extractSubsetMetadata(const std::set<std::string>& subset_keys, const Host& host);

-  const HostSetImpl& emptyHostSet() { CONSTRUCT_ON_FIRST_USE(HostSetImpl); };
+  const HostSetImpl& emptyHostSet() {
+    CONSTRUCT_ON_FIRST_USE(HostSetImpl, original_host_set_.priority());
Maybe add a comment stating that the host set priority will never change, hence why this is valid to do.
Turns out this function isn't used
if (host_sets_.size() < priority + 1) {
  for (size_t i = host_sets_.size(); i <= priority; ++i) {
    host_sets_.push_back(HostSetPtr{new HostSetImpl(i)});
    host_sets_[i]->addMemberUpdateCb([this](uint32_t priority,
This is another place where it seems a std::map implementation might be cleaner, but not sure.
I think a map will be uglier when we start selecting priority levels, because you're not guaranteed in-order priorities or existence. A vector with empty sets makes more sense to me.
Also, one is not supposed to skip priorities; we just don't have config sanitization yet.
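For example, once the load balancers become priority-aware, failover selection is a straight in-order walk over the dense vector. A sketch — the function name and the healthyHosts() accessor are assumptions here:

// Pick the first priority level that has any healthy hosts. Empty host sets
// at unused levels fall through naturally; a std::map keyed by priority
// would need the same walk plus existence checks.
const HostSet* choosePriorityLevel(const std::vector<HostSetPtr>& host_sets) {
  for (const HostSetPtr& host_set : host_sets) {
    if (!host_set->healthyHosts().empty()) {
      return host_set.get();
    }
  }
  return nullptr; // No healthy hosts at any priority.
}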
FWIW, I don't think it's broken to skip priorities, in the same sense that it's not broken to assign processes to different priority levels in a scheduler where there is no contiguity. If it simplifies things I guess it's fine.
proto validation will never do this type of checking for us. I would be fine having a check that all priorities need to be sequential and doing this check on the EDS response. If we did this, we could also pre-create the host sets. I actually wonder if we really need the getHostSet() function which auto-creates. It seems less error-prone to just operate on the returned vector? Thoughts?
I believe we can get rid of getHostSet from the API pretty easily, so that's done.
At this point we could replace getHostSet with if (getHostSetsPerPriority().size() < priority) { createHostSet(priority); }, but that seems suboptimal :-P
I'll rename it to getOrCreateHostSet to make the create explicit. SG?
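A sketch of the renamed accessor, based on the snippet quoted above (the member-update callback registration is elided):

// Creation is explicit in the name; read-only consumers use
// hostSetsPerPriority() instead and never trigger creation.
HostSet& PrioritySetImpl::getOrCreateHostSet(uint32_t priority) {
  if (host_sets_.size() < priority + 1) {
    for (size_t i = host_sets_.size(); i <= priority; ++i) {
      host_sets_.push_back(HostSetPtr{new HostSetImpl(i)});
      // (addMemberUpdateCb wiring from the original snippet goes here.)
    }
  }
  return *host_sets_[priority];
}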
This looks rad overall.
Signed-off-by: Alyssa Wilk <[email protected]>
LGTM; the only non-trivial thing here on my end is to do the load report merge (or some other mitigation to avoid getting multiple reports for the same locality), plus @mattklein123's comment on the interface.
test/common/upstream/eds_test.cc
Outdated
ASSERT_EQ(2, cluster_->prioritySet().hostSetsPerPriority().size());
ASSERT_EQ(2, cluster_->prioritySet().getHostSet(0).hosts().size());
ASSERT_EQ(1, cluster_->prioritySet().getHostSet(1).hosts().size());
EXPECT_EQ vs. ASSERT_EQ here?
test/common/upstream/eds_test.cc
Outdated
ASSERT_EQ(4, cluster_->prioritySet().getHostSet(3).hosts().size());
}

TEST_F(EdsTest, PriorityAndLocality) {
Please describe what this test does in a one liner here.
test/common/upstream/eds_test.cc
Outdated
EXPECT_EQ(2, second_hosts_per_locality[1].size());
}

add_hosts_to_locality_and_priority("oceania", "koala", "eucalyptus", 0, 3);
Please add comments as in the above TEST in the body of this one.
respond(0, "200", false, true);
respond(1, "200", false, true);
EXPECT_TRUE(cluster_->prioritySet().getMockHostSet(0)->hosts_[0]->healthy());
EXPECT_TRUE(cluster_->prioritySet().getMockHostSet(1)->hosts_[0]->healthy());
These two tests are very similar, could refactor but up to you (if there was a 3rd copy I'd be more inclined to do it). Also, please add a comment line above each test (although it's fairly obvious here from the name).
@@ -638,6 +647,54 @@ TEST(StaticClusterImplTest, SourceAddressPriority) {
}
}

TEST(PrioritySet, Extend) {
Comment.
uint32_t changes = 0;
uint32_t last_priority = 0;
priority_set.addMemberUpdateCb([&](uint32_t priority, const std::vector<HostSharedPtr>&,
                                   const std::vector<HostSharedPtr>&) -> void {
Nit: -> void is redundant.
I just looked through in more detail. In general this looks great. Beyond @htuch's comments, my main question is whether we can simplify slightly and get rid of the getHostSet(i) function?
@@ -264,11 +264,13 @@ ClusterManagerStats ClusterManagerImpl::generateStats(Stats::Scope& scope) {
 }

 void ClusterManagerImpl::postInitializeCluster(Cluster& cluster) {
-  if (cluster.hosts().empty()) {
-    return;
+  for (size_t i = 0; i < cluster.prioritySet().hostSetsPerPriority().size(); ++i) {
nit: Any reason to not just iterate directly through the returned vector here? Why do we need to get the size and then call get each time? (Same question if this happens elsewhere)
Oh yeah, now that host set latches its own priority I can rewrite these.
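e.g., the index loop could become a direct iteration (sketch):

// Iterate the returned vector directly; each HostSet now latches its own
// priority, so the index is no longer needed.
for (const HostSetPtr& host_set : cluster.prioritySet().hostSetsPerPriority()) {
  if (!host_set->hosts().empty()) {
    // ... per-host-set initialization ...
  }
}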
@@ -73,7 +73,7 @@ class SubsetLoadBalancer : public LoadBalancer, Logger::Loggable<Logger::Id::ups
   LoadBalancerPtr lb_;
 };

-  // Implements HostSet::MemberUpdateCb
+  // Called by HostSet::MemberUpdateCb
nit: del "Called by" to match rest of code
That's the standard when we're implementing an interface, but is it when we're explaining the caller of a function?
sorry nevermind!
info_->stats().membership_change_.inc();
}

uint32_t healthy_hosts = 0;
maybe add a TODO here for per-priority stats?
Sanity check: do you believe anyone wants them yet?
On further thought, I think we can get away with remotely reported load stats, and I'd rather not add them unless someone wants them.
I can't imagine that anyone that actually uses priority wouldn't want broken out stats, but given that we won't be using this at Lyft right now we can wait until someone requests. :)
Signed-off-by: Alyssa Wilk <[email protected]>
LGTM. Thanks for the interface changes. I didn't review the tests, so will defer to @htuch or someone else for final approval.
@@ -602,43 +610,50 @@ ClusterManagerImpl::ThreadLocalClusterManagerImpl::ClusterEntry::ClusterEntry(
      parent.parent_.local_info_, parent.parent_, parent.parent_.runtime_,
      parent.parent_.random_,
      Router::ShadowWriterPtr{new Router::ShadowWriterImpl(parent.parent_)}) {

  // TODO(alyssawilk) make lb priority-set aware in a follow-up patch
nit: period end of sentence.
source/common/upstream/eds.cc
Outdated
HostVectorSharedPtr current_hosts_copy(new std::vector<HostSharedPtr>(hosts()));
for (size_t i = 0; i < new_hosts.size(); ++i) {
  if (new_hosts[i] != nullptr) {
    UpdateHostsPerLocality(priority_set_.getOrCreateHostSet(i), *new_hosts[i]);
nit: updateHostsPerLocality
@@ -312,6 +347,9 @@ class ClusterImplBase : public Cluster,
      const LocalInfo::LocalInfo& local_info,
      Outlier::EventLoggerSharedPtr outlier_event_logger,
      bool added_via_api);
  // From Cluster
nit: // Upstream::Cluster
Signed-off-by: Alyssa Wilk <[email protected]>
EXPECT_EQ(4, cluster_->prioritySet().hostSetsPerPriority()[3]->hosts().size());
}

// Set up an EDS config with multiple priorities and localitie and make sure
Nit: localities (not worth blocking merge).
Thanks for the merge. Fixed over in #2154.
Signed-off-by: Alyssa Wilk [email protected]
Description:
Reworking the organization of endpoints from HostSets-with-localities to a tree with PrioritySets at the top and HostSets as nodes.
This allows the failover characteristics requested in #1929, as well as satisfying the defunct #1354.
This will be followed up by a PR (or set of PRs) converting the load balancers over to use the new priority logic.
API Changes:
envoyproxy/data-plane-api#259
Risk Level: High
It's a pretty big refactor and there were some subtle gotchas (like stats initially updating before the backends)
Testing:
Current tests passing.
Health checker unit tests verify all hosts are health checked across priority groups.
Outlier detection unit tests do the same for outlier detection.
upstream_impl_test.cc has unit tests for basic priority set operations.
Load stats integration test extended for priorities; it verifies stats reporting insofar as there's no way to direct load yet, but the localities show up as expected.