From bba9145305267de4a7847b6fdbae15695029759f Mon Sep 17 00:00:00 2001 From: Jeremy Olmsted-Thompson Date: Sun, 29 Mar 2020 23:38:00 -0700 Subject: [PATCH 01/10] Multi-Cluster Services API Create a provisional KEP for MC services based on SIG conversations and this original doc: http://bit.ly/k8s-mc-svc-api-proposal --- .../1645-multi-cluster-services-api/README.md | 789 ++++++++++++++++++ .../1645-multi-cluster-services-api/kep.yaml | 13 + 2 files changed, 802 insertions(+) create mode 100644 keps/sig-multicluster/1645-multi-cluster-services-api/README.md create mode 100644 keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md new file mode 100644 index 00000000000..f25d3f37d08 --- /dev/null +++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md @@ -0,0 +1,789 @@ + +# KEP-1645: Multi-Cluster Services API + + + + + + +- [KEP-1645: Multi-Cluster Services API](#kep-1645-multi-cluster-services-api) + - [Release Signoff Checklist](#release-signoff-checklist) + - [Summary](#summary) + - [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) + - [Proposal](#proposal) + - [Terminology](#terminology) + - [User Stories (optional)](#user-stories-optional) + - [Different Services Each Deployed to Separate Cluster](#different-services-each-deployed-to-separate-cluster) + - [Single Service Deployed to Multiple Clusters](#single-service-deployed-to-multiple-clusters) + - [Notes/Constraints/Caveats (optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) + - [Design Details](#design-details) + - [Exporting Services](#exporting-services) + - [Exported Service Behavior Expectations](#exported-service-behavior-expectations) + - [SuperclusterIP](#superclusterip) + - [DNS](#dns) + - [EndpointSlice](#endpointslice) + - [Endpoint TTL](#endpoint-ttl) + - 
[Consumption of EndpointSlice](#consumption-of-endpointslice) + - [Constraints and Conflict Resolution](#constraints-and-conflict-resolution) + - [Global Properties](#global-properties) + - [Service Port](#service-port) + - [IP Family](#ip-family) + - [Component Level Properties](#component-level-properties) + - [Session Affinity](#session-affinity) + - [TopologyKeys](#topologykeys) + - [Publish Not-Ready Addresses](#publish-not-ready-addresses) + - [Test Plan](#test-plan) + - [Graduation Criteria](#graduation-criteria) + - [Alpha -> Beta Graduation](#alpha---beta-graduation) + - [Beta -> GA Graduation](#beta---ga-graduation) + - [Removing a deprecated flag](#removing-a-deprecated-flag) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) + - [Implementation History](#implementation-history) + - [Drawbacks](#drawbacks) + - [Alternatives](#alternatives) + - [Infrastructure Needed (optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +- [ ] Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR) +- [ ] KEP approvers have approved the KEP status as `implementable` +- [ ] Design details are appropriately documented +- [ ] Test plan is in place, giving consideration to SIG Architecture and SIG Testing input +- [ ] Graduation criteria is in place +- [ ] "Implementation History" section is up-to-date for milestone +- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io] +- [ ] Supporting documentation e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes + + + +[kubernetes.io]: https://kubernetes.io/ +[kubernetes/enhancements]: https://git.k8s.io/enhancements +[kubernetes/kubernetes]: https://git.k8s.io/kubernetes +[kubernetes/website]: https://git.k8s.io/website + +## Summary + + +There is currently no 
standard way to connect or even think about Kubernetes
services beyond the cluster boundary, but we increasingly see users deploy
applications across multiple clusters designed to work in concert. This KEP
proposes a new API to extend the service concept across multiple clusters. It
aims for minimal additional configuration, making multi-cluster services as easy
to use as in-cluster services, and leaves room for multiple implementations.

*Converted from this [original proposal doc](http://bit.ly/k8s-mc-svc-api-proposal).*


## Motivation


There are [many
reasons](http://bit.ly/k8s-multicluster-conversation-starter-doc) why a K8s user
may want to split their deployments across multiple clusters, but still retain
mutual dependencies between workloads running in those clusters. Today the
cluster is a hard boundary, and a service is opaque to a remote K8s consumer
that would otherwise be able to make use of metadata (e.g. endpoint topology) to
better direct traffic. To support failover, or temporarily during a migration,
users may want to consume services spread across clusters, but today that
requires non-trivial bespoke solutions.

The Multi-Cluster Services API aims to fix these problems.

### Goals


- Define a minimal API to support service discovery and consumption across clusters.
  - Consume a service in another cluster.
  - Consume a service deployed in multiple clusters as a single service.
- When a service is consumed from another cluster, its behavior should be
  predictable and consistent with how it would be consumed within its own
  cluster.
- Create building blocks for multi-cluster tooling.
- Support multiple implementations.
- Leave room for future extension and new use cases.

### Non-Goals


- Define specific implementation details beyond general API behavior.
- Change behavior of single-cluster services in any way.
- Define what NetworkPolicy means for multi-cluster services.
+- Solve mechanics of multi-cluster service orchestration. + +## Proposal + + +#### Terminology + +- **supercluster** - a placeholder name for a group of clusters with a high degree of mutual trust and shared ownership that share services amongst themselves. +- **mcsd-controller** - a controller that syncs services across clusters. There may be multiple implementations, this doc describes expected common behavior. + +We propose a new CRD called `ServiceExport`, used to specify which services +should be exposed across all clusters in the supercluster. `ServiceExports` must +be created in each cluster that the underlying `Service` resides in. Creation of +a `ServiceExport` in a cluster will signify that the `Service` with the same +name and namespace as the export should be visible to other clusters in the +supercluster. + +Another CRD called `ImportedService` will be introduced to store connection +information about the `Services` in each cluster, e.g. topology. This is +analogous to the traditional `Service` type in Kubernetes. Each cluster will +have an `ImportedService` for each `Service` that has been exported within the +supercluster, referenced by namespaced name. + +If multiple clusters export a `Service` with the same namespaced name, they will +be recognized as a single combined service. The resulting `ImportedService` will +reference endpoints from both clusters. Properties of the `ImportedService` +(e.g. ports, topology) will be derived from a merger of component Service +properties. + +Existing implementations of Kubernetes Service API (e.g. kube-proxy) can be +extended to present `ImportedServices` alongside traditional `Services`. + + +### User Stories (optional) + + + +#### Different Services Each Deployed to Separate Cluster + +I have 2 clusters, each running different services managed by different teams, +where services from one team depend on services from the other team. 
I want to
ensure that a service from one team can discover a service from the other team
(via DNS resolving to VIP), regardless of the cluster that they reside in. In
addition, I want to make sure that if the depended-on service is migrated to
another cluster, the dependent service is not impacted.

#### Single Service Deployed to Multiple Clusters

I have deployed my stateless service to multiple clusters for redundancy or
scale. Now I want to propagate topologically-aware service endpoints (local,
regional, global) to all clusters, so that other services in my clusters can
access instances of this service in priority order based on availability and
locality. Requests to my replicated service should seamlessly transition (within
SLO for dropped requests) between instances of my service in case of failure or
removal without action by or impact on the caller. Routing to my replicated
service should optimize for a cost metric (e.g. prioritize traffic local to the
zone or region).

```
<<[UNRESOLVED]>>
Due to additional constraints that apply to stateful services (e.g. each cluster
potentially having pods with the conflicting hostnames `set-name-0`, `set-name-1`,
etc.) we are only targeting stateless services for the multi-cluster backed use
case for now.
<<[/UNRESOLVED]>>
```

### Notes/Constraints/Caveats (optional)



### Risks and Mitigations



## Design Details


### Exporting Services

Services will not be visible to other clusters in the supercluster by default.
They must be explicitly marked for export by the user. This allows users to
decide exactly which services should be visible outside of the local cluster.

Tooling may (and likely will, in the future) be built on top of this to simplify
the user experience. Some initial ideas are to allow users to specify that all
services in a given namespace, in a namespace selector, or even in a whole
cluster should be automatically exported by default.
In that case, a `ServiceExport`
could be automatically created for all `Services`. This tooling will be designed
in a separate doc, and is secondary to the main API proposed here.

To mark a service for export to the supercluster, a user will create a
ServiceExport CR:

```golang
// ServiceExport declares that the associated service should be exported to
// other clusters.
type ServiceExport struct {
	metav1.TypeMeta `json:",inline"`
	// +optional
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// +optional
	Status ServiceExportStatus `json:"status,omitempty"`
}

// ServiceExportStatus contains the current status of an export.
type ServiceExportStatus struct {
	// +optional
	Conditions []ServiceExportCondition `json:"conditions,omitempty"`
}

type ServiceExportConditionType string

const (
	// ServiceExportInitialized means the service export has been noticed
	// by the controller, has passed validation, has appropriate finalizers
	// set, and any required supercluster resources like the IP have been
	// reserved.
	ServiceExportInitialized ServiceExportConditionType = "Initialized"
	// ServiceExportExported means that the service referenced by this
	// service export has been synced to all clusters in the supercluster.
	ServiceExportExported ServiceExportConditionType = "Exported"
)

// ServiceExportCondition contains details for the current condition of this
// service export.
+type ServiceExportCondition struct { + Type ServiceExportConditionType `json:"type"` + // Status is one of {"True", "False", "Unknown"} + Status corev1.ConditionStatus `json:"status"` + // +optional + LastTransitionTime *metav1.Time `json:"lastTransitionTime,omitempty"` + // +optional + Reason *string `json:"reason,omitempty"` + // +optional + Message *string `json:"message,omitempty"` +} +``` +```yaml +apiVersion: multicluster.k8s.io/v1alpha1 +kind: ServiceExport +metadata: + name: my-svc + namespace: my-ns +status: + conditions: + - type: Initialized + status: "True" + lastTransitionTime: "2020-03-30T01:33:51Z" + - type: Exported + status: "True" + lastTransitionTime: "2020-03-30T01:33:55Z" +``` + +`ServiceExports` will be created within the cluster and namespace that the +service resides in and are name-mapped to the service for export - that is, they +reference the `Service` with the same name as the export. If multiple clusters +within the supercluster have `ServiceExports` with the same name and namespace, +these will be considered the same service and will be combined at the +supercluster level. + +This requires that within a supercluster, a given namespace is governed by a +single authority across all clusters. It is that authority’s responsibility to +ensure that a name is shared by multiple services within the namespace if and +only if they are instances of the same service. + +Most information about the service, including ports, backends and topology, will +continue to be stored in the Service object, which is name mapped to the service +export. + +### Exported Service Behavior Expectations + +#### SuperclusterIP + +When a `ServiceExport` is created, an IP address is reserved and assigned to +this supercluster `Service`. This IP may be supercluster-wide, or assigned on a +per-cluster basis. Requests to the corresponding IP from within a given cluster +will route to endpoint addresses for the aggregated Service. 
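
For concreteness, a minimal sketch of the name mapping that triggers this IP
reservation (the `Service` spec and the `app` selector are illustrative; the
`ServiceExport` shape is the one proposed above):

```yaml
# A Service and its name-mapped ServiceExport. Creating the export is what
# marks my-svc for supercluster visibility; nothing on the Service changes.
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    app: my-app        # illustrative selector
  ports:
  - name: http
    protocol: TCP
    port: 80
---
apiVersion: multicluster.k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc         # must match the Service's name
  namespace: my-ns     # must match the Service's namespace
```
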

Note: this doc does not discuss `NetworkPolicy`, which cannot currently be used
to describe a policy that applies to a multi-cluster service.

#### DNS

When a `ServiceExport` is created, this will cause a domain name for the
multi-cluster service to become accessible from within the supercluster. The
domain name will be
`<service>.<ns>.svc.supercluster.local`.
Requests to this domain name from within the supercluster will resolve to the
supercluster VIP, which points to the endpoint addresses for pods within the
underlying `Service`(s) across the supercluster.

#### EndpointSlice

When a `ServiceExport` is created, this will cause `EndpointSlice` objects for
the underlying `Service` to be created in each cluster within the supercluster.
One or more `EndpointSlice` resources will exist for each cluster that exported
the `Service`, with each `EndpointSlice` containing only endpoints from its
source cluster. These `EndpointSlice` objects will be marked as managed by the
supercluster service controller, so that the endpoint slice controller doesn’t
delete them.

#### Endpoint TTL

To prevent stale endpoints from persisting in the event that a cluster becomes
unreachable to the supercluster controller, each `EndpointSlice` is associated
with a lease representing connectivity with its source cluster. The supercluster
service controller is responsible for periodically renewing the lease so long as
the connection with the source cluster is confirmed alive. A separate
controller, which may run inside each cluster, is responsible for watching each
lease and removing all remaining `EndpointSlices` associated with a cluster when
that cluster’s lease expires.

### Consumption of EndpointSlice

To consume a supercluster service, users will use the domain name associated
with their `ServiceExport`. When the mcsd-controller sees a `ServiceExport`, an
`ImportedService` will be introduced, which can be largely ignored by the user.
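
As an illustrative sketch of that consumption path (the pod name, image, and
command are assumptions, not part of this proposal), a client addresses the
service only by its supercluster domain name:

```yaml
# A consumer pod reaching the exported service through the supercluster zone.
apiVersion: v1
kind: Pod
metadata:
  name: my-svc-client
  namespace: my-ns
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["wget", "-qO-", "http://my-svc.my-ns.svc.supercluster.local"]
```
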

An `ImportedService` is a service that may have endpoints in other clusters.
This includes 3 scenarios:
1. This service is running entirely in different cluster(s)
1. This service has endpoints in other cluster(s) and in this cluster
1. This service is running entirely in this cluster, but is exported to other cluster(s) as well

For each exported service, one `ServiceExport` will exist in each cluster that
runs the service. The mcsd-controller will create and maintain a derived
`ImportedService` in each cluster within the supercluster (see: [constraints and
conflict resolution](#constraints-and-conflict-resolution)). If all `ServiceExport` instances are deleted, each
`ImportedService` will also be deleted from all clusters.

Since a given `ImportedService` may be backed by multiple `EndpointSlices`, a
given `EndpointSlice` will reference its `ImportedService` using the label
`multicluster.kubernetes.io/imported-service-name`, similarly to how an
`EndpointSlice` is associated with its `Service` in a single cluster. Each
imported `EndpointSlice` will also have a
`multicluster.kubernetes.io/source-cluster` label with a registry-scoped unique
identifier for the cluster.

```golang
// ImportedService describes a service imported from other clusters in the
// supercluster.
+type ImportedService struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec ImportedServiceSpec `json:"spec,omitempty"` +} + +// ImportedServiceSpec contains the current status of an imported service and the +// information necessary to consume it +type ImportedServiceSpec struct { + Ports []ServicePort `json:"ports"` + Clusters []ClusterSpec `json:"clusters"` + IPFamily corev1.IPFamily `json:"ipFamily"` + IP string `json:"ip,omitempty"` +} + +// ClusterSpec contains service configuration mapped to a specific cluster +type ClusterSpec struct { + Cluster string `json:"cluster"` + TopologyKeys []string `json:"topologyKeys"` + PublishNotReadyAddresses bool `json:"publishNotReadyAddresses"` + SessionAffinity corev1.ServiceAffinity `json:"sessionAffinity"` + SessionAffinityConfig *corev1.SessionAffinityConfig `json:"sessionAffinityConfig"` +} +``` +```yaml +apiVersion: multicluster.k8s.io/v1alpha1 +kind: ImportedService +metadata: + name: my-svc + namespace: my-ns +spec: + ipFamily: IPv4 + ip: 42.42.42.42 + ports: + - name: http + protocol: TCP + port: 80 + clusters: + - cluster: us-west2-a-my-cluster + topologyKeys: + - topology.kubernetes.io/zone + sessionAffinity: None +--- +apiVersion: discovery.k8s.io/v1beta1 +kind: EndpointSlice +metadata: + name: imported-my-svc-cluster-b-1 + namespace: my-ns + labels: + multicluster.kubernetes.io/source-cluster: us-west2-a-my-cluster + multicluster.kubernetes.io/imported-service-name: my-svc + ownerReferences: + - apiVersion: multicluster.k8s.io/v1alpha1 + controller: false + kind: ImportedService + name: my-svc +addressType: IPv4 +ports: + - name: http + protocol: TCP + port: 80 +endpoints: + - addresses: + - "10.1.2.3" + conditions: + ready: true + topology: + topology.kubernetes.io/zone: us-west2-a +``` + +The `ImportedService.Spec.IP` (VIP) can be used to access this service from within this cluster. 
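
To make the multi-cluster case concrete, here is a sketch of a slice imported
from a second exporting cluster (the cluster name, slice name, and addresses are
hypothetical). It differs from the slice above only in its source-cluster label,
its endpoints, and their topology:

```yaml
# Sketch: a second imported slice for the same service; each slice carries
# endpoints from exactly one source cluster.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: imported-my-svc-cluster-c-1
  namespace: my-ns
  labels:
    multicluster.kubernetes.io/source-cluster: us-east1-b-my-cluster
    multicluster.kubernetes.io/imported-service-name: my-svc
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.4.5.6"
    conditions:
      ready: true
    topology:
      topology.kubernetes.io/zone: us-east1-b
```
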
+ +## Constraints and Conflict Resolution + +Exported services are derived from the properties of each component service and +their respective endpoints. However, some properties combine across exports +better than others. They generally fall into two categories: global properties, +and component-level properties. + + +### Global Properties + +These properties describe how the service should be consumed as a whole. They +directly impact service consumption and must be consistent across all child +services. If these properties are out of sync for a subset of exported services, +there is no clear way to determine how a service should be accessed. **If any +global properties have conflicts that can not be resolved, a condition will be +set on the `ServiceExport` with a description of the conflict. The service will +not be synced, and an error will be set on the status of each affected +`ServiceExport` and any previously-derived `ImportedServices` will be deleted +from each cluster in the supercluster.** + + +#### Service Port + +A derived service will be accessible with the supercluster IP at the ports +dictated by child services. If the external properties of service ports for a +set of exported services don’t match, we won’t know which port is the correct +choice for a service. For example, if two exported services use different ports +with the name “http”, which port is correct? What if a service uses the same +port with different names? As long as there are no conflicts (different ports +with the same name), the supercluster service will expose the superset of +service ports declared on its component services. If a user wants to change a +service port in a conflicting way, we recommend deploying a new service or +making the change in non-conflicting phases. + + +#### IP Family + +Because IPv4 and IPv6 addresses cannot be safely intermingled (e.g. 
iptables +rules can not mix IPv4 and IPv6), all component exported services making up a +supercluster service must use the same `IPFamily`. As with the single cluster +case - a service’s `IPFamily` is immutable - changing families will require a +new service to be created. + + +### Component Level Properties + +These properties are export-specific and pertain only to the subset of endpoints +backed by a single instance of each exported service. They may be safely carried +throughout the supercluster without risk of conflict. We propagate these +properties forward with no attempt to merge or alter them. + + +#### Session Affinity + +Session affinity affects a service as a whole for a given consumer. What would +it mean for a service to have e.g. client IP session affinity set for half its +backends? Would sessions only be sticky for those backends, or would there be no +affinity? If sessions are selectively sticky, we’d expect to see traffic to skew +toward the sticky subset of endpoints. That said, there’s nothing preventing us +from applying affinity on a per-slice basis so we will carry it forward. + + +#### TopologyKeys + +A `Service`’s `topologyKeys` dictate how endpoints in all `EndpointSlices` +associated with a given service should be applied to each node. While a single +`Service` may have multiple `EndpointSlices`, each `EndpointSlice` will only +ever originate from a single `Service`. `ImportedService` will contain +label-mapped lists of `topologyKeys` synced from each originating exported +service. Kube-proxy will filter endpoints in each slice based only on the +`topologyKeys` defined on the slice’s specific source `Service`. + +#### Publish Not-Ready Addresses + +Like `topologyKeys` above, we can apply `publishNotReadyAddresses` at the +per-slice level based on the originating cluster. This will allow incremental +rollout of changes without any risk of conflict. 
When true for a cluster, the +supercluster service DNS implementation must expose not-ready addresses for +slices from that cluster. + +### Test Plan + + + +### Graduation Criteria + + + +### Upgrade / Downgrade Strategy + + + +### Version Skew Strategy + + + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +## Infrastructure Needed (optional) + + diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml b/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml new file mode 100644 index 00000000000..5263f1ad115 --- /dev/null +++ b/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml @@ -0,0 +1,13 @@ +title: Multi-Cluster Services API +kep-number: 1645 +authors: + - "@jeremyot" +owning-sig: sig-multicluster +participating-sigs: + - sig-network +status: provisional +creation-date: 2020-03-30 +reviewers: + - TBD +approvers: + - TBD From fa379835f2dfe2d0e8872ec76fd694dce53e9f5a Mon Sep 17 00:00:00 2001 From: Jeremy Olmsted-Thompson Date: Mon, 30 Mar 2020 09:52:13 -0700 Subject: [PATCH 02/10] Address ObjectRef and other clarifications - Adds a note on the use of ObjectReference to Alternatives - Clarifies the meaning of MCSD - Note on intention to use generic conditions once the relevant KEP is implemented. 
- Unresolved: scalability is still an open question --- .../1645-multi-cluster-services-api/README.md | 44 ++++++++++++++++--- 1 file changed, 39 insertions(+), 5 deletions(-) diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md index f25d3f37d08..19cb4b95fd4 100644 --- a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md +++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md @@ -15,14 +15,14 @@ To get started with this template: Copy this template into the owning SIG's directory and name it `NNNN-short-descriptive-title`, where `NNNN` is the issue number (with no leading-zero padding) assigned to your enhancement above. -- [s] **Fill out as much of the kep.yaml file as you can.** +- [x] **Fill out as much of the kep.yaml file as you can.** At minimum, you should fill in the "title", "authors", "owning-sig", "status", and date-related fields. -- [ ] **Fill out this file as best you can.** +- [x] **Fill out this file as best you can.** At minimum, you should fill in the "Summary", and "Motivation" sections. These should be easy if you've preflighted the idea of the KEP with the appropriate SIG(s). -- [ ] **Create a PR for this KEP.** +- [x] **Create a PR for this KEP.** Assign it to people in the SIG that are sponsoring this process. - [ ] **Merge early and iterate.** Avoid getting hung up on specific details and instead aim to get the goals of @@ -116,6 +116,7 @@ tags, and then generate with `hack/update-toc.sh`. - [Implementation History](#implementation-history) - [Drawbacks](#drawbacks) - [Alternatives](#alternatives) + - [`ObjectReference` in `ServiceExport.Spec` to directly map to a `Service`](#objectreference-in-serviceexportspec-to-directly-map-to-a-service) - [Infrastructure Needed (optional)](#infrastructure-needed-optional) @@ -236,8 +237,13 @@ nitty-gritty. 
-->

 #### Terminology
 
-- **supercluster** - a placeholder name for a group of clusters with a high degree of mutual trust and shared ownership that share services amongst themselves.
-- **mcsd-controller** - a controller that syncs services across clusters. There may be multiple implementations, this doc describes expected common behavior.
+- **supercluster** - a placeholder name for a group of clusters with a high
+  degree of mutual trust and shared ownership that share services amongst
+  themselves.
+- **mcsd-controller** - a controller that syncs services across clusters and
+  makes them available for multi-cluster service discovery (MCSD) and
+  connectivity. There may be multiple implementations; this doc describes
+  expected common behavior.
 
 We propose a new CRD called `ServiceExport`, used to specify which services
 should be exposed across all clusters in the supercluster. `ServiceExports` must
@@ -365,6 +371,7 @@ type ServiceExportStatus struct {
 	Conditions []ServiceExportCondition `json:"conditions,omitempty"`
 }
 
+// ServiceExportConditionType identifies a specific condition.
 type ServiceExportConditionType string
 
 const (
@@ -380,6 +387,9 @@ const (
 
 // ServiceExportCondition contains details for the current condition of this
 // service export.
+//
+// Once [#1624](https://github.com/kubernetes/enhancements/pull/1624) is
+// merged, this will be replaced by metav1.Condition.
 type ServiceExportCondition struct {
 	Type ServiceExportConditionType `json:"type"`
 	// Status is one of {"True", "False", "Unknown"}
 	Status corev1.ConditionStatus `json:"status"`
@@ -456,6 +466,14 @@ source cluster. These `EndpointSlice` objects will be marked as managed by the
 supercluster service controller, so that the endpoint slice controller doesn’t
 delete them.
 
+```
+<<[UNRESOLVED]>>
+We have not yet sorted out scalability impact here. We hope the upper bound for
+imported endpoints + in-cluster endpoints will be ~= the upper bound for
+in-cluster endpoints today, but this remains to be determined.
+<<[/UNRESOLVED]>> +``` + #### Endpoint TTL To prevent stale endpoints from persisting in the event that a cluster becomes @@ -780,6 +798,22 @@ not need to be as detailed as the proposal, but should include enough information to express the idea and why it was not acceptable. --> +### `ObjectReference` in `ServiceExport.Spec` to directly map to a `Service` + +Instead of name mapping, we could use an explicit ObjectReference in a +`ServiceExport.Spec`. This feels familiar and more explicit, but fundamentally +changes certain characteristics of the API. Name mapping means that the export +must be in the same namespace as the `Service` it exports, allowing existing RBAC +rules to restrict export rights to current namespace owners. We are building on +the concept that a namespace belongs to a single owner, and it should be the +`Service` owner who controls whether or not a given `Service` is exported. Using +`ObjectReference` instead would also open the possibility of having multiple +exports acting on a single service and would require more effort to determine if +a given service has been exported. + +The above issues could also be solved via controller logic, but we would risk +differing implementations. Name mapping enforces behavior at the API. + ## Infrastructure Needed (optional) @@ -800,7 +801,7 @@ information to express the idea and why it was not acceptable. ### `ObjectReference` in `ServiceExport.Spec` to directly map to a `Service` -Instead of name mapping, we could use an explicit ObjectReference in a +Instead of name mapping, we could use an explicit `ObjectReference` in a `ServiceExport.Spec`. This feels familiar and more explicit, but fundamentally changes certain characteristics of the API. Name mapping means that the export must be in the same namespace as the `Service` it exports, allowing existing RBAC @@ -814,6 +815,33 @@ a given service has been exported. 
The above issues could also be solved via controller logic, but we would risk
differing implementations. Name mapping enforces behavior at the API.
 
+### Export services via label selector
+```
+<<[UNRESOLVED still being explored as viable - @thockin @mangelajo]>>
+
+Instead of name mapping, `ServiceExport` could have a
+`ServiceExport.Spec.ServiceSelector` to select matching services for export.
+This approach would make it easy to simply export all services with a given
+label applied and would still scope exports to a namespace, but shares other
+issues with the `ObjectReference` approach above:
+
+- Multiple `ServiceExports` may export a given `Service`; what would that mean?
+- Determining whether or not a service is exported means searching
+  `ServiceExports` for a matching selector.
+
+Though multiple services may match a single export, the act of exporting would
+still be independent for individual services. A report of status for each export
+seems like it belongs on a service-specific resource.
+
+With name mapping it should be relatively easy to build generic or custom logic
+to automatically ensure a `ServiceExport` exists for each `Service` matching a
+selector - perhaps by introducing something like a `ServiceExportPolicy`
+resource (out of scope for this KEP). This would solve the above issues but
+retain the flexibility of selectors.
+
+<<[/UNRESOLVED]>>
+```
+
 ## Infrastructure Needed (optional)
 
@@ -842,6 +843,21 @@ retain the flexibility of selectors.
 <<[/UNRESOLVED]>>
 ```
+### Export via annotation
+
+`ServiceExport` as described has no spec and seems like it could just be
+replaced with an annotation, e.g. `multicluster.kubernetes.io/export`. When a
+service is found with the annotation, it would be considered marked for export
+to the supercluster. The controller would then create `EndpointSlices` and an
+`ImportedService` in each cluster exactly as described above.
Unfortunately,
+`Service` does not have an extensible status and there is no way to represent
+the state of the export on the annotated `Service`. We could extend
+`Service.Status` to include `Conditions` and provide the flexibility we need,
+but requiring changes to `Service` makes this a much more invasive proposal to
+achieve the same result. As the use of a multi-cluster service implementation
+would be an optional addon, it doesn't warrant a change to such a fundamental
+resource.
+
 ## Infrastructure Needed (optional)

 #### Terminology
 
-- **supercluster** - a placeholder name for a group of clusters with a high
+- **supercluster** - A placeholder name for a group of clusters with a high
   degree of mutual trust and shared ownership that share services amongst
-  themselves.
-- **mcsd-controller** - a controller that syncs services across clusters and
+  themselves. Membership in a supercluster is symmetric and transitive. The
+  member clusters are mutually aware, and agree about their collective
+  association.
+- **mcsd-controller** - A controller that syncs services across clusters and
   makes them available for multi-cluster service discovery (MCSD) and
   connectivity. There may be multiple implementations; this doc describes
   expected common behavior.
@@ -254,17 +256,18 @@ a `ServiceExport` in a cluster will signify that the `Service` with the same
 name and namespace as the export should be visible to other clusters in the
 supercluster.
 
-Another CRD called `ImportedService` will be introduced to store connection
-information about the `Services` in each cluster, e.g. topology. This is
-analogous to the traditional `Service` type in Kubernetes. Each cluster will
-have an `ImportedService` for each `Service` that has been exported within the
-supercluster, referenced by namespaced name.
+Another CRD called `ImportedService` will be introduced to store information
+about the services exported from each cluster, e.g. topology.
This is analogous
+to the traditional `Service` type in Kubernetes. Each cluster will have a
+corresponding `ImportedService` for each uniquely named `Service` that has been
+exported within the supercluster, referenced by namespaced name.

 If multiple clusters export a `Service` with the same namespaced name, they will
-be recognized as a single combined service. The resulting `ImportedService` will
-reference endpoints from both clusters. Properties of the `ImportedService`
-(e.g. ports, topology) will be derived from a merger of component Service
-properties.
+be recognized as a single combined service. For example, if 5 clusters export
+`my-svc.my-ns`, there will be one `ImportedService` named `my-svc` in the
+`my-ns` namespace and it will be associated with endpoints from all exporting
+clusters. Properties of the `ImportedService` (e.g. ports, topology) will be
+derived from a merger of component `Service` properties.

 Existing implementations of Kubernetes Service API (e.g. kube-proxy) can be
 extended to present `ImportedServices` alongside traditional `Services`.

From d515b889cbf146997dad218095f66f56e2f46bf9 Mon Sep 17 00:00:00 2001
From: Jeremy Olmsted-Thompson
Date: Mon, 6 Apr 2020 11:48:25 -0700
Subject: [PATCH 06/10] add notes on rbac and no change to cluster.local

---
 .../1645-multi-cluster-services-api/README.md | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
index 7f6837455f9..834868f6c75 100644
--- a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
+++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
@@ -92,6 +92,7 @@ tags, and then generate with `hack/update-toc.sh`.
- [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
  - [Exporting Services](#exporting-services)
+    - [Restricting Exports](#restricting-exports)
  - [Exported Service Behavior Expectations](#exported-service-behavior-expectations)
    - [SuperclusterIP](#superclusterip)
    - [DNS](#dns)
@@ -439,6 +440,15 @@ Most information about the service, including ports, backends and topology, will
continue to be stored in the Service object, which is name mapped to the
service export.

+#### Restricting Exports
+
+Cluster administrators may use RBAC rules to prevent creation of
+`ServiceExports` in select namespaces. While there are no general restrictions
+on which namespaces are allowed, administrators should be especially careful
+about permitting exports from `kube-system` and `default`. As a best practice,
+admins may want to tightly restrict or completely prevent exports from these
+namespaces unless there is a clear use case.
+
 ### Exported Service Behavior Expectations

 #### SuperclusterIP
@@ -459,7 +469,11 @@ domain name will be `<service>.<ns>.svc.supercluster.local`.

 Requests to this domain name from within the supercluster will resolve to the
 supercluster VIP, which points to the endpoint addresses for pods within the
-underlying `Service`(s) across the supercluster.
+underlying `Service`(s) across the supercluster. All service consumers must use
+the `*.svc.supercluster.local` name to enable supercluster routing, even if
+there is a matching `Service` with the same namespaced name in the local
+cluster. There will be no change to existing behavior of the `svc.cluster.local`
+zone.
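
To make the export flow concrete, here is a minimal sketch of a `ServiceExport` for a service `my-svc` in namespace `my-ns` (the names used by this proposal's `ServiceImport` examples). The `apiVersion` is assumed to match the `multicluster.k8s.io/v1alpha1` group/version used elsewhere in this KEP; the final group/version remains subject to API review.

```yaml
# Sketch only — group/version assumed from the ServiceImport examples in this KEP.
apiVersion: multicluster.k8s.io/v1alpha1
kind: ServiceExport
metadata:
  # Name mapping: these must match the Service being exported.
  name: my-svc
  namespace: my-ns
```

Once synced, consumers throughout the supercluster would reach the service at `my-svc.my-ns.svc.supercluster.local`, while `my-svc.my-ns.svc.cluster.local` would continue to resolve only to the local cluster's endpoints.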
#### EndpointSlice

From 7284f55576b3fdb5691a00c1869fafb0c78caa06 Mon Sep 17 00:00:00 2001
From: Jeremy Olmsted-Thompson
Date: Mon, 13 Apr 2020 22:28:35 -0700
Subject: [PATCH 07/10] add section on supported service types

---
 .../1645-multi-cluster-services-api/README.md | 28 +++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
index 834868f6c75..2c763c8dc3e 100644
--- a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
+++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
@@ -98,6 +98,7 @@ tags, and then generate with `hack/update-toc.sh`.
    - [DNS](#dns)
    - [EndpointSlice](#endpointslice)
      - [Endpoint TTL](#endpoint-ttl)
+     - [Service Types](#service-types)
  - [Consumption of EndpointSlice](#consumption-of-endpointslice)
- [Constraints and Conflict Resolution](#constraints-and-conflict-resolution)
  - [Global Properties](#global-properties)
@@ -504,6 +505,33 @@ controller, that may run inside each cluster, is responsible for watching each
lease and removing all remaining `EndpointSlices` associated with a cluster when
that cluster’s lease expires.

+#### Service Types
+
+- `ClusterIP`: This is the straightforward case most of the proposal
+  assumes. Each `EndpointSlice` associated with the exported service is combined
+  with slices from other clusters to make up the supercluster service. They will
+  be imported to the cluster behind the supercluster IP.
+
+```
+<<[UNRESOLVED re:stateful sets]>>
+  Today's headless services likely don't want a VIP and may not function
+  properly behind one. It probably doesn't make sense to export a current
+  headless service to the supercluster; it would work, but likely not the way
+  you want.
+<<[/UNRESOLVED]>>
+```
+- `NodePort` and `LoadBalancer`: These create `ClusterIP` services that would
+  sync as expected.
For example, if you export a `NodePort` service, the
+  resulting cross-cluster service will still be a supercluster IP type. You
+  could use node ports to access the cluster-local service in the source
+  cluster, but not in any other cluster, and it would only route to local
+  endpoints.
+- `ExternalName`: It doesn't make sense to export an `ExternalName` service.
+  They can't be merged with other exports, and it seems like it would only
+  complicate deployments by even attempting to stretch them across clusters.
+  Instead, regular `ExternalName` type `Services` should be created in each
+  cluster individually.
+
 ### Consumption of EndpointSlice

 To consume a supercluster service, users will use the domain name associated

From ccc1448e1cbf6e74db51b63bd54d74273594232d Mon Sep 17 00:00:00 2001
From: Jeremy Olmsted-Thompson
Date: Tue, 21 Apr 2020 15:08:54 -0700
Subject: [PATCH 08/10] Rename ImportedService to ServiceImport

Rename based on PR feedback
---
 .../1645-multi-cluster-services-api/README.md | 44 +++++++++----------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
index 2c763c8dc3e..27a892515ee 100644
--- a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
+++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md
@@ -258,21 +258,21 @@ a `ServiceExport` in a cluster will signify that the `Service` with the same
name and namespace as the export should be visible to other clusters in the
supercluster.

-Another CRD called `ImportedService` will be introduced to store information
+Another CRD called `ServiceImport` will be introduced to store information
about the services exported from each cluster, e.g. topology. This is analogous
to the traditional `Service` type in Kubernetes. Each cluster will have a
Each cluster will have a -coresponding `ImportedService` for each uniquely named `Service` that has been +coresponding `ServiceImport` for each uniquely named `Service` that has been exported within the supercluster, referenced by namespaced name. If multiple clusters export a `Service` with the same namespaced name, they will be recognized as a single combined service. For example, if 5 clusters export -`my-svc.my-ns`, there will be one `ImportedService` named `my-svc` in the +`my-svc.my-ns`, there will be one `ServiceImport` named `my-svc` in the `my-ns` namespace and it will be associated with endpoints from all exporting -clusters. Properties of the `ImportedService` (e.g. ports, topology) will be +clusters. Properties of the `ServiceImport` (e.g. ports, topology) will be derived from a merger of component `Service` properties. Existing implementations of Kubernetes Service API (e.g. kube-proxy) can be -extended to present `ImportedServices` alongside traditional `Services`. +extended to present `ServiceImports` alongside traditional `Services`. ### User Stories (optional) @@ -536,9 +536,9 @@ that cluster’s lease expires. To consume a supercluster service, users will use the domain name associated with their `ServiceExport`. When the mcsd-controller sees a `ServiceExport`, a -`ImportedService` will be introduced, which can be largely ignored by the user. +`ServiceImport` will be introduced, which can be largely ignored by the user. -An `ImportedService` is a service that may have endpoints in other clusters. +An `ServiceImport` is a service that may have endpoints in other clusters. This includes 3 scenarios: 1. This service is running entirely in different cluster(s) 1. This service has endpoints in other cluster(s) and in this cluster @@ -546,12 +546,12 @@ This includes 3 scenarios: For each exported service, one `ServiceExport` will exist in each cluster that runs the service. 
The mcsd-controller will create and maintain a derived
-`ImportedService` in each cluster within the supercluster (see: [constraints and
+`ServiceImport` in each cluster within the supercluster (see: [constraints and
conflict resolution](#constraints-and-conflict-resolution)). If all
`ServiceExport` instances are deleted, each
-`ImportedService` will also be deleted from all clusters.
+`ServiceImport` will also be deleted from all clusters.

-Since a given `ImportedService` may be backed by multiple `EndpointSlices`, a
-given `EndpointSlice` will reference its `ImportedService` using the label
+Since a given `ServiceImport` may be backed by multiple `EndpointSlices`, a
+given `EndpointSlice` will reference its `ServiceImport` using the label
`multicluster.kubernetes.io/imported-service-name` similarly to how an
`EndpointSlice` is associated with its `Service` in a single cluster. Each
imported `EndpointSlice` will also have a
@@ -559,17 +559,17 @@ imported `EndpointSlice` will also have a
identifier for the cluster.

```golang
-// ImportedService declares that the specified service should be exported to other clusters.
+// ServiceImport describes a service imported from other clusters in the supercluster.
+type ServiceImport struct { metav1.TypeMeta `json:",inline"` metav1.ObjectMeta `json:"metadata,omitempty"` - Spec ImportedServiceSpec `json:"spec,omitempty"` + Spec ServiceImportSpec `json:"spec,omitempty"` } -// ImportedServiceSpec contains the current status of an imported service and the +// ServiceImportSpec contains the current status of an imported service and the // information necessary to consume it -type ImportedServiceSpec struct { +type ServiceImportSpec struct { Ports []ServicePort `json:"ports"` Clusters []ClusterSpec `json:"clusters"` IPFamily corev1.IPFamily `json:"ipFamily"` @@ -587,7 +587,7 @@ type ClusterSpec struct { ``` ```yaml apiVersion: multicluster.k8s.io/v1alpha1 -kind: ImportedService +kind: ServiceImport metadata: name: my-svc namespace: my-ns @@ -615,7 +615,7 @@ metadata: ownerReferences: - apiVersion: multicluster.k8s.io/v1alpha1 controller: false - kind: ImportedService + kind: ServiceImport name: my-svc addressType: IPv4 ports: @@ -631,7 +631,7 @@ endpoints: topology.kubernetes.io/zone: us-west2-a ``` -The `ImportedService.Spec.IP` (VIP) can be used to access this service from within this cluster. +The `ServiceImport.Spec.IP` (VIP) can be used to access this service from within this cluster. ## Constraints and Conflict Resolution @@ -650,7 +650,7 @@ there is no clear way to determine how a service should be accessed. **If any global properties have conflicts that can not be resolved, a condition will be set on the `ServiceExport` with a description of the conflict. The service will not be synced, and an error will be set on the status of each affected -`ServiceExport` and any previously-derived `ImportedServices` will be deleted +`ServiceExport` and any previously-derived `ServiceImports` will be deleted from each cluster in the supercluster.** @@ -700,7 +700,7 @@ from applying affinity on a per-slice basis so we will carry it forward. 
A `Service`’s `topologyKeys` dictate how endpoints in all `EndpointSlices` associated with a given service should be applied to each node. While a single `Service` may have multiple `EndpointSlices`, each `EndpointSlice` will only -ever originate from a single `Service`. `ImportedService` will contain +ever originate from a single `Service`. `ServiceImport` will contain label-mapped lists of `topologyKeys` synced from each originating exported service. Kube-proxy will filter endpoints in each slice based only on the `topologyKeys` defined on the slice’s specific source `Service`. @@ -894,7 +894,7 @@ retain the flexibility of selectors. replaced with an annotation, e.g. `multicluster.kubernetes.io/export`. When a service is found with the annotation, it would be considered marked for export to the supercluster. The controller would then create `EndpointSlices` and an -`ImportedService` in each cluster exactly as described above. Unfortunately, +`ServiceImport` in each cluster exactly as described above. Unfortunately, `Service` does not have an extensible status and there is no way to represent the state of the export on the annotated `Service`. We could extend `Service.Status` to include `Conditions` and provide the flexibility we need, From 7be0f38778c50175eb7cb774dcb7e7912e5a5e26 Mon Sep 17 00:00:00 2001 From: Jeremy Olmsted-Thompson Date: Mon, 27 Apr 2020 22:00:43 -0700 Subject: [PATCH 09/10] clean up structs --- .../1645-multi-cluster-services-api/README.md | 121 ++++++++++-------- 1 file changed, 67 insertions(+), 54 deletions(-) diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md index 27a892515ee..eeb2ad8b8a2 100644 --- a/keps/sig-multicluster/1645-multi-cluster-services-api/README.md +++ b/keps/sig-multicluster/1645-multi-cluster-services-api/README.md @@ -77,51 +77,47 @@ tags, and then generate with `hack/update-toc.sh`. 
--> -- [KEP-1645: Multi-Cluster Services API](#kep-1645-multi-cluster-services-api) - - [Release Signoff Checklist](#release-signoff-checklist) - - [Summary](#summary) - - [Motivation](#motivation) - - [Goals](#goals) - - [Non-Goals](#non-goals) - - [Proposal](#proposal) - - [Terminology](#terminology) - - [User Stories (optional)](#user-stories-optional) - - [Different Services Each Deployed to Separate Cluster](#different-services-each-deployed-to-separate-cluster) - - [Single Service Deployed to Multiple Clusters](#single-service-deployed-to-multiple-clusters) - - [Notes/Constraints/Caveats (optional)](#notesconstraintscaveats-optional) - - [Risks and Mitigations](#risks-and-mitigations) - - [Design Details](#design-details) - - [Exporting Services](#exporting-services) - - [Restricting Exports](#restricting-exports) - - [Exported Service Behavior Expectations](#exported-service-behavior-expectations) - - [SuperclusterIP](#superclusterip) - - [DNS](#dns) - - [EndpointSlice](#endpointslice) - - [Endpoint TTL](#endpoint-ttl) - - [Service Types](#service-types) - - [Consumption of EndpointSlice](#consumption-of-endpointslice) - - [Constraints and Conflict Resolution](#constraints-and-conflict-resolution) - - [Global Properties](#global-properties) - - [Service Port](#service-port) - - [IP Family](#ip-family) - - [Component Level Properties](#component-level-properties) - - [Session Affinity](#session-affinity) - - [TopologyKeys](#topologykeys) - - [Publish Not-Ready Addresses](#publish-not-ready-addresses) - - [Test Plan](#test-plan) - - [Graduation Criteria](#graduation-criteria) - - [Alpha -> Beta Graduation](#alpha---beta-graduation) - - [Beta -> GA Graduation](#beta---ga-graduation) - - [Removing a deprecated flag](#removing-a-deprecated-flag) - - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) - - [Version Skew Strategy](#version-skew-strategy) - - [Implementation History](#implementation-history) - - [Drawbacks](#drawbacks) - - 
[Alternatives](#alternatives) - - [`ObjectReference` in `ServiceExport.Spec` to directly map to a `Service`](#objectreference-in-serviceexportspec-to-directly-map-to-a-service) - - [Export services via label selector](#export-services-via-label-selector) - - [Export via annotation](#export-via-annotation) - - [Infrastructure Needed (optional)](#infrastructure-needed-optional) +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [Terminology](#terminology) + - [User Stories (optional)](#user-stories-optional) + - [Different Services Each Deployed to Separate Cluster](#different-services-each-deployed-to-separate-cluster) + - [Single Service Deployed to Multiple Clusters](#single-service-deployed-to-multiple-clusters) + - [Notes/Constraints/Caveats (optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Exporting Services](#exporting-services) + - [Restricting Exports](#restricting-exports) + - [Exported Service Behavior Expectations](#exported-service-behavior-expectations) + - [SuperclusterIP](#superclusterip) + - [DNS](#dns) + - [EndpointSlice](#endpointslice) + - [Endpoint TTL](#endpoint-ttl) + - [Service Types](#service-types) + - [Consumption of EndpointSlice](#consumption-of-endpointslice) +- [Constraints and Conflict Resolution](#constraints-and-conflict-resolution) + - [Global Properties](#global-properties) + - [Service Port](#service-port) + - [IP Family](#ip-family) + - [Component Level Properties](#component-level-properties) + - [Session Affinity](#session-affinity) + - [TopologyKeys](#topologykeys) + - [Publish Not-Ready Addresses](#publish-not-ready-addresses) + - [Test Plan](#test-plan) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew 
Strategy](#version-skew-strategy)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+  - [ObjectReference in ServiceExport.Spec to directly map to a Service](#objectreference-in-serviceexportspec-to-directly-map-to-a-service)
+  - [Export services via label selector](#export-services-via-label-selector)
+  - [Export via annotation](#export-via-annotation)
+- [Infrastructure Needed (optional)](#infrastructure-needed-optional)

## Release Signoff Checklist
@@ -248,7 +244,7 @@ nitty-gritty.
   association.
 - **mcsd-controller** - A controller that syncs services across clusters and
   makes them available for multi-cluster service discovery (MCSD) and
-  connectivitiy. There may be multiple implementations, this doc describes
+  connectivity. There may be multiple implementations, this doc describes
   expected common behavior.

 We propose a new CRD called `ServiceExport`, used to specify which services
@@ -375,6 +371,10 @@ type ServiceExport struct {

 // ServiceExportStatus contains the current status of an export.
 type ServiceExportStatus struct {
 	// +optional
+	// +patchStrategy=merge
+	// +patchMergeKey=type
+	// +listType=map
+	// +listMapKey=type
 	Conditions []ServiceExportCondition `json:"conditions,omitempty"`
 }
@@ -562,26 +562,44 @@ identifier for the cluster.

// ServiceImport describes a service imported from other clusters in the supercluster.
type ServiceImport struct { metav1.TypeMeta `json:",inline"` + // +optional metav1.ObjectMeta `json:"metadata,omitempty"` - + // +optional Spec ServiceImportSpec `json:"spec,omitempty"` } // ServiceImportSpec contains the current status of an imported service and the // information necessary to consume it type ServiceImportSpec struct { - Ports []ServicePort `json:"ports"` + // +patchStrategy=merge + // +patchMergeKey=port + // +listType=map + // +listMapKey=port + // +listMapKey=protocol + Ports []ServicePort `json:"ports"` + // +optional + // +patchStrategy=merge + // +patchMergeKey=cluster + // +listType=map + // +listMapKey=cluster Clusters []ClusterSpec `json:"clusters"` + // +optional IPFamily corev1.IPFamily `json:"ipFamily"` + // +optional IP string `json:"ip,omitempty"` } // ClusterSpec contains service configuration mapped to a specific cluster type ClusterSpec struct { Cluster string `json:"cluster"` + // +optional + // +listType=set TopologyKeys []string `json:"topologyKeys"` + // +optional PublishNotReadyAddresses bool `json:"publishNotReadyAddresses"` + // +optional SessionAffinity corev1.ServiceAffinity `json:"sessionAffinity"` + // +optional SessionAffinityConfig *corev1.SessionAffinityConfig `json:"sessionAffinityConfig"` } ``` @@ -845,7 +863,7 @@ not need to be as detailed as the proposal, but should include enough information to express the idea and why it was not acceptable. --> -### `ObjectReference` in `ServiceExport.Spec` to directly map to a `Service` +### `ObjectReference` in `ServiceExport.Spec` to directly map to a Service Instead of name mapping, we could use an explicit `ObjectReference` in a `ServiceExport.Spec`. This feels familiar and more explicit, but fundamentally @@ -862,8 +880,6 @@ The above issues could also be solved via controller logic, but we would risk differing implementations. Name mapping enforces behavior at the API. 
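
For illustration only, a hypothetical export using this rejected shape might look like the sketch below. The `spec.service` reference field is invented purely to visualize the alternative; it is not part of this proposal.

```yaml
# Hypothetical, rejected shape — an explicit ObjectReference instead of name mapping.
apiVersion: multicluster.k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: export-my-svc
  namespace: my-ns
spec:
  service:                # invented ObjectReference-style field, for comparison only
    kind: Service
    name: my-svc
    namespace: other-ns   # nothing in this shape prevents a cross-namespace reference
```

The sketch makes the namespace problem visible: the export lives in `my-ns` but points at a `Service` in `other-ns`, an ambiguity that name mapping rules out by construction.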
### Export services via label selector -``` -<<[UNRESOLVED still being explored as viable - @thockin @mangelajo]>> Instead of name mapping, `ServiceExport` could have a `ServiceExport.Spec.ServiceSelector` to select matching services for export. @@ -885,9 +901,6 @@ selector - perhaps by introducing something like a `ServiceExportPolicy` resource (out of scope for this KEP). This would solve the above issues but retain the flexibility of selectors. -<<[/UNRESOLVED]>> -``` - ### Export via annotation `ServiceExport` as described has no spec and seems like it could just be From e73b941b69f4763cf5945f11f1116b2cd7382003 Mon Sep 17 00:00:00 2001 From: Jeremy Olmsted-Thompson Date: Fri, 8 May 2020 17:31:04 -0700 Subject: [PATCH 10/10] add approvers --- keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml b/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml index 5263f1ad115..7222ce0c64b 100644 --- a/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml +++ b/keps/sig-multicluster/1645-multi-cluster-services-api/kep.yaml @@ -10,4 +10,5 @@ creation-date: 2020-03-30 reviewers: - TBD approvers: - - TBD + - "@pmorie" + - "@thockin"