
feat: configure arbitrary provider-specific properties via annotations #4875

Open · wants to merge 2 commits into base: master

Conversation

Dadeos-Menlo

Description

Support for provider-specific annotations, which manifest as Endpoint.ProviderSpecific properties, is currently restricted to a subset of pre-defined providers. This functionality should be available to all providers, without requiring special-case registration within the getProviderSpecificAnnotations(…) function.

The proposed changes include:

  • Inversion of logic within the getProviderSpecificAnnotations(…) function so as to treat all annotations prefixed with external-dns.alpha.kubernetes.io/, but that are not otherwise recognised, as provider-specific properties (see the sketch below)
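
For illustration, a minimal sketch of the inversion described above, assuming the current helper shape; the annotation prefix is real, but the abbreviated set of recognised suffixes and the exact structure here are illustrative rather than the PR's literal code:

package source

import (
	"strings"

	"sigs.k8s.io/external-dns/endpoint"
)

const annotationPrefix = "external-dns.alpha.kubernetes.io/"

// Well-known annotation suffixes that are handled elsewhere and must not be
// treated as provider-specific (abbreviated for illustration).
var recognised = map[string]bool{"hostname": true, "ttl": true, "target": true}

func getProviderSpecificAnnotations(annotations map[string]string) (endpoint.ProviderSpecific, string) {
	var properties endpoint.ProviderSpecific
	setIdentifier := ""
	for name, value := range annotations {
		if !strings.HasPrefix(name, annotationPrefix) {
			continue
		}
		suffix := strings.TrimPrefix(name, annotationPrefix)
		switch {
		case suffix == "set-identifier":
			setIdentifier = value
		case recognised[suffix]:
			// Recognised annotations are consumed by their dedicated handling.
		default:
			// Anything else under the prefix becomes a provider-specific property.
			properties = append(properties, endpoint.ProviderSpecificProperty{Name: suffix, Value: value})
		}
	}
	return properties, setIdentifier
}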

Notes:

  • The existing allocation of annotations (i.e. whether they might be considered provider-specific or not) is somewhat unfortunate; but it is appreciated that any efforts to tidy up annotations are likely constrained by concerns regarding backwards-compatibility
  • It is assumed that the naming of provider-specific properties (i.e. whether they should be of the form provider/property or provider-property) is an internal implementation detail, and therefore the proposed renaming does not represent any backwards-incompatibility
  • The documented suggestion for adopting annotations of the form …/webhook-<custom-annotation> is considered unwise - provider-specific properties are, by definition, provider-specific, whereas the Webhook provider is a wrapper of providers, rather than a provider in and of itself (see: Moving providers out of tree #4347)
    • The renaming of Webhook related provider-specific properties (i.e. from webhook-property to webhook/property) has not been implemented - implementation would be trivial, but given the conclusion that provider-specific properties associated with a Webhook provider are nonsensical, such an implementation is not being initially proposed
      • It is not clear to me, but seems unlikely, that anyone will have adopted the Webhook framework and is relying upon provider-specific properties

Checklist

  • Unit tests updated
  • End user documentation updated

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 14, 2024
@k8s-ci-robot
Contributor

Hi @Dadeos-Menlo. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Nov 14, 2024
@Dadeos-Menlo
Author

I've just discovered the existence of CRD Source, which appears to involve the crd source interrogating Kubernetes custom-resource objects of type DNSEndpoint.

Unfortunately, the DNSEndpoint CRD exposes the providerSpecific properties, which is somewhat contrary to my previous assumptions that these represent "an internal implementation detail".

This design is very unfortunate and hopefully, given its "alpha" status, can still be addressed. A far better approach would be to have end-users apply annotations on the DNSEndpoint objects, in much the same way as end-users are expected to apply annotations to every other type of Kubernetes object that may be used as a source, and have the crdSource call getProviderSpecificAnnotations(…) passing the annotations from the DNSEndpoint object being considered.
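
A rough sketch of the shape such a change might take (the surrounding loop is condensed, and the variable names are illustrative):

// Within crdSource.Endpoints(…), once a DNSEndpoint object has been fetched:
providerSpecific, setIdentifier := getProviderSpecificAnnotations(dnsEndpoint.Annotations)
for _, ep := range dnsEndpoint.Spec.Endpoints {
	// Apply the annotation-derived configuration to every Endpoint the
	// DNSEndpoint describes, mirroring how the other sources treat annotations.
	ep.ProviderSpecific = append(ep.ProviderSpecific, providerSpecific...)
	if setIdentifier != "" {
		ep.SetIdentifier = setIdentifier
	}
}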

I suggest that part of the confusion surrounding these provider-specific annotations/properties is that Endpoint.ProviderSpecific fulfils two distinct roles:

  • Providing a mechanism to convey additional, potentially provider-specific, configuration from end-users to DNS provider[s] (i.e. as specified via annotations on sources)
  • Providing a mechanism for DNS provider[s] to convey additional configuration from the DNS record[s] obtained from interrogating the DNS Zone, to be used when updating/modifying any existing DNS records

This commingling of responsibilities can easily result in unintended consequences. The most readily reproducible is that specifying an unsupported provider-specific property, as suggested in the CRD source example (i.e. specifying a Cloudflare provider-specific property when using any provider other than Cloudflare), results in the generated DNS records being perpetually out-of-sync with the source object: the Endpoint generated from the source contains the provider-specific property, the Endpoint generated from the provider observing the pre-existing DNS record does not, and the system sees this as an inconsistency to be resolved, therefore deleting and recreating the DNS record ad infinitum.

@mloiseleur
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 26, 2024
@mloiseleur
Contributor

This design is very unfortunate and hopefully, given its "alpha" status, can still be addressed. A far better approach would be to have end-users apply annotations on the DNSEndpoint objects, in much the same way as end-users are expected to apply annotations to every other type of Kubernetes object that may be used as a source, and have the crdSource call getProviderSpecificAnnotations(…) passing the annotations from the DNSEndpoint object being considered.

I think I agree with you: it would be more consistent to use annotations, maybe in a v1alpha2 version of the CRD.

For the webhook annotation, it was introduced because some webhook providers need it. See for example mikrotik.

@mloiseleur
Contributor

The logic seems better to me with this change, thanks 👍 .
Would you please re-add the documentation on webhook and fix the CI?
/retitle feat: configure arbitrary provider-specific properties via annotations

@k8s-ci-robot k8s-ci-robot changed the title Add support for configuring arbitrary provider-specific properties via annotations feat: configure arbitrary provider-specific properties via annotations Nov 26, 2024
@Dadeos-Menlo
Author

For the webhook annotation, it was introduced because some webhook providers need it. See for example mikrotik.

I've reinstated the documentation of webhook annotations, but still consider this approach misguided.

The provider-properties associated with the mikrotik example cited should be named mikrotik-…, rather than webhook-…; admittedly an approach that is not feasible without the changes proposed here (NB: the changes to the internal representation of provider-specific properties proposed here necessitate the following change to this line:

--- provider.go.orig
+++ provider.go
@@ -94,7 +94,7 @@
 func getProviderSpecific(ep *endpoint.Endpoint, ps string) string {
        value, valueExists := ep.GetProviderSpecificProperty(ps)
        if !valueExists {
-               value, _ = ep.GetProviderSpecificProperty(fmt.Sprintf("webhook/%s", ps))
+               value, _ = ep.GetProviderSpecificProperty(fmt.Sprintf("webhook-%s", ps))
        }
        return value
 }

).

Imagine scenarios if/when the AWS provider was to be repackaged to use the Webhook provider architecture; one wouldn't expect to have to rename all of the existing annotations on one's Kubernetes objects. Not to mention the preclusion of deploying a single instance of external-dns, managing multiple providers deployed via the Webhook architecture, whose webhook-… prefixed annotations would clash/commingle.

@Dadeos-Menlo
Author

b21b0ff adds annotation support to the DNSEndpoint custom-resources; the DNSEndpoint.Spec.Endpoints configuration appears to be largely redundant given support for the various annotations.

The only potential benefit of supporting the DNSEndpoint.Spec.Endpoints configuration is that it facilitates the representation of multiple Endpoint objects via a single DNSEndpoint Kubernetes resource; but given the inconsistent pluralisation and the challenges of associating annotations with multiple Endpoints, it is likely more intuitive to require a 1-to-1 relationship between a DNSEndpoint resource and an Endpoint object, or at least to have one DNSEndpoint represent a single DNS name, with perhaps multiple record types (e.g. Endpoint objects representing "A" and "AAAA" records).

@mloiseleur
Contributor

The provider-properties associated with the mikrotik example cited should be named mikrotik-…, rather than webhook-…;

You're right. It makes sense to include the provider name instead of webhook.

@mloiseleur
Contributor

@mircea-pavel-anton, as the author of the mikrotik external provider, WDYT? Would it make sense for you?

@mircea-pavel-anton
Contributor

mircea-pavel-anton commented Dec 22, 2024

So there are a couple of things to address here.

  1. Modifying the DNSEndpoint CRD to implement provider-specific configuration via annotations as well
  2. Changing the prefix for annotations in Ingress and Service objects

First off, I am not so sure about changing the DNSEndpoint CRD to be honest. In my view, we're using annotations for Ingress and Service objects because the API that was already there had no built-in support for specifying such configurations. Thus we had to "extend" it in this manner. When we're creating custom CRDs for a new object, it makes sense to me to have all of the configuration options under the spec of that object.

I understand it from the point of view of consistency, both from the user experience point of view (though I would argue that using the CRD directly is much less common) as well as the developer experience one (the value that gets passed to the webhook provider ends up being different when we're using annotations), but I don't really like this solution.

Secondly, I do agree that changing the prefix for webhook providers is a good idea. This is also similar to this PR #4951 that I have already commented on.

I think that, more importantly, each webhook should have a customizable prefix specified upon installation and ONLY the annotations with that prefix should be passed in as provider specific configurations (optionally, in the form of the substring that comes AFTER the prefix).

What I mean by this is that there should be a configuration option when installing external-dns with a webhook provider, maybe something like:

...
provider:
  name: webhook
  webhook:
    name: custom-name-here
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
...

and then only unrecognized annotations that start with external-dns.alpha.kubernetes.io/custom-name-here- should be passed in to that specific webhook.
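
To make that concrete, a hedged sketch of the filtering step; the filterForProvider helper is hypothetical, and custom-name-here stands in for the configured name:

package webhook

import "strings"

// Hypothetical helper: retain only the annotations addressed to one named
// webhook provider, stripping the instance-specific prefix. providerName
// would come from the deployment configuration (e.g. "custom-name-here").
func filterForProvider(annotations map[string]string, providerName string) map[string]string {
	prefix := "external-dns.alpha.kubernetes.io/" + providerName + "-"
	filtered := map[string]string{}
	for name, value := range annotations {
		if strings.HasPrefix(name, prefix) {
			// The webhook then sees "comment" rather than "custom-name-here-comment".
			filtered[strings.TrimPrefix(name, prefix)] = value
		}
	}
	return filtered
}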

This will likely solve this issue I have also run into, which @Dadeos-Menlo also mentioned: mirceanton/external-dns-provider-mikrotik#140

As for the second part, i.e. "optionally, in the form of the substring that comes AFTER the prefix", I mean that when we are passing provider-specific annotations to a webhook provider via CRD configs vs annotations, the webhook itself receives different keys for the key: value pairs.

For example:

---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: complex-record
spec:
  endpoints:
    - dnsName: some.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4
      providerSpecific:
        - name: comment
          value: "This is a comment"

and

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "some.example.com"
    external-dns.alpha.kubernetes.io/ttl: "180"
    external-dns.alpha.kubernetes.io/webhook-comment: "This is a comment"
    

They might seem the same, but for the DNSEndpoint object the webhook receives comment as a provider-specific configuration, whereas with the annotations it receives webhook-comment. This makes it awkward to support both situations, like so: https://github.com/mirceanton/external-dns-provider-mikrotik/blob/main/internal/mikrotik/record.go#L138-L153
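
Condensed, that dual lookup looks roughly like the following; the package and helper name are illustrative, and the (string, bool) signature follows the diff quoted earlier in this thread:

package mikrotik

import "sigs.k8s.io/external-dns/endpoint"

// Look a property up under both the CRD-style key ("comment") and the
// annotation-style key ("webhook-comment").
func lookupProperty(ep *endpoint.Endpoint, name string) (string, bool) {
	if value, ok := ep.GetProviderSpecificProperty(name); ok {
		return value, true
	}
	return ep.GetProviderSpecificProperty("webhook-" + name)
}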


So TL;DR: yes, I do think custom annotation prefixes per provider are a good idea. I also think they should be customizable.

@kashalls do you have some thoughts on this topic? IIRC you were also impacted by this cloudflare annotation issue that kept records perpetually out of sync

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 22, 2024
@kashalls

kashalls commented Dec 22, 2024

do you have some thoughts on this topic? IIRC you were also impacted by this cloudflare annotation issue that kept records perpetually out of sync

Yes, my unifi webhook provider and pretty much all of the webhook providers are affected by this.

I am leaning more towards the idea that each provider can negotiate with external-dns to determine what features are allowed / disabled / unsupported. (Maybe we can provide a list of valid annotations that the webhook expects to handle?) We already support this using GET / to negotiate the domain filter. The documentation is extremely limited on what kind of response external-dns expects here, and that could be reworked to be more beneficial. Users can pass pretty much any annotation to external-dns, and external-dns would require it to show up again on the next reconciliation; in some providers we don't have the ability to add "notes" or "tags" to these records, so we have to handle it in the provider, which is not efficient.

As for the annotation specifics, I think adding a name so as to pass only the prefixed annotations would be a good start. The user deploying the provider should be the one to create the prefix key, as there might be some cases where the user wants to run two different instances of a provider. I do like this approach here because it gives the user the configuration option to choose their prefix keys for each provider.

...
provider:
  name: webhook
  webhook:
    name: custom-name-here
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
...

Perhaps we could do more with the provider negotiation endpoint? Apologies if this seems more like a ramble; I am currently on a holiday trip.
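
Purely as a strawman for that negotiation idea, a sketch of what an extended response might carry; none of these fields exist today, and both the type and field names are hypothetical:

package webhookapi

// Hypothetical extension of the payload exchanged via the webhook's GET /
// negotiation endpoint (used today to negotiate the domain filter).
type NegotiationResponse struct {
	DomainFilter []string `json:"domainFilter"`
	// Annotation suffixes this webhook expects to handle, so external-dns
	// could avoid forwarding properties the provider will not round-trip.
	SupportedAnnotations []string `json:"supportedAnnotations,omitempty"`
}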

@kashalls

@Dadeos-Menlo I'm back from vacation, do you need any help with this?

@Dadeos-Menlo
Author

@Dadeos-Menlo I'm back from vacation, do you need any help with this?

I'm not sure I understand; I don't feel that I need any specific help regarding this issue. Upon consideration of the existing implementation of the handling of provider-specific annotations, I concluded that it might be improved and hence have proposed the changes associated with this pull-request. I presume that the matter is now in the hands of the maintainers/reviewers to accept, reject, or otherwise comment on the changes proposed.

Regarding the wider issues raised surrounding the usage of annotations to represent provider-specific configuration, my opinions for what they're worth are:

  • I consider the assumption that Endpoint objects will only ever have known provider-specific configuration to be flawed
  • I am not persuaded by the argument that provider-specific configuration should form part of the specification of any DNSEndpoint style custom-resource-definition (CRD)
    • Whilst I have sympathy for the argument that expressing information via the specification is generally preferable to the use of annotations, I ultimately conclude that the inherently extensible nature of the provider-specific configuration, combined with the benefits of adopting a consistent approach to such configuration across all potential sources of DNS records, makes the annotation approach preferable
    • Any CRD representing a DNS resource record should likely be constrained to the essence of the record (e.g. name, record type, record data) with additional configuration expressed via annotations

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 2, 2025
@mircea-pavel-anton
Contributor

After some more thinking about it, I decided I actually agree with @Dadeos-Menlo

I think that a DNSEndpoint resource should only contain in the spec the bare minimum that is globally available across all providers and then anything else via annotations just like Services or Ingresses.

Thus, I think that this DNSEndpoint:

---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: complex-record
spec:
  endpoints:
    - dnsName: complex.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4
      providerSpecific:
        - name: comment
          value: "This is a comment"
        - name: address-list
          value: "1.2.3.1"
        - name: match-subdomain
          value: "true"
        - name: disabled
          value: "false"

Should actually look like this:

---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: complex-record
  annotations:
    external-dns.alpha.kubernetes.io/mikrotik-comment: "This is a comment"
    external-dns.alpha.kubernetes.io/mikrotik-address-list: "1.2.3.1"
    external-dns.alpha.kubernetes.io/mikrotik-match-subdomain: "true"
    external-dns.alpha.kubernetes.io/mikrotik-disabled: "false"
spec:
  endpoints:
    - dnsName: complex.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4

Then all annotations should be sent to the webhook and it is up to each webhook to decide which ones it should implement.

I think that this would end up greatly simplifying things. I am currently running into issues due to the different ways provider specific values are passed in from annotations or CRDs.

@mloiseleur any chance we can move this further? Is there any way I can help get this merged and released?

@mloiseleur
Contributor

@mircea-pavel-anton Yes, you can help by doing a first review of this PR.

cc @ivankatliarchuk for a second review

@ivankatliarchuk
Contributor

I am open to refactoring the CRDs and annotations, but I have concerns about the current proposal. This change has the potential to significantly impact many of us.

Instead of proceeding directly with the proposed changes, I suggest creating a dedicated issue to outline a comprehensive plan for CRD or annotation improvements. Pin the issue and define the release in which it will be available.

Currently, the DNSEndpoint resource allows for multiple endpoints and DNS names with varying TTLs, and the examples primarily demonstrate configuration through annotations, but the examples only take into account a case with a single target, so it is not clear how this is going to work with multiple endpoints and a bunch of DNS names. So the examples should contain the old world, if it is still supported, and the new world, and it would be nice to have an example for the case when there are multiple endpoints.

Is providerSpecific configuration still supported by the DNSEndpoint or not?

Even an example like this one: how should it look in the new world?

  spec:
    endpoints:
    - dnsName: auth-api-internal-eks.eu-west-1.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "true"
    - dnsName: auth-api-debug-internal-eks.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "1a0da404-d89d-11ef-88e7-4750e71f6ce9"
      - name: "aws/evaluate-target-health"
        value: "false"

It's quite common to have something like this:

    endpoints:
    - dnsName: attract-go-data-provider-internal-eks.eu-west-1.dev.example.com
      recordTTL: 60
      recordType: CNAME
      providerSpecific:
        ....
      targets:
      - eks-dev-ingress-internal.eu-west-1.dev.example.com
    - dnsName: attract-go-data-provider-internal-eks.http2.eu-west-1.dev.example.com
      recordTTL: 60
      recordType: CNAME
      providerSpecific:
        ....
      targets:
      - eks-staging-ingress-internal.eu-west-1.dev.example.com
    - dnsName: attract-go-data-provider-debug-internal-eks.eu-west-1.dev.example.com
      recordTTL: 60
      recordType: CNAME
      providerSpecific:
        ....
      targets:
      - traefik-ingress.eu-west-1.dev.example.com
    - dnsName: example-go-data-provider-debug-internal-eks.http2.eu-west-1.dev.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - nginx-ingress-internal.eu-west-1.dev.example.com
    - dnsName: example-go-data-provider-grpc-internal-eks.http2.eu-west-1.dev.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - eks-cluster-dev-ingress-internal.eu-west-1.dev.example.com

Given these concerns, I cannot currently support the proposed changes.

@ivankatliarchuk
Contributor

And this most likely will block multiple-targets support, or at least it is not clear how it is supposed to work with annotations: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/proposal/multi-target.md

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign szuecs for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 24, 2025
@Dadeos-Menlo
Author

The proposed changes have been rebased and backwards compatibility issues that were not initially fully appreciated have been addressed.

An initial assumption made was:

  • It is assumed that the naming of provider-specific properties (i.e. whether they should be of the form provider/property or provider-property) is an internal implementation detail, and therefore the proposed renaming does not represent any backwards-incompatibility

This initial assumption was flawed in two respects:

  • The internal representation of provider-specific properties is exposed to third-party implementations via the Webhook provider interface
  • The internal representation of provider-specific properties may be used when specifying DNSEndpoint custom-resource objects

The proposed changes seek to rename the provider-specific properties associated with the AWS and scaleway providers, in order to eliminate the use of / characters because they are not permitted within the name segment of Kubernetes annotations. If the naming of provider-specific properties can be made equal to the name segments of annotations prefixed with external-dns.alpha.kubernetes.io/ then the mapping of annotations to provider-specific properties may be simplified to the point of elimination. Were it the case that the naming of provider-specific properties was entirely an internal implementation detail then the renaming could be broadly achieved by eliminating the modification of annotation name segments when converting them to provider-property names within the getProviderSpecificAnnotations(…) function and modifying the respective provider implementations to expect the new names. However, the provider-specific property naming leaks outside of the implementation in the two areas identified above, namely the Webhook provider interface and the DNSEndpoint specification.

Based upon previous discussions, it is understood that the consensus is broadly that:

  • The use of external-dns.alpha.kubernetes.io/webhook-… prefixed annotations should be deprecated in favour of provider-specific property names prefixed with identifiers better representing the provider that has been provided via the Webhook interface
  • The use of the providerSpecific property of DNSEndpoint specifications should be deprecated in favour of specifying provider-specific configuration via annotations

Therefore, in the interests of backwards compatibility the following mitigations are proposed:

  • The renaming of annotations of the form external-dns.alpha.kubernetes.io/webhook-… into provider-specific properties named webhook/… is maintained
    • The transition plan is for Webhook providers to migrate away from external-dns.alpha.kubernetes.io/webhook-… based annotations to ones that better reflect the provider being offered - once such a transition is complete, the renaming of Webhook provider-specific properties becomes moot and may ultimately be removed
  • The specification of provider-specific properties via the providerSpecific property of DNSEndpoint objects is subject to conversion of the old naming (i.e. aws/… and scw/…) into the new form (i.e. aws-… and scw-…); see the sketch after this list
    • The transition plan is for support for the specification of provider-specific properties to be achieved via annotations, rather than via the providerSpecific property - once such a transition is complete, the processing of any providerSpecific property values becomes moot and may ultimately be removed
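
A minimal sketch of the conversion the second mitigation implies; the function name, package, and exact prefix list are illustrative:

package provider

import "strings"

// Convert legacy providerSpecific names from DNSEndpoint objects
// (e.g. "aws/failover") into the annotation-compatible form ("aws-failover").
func fixupLegacyName(name string) string {
	for _, prefix := range []string{"aws", "scw"} {
		if strings.HasPrefix(name, prefix+"/") {
			return prefix + "-" + strings.TrimPrefix(name, prefix+"/")
		}
	}
	return name
}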

The proposed changes are split into two separate commits: the first introduces new unit-tests to capture the pertinent behaviour of the existing implementation; the second implements the proposed changes, including appropriate modifications to the unit-tests introduced in the first commit, reflecting the (lack of) externally observable differences in behaviour.

@Dadeos-Menlo
Author

Dadeos-Menlo commented Jan 24, 2025

@ivankatliarchuk: in response to your queries/concerns:

I am open to refactoring the CRDs and annotations, but I have concerns about the current proposal. This change has the potential to significantly impact many of us.

The changes proposed under this pull-request should not have any impact on anyone not wishing to take any immediate action.

Currently, the DNSEndpoint resource allows for multiple endpoints and DNS names with varying TTLs, and the examples primarily demonstrate configuration through annotations, but the examples only take into account a case with a single target, so it is not clear how this is going to work with multiple endpoints and a bunch of DNS names. So the examples should contain the old world, if it is still supported, and the new world, and it would be nice to have an example for the case when there are multiple endpoints.

An observation to be made is that a single DNSEndpoint object, comprising an endpoints specification describing multiple Endpoint objects, for example:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: multiple
spec:
  endpoints:
  - dnsName: a.example.com
    recordType: A
    targets:
    - 10.0.0.1
  - dnsName: b.example.com
    recordType: A
    targets:
    - 10.0.0.2

is broadly equivalent to multiple DNSEndpoint objects, each describing a single Endpoint, for example:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
spec:
  endpoints:
  - dnsName: a.example.com
    recordType: A
    targets:
    - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
spec:
  endpoints:
  - dnsName: b.example.com
    recordType: A
    targets:
    - 10.0.0.2

If one were to introduce provider-specific configuration, via annotations, then one could either apply the same configuration to both Endpoint specifications using either representation or, if one required distinct provider-specific configuration for each Endpoint then one would have to adopt the latter representation; for example:

  • Identical provider-specific configuration applied to both Endpoint objects:
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: multiple
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
  - dnsName: a.example.com
    recordType: A
    targets:
    - 10.0.0.1
  - dnsName: b.example.com
    recordType: A
    targets:
    - 10.0.0.2

or

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
  - dnsName: a.example.com
    recordType: A
    targets:
    - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
  - dnsName: b.example.com
    recordType: A
    targets:
    - 10.0.0.2
  • Distinct provider-specific configuration applied to each Endpoint object:
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
  annotations:
    external-dns.alpha.kubernetes.io/property: value1
spec:
  endpoints:
  - dnsName: a.example.com
    recordType: A
    targets:
    - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
  annotations:
    external-dns.alpha.kubernetes.io/property: value2
spec:
  endpoints:
  - dnsName: b.example.com
    recordType: A
    targets:
    - 10.0.0.2

Is providerSpecific configuration still supported by the DNSEndpoint or not?

Yes; no immediate change is being made to the support/behaviour of the DNSEndpoint providerSpecific configuration.

Even an example like this one: how should it look in the new world?

  spec:
    endpoints:
    - dnsName: auth-api-internal-eks.eu-west-1.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "true"
    - dnsName: auth-api-debug-internal-eks.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
      - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "1a0da404-d89d-11ef-88e7-4750e71f6ce9"
      - name: "aws/evaluate-target-health"
        value: "false"

There is no new world; however, if you were to ask me what I consider to be a preferable approach to expressing such a configuration, then I would suggest the following:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: auth-api-internal-eks
  annotations:
    external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
    external-dns.alpha.kubernetes.io/aws-health-check-id: asdf1234-as12-as12-as12-asdf12345678
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  endpoints:
  - dnsName: auth-api-internal-eks.eu-west-1.example.com
    recordTTL: 60
    recordType: CNAME
    targets:
    - eks-cluster-dev-ingress-internal.example.com
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: auth-api-debug-internal-eks
  annotations:
    external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
    external-dns.alpha.kubernetes.io/aws-health-check-id: 1a0da404-d89d-11ef-88e7-4750e71f6ce9
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "false"
spec:
  endpoints:
  - dnsName: auth-api-debug-internal-eks.example.com
    recordTTL: 60
    recordType: CNAME
    targets:
    - eks-cluster-dev-ingress-internal.example.com

@mloiseleur
Contributor

@ivankatliarchuk does the last answer from @Dadeos-Menlo address your concerns?

@ivankatliarchuk
Contributor

ivankatliarchuk commented Feb 9, 2025

I still have concerns about whether this will work, as it only covers a single case. For example, we currently have dozens of kind: DNSEndpoint, each with multiple endpoints: and provider-specific configuration.
No examples show that the current approach is still supported; hence it is a breaking change.

Old approach: one DNSEndpoint to many Endpoints; new approach: one DNSEndpoint to one Endpoint.

@ivankatliarchuk
Contributor

Another concern

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"

Why is a CRD even required in this case? What is the value in creating a kind: DNSEndpoint? I might be missing something.

@ivankatliarchuk
Contributor

Do any of the kubernetes or kubernetes-sigs projects use a similar approach, so that I can better understand the proposal?

@mircea-pavel-anton
Contributor

Another concern

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"

Why is a CRD even required in this case? What is the value in creating a kind: DNSEndpoint? I might be missing something.

If I understand correctly, the point here is that the CRD should hold in its spec definition only the core variables that are the "base" spec, such as DNS name, TTL, record type and targets.

Any kind of provider specific stuff should be applied as annotations, impacting the entire CRD (all endpoints contained)

@ivankatliarchuk
Contributor

If I understand correctly, the point here is that the CRD should hold in its spec definition only the core variables that are the "base" spec, such as DNS name, TTL, record type and targets.

Any kind of provider specific stuff should be applied as annotations, impacting the entire CRD (all endpoints contained)

I don't agree with any of this. Create a proposal and try to explain why it should be the only way.

I'm not sure I have enough expertise to approve this pull request. But I've created a proposal in the same area, #5080, as there are definitely things to improve.

In the current pull request, from how it's documented, there is a breaking change, and a change in behaviour as well. So for me it's a no-go. I'd rather step away and leave it for maintainers with more expertise to make a decision.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 9, 2025
Add unit-test for CRD provider properties.
…rwise recognised, as provider properties.

Replace '/' characters in provider property names with '-'.
Add annotation support to the `DNSEndpoint` custom resource source.
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 10, 2025
@Dadeos-Menlo
Author

@ivankatliarchuk: I am surprised and disappointed that my previous responses do not appear to have allayed your concerns. Please find additional clarification below:

I still have concerns about whether this will work, as it only covers a single case. For example, we currently have dozens of kind: DNSEndpoint, each with multiple endpoints: and provider-specific configuration.
No examples show that the current approach is still supported; hence it is a breaking change.

I have explained previously that I consider that any potential backwards compatibility concerns have been addressed. If there are any particular scenarios that you can provide that demonstrate backwards incompatibility issues then I'd be happy to explore them further. However, I am currently unaware of any such scenarios.

Old approach: one DNSEndpoint to many Endpoints; new approach: one DNSEndpoint to one Endpoint.

I have explained previously that the proposed changes have no effect in this regard; prior to the proposed changes it is possible to specify zero or more Endpoint objects per DNSEndpoint object, as remains the case post the proposed changes.

It is simply not the case that there are "old" and "new" approaches, merely two approaches for expressing equivalent configurations.

Another concern

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"

Why is a CRD even required in this case? What is the value in creating a kind: DNSEndpoint? I might be missing something.

The Custom Resource Definition (CRD) is required in order to define a custom resource (i.e. kind: DNSEndpoint), upon which annotations may be applied. As to why anyone might consider it desirable to express what is essentially static configuration via the "external-dns" system, as opposed to configuring the DNS resource records in AWS' Route53 directly: I find it somewhat puzzling, but it does not detract from the fact that doing so is possible via the "external-dns" system. Perhaps you prefer the following representation:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  endpoints:
  - dnsName: subdomain.foo.bar.com
    recordType: CNAME
    targets:
    - other-subdomain.foo.bar.com

They're all equivalent; that's essentially the point.

I don't agree with any of this. Create a proposal and try to explain why it should be the only way.

My understanding is that this pull-request represents a proposal, and an implementation, and I consider that I have previously explained all of the issues that you raise.

In the current pull request, from how it's documented, there is a breaking change, and a change in behaviour as well.

Please provide a configuration demonstrating a breaking change.

So for me it's a no-go. I'd rather step away and leave it for maintainers with more expertise to make a decision.

You appear to be simultaneously expressing a veto and absolving yourself of involvement.

Ultimately, my understanding was that "external-dns" is an open-source project welcoming contributions. I observed an area that I felt could be improved and consequently, in the spirit of open-source development, I made a proposal for what I considered to be improvements. If they are not considered welcome, then so be it.

@mloiseleur
Contributor

mloiseleur commented Feb 18, 2025

In the current pull request, from how it's documented, there is a breaking change

@ivankatliarchuk Maybe I missed something, but on my side I do not see a breaking change in this PR. The CRD is not modified. All fields are kept.

My understanding is that this pull-request represents a proposal, and an implementation, and I consider that I have previously explained all of the issues that you raise.

@Dadeos-Menlo It's quite new, but we now have a template for proposals. The goal is to write down all the details and cover all the aspects of an impacting change.

To me, FTM, it's not very clear. This PR is about provider-specific annotations, yet it contains examples about expressing all fields with annotations, not only provider-specific ones.

About this kind of spec:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"

It's not completely equivalent to the other one. In the other one, the CRD fields can be checked when writing it and are checked by the API Server. It can be quite elaborate, see CEL validation. There is no such check with annotations, and also no protection against typos.

Even if it could, I'm not sure we should go that far and allow common fields to be set with annotations.

It's also possible to introduce a reference to another CRD, which would be specific to the provider. That's the choice of Cluster API, see here.

To say it differently: as a community, we probably need to really discuss and cover all the aspects induced by this kind of change before reviewing the code implementing it.

=> May I invite you to open a proposal ? or @mircea-pavel-anton ?

It would definitely help other external-dns and webhook maintainers to review and share their thoughts on this.

@ivankatliarchuk
Contributor

This approach and similar pull requests are a no-go for me. I could not approve it, but have no issue if other maintainers do approve it.

I may have been unclear in my previous explanation. I'm concerned about the plan to add numerous annotations on top of a Custom Resource Definition (CRD): not attaching them to other sources, but to the CRD itself. This isn't a common practice, I haven't seen similar solutions before, and it does not look right to me. Are we potentially looking at 100+ annotations? Isn't the purpose of CRDs to avoid this kind of situation? I do not think that a CRD should also have annotations from the same vendor. A CRD is an API; it has an OpenAPI spec. Say we publish this API spec: how do we document that it supports annotations as well?

A disclaimer: I have no bias towards CRDs or annotations; I think both approaches are great, and some k8s-sigs projects do provide an exceptional level of support for annotations.

I recognise that the current implementation of the CRD is most likely obsolete and that there is probably a lack of annotation support, but I am not sure.

This approach introduces technical debt, from my perspective. Specifically:

  • It involves refactoring while simultaneously adding new features.
  • It lacks end-to-end tests and any validations.
  • The current pull request adds 10+ or 20+ annotations. Are they all functional?
  • From the example, a kind: DNSEndpoint should only have annotations now…
  • The current pull request introduces a breaking change. Just look at the comment // Fixup legacy naming of some provider specific properties. It might not be a breaking change, but how come it became legacy?

What are the potential risks?

  1. The whole solution is implemented based on an example in which a DNSEndpoint only supports a single endpoint

from

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "true"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com

to

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"

But this is the reality: at the moment a DNSEndpoint supports multiple endpoints. How does this proposed approach even translate into the new world? It was one-to-many but became one-to-one -> is that not a breaking change?

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "false"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "SECONDARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdfasfasdf"
      - name: "aws/evaluate-target-health"
        value: "true"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "SECONDARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-33333"
      - name: "aws/evaluate-target-health"
        value: "false"
    recordType: A
    targets:
    - 10.0.0.6
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "true"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "false"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com
  2. What is the behaviour when there is a mix of CRD providerSpecific and annotations?
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain-annotation.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain-annotation.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "secondary"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdfafasdfas678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  endpoints:
  - dnsName: subdomain.foo.bar.com
    providerSpecific:
      - name: "aws/failover"
        value: "PRIMARY"
      - name: "aws/health-check-id"
        value: "asdf1234-as12-as12-as12-asdf12345678"
      - name: "aws/evaluate-target-health"
        value: "true"
    recordType: CNAME
    setIdentifier: some-unique-id
    targets:
    - other-subdomain.foo.bar.com
  3. How do we plan graduation of the CRD from one version to another, now that there are annotations set on the CRD/API spec? Bump them together? The CRD and annotations are coupled now.

  4. Why are annotations for specific providers not set on other sources instead?

  5. All examples only show a single way of doing things, with annotations on DNSEndpoint from now on, when they should show the old world and the new one with annotations. They should be at least equivalent. And there should explicitly be an example of how to translate a CRD with multiple endpoints into the annotations world.

@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 20, 2025