feat: configure arbitrary provider-specific properties via annotations #4875
Conversation
Hi @Dadeos-Menlo. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I've just discovered the existence of the CRD source, which appears to involve the crd source interrogating Kubernetes custom-resource objects of type DNSEndpoint. Unfortunately, provider-specific configuration is expressed directly within the `DNSEndpoint` object's spec, via `providerSpecific` properties. This design is very unfortunate and hopefully, given its "alpha" status, can still be addressed. A far better approach would be to have end-users apply annotations on the `DNSEndpoint` objects themselves.

I suggest that part of the confusion surrounding these provider-specific annotations/properties is that Endpoint.ProviderSpecific fulfils two distinct roles: conveying end-user configuration from sources to providers, and carrying provider-internal state used when computing record differences.

This commingling of responsibilities can easily result in unintended consequences; the most readily reproducible ones being that specifying an unsupported provider-specific property, such as suggested in the CRD source example (i.e. specifying a Cloudflare provider-specific property when using any provider other than Cloudflare), results in the DNS records generated being perpetually out-of-sync with the source object.
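For instance, a minimal sketch of that failure mode (the `cloudflare-proxied` annotation is real; the rest of the object is illustrative):

```yaml
# external-dns, running with a non-Cloudflare provider, still turns this
# annotation into a providerSpecific property on the desired endpoint;
# the provider never records it, so every reconciliation loop sees a diff.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "nginx.example.com"
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
```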
/ok-to-test
I think I agree with you: it would be more consistent to use annotations, maybe in a v1alpha2 version of the CRD. For the webhook annotation, it was introduced because some webhook providers need it. See for example mikrotik.
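For reference, a minimal sketch of that webhook annotation style (annotation names taken from the mikrotik example later in this thread; the Ingress itself is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "demo.example.com"
    # generic "webhook-" prefix; nothing identifies the mikrotik provider
    external-dns.alpha.kubernetes.io/webhook-comment: "This is a comment"
    external-dns.alpha.kubernetes.io/webhook-address-list: "1.2.3.1"
spec:
  defaultBackend:
    service:
      name: demo
      port:
        number: 80
```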
The logic seems better to me with this change, thanks 👍.
force-pushed from 2d6fb18 to c2041db
I've reinstated the documentation of webhook annotations, but still consider this approach misguided. The provider-properties associated with the mikrotik example cited should be named after the provider (e.g. `mikrotik-comment` rather than `webhook-comment`). Imagine scenarios if/when the AWS provider were to be repackaged to use the Webhook provider architecture; one wouldn't expect to have to rename all of the existing annotations on one's Kubernetes objects. Not to mention the preclusion of deploying a single instance of external-dns, managing multiple providers deployed via the Webhook architecture, whose provider-specific annotations would otherwise be indistinguishable.

b21b0ff adds annotation support to the `DNSEndpoint` custom-resource source.
You're right. It makes sense to include the provider name instead of webhook.
@mircea-pavel-anton as the author of the mikrotik external provider, WDYT? Would it make sense for you?
So there are a couple of things to address here.
First off, I am not so sure about changing the `DNSEndpoint` CRD to accept provider-specific configuration via annotations. I understand it from the point of view of consistency, both from the user-experience point of view (though I would argue that using the CRD directly is much less common) as well as the developer-experience one (the value that gets passed to the webhook provider ends up being different when we're using annotations), but I don't really like this solution.

Secondly, I do agree that changing the prefix for webhook providers is a good idea. This is also similar to this PR #4951 that I have already commented on. I think that, more importantly, each webhook should have a customizable prefix specified upon installation and ONLY the annotations with that prefix should be passed in as provider-specific configurations (optionally, in the form of the substring that comes AFTER the prefix).

What I mean by this is that there should be a configuration option when installing external-dns with a webhook provider, maybe something like:

```yaml
...
provider:
  name: webhook
  webhook:
    name: custom-name-here
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
...
```

and then only unrecognized annotations that start with that configured prefix would be passed to the webhook. This will likely solve this issue I have also run into, and that @Dadeos-Menlo also mentioned: mirceanton/external-dns-provider-mikrotik#140

As for the second part, i.e. "optionally, in the form of the substring that comes AFTER the prefix", I mean that when we are passing in provider-specific values to a webhook provider via CRD configs vs annotations, the webhook itself receives different `providerSpecific` property names.

For example:
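A sketch of the `DNSEndpoint` form of the comparison (values illustrative, mirroring the Ingress that follows):

```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: nginx-demo
spec:
  endpoints:
    - dnsName: some.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4
      providerSpecific:
        - name: comment
          value: "This is a comment"
```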
and the equivalent Ingress:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "some.example.com"
    external-dns.alpha.kubernetes.io/ttl: "180"
    external-dns.alpha.kubernetes.io/webhook-comment: "This is a comment"
```
Might seem the same, but for the `DNSEndpoint` object the webhook receives the bare `comment` property, whereas from the annotation it receives a `webhook`-prefixed name.

So TL;DR: yes, I do think custom annotation prefixes per provider are a good idea. I also think they should be customizable.

@kashalls do you have some thoughts on this topic? IIRC you were also impacted by this cloudflare annotation issue that kept records perpetually out of sync
Yes, my unifi webhook provider and pretty much all of the webhook providers are affected by this. I am leaning more towards the idea that each provider can negotiate with external-dns to determine what features are allowed / disabled / unsupported. (Maybe we can provide a list of valid annotations that the webhook expects to handle?) We already support a form of this via the webhook negotiation endpoint.

As for the annotation specifics, I think adding a name to specifically only pass the prefixed annotations would be a good start. The user deploying the provider should be the one to create the prefix key, as there might be some cases where the user wants to run two different instances of a provider. I do like this approach here because it gives the user the configuration option to choose their prefix keys for each provider.

```yaml
...
provider:
  name: webhook
  webhook:
    name: custom-name-here
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
...
```

Perhaps we could do more with the provider negotiation endpoint? Apologies if it seems more like a ramble, I am currently on a holiday trip.
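To illustrate the point about running two instances of the same provider, a sketch of what distinct prefix keys might look like (all values hypothetical; no such option exists today):

```yaml
# instance A: would receive annotations under .../mikrotik-site-a-*
provider:
  name: webhook
  webhook:
    name: mikrotik-site-a
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
---
# instance B: would receive annotations under .../mikrotik-site-b-*
provider:
  name: webhook
  webhook:
    name: mikrotik-site-b
    image:
      repository: ghcr.io/mirceanton/external-dns-provider-mikrotik
```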
@Dadeos-Menlo I'm back from vacation, do you need any help with this?
I'm not sure I understand; I don't feel that I need any specific help regarding this issue. Upon consideration of the existing implementation of the handling of provider-specific annotations, I concluded that it might be improved, and hence have proposed the changes associated with this pull-request. I presume that the matter is now in the hands of the maintainers/reviewers to accept, reject, or otherwise comment on the changes proposed.

Regarding the wider issues raised surrounding the usage of annotations to represent provider-specific configuration, my opinions, for what they're worth, are set out in the discussion above.
force-pushed from b21b0ff to 6753540
After some more thinking about it, I decided I actually agree with @Dadeos-Menlo. Thus, I think that this `DNSEndpoint`:
```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: complex-record
spec:
  endpoints:
    - dnsName: complex.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4
      providerSpecific:
        - name: comment
          value: "This is a comment"
        - name: address-list
          value: "1.2.3.1"
        - name: match-subdomain
          value: "true"
        - name: disabled
          value: "false"
```
should actually look like this:

```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: complex-record
  annotations:
    external-dns.alpha.kubernetes.io/mikrotik-comment: "This is a comment"
    external-dns.alpha.kubernetes.io/mikrotik-address-list: "1.2.3.1"
    external-dns.alpha.kubernetes.io/mikrotik-match-subdomain: "true"
    external-dns.alpha.kubernetes.io/mikrotik-disabled: "false"
spec:
  endpoints:
    - dnsName: complex.example.com
      recordTTL: 180
      recordType: A
      targets:
        - 1.2.3.4
```
Then all annotations should be sent to the webhook, and it is up to each webhook to decide which ones it should implement. I think that this would end up greatly simplifying things. I am currently running into issues due to the different ways provider-specific values are passed in from annotations or CRDs.

@mloiseleur any chance we can move this further? Is there any way I can help get this merged and released?
@mircea-pavel-anton Yes, you can help by doing a first review of this PR. cc @ivankatliarchuk for a second review
I am open to refactoring the CRDs and annotations, but I have concerns about the current proposal. This change has the potential to significantly impact many of us. Instead of proceeding directly with the proposed changes, I suggest creating a dedicated issue to outline a comprehensive plan for CRD or annotation improvements. Pin the issue and define the release in which it will be available.

Currently, the DNSEndpoint resource allows for multiple endpoints and DNS names with varying TTLs, and the examples primarily demonstrate configuration through annotations, but the examples only take into account a case with a single target, so it is not clear how this is going to work with multiple endpoints and a bunch of dnsNames. The examples should therefore cover the old world (if it is still supported) and the new world, and it would be nice to have an example for the case where there are multiple endpoints. How should even an example like this one look in the new world?

```yaml
spec:
  endpoints:
    - dnsName: auth-api-internal-eks.eu-west-1.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
        - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "true"
    - dnsName: auth-api-debug-internal-eks.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
        - eks-cluster-dev-ingress-internal.example.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "1a0da404-d89d-11ef-88e7-4750e71f6ce9"
        - name: "aws/evaluate-target-health"
          value: "false"
```

It's quite common to have something like this:

```yaml
endpoints:
  - dnsName: attract-go-data-provider-internal-eks.eu-west-1.dev.example.com
    recordTTL: 60
    recordType: CNAME
    providerSpecific:
      ....
    targets:
      - eks-dev-ingress-internal.eu-west-1.dev.example.com
  - dnsName: attract-go-data-provider-internal-eks.http2.eu-west-1.dev.example.com
    recordTTL: 60
    recordType: CNAME
    providerSpecific:
      ....
    targets:
      - eks-staging-ingress-internal.eu-west-1.dev.example.com
  - dnsName: attract-go-data-provider-debug-internal-eks.eu-west-1.dev.example.com
    recordTTL: 60
    recordType: CNAME
    providerSpecific:
      ....
    targets:
      - traefik-ingress.eu-west-1.dev.example.com
  - dnsName: example-go-data-provider-debug-internal-eks.http2.eu-west-1.dev.example.com
    recordTTL: 60
    recordType: CNAME
    targets:
      - nginx-ingress-internal.eu-west-1.dev.example.com
  - dnsName: example-go-data-provider-grpc-internal-eks.http2.eu-west-1.dev.example.com
    recordTTL: 60
    recordType: CNAME
    targets:
      - eks-cluster-dev-ingress-internal.eu-west-1.dev.example.com
```

Given these concerns, I cannot currently support the proposed changes.
And this will most likely block multi-target support, or at least it is not clear how it is supposed to work with annotations: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/proposal/multi-target.md
force-pushed from 6753540 to d39ff59
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
The proposed changes have been rebased, and backwards-compatibility issues that were not initially fully appreciated have been addressed. An initial assumption made was:

This initial assumption was flawed in two respects:

The proposed changes seek to rename the provider-specific properties associated with the AWS and scaleway providers, in order to eliminate the use of '/' characters within property names. Based upon previous discussions, it is understood that the consensus is broadly that:

Therefore, in the interests of backwards compatibility, the following mitigations are proposed:

The proposed changes are split into two separate commits: the first introduces new unit-tests to capture the pertinent behaviour of the existing implementation; the second implements the proposed changes, including appropriate modifications to the unit-tests introduced in the first commit, reflecting the (lack of) externally observable differences in behaviour.
@ivankatliarchuk: in response to your queries/concerns:
The changes proposed under this pull-request should not have any impact on anyone not wishing to take any immediate action.
An observation to be made is that a single `DNSEndpoint` object:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: multiple
spec:
  endpoints:
    - dnsName: a.example.com
      recordType: A
      targets:
        - 10.0.0.1
    - dnsName: b.example.com
      recordType: A
      targets:
        - 10.0.0.2
```

is broadly equivalent to multiple `DNSEndpoint` objects:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
spec:
  endpoints:
    - dnsName: a.example.com
      recordType: A
      targets:
        - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
spec:
  endpoints:
    - dnsName: b.example.com
      recordType: A
      targets:
        - 10.0.0.2
```

If one were to introduce provider-specific configuration via annotations, then one could either apply the same configuration to both endpoints:
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: multiple
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
    - dnsName: a.example.com
      recordType: A
      targets:
        - 10.0.0.1
    - dnsName: b.example.com
      recordType: A
      targets:
        - 10.0.0.2
```

or

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
    - dnsName: a.example.com
      recordType: A
      targets:
        - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
  annotations:
    external-dns.alpha.kubernetes.io/property: value
spec:
  endpoints:
    - dnsName: b.example.com
      recordType: A
      targets:
        - 10.0.0.2
```
or apply distinct configuration to each endpoint by splitting them into separate objects:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single1
  annotations:
    external-dns.alpha.kubernetes.io/property: value1
spec:
  endpoints:
    - dnsName: a.example.com
      recordType: A
      targets:
        - 10.0.0.1
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: single2
  annotations:
    external-dns.alpha.kubernetes.io/property: value2
spec:
  endpoints:
    - dnsName: b.example.com
      recordType: A
      targets:
        - 10.0.0.2
```
Yes; no immediate change is being made to the support or behaviour of the `providerSpecific` properties.
There is no new world; however, if you were to ask me what I consider to be a preferable approach to expressing such a configuration, then I would suggest the following:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: auth-api-internal-eks
  annotations:
    external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
    external-dns.alpha.kubernetes.io/aws-health-check-id: asdf1234-as12-as12-as12-asdf12345678
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  endpoints:
    - dnsName: auth-api-internal-eks.eu-west-1.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
        - eks-cluster-dev-ingress-internal.example.com
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: auth-api-debug-internal-eks
  annotations:
    external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
    external-dns.alpha.kubernetes.io/aws-health-check-id: 1a0da404-d89d-11ef-88e7-4750e71f6ce9
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "false"
spec:
  endpoints:
    - dnsName: auth-api-debug-internal-eks.example.com
      recordTTL: 60
      recordType: CNAME
      targets:
        - eks-cluster-dev-ingress-internal.example.com
```
@ivankatliarchuk does the last answer from @Dadeos-Menlo address your concerns?
I still have concerns about whether this will work, as it only covers a single case. For example, we currently have dozens of `DNSEndpoint` objects, each with many endpoints. Old approach: one DNSEndpoint to many endpoints; new approach: one DNSEndpoint to one endpoint.
Another concern: why is a CRD even required in this case? What is the value in creating a `DNSEndpoint` whose configuration is carried entirely by annotations?
Do any of the kubernetes or kubernetes-sigs projects use a similar approach, so that I can better understand the proposal?
If I understand correctly, the point here is that the CRD should hold in its spec only the generic DNS configuration. Any kind of provider-specific stuff should be applied as annotations, impacting the entire CRD (all endpoints contained).
I don't agree with any of this. Create a proposal and try to explain why it should be the only way. I am not sure I have enough expertise to approve this pull request, but I've created a proposal in the same area, #5080, as there are definitely things to improve. As for the current pull request: from how it's documented, there is a breaking change, and a change in behaviour as well. So for me it's a no-go. I'd rather step away and leave it for maintainers with more expertise to make a decision.
Add unit-test for CRD provider properties.
Treat all annotations prefixed with `external-dns.alpha.kubernetes.io/`, but not otherwise recognised, as provider properties. Replace '/' characters in provider property names with '-'. Add annotation support to the `DNSEndpoint` custom resource source.
force-pushed from d39ff59 to a79544a
@ivankatliarchuk: I am surprised and disappointed that my previous responses do not appear to have allayed your concerns. Please find additional clarification below:
I have explained previously that I consider that any potential backwards compatibility concerns have been addressed. If there are any particular scenarios that you can provide that demonstrate backwards incompatibility issues then I'd be happy to explore them further. However, I am currently unaware of any such scenarios.
I have explained previously that the proposed changes have no effect in this regard; prior to the proposed changes it is possible to specify zero or more endpoints within a single `DNSEndpoint` object, and the same remains true afterwards. It is simply not the case that there are "old" and "new" approaches, merely two approaches for expressing equivalent configurations.
The Custom Resource Definition (CRD) is required in order to define a custom resource (i.e. `DNSEndpoint`).
They're all equivalent; that's essentially the point.
My understanding is that this pull-request represents both a proposal and an implementation, and I consider that I have previously explained all of the issues that you raise.
Please provide a configuration demonstrating a breaking change.
You appear to be simultaneously expressing a veto and absolving yourself of involvement. Ultimately, my understanding was that "external-dns" is an open-source project welcoming contributions. I observed an area that I felt could be improved and consequently, in the spirit of open-source development, I made a proposal for what I considered to be improvements. If they are not considered welcome, then so be it.
@ivankatliarchuk Maybe I missed something, but on my side I do not see a breaking change in this PR. The CRD is not modified. All fields are kept.
@Dadeos-Menlo It's quite new, but we now have a template for proposals. The goal is to write down all the details and cover all the aspects of an impacting change. To me, FTM, it's not very clear. This PR is about provider-specific annotations, yet it contains examples about expressing all fields with annotations, not only provider-specific ones.

About this kind of spec:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
```

It's not completely equivalent to the other one. In the other one, the CRD fields can be checked when writing it and are checked by the API Server. Those checks can be quite elaborate; see CEL validation (sketch below). There is no such check with annotations, and also no protection against typos. Even if it could be done, I'm not sure we should go that far and allow common fields to be set with annotations. It's also possible to introduce a reference to another CRD, which would be specific to a provider. That's the choice of Cluster API, see here.

To say it differently: as a community, we probably need to really discuss and cover all the aspects induced by this kind of change before reviewing the code implementing it.

=> May I invite you to open a proposal? Or @mircea-pavel-anton? It would definitely help other external-dns and webhook maintainers to review and share their thoughts on this.
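To make the validation point concrete, a sketch of the kind of CEL rule a CRD schema can enforce on spec fields (rule contents hypothetical); nothing comparable exists for annotations:

```yaml
# Hypothetical excerpt from a DNSEndpoint CRD validation schema: the API
# server rejects non-conforming objects at admission time.
schema:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        x-kubernetes-validations:
          - rule: "!has(self.endpoints) || self.endpoints.all(e, !has(e.recordTTL) || e.recordTTL >= 0)"
            message: "recordTTL must be non-negative"
```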
I may have been unclear in my previous explanation. I'm concerned about the plan to add numerous annotations on top of a Custom Resource Definition (CRD), not attaching them to other sources but to the CRD itself. This isn't a common practice; I haven't seen similar solutions before, and it does not look right to me. Are we potentially looking at 100+ annotations? Isn't the purpose of CRDs to avoid this kind of situation? I do not think that a CRD should also carry annotations from the same vendor. A CRD is an API with an OpenAPI spec; suppose we publish that API spec, how do we document that it supports annotations as well?

A disclaimer: I have no bias towards CRDs or annotations. I think both approaches are great, and some k8s-sigs products do provide an exceptional level of support for annotations. I recognise that the current implementation of the CRD is most likely obsolete and that there is probably a lack of annotation support, but I am not sure. This approach introduces technical debt, from my perspective. Specifically: what are the potential risks?
Consider going from:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "true"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
```

to:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "PRIMARY"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdf12345678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
```

But this is the reality: at the moment `DNSEndpoint` supports multiple endpoints. How is the proposed approach to translate into the new world? It was one-to-many, but becomes one-to-one; is that not a breaking change?
```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
spec:
  endpoints:
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "false"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "SECONDARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdfasfasdf"
        - name: "aws/evaluate-target-health"
          value: "true"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "SECONDARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-33333"
        - name: "aws/evaluate-target-health"
          value: "false"
      recordType: A
      targets:
        - 10.0.0.6
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "true"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "false"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
```
And what happens when the two forms are combined on the same object:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplednsrecord
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "subdomain-annotation.foo.bar.com"
    external-dns.alpha.kubernetes.io/target: "other-subdomain-annotation.foo.bar.com"
    external-dns.alpha.kubernetes.io/set-identifier: "some-unique-id"
    external-dns.alpha.kubernetes.io/aws-failover: "secondary"
    external-dns.alpha.kubernetes.io/aws-health-check-id: "asdf1234-as12-as12-as12-asdfafasdfas678"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  endpoints:
    - dnsName: subdomain.foo.bar.com
      providerSpecific:
        - name: "aws/failover"
          value: "PRIMARY"
        - name: "aws/health-check-id"
          value: "asdf1234-as12-as12-as12-asdf12345678"
        - name: "aws/evaluate-target-health"
          value: "true"
      recordType: CNAME
      setIdentifier: some-unique-id
      targets:
        - other-subdomain.foo.bar.com
```
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Description

Support for provider-specific annotations, which manifest as `Endpoint.ProviderSpecific` properties, is currently restricted to a subset of pre-defined providers. This functionality should be available to all providers, without requiring special-case registration within the `getProviderSpecificAnnotations(…)` function.

The proposed changes include:

- Modifying the `getProviderSpecificAnnotations(…)` function so as to treat all annotations prefixed with `external-dns.alpha.kubernetes.io/`, but that are not otherwise recognised, as provider-specific properties

Notes:

- The separator used within provider-specific property names (i.e. `provider/property` versus `provider-property`) is an internal implementation detail, and therefore the proposed renaming does not represent any backwards-incompatibility
- Supporting `…/webhook-<custom-annotation>` is considered unwise - provider-specific properties are considered to be provider-specific, whereas the Webhook provider is considered to be a wrapper of providers, rather than a provider in and of itself (see: Moving providers out of tree #4347)
- Renaming `webhook-property` to `webhook/property` has not been implemented - implementation would be trivial, but given the conclusion that provider-specific properties associated with a Webhook provider are nonsensical, such an implementation is not being initially proposed
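A minimal sketch of the behaviour this change enables (object and values illustrative; the `aws-`-style property name follows the renaming discussed above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "app.example.com"
    # not otherwise recognised, so under this change it is passed through
    # to the provider as the property "aws-evaluate-target-health"
    external-dns.alpha.kubernetes.io/aws-evaluate-target-health: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
```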