rfc2136 continually removing then adding A records that have more than 1 target #1596
Comments
I created another new record, and the same problem is not happening:
|
Hm, this is strange. The records being removed have a zero-second TTL:
But the records being added have a 60-second TTL, which is something I configured a couple of days ago:
Is that wrong? |
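For anyone who wants to confirm what TTL and targets the authoritative server is actually returning (independently of what external-dns logs), here is a minimal Go sketch using the miekg/dns library; the record name and nameserver address are placeholders, not values taken from this issue.

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	// Placeholder values; substitute the record and authoritative server you are debugging.
	const record = "myservice.example.com."
	const nameserver = "192.0.2.53:53"

	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn(record), dns.TypeA)

	c := new(dns.Client)
	resp, _, err := c.Exchange(m, nameserver)
	if err != nil {
		log.Fatalf("query failed: %v", err)
	}

	// Print every A target and the TTL the server actually returned,
	// so it can be compared against what external-dns reports in its plan.
	for _, rr := range resp.Answer {
		if a, ok := rr.(*dns.A); ok {
			fmt.Printf("%s -> %s (TTL %d)\n", a.Hdr.Name, a.A, a.Hdr.Ttl)
		}
	}
}
```

If the server reports 60 while the delete side of the plan shows 0, that would suggest the "current records" lookup and the "desired records" computation disagree on TTL rather than on the targets themselves.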
The same is happening with the Cloudflare provider. Example:
This is an old issue; please see all the discussion in #992. The problem is not fixed. This issue can cause random major outages where DNS cannot be resolved at all. I had outages of more than a day, fortunately only in a personal cluster. |
Correct, I had to revert to route53 because of these outages in DNS name resolution; not pleasant. |
This should be fixed in 0.7.2, at least for the cloudflare provider; could you check? |
I will check hopefully next week. |
This is still broken for rfc2136 with CNAME records and the openshift-routes provider. |
Seems solved with cloudflare for me. |
Still having issues with CNAME records and cloudflare. |
@Elegant996 What version of external-dns are you using? Can you write a test to reproduce in cloudflare_test.go? |
Currently using The worst part about this issue is that Cloudflare eventually ignored the CNAME. It appears that after being added and removed enough times, it just assumed the record was non-existent and returned NXDOMAIN. The record had to be removed for a few hours and then manually re-added before it worked again. |
Hi folks, just to be clear: this ticket is in regard to the |
So, for my rfc2136 setup, what's happening now is that the DNS record itself has a TTL of 20 seconds, but
The same mismatch seems to happen for users that specify their own TTL with
The following record actually has a TTL of 20, but for some reason,
|
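The mismatch described in the comment above, where the record's real TTL is 20 seconds but something else is being compared against it, is exactly the kind of difference that makes a naive reconciliation loop issue an update on every pass. As a hedged illustration only, not the actual external-dns code, the Go sketch below models an endpoint comparison that includes TTL; with identical targets and a TTL of 20 on one side versus 0 on the other, it reports a change every time.

```go
package main

import "fmt"

// endpoint is a toy stand-in for a DNS record as a controller might model it.
// It is not the external-dns Endpoint type; the field names are illustrative.
type endpoint struct {
	Name    string
	Targets []string
	TTL     int
}

// needsUpdate does a naive field-by-field comparison, including the TTL.
func needsUpdate(desired, current endpoint) bool {
	if desired.Name != current.Name || desired.TTL != current.TTL {
		return true
	}
	if len(desired.Targets) != len(current.Targets) {
		return true
	}
	for i := range desired.Targets {
		if desired.Targets[i] != current.Targets[i] {
			return true
		}
	}
	return false
}

func main() {
	desired := endpoint{Name: "myservice.example.com", Targets: []string{"192.0.2.10"}, TTL: 20}
	// The zone lookup reports TTL 0, so the comparison flags a change on
	// every interval even though the target is exactly the same.
	current := endpoint{Name: "myservice.example.com", Targets: []string{"192.0.2.10"}, TTL: 0}

	fmt.Println("update needed:", needsUpdate(desired, current)) // prints: update needed: true
}
```

If the provider's view of the zone comes back with a different (or zero) TTL than the desired endpoint, this kind of comparison would explain an endless delete/add cycle even though nothing meaningful has changed.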
Here's an updated log containing the debug messages for this server:
|
I see that our DNS servers return two IPs in this record ( |
@stefanlasiewski: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I believe this bug only affects records that have two IP addresses. It appears that the rfc2136 provider has trouble reconciling those two records, and therefore it loops every interval ( |
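A second plausible mechanism for the two-IP case, again offered only as a hedged sketch rather than a claim about what the rfc2136 provider actually does, involves ordering: a zone transfer can return the two A targets in a different order than the desired endpoint lists them, and an order-sensitive slice comparison would then report a difference on every interval. Comparing targets order-insensitively, as below, avoids that.

```go
package main

import (
	"fmt"
	"sort"
)

// sameTargets compares two target lists without regard to order,
// which is how multi-target A records generally need to be compared.
func sameTargets(desired, current []string) bool {
	if len(desired) != len(current) {
		return false
	}
	d := append([]string(nil), desired...)
	c := append([]string(nil), current...)
	sort.Strings(d)
	sort.Strings(c)
	for i := range d {
		if d[i] != c[i] {
			return false
		}
	}
	return true
}

func main() {
	// Illustrative addresses only; the real record in this issue is elided above.
	desired := []string{"192.0.2.10", "192.0.2.11"}
	current := []string{"192.0.2.11", "192.0.2.10"} // same IPs, different order from the zone

	// An order-sensitive comparison would treat these as different and
	// re-issue a delete/add every interval; an order-insensitive one does not.
	fmt.Println("equivalent:", sameTargets(desired, current)) // prints: equivalent: true
}
```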
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with
Send feedback to sig-contributor-experience at kubernetes/community. |
This isn't affecting us any more as far as I can tell. I'll go ahead and close this issue. |
What happened:
External DNS with the rfc2136 provider is continually removing and then re-adding DNS records.
What you expected to happen:
I expect External DNS to add records once, and then only update them on occasion when it detects a change.
How to reproduce it (as minimally and precisely as possible):
Configure an ingress.
Wait a couple of days. Do normal things like upgrading external-dns to the next version of the chart. Come back and notice that existing DNS records are being updated every minute:
The resulting command line is:
Anything else we need to know?:
Environment:
External-DNS version (use external-dns --version): v0.7.1