Cache sync latency #5550
Some more details on this issue: Contour fills an additional cache (separate from controller-runtime's cache). As far as I've understood, that cache is filled gradually as the eventHandler processes incoming events.
A challenge is that the eventHandler added to the manager only implements the LeaderElectionRunnable interface. However, IMHO, it should also implement the hasCache interface so that the manager could detect when the cache is synced. In our cluster, which uses etcd on slow disks, the sequence of events is as follows: [...]
A trivial solution is to implement a [...]
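Purely as an illustration (this is not Contour's actual code, and type names like eventHandlerLike and cacheBackedHandler are made up), here is a sketch of the difference between a runnable that only implements controller-runtime's LeaderElectionRunnable and one that also exposes a cache the manager could wait on:

```go
package sketch

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// eventHandlerLike stands in for a runnable that only tells the manager
// whether it needs leader election; the manager gets no signal for when the
// handler's internal cache is warm.
type eventHandlerLike struct{}

func (e *eventHandlerLike) Start(ctx context.Context) error {
	// ... drain the event queue and fill the internal DAG cache ...
	<-ctx.Done()
	return nil
}

// NeedLeaderElection satisfies manager.LeaderElectionRunnable.
func (e *eventHandlerLike) NeedLeaderElection() bool { return true }

var _ manager.LeaderElectionRunnable = &eventHandlerLike{}

// cacheBackedHandler (hypothetical) additionally exposes a cache.Cache; a
// manager that looks for such a getter could wait for that cache to sync
// before treating the runnable as ready.
type cacheBackedHandler struct {
	eventHandlerLike
	c cache.Cache
}

func (h *cacheBackedHandler) GetCache() cache.Cache { return h.c }
```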
The client-go informers list their specific resources initially. Later the [...]. The same implementation is possible for the Contour DAG's internal cache, though it is not trivial: we would have to list the resources we are interested in and pass them to the contourHandler, and the handler itself should then pop each object from the list after it's handled. A new [...]. As a workaround, we can make [...]. This is just my interpretation of Contour's logic. If any of the maintainers could approve these suggestions, I'm okay with implementing them. Thanks in advance.
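A rough sketch of the drain-the-initial-list idea above, with hypothetical names (initialListTracker, markHandled); this is not Contour's API, just one way to report "synced" once every initially-listed object has been handled:

```go
package sketch

import "sync"

// initialListTracker records the keys returned by the initial list call and
// reports "synced" once every one of them has been handled at least once.
type initialListTracker struct {
	mu      sync.Mutex
	pending map[string]struct{}
}

func newInitialListTracker(keys []string) *initialListTracker {
	t := &initialListTracker{pending: make(map[string]struct{}, len(keys))}
	for _, k := range keys {
		t.pending[k] = struct{}{}
	}
	return t
}

// markHandled is called by the event handler after it has processed an
// object; it pops the key from the pending set.
func (t *initialListTracker) markHandled(key string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.pending, key)
}

// HasSynced mimics the informer-style readiness check: true once the initial
// list has been fully drained.
func (t *initialListTracker) HasSynced() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return len(t.pending) == 0
}
```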
I'm not familiar with this, so take the following with a grain of salt, but some time ago I saw that client-go and controller-runtime added a new parameter [...]. I did not attempt to find out more about it, to prove this is the case, or to see if there are other ways. Anyhow, it seemed a bit backwards to me, since from that boolean you still do not know whether that was the last [...].
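If the parameter in question is client-go's isInInitialList boolean (exposed through cache.ResourceEventHandlerDetailedFuncs in newer client-go releases) — which is my assumption, since the comment above doesn't name it — usage looks roughly like this, and, as noted, the flag alone does not tell you the initial list has been fully delivered:

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// detailedHandler shows the shape of the newer client-go handler whose
// AddFunc receives a bool indicating the object was part of the informer's
// initial list.
func detailedHandler() cache.ResourceEventHandler {
	return cache.ResourceEventHandlerDetailedFuncs{
		AddFunc: func(obj interface{}, isInInitialList bool) {
			if isInInitialList {
				// Part of the initial snapshot, but there is no signal here
				// that it is the *last* item of that snapshot.
				fmt.Println("initial object:", obj)
				return
			}
			fmt.Println("new object:", obj)
		},
	}
}
```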
Many thanks!
It would be absolutely fantastic if you do! 😄 🎉
What steps did you take and what happened:
[...] a "leader election lost" error message. [...] HTTPProxy objects were marked invalid with "secret for fallback certificate not found" or "target service not found".

What did you expect to happen:
The controller should wait for the cache to sync and only then start validating HTTPProxy objects.
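A minimal sketch of the kind of gating this expectation describes, using controller-runtime's public GetCache/WaitForCacheSync calls; startValidatingHTTPProxies is a placeholder, not a real Contour function:

```go
package sketch

import (
	"context"
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// waitThenValidate blocks until the manager's cache reports synced before any
// HTTPProxy validation starts.
func waitThenValidate(ctx context.Context, mgr manager.Manager, startValidatingHTTPProxies func()) error {
	if ok := mgr.GetCache().WaitForCacheSync(ctx); !ok {
		return fmt.Errorf("cache did not sync before the context was cancelled")
	}
	startValidatingHTTPProxies()
	return nil
}
```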
Anything else you would like to add:
I'm not sure if the cache sync latency is the issue. Another possibility is latency in reconciling the TLSCertificateDelegation resource.

Environment:
- Kubernetes version (use kubectl version): v1.23.3
- CPU: 24 (Skylake, IBRS)
- Arch: x86_64
- Memory: 62Gi
- OS (e.g. from /etc/os-release): CentOS 7