salt: Check Service CIDR has a route declared #3076
Conversation
When deploying some control plane services in the cluster (e.g. calico), we let such services interact with kube-apiserver using its Service IP (10.96.0.1 by default). However, the resolution of such virtual IPs, emulated using iptables (through kube-proxy), requires at a minimum that a route exists for the Service CIDR (a default route also works). We add a check to the `metalk8s_checks.node` helper, to have it run before bootstrap and expansion. See: kubernetes/kubernetes#57534
"ip route get" fails on recent kernel versions (with NETLINK_GET_STRICT_CHK enabled) if provided with a CIDR larger than /32 (for IPv4). This prevents our call to `network.get_route` from working if we give it the full Service IP range. To work around this limitation, we now:
- cast the `destination` into an `ipaddress.IPv4Network`
- get a route for the network's `network_address`
- if found, check the routing table and verify that the configured routes include the full network range

See: https://bugzilla.redhat.com/show_bug.cgi?id=1852038
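The first part of this workaround can be sketched with Python's standard `ipaddress` module (a sketch only; the CIDR value is the default Service CIDR used as an example, and the actual Salt module change is not reproduced here):

```python
import ipaddress

# Example value: the default Kubernetes Service CIDR.
destination = "10.96.0.0/12"

# Cast the destination into an IPv4Network...
network = ipaddress.IPv4Network(destination)

# ...then query a route for the network's first address: under
# NETLINK_GET_STRICT_CHK, "ip route get" only accepts a plain host
# address, not a CIDR wider than /32.
probe = str(network.network_address)
print(probe)  # 10.96.0.0
```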
In the previous commit, we introduced a validation of the full Service CIDR by looking at all routes defined in the system routing tables (obtained with `network.routes`). This change makes the previous approach (using `network.get_route`, similar to an `ip route get` invocation) unneeded, as we already compare networks inclusion-wise.
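Comparing networks inclusion-wise can be expressed with the standard `ipaddress` module alone (a minimal sketch, not the actual MetalK8s code):

```python
import ipaddress

service_cidr = ipaddress.IPv4Network("10.96.0.0/12")

# A default route covers any Service CIDR:
print(service_cidr.subnet_of(ipaddress.IPv4Network("0.0.0.0/0")))     # True

# A route narrower than the Service CIDR does not cover it:
print(service_cidr.subnet_of(ipaddress.IPv4Network("10.96.0.0/16")))  # False
```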
cd79fe2 to 1c347fc (Compare)
/approve
LGTM
I have successfully merged the changeset of this pull request. Goodbye gdemonet.
Component: salt, kubernetes, networking
Context:
When deploying some control plane services in the cluster (e.g. Calico), we let such services interact with kube-apiserver using its Service IP (10.96.0.1 by default). However, the resolution of such virtual IPs, emulated using iptables (through kube-proxy), requires at a minimum that a route exists for the Service CIDR (a default route also works). We add a check to the `metalk8s_checks.node` helper, to have it run before bootstrap and expansion.

See: kubernetes/kubernetes#57534
Summary:
Add a route check based on the `network.get_route` Salt execution method.

Acceptance criteria:
Nodes without a route for the `networks:service` range should bail out (both in bootstrap and expansion), similarly to the checks from "Properly fail if a conflicting package is installed and do not install conflicting packages" (#3050) or "salt: Add a check about conflicting services for MetalK8s" (#3069).
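A check meeting these criteria could look roughly like the following sketch. The function name and the shape of `routes` (loosely mimicking the output of Salt's `network.routes`) are assumptions for illustration, not the actual MetalK8s implementation:

```python
import ipaddress

def check_service_cidr(service_cidr, routes):
    """Fail if no configured route covers the full Service CIDR.

    `routes` is assumed to be a list of dicts with a "destination" key,
    loosely mimicking Salt's `network.routes` output.
    """
    network = ipaddress.IPv4Network(service_cidr)
    covered = any(
        network.subnet_of(
            ipaddress.IPv4Network(route["destination"], strict=False)
        )
        for route in routes
    )
    if not covered:
        raise ValueError(
            "No route declared for the Service CIDR ({}): deployment "
            "should bail out".format(service_cidr)
        )

# Bails out: only a LAN route is configured, no default route.
try:
    check_service_cidr("10.96.0.0/12", [{"destination": "192.168.1.0/24"}])
except ValueError as exc:
    print(exc)
```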