
Backport multiple pre-checks in 2.7 #3153

Merged
merged 11 commits into development/2.7 from improvement/backport-prechecks
Mar 3, 2021

Conversation

In Salt we have a function `jinja.load_map` that retrieves a specific
value from a map.jinja (the same as what we do in Salt SLS files), but this
function does not support saltenv.
Add a new function to retrieve information from `map.jinja` in the MetalK8s
context, with a hardcoded map.jinja path and the saltenv derived from the
version stored in the pillar.
See: saltstack/salt#59300

Cherry-picked from: 21a1cfa
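
For illustration, a minimal sketch of what such an execution module function could look like (the function name `get_from_map`, the pillar key, and the rendering approach are assumptions, not necessarily the actual MetalK8s implementation):

```python
# Sketch of a Salt execution module function; __pillar__ and __salt__ are
# injected by Salt's loader when the module runs on a minion.


def get_from_map(value, saltenv=None):
    """Retrieve `value` from MetalK8s' map.jinja for the right saltenv."""
    if saltenv is None:
        # Assumption: derive the saltenv from the MetalK8s version stored
        # in the pillar (e.g. "metalk8s-2.7.3").
        version = __pillar__["metalk8s"]["cluster_version"]
        saltenv = "metalk8s-{}".format(version)

    # Import the hardcoded map.jinja and dump the requested variable; the
    # default jinja|yaml pipeline of slsutil.renderer parses the result
    # back into a Python data structure.
    template = (
        '{{%- from "metalk8s/map.jinja" import {0} with context %}}\n'
        "{{{{ {0} | tojson }}}}".format(value)
    )
    return __salt__["slsutil.renderer"](string=template, saltenv=saltenv)
```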
@gdemonet added the kind:enhancement, topic:deployment, complexity:easy and topic:salt labels Feb 26, 2021
@gdemonet added this to the MetalK8s 2.7.3 milestone Feb 26, 2021
@gdemonet requested a review from a team February 26, 2021 16:37

@TeddyAndrieux left a comment


You want to backport this one as well, otherwise bootstrap restore is broken: #3107

CHANGELOG.md (outdated review thread, resolved)
TeddyAndrieux and others added 10 commits February 26, 2021 17:58
If some packages are installed on the host where we want to deploy
MetalK8s, the installation does not work (e.g. containerd.io).
Add a new function to check that those packages are not installed on the
host before deploying all the MetalK8s components.
NOTE: We do not automatically uninstall the packages from the host, since
they may have been installed for a good reason; we just ask the user to
remove them.

Fixes: #2992
Cherry-picked from: 49dcaa4
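
Such a pre-check might look roughly like the following execution module function (the function name, package list, and error handling are illustrative assumptions):

```python
from salt.exceptions import CommandExecutionError

# Illustrative list of packages that conflict with a MetalK8s deployment.
CONFLICTING_PACKAGES = ["containerd.io"]


def packages(conflicting_packages=None):
    """Raise if a conflicting package is installed on the host."""
    if conflicting_packages is None:
        conflicting_packages = CONFLICTING_PACKAGES
    errors = []
    for name in conflicting_packages:
        # pkg.version returns an empty string when the package is absent.
        if __salt__["pkg.version"](name):
            errors.append(
                "Package {} conflicts with the MetalK8s installation, "
                "please remove it.".format(name)
            )
    if errors:
        # Report the problem and let the user decide what to uninstall.
        raise CommandExecutionError("\n".join(errors))
    return True
```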
Another commit already added the check for conflicting packages installed
on the host, but we also need to exclude those packages, especially
`containerd.io`: if you try to install `containerd` or `runc`, it
will by default install `containerd.io`, since this package provides
`containerd` and `runc` and obsoletes them.
Today MetalK8s cannot work with `containerd.io`, so we need to
explicitly exclude this package in case a repository already configured
on the host provides it.

Cherry-picked from: 1ea12d0
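
The exclusion itself amounts to a yum-level exclude. As an illustration only (not necessarily how MetalK8s applies it), a global exclude in /etc/yum.conf keeps yum from ever selecting containerd.io, even when another configured repository provides it:

```ini
# Illustrative /etc/yum.conf excerpt (mechanism assumed): containerd.io is
# never considered as a candidate to satisfy `containerd` or `runc`.
[main]
exclude=containerd.io
```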
If some services are running on the host where we want to deploy MetalK8s,
the installation may not work properly (e.g. firewalld).
Add a new function to check that those services are not running on the
host before deploying all the MetalK8s components.
NOTE: We do not automatically stop the services on the host, since they
may have been started for a good reason; we just ask the user to
stop them.

Fixes: #3067
Cherry-picked from: 0acf814
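
A sketch of this service pre-check, mirroring the package check above (names and messages are again illustrative):

```python
from salt.exceptions import CommandExecutionError

# Illustrative list of services that conflict with a MetalK8s deployment.
CONFLICTING_SERVICES = ["firewalld"]


def services(conflicting_services=None):
    """Raise if a conflicting service is running on the host."""
    if conflicting_services is None:
        conflicting_services = CONFLICTING_SERVICES
    errors = []
    for name in conflicting_services:
        # service.status returns True when the service is currently running.
        if __salt__["service.status"](name):
            errors.append(
                "Service {} conflicts with the MetalK8s installation, "
                "please stop and disable it.".format(name)
            )
    if errors:
        raise CommandExecutionError("\n".join(errors))
    return True
```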
Currently in the bootstrap we use the local node as the bootstrap node and
also as the CA minion; add a check to make sure that the minion ID provided
in the bootstrap config matches the local minion ID.
NOTE: This check is only part of the script and not enforced elsewhere in
Salt: the `CA:minion` key inside the bootstrap configuration
exists to allow installing MetalK8s using another Salt minion
as the CA.

Cherry-picked from: 6814f27
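
In script form, the check boils down to comparing two identifiers. A minimal Python sketch (the real check lives in the bootstrap shell script; the config path, key names, and hostname-based minion ID are assumptions):

```python
import socket

import yaml


def check_ca_minion(bootstrap_config="/etc/metalk8s/bootstrap.yaml"):
    """Ensure the CA minion declared in the bootstrap config is the local one."""
    with open(bootstrap_config) as fd:
        config = yaml.safe_load(fd)
    ca_minion = config["ca"]["minion"]
    # Assumption: the Salt minion ID defaults to the host's FQDN.
    local_minion = socket.getfqdn()
    if ca_minion != local_minion:
        raise ValueError(
            "CA minion '{}' from {} does not match the local minion ID "
            "'{}'.".format(ca_minion, bootstrap_config, local_minion)
        )
```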
When deploying some control plane services in the cluster (e.g. calico),
we let such services interact with kube-apiserver using its Service IP
(10.96.0.1 by default).
However, the resolution of such virtual IPs, emulated using iptables
(through kube-proxy), requires at a minimum that a route exists for the
Service CIDR (a default route also works).
We add a check to the `metalk8s_checks.node` helper, to have it run
before bootstrap and expansion.

See: kubernetes/kubernetes#57534
Cherry-picked from: fc6f3e6
"ip route get" fails on recent kernel versions (with
NETLINK_GET_STRICT_CHK enabled) if provided with a CIDR larger than /32
(for IPv4).
This prevents our call to `network.get_route` from working if we give it
the full Service IP range.
To work around this limitation, we now:
- cast the `destination` into an `ipaddress.IPv4Network`
- get a route for the network's `network_address`
- if one is found, check the routing table and verify that the configured
routes include the full network range

See: https://bugzilla.redhat.com/show_bug.cgi?id=1852038
Cherry-picked from: c73aede
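
The coverage check described above can be reconstructed roughly as follows (route dicts are in the shape returned by Salt's `network.routes` on Linux; the function name and the rest are illustrative):

```python
import ipaddress


def route_covers(destination, routes):
    """Return True if the configured `routes` cover the whole `destination` CIDR."""
    dest_net = ipaddress.IPv4Network(destination)
    for route in routes:
        if route["destination"] == "0.0.0.0":
            # A default route covers any destination.
            return True
        route_net = ipaddress.IPv4Network(
            "{}/{}".format(route["destination"], route["netmask"])
        )
        # Python 3.7+: True only if the whole Service CIDR is included.
        if dest_net.subnet_of(route_net):
            return True
    return False


# Example: check the default Service CIDR against the system routing table.
# routes = __salt__["network.routes"](family="inet")
# route_covers("10.96.0.0/12", routes)
```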
In the previous commit, we introduced a validation of the full Service
CIDR by looking at all routes defined in the system routing tables
(obtained with `network.routes`).
This makes the previous approach (using `network.get_route`,
similar to an `ip route get` invocation) unneeded, as we already
compare networks inclusion-wise.

Cherry-picked from: 1c347fc
In the restore script we do not want to check this CA minion, as we override
the one in the bootstrap config (so we are sure the CA minion matches the
local minion ID).

Cherry-picked from: 016bea0
For some checks we do in "check local node", we need information from
the bootstrap configuration (e.g. the Service CIDR), so we need to restore
this bootstrap configuration before launching the "check local node" step.

Cherry-picked from: 6bd2434
Move the `ensure_archives_mounted` function from the bootstrap script to
the common script, and call it in the restore script.

See: e173f8b
Cherry-picked from: 4eb0897
@gdemonet force-pushed the improvement/backport-prechecks branch from f819bae to 37c1bcb on February 26, 2021 16:59

bert-e commented Feb 26, 2021

Integration data created

I have created the integration data for the additional destination branches.

The following branches will NOT be impacted:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6

You can set option create_pull_requests if you need me to create
integration pull requests in addition to integration branches, with:

@bert-e create_pull_requests


@gdemonet

/approve


bert-e commented Feb 26, 2021

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • one peer

Peer approvals must include at least 1 approval from the following list:

The following options are set: approve

@TeddyAndrieux

LGTM, just shouldn't we remove the entry from the 2.8.0 changelog, since that one is not released yet?


gdemonet commented Mar 1, 2021

> LGTM, just shouldn't we remove the entry from the 2.8.0 changelog, since that one is not released yet?

Oh good point, right, will do this change.



bert-e commented Mar 2, 2021

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • one peer

Peer approvals must include at least 1 approval from the following list:

The following options are set: approve


bert-e commented Mar 2, 2021

In the queue

The changeset has received all authorizations and has been added to the
relevant queue(s). The queue(s) will be merged in the target development
branch(es) as soon as builds have passed.

The changeset will be merged in:

  • ✔️ development/2.7

  • ✔️ development/2.8

  • ✔️ development/2.9

The following branches will NOT be impacted:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6

There is no action required on your side. You will be notified here once
the changeset has been merged. In the unlikely event that the changeset
fails permanently on the queue, a member of the admin team will
contact you to help resolve the matter.

IMPORTANT

Please do not attempt to modify this pull request.

  • Any commit you add on the source branch will trigger a new cycle after the
    current queue is merged.
  • Any commit you add on one of the integration branches will be lost.

If you need this pull request to be removed from the queue, please contact a
member of the admin team now.

The following options are set: approve


bert-e commented Mar 3, 2021

I have successfully merged the changeset of this pull request
into the targeted development branches:

  • ✔️ development/2.7

  • ✔️ development/2.8

  • ✔️ development/2.9

The following branches have NOT changed:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6

Please check the status of the associated issue None.

Goodbye gdemonet.

@bert-e merged commit 48edee4 into development/2.7 Mar 3, 2021
@bert-e deleted the improvement/backport-prechecks branch March 3, 2021 10:30