Solutions management #2279
Conversation
Hello alexandre-allard-scality, my role is to assist you with the merge of this pull request. Status report is not available.
Conflict: a conflict has been raised during the creation of the integration branch, so I have not created it. Here are the steps to resolve this conflict:
$ git fetch
$ git checkout -B w/2.5/feature/solutions-management origin/development/2.5
$ git merge origin/feature/solutions-management
$ # <intense conflict resolution>
$ git commit
$ git push -u origin w/2.5/feature/solutions-management
scripts/solutions.sh
Outdated
return 1
fi

if ! jq -e ".[] | select(.version == \"$VERSION\")" \
Installing a package just for this single line seems a bit heavy... But that's OK I guess; jq is a nice tool to have at hand, and it could even make sense to put it in the metalk8s-utils container image (see #2156).
FWIW, here's how you could have done it without external deps:
if ! python -c "import json, sys; \
available = any(v['version'] == '$VERSION' for v in json.loads('$solution_data')); \
sys.exit(0 if available else 1)"; then
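For reference, the same availability check can be written as a small standalone Python function rather than an inline `python -c` one-liner. This is a minimal sketch: the function name and sample data are illustrative, and it assumes (as the shell snippet above does) that the data is a JSON array of objects carrying a `version` key.

```python
import json


def version_available(solution_data: str, version: str) -> bool:
    """Return True if `version` appears in the JSON list of solution entries.

    Mirrors the shell snippet above: `solution_data` is assumed to be a
    JSON array of objects, each with a "version" key.
    """
    entries = json.loads(solution_data)
    return any(entry["version"] == version for entry in entries)


# Example usage with hypothetical data:
data = '[{"version": "1.0.0"}, {"version": "1.1.0"}]'
print(version_available(data, "1.1.0"))  # True
print(version_available(data, "2.0.0"))  # False
```

Keeping the logic in a named function makes it easier to test than a quoted one-liner, at the cost of an extra file.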
Yeah, it's a bit heavy, but I think jq can still be useful to parse JSON as I did here, if one wants to read the content of a ConfigMap for example.
I've removed it and used the python solution.
A few comments, but I think you already know most of them.
}

delete_solution() {
[[ ${NAMESPACE:-} ]] || NAMESPACE=$NAME
All namespaces used for a Solution could be retrieved using a label selector, like in the Salt module. But IMO we don't need to do it in this PR.
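The label-selector idea mentioned above can be sketched as a small filtering function over Namespace objects. This is an illustrative sketch only: the label key used here is hypothetical, not the one the Salt module actually sets, and the input mimics the dict shape of Kubernetes API objects.

```python
def namespaces_for_solution(namespaces, solution_name,
                            label_key="solutions.metalk8s.scality.com/name"):
    """Return the names of namespaces labeled as belonging to `solution_name`.

    `namespaces` is a list of Namespace-like dicts (as returned by the
    Kubernetes API); `label_key` is a hypothetical example label.
    """
    return [
        ns["metadata"]["name"]
        for ns in namespaces
        if ns["metadata"].get("labels", {}).get(label_key) == solution_name
    ]


# Example with sample (hypothetical) Namespace objects:
sample = [
    {"metadata": {"name": "env-a",
                  "labels": {"solutions.metalk8s.scality.com/name": "my-solution"}}},
    {"metadata": {"name": "env-b", "labels": {}}},
]
print(namespaces_for_solution(sample, "my-solution"))  # ['env-a']
```

With `kubectl`, the equivalent would be a `-l <label-key>=<solution-name>` selector on `kubectl get namespaces`.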
Yep, but the user can have two namespaces in the same environment with the same Solution, and only want to delete a specific Solution, not both.
Yes indeed, but IMO by default we should remove all namespaces, even if the namespace name is not equal to the environment name. Anyway, let's not take care of this in the bash script.
Some objects are required for MetalK8s to manage Solutions, notably a Namespace and ConfigMap for Admin UIs, and the `Environment` CRD. Issue: #1852
A new field, currently named `active` (may be a poor choice, we'll see with first user interactions, `activeVersions` could be more explicit), is introduced to set which version of a Solution should be used for the cluster-wide components of this Solution (i.e. the Admin UI and CRDs for now). Validation of the file format is added, using the usual `kind` and `apiVersion` fields. Later improvements could use some OpenAPI schema. Fixes: #1582
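The validation described above, using the usual `kind` and `apiVersion` fields, can be sketched as a simple check before parsing the rest of the file. The expected values below are illustrative assumptions, not necessarily the actual identifiers used by MetalK8s.

```python
# Hypothetical expected identifiers; the real values may differ.
EXPECTED_KIND = "SolutionsConfiguration"
EXPECTED_API_VERSION = "solutions.metalk8s.scality.com/v1alpha1"


def validate_solutions_config(config: dict) -> None:
    """Raise ValueError unless `config` declares the expected kind and
    apiVersion (a sketch of the format validation described above)."""
    if config.get("kind") != EXPECTED_KIND:
        raise ValueError("unexpected kind: %r" % config.get("kind"))
    if config.get("apiVersion") != EXPECTED_API_VERSION:
        raise ValueError(
            "unexpected apiVersion: %r" % config.get("apiVersion"))


# Example usage: a valid config passes silently, an invalid one raises.
validate_solutions_config({
    "kind": "SolutionsConfiguration",
    "apiVersion": "solutions.metalk8s.scality.com/v1alpha1",
})
```

An OpenAPI schema, as mentioned above, would subsume this check while also validating the rest of the document.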
The previous approach relied on a ConfigMap, which poses numerous issues:
- it allows one to lie to the formula, potentially causing further problems down the line
- the CM cannot always be up-to-date with the actual system state (due to errors when updating it, or manual operations outside of Salt)
We change this approach by including basic heuristics in the `metalk8s_solutions` module, which can be refined in the future. Listing of mounted Solutions is based on `mount.available` and a naive filtering of the mountpoints (necessarily under /srv/scality). Listing of active Solution versions is done by looking for UI Service objects in the `metalk8s-solutions` namespace. This approach relies on labels that we don't set ourselves, which is brittle, and does not account for Solutions that may want to ship without an Admin UI. Future iterations could improve those methods, but we are convinced it is still safer than the ConfigMap method used before. Issue: #1852
Making a Solution archive "available" to the cluster was split into two formulas: `metalk8s.solutions.mounted` and `.configured`. We group them into a single one, called `.available`. We also remove the `.unmounted` and `.unconfigured` formulas, since the source of truth is entirely held by the configuration file (/etc/metalk8s/solutions.yaml). A single formula is responsible for applying the desired state described in this configuration, adding and removing Solution archives from the cluster.
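To make the single-formula approach concrete, the configuration file it applies could look like the following sketch. The `active` mapping is the field discussed in this PR; the other field names (`archives`, the `kind`/`apiVersion` values) are illustrative assumptions, not the actual schema.

```yaml
# Hypothetical sketch of /etc/metalk8s/solutions.yaml
kind: SolutionsConfiguration                         # assumed value
apiVersion: solutions.metalk8s.scality.com/v1alpha1  # assumed value
archives:                                            # assumed field name
  - /srv/scality/releases/my-solution-1.0.0.iso
  - /srv/scality/releases/my-solution-1.1.0.iso
active:
  my-solution: 1.1.0   # or 'latest' to pick the highest available version
```

Since this file is the single source of truth, removing an archive from the list (or a Solution from `active`) is what drives removal from the cluster; no separate `.unmounted`/`.unconfigured` formula is needed.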
Once a Solution is "available" (i.e. mounted and its images exposed by the cluster registry), its cluster-wide components can be deployed. We rely on the `active` mapping of active versions in the config file to know which components to deploy or remove. Note that one can set a desired version to 'latest' in this mapping, which the orchestration will resolve to the highest version available for that Solution (in SemVer terms). Issue: #1852
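The 'latest' resolution described above can be sketched as picking the maximum of the available versions under a SemVer-style ordering. This is a minimal sketch, not the actual orchestration code: it assumes plain `X.Y.Z` version strings and ignores pre-release tags.

```python
def resolve_latest(versions):
    """Return the highest version from a list of 'X.Y.Z' strings.

    A naive SemVer comparison: split on dots and compare numerically,
    so that '1.10.0' correctly ranks above '1.2.3'.
    """
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))


# Example: lexicographic comparison would wrongly pick '1.2.3' here.
print(resolve_latest(["1.0.0", "1.10.0", "1.2.3"]))  # 1.10.0
```

The numeric tuple comparison is the whole point: comparing the raw strings would sort '1.10.0' before '1.2.3'.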
We used `display_name` and `machine_id` to differentiate the expected consumers (user or machine). For simplicity (thanks to slaperche-scality), we rename them to `name` and `id` respectively. KISS!
Use Salt to read/write the Solutions config file
The Environment CRD is no longer needed, as we directly use a ConfigMap in each namespace.
Add a Salt module to read the Solution environment ConfigMap, and retrieve it in the Solutions ext_pillar.
We re-instate the ConfigMap as both a way to track which version was made available (through the `deploy-components` orchestrate) and a way to share the information with MetalK8s UI users.
The setup of "environment" Namespaces, and of all the resources required to run a Solution instance in them, is becoming too complex to be properly managed by the Solution Admin UIs, not to mention the authorization requirements on users of said Admin UIs for it to work. Instead, Admin UIs will now be deployed once per environment, making it easier to keep them in sync with the same Operator version.
Solution ISOs can now contain a `config.yaml` file to describe the few specifics of its Operator and UI. We expose this information in the `metalk8s:solutions:available` pillar, for later consumption in orchestrations.
Salt is now able to "prepare" environments, based on two sources of information:
- the environment *Namespace*s (denoted by some well-known labels) and their 'metalk8s-environment' *ConfigMap*s (which list the Solution versions to deploy in this environment)
- the *SolutionConfig*s stored in each Solution ISO, which allow Salt to know what should be deployed
Both sources of information are stored in the pillar.
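As an illustration of the first source of information, a 'metalk8s-environment' ConfigMap could look like the sketch below. The namespace name and data layout (one key per Solution, mapping to the version to deploy) are assumptions for illustration, not the guaranteed schema.

```yaml
# Hypothetical sketch of a 'metalk8s-environment' ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-environment
  namespace: my-environment     # an environment Namespace (assumed name)
data:
  my-solution: 1.1.0            # Solution name -> version to deploy
```

Salt reads such ConfigMaps (together with the SolutionConfigs from the ISOs) into the pillar, and the orchestration reconciles from there.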
Let's be consistent with the script names (we don't have bootstrap-manager, downgrade-manager, ...); the "-manager" suffix is not really useful anyway. Refs: #2277
We add two helpers:
- the first one to retrieve the saltenv
- the second one to retrieve the local minion ID
These functions will be used in the script managing the Solutions. Refs: #2277
We no longer try to reconfigure the registry by including metalk8s.repo.installed in metalk8s.solutions.available, which is wrong: mounting Solutions alters the pillar, but the changes are not taken into account inside the repo SLS, because Jinja rendering has already been done at this point, so the registry gets reconfigured with an outdated configuration. We now do this in 2 steps: first mount/unmount Solutions, then reconfigure the registry. Refs: #2277
Add a Salt no-op to avoid having an empty SLS file after rendering, which would otherwise return an exit code of 2 when using --retcode-passthrough, breaking the calling script. Refs: #2277
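Such a no-op can be written with Salt's standard `test.nop` state. A minimal sketch (the state ID is arbitrary):

```yaml
# A Salt no-op state: ensures the rendered SLS is never empty, so
# `salt-call ... --retcode-passthrough` does not exit with code 2.
Nothing to do:
  test.nop: []
```

This is useful whenever Jinja conditionals can render every real state away, leaving an otherwise empty file.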
Integration data created: I have created the integration data for the additional destination branches.
The following branches will NOT be impacted:
You can set option
Waiting for approval: The following approvals are needed before I can proceed with the merge:
Peer approvals must include at least 1 approval from the following list:
The following reviewers are expecting changes from the author, or must review again:
@@ -0,0 +1,116 @@
#
Maybe this state should be renamed, because it does not really make the Solutions available now :/ Maybe something like mounted.sls, as for MetalK8s archives.
Yeah, we'll change that later; we must merge this ASAP first.
LGTM
/approve
Build failed: The build for commit did not succeed in branch w/2.6/feature/solutions-management. The following options are set: approve
Build failed: The build for commit did not succeed in branch w/2.5/feature/solutions-management. The following options are set: approve
Build failed: The build for commit did not succeed in branch feature/solutions-management. The following options are set: approve
./solutions.sh import --archive </path/to/solution1.iso> \
    --archive </path/to/solution2.iso>

Unimport a Solution
Unimport is probably not the right term, I would simply say remove
Yes, it's not the right term; it's not even a term. :)
Anyway, this stuff will not stay as-is and will be changed soon, when we integrate this script into a centralized CLI.
About "will be changed soon": "soon" can mean far future or never in IT 😛 (except if you mean it'll be tackled in your next PR)
Almost my next PR, it's not "maybe, someday... if we have time" :)
In the queue: The changeset has received all authorizations and has been added to the queue. The changeset will be merged in:
The following branches will NOT be impacted:
There is no action required on your side. You will be notified here once the changeset has been merged. IMPORTANT: Please do not attempt to modify this pull request.
If you need this pull request to be removed from the queue, please contact a maintainer. The following options are set: approve
I have successfully merged the changeset of this pull request
The following branches have NOT changed:
Please check the status of the associated issue: None. Goodbye alexandre-allard-scality.
Component: salt, scripts, solutions
Context:
Summary:
Acceptance criteria: Ability to import, activate and deploy a Solution in an Environment using the CLI
solutions.sh
Closes: #2277