From ad4c3d31994908ca784959e936aa8ef1e3f0ea7c Mon Sep 17 00:00:00 2001
From: Luuk van Venrooij <11056665+seriva@users.noreply.github.com>
Date: Tue, 9 Aug 2022 05:31:56 -0700
Subject: [PATCH] Merge develop into 2.0 (#3250)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Document upgrade of Red Hat / CentOS 7 to v8.x (#3109)
* Migrate registry volume to a named one (#3118)
* Fix dnf repoquery checking only the latest kube* packages (#3123) (#3126)
* Switch to RHEL 8.4 for longer support (#3117) (#3129)
  Co-authored-by: Tomasz Baran <46519524+to-bar@users.noreply.github.com>
* [develop] Fetch missing packages, add stderr handling (#3132)
  * Add stderr handling to repoquery and check if there are missing packages
  * Skip missing_packages if dependencies, handle output of dnf download
  * Unit tests fix
  * Changes after review, update libmodulemd to the latest
  * Documentation update
  * Remove duplicated run(), fix for offline mode
  * Tests fix
* Lifecycle update (#3135)
  * Lifecycle update
  * Mark 1.2 as out of support
* Bump epicli version in develop (#3134)
  Co-authored-by: przemyslavic <>
* Add policycoreutils package (#3139)
* Add allow_mismatch flag for ceph/ceph:v16.2.7 image (#3136) (#3138)
* [2.0.1] Filebeat upgrade to 7.12.1 (#3086)
  * Filebeat update to 7.12.1
  * Add missing task names in upgrade/filebeat.yml
  * Update sha256
  * Update changelogs
  * Changelog change back to 2.0.1
  Co-authored-by: przemyslavic <43173646+przemyslavic@users.noreply.github.com>
* [2.0.1] k8s-modules: update documentation (#3146)
  Result of spike: #2982
  Signed-off-by: cicharka
* Support 'epicli upgrade' for RHEL/AlmaLinux 8 (#3154)
  * Upgrade only to RHEL 8.4
  * Disable legacy containerd plugin to avoid instance auto-recovery on AWS
  * Reboot system after update only when needed
  * Update Leapp metadata file
  * Enable yum repos after OS is updated
  * Use target option
  * Handle PostgreSQL packages
  * Enable upgrade mode for RedHat OS family
  * Add releasever parameter
  * Fix update of libmodulemd package
  * Remove releasever DNF variable
  * Suspend HealthCheck process on AWS
  * Install python3-psycopg2 package also for RedHat family
  * Add ntsysv package for Azure
  * Prevent auto-upgrade of repmgr10-4.0.6-1.el7
  * Update changelog
* [2.0.1] Migration to OpenSearch (#3093)
  * All in one commit - from PR #2983
  * Ansible-lint adjustments
  * Remove leftovers
  * Tests fix
  * Adjust download-requirements
  * HA fix, improvements
  * Fix defaults, schema
  * Fixes in migration to opensearch and opensearch dashboards, add cleanup
  * Improvements
  * Changes after review
  * Update doc, changelog and schema after review
  * Spec tests update, rebase + changes after review
  * Fix defaults
  * Fix unittest
  * Fix backup/restore
  * Replace kibana with opensearch_dashboards
  * Fix apply mode, cleanup, add opensearch spec test
  * Disable upgrade of logging/opensearch, cleanup and rename vars
* [2.0.1] Add ARM architecture support for AlmaLinux 8.4 (#3151)
  * Merge ARM installation
  * Add repositories ids
  * Modify SHAs
  * Merge in develop changes
  * Add policycoreutils to packages list
  * Add docs and lua package
  * Modification after review
* Fix Config.py for supported architectures (#3175)
* [2.0.1] Allow temporary credentials (session token parameter) (#3076)
* filebeat: update template for new version (#3141)
  * Fix after bumping up filebeat version (PR: #3086), related to the k8s_as_cloud_service flag
* Source containerd version and allow downgrade (#3170)
* [2.0.1] Bumped Python packages (#3176)
  * Bumping Python packages
  * Added changelog
* Added SonarCloud status badges (#3182)
* [2.0.1] Add sssd and dependencies to requirements (#3155)
  * Add sssd packages needed to upgrade sssd to v2.6.2
* [2.0.1] Low-hanging-fruit SonarQube fixes (#3183)
  * SonarQube fixes
* [2.0.1] Fix `use_network_security_groups` when set to `false` (#3181)
  * Fix `use_network_security_groups` when set to `false`
  * SonarQube fix
  * Minor fix after review
* Ensure ca-certificates package is at the latest version (#3169)
  * Ensure ca-certificates package is at the latest version
  * Add tar to base packages for RHEL mode
  * Ensure tar is not uninstalled too early
  * Use constants instead of string literals
  * Ignore non-critical DNF error
  * Ensure dnf config-manager command
  * Do not use constants for better readability
  * Ensure epel repo is enabled
  * Fix is_repo_enabled method
  * Preserve epel-release package
  * Remove accidental import
  * Apply suggestions from code review
  * Apply suggestions from 2nd review
  * Fix `The same or higher version of epel-release is already installed` error
* Create a YAML build pipeline (#3187)
* Fix PostgreSQL tests (#3192)
  * Fix postgresql tests
  * Update default configurations
* Restore escaping for PostgreSQL tests (#3195)
* Ensure epicli upgrade works on a cluster with RHEL upgraded from version 7 to 8 (#3191)
  * Fix repmgr10 service
  * Fix for K8s master with Calico
  * Mark AWS instances as healthy
  * Suspend ReplaceUnhealthy process
  * Put all instances into Standby state and disable auto-recovery
  * Keep ReplaceUnhealthy process suspended
* Remove unsupported Pylint options (#3197)
* Add ARM dependencies (#3185)
  * Add ARM dependencies
  * Add rook to unsupported roles
  * Add FELIX_IPTABLESBACKEND variable to calico configuration
  * Update documentation
  * Add FELIX_IPTABLESBACKEND for ARM only
  * Remove rook from requirements
  * Changelog: move ARM to 2.0.1
* [2.0.1] Adaptive mode for downloading requirements (#3188)
  * Split available_roles and roles_mapping into separate yaml documents (#3097) (#3119)
    * available_roles split into feature-mappings and features documents
    * feature-mappings added to the Init by default
  * Add manifest file parsing (#3105) (#3130) (sketched in the editor's note below)
    * Add `-m/--manifest` flag to accept manifest.yml produced by `epicli init/prepare`
    * Add `-v/--verbose` mode for printing out parsed manifest data
    * Add ManifestReader class used for parsing the manifest.yml file
    * Move src/command/*.py to debian/redhat subdirs where needed
  * Optimize Grafana dashboards downloading (#3131) (#3150)
  * Optimize files downloading (#3116) (#3156)
  * Add image-registry configuration reading (#3106) (#3159)
* Fix ansible-lint scan location (#3203)
  * Fix ansible-lint scan location
  * Update ansible_lint_error_threshold
* Allow excluding test groups (#3202)
  * Add excluding test groups
  * Exclude effective test groups
  * Update doc
  * Update configuration
  * Do not use sets
  * Apply suggestions from review
* Fix failed services after RHEL 7 upgrade on a cluster created with epicli v1.3 (#3204)
  * Fix failed services
  * Fix for offline mode
* Workaround for esl-erlang package issue (#3211)
* Provide kubeconfig file for spec tests (#3206)
  * Comply with Rubocop
  * Print selected groups as yaml
  * Provide kubeconfig file for spec tests
  * Fix Pylint import-error issues in VSCode
  * Self code review
  * Add new option to launch configurations
  * Fix after tests
  * Update rubocop_linter_threshold
  * Apply suggestions from code review
  * Print selected test groups before preparing env
* Added checking of enabled roles for features (#3213)
* ceph: fix tag for ceph image (#3199)
  * Use a stable tag for the quay.io/ceph/ceph:v16.2.7 image
  Signed-off-by: cicharka
* Added ability to disable OpenSearch audit logs (#3215)
* Added Black Duck Scan plugin (#3219)
  * Added Black Duck Scan plugin
  * Add java to devcontainer to run BDS
  * Change to JAVA headless
  * Cache is already cleaned, removed unneeded run
* Support for original output coloring (#3220) (NO_COLOR handling sketched in the editor's note below)
  * Add click package
  * Support for original output coloring
  * Add click package to CI pipeline
  * Fix CI task
  * Use human-friendly color codes
  * Fix naming style
  * Do not detect log level for colored loggers
  * Apply --no-color option for epicli output formatter
  * Highlight info on Ansible commands
  * Fix UncolorJsonFormatter
  * Update changelog
  * Update pylint_score_cli_threshold
  * Better formatting
  * Add support for NO_COLOR env var
  * Use 'python3 -m pip' instead of pip
  * Ensure click
  * Restore higher threshold
  * Use python3 -m pylint
  * Fix pylint_score_cli_threshold
* Resolve dependencies for specified package version (#3223)
  * Fix issues reported by Pylint
  * Resolve dependencies for specified version
  * Update changelog
* Update crane to v0.11.0 (#3230)
* Fix disabling rook in feature-mappings (#3227)
  * Run rook playbook on rook group
  * Move rook images under rook group
* Update LIFECYCLE.md (#3235)
* Skip firewall role unless present in inventory (#3233)
* Update Calico and Canal to fix issue on ARM (#3228)
  * Update Calico and Canal to fix issue on ARM
  * Use single arch for CNI plugin images
  * Fix incorrect checksums
* OpenSearch: add dedicated user for Filebeat (#3079) (#3221)
  * Removes previously used `logstash` user from filebeat configuration
  * Removes `logstash` user from demo users configured by opensearch
  * Enables creation of a dedicated filebeat user, named `filebeatservice` by default
  * Add user detection in case of re-apply
  * Make installation of the `kibanaserver` and `filebeatservice` users dependent on inventory groups rather than the user_active flag (previously configured by users)
  * Simplify documentation
  * Set dashboards hosts list based on their group
  * Use yaml anchors in user manipulation tasks
* Add filtering mechanism for the sensitive data (#3207) (#3208)
* Include aws-cli and git in Dockerfile #2982 (#3236)
  * Include aws-cli and git in Dockerfile
  * Add components entries
  Signed-off-by: cicharka
* Fix getting package dependencies (#3239)
  * Optimize get_package_dependencies method
  * Update changelog
  * Simplify method in CommandRunMock class
  * Add test_get_package_dependencies_return_value
  * Apply suggestions from code review
  * Sort imports
  * Move APT_CACHE_DEPENDS_DATA over test
* Fix k8s_as_cloud_service flag used in download-requirements (#3222) (#3242)
* filebeat: fix template for k8s_as_cloud_service (#3247)
  Signed-off-by: cicharka
* k8s: controller-managed attachment and detachment #3190 (#3237)
  * Enable configuration of the enable-controller-attach-detach kubelet parameter in the input manifest
  * Set enable-controller-attach-detach to true
  * Fix extend-kubeadm-config.yml task in order to keep consistent values in configMaps and kubeadm-config.yml
  * Move get and set cluster version utils to kubernetes_common
  * Update docs/home/howto/kubernetes/PERSISTENT_STORAGE.md
* Remove leftovers of OpenDistro repository (#3248)
* Add haproxy to k8s images group (#3240)
  * Fix for #3231
  * Enhance test data for image_requirements
  * Add type hints
* Fix handling of download-requirements flag file (#3246)

Co-authored-by: Irek Głownia <48471627+plirglo@users.noreply.github.com>
Co-authored-by: Tomasz Baran <46519524+to-bar@users.noreply.github.com>
Co-authored-by: sbbroot <86356638+sbbroot@users.noreply.github.com>
Co-authored-by: przemyslavic <43173646+przemyslavic@users.noreply.github.com>
Co-authored-by: Rafal Zeidler
Co-authored-by: cicharka <93913624+cicharka@users.noreply.github.com>
Co-authored-by: Anatoli Tsikhamirau
Co-authored-by: Tomasz Baran <110602076+tomasz-baran@users.noreply.github.com>
---
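Editor's note (placed after the `---` separator, so it stays outside the commit message proper): two illustrative sketches for changes listed above. Names and structure here are assumptions made for illustration only; the real implementations live in ansible/playbooks/roles/repository/files/download-requirements/src/config/manifest_reader.py and cli/src/Log.py in this patch and differ in detail.

First, manifest parsing (the `-m/--manifest` flag): manifest.yml produced by `epicli init` is a multi-document YAML file, so a reader can group documents by `kind`. A minimal sketch, not the actual ManifestReader API:

    # Hypothetical sketch of a multi-document manifest reader.
    from pathlib import Path
    from typing import Dict, List

    import yaml  # PyYAML


    class ManifestReader:
        """Parse a multi-document manifest.yml and index documents by `kind`."""

        def __init__(self, manifest_path: Path):
            self._manifest_path = manifest_path

        def parse(self) -> Dict[str, List[dict]]:
            docs: Dict[str, List[dict]] = {}
            with self._manifest_path.open(encoding="utf-8") as stream:
                # Documents in manifest.yml are separated by `---` markers.
                for doc in yaml.safe_load_all(stream):
                    if doc:  # skip empty documents
                        docs.setdefault(doc.get("kind", "unknown"), []).append(doc)
            return docs

Second, the NO_COLOR support added under "Support for original output coloring" follows the informal convention from https://no-color.org: the mere presence of the variable disables ANSI colors. A hedged sketch (the function name is hypothetical):

    import os
    import sys


    def color_enabled(no_color_flag: bool = False) -> bool:
        """Colors are off when --no-color is given, NO_COLOR is set,
        or stdout is not a terminal."""
        if no_color_flag or "NO_COLOR" in os.environ:
            return False
        return sys.stdout.isatty()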
 .ansible-lint | 1 +
 .devcontainer/Dockerfile | 13 +-
 .devcontainer/devcontainer.json | 2 +
 .devcontainer/poetry.lock | 666 +++++++--------
 .devcontainer/pyproject.toml | 3 +-
 .devcontainer/python.env | 2 +-
 .devcontainer/requirements.txt | 78 +-
 .pylintrc | 42 +-
 .rubocop.yml | 14 +
 .vscode/launch.json | 8 +-
 .vscode/settings.json | 1 +
 .vscode/tasks.json | 10 +
 Dockerfile | 10 +-
 README.md | 10 +-
 VERSION | 2 +-
 ansible/playbooks/backup_logging.yml | 22 +-
 ansible/playbooks/filebeat.yml | 2 +-
 ansible/playbooks/filter_plugins/container.py | 27 +
 ansible/playbooks/firewall.yml | 4 +-
 ansible/playbooks/group_vars/all.yml | 4 -
 ansible/playbooks/kibana.yml | 12 -
 ansible/playbooks/kubernetes_master.yml | 6 +-
 .../opendistro_for_elasticsearch.yml | 10 -
 ansible/playbooks/opensearch.yml | 10 +
 ansible/playbooks/opensearch_dashboards.yml | 11 +
 ansible/playbooks/recovery_logging.yml | 17 +-
 ansible/playbooks/repository.yml | 2 +-
 .../playbooks/roles/backup/defaults/main.yml | 6 +-
 .../tasks/logging_elasticsearch_snapshot.yml | 90 --
 ...ch_etc.yml => logging_opensearch_conf.yml} | 14 +-
 ...=> logging_opensearch_dashboards_conf.yml} | 9 +-
 .../tasks/logging_opensearch_snapshot.yml | 96 +++
 .../certificate/tasks/install-packages.yml | 2 +-
 .../roles/download/tasks/list_files.yml | 28 +-
 .../roles/download/tasks/list_images.yml | 9 +
 .../download/tasks/list_requirements.yml | 25 +
 .../elasticsearch_curator/tasks/main.yml | 2 +-
 .../roles/filebeat/defaults/main.yml | 4 +-
 .../filebeat/tasks/configure-filebeat.yml | 8 +-
 .../playbooks/roles/filebeat/tasks/main.yml | 2 +-
 .../templates/custom-chart-values.yml.j2 | 12 +-
 .../roles/filebeat/templates/filebeat.yml.j2 | 42 +-
 .../tasks/install-packages-Debian.yml | 3 +-
 .../tasks/install-packages-RedHat.yml | 5 +-
 .../roles/haproxy_runc/tasks/main.yml | 5 +
 .../roles/image_registry/tasks/main.yml | 47 +-
 .../playbooks/roles/kibana/defaults/main.yml | 8 -
 ansible/playbooks/roles/kibana/tasks/main.yml | 68 --
 .../roles/kibana/tasks/setup-logging.yml | 30 -
 .../roles/kibana/templates/kibana.yml.j2 | 64 --
 .../roles/kibana/templates/logrotate.conf.j2 | 8 -
 .../tasks/extend-kubeadm-config.yml | 26 +-
 .../tasks}/get-cluster-version.yml | 0
 .../tasks}/set-cluster-version.yml | 2 +-
 .../tasks/cni-plugins/canal.yml | 1 -
 ...kubeconfig.yml => generate-kubeconfig.yml} | 0
 .../kubernetes_master/templates/calico.yml.j2 | 750 +++++++++++++++--
 .../kubernetes_master/templates/canal.yml.j2 | 777 ++++++++++++++++--
 .../templates/kube-flannel.yml.j2 | 483 ++---------
 .../templates/kubeadm-config.yml.j2 | 1 +
 .../templates/kubeadm-join-node.yml.j2 | 1 -
 .../playbooks/roles/logging/tasks/main.yml | 6 +-
 .../tasks/configure-es.yml | 263 ------
 .../tasks/install-es.yml | 14 -
 .../tasks/install-opendistro.yml | 25 -
 .../tasks/main.yml | 21 -
 .../tasks/patch-log4j.yml | 68 --
 .../defaults/main.yml | 34 +-
 .../meta/main.yml | 0
 .../opensearch/tasks/configure-opensearch.yml | 304 +++++++
 .../opensearch/tasks/configure-sysctl.yml | 12 +
 .../tasks/generate-certs.yml | 69 +-
 .../opensearch/tasks/install-opensearch.yml | 78 ++
 .../playbooks/roles/opensearch/tasks/main.yml | 23 +
 .../tasks/remove-demo-certs.yml | 0
 .../tasks/remove-known-demo-certs.yml | 12 +-
 .../templates/jvm.options.j2 | 15 +-
 .../templates/opensearch.service.j2 | 51 ++
 .../templates/opensearch.yml.j2} | 66 +-
 .../opensearch_dashboards/defaults/main.yml | 7 +
 .../opensearch_dashboards/handlers/main.yml | 6 +
 .../tasks/dashboards.yml | 55 ++
 .../opensearch_dashboards/tasks/main.yml | 19 +
 .../opensearch-dashboards.service.j2 | 48 ++
 .../templates/opensearch_dashboards.yml.j2 | 13 +
 .../roles/preflight/defaults/main.yml | 66 +-
 .../playbooks/roles/preflight/tasks/main.yml | 3 -
 .../preflight/tasks/upgrade-pre-common.yml | 7 -
 .../tasks/install-packages-redhat.yml | 2 +-
 .../roles/recovery/defaults/main.yml | 4 +-
 ...na_etc.yml => logging_opensearch_conf.yml} | 19 +-
 ...=> logging_opensearch_dashboards_conf.yml} | 19 +-
 ...ot.yml => logging_opensearch_snapshot.yml} | 46 +-
 .../roles/repository/defaults/main.yml | 3 +-
 .../download-requirements.py | 4 +-
 .../repositories/aarch64/redhat/redhat.yml | 70 ++
 .../repositories/x86_64/debian/debian.yml | 4 -
 .../repositories/x86_64/redhat/redhat.yml | 13 -
 .../requirements/aarch64/cranes.yml | 4 +
 .../requirements/aarch64/files.yml | 57 ++
 .../requirements/aarch64/images.yml | 65 ++
 .../aarch64/redhat/almalinux-8/packages.yml | 11 +
 .../requirements/aarch64/redhat/packages.yml | 204 +++++
 .../requirements/x86_64/cranes.yml | 4 +-
 .../x86_64/debian/ubuntu-20.04/packages.yml | 12 +-
 .../requirements/x86_64/files.yml | 33 +-
 .../requirements/x86_64/images.yml | 278 ++++---
 .../requirements/x86_64/redhat/packages.yml | 23 +-
 .../src/command/{ => debian}/apt.py | 0
 .../src/command/{ => debian}/apt_cache.py | 73 +-
 .../src/command/{ => debian}/apt_key.py | 0
 .../src/command/dnf_config_manager.py | 19 -
 .../src/command/{ => redhat}/dnf.py | 92 ++-
 .../src/command/redhat/dnf_config_manager.py | 33 +
 .../src/command/{ => redhat}/dnf_download.py | 9 +-
 .../src/command/{ => redhat}/dnf_repoquery.py | 0
 .../src/command/{ => redhat}/rpm.py | 0
 .../src/command/toolchain.py | 16 +-
 .../src/config/config.py | 154 +++-
 .../src/config/manifest_reader.py | 116 +++
 .../src/config/os_type.py | 6 +-
 .../src/config/version.py | 24 +
 .../files/download-requirements/src/error.py | 33 +-
 .../src/mode/base_mode.py | 53 +-
 .../src/mode/debian_family_mode.py | 43 +-
 .../src/mode/red_hat_family_mode.py | 104 ++-
 .../tests/command/{ => debian}/test_apt.py | 2 +-
 .../tests/command/debian/test_apt_cache.py | 40 +
 .../command/{ => debian}/test_apt_key.py | 2 +-
 .../tests/command/{ => redhat}/test_dnf.py | 12 +-
 .../tests/command/redhat/test_dnf_base.py | 12 +
 .../{ => redhat}/test_dnf_config_manager.py | 2 +-
 .../command/{ => redhat}/test_dnf_download.py | 2 +-
 .../{ => redhat}/test_dnf_repoquery.py | 2 +-
 .../tests/command/{ => redhat}/test_rpm.py | 2 +-
 .../tests/command/test_apt_cache.py | 18 -
 .../tests/config/test_config.py | 75 ++
 .../tests/config/test_manifest_reader.py | 26 +
 .../tests/config/test_version.py | 17 +
 .../tests/data/apt_cache.py | 32 +
 .../tests/data/config.py | 478 +++++++++++
 .../tests/data/manifest_reader.py | 438 ++++++++++
 .../tests/mocks/command_run_mock.py | 12 +-
 .../files/server/RedHat/create-repository.sh | 8 +-
 .../roles/repository/library/__init__.py | 0
 .../repository/library/filter_credentials.py | 154 ++++
 .../repository/library/tests/__init__.py | 0
 .../tests/data/filter_credentials_data.py | 267 ++++++
 .../library/tests/test_filter_credentials.py | 41 +
 .../tasks/RedHat/install-packages.yml | 1 +
 .../tasks/check-whether-to-run-download.yml | 37 +-
 .../repository/tasks/clean-up-epirepo.yml | 34 +-
 .../tasks/copy-download-requirements.yml | 14 +
 .../tasks/download-requirements.yml | 43 +-
 .../roles/repository/tasks/setup.yml | 10 +-
 ansible/playbooks/roles/rook/tasks/main.yml | 2 -
 .../playbooks/roles/upgrade/defaults/main.yml | 22 +-
 .../upgrade/tasks/elasticsearch-curator.yml | 2 +-
 .../roles/upgrade/tasks/filebeat.yml | 9 +-
 .../playbooks/roles/upgrade/tasks/kibana.yml | 47 --
 .../tasks/kubernetes/patch-kubelet-cm.yml | 4 +-
 .../tasks/kubernetes/upgrade-master0.yml | 6 +-
 .../tasks/kubernetes/verify-upgrade.yml | 6 +-
 .../tasks/opendistro_for_elasticsearch-01.yml | 52 --
 .../tasks/opendistro_for_elasticsearch-02.yml | 13 -
 .../migrate-from-demo-certs-01.yml | 71 --
 .../migrate-from-demo-certs-02.yml | 115 ---
 .../migrate-from-demo-certs-non-clustered.yml | 77 --
 .../upgrade-elasticsearch-01.yml | 157 ----
 .../upgrade-elasticsearch-02.yml | 109 ---
 .../upgrade-plugins.yml | 18 -
 .../roles/upgrade/tasks/opensearch.yml | 39 +
 .../upgrade/tasks/opensearch/cleanup.yml | 24 +
 .../tasks/opensearch/migrate-kibana.yml | 111 +++
 .../tasks/opensearch/migrate-odfe-serial.yml | 114 +++
 .../upgrade/tasks/opensearch/migrate-odfe.yml | 203 +++++
 .../upgrade/tasks/opensearch/pre-migrate.yml | 27 +
 .../utils/assert-api-access.yml | 4 +-
 .../utils/assert-cert-files-exist.yml | 6 +-
 .../utils/create-dual-cert-file.yml | 6 +-
 .../utils/enable-shard-allocation.yml | 2 +-
 .../utils/get-cluster-health.yml | 2 +-
 .../utils/get-config-from-files.yml | 6 +-
 .../prepare-cluster-for-node-restart.yml | 8 +-
 .../utils/restart-node.yml | 8 +-
 .../utils/save-initial-cluster-status.yml | 8 +-
 .../utils/test-api-access.yml | 2 +-
 .../utils/wait-for-cluster-status.yml | 2 +-
 .../utils/wait-for-node-to-join.yml | 2 +-
 .../utils/wait-for-shard-allocation.yml | 2 +-
 ansible/playbooks/rook.yml | 4 +-
 ansible/playbooks/upgrade.yml | 97 +--
 .../playbooks/os/rhel/upgrade-release.yml | 493 +++++++++--
 ci/pipelines/build.yaml | 79 ++
 ci/pipelines/linters.yaml | 151 ++++
 cli/epicli.py | 73 +-
 cli/licenses.py | 124 +--
 cli/src/Config.py | 52 +-
 cli/src/Log.py | 81 +-
 cli/src/ansible/AnsibleCommand.py | 49 +-
 cli/src/ansible/AnsibleConfigFileCreator.py | 3 +
 cli/src/ansible/AnsibleInventoryCreator.py | 10 +-
 cli/src/ansible/AnsibleVarsGenerator.py | 14 +-
 cli/src/commands/Apply.py | 36 +-
 cli/src/commands/Init.py | 3 +
 cli/src/commands/Test.py | 97 ++-
 cli/src/commands/Upgrade.py | 5 +
 cli/src/helpers/argparse_helpers.py | 14 +
 cli/src/providers/aws/APIProxy.py | 8 +-
 .../providers/azure/InfrastructureBuilder.py | 4 +-
 cli/src/schema/ConfigurationAppender.py | 75 +-
 cli/src/spec/SpecCommand.py | 29 +-
 cli/src/terraform/TerraformCommand.py | 5 +-
 docs/architecture/logical-view.md | 14 +-
 docs/architecture/process-view.md | 4 +-
 docs/assets/images/lifecycle.png | 3 -
 docs/changelogs/CHANGELOG-0.5.md | 2 +-
 docs/changelogs/CHANGELOG-2.0.md | 55 +-
 docs/home/ARM.md | 41 +-
 docs/home/COMPONENTS.md | 91 +-
 docs/home/DEVELOPMENT.md | 4 +-
 docs/home/HOWTO.md | 9 +-
 docs/home/LIFECYCLE.md | 40 +-
 docs/home/LIFECYCLE_GANTT.md | 40 -
 docs/home/RESOURCES.md | 4 +-
 docs/home/SECURITY.md | 18 +-
 docs/home/howto/BACKUP.md | 8 +-
 docs/home/howto/CLUSTER.md | 77 +-
 docs/home/howto/DATABASES.md | 36 +-
 docs/home/howto/K8S_MODULES.md | 11 +
 docs/home/howto/KUBERNETES.md | 8 +-
 docs/home/howto/LOGGING.md | 180 ++--
 docs/home/howto/MAINTENANCE.md | 8 +-
 docs/home/howto/MODULES.md | 6 +-
 docs/home/howto/MONITORING.md | 105 ++-
 docs/home/howto/RETENTION.md | 2 +-
 docs/home/howto/SECURITY.md | 26 +
 docs/home/howto/SECURITY_GROUPS.md | 6 +-
 docs/home/howto/UPGRADE.md | 88 +-
 .../howto/kubernetes/PERSISTENT_STORAGE.md | 9 +
 pytest.ini | 1 +
 .../configuration/minimal-cluster-config.yml | 9 +-
 schema/any/defaults/epiphany-cluster.yml | 3 +-
 .../configuration/minimal-cluster-config.yml | 7 +-
 schema/aws/defaults/epiphany-cluster.yml | 8 +-
 .../cloud-os-image-defaults.yml | 1 +
 .../infrastructure/default-security-group.yml | 1 +
 .../defaults/infrastructure/efs-storage.yml | 1 +
 .../infrastructure/internet-gateway.yml | 1 +
 .../defaults/infrastructure/public-key.yml | 1 +
 .../infrastructure/resource-group.yml | 1 +
 .../route-table-association.yml | 1 +
 .../defaults/infrastructure/route-table.yml | 1 +
 .../infrastructure/security-group-rule.yml | 1 +
 .../infrastructure/security-group.yml | 1 +
 schema/aws/defaults/infrastructure/subnet.yml | 1 +
 .../infrastructure/virtual-machine.yml | 5 +-
 schema/aws/defaults/infrastructure/vpc.yml | 1 +
 .../infrastructure/default-security-group.yml | 3 +-
 .../validation/infrastructure/efs-storage.yml | 1 +
 .../infrastructure/internet-gateway.yml | 1 +
 .../validation/infrastructure/public-key.yml | 1 +
 .../infrastructure/resource-group.yml | 1 +
 .../route-table-association.yml | 1 +
 .../validation/infrastructure/route-table.yml | 1 +
 .../infrastructure/security-group-rule.yml | 1 +
 .../infrastructure/security-group.yml | 1 +
 .../aws/validation/infrastructure/subnet.yml | 1 +
 .../infrastructure/virtual-machine.yml | 1 +
 schema/aws/validation/infrastructure/vpc.yml | 1 +
 .../configuration/minimal-cluster-config.yml | 3 +-
 schema/azure/defaults/epiphany-cluster.yml | 3 +-
 .../infrastructure/availability-set.yml | 1 +
 .../infrastructure/cloud-init-custom-data.yml | 1 +
 .../cloud-os-image-defaults.yml | 1 +
 ...k-interface-security-group-association.yml | 1 +
 .../infrastructure/network-interface.yml | 5 +-
 .../infrastructure/network-security-group.yml | 3 +-
 .../defaults/infrastructure/public-ip.yml | 3 +-
 .../infrastructure/resource-group.yml | 3 +-
 .../defaults/infrastructure/storage-share.yml | 3 +-
 ...net-network-security-group-association.yml | 1 +
 .../azure/defaults/infrastructure/subnet.yml | 3 +-
 .../infrastructure/virtual-machine.yml | 5 +-
 schema/azure/defaults/infrastructure/vnet.yml | 1 +
 .../infrastructure/availability-set.yml | 1 +
 .../infrastructure/cloud-init-custom-data.yml | 1 +
 ...k-interface-security-group-association.yml | 1 +
 .../infrastructure/network-interface.yml | 1 +
 .../infrastructure/network-security-group.yml | 1 +
 .../validation/infrastructure/public-ip.yml | 1 +
 .../infrastructure/resource-group.yml | 1 +
 .../infrastructure/storage-share.yml | 1 +
 ...net-network-security-group-association.yml | 1 +
 .../validation/infrastructure/subnet.yml | 1 +
 .../infrastructure/virtual-machine.yml | 1 +
 .../azure/validation/infrastructure/vnet.yml | 1 +
 .../defaults/configuration/applications.yml | 1 +
 .../common/defaults/configuration/backup.yml | 1 +
 .../configuration/elasticsearch-curator.yml | 1 +
 ...ature-mapping.yml => feature-mappings.yml} | 63 +-
 .../defaults/configuration/features.yml | 54 ++
 .../defaults/configuration/filebeat.yml | 3 +-
 .../defaults/configuration/firewall.yml | 5 +-
 .../common/defaults/configuration/grafana.yml | 1 +
 .../common/defaults/configuration/haproxy.yml | 1 +
 .../defaults/configuration/helm-charts.yml | 1 +
 schema/common/defaults/configuration/helm.yml | 1 +
 .../defaults/configuration/image-registry.yml | 422 +++++-----
 .../defaults/configuration/jmx-exporter.yml | 1 +
 .../defaults/configuration/kafka-exporter.yml | 1 +
 .../common/defaults/configuration/kafka.yml | 1 +
 .../common/defaults/configuration/kibana.yml | 5 -
 .../configuration/kubernetes-master.yml | 2 +
 .../configuration/kubernetes-node.yml | 1 +
 .../common/defaults/configuration/logging.yml | 31 +-
 .../defaults/configuration/node-exporter.yml | 1 +
 .../opendistro-for-elasticsearch.yml | 27 -
 .../configuration/opensearch-dashboards.yml | 14 +
 .../defaults/configuration/opensearch.yml | 31 +
 .../configuration/postgres-exporter.yml | 1 +
 .../defaults/configuration/postgresql.yml | 1 +
 .../defaults/configuration/prometheus.yml | 1 +
 .../defaults/configuration/rabbitmq.yml | 1 +
 .../defaults/configuration/recovery.yml | 1 +
 .../defaults/configuration/repository.yml | 1 +
 schema/common/defaults/configuration/rook.yml | 2 +-
 .../defaults/configuration/shared-config.yml | 1 +
 .../defaults/configuration/zookeeper.yml | 1 +
 .../validation/configuration/applications.yml | 1 +
 .../validation/configuration/backup.yml | 1 +
 .../configuration/elasticsearch-curator.yml | 1 +
 ...ature-mapping.yml => feature-mappings.yml} | 18 +-
 .../validation/configuration/features.yml | 15 +
 .../validation/configuration/filebeat.yml | 3 +-
 .../validation/configuration/firewall.yml | 5 +-
 .../validation/configuration/grafana.yml | 1 +
 .../validation/configuration/haproxy.yml | 1 +
 .../common/validation/configuration/helm.yml | 1 +
 .../configuration/image-registry.yml | 105 ++-
 .../validation/configuration/jmx-exporter.yml | 1 +
 .../configuration/kafka-exporter.yml | 1 +
 .../common/validation/configuration/kafka.yml | 1 +
 .../validation/configuration/kibana.yml | 7 -
 .../configuration/kubernetes-master.yml | 3 +
 .../configuration/kubernetes-node.yml | 1 +
 .../validation/configuration/logging.yml | 33 +-
 .../configuration/node-exporter.yml | 1 +
 .../configuration/opensearch-dashboards.yml | 25 +
 ...o-for-elasticsearch.yml => opensearch.yml} | 36 +-
 .../configuration/postgres-exporter.yml | 1 +
 .../validation/configuration/postgresql.yml | 1 +
 .../validation/configuration/prometheus.yml | 1 +
 .../validation/configuration/rabbitmq.yml | 1 +
 .../validation/configuration/recovery.yml | 1 +
 .../validation/configuration/repository.yml | 1 +
 .../common/validation/configuration/rook.yml | 1 +
 .../configuration/shared-config.yml | 1 +
 .../validation/configuration/zookeeper.yml | 1 +
 schema/common/validation/core/base.yml | 58 +-
 schema/common/validation/core/definitions.yml | 3 +-
 schema/common/validation/epiphany-cluster.yml | 25 +-
 terraform/aws/epiphany-cluster.j2 | 7 +-
 tests/spec/Rakefile | 99 ++-
 .../undo-copy-kubeconfig.yml | 40 +
 .../kubernetes_master/copy-kubeconfig.yml | 68 ++
 tests/spec/spec/filebeat/filebeat_spec.rb | 14 +-
 tests/spec/spec/kibana/kibana_spec.rb | 89 --
 tests/spec/spec/logging/logging_spec.rb | 58 +-
 tests/spec/spec/opensearch/opensearch_spec.rb | 63 ++
 tests/spec/spec/postgresql/postgresql_spec.rb | 3 +-
 tests/unit/helpers/constants.py | 4 +-
 tests/unit/helpers/test_data_loader.py | 6 +-
 tests/unit/providers/data/APIProxy_data.py | 5 +-
 374 files changed, 9098 insertions(+), 4521 deletions(-)
 create mode 100644 ansible/playbooks/filter_plugins/container.py
 delete mode 100644 ansible/playbooks/kibana.yml
 delete mode 100644 ansible/playbooks/opendistro_for_elasticsearch.yml
 create mode 100644 ansible/playbooks/opensearch.yml
 create mode 100644 ansible/playbooks/opensearch_dashboards.yml
 delete mode 100644 ansible/playbooks/roles/backup/tasks/logging_elasticsearch_snapshot.yml
 rename ansible/playbooks/roles/backup/tasks/{logging_elasticsearch_etc.yml => logging_opensearch_conf.yml} (64%)
 rename ansible/playbooks/roles/backup/tasks/{logging_kibana_etc.yml => logging_opensearch_dashboards_conf.yml} (69%)
 create mode 100644 ansible/playbooks/roles/backup/tasks/logging_opensearch_snapshot.yml
 create mode 100644 ansible/playbooks/roles/download/tasks/list_images.yml
 create mode 100644 ansible/playbooks/roles/download/tasks/list_requirements.yml
 delete mode 100644 ansible/playbooks/roles/kibana/defaults/main.yml
 delete mode 100644 ansible/playbooks/roles/kibana/tasks/main.yml
 delete mode 100644 ansible/playbooks/roles/kibana/tasks/setup-logging.yml
 delete mode 100644 ansible/playbooks/roles/kibana/templates/kibana.yml.j2
 delete mode 100644 ansible/playbooks/roles/kibana/templates/logrotate.conf.j2
 rename ansible/playbooks/roles/{upgrade/tasks/kubernetes => kubernetes_common/tasks}/get-cluster-version.yml (100%)
 rename ansible/playbooks/roles/{upgrade/tasks/kubernetes => kubernetes_common/tasks}/set-cluster-version.yml (88%)
 rename ansible/playbooks/roles/kubernetes_master/tasks/{copy-kubeconfig.yml => generate-kubeconfig.yml} (100%)
 delete mode 100644 ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/configure-es.yml
 delete mode 100644 ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-es.yml
 delete mode 100644 ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-opendistro.yml
 delete mode 100644 ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/main.yml
 delete mode 100644 ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/patch-log4j.yml
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/defaults/main.yml (70%)
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/meta/main.yml (100%)
 create mode 100644 ansible/playbooks/roles/opensearch/tasks/configure-opensearch.yml
 create mode 100644 ansible/playbooks/roles/opensearch/tasks/configure-sysctl.yml
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/tasks/generate-certs.yml (77%)
 create mode 100644 ansible/playbooks/roles/opensearch/tasks/install-opensearch.yml
 create mode 100644 ansible/playbooks/roles/opensearch/tasks/main.yml
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/tasks/remove-demo-certs.yml (100%)
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/tasks/remove-known-demo-certs.yml (73%)
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch => opensearch}/templates/jvm.options.j2 (81%)
 create mode 100644 ansible/playbooks/roles/opensearch/templates/opensearch.service.j2
 rename ansible/playbooks/roles/{opendistro_for_elasticsearch/templates/elasticsearch.yml.j2 => opensearch/templates/opensearch.yml.j2} (54%)
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/defaults/main.yml
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/handlers/main.yml
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/tasks/dashboards.yml
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/tasks/main.yml
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/templates/opensearch-dashboards.service.j2
 create mode 100644 ansible/playbooks/roles/opensearch_dashboards/templates/opensearch_dashboards.yml.j2
 delete mode 100644 ansible/playbooks/roles/preflight/tasks/upgrade-pre-common.yml
 rename ansible/playbooks/roles/recovery/tasks/{logging_kibana_etc.yml => logging_opensearch_conf.yml} (62%)
 rename ansible/playbooks/roles/recovery/tasks/{logging_elasticsearch_etc.yml => logging_opensearch_dashboards_conf.yml} (59%)
 rename ansible/playbooks/roles/recovery/tasks/{logging_elasticsearch_snapshot.yml => logging_opensearch_snapshot.yml} (66%)
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/repositories/aarch64/redhat/redhat.yml
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/cranes.yml
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/files.yml
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/images.yml
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/almalinux-8/packages.yml
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/packages.yml
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => debian}/apt.py (100%)
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => debian}/apt_cache.py (67%)
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => debian}/apt_key.py (100%)
 delete mode 100644 ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_config_manager.py
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => redhat}/dnf.py (63%)
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_config_manager.py
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => redhat}/dnf_download.py (86%)
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => redhat}/dnf_repoquery.py (100%)
 rename ansible/playbooks/roles/repository/files/download-requirements/src/command/{ => redhat}/rpm.py (100%)
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/src/config/manifest_reader.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/src/config/version.py
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => debian}/test_apt.py (97%)
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_cache.py
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => debian}/test_apt_key.py (88%)
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => redhat}/test_dnf.py (72%)
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_base.py
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => redhat}/test_dnf_config_manager.py (93%)
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => redhat}/test_dnf_download.py (94%)
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => redhat}/test_dnf_repoquery.py (96%)
 rename ansible/playbooks/roles/repository/files/download-requirements/tests/command/{ => redhat}/test_rpm.py (96%)
 delete mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_cache.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_config.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_manifest_reader.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_version.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/data/apt_cache.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/data/config.py
 create mode 100644 ansible/playbooks/roles/repository/files/download-requirements/tests/data/manifest_reader.py
 create mode 100644 ansible/playbooks/roles/repository/library/__init__.py
 create mode 100644 ansible/playbooks/roles/repository/library/filter_credentials.py
 create mode 100644 ansible/playbooks/roles/repository/library/tests/__init__.py
 create mode 100644 ansible/playbooks/roles/repository/library/tests/data/filter_credentials_data.py
 create mode 100644 ansible/playbooks/roles/repository/library/tests/test_filter_credentials.py
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/kibana.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-01.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-02.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-01.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-02.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-non-clustered.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-01.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-02.yml
 delete mode 100644 ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-plugins.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch/cleanup.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-kibana.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe-serial.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe.yml
 create mode 100644 ansible/playbooks/roles/upgrade/tasks/opensearch/pre-migrate.yml
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/assert-api-access.yml (85%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/assert-cert-files-exist.yml (89%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/create-dual-cert-file.yml (68%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/enable-shard-allocation.yml (88%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/get-cluster-health.yml (89%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/get-config-from-files.yml (69%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/prepare-cluster-for-node-restart.yml (89%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/restart-node.yml (74%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/save-initial-cluster-status.yml (58%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/test-api-access.yml (83%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/wait-for-cluster-status.yml (93%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/wait-for-node-to-join.yml (88%)
 rename ansible/playbooks/roles/upgrade/tasks/{opendistro_for_elasticsearch => opensearch}/utils/wait-for-shard-allocation.yml (95%)
 create mode 100755 ci/pipelines/build.yaml
 create mode 100755 ci/pipelines/linters.yaml
 create mode 100644 cli/src/helpers/argparse_helpers.py
 delete mode 100644 docs/assets/images/lifecycle.png
 delete mode 100644 docs/home/LIFECYCLE_GANTT.md
 create mode 100644 docs/home/howto/K8S_MODULES.md
 rename schema/common/defaults/configuration/{feature-mapping.yml => feature-mappings.yml} (54%)
 create mode 100644 schema/common/defaults/configuration/features.yml
 delete mode 100644 schema/common/defaults/configuration/kibana.yml
 delete mode 100644 schema/common/defaults/configuration/opendistro-for-elasticsearch.yml
 create mode 100644 schema/common/defaults/configuration/opensearch-dashboards.yml
 create mode 100644 schema/common/defaults/configuration/opensearch.yml
 rename schema/common/validation/configuration/{feature-mapping.yml => feature-mappings.yml} (75%)
 create mode 100644 schema/common/validation/configuration/features.yml
 delete mode 100644 schema/common/validation/configuration/kibana.yml
 create mode 100644 schema/common/validation/configuration/opensearch-dashboards.yml
 rename schema/common/validation/configuration/{opendistro-for-elasticsearch.yml => opensearch.yml} (57%)
 create mode 100644 tests/spec/post_run/ansible/kubernetes_master/undo-copy-kubeconfig.yml
 create mode 100644 tests/spec/pre_run/ansible/kubernetes_master/copy-kubeconfig.yml
 delete mode 100644 tests/spec/spec/kibana/kibana_spec.rb
 create mode 100644 tests/spec/spec/opensearch/opensearch_spec.rb
 mode change 100644 => 100755 tests/unit/providers/data/APIProxy_data.py

diff --git a/.ansible-lint b/.ansible-lint
index 92478d5e18..30affe3883 100644
--- a/.ansible-lint
+++ b/.ansible-lint
@@ -38,6 +38,7 @@ skip_list:
   - meta-no-info
   - package-latest
   - fqcn-builtins
+  - no-jinja-when

 ##################
 # Tags to follow #
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index be768c8fa3..e3651e6e15 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -4,6 +4,7 @@
 ARG USERNAME=vscode
 ARG USER_UID=1000
 ARG USER_GID=$USER_UID
+ARG AWS_CLI_VERSION=2.0.30
 ARG HELM_VERSION=3.3.1
 ARG KUBECTL_VERSION=1.22.4
 ARG TERRAFORM_VERSION=1.1.3
@@ -19,6 +20,7 @@ RUN : INSTALL APT REQUIREMENTS \
     make musl-dev openssh-client procps \
     psmisc rsync ruby-full sudo tar \
     unzip vim \
+    openjdk-11-jdk-headless \
     && apt-get -q autoremove -y \
     && apt-get -q clean -y \
     && rm -rf /var/lib/apt/lists/*
@@ -37,7 +39,13 @@ RUN : INSTALL HELM BINARY \
     && curl -fsSLO https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
     && unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/local/bin \
     && rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
-    && terraform version
+    && terraform version \
+    && : INSTALL AWS CLI BINARY \
+    && curl -fsSLO https://awscli.amazonaws.com/awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip \
+    && unzip awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip \
+    && ./aws/install -i /usr/local/aws-cli -b /usr/local/bin \
+    && rm -rf awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip ./aws \
+    && aws --version

 RUN : INSTALL GEM REQUIREMENTS \
     && gem install \
@@ -58,5 +66,8 @@ RUN : SETUP USER AND OTHERS \
     && chmod ug=r,o= /etc/sudoers.d/$USERNAME \
     && setcap 'cap_net_bind_service=+ep' /usr/bin/ssh

+RUN : SETUP JAVA_HOME
+ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/
+
 RUN : SETUP EPICLI ALIAS \
     && echo alias epicli='"export PYTHONPATH=/workspaces/epiphany && python3 -m cli.epicli"' >> /etc/bash.bashrc
diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json
index cda3dfed84..451b7bfa7a 100644
--- a/.devcontainer/devcontainer.json
+++ b/.devcontainer/devcontainer.json
@@ -4,6 +4,8 @@
   "extensions": [
     // Ansible
     "redhat.ansible",
+    // Black Duck Scan
+    "synopsyscodesight.vscode-codesight",
     // Common
     "shardulm94.trailing-spaces",
     // Git
diff --git a/.devcontainer/poetry.lock b/.devcontainer/poetry.lock
index ccd405f238..80c4961c26 100644
--- a/.devcontainer/poetry.lock
+++ b/.devcontainer/poetry.lock
@@ -25,7 +25,7 @@ ansible-core = ">=2.12.1,<2.13.0"

 [[package]]
 name = "ansible-core"
-version = "2.12.1"
+version = "2.12.6"
 description = "Radically simple IT automation"
 category = "main"
 optional = false
@@ -259,7 +259,7 @@ portalocker = ">=1.2,<2.0"

 [[package]]
 name = "azure-common"
-version = "1.1.27"
+version = "1.1.28"
 description = "Microsoft Azure Client Library for Python (Common)"
 category = "main"
 optional = false
@@ -267,15 +267,16 @@ python-versions = "*"

 [[package]]
 name = "azure-core"
-version = "1.21.1"
+version = "1.24.0"
 description = "Microsoft Azure Core Library for Python"
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [package.dependencies]
 requests = ">=2.18.4"
 six = ">=1.11.0"
+typing-extensions = ">=4.0.1"

 [[package]]
 name = "azure-cosmos"
@@ -317,17 +318,17 @@ msrestazure = ">=0.4.32,<2.0.0"

 [[package]]
 name = "azure-identity"
-version = "1.7.1"
+version = "1.10.0"
 description = "Microsoft Azure Identity Library for Python"
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [package.dependencies]
 azure-core = ">=1.11.0,<2.0.0"
 cryptography = ">=2.5"
 msal = ">=1.12.0,<2.0.0"
-msal-extensions = ">=0.3.0,<0.4.0"
+msal-extensions = ">=0.3.0,<2.0.0"
 six = ">=1.12.0"

 [[package]]
@@ -607,15 +608,15 @@ azure-core = ">=1.15.0,<2.0.0"

 [[package]]
 name = "azure-mgmt-cosmosdb"
-version = "7.0.0b2"
+version = "7.0.0b6"
 description = "Microsoft Azure Cosmos DB Management Client Library for Python"
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [package.dependencies]
 azure-common = ">=1.1,<2.0"
-azure-mgmt-core = ">=1.2.0,<2.0.0"
+azure-mgmt-core = ">=1.3.0,<2.0.0"
 msrest = ">=0.6.21"

 [[package]]
@@ -1227,15 +1228,15 @@ msrest = ">=0.6.21"

 [[package]]
 name = "azure-mgmt-sqlvirtualmachine"
-version = "1.0.0b1"
-description = "Microsoft Azure Sqlvirtualmachine Management Client Library for Python"
+version = "1.0.0b2"
+description = "Microsoft Azure Sql Virtual Machine Management Client Library for Python"
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [package.dependencies]
 azure-common = ">=1.1,<2.0"
-azure-mgmt-core = ">=1.2.0,<2.0.0"
+azure-mgmt-core = ">=1.3.0,<2.0.0"
 msrest = ">=0.6.21"

 [[package]]
@@ -1253,7 +1254,7 @@ msrest = ">=0.6.21"

 [[package]]
 name = "azure-mgmt-synapse"
-version = "2.1.0b4"
+version = "2.1.0b5"
 description = "Microsoft Azure Synapse Management Client Library for Python"
 category = "main"
 optional = false
@@ -1382,7 +1383,7 @@ msrest = ">=0.5.0"

 [[package]]
 name = "bcrypt"
-version = "3.2.0"
+version = "3.2.2"
 description = "Modern password hashing for your software and your servers"
 category = "main"
 optional = false
@@ -1390,7 +1391,6 @@ python-versions = ">=3.6"

 [package.dependencies]
 cffi = ">=1.1"
-six = ">=1.4.1"

 [package.extras]
 tests = ["pytest (>=3.2.1,!=3.3.0)"]
@@ -1398,15 +1398,15 @@ typecheck = ["mypy"]

 [[package]]
 name = "boto3"
-version = "1.20.45"
+version = "1.23.10"
 description = "The AWS SDK for Python"
 category = "main"
 optional = false
 python-versions = ">= 3.6"

 [package.dependencies]
-botocore = ">=1.23.45,<1.24.0"
-jmespath = ">=0.7.1,<1.0.0"
+botocore = ">=1.26.10,<1.27.0"
+jmespath = ">=0.7.1,<2.0.0"
 s3transfer = ">=0.5.0,<0.6.0"

 [package.extras]
@@ -1414,27 +1414,27 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]

 [[package]]
 name = "botocore"
-version = "1.23.45"
+version = "1.26.10"
 description = "Low-level, data-driven core of boto 3."
 category = "main"
 optional = false
 python-versions = ">= 3.6"

 [package.dependencies]
-jmespath = ">=0.7.1,<1.0.0"
+jmespath = ">=0.7.1,<2.0.0"
 python-dateutil = ">=2.1,<3.0.0"
 urllib3 = ">=1.25.4,<1.27"

 [package.extras]
-crt = ["awscrt (==0.12.5)"]
+crt = ["awscrt (==0.13.8)"]

 [[package]]
 name = "certifi"
-version = "2021.10.8"
+version = "2022.5.18.1"
 description = "Python package for providing Mozilla's CA Bundle."
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [[package]]
 name = "cffi"
@@ -1457,7 +1457,7 @@ python-versions = "*"

 [[package]]
 name = "charset-normalizer"
-version = "2.0.10"
+version = "2.0.12"
 description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
 category = "main"
 optional = false
@@ -1466,6 +1466,17 @@ python-versions = ">=3.5.0"
 [package.extras]
 unicode_backport = ["unicodedata2"]

+[[package]]
+name = "click"
+version = "8.1.3"
+description = "Composable command line interface toolkit"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+
+[package.dependencies]
+colorama = {version = "*", markers = "platform_system == \"Windows\""}
+
 [[package]]
 name = "colorama"
 version = "0.4.4"
@@ -1476,7 +1487,7 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"

 [[package]]
 name = "cryptography"
-version = "36.0.1"
+version = "37.0.2"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 category = "main"
 optional = false
@@ -1491,7 +1502,7 @@ docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling
 pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
 sdist = ["setuptools_rust (>=0.11.4)"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["pytest (>=6.2.0)", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
+test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]

 [[package]]
 name = "deprecated"
@@ -1509,15 +1520,15 @@ dev = ["tox", "bump2version (<1)", "sphinx (<2)", "importlib-metadata (<3)", "im

 [[package]]
 name = "distro"
-version = "1.6.0"
+version = "1.7.0"
 description = "Distro - an OS platform information API"
 category = "main"
 optional = false
-python-versions = "*"
+python-versions = ">=3.6"

 [[package]]
 name = "fabric"
-version = "2.6.0"
+version = "2.7.0"
 description = "High level SSH command execution"
 category = "main"
 optional = false
@@ -1553,7 +1564,7 @@ python-versions = ">=3.5"

 [[package]]
 name = "invoke"
-version = "1.6.0"
+version = "1.7.1"
 description = "Pythonic task execution"
 category = "main"
 optional = false
@@ -1583,11 +1594,11 @@ six = ">=1.4,<2.0"

 [[package]]
 name = "jinja2"
-version = "3.0.3"
+version = "3.1.2"
 description = "A very fast and expressive template engine."
 category = "main"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"

 [package.dependencies]
 MarkupSafe = ">=2.0"
@@ -1597,11 +1608,11 @@ i18n = ["Babel (>=2.7)"]

 [[package]]
 name = "jmespath"
-version = "0.10.0"
+version = "1.0.0"
 description = "JSON Matching Expressions"
 category = "main"
 optional = false
-python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
+python-versions = ">=3.7"

 [[package]]
 name = "jsondiff"
@@ -1613,7 +1624,7 @@ python-versions = "*"

 [[package]]
 name = "jsonschema"
-version = "4.4.0"
+version = "4.5.1"
 description = "An implementation of JSON Schema validation for Python"
 category = "main"
 optional = false
@@ -1644,22 +1655,22 @@ tabulate = "*"

 [[package]]
 name = "markupsafe"
-version = "2.0.1"
+version = "2.1.1"
 description = "Safely add untrusted strings to HTML/XML markup."
 category = "main"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.7"

 [[package]]
 name = "msal"
-version = "1.16.0"
+version = "1.17.0"
 description = "The Microsoft Authentication Library (MSAL) for Python library enables your app to access the Microsoft Cloud by supporting authentication of users with Microsoft Azure Active Directory accounts (AAD) and Microsoft Accounts (MSA) using industry standard OAuth2 and OpenID Connect."
 category = "main"
 optional = false
 python-versions = "*"

 [package.dependencies]
-cryptography = ">=0.6,<38"
+cryptography = ">=0.6,<39"
 PyJWT = {version = ">=1.0.0,<3", extras = ["crypto"]}
 requests = ">=2.0.0,<3"

@@ -1710,16 +1721,16 @@ six = "*"

 [[package]]
 name = "oauthlib"
-version = "3.1.1"
+version = "3.2.0"
 description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
 category = "main"
 optional = false
 python-versions = ">=3.6"

 [package.extras]
-rsa = ["cryptography (>=3.0.0,<4)"]
+rsa = ["cryptography (>=3.0.0)"]
 signals = ["blinker (>=1.4.0)"]
-signedtoken = ["cryptography (>=3.0.0,<4)", "pyjwt (>=2.0.0,<3)"]
+signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

 [[package]]
 name = "packaging"
@@ -1734,7 +1745,7 @@ pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"

 [[package]]
 name = "paramiko"
-version = "2.9.2"
+version = "2.11.0"
 description = "SSH2 protocol library"
 category = "main"
 optional = false
@@ -1744,6 +1755,7 @@ python-versions = "*"
 bcrypt = ">=3.1.3"
 cryptography = ">=2.5"
 pynacl = ">=1.0.1"
+six = "*"

 [package.extras]
 all = ["pyasn1 (>=0.1.7)", "pynacl (>=1.0.1)", "bcrypt (>=3.1.3)", "invoke (>=1.3)", "gssapi (>=1.4.1)", "pywin32 (>=2.1.8)"]
@@ -1753,7 +1765,7 @@ invoke = ["invoke (>=1.3)"]

 [[package]]
 name = "pathlib2"
-version = "2.3.6"
+version = "2.3.7.post1"
 description = "Object-oriented filesystem paths"
 category = "main"
 optional = false
@@ -1790,14 +1802,14 @@ tests = ["pytest (>=4.6.9)", "pytest-cov (>=2.8.1)", "sphinx (>=1.8.5)", "pytest

 [[package]]
 name = "psutil"
-version = "5.9.0"
+version = "5.9.1"
 description = "Cross-platform lib for process and system monitoring in Python."
 category = "main"
 optional = false
-python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"

 [package.extras]
-test = ["ipaddress", "mock", "unittest2", "enum34", "pywin32", "wmi"]
+test = ["ipaddress", "mock", "enum34", "pywin32", "wmi"]

 [[package]]
 name = "pycparser"
@@ -1826,15 +1838,15 @@ integrations = ["cryptography"]

 [[package]]
 name = "pygments"
-version = "2.11.2"
+version = "2.12.0"
 description = "Pygments is a syntax highlighting package written in Python."
 category = "main"
 optional = false
-python-versions = ">=3.5"
+python-versions = ">=3.6"

 [[package]]
 name = "pyjwt"
-version = "2.3.0"
+version = "2.4.0"
 description = "JSON Web Token implementation in Python"
 category = "main"
 optional = false
@@ -1867,15 +1879,14 @@ tests = ["pytest (>=3.2.1,!=3.3.0)", "hypothesis (>=3.27.0)"]

 [[package]]
 name = "pyopenssl"
-version = "21.0.0"
+version = "22.0.0"
 description = "Python wrapper module around the OpenSSL library"
 category = "main"
 optional = false
-python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
+python-versions = ">=3.6"

 [package.dependencies]
-cryptography = ">=3.3"
-six = ">=1.5.2"
+cryptography = ">=35.0"

 [package.extras]
 docs = ["sphinx", "sphinx-rtd-theme"]
@@ -1883,14 +1894,14 @@ test = ["flaky", "pretend", "pytest (>=3.0.1)"]

 [[package]]
 name = "pyparsing"
-version = "3.0.7"
-description = "Python parsing module"
+version = "3.0.9"
+description = "pyparsing module - Classes and methods to define and execute parsing grammars"
 category = "main"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.6.8"

 [package.extras]
-diagrams = ["jinja2", "railroad-diagrams"]
+diagrams = ["railroad-diagrams", "jinja2"]

 [[package]]
 name = "pyreadline3"
@@ -1937,7 +1948,7 @@ python-versions = ">=3.5"

 [[package]]
 name = "pywin32"
-version = "303"
+version = "304"
 description = "Python for Window Extensions"
 category = "main"
 optional = false
@@ -1972,7 +1983,7 @@ use_chardet_on_py3 = ["chardet (>=3.0.2,<5)"]

 [[package]]
 name = "requests-oauthlib"
-version = "1.3.0"
+version = "1.3.1"
 description = "OAuthlib authentication support for Requests."
 category = "main"
 optional = false
@@ -2001,7 +2012,7 @@ test = ["commentjson", "packaging", "pytest"]

 [[package]]
 name = "ruamel.yaml"
-version = "0.17.20"
+version = "0.17.21"
 description = "ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order"
 category = "main"
 optional = false
@@ -2024,7 +2035,7 @@ python-versions = ">=3.5"

 [[package]]
 name = "s3transfer"
-version = "0.5.0"
+version = "0.5.2"
 description = "An Amazon S3 Transfer Manager"
 category = "main"
 optional = false
@@ -2090,9 +2101,17 @@ python-versions = "*"
 [package.extras]
 widechars = ["wcwidth"]

+[[package]]
+name = "typing-extensions"
+version = "4.2.0"
+description = "Backported and Experimental Type Hints for Python 3.7+"
+category = "main"
+optional = false
+python-versions = ">=3.7"
+
 [[package]]
 name = "urllib3"
-version = "1.26.8"
+version = "1.26.9"
 description = "HTTP library with thread-safe connection pooling, file post, and more."
 category = "main"
 optional = false
@@ -2105,7 +2124,7 @@
 idna = {version = ">=2.0.0", optional = true, markers = "extra == \"secure\""}
 pyOpenSSL = {version = ">=0.14", optional = true, markers = "extra == \"secure\""}

 [package.extras]
-brotli = ["brotlipy (>=0.6.0)"]
+brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"]
 secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
 socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
@@ -2122,7 +2141,7 @@ six = "*"

 [[package]]
 name = "wrapt"
-version = "1.13.3"
+version = "1.14.1"
 description = "Module for decorators, wrappers and monkey patching."
 category = "main"
 optional = false
@@ -2130,16 +2149,16 @@ python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"

 [[package]]
 name = "xmltodict"
-version = "0.12.0"
+version = "0.13.0"
 description = "Makes working with XML feel like you are working with JSON"
 category = "main"
 optional = false
-python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
+python-versions = ">=3.4"

 [metadata]
 lock-version = "1.1"
-python-versions = "3.10"
-content-hash = "4bc2a1e88c2824cd4e5c1382d64eb3e4c6fc2b83817d49e016c681382a5bfa5e"
+python-versions = "3.10.4"
+content-hash = "ce5d21a9287c34e301f7fd1b6643e0752b568afaa80da27572d54a4c81ca6b4f"

 [metadata.files]
 adal = [
@@ -2150,7 +2169,7 @@ ansible = [
     {file = "ansible-5.2.0.tar.gz", hash = "sha256:c6d448f229cb4a77a6026bd61bcd5bcf062f4f666f1ed24432ba043d145499cb"},
 ]
 ansible-core = [
-    {file = "ansible-core-2.12.1.tar.gz", hash = "sha256:a4508707262be11bb4dd98a006f1b14817879a055e6b6c46ad9fca8894fb3073"},
+    {file = "ansible-core-2.12.6.tar.gz", hash = "sha256:5f366e851159d8f72ce68d32b8c0edda56ee537c01e9f68eca382bd1510af65d"},
 ]
 antlr4-python3-runtime = [
     {file = "antlr4-python3-runtime-4.7.2.tar.gz", hash = "sha256:168cdcec8fb9152e84a87ca6fd261b3d54c8f6358f42ab3b813b14a7193bb50b"},
@@ -2188,12 +2207,12 @@ azure-cli-telemetry = [
     {file = "azure_cli_telemetry-1.0.6-py3-none-any.whl", hash = "sha256:05c11939b8ed9a98b8bad1d0201909ff7c33671aaa4a98932069594e815aefbe"},
 ]
 azure-common = [
-    {file = "azure-common-1.1.27.zip", hash = "sha256:9f3f5d991023acbd93050cf53c4e863c6973ded7e236c69e99c8ff5c7bad41ef"},
-    {file = "azure_common-1.1.27-py2.py3-none-any.whl", hash = "sha256:426673962740dbe9aab052a4b52df39c07767decd3f25fdc87c9d4c566a04934"},
+    {file = "azure-common-1.1.28.zip", hash = "sha256:4ac0cd3214e36b6a1b6a442686722a5d8cc449603aa833f3f0f40bda836704a3"},
+    {file = "azure_common-1.1.28-py2.py3-none-any.whl", hash = "sha256:5c12d3dcf4ec20599ca6b0d3e09e86e146353d443e7fcc050c9a19c1f9df20ad"},
 ]
 azure-core = [
-    {file = "azure-core-1.21.1.zip", hash = "sha256:88d2db5cf9a135a7287dc45fdde6b96f9ca62c9567512a3bb3e20e322ce7deb2"},
-    {file = "azure_core-1.21.1-py2.py3-none-any.whl", hash = "sha256:3d70e9ec64de92dfae330c15bc69085caceb2d83813ef6c01cc45326f2a4be83"},
+    {file = "azure-core-1.24.0.zip", hash = "sha256:345b1b041faad7d0205b20d5697f1d0df344302e7aaa8501905580ff87bd0be5"},
+    {file = "azure_core-1.24.0-py3-none-any.whl", hash = "sha256:923e492e72d103c768a643dfad331ce6b8ec1669575c7d0832fed19bffd119f7"},
 ]
 azure-cosmos = [
     {file = "azure-cosmos-3.2.0.tar.gz", hash = "sha256:4f77cc558fecffac04377ba758ac4e23f076dc1c54e2cf2515f85bc15cbde5c6"},
@@ -2208,8 +2227,8 @@ azure-graphrbac = [
     {file = "azure_graphrbac-0.60.0-py2.py3-none-any.whl", hash = "sha256:0b266602dfc631dca13960cc64bac172bf9dea2cccbb1aa13d1631ce76f14d79"},
 ]
 azure-identity = [
-    {file = "azure-identity-1.7.1.zip", hash = "sha256:7f22cd0c7a9b92ed297dd67ae79d9bb9a866e404061c02cec709ad10c4c88e19"},
-    {file = "azure_identity-1.7.1-py2.py3-none-any.whl", hash = "sha256:454e16ed1152b4fd3fb463f4b4e2f7a3fc3a862b0ca28010bff6d5c6b2b0c50f"},
+    {file = "azure-identity-1.10.0.zip", hash = "sha256:656e5034d9cef297cf9b35376ed620085273c18cfa52cea4a625bf0d5d2d6409"},
+    {file = "azure_identity-1.10.0-py3-none-any.whl", hash = "sha256:b386f1ccbea6a48b9ab7e7f162adc456793c345193a7c1a713959562b08dcbbd"},
 ]
 azure-keyvault = [
     {file = "azure-keyvault-1.1.0.zip", hash = "sha256:37a8e5f376eb5a304fcd066d414b5d93b987e68f9212b0c41efa37d429aadd49"},
@@ -2296,8 +2315,8 @@ azure-mgmt-core = [
     {file = "azure_mgmt_core-1.3.0-py2.py3-none-any.whl", hash = "sha256:7b7fa952aeb9d3eaa13eff905880f3d3b62200f7be7a8ba5a50c8b2e7295322a"},
 ]
 azure-mgmt-cosmosdb = [
-    {file = "azure-mgmt-cosmosdb-7.0.0b2.zip", hash = "sha256:855bd85bd8247d354cc22b3721d0f4257603c5d29ccb2cc8171de1364ed530b2"},
-    {file = "azure_mgmt_cosmosdb-7.0.0b2-py2.py3-none-any.whl", hash = "sha256:d433569b1d7c780640868213455d2c1ddda56e9091b391acf25859b508e93dde"},
+    {file = "azure-mgmt-cosmosdb-7.0.0b6.zip", hash = "sha256:e055802603f6ba9c21bd80a57737efe548c91418325df2d1882e9155a2732f2f"},
+    {file = "azure_mgmt_cosmosdb-7.0.0b6-py3-none-any.whl", hash = "sha256:ff33ceabf930eef4b2c5319a1372123c7c2e45d9bfb99b37c7eee67a1cf7ee93"},
 ]
 azure-mgmt-databoxedge = [
     {file = "azure-mgmt-databoxedge-1.0.0.zip", hash = "sha256:04090062bc1e8f00c2f45315a3bceb0fb3b3479ec1474d71b88342e13499b087"},
@@ -2490,16 +2509,16 @@ azure-mgmt-sql = [
     {file = "azure_mgmt_sql-3.0.1-py2.py3-none-any.whl", hash = "sha256:1d1dd940d4d41be4ee319aad626341251572a5bf4a2addec71779432d9a1381f"},
 ]
 azure-mgmt-sqlvirtualmachine = [
-    {file = "azure-mgmt-sqlvirtualmachine-1.0.0b1.zip", hash = "sha256:4ab153bd4fbaed4dc2a4c2cf64c6b05ee45d4886d3b1f6afda315922c6563e6c"},
-    {file = "azure_mgmt_sqlvirtualmachine-1.0.0b1-py2.py3-none-any.whl", hash = "sha256:33a2337ffb3f6768005bb0c49c2a4cfd02ee49d703902cf3bdb0a6d8fc60fc90"},
+    {file = "azure-mgmt-sqlvirtualmachine-1.0.0b2.zip", hash = "sha256:ceab0bb9f8d498e975671bbc421633b042988284be9bc193a51c9d97b8ef5cc9"},
+    {file = "azure_mgmt_sqlvirtualmachine-1.0.0b2-py3-none-any.whl", hash = "sha256:590205d2387e6b89fe10678ce812289b7bb8a61b988136fa5d6f771503e7e703"},
 ]
 azure-mgmt-storage = [
     {file = "azure-mgmt-storage-19.0.0.zip", hash = "sha256:f05963e5a8696d0fd4dcadda4feecb9b382a380d2e461b3647704ac787d79876"},
     {file = "azure_mgmt_storage-19.0.0-py2.py3-none-any.whl", hash = "sha256:d4960693a4e2aa046b510df13c2071df2eb3f99925384a127d843a3b969fc54b"},
 ]
 azure-mgmt-synapse = [
-    {file = "azure-mgmt-synapse-2.1.0b4.zip", hash = "sha256:4ab7ec2bd2ad5320a6a622c5c5bdb958dada49c11de9b240635cf3ed5fec2420"},
-    {file = "azure_mgmt_synapse-2.1.0b4-py3-none-any.whl", hash = "sha256:9e66c4f449b1949b883ec1f1771cc30c46663e3b7c9501c26cec97504e8a7f57"},
+    {file = "azure-mgmt-synapse-2.1.0b5.zip", hash = "sha256:e44e987f51a03723558ddf927850db843c67380e9f3801baa288f1b423f89be9"},
+    {file = "azure_mgmt_synapse-2.1.0b5-py3-none-any.whl", hash = "sha256:bc49a3000b8412cb9f1651c43b7a0e12c227c843b02536066ec40700779982f4"},
 ]
 azure-mgmt-trafficmanager = [
     {file = "azure-mgmt-trafficmanager-0.51.0.zip", hash = "sha256:fc8ae77022cfe52fda4379a2f31e0b857574d536e41291a7b569b5c0f4104186"},
@@ -2539,28 +2558,29 @@ azure-synapse-spark = [
     {file = "azure_synapse_spark-0.2.0-py2.py3-none-any.whl", hash = "sha256:79f998a58cb2a4b2145ef79aaf3ef2d79331b41f90df06e8dd0562b49a573b40"},
 ]
 bcrypt = [
-    {file = "bcrypt-3.2.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:b589229207630484aefe5899122fb938a5b017b0f4349f769b8c13e78d99a8fd"},
-    {file = "bcrypt-3.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:c95d4cbebffafcdd28bd28bb4e25b31c50f6da605c81ffd9ad8a3d1b2ab7b1b6"},
-    {file = "bcrypt-3.2.0-cp36-abi3-manylinux1_x86_64.whl", hash = "sha256:63d4e3ff96188e5898779b6057878fecf3f11cfe6ec3b313ea09955d587ec7a7"},
-    {file = "bcrypt-3.2.0-cp36-abi3-manylinux2010_x86_64.whl", hash = "sha256:cd1ea2ff3038509ea95f687256c46b79f5fc382ad0aa3664d200047546d511d1"},
-    {file = "bcrypt-3.2.0-cp36-abi3-manylinux2014_aarch64.whl", hash =
"sha256:cdcdcb3972027f83fe24a48b1e90ea4b584d35f1cc279d76de6fc4b13376239d"}, - {file = "bcrypt-3.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:a0584a92329210fcd75eb8a3250c5a941633f8bfaf2a18f81009b097732839b7"}, - {file = "bcrypt-3.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:56e5da069a76470679f312a7d3d23deb3ac4519991a0361abc11da837087b61d"}, - {file = "bcrypt-3.2.0-cp36-abi3-win32.whl", hash = "sha256:a67fb841b35c28a59cebed05fbd3e80eea26e6d75851f0574a9273c80f3e9b55"}, - {file = "bcrypt-3.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:81fec756feff5b6818ea7ab031205e1d323d8943d237303baca2c5f9c7846f34"}, - {file = "bcrypt-3.2.0.tar.gz", hash = "sha256:5b93c1726e50a93a033c36e5ca7fdcd29a5c7395af50a6892f5d9e7c6cfbfb29"}, + {file = "bcrypt-3.2.2-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:7180d98a96f00b1050e93f5b0f556e658605dd9f524d0b0e68ae7944673f525e"}, + {file = "bcrypt-3.2.2-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:61bae49580dce88095d669226d5076d0b9d927754cedbdf76c6c9f5099ad6f26"}, + {file = "bcrypt-3.2.2-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88273d806ab3a50d06bc6a2fc7c87d737dd669b76ad955f449c43095389bc8fb"}, + {file = "bcrypt-3.2.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6d2cb9d969bfca5bc08e45864137276e4c3d3d7de2b162171def3d188bf9d34a"}, + {file = "bcrypt-3.2.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2b02d6bfc6336d1094276f3f588aa1225a598e27f8e3388f4db9948cb707b521"}, + {file = "bcrypt-3.2.2-cp36-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a2c46100e315c3a5b90fdc53e429c006c5f962529bc27e1dfd656292c20ccc40"}, + {file = "bcrypt-3.2.2-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:7d9ba2e41e330d2af4af6b1b6ec9e6128e91343d0b4afb9282e54e5508f31baa"}, + {file = "bcrypt-3.2.2-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cd43303d6b8a165c29ec6756afd169faba9396a9472cdff753fe9f19b96ce2fa"}, + {file = "bcrypt-3.2.2-cp36-abi3-win32.whl", hash = "sha256:4e029cef560967fb0cf4a802bcf4d562d3d6b4b1bf81de5ec1abbe0f1adb027e"}, + {file = "bcrypt-3.2.2-cp36-abi3-win_amd64.whl", hash = "sha256:7ff2069240c6bbe49109fe84ca80508773a904f5a8cb960e02a977f7f519b129"}, + {file = "bcrypt-3.2.2.tar.gz", hash = "sha256:433c410c2177057705da2a9f2cd01dd157493b2a7ac14c8593a16b3dab6b6bfb"}, ] boto3 = [ - {file = "boto3-1.20.45-py3-none-any.whl", hash = "sha256:2ea0e0aa1494ef87a342260da8d9381000d774e73d83ee3d7c2906fefcbe2c32"}, - {file = "boto3-1.20.45.tar.gz", hash = "sha256:3d9cb5edeff09598b7065abe5b42affb0b6e1c0c805ab57c051d0f3592a0f02b"}, + {file = "boto3-1.23.10-py3-none-any.whl", hash = "sha256:40d08614f17a69075e175c02c5d5aab69a6153fd50e40fa7057b913ac7bf40e7"}, + {file = "boto3-1.23.10.tar.gz", hash = "sha256:2a4395e3241c20eef441d7443a5e6eaa0ee3f7114653fb9d9cef41587526f7bd"}, ] botocore = [ - {file = "botocore-1.23.45-py3-none-any.whl", hash = "sha256:793a0a4b572bfb157ba17971e4d783766f59c5a0f117407bbeefeb577efa1ed1"}, - {file = "botocore-1.23.45.tar.gz", hash = "sha256:782323846dad22ea814a64bd64b89c7f04550812d3945ce77748b2bac6fe745b"}, + {file = "botocore-1.26.10-py3-none-any.whl", hash = "sha256:8a4a984bf901ccefe40037da11ba2abd1ddbcb3b490a492b7f218509c99fc12f"}, + {file = "botocore-1.26.10.tar.gz", hash = "sha256:5df2cf7ebe34377470172bd0bbc582cf98c5cbd02da0909a14e9e2885ab3ae9c"}, ] certifi = [ - {file 
= "certifi-2021.10.8-py2.py3-none-any.whl", hash = "sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"}, - {file = "certifi-2021.10.8.tar.gz", hash = "sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872"}, + {file = "certifi-2022.5.18.1-py3-none-any.whl", hash = "sha256:f1d53542ee8cbedbe2118b5686372fb33c297fcd6379b050cca0ef13a597382a"}, + {file = "certifi-2022.5.18.1.tar.gz", hash = "sha256:9c5705e395cd70084351dd8ad5c41e65655e08ce46f2ec9cf6c2c08390f71eb7"}, ] cffi = [ {file = "cffi-1.15.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c2502a1a03b6312837279c8c1bd3ebedf6c12c4228ddbad40912d671ccc8a962"}, @@ -2619,46 +2639,52 @@ chardet = [ {file = "chardet-3.0.4.tar.gz", hash = "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae"}, ] charset-normalizer = [ - {file = "charset-normalizer-2.0.10.tar.gz", hash = "sha256:876d180e9d7432c5d1dfd4c5d26b72f099d503e8fcc0feb7532c9289be60fcbd"}, - {file = "charset_normalizer-2.0.10-py3-none-any.whl", hash = "sha256:cb957888737fc0bbcd78e3df769addb41fd1ff8cf950dc9e7ad7793f1bf44455"}, + {file = "charset-normalizer-2.0.12.tar.gz", hash = "sha256:2857e29ff0d34db842cd7ca3230549d1a697f96ee6d3fb071cfa6c7393832597"}, + {file = "charset_normalizer-2.0.12-py3-none-any.whl", hash = "sha256:6881edbebdb17b39b4eaaa821b438bf6eddffb4468cf344f09f89def34a8b1df"}, +] +click = [ + {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"}, + {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"}, ] colorama = [ {file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"}, {file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"}, ] cryptography = [ - {file = "cryptography-36.0.1-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:73bc2d3f2444bcfeac67dd130ff2ea598ea5f20b40e36d19821b4df8c9c5037b"}, - {file = "cryptography-36.0.1-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:2d87cdcb378d3cfed944dac30596da1968f88fb96d7fc34fdae30a99054b2e31"}, - {file = "cryptography-36.0.1-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74d6c7e80609c0f4c2434b97b80c7f8fdfaa072ca4baab7e239a15d6d70ed73a"}, - {file = "cryptography-36.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:6c0c021f35b421ebf5976abf2daacc47e235f8b6082d3396a2fe3ccd537ab173"}, - {file = "cryptography-36.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d59a9d55027a8b88fd9fd2826c4392bd487d74bf628bb9d39beecc62a644c12"}, - {file = "cryptography-36.0.1-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a817b961b46894c5ca8a66b599c745b9a3d9f822725221f0e0fe49dc043a3a3"}, - {file = "cryptography-36.0.1-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:94ae132f0e40fe48f310bba63f477f14a43116f05ddb69d6fa31e93f05848ae2"}, - {file = "cryptography-36.0.1-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:7be0eec337359c155df191d6ae00a5e8bbb63933883f4f5dffc439dac5348c3f"}, - {file = "cryptography-36.0.1-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:e0344c14c9cb89e76eb6a060e67980c9e35b3f36691e15e1b7a9e58a0a6c6dc3"}, - {file = "cryptography-36.0.1-cp36-abi3-win32.whl", hash = "sha256:4caa4b893d8fad33cf1964d3e51842cd78ba87401ab1d2e44556826df849a8ca"}, - {file = 
"cryptography-36.0.1-cp36-abi3-win_amd64.whl", hash = "sha256:391432971a66cfaf94b21c24ab465a4cc3e8bf4a939c1ca5c3e3a6e0abebdbcf"}, - {file = "cryptography-36.0.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:bb5829d027ff82aa872d76158919045a7c1e91fbf241aec32cb07956e9ebd3c9"}, - {file = "cryptography-36.0.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ebc15b1c22e55c4d5566e3ca4db8689470a0ca2babef8e3a9ee057a8b82ce4b1"}, - {file = "cryptography-36.0.1-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:596f3cd67e1b950bc372c33f1a28a0692080625592ea6392987dba7f09f17a94"}, - {file = "cryptography-36.0.1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:30ee1eb3ebe1644d1c3f183d115a8c04e4e603ed6ce8e394ed39eea4a98469ac"}, - {file = "cryptography-36.0.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ec63da4e7e4a5f924b90af42eddf20b698a70e58d86a72d943857c4c6045b3ee"}, - {file = "cryptography-36.0.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ca238ceb7ba0bdf6ce88c1b74a87bffcee5afbfa1e41e173b1ceb095b39add46"}, - {file = "cryptography-36.0.1-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:ca28641954f767f9822c24e927ad894d45d5a1e501767599647259cbf030b903"}, - {file = "cryptography-36.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:39bdf8e70eee6b1c7b289ec6e5d84d49a6bfa11f8b8646b5b3dfe41219153316"}, - {file = "cryptography-36.0.1.tar.gz", hash = "sha256:53e5c1dc3d7a953de055d77bef2ff607ceef7a2aac0353b5d630ab67f7423638"}, + {file = "cryptography-37.0.2-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:ef15c2df7656763b4ff20a9bc4381d8352e6640cfeb95c2972c38ef508e75181"}, + {file = "cryptography-37.0.2-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:3c81599befb4d4f3d7648ed3217e00d21a9341a9a688ecdd615ff72ffbed7336"}, + {file = "cryptography-37.0.2-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2bd1096476aaac820426239ab534b636c77d71af66c547b9ddcd76eb9c79e004"}, + {file = "cryptography-37.0.2-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:31fe38d14d2e5f787e0aecef831457da6cec68e0bb09a35835b0b44ae8b988fe"}, + {file = "cryptography-37.0.2-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:093cb351031656d3ee2f4fa1be579a8c69c754cf874206be1d4cf3b542042804"}, + {file = "cryptography-37.0.2-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59b281eab51e1b6b6afa525af2bd93c16d49358404f814fe2c2410058623928c"}, + {file = "cryptography-37.0.2-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:0cc20f655157d4cfc7bada909dc5cc228211b075ba8407c46467f63597c78178"}, + {file = "cryptography-37.0.2-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:f8ec91983e638a9bcd75b39f1396e5c0dc2330cbd9ce4accefe68717e6779e0a"}, + {file = "cryptography-37.0.2-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:46f4c544f6557a2fefa7ac8ac7d1b17bf9b647bd20b16decc8fbcab7117fbc15"}, + {file = "cryptography-37.0.2-cp36-abi3-win32.whl", hash = "sha256:731c8abd27693323b348518ed0e0705713a36d79fdbd969ad968fbef0979a7e0"}, + {file = "cryptography-37.0.2-cp36-abi3-win_amd64.whl", hash = "sha256:471e0d70201c069f74c837983189949aa0d24bb2d751b57e26e3761f2f782b8d"}, + {file = "cryptography-37.0.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a68254dd88021f24a68b613d8c51d5c5e74d735878b9e32cc0adf19d1f10aaf9"}, + {file = 
"cryptography-37.0.2-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:a7d5137e556cc0ea418dca6186deabe9129cee318618eb1ffecbd35bee55ddc1"}, + {file = "cryptography-37.0.2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:aeaba7b5e756ea52c8861c133c596afe93dd716cbcacae23b80bc238202dc023"}, + {file = "cryptography-37.0.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95e590dd70642eb2079d280420a888190aa040ad20f19ec8c6e097e38aa29e06"}, + {file = "cryptography-37.0.2-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:1b9362d34363f2c71b7853f6251219298124aa4cc2075ae2932e64c91a3e2717"}, + {file = "cryptography-37.0.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e53258e69874a306fcecb88b7534d61820db8a98655662a3dd2ec7f1afd9132f"}, + {file = "cryptography-37.0.2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:1f3bfbd611db5cb58ca82f3deb35e83af34bb8cf06043fa61500157d50a70982"}, + {file = "cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:419c57d7b63f5ec38b1199a9521d77d7d1754eb97827bbb773162073ccd8c8d4"}, + {file = "cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:dc26bb134452081859aa21d4990474ddb7e863aa39e60d1592800a8865a702de"}, + {file = "cryptography-37.0.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3b8398b3d0efc420e777c40c16764d6870bcef2eb383df9c6dbb9ffe12c64452"}, + {file = "cryptography-37.0.2.tar.gz", hash = "sha256:f224ad253cc9cea7568f49077007d2263efa57396a2f2f78114066fd54b5c68e"}, ] deprecated = [ {file = "Deprecated-1.2.13-py2.py3-none-any.whl", hash = "sha256:64756e3e14c8c5eea9795d93c524551432a0be75629f8f29e67ab8caf076c76d"}, {file = "Deprecated-1.2.13.tar.gz", hash = "sha256:43ac5335da90c31c24ba028af536a91d41d53f9e6901ddb021bcc572ce44e38d"}, ] distro = [ - {file = "distro-1.6.0-py2.py3-none-any.whl", hash = "sha256:c8713330ab31a034623a9515663ed87696700b55f04556b97c39cd261aa70dc7"}, - {file = "distro-1.6.0.tar.gz", hash = "sha256:83f5e5a09f9c5f68f60173de572930effbcc0287bb84fdc4426cb4168c088424"}, + {file = "distro-1.7.0-py3-none-any.whl", hash = "sha256:d596311d707e692c2160c37807f83e3820c5d539d5a83e87cfb6babd8ba3a06b"}, + {file = "distro-1.7.0.tar.gz", hash = "sha256:151aeccf60c216402932b52e40ee477a939f8d58898927378a02abbe852c1c39"}, ] fabric = [ - {file = "fabric-2.6.0-py2.py3-none-any.whl", hash = "sha256:7a71714b8b8f28cf828eceb155196f43ebac1bd4c849b7161ed5993d1cbcaa40"}, - {file = "fabric-2.6.0.tar.gz", hash = "sha256:47f184b070272796fd2f9f0436799e18f2ccba4ee8ee587796fca192acd46cd2"}, + {file = "fabric-2.7.0-py2.py3-none-any.whl", hash = "sha256:e8bfe851719a88be24f40ad7e96ac5bf023ce1af650b561d7641ed1eaed55fe5"}, + {file = "fabric-2.7.0.tar.gz", hash = "sha256:0bf797a68c4b389720dc4dd6181497a58c41ed762e283d9e3c1b0148b32a9aff"}, ] humanfriendly = [ {file = "humanfriendly-10.0-py2.py3-none-any.whl", hash = "sha256:1697e1a8a8f550fd43c2865cd84542fc175a61dcb779b6fee18cf6b6ccba1477"}, @@ -2669,9 +2695,8 @@ idna = [ {file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"}, ] invoke = [ - {file = "invoke-1.6.0-py2-none-any.whl", hash = "sha256:e6c9917a1e3e73e7ea91fdf82d5f151ccfe85bf30cc65cdb892444c02dbb5f74"}, - {file = "invoke-1.6.0-py3-none-any.whl", hash = "sha256:769e90caeb1bd07d484821732f931f1ad8916a38e3f3e618644687fc09cb6317"}, - {file = "invoke-1.6.0.tar.gz", hash = "sha256:374d1e2ecf78981da94bfaf95366216aaec27c2d6a7b7d5818d92da55aa258d3"}, + {file = "invoke-1.7.1-py3-none-any.whl", 
hash = "sha256:2dc975b4f92be0c0a174ad2d063010c8a1fdb5e9389d69871001118b4fcac4fb"}, + {file = "invoke-1.7.1.tar.gz", hash = "sha256:7b6deaf585eee0a848205d0b8c0014b9bf6f287a8eb798818a642dff1df14b19"}, ] isodate = [ {file = "isodate-0.6.1-py2.py3-none-any.whl", hash = "sha256:0751eece944162659049d35f4f549ed815792b38793f07cf73381c1c87cbed96"}, @@ -2682,98 +2707,69 @@ javaproperties = [ {file = "javaproperties-0.5.2.tar.gz", hash = "sha256:68add3438bd24d6e32665cad91f254fc82a51bf905e21f3f424a085c79904fb3"}, ] jinja2 = [ - {file = "Jinja2-3.0.3-py3-none-any.whl", hash = "sha256:077ce6014f7b40d03b47d1f1ca4b0fc8328a692bd284016f806ed0eaca390ad8"}, - {file = "Jinja2-3.0.3.tar.gz", hash = "sha256:611bb273cd68f3b993fabdc4064fc858c5b47a973cb5aa7999ec1ba405c87cd7"}, + {file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"}, + {file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"}, ] jmespath = [ - {file = "jmespath-0.10.0-py2.py3-none-any.whl", hash = "sha256:cdf6525904cc597730141d61b36f2e4b8ecc257c420fa2f4549bac2c2d0cb72f"}, - {file = "jmespath-0.10.0.tar.gz", hash = "sha256:b85d0567b8666149a93172712e68920734333c0ce7e89b78b3e987f71e5ed4f9"}, + {file = "jmespath-1.0.0-py3-none-any.whl", hash = "sha256:e8dcd576ed616f14ec02eed0005c85973b5890083313860136657e24784e4c04"}, + {file = "jmespath-1.0.0.tar.gz", hash = "sha256:a490e280edd1f57d6de88636992d05b71e97d69a26a19f058ecf7d304474bf5e"}, ] jsondiff = [ {file = "jsondiff-1.3.1.tar.gz", hash = "sha256:04cfaebd4a5e5738948ab615710dc3ee98efbdf851255fd3977c4c2ee59e7312"}, ] jsonschema = [ - {file = "jsonschema-4.4.0-py3-none-any.whl", hash = "sha256:77281a1f71684953ee8b3d488371b162419767973789272434bbc3f29d9c8823"}, - {file = "jsonschema-4.4.0.tar.gz", hash = "sha256:636694eb41b3535ed608fe04129f26542b59ed99808b4f688aa32dcf55317a83"}, + {file = "jsonschema-4.5.1-py3-none-any.whl", hash = "sha256:71b5e39324422543546572954ce71c67728922c104902cb7ce252e522235b33f"}, + {file = "jsonschema-4.5.1.tar.gz", hash = "sha256:7c6d882619340c3347a1bf7315e147e6d3dae439033ae6383d6acb908c101dfc"}, ] knack = [ {file = "knack-0.9.0-py3-none-any.whl", hash = "sha256:d609abcca2604086b2e41935f7ffade2f83b709891af87df5088d23240ecbecc"}, {file = "knack-0.9.0.tar.gz", hash = "sha256:7fcab17585c0236885eaef311c01a1e626d84c982aabcac81703afda3f89c81f"}, ] markupsafe = [ - {file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d8446c54dc28c01e5a2dbac5a25f071f6653e6e40f3a8818e8b45d790fe6ef53"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:36bc903cbb393720fad60fc28c10de6acf10dc6cc883f3e24ee4012371399a38"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d7d807855b419fc2ed3e631034685db6079889a1f01d5d9dac950f764da3dad"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:add36cb2dbb8b736611303cd3bfcee00afd96471b09cda130da3581cbdc56a6d"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:168cd0a3642de83558a5153c8bd34f175a9a6e7f6dc6384b9655d2697312a646"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4dc8f9fb58f7364b63fd9f85013b780ef83c11857ae79f2feda41e270468dd9b"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = 
"sha256:20dca64a3ef2d6e4d5d615a3fd418ad3bde77a47ec8a23d984a12b5b4c74491a"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdfba22ea2f0029c9261a4bd07e830a8da012291fbe44dc794e488b6c9bb353a"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-win32.whl", hash = "sha256:99df47edb6bda1249d3e80fdabb1dab8c08ef3975f69aed437cb69d0a5de1e28"}, - {file = "MarkupSafe-2.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:e0f138900af21926a02425cf736db95be9f4af72ba1bb21453432a07f6082134"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f9081981fe268bd86831e5c75f7de206ef275defcb82bc70740ae6dc507aee51"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:fa130dd50c57d53368c9d59395cb5526eda596d3ffe36666cd81a44d56e48872"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf5d821ffabf0ef3533c39c518f3357b171a1651c1ff6827325e4489b0e46c3c"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0d4b31cc67ab36e3392bbf3862cfbadac3db12bdd8b02a2731f509ed5b829724"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:baa1a4e8f868845af802979fcdbf0bb11f94f1cb7ced4c4b8a351bb60d108145"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:deb993cacb280823246a026e3b2d81c493c53de6acfd5e6bfe31ab3402bb37dd"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:63f3268ba69ace99cab4e3e3b5840b03340efed0948ab8f78d2fd87ee5442a4f"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8d206346619592c6200148b01a2142798c989edcb9c896f9ac9722a99d4e77e6"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-win32.whl", hash = "sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d"}, - {file = "MarkupSafe-2.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6557b31b5e2c9ddf0de32a691f2312a32f77cd7681d8af66c2692efdbef84c18"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:49e3ceeabbfb9d66c3aef5af3a60cc43b85c33df25ce03d0031a608b0a8b2e3f"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = 
"sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9936f0b261d4df76ad22f8fee3ae83b60d7c3e871292cd42f40b81b70afae85"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2a7d351cbd8cfeb19ca00de495e224dea7e7d919659c2841bbb7f420ad03e2d6"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:60bf42e36abfaf9aff1f50f52644b336d4f0a3fd6d8a60ca0d054ac9f713a864"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d6c7ebd4e944c85e2c3421e612a7057a2f48d478d79e61800d81468a8d842207"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f0567c4dc99f264f49fe27da5f735f414c4e7e7dd850cfd8e69f0862d7c74ea9"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:89c687013cb1cd489a0f0ac24febe8c7a666e6e221b783e53ac50ebf68e45d86"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-win32.whl", hash = "sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415"}, - {file = "MarkupSafe-2.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5bb28c636d87e840583ee3adeb78172efc47c8b26127267f54a9c0ec251d41a9"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:01a9b8ea66f1658938f65b93a85ebe8bc016e6769611be228d797c9d998dd298"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6fcf051089389abe060c9cd7caa212c707e58153afa2c649f00346ce6d260f1b"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5855f8438a7d1d458206a2466bf82b0f104a3724bf96a1c781ab731e4201731a"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3dd007d54ee88b46be476e293f48c85048603f5f516008bee124ddd891398ed6"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:aca6377c0cb8a8253e493c6b451565ac77e98c2951c45f913e0b52facdcff83f"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:04635854b943835a6ea959e948d19dcd311762c5c0c6e1f0e16ee57022669194"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6300b8454aa6930a24b9618fbb54b5a68135092bc666f7b06901f897fa5c2fee"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-win32.whl", hash = 
"sha256:023cb26ec21ece8dc3907c0e8320058b2e0cb3c55cf9564da612bc325bed5e64"}, - {file = "MarkupSafe-2.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3c112550557578c26af18a1ccc9e090bfe03832ae994343cfdacd287db6a6ae7"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:53edb4da6925ad13c07b6d26c2a852bd81e364f95301c66e930ab2aef5b5ddd8"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f5653a225f31e113b152e56f154ccbe59eeb1c7487b39b9d9f9cdb58e6c79dc5"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c47adbc92fc1bb2b3274c4b3a43ae0e4573d9fbff4f54cd484555edbf030baf1"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:37205cac2a79194e3750b0af2a5720d95f786a55ce7df90c3af697bfa100eaac"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1f2ade76b9903f39aa442b4aadd2177decb66525062db244b35d71d0ee8599b6"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4296f2b1ce8c86a6aea78613c34bb1a672ea0e3de9c6ba08a960efe0b0a09047"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f02365d4e99430a12647f09b6cc8bab61a6564363f313126f775eb4f6ef798e"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5b6d930f030f8ed98e3e6c98ffa0652bdb82601e7a016ec2ab5d7ff23baa78d1"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-win32.whl", hash = "sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74"}, - {file = "MarkupSafe-2.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8"}, - {file = "MarkupSafe-2.0.1.tar.gz", hash = "sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"}, + {file = 
"MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"}, + {file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"}, + {file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"}, + {file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"}, + {file = 
"MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"}, + {file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"}, + {file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"}, ] msal = [ - {file = "msal-1.16.0-py2.py3-none-any.whl", hash = "sha256:a421a43413335099228f1d9ad93f7491d7c7c40044108290e4923fe58f41a332"}, - {file = "msal-1.16.0.tar.gz", hash = "sha256:240fb04dba46a27fd6a3178db8334412d0d02e0be85166f9e05bb45d03399084"}, + {file = "msal-1.17.0-py2.py3-none-any.whl", hash = "sha256:5a52d78e70d2c451e267c1e8c2342e4c06f495c75c859aeafd9260d3974f09fe"}, + {file = "msal-1.17.0.tar.gz", hash = "sha256:04e3cb7bb75c51f56d290381f23056207df1f3eb594ed03d38551f3b16d2a36e"}, ] msal-extensions = [ {file = "msal-extensions-0.3.1.tar.gz", hash = "sha256:d9029af70f2cbdc5ad7ecfed61cb432ebe900484843ccf72825445dbfe62d311"}, @@ -2788,20 +2784,20 @@ msrestazure = [ {file = "msrestazure-0.6.4.tar.gz", hash = "sha256:a06f0dabc9a6f5efe3b6add4bd8fb623aeadacf816b7a35b0f89107e0544d189"}, ] oauthlib = [ - {file = "oauthlib-3.1.1-py2.py3-none-any.whl", hash = "sha256:42bf6354c2ed8c6acb54d971fce6f88193d97297e18602a3a886603f9d7730cc"}, - {file = "oauthlib-3.1.1.tar.gz", hash = "sha256:8f0215fcc533dd8dd1bee6f4c412d4f0cd7297307d43ac61666389e3bc3198a3"}, + {file = "oauthlib-3.2.0-py3-none-any.whl", hash = "sha256:6db33440354787f9b7f3a6dbd4febf5d0f93758354060e802f6c06cb493022fe"}, + {file = "oauthlib-3.2.0.tar.gz", hash = "sha256:23a8208d75b902797ea29fd31fa80a15ed9dc2c6c16fe73f5d346f83f6fa27a2"}, ] packaging = [ {file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"}, {file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"}, ] paramiko = [ - {file = "paramiko-2.9.2-py2.py3-none-any.whl", hash = "sha256:04097dbd96871691cdb34c13db1883066b8a13a0df2afd4cb0a92221f51c2603"}, - {file = "paramiko-2.9.2.tar.gz", 
hash = "sha256:944a9e5dbdd413ab6c7951ea46b0ab40713235a9c4c5ca81cfe45c6f14fa677b"}, + {file = "paramiko-2.11.0-py2.py3-none-any.whl", hash = "sha256:655f25dc8baf763277b933dfcea101d636581df8d6b9774d1fb653426b72c270"}, + {file = "paramiko-2.11.0.tar.gz", hash = "sha256:003e6bee7c034c21fbb051bf83dc0a9ee4106204dd3c53054c71452cc4ec3938"}, ] pathlib2 = [ - {file = "pathlib2-2.3.6-py2.py3-none-any.whl", hash = "sha256:3a130b266b3a36134dcc79c17b3c7ac9634f083825ca6ea9d8f557ee6195c9c8"}, - {file = "pathlib2-2.3.6.tar.gz", hash = "sha256:7d8bcb5555003cdf4a8d2872c538faa3a0f5d20630cb360e518ca3b981795e5f"}, + {file = "pathlib2-2.3.7.post1-py2.py3-none-any.whl", hash = "sha256:5266a0fd000452f1b3467d782f079a4343c63aaa119221fbdc4e39577489ca5b"}, + {file = "pathlib2-2.3.7.post1.tar.gz", hash = "sha256:9fe0edad898b83c0c3e199c842b27ed216645d2e177757b2dd67384d4113c641"}, ] pkginfo = [ {file = "pkginfo-1.8.2-py2.py3-none-any.whl", hash = "sha256:c24c487c6a7f72c66e816ab1796b96ac6c3d14d49338293d2141664330b55ffc"}, @@ -2812,38 +2808,38 @@ portalocker = [ {file = "portalocker-1.7.1.tar.gz", hash = "sha256:6d6f5de5a3e68c4dd65a98ec1babb26d28ccc5e770e07b672d65d5a35e4b2d8a"}, ] psutil = [ - {file = "psutil-5.9.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:55ce319452e3d139e25d6c3f85a1acf12d1607ddedea5e35fb47a552c051161b"}, - {file = "psutil-5.9.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:7336292a13a80eb93c21f36bde4328aa748a04b68c13d01dfddd67fc13fd0618"}, - {file = "psutil-5.9.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:cb8d10461c1ceee0c25a64f2dd54872b70b89c26419e147a05a10b753ad36ec2"}, - {file = "psutil-5.9.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:7641300de73e4909e5d148e90cc3142fb890079e1525a840cf0dfd39195239fd"}, - {file = "psutil-5.9.0-cp27-none-win32.whl", hash = "sha256:ea42d747c5f71b5ccaa6897b216a7dadb9f52c72a0fe2b872ef7d3e1eacf3ba3"}, - {file = "psutil-5.9.0-cp27-none-win_amd64.whl", hash = "sha256:ef216cc9feb60634bda2f341a9559ac594e2eeaadd0ba187a4c2eb5b5d40b91c"}, - {file = "psutil-5.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:90a58b9fcae2dbfe4ba852b57bd4a1dded6b990a33d6428c7614b7d48eccb492"}, - {file = "psutil-5.9.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ff0d41f8b3e9ebb6b6110057e40019a432e96aae2008951121ba4e56040b84f3"}, - {file = "psutil-5.9.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:742c34fff804f34f62659279ed5c5b723bb0195e9d7bd9907591de9f8f6558e2"}, - {file = "psutil-5.9.0-cp310-cp310-win32.whl", hash = "sha256:8293942e4ce0c5689821f65ce6522ce4786d02af57f13c0195b40e1edb1db61d"}, - {file = "psutil-5.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:9b51917c1af3fa35a3f2dabd7ba96a2a4f19df3dec911da73875e1edaf22a40b"}, - {file = "psutil-5.9.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e9805fed4f2a81de98ae5fe38b75a74c6e6ad2df8a5c479594c7629a1fe35f56"}, - {file = "psutil-5.9.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c51f1af02334e4b516ec221ee26b8fdf105032418ca5a5ab9737e8c87dafe203"}, - {file = "psutil-5.9.0-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32acf55cb9a8cbfb29167cd005951df81b567099295291bcfd1027365b36591d"}, - {file = "psutil-5.9.0-cp36-cp36m-win32.whl", hash = "sha256:e5c783d0b1ad6ca8a5d3e7b680468c9c926b804be83a3a8e95141b05c39c9f64"}, - {file = 
"psutil-5.9.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d62a2796e08dd024b8179bd441cb714e0f81226c352c802fca0fd3f89eeacd94"}, - {file = "psutil-5.9.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d00a664e31921009a84367266b35ba0aac04a2a6cad09c550a89041034d19a0"}, - {file = "psutil-5.9.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7779be4025c540d1d65a2de3f30caeacc49ae7a2152108adeaf42c7534a115ce"}, - {file = "psutil-5.9.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072664401ae6e7c1bfb878c65d7282d4b4391f1bc9a56d5e03b5a490403271b5"}, - {file = "psutil-5.9.0-cp37-cp37m-win32.whl", hash = "sha256:df2c8bd48fb83a8408c8390b143c6a6fa10cb1a674ca664954de193fdcab36a9"}, - {file = "psutil-5.9.0-cp37-cp37m-win_amd64.whl", hash = "sha256:1d7b433519b9a38192dfda962dd8f44446668c009833e1429a52424624f408b4"}, - {file = "psutil-5.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c3400cae15bdb449d518545cbd5b649117de54e3596ded84aacabfbb3297ead2"}, - {file = "psutil-5.9.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2237f35c4bbae932ee98902a08050a27821f8f6dfa880a47195e5993af4702d"}, - {file = "psutil-5.9.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1070a9b287846a21a5d572d6dddd369517510b68710fca56b0e9e02fd24bed9a"}, - {file = "psutil-5.9.0-cp38-cp38-win32.whl", hash = "sha256:76cebf84aac1d6da5b63df11fe0d377b46b7b500d892284068bacccf12f20666"}, - {file = "psutil-5.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:3151a58f0fbd8942ba94f7c31c7e6b310d2989f4da74fcbf28b934374e9bf841"}, - {file = "psutil-5.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:539e429da49c5d27d5a58e3563886057f8fc3868a5547b4f1876d9c0f007bccf"}, - {file = "psutil-5.9.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58c7d923dc209225600aec73aa2c4ae8ea33b1ab31bc11ef8a5933b027476f07"}, - {file = "psutil-5.9.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3611e87eea393f779a35b192b46a164b1d01167c9d323dda9b1e527ea69d697d"}, - {file = "psutil-5.9.0-cp39-cp39-win32.whl", hash = "sha256:4e2fb92e3aeae3ec3b7b66c528981fd327fb93fd906a77215200404444ec1845"}, - {file = "psutil-5.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:7d190ee2eaef7831163f254dc58f6d2e2a22e27382b936aab51c835fc080c3d3"}, - {file = "psutil-5.9.0.tar.gz", hash = "sha256:869842dbd66bb80c3217158e629d6fceaecc3a3166d3d1faee515b05dd26ca25"}, + {file = "psutil-5.9.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:799759d809c31aab5fe4579e50addf84565e71c1dc9f1c31258f159ff70d3f87"}, + {file = "psutil-5.9.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:9272167b5f5fbfe16945be3db475b3ce8d792386907e673a209da686176552af"}, + {file = "psutil-5.9.1-cp27-cp27m-win32.whl", hash = "sha256:0904727e0b0a038830b019551cf3204dd48ef5c6868adc776e06e93d615fc5fc"}, + {file = "psutil-5.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e7e10454cb1ab62cc6ce776e1c135a64045a11ec4c6d254d3f7689c16eb3efd2"}, + {file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:56960b9e8edcca1456f8c86a196f0c3d8e3e361320071c93378d41445ffd28b0"}, + {file = "psutil-5.9.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:44d1826150d49ffd62035785a9e2c56afcea66e55b43b8b630d7706276e87f22"}, + {file = 
"psutil-5.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7be9d7f5b0d206f0bbc3794b8e16fb7dbc53ec9e40bbe8787c6f2d38efcf6c9"}, + {file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abd9246e4cdd5b554a2ddd97c157e292ac11ef3e7af25ac56b08b455c829dca8"}, + {file = "psutil-5.9.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29a442e25fab1f4d05e2655bb1b8ab6887981838d22effa2396d584b740194de"}, + {file = "psutil-5.9.1-cp310-cp310-win32.whl", hash = "sha256:20b27771b077dcaa0de1de3ad52d22538fe101f9946d6dc7869e6f694f079329"}, + {file = "psutil-5.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:58678bbadae12e0db55186dc58f2888839228ac9f41cc7848853539b70490021"}, + {file = "psutil-5.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3a76ad658641172d9c6e593de6fe248ddde825b5866464c3b2ee26c35da9d237"}, + {file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6a11e48cb93a5fa606306493f439b4aa7c56cb03fc9ace7f6bfa21aaf07c453"}, + {file = "psutil-5.9.1-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:068935df39055bf27a29824b95c801c7a5130f118b806eee663cad28dca97685"}, + {file = "psutil-5.9.1-cp36-cp36m-win32.whl", hash = "sha256:0f15a19a05f39a09327345bc279c1ba4a8cfb0172cc0d3c7f7d16c813b2e7d36"}, + {file = "psutil-5.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:db417f0865f90bdc07fa30e1aadc69b6f4cad7f86324b02aa842034efe8d8c4d"}, + {file = "psutil-5.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:91c7ff2a40c373d0cc9121d54bc5f31c4fa09c346528e6a08d1845bce5771ffc"}, + {file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fea896b54f3a4ae6f790ac1d017101252c93f6fe075d0e7571543510f11d2676"}, + {file = "psutil-5.9.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3054e923204b8e9c23a55b23b6df73a8089ae1d075cb0bf711d3e9da1724ded4"}, + {file = "psutil-5.9.1-cp37-cp37m-win32.whl", hash = "sha256:d2d006286fbcb60f0b391741f520862e9b69f4019b4d738a2a45728c7e952f1b"}, + {file = "psutil-5.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b14ee12da9338f5e5b3a3ef7ca58b3cba30f5b66f7662159762932e6d0b8f680"}, + {file = "psutil-5.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:19f36c16012ba9cfc742604df189f2f28d2720e23ff7d1e81602dbe066be9fd1"}, + {file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:944c4b4b82dc4a1b805329c980f270f170fdc9945464223f2ec8e57563139cf4"}, + {file = "psutil-5.9.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b6750a73a9c4a4e689490ccb862d53c7b976a2a35c4e1846d049dcc3f17d83b"}, + {file = "psutil-5.9.1-cp38-cp38-win32.whl", hash = "sha256:a8746bfe4e8f659528c5c7e9af5090c5a7d252f32b2e859c584ef7d8efb1e689"}, + {file = "psutil-5.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:79c9108d9aa7fa6fba6e668b61b82facc067a6b81517cab34d07a84aa89f3df0"}, + {file = "psutil-5.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:28976df6c64ddd6320d281128817f32c29b539a52bdae5e192537bc338a9ec81"}, + {file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:b88f75005586131276634027f4219d06e0561292be8bd6bc7f2f00bdabd63c4e"}, + {file = "psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645bd4f7bb5b8633803e0b6746ff1628724668681a434482546887d22c7a9537"}, + {file = "psutil-5.9.1-cp39-cp39-win32.whl", hash = "sha256:32c52611756096ae91f5d1499fe6c53b86f4a9ada147ee42db4991ba1520e574"}, + {file = "psutil-5.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:f65f9a46d984b8cd9b3750c2bdb419b2996895b005aefa6cbaba9a143b1ce2c5"}, + {file = "psutil-5.9.1.tar.gz", hash = "sha256:57f1819b5d9e95cdfb0c881a8a5b7d542ed0b7c522d575706a80bedc848c8954"}, ] pycparser = [ {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"}, @@ -2854,12 +2850,12 @@ pygithub = [ {file = "PyGithub-1.55.tar.gz", hash = "sha256:1bbfff9372047ff3f21d5cd8e07720f3dbfdaf6462fcaed9d815f528f1ba7283"}, ] pygments = [ - {file = "Pygments-2.11.2-py3-none-any.whl", hash = "sha256:44238f1b60a76d78fc8ca0528ee429702aae011c265fe6a8dd8b63049ae41c65"}, - {file = "Pygments-2.11.2.tar.gz", hash = "sha256:4e426f72023d88d03b2fa258de560726ce890ff3b630f88c21cbb8b2503b8c6a"}, + {file = "Pygments-2.12.0-py3-none-any.whl", hash = "sha256:dc9c10fb40944260f6ed4c688ece0cd2048414940f1cea51b8b226318411c519"}, + {file = "Pygments-2.12.0.tar.gz", hash = "sha256:5eb116118f9612ff1ee89ac96437bb6b49e8f04d8a13b514ba26f620208e26eb"}, ] pyjwt = [ - {file = "PyJWT-2.3.0-py3-none-any.whl", hash = "sha256:e0c4bb8d9f0af0c7f5b1ec4c5036309617d03d56932877f2f7a0beeb5318322f"}, - {file = "PyJWT-2.3.0.tar.gz", hash = "sha256:b888b4d56f06f6dcd777210c334e69c737be74755d3e5e9ee3fe67dc18a0ee41"}, + {file = "PyJWT-2.4.0-py3-none-any.whl", hash = "sha256:72d1d253f32dbd4f5c88eaf1fdc62f3a19f676ccbadb9dbc5d07e951b2b26daf"}, + {file = "PyJWT-2.4.0.tar.gz", hash = "sha256:d42908208c699b3b973cbeb01a969ba6a96c821eefb1c5bfe4c390c01d67abba"}, ] pynacl = [ {file = "PyNaCl-1.4.0-cp27-cp27m-macosx_10_10_x86_64.whl", hash = "sha256:ea6841bc3a76fa4942ce00f3bda7d436fda21e2d91602b9e21b7ca9ecab8f3ff"}, @@ -2882,12 +2878,12 @@ pynacl = [ {file = "PyNaCl-1.4.0.tar.gz", hash = "sha256:54e9a2c849c742006516ad56a88f5c74bf2ce92c9f67435187c3c5953b346505"}, ] pyopenssl = [ - {file = "pyOpenSSL-21.0.0-py2.py3-none-any.whl", hash = "sha256:8935bd4920ab9abfebb07c41a4f58296407ed77f04bd1a92914044b848ba1ed6"}, - {file = "pyOpenSSL-21.0.0.tar.gz", hash = "sha256:5e2d8c5e46d0d865ae933bef5230090bdaf5506281e9eec60fa250ee80600cb3"}, + {file = "pyOpenSSL-22.0.0-py2.py3-none-any.whl", hash = "sha256:ea252b38c87425b64116f808355e8da644ef9b07e429398bfece610f893ee2e0"}, + {file = "pyOpenSSL-22.0.0.tar.gz", hash = "sha256:660b1b1425aac4a1bea1d94168a85d99f0b3144c869dd4390d27629d0087f1bf"}, ] pyparsing = [ - {file = "pyparsing-3.0.7-py3-none-any.whl", hash = "sha256:a6c06a88f252e6c322f65faf8f418b16213b51bdfaece0524c1c1bc30c63c484"}, - {file = "pyparsing-3.0.7.tar.gz", hash = "sha256:18ee9022775d270c55187733956460083db60b37d0d0fb357445f3094eed3eea"}, + {file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"}, + {file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"}, ] pyreadline3 = [ {file = "pyreadline3-3.4.1-py3-none-any.whl", hash = "sha256:b0efb6516fd4fb07b45949053826a62fa4cb353db5be2bbb4a7aa1fdd1e345fb"}, @@ -2930,18 +2926,20 @@ python-json-logger = [ {file = "python_json_logger-2.0.2-py3-none-any.whl", 
hash = "sha256:99310d148f054e858cd5f4258794ed6777e7ad2c3fd7e1c1b527f1cba4d08420"}, ] pywin32 = [ - {file = "pywin32-303-cp310-cp310-win32.whl", hash = "sha256:6fed4af057039f309263fd3285d7b8042d41507343cd5fa781d98fcc5b90e8bb"}, - {file = "pywin32-303-cp310-cp310-win_amd64.whl", hash = "sha256:51cb52c5ec6709f96c3f26e7795b0bf169ee0d8395b2c1d7eb2c029a5008ed51"}, - {file = "pywin32-303-cp311-cp311-win32.whl", hash = "sha256:d9b5d87ca944eb3aa4cd45516203ead4b37ab06b8b777c54aedc35975dec0dee"}, - {file = "pywin32-303-cp311-cp311-win_amd64.whl", hash = "sha256:fcf44032f5b14fcda86028cdf49b6ebdaea091230eb0a757282aa656e4732439"}, - {file = "pywin32-303-cp36-cp36m-win32.whl", hash = "sha256:aad484d52ec58008ca36bd4ad14a71d7dd0a99db1a4ca71072213f63bf49c7d9"}, - {file = "pywin32-303-cp36-cp36m-win_amd64.whl", hash = "sha256:2a09632916b6bb231ba49983fe989f2f625cea237219530e81a69239cd0c4559"}, - {file = "pywin32-303-cp37-cp37m-win32.whl", hash = "sha256:b1675d82bcf6dbc96363fca747bac8bff6f6e4a447a4287ac652aa4b9adc796e"}, - {file = "pywin32-303-cp37-cp37m-win_amd64.whl", hash = "sha256:c268040769b48a13367221fced6d4232ed52f044ffafeda247bd9d2c6bdc29ca"}, - {file = "pywin32-303-cp38-cp38-win32.whl", hash = "sha256:5f9ec054f5a46a0f4dfd72af2ce1372f3d5a6e4052af20b858aa7df2df7d355b"}, - {file = "pywin32-303-cp38-cp38-win_amd64.whl", hash = "sha256:793bf74fce164bcffd9d57bb13c2c15d56e43c9542a7b9687b4fccf8f8a41aba"}, - {file = "pywin32-303-cp39-cp39-win32.whl", hash = "sha256:7d3271c98434617a11921c5ccf74615794d97b079e22ed7773790822735cc352"}, - {file = "pywin32-303-cp39-cp39-win_amd64.whl", hash = "sha256:79cbb862c11b9af19bcb682891c1b91942ec2ff7de8151e2aea2e175899cda34"}, + {file = "pywin32-304-cp310-cp310-win32.whl", hash = "sha256:3c7bacf5e24298c86314f03fa20e16558a4e4138fc34615d7de4070c23e65af3"}, + {file = "pywin32-304-cp310-cp310-win_amd64.whl", hash = "sha256:4f32145913a2447736dad62495199a8e280a77a0ca662daa2332acf849f0be48"}, + {file = "pywin32-304-cp310-cp310-win_arm64.whl", hash = "sha256:d3ee45adff48e0551d1aa60d2ec066fec006083b791f5c3527c40cd8aefac71f"}, + {file = "pywin32-304-cp311-cp311-win32.whl", hash = "sha256:30c53d6ce44c12a316a06c153ea74152d3b1342610f1b99d40ba2795e5af0269"}, + {file = "pywin32-304-cp311-cp311-win_amd64.whl", hash = "sha256:7ffa0c0fa4ae4077e8b8aa73800540ef8c24530057768c3ac57c609f99a14fd4"}, + {file = "pywin32-304-cp311-cp311-win_arm64.whl", hash = "sha256:cbbe34dad39bdbaa2889a424d28752f1b4971939b14b1bb48cbf0182a3bcfc43"}, + {file = "pywin32-304-cp36-cp36m-win32.whl", hash = "sha256:be253e7b14bc601718f014d2832e4c18a5b023cbe72db826da63df76b77507a1"}, + {file = "pywin32-304-cp36-cp36m-win_amd64.whl", hash = "sha256:de9827c23321dcf43d2f288f09f3b6d772fee11e809015bdae9e69fe13213988"}, + {file = "pywin32-304-cp37-cp37m-win32.whl", hash = "sha256:f64c0377cf01b61bd5e76c25e1480ca8ab3b73f0c4add50538d332afdf8f69c5"}, + {file = "pywin32-304-cp37-cp37m-win_amd64.whl", hash = "sha256:bb2ea2aa81e96eee6a6b79d87e1d1648d3f8b87f9a64499e0b92b30d141e76df"}, + {file = "pywin32-304-cp38-cp38-win32.whl", hash = "sha256:94037b5259701988954931333aafd39cf897e990852115656b014ce72e052e96"}, + {file = "pywin32-304-cp38-cp38-win_amd64.whl", hash = "sha256:ead865a2e179b30fb717831f73cf4373401fc62fbc3455a0889a7ddac848f83e"}, + {file = "pywin32-304-cp39-cp39-win32.whl", hash = "sha256:25746d841201fd9f96b648a248f731c1dec851c9a08b8e33da8b56148e4c65cc"}, + {file = "pywin32-304-cp39-cp39-win_amd64.whl", hash = "sha256:d24a3382f013b21aa24a5cfbfad5a2cd9926610c0affde3e8ab5b3d7dbcf4ac9"}, ] pyyaml = [ {file = 
"PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"}, @@ -2983,17 +2981,16 @@ requests = [ {file = "requests-2.27.1.tar.gz", hash = "sha256:68d7c56fd5a8999887728ef304a6d12edc7be74f1cfa47714fc8b414525c9a61"}, ] requests-oauthlib = [ - {file = "requests-oauthlib-1.3.0.tar.gz", hash = "sha256:b4261601a71fd721a8bd6d7aa1cc1d6a8a93b4a9f5e96626f8e4d91e8beeaa6a"}, - {file = "requests_oauthlib-1.3.0-py2.py3-none-any.whl", hash = "sha256:7f71572defaecd16372f9006f33c2ec8c077c3cfa6f5911a9a90202beb513f3d"}, - {file = "requests_oauthlib-1.3.0-py3.7.egg", hash = "sha256:fa6c47b933f01060936d87ae9327fead68768b69c6c9ea2109c48be30f2d4dbc"}, + {file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"}, + {file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"}, ] resolvelib = [ {file = "resolvelib-0.5.5-py2.py3-none-any.whl", hash = "sha256:b0143b9d074550a6c5163a0f587e49c49017434e3cdfe853941725f5455dd29c"}, {file = "resolvelib-0.5.5.tar.gz", hash = "sha256:123de56548c90df85137425a3f51eb93df89e2ba719aeb6a8023c032758be950"}, ] "ruamel.yaml" = [ - {file = "ruamel.yaml-0.17.20-py3-none-any.whl", hash = "sha256:810eef9c46523a3f77479c66267a4708255ebe806a2d540078408c2227f011af"}, - {file = "ruamel.yaml-0.17.20.tar.gz", hash = "sha256:4b8a33c1efb2b443a93fcaafcfa4d2e445f8e8c29c528d9f5cdafb7cc9e4004c"}, + {file = "ruamel.yaml-0.17.21-py3-none-any.whl", hash = "sha256:742b35d3d665023981bd6d16b3d24248ce5df75fdb4e2924e93a05c1f8b61ca7"}, + {file = "ruamel.yaml-0.17.21.tar.gz", hash = "sha256:8b7ce697a2f212752a35c1ac414471dc16c424c9573be4926b56ff3f5d23b7af"}, ] "ruamel.yaml.clib" = [ {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6e7be2c5bcb297f5b82fee9c665eb2eb7001d1050deaba8471842979293a80b0"}, @@ -3023,8 +3020,8 @@ resolvelib = [ {file = "ruamel.yaml.clib-0.2.6.tar.gz", hash = "sha256:4ff604ce439abb20794f05613c374759ce10e3595d1867764dd1ae675b85acbd"}, ] s3transfer = [ - {file = "s3transfer-0.5.0-py3-none-any.whl", hash = "sha256:9c1dc369814391a6bda20ebbf4b70a0f34630592c9aa520856bf384916af2803"}, - {file = "s3transfer-0.5.0.tar.gz", hash = "sha256:50ed823e1dc5868ad40c8dc92072f757aa0e653a192845c94a3b676f4a62da4c"}, + {file = "s3transfer-0.5.2-py3-none-any.whl", hash = "sha256:7a6f4c4d1fdb9a2b640244008e142cbc2cd3ae34b386584ef044dd0f27101971"}, + {file = "s3transfer-0.5.2.tar.gz", hash = "sha256:95c58c194ce657a5f4fb0b9e60a84968c808888aed628cd98ab8771fe1db98ed"}, ] scp = [ {file = "scp-0.13.6-py2.py3-none-any.whl", hash = "sha256:5e23f22b00bdbeed83a982c6b2dfae98c125b80019c15fbb16dd64dfd864a452"}, @@ -3052,68 +3049,85 @@ tabulate = [ {file = "tabulate-0.8.9-py3-none-any.whl", hash = "sha256:d7c013fe7abbc5e491394e10fa845f8f32fe54f8dc60c6622c6cf482d25d47e4"}, {file = "tabulate-0.8.9.tar.gz", hash = "sha256:eb1d13f25760052e8931f2ef80aaf6045a6cceb47514db8beab24cded16f13a7"}, ] +typing-extensions = [ + {file = "typing_extensions-4.2.0-py3-none-any.whl", hash = "sha256:6657594ee297170d19f67d55c05852a874e7eb634f4f753dbd667855e07c1708"}, + {file = "typing_extensions-4.2.0.tar.gz", hash = "sha256:f1c24655a0da0d1b67f07e17a5e6b2a105894e6824b92096378bb3668ef02376"}, +] urllib3 = [ - {file = "urllib3-1.26.8-py2.py3-none-any.whl", hash = "sha256:000ca7f471a233c2251c6c7023ee85305721bfdf18621ebff4fd17a8653427ed"}, - {file = "urllib3-1.26.8.tar.gz", hash = 
"sha256:0e7c33d9a63e7ddfcb86780aac87befc2fbddf46c58dbb487e0855f7ceec283c"}, + {file = "urllib3-1.26.9-py2.py3-none-any.whl", hash = "sha256:44ece4d53fb1706f667c9bd1c648f5469a2ec925fcf3a776667042d645472c14"}, + {file = "urllib3-1.26.9.tar.gz", hash = "sha256:aabaf16477806a5e1dd19aa41f8c2b7950dd3c746362d7e3223dbe6de6ac448e"}, ] websocket-client = [ {file = "websocket_client-0.56.0-py2.py3-none-any.whl", hash = "sha256:1151d5fb3a62dc129164292e1227655e4bbc5dd5340a5165dfae61128ec50aa9"}, {file = "websocket_client-0.56.0.tar.gz", hash = "sha256:1fd5520878b68b84b5748bb30e592b10d0a91529d5383f74f4964e72b297fd3a"}, ] wrapt = [ - {file = "wrapt-1.13.3-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:e05e60ff3b2b0342153be4d1b597bbcfd8330890056b9619f4ad6b8d5c96a81a"}, - {file = "wrapt-1.13.3-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:85148f4225287b6a0665eef08a178c15097366d46b210574a658c1ff5b377489"}, - {file = "wrapt-1.13.3-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:2dded5496e8f1592ec27079b28b6ad2a1ef0b9296d270f77b8e4a3a796cf6909"}, - {file = "wrapt-1.13.3-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:e94b7d9deaa4cc7bac9198a58a7240aaf87fe56c6277ee25fa5b3aa1edebd229"}, - {file = "wrapt-1.13.3-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:498e6217523111d07cd67e87a791f5e9ee769f9241fcf8a379696e25806965af"}, - {file = "wrapt-1.13.3-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:ec7e20258ecc5174029a0f391e1b948bf2906cd64c198a9b8b281b811cbc04de"}, - {file = "wrapt-1.13.3-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:87883690cae293541e08ba2da22cacaae0a092e0ed56bbba8d018cc486fbafbb"}, - {file = "wrapt-1.13.3-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:f99c0489258086308aad4ae57da9e8ecf9e1f3f30fa35d5e170b4d4896554d80"}, - {file = "wrapt-1.13.3-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:6a03d9917aee887690aa3f1747ce634e610f6db6f6b332b35c2dd89412912bca"}, - {file = "wrapt-1.13.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:936503cb0a6ed28dbfa87e8fcd0a56458822144e9d11a49ccee6d9a8adb2ac44"}, - {file = "wrapt-1.13.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:f9c51d9af9abb899bd34ace878fbec8bf357b3194a10c4e8e0a25512826ef056"}, - {file = "wrapt-1.13.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:220a869982ea9023e163ba915077816ca439489de6d2c09089b219f4e11b6785"}, - {file = "wrapt-1.13.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:0877fe981fd76b183711d767500e6b3111378ed2043c145e21816ee589d91096"}, - {file = "wrapt-1.13.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:43e69ffe47e3609a6aec0fe723001c60c65305784d964f5007d5b4fb1bc6bf33"}, - {file = "wrapt-1.13.3-cp310-cp310-win32.whl", hash = "sha256:78dea98c81915bbf510eb6a3c9c24915e4660302937b9ae05a0947164248020f"}, - {file = "wrapt-1.13.3-cp310-cp310-win_amd64.whl", hash = "sha256:ea3e746e29d4000cd98d572f3ee2a6050a4f784bb536f4ac1f035987fc1ed83e"}, - {file = "wrapt-1.13.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:8c73c1a2ec7c98d7eaded149f6d225a692caa1bd7b2401a14125446e9e90410d"}, - {file = "wrapt-1.13.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:086218a72ec7d986a3eddb7707c8c4526d677c7b35e355875a0fe2918b059179"}, - {file = "wrapt-1.13.3-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:e92d0d4fa68ea0c02d39f1e2f9cb5bc4b4a71e8c442207433d8db47ee79d7aa3"}, - {file = "wrapt-1.13.3-cp35-cp35m-manylinux2010_x86_64.whl", hash = 
"sha256:d4a5f6146cfa5c7ba0134249665acd322a70d1ea61732723c7d3e8cc0fa80755"}, - {file = "wrapt-1.13.3-cp35-cp35m-win32.whl", hash = "sha256:8aab36778fa9bba1a8f06a4919556f9f8c7b33102bd71b3ab307bb3fecb21851"}, - {file = "wrapt-1.13.3-cp35-cp35m-win_amd64.whl", hash = "sha256:944b180f61f5e36c0634d3202ba8509b986b5fbaf57db3e94df11abee244ba13"}, - {file = "wrapt-1.13.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:2ebdde19cd3c8cdf8df3fc165bc7827334bc4e353465048b36f7deeae8ee0918"}, - {file = "wrapt-1.13.3-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:610f5f83dd1e0ad40254c306f4764fcdc846641f120c3cf424ff57a19d5f7ade"}, - {file = "wrapt-1.13.3-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5601f44a0f38fed36cc07db004f0eedeaadbdcec90e4e90509480e7e6060a5bc"}, - {file = "wrapt-1.13.3-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:e6906d6f48437dfd80464f7d7af1740eadc572b9f7a4301e7dd3d65db285cacf"}, - {file = "wrapt-1.13.3-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:766b32c762e07e26f50d8a3468e3b4228b3736c805018e4b0ec8cc01ecd88125"}, - {file = "wrapt-1.13.3-cp36-cp36m-win32.whl", hash = "sha256:5f223101f21cfd41deec8ce3889dc59f88a59b409db028c469c9b20cfeefbe36"}, - {file = "wrapt-1.13.3-cp36-cp36m-win_amd64.whl", hash = "sha256:f122ccd12fdc69628786d0c947bdd9cb2733be8f800d88b5a37c57f1f1d73c10"}, - {file = "wrapt-1.13.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:46f7f3af321a573fc0c3586612db4decb7eb37172af1bc6173d81f5b66c2e068"}, - {file = "wrapt-1.13.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:778fd096ee96890c10ce96187c76b3e99b2da44e08c9e24d5652f356873f6709"}, - {file = "wrapt-1.13.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0cb23d36ed03bf46b894cfec777eec754146d68429c30431c99ef28482b5c1df"}, - {file = "wrapt-1.13.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:96b81ae75591a795d8c90edc0bfaab44d3d41ffc1aae4d994c5aa21d9b8e19a2"}, - {file = "wrapt-1.13.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:7dd215e4e8514004c8d810a73e342c536547038fb130205ec4bba9f5de35d45b"}, - {file = "wrapt-1.13.3-cp37-cp37m-win32.whl", hash = "sha256:47f0a183743e7f71f29e4e21574ad3fa95676136f45b91afcf83f6a050914829"}, - {file = "wrapt-1.13.3-cp37-cp37m-win_amd64.whl", hash = "sha256:fd76c47f20984b43d93de9a82011bb6e5f8325df6c9ed4d8310029a55fa361ea"}, - {file = "wrapt-1.13.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b73d4b78807bd299b38e4598b8e7bd34ed55d480160d2e7fdaabd9931afa65f9"}, - {file = "wrapt-1.13.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:ec9465dd69d5657b5d2fa6133b3e1e989ae27d29471a672416fd729b429eb554"}, - {file = "wrapt-1.13.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:dd91006848eb55af2159375134d724032a2d1d13bcc6f81cd8d3ed9f2b8e846c"}, - {file = "wrapt-1.13.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ae9de71eb60940e58207f8e71fe113c639da42adb02fb2bcbcaccc1ccecd092b"}, - {file = "wrapt-1.13.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:51799ca950cfee9396a87f4a1240622ac38973b6df5ef7a41e7f0b98797099ce"}, - {file = "wrapt-1.13.3-cp38-cp38-win32.whl", hash = "sha256:4b9c458732450ec42578b5642ac53e312092acf8c0bfce140ada5ca1ac556f79"}, - {file = "wrapt-1.13.3-cp38-cp38-win_amd64.whl", hash = 
"sha256:7dde79d007cd6dfa65afe404766057c2409316135cb892be4b1c768e3f3a11cb"}, - {file = "wrapt-1.13.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:981da26722bebb9247a0601e2922cedf8bb7a600e89c852d063313102de6f2cb"}, - {file = "wrapt-1.13.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:705e2af1f7be4707e49ced9153f8d72131090e52be9278b5dbb1498c749a1e32"}, - {file = "wrapt-1.13.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:25b1b1d5df495d82be1c9d2fad408f7ce5ca8a38085e2da41bb63c914baadff7"}, - {file = "wrapt-1.13.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:77416e6b17926d953b5c666a3cb718d5945df63ecf922af0ee576206d7033b5e"}, - {file = "wrapt-1.13.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:865c0b50003616f05858b22174c40ffc27a38e67359fa1495605f96125f76640"}, - {file = "wrapt-1.13.3-cp39-cp39-win32.whl", hash = "sha256:0a017a667d1f7411816e4bf214646d0ad5b1da2c1ea13dec6c162736ff25a374"}, - {file = "wrapt-1.13.3-cp39-cp39-win_amd64.whl", hash = "sha256:81bd7c90d28a4b2e1df135bfbd7c23aee3050078ca6441bead44c42483f9ebfb"}, - {file = "wrapt-1.13.3.tar.gz", hash = "sha256:1fea9cd438686e6682271d36f3481a9f3636195578bab9ca3382e2f5f01fc185"}, + {file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"}, + {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"}, + {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"}, + {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"}, + {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"}, + {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"}, + {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"}, + {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"}, + {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"}, + {file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"}, + {file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"}, + {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"}, + {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"}, + {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"}, + {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = 
"sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"}, + {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"}, + {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"}, + {file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"}, + {file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"}, + {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"}, + {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"}, + {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"}, + {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"}, + {file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"}, + {file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"}, + {file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"}, + {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"}, + {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"}, + {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"}, + {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"}, + {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"}, + {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"}, + {file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"}, + {file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"}, + {file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"}, + {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"}, + {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"}, + {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"}, + {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"}, + {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"}, + {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"}, + {file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"}, + {file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"}, + {file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"}, + {file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"}, + {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"}, + {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"}, + {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"}, + {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"}, + {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"}, + {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"}, + {file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"}, + {file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"}, + {file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"}, + {file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"}, + {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"}, + {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"}, + {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"}, + {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"}, + {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"}, + {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = 
"sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"}, + {file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"}, + {file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"}, + {file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"}, ] xmltodict = [ - {file = "xmltodict-0.12.0-py2.py3-none-any.whl", hash = "sha256:8bbcb45cc982f48b2ca8fe7e7827c5d792f217ecf1792626f808bf41c3b86051"}, - {file = "xmltodict-0.12.0.tar.gz", hash = "sha256:50d8c638ed7ecb88d90561beedbf720c9b4e851a9fa6c47ebd64e99d166d8a21"}, + {file = "xmltodict-0.13.0-py2.py3-none-any.whl", hash = "sha256:aa89e8fd76320154a40d19a0df04a4695fb9dc5ba977cbb68ab3e4eb225e7852"}, + {file = "xmltodict-0.13.0.tar.gz", hash = "sha256:341595a488e3e01a85a9d8911d8912fd922ede5fecc4dce437eb4b6c8d037e56"}, ] diff --git a/.devcontainer/pyproject.toml b/.devcontainer/pyproject.toml index f0f3c73e46..2c1a2699ab 100644 --- a/.devcontainer/pyproject.toml +++ b/.devcontainer/pyproject.toml @@ -5,7 +5,7 @@ description = "Epiphany Dev Container" authors = ["Epiphany Platform"] [tool.poetry.dependencies] -python = "3.10" +python = "3.10.4" pyyaml = "*" jinja2 = "*" boto3 = "*" @@ -14,6 +14,7 @@ python-json-logger = "*" "ruamel.yaml" = "*" ansible = "5.2.0" azure-cli = "2.32.0" +click = "*" [build-system] requires = ["poetry-core>=1.0.0"] diff --git a/.devcontainer/python.env b/.devcontainer/python.env index ac34882551..729b2ec774 100644 --- a/.devcontainer/python.env +++ b/.devcontainer/python.env @@ -2,4 +2,4 @@ # PYTHONPATH can contain multiple locations separated by os.pathsep: semicolon (;) on Windows and colon (:) on Linux/macOS. # Invalid paths are ignored. To verify use "python.analysis.logLevel": "Trace". 
-PYTHONPATH="ansible/playbooks/roles/repository/files/download-requirements:${PYTHONPATH}" +PYTHONPATH=ansible/playbooks/roles/repository/library:ansible/playbooks/roles/repository/files/download-requirements diff --git a/.devcontainer/requirements.txt b/.devcontainer/requirements.txt index e38c7170f8..aa45039ad2 100644 --- a/.devcontainer/requirements.txt +++ b/.devcontainer/requirements.txt @@ -1,5 +1,5 @@ adal==1.2.7; python_full_version >= "3.6.0" -ansible-core==2.12.1; python_version >= "3.8" +ansible-core==2.12.6; python_version >= "3.8" ansible==5.2.0; python_version >= "3.8" antlr4-python3-runtime==4.7.2; python_full_version >= "3.6.0" applicationinsights==0.11.10; python_full_version >= "3.6.0" @@ -10,12 +10,12 @@ azure-batch==11.0.0; python_full_version >= "3.6.0" azure-cli-core==2.32.0; python_full_version >= "3.6.0" azure-cli-telemetry==1.0.6; python_full_version >= "3.6.0" azure-cli==2.32.0; python_full_version >= "3.6.0" -azure-common==1.1.27; python_version >= "3.6" and python_full_version >= "3.6.0" -azure-core==1.21.1; python_version >= "3.6" and python_full_version >= "3.6.0" +azure-common==1.1.28; python_version >= "3.6" and python_full_version >= "3.6.0" +azure-core==1.24.0; python_version >= "3.6" and python_full_version >= "3.6.0" azure-cosmos==3.2.0; python_full_version >= "3.6.0" azure-datalake-store==0.0.52; python_full_version >= "3.6.0" azure-graphrbac==0.60.0; python_full_version >= "3.6.0" -azure-identity==1.7.1; python_full_version >= "3.6.0" +azure-identity==1.10.0; python_version >= "3.6" and python_full_version >= "3.6.0" azure-keyvault-administration==4.0.0b3; python_full_version >= "3.6.0" azure-keyvault-keys==4.5.0b4; python_full_version >= "3.6.0" azure-keyvault==1.1.0; python_full_version >= "3.6.0" @@ -37,7 +37,7 @@ azure-mgmt-containerinstance==9.1.0; python_full_version >= "3.6.0" azure-mgmt-containerregistry==8.2.0; python_full_version >= "3.6.0" azure-mgmt-containerservice==16.1.0; python_full_version >= "3.6.0" azure-mgmt-core==1.3.0; python_version >= "3.6" and python_full_version >= "3.6.0" -azure-mgmt-cosmosdb==7.0.0b2; python_full_version >= "3.6.0" +azure-mgmt-cosmosdb==7.0.0b6; python_version >= "3.6" and python_full_version >= "3.6.0" azure-mgmt-databoxedge==1.0.0; python_full_version >= "3.6.0" azure-mgmt-datalake-analytics==0.2.1; python_full_version >= "3.6.0" azure-mgmt-datalake-nspkg==3.0.1; python_full_version >= "3.6.0" @@ -85,9 +85,9 @@ azure-mgmt-servicefabricmanagedclusters==1.0.0; python_full_version >= "3.6.0" azure-mgmt-servicelinker==1.0.0b1; python_full_version >= "3.6.0" azure-mgmt-signalr==1.0.0; python_full_version >= "3.6.0" azure-mgmt-sql==3.0.1; python_full_version >= "3.6.0" -azure-mgmt-sqlvirtualmachine==1.0.0b1; python_full_version >= "3.6.0" +azure-mgmt-sqlvirtualmachine==1.0.0b2; python_version >= "3.6" and python_full_version >= "3.6.0" azure-mgmt-storage==19.0.0; python_full_version >= "3.6.0" -azure-mgmt-synapse==2.1.0b4; python_version >= "3.6" and python_full_version >= "3.6.0" +azure-mgmt-synapse==2.1.0b5; python_version >= "3.6" and python_full_version >= "3.6.0" azure-mgmt-trafficmanager==0.51.0; python_full_version >= "3.6.0" azure-mgmt-web==4.0.0; python_full_version >= "3.6.0" azure-multiapi-storage==0.7.0; python_full_version >= "3.6.0" @@ -97,66 +97,68 @@ azure-synapse-accesscontrol==0.5.0; python_full_version >= "3.6.0" azure-synapse-artifacts==0.10.0; python_full_version >= "3.6.0" azure-synapse-managedprivateendpoints==0.3.0; python_full_version >= "3.6.0" azure-synapse-spark==0.2.0; 
python_full_version >= "3.6.0" -bcrypt==3.2.0; python_version >= "3.6" and python_full_version >= "3.6.0" -boto3==1.20.45; python_version >= "3.6" -botocore==1.23.45; python_version >= "3.6" -certifi==2021.10.8; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") and python_version >= "3.6" +bcrypt==3.2.2; python_version >= "3.6" and python_full_version >= "3.6.0" +boto3==1.23.10; python_version >= "3.6" +botocore==1.26.10; python_version >= "3.6" +certifi==2022.5.18.1; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") and python_version >= "3.6" cffi==1.15.0; python_version >= "3.8" and python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") chardet==3.0.4; python_full_version >= "3.6.0" -charset-normalizer==2.0.10; python_version >= "3.6" and python_full_version >= "3.6.0" +charset-normalizer==2.0.12; python_version >= "3.6" and python_full_version >= "3.6.0" +click==8.1.3; python_version >= "3.7" colorama==0.4.4; python_full_version >= "3.6.0" -cryptography==36.0.1 +cryptography==37.0.2 deprecated==1.2.13; python_version >= "3.6" and python_full_version >= "3.6.0" -distro==1.6.0; python_full_version >= "3.6.0" -fabric==2.6.0; python_full_version >= "3.6.0" +distro==1.7.0; python_version >= "3.6" and python_full_version >= "3.6.0" +fabric==2.7.0; python_full_version >= "3.6.0" humanfriendly==10.0; python_full_version >= "3.6.0" idna==3.3; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") and python_version >= "3.6" -invoke==1.6.0; python_full_version >= "3.6.0" +invoke==1.7.1; python_full_version >= "3.6.0" isodate==0.6.1; python_version >= "3.6" and python_full_version >= "3.6.0" javaproperties==0.5.2; python_full_version >= "3.6.0" and python_version < "4" -jinja2==3.0.3; python_version >= "3.6" -jmespath==0.10.0; python_full_version >= "3.6.0" and python_version >= "3.6" +jinja2==3.1.2; python_version >= "3.7" +jmespath==1.0.0; python_version >= "3.7" and python_full_version >= "3.6.0" jsondiff==1.3.1; python_full_version >= "3.6.0" -jsonschema==4.4.0; python_version >= "3.7" +jsonschema==4.5.1; python_version >= "3.7" knack==0.9.0; python_full_version >= "3.6.0" -markupsafe==2.0.1; python_version >= "3.8" -msal-extensions==0.3.1; python_full_version >= "3.6.0" -msal==1.16.0; python_full_version >= "3.6.0" +markupsafe==2.1.1; python_version >= "3.8" +msal-extensions==0.3.1; python_version >= "3.6" and python_full_version >= "3.6.0" +msal==1.17.0; python_version >= "3.6" and python_full_version >= "3.6.0" msrest==0.6.21; python_version >= "3.6" and python_full_version >= "3.6.0" msrestazure==0.6.4; python_full_version >= "3.6.0" -oauthlib==3.1.1; python_version >= "3.6" and python_full_version >= "3.6.0" +oauthlib==3.2.0; python_version >= "3.6" and python_full_version >= "3.6.0" packaging==21.3; python_version >= "3.8" and python_full_version >= "3.6.0" -paramiko==2.9.2; python_full_version >= "3.6.0" -pathlib2==2.3.6; python_full_version >= "3.6.0" +paramiko==2.11.0; 
python_full_version >= "3.6.0" +pathlib2==2.3.7.post1; python_full_version >= "3.6.0" pkginfo==1.8.2; python_full_version >= "3.6.0" portalocker==1.7.1 -psutil==5.9.0; python_full_version >= "3.6.0" +psutil==5.9.1; python_full_version >= "3.6.0" pycparser==2.21; python_version >= "3.8" and python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") pygithub==1.55; python_version >= "3.6" and python_full_version >= "3.6.0" -pygments==2.11.2; python_version >= "3.5" and python_full_version >= "3.6.0" -pyjwt==2.3.0; python_version >= "3.6" and python_full_version >= "3.6.0" +pygments==2.12.0; python_version >= "3.6" and python_full_version >= "3.6.0" +pyjwt==2.4.0; python_version >= "3.6" and python_full_version >= "3.6.0" pynacl==1.4.0; python_version >= "3.6" and python_full_version >= "3.6.0" -pyopenssl==21.0.0; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") -pyparsing==3.0.7; python_version >= "3.8" and python_full_version >= "3.6.0" +pyopenssl==22.0.0; python_version >= "3.6" and python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") +pyparsing==3.0.9; python_version >= "3.8" and python_full_version >= "3.6.8" pyreadline3==3.4.1; sys_platform == "win32" and python_version >= "3.8" and python_full_version >= "3.6.0" pyrsistent==0.18.1; python_version >= "3.7" pysocks==1.7.1; python_full_version >= "3.6.0" and python_version >= "3.6" python-dateutil==2.8.2; python_full_version >= "3.6.0" and python_version >= "3.6" python-json-logger==2.0.2; python_version >= "3.5" -pywin32==303; python_version >= "3.5" and platform_system == "Windows" and python_full_version >= "3.6.0" +pywin32==304; python_version >= "3.6" and platform_system == "Windows" and python_full_version >= "3.6.0" pyyaml==6.0; python_version >= "3.6" -requests-oauthlib==1.3.0; python_version >= "3.6" and python_full_version >= "3.6.0" +requests-oauthlib==1.3.1; python_version >= "3.6" and python_full_version >= "3.6.0" requests==2.27.1; python_version >= "3.6" and python_full_version >= "3.6.0" resolvelib==0.5.5; python_version >= "3.8" ruamel.yaml.clib==0.2.6; platform_python_implementation == "CPython" and python_version < "3.11" and python_version >= "3.5" -ruamel.yaml==0.17.20; python_version >= "3" -s3transfer==0.5.0; python_version >= "3.6" +ruamel.yaml==0.17.21; python_version >= "3" +s3transfer==0.5.2; python_version >= "3.6" scp==0.13.6; python_full_version >= "3.6.0" semver==2.13.0; python_full_version >= "3.6.0" -six==1.16.0; python_full_version >= "3.6.0" and python_version < "4" and python_version >= "3.6" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") +six==1.16.0; python_version >= "3.6" and python_full_version >= "3.6.0" and python_version < "4" sshtunnel==0.1.5; python_full_version >= "3.6.0" tabulate==0.8.9; python_full_version >= "3.6.0" -urllib3==1.26.8; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") 
and python_version >= "3.6" +typing-extensions==4.2.0; python_version >= "3.7" and python_full_version >= "3.6.0" +urllib3==1.26.9; python_full_version >= "3.6.0" and python_version < "4" and (python_version >= "3.6" and python_full_version < "3.0.0" or python_full_version >= "3.5.0" and python_version < "4" and python_version >= "3.6") and python_version >= "3.6" websocket-client==0.56.0; python_full_version >= "3.6.0" -wrapt==1.13.3; python_version >= "3.6" and python_full_version >= "3.6.0" -xmltodict==0.12.0; python_full_version >= "3.6.0" +wrapt==1.14.1; python_version >= "3.6" and python_full_version >= "3.6.0" +xmltodict==0.13.0; python_version >= "3.4" and python_full_version >= "3.6.0" diff --git a/.pylintrc b/.pylintrc index 42dc9f84c5..984854faf0 100644 --- a/.pylintrc +++ b/.pylintrc @@ -6,7 +6,8 @@ # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). -#init-hook= +# The following line fixes 'import-error' issues in VS Code, like "Unable to import 'cli.src.ansible.AnsibleCommand'" +init-hook='from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))' # Files or directories to be skipped. They should be base names, not # paths. @@ -55,7 +56,6 @@ confidence= # multiple time. See also the "--disable" option for examples. enable= use-symbolic-message-instead, - useless-supression, fixme # Disable the message, report, category or checker with the given id(s). You @@ -86,11 +86,6 @@ disable= # mypackage.mymodule.MyReporterClass. output-format=junit -# Put messages in a separate file for each module / package specified on the -# command line instead of printing them on stdout. Reports (if any) will be -# written in a file name "pylint_global.[txt|html]". -files-output=yes - # Tells whether to display a full report or only the messages reports=no @@ -167,9 +162,6 @@ ignore-long-lines=^\s*(# )??$ # else. 
single-line-if-stmt=no -# List of optional constructs for which whitespace checking is disabled -no-space-check=trailing-comma,dict-separator - # Maximum number of lines in a module max-module-lines=2000 @@ -202,63 +194,33 @@ include-naming-hint=no # Regular expression matching correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ -# Naming hint for function names -function-name-hint=[a-z_][a-z0-9_]{2,30}$ - # Regular expression matching correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ -# Naming hint for variable names -variable-name-hint=[a-z_][a-z0-9_]{2,30}$ - # Regular expression matching correct constant names const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$ -# Naming hint for constant names -const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$ - # Regular expression matching correct attribute names attr-rgx=[a-z_][a-z0-9_]{2,}$ -# Naming hint for attribute names -attr-name-hint=[a-z_][a-z0-9_]{2,}$ - # Regular expression matching correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ -# Naming hint for argument names -argument-name-hint=[a-z_][a-z0-9_]{2,30}$ - # Regular expression matching correct class attribute names class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$ -# Naming hint for class attribute names -class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$ - # Regular expression matching correct inline iteration names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ -# Naming hint for inline iteration names -inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$ - # Regular expression matching correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ -# Naming hint for class names -class-name-hint=[A-Z_][a-zA-Z0-9]+$ - # Regular expression matching correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ -# Naming hint for module names -module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ - # Regular expression matching correct method names method-rgx=[a-z_][a-z0-9_]{2,}$ -# Naming hint for method names -method-name-hint=[a-z_][a-z0-9_]{2,}$ - # Regular expression which should only match function or class names that do # not require a docstring. 
no-docstring-rgx=__.*__ diff --git a/.rubocop.yml b/.rubocop.yml index 9f4022faea..34a4395234 100644 --- a/.rubocop.yml +++ b/.rubocop.yml @@ -11,3 +11,17 @@ AllCops: Include: - tests/spec/**/*.rb + +Layout/ExtraSpacing: + AllowBeforeTrailingComments: true + +Metrics/BlockLength: + Exclude: + - tests/spec/Rakefile + +Naming/FileName: + Exclude: + - tests/spec/Rakefile + +Style/FrozenStringLiteralComment: + Enabled: false diff --git a/.vscode/launch.json b/.vscode/launch.json index 604e3df3f3..a10d0f4d33 100644 --- a/.vscode/launch.json +++ b/.vscode/launch.json @@ -20,11 +20,11 @@ // "args": ["delete", "-b", "${workspaceFolder}/clusters/build/"] // "args": ["init", "-p", "", "-n", ""] // "args": ["init", "-p", "", "-n", "", "--full"] - // "args": ["prepare", "--os", ""] - // "args": ["test", "-b", "${workspaceFolder}/clusters/build/"] - // "args": ["test", "-b", "${workspaceFolder}/clusters/build/", "-g", ""] + // "args": ["prepare", "--os", "", "--arch", ""] + // "args": ["--auto-approve", "test", "-b", "${workspaceFolder}/clusters/build/", "-k", "/etc/kubernetes/admin.conf"] + // "args": ["--auto-approve", "test", "-b", "${workspaceFolder}/clusters/build/", "-i", "", "-k", "/etc/kubernetes/admin.conf"] // "args": ["upgrade", "-b", "${workspaceFolder}/clusters/build/"] - // "args": ["upgrade", "-b", "${workspaceFolder}/clusters/build/","--upgrade-components","kafka"] + // "args": ["upgrade", "-b", "${workspaceFolder}/clusters/build/", "--upgrade-components", "kafka"] } ] } diff --git a/.vscode/settings.json b/.vscode/settings.json index bf08edadd5..e87a6a2bcc 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -20,5 +20,6 @@ "ruby.lint": { "rubocop": true }, + "solargraph.diagnostics": false, "solargraph.formatting": true, } diff --git a/.vscode/tasks.json b/.vscode/tasks.json index 78c92161f4..13d0ad2938 100644 --- a/.vscode/tasks.json +++ b/.vscode/tasks.json @@ -30,6 +30,16 @@ "command": "pylint --rcfile .pylintrc ./cli ./tests --output-format text", "group": "test", }, + { + "label": "Pylint repository modules", + "command": "pylint", + "args": [ + "--rcfile", ".pylintrc", + "--output-format", "text", + "./ansible/playbooks/roles/repository/library/tests", + ], + "group": "test", + }, { "label": "Pylint download-requirements", "command": "pylint", diff --git a/Dockerfile b/Dockerfile index a99d54cac2..990f1983a4 100644 --- a/Dockerfile +++ b/Dockerfile @@ -4,6 +4,7 @@ ARG USERNAME=epiuser ARG USER_UID=1000 ARG USER_GID=$USER_UID +ARG AWS_CLI_VERSION=2.0.30 ARG HELM_VERSION=3.3.1 ARG KUBECTL_VERSION=1.22.4 ARG TERRAFORM_VERSION=1.1.3 @@ -15,7 +16,7 @@ COPY . 
/epicli
RUN : INSTALL APT REQUIREMENTS \
&& apt-get update \
&& apt-get install --no-install-recommends -y \
- autossh curl gcc jq libcap2-bin libc6-dev libffi-dev make musl-dev openssh-client procps psmisc rsync ruby-full sudo tar unzip vim \
+ autossh curl gcc git jq libcap2-bin libc6-dev libffi-dev make musl-dev openssh-client procps psmisc rsync ruby-full sudo tar unzip vim \
\
&& : INSTALL HELM BINARY \
&& curl -fsSLO https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz \
@@ -32,6 +33,13 @@ RUN : INSTALL APT REQUIREMENTS \
&& unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/local/bin \
&& rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
&& terraform version \
+\
+ && : INSTALL AWS CLI BINARY \
+ && curl -fsSLO https://awscli.amazonaws.com/awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip \
+ && unzip awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip \
+ && ./aws/install -i /usr/local/aws-cli -b /usr/local/bin \
+ && rm -rf awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip ./aws \
+ && aws --version \
\
&& : INSTALL GEM REQUIREMENTS \
&& gem install \
diff --git a/README.md b/README.md
index b996c4b75d..fc66ee31b1 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,12 @@
# Epiphany Platform
[![GitHub release](https://img.shields.io/github/v/release/epiphany-platform/epiphany.svg)](https://github.com/epiphany-platform/epiphany/releases)
[![Github license](https://img.shields.io/github/license/epiphany-platform/epiphany)](https://github.com/epiphany-platform/epiphany/releases)
+[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
+[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
+[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
+[![Bugs](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=bugs)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
+[![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=vulnerabilities)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
+[![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=epiphany-platform_epiphany&metric=code_smells)](https://sonarcloud.io/summary/new_code?id=epiphany-platform_epiphany)
## Overview
@@ -8,9 +14,9 @@ Epiphany at its core is a full automation of Kubernetes and Docker plus addition
- Kafka or RabbitMQ for high speed messaging/events
- Prometheus and Alertmanager for monitoring with Grafana for visualization
-- Elasticsearch and Kibana for centralized logging (OpenDistro)
+- OpenSearch for centralized logging
- HAProxy for loadbalancing
-- Postgres and Elasticsearch for data storage
+- Postgres and OpenSearch for data storage
- KeyCloak for authentication
- Helm as package manager for Kubernetes
diff --git a/VERSION b/VERSION
index 227cea2156..889dbd3fd2 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-2.0.0
+2.0.1dev
diff --git a/ansible/playbooks/backup_logging.yml b/ansible/playbooks/backup_logging.yml
index c3a31e1df3..cfd77b545b 100644
--- a/ansible/playbooks/backup_logging.yml
+++
b/ansible/playbooks/backup_logging.yml @@ -16,27 +16,19 @@ - name: Run elasticsearch snapshot tasks import_role: name: backup - tasks_from: logging_elasticsearch_snapshot + tasks_from: logging_opensearch_snapshot - name: Run elasticsearch archive tasks import_role: name: backup - tasks_from: logging_elasticsearch_etc - -- hosts: kibana[0] - gather_facts: true - become: true - become_method: sudo - serial: 1 - tasks: - - when: specification.components.logging.enabled | default(false) - block: - - name: Include kibana vars + tasks_from: logging_opensearch_conf + # OpenSearch Dashboards + - name: Include opensearch_dashboards vars include_vars: - file: roles/kibana/vars/main.yml + file: roles/opensearch_dashboards/vars/main.yml name: component_vars - - name: Run kibana backup tasks + - name: Run opensearch_dashboards backup tasks import_role: name: backup - tasks_from: logging_kibana_etc + tasks_from: logging_opensearch_dashboards_conf vars: snapshot_name: "{{ hostvars[groups.logging.0].snapshot_name }}" diff --git a/ansible/playbooks/filebeat.yml b/ansible/playbooks/filebeat.yml index d2295b29c3..952fefa1aa 100644 --- a/ansible/playbooks/filebeat.yml +++ b/ansible/playbooks/filebeat.yml @@ -1,7 +1,7 @@ --- # Ansible playbook that installs and configures Filebeat -- hosts: opendistro_for_elasticsearch:logging:kibana # to gather facts +- hosts: opensearch:logging:opensearch_dashboards # to gather facts tasks: [] - hosts: filebeat diff --git a/ansible/playbooks/filter_plugins/container.py b/ansible/playbooks/filter_plugins/container.py new file mode 100644 index 0000000000..8aaae8d8a4 --- /dev/null +++ b/ansible/playbooks/filter_plugins/container.py @@ -0,0 +1,27 @@ +from typing import Any, Dict, List + + +class FilterModule: + """ Filters for Python's container types """ + + def filters(self): + return { + 'dict_to_list': self.dict_to_list + } + + def dict_to_list(self, data: Dict, only_values: bool = False, only_keys: bool = False) -> List: + """ + Convert dict to list without using Ansible's loop mechanism with dict2items filter. 
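+
+ A quick illustration (sample data invented for this docstring):
+ dict_to_list({'a': 1, 'b': 2}) -> [('a', 1), ('b', 2)]
+ dict_to_list({'a': 1, 'b': 2}, only_keys=True) -> ['a', 'b']
+ dict_to_list({'a': 1, 'b': 2}, only_values=True) -> [1, 2]
+ In a playbook, e.g.: "{{ some_dict | dict_to_list(only_keys=True) }}"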
+
+ :param data: to be converted into a list
+ :param only_values: construct list with only dict's values
+ :param only_keys: construct list with only dict's keys
+ :return: data transformed into a list
+ """
+ if only_values:
+ return list(data.values())
+
+ if only_keys:
+ return list(data.keys())
+
+ return list(data.items())
diff --git a/ansible/playbooks/firewall.yml b/ansible/playbooks/firewall.yml
index 00a50ca306..6369435bde 100644
--- a/ansible/playbooks/firewall.yml
+++ b/ansible/playbooks/firewall.yml
@@ -4,13 +4,13 @@
# To make sure connection to epirepo is not blocked after firewalld is installed but before it is configured
# On Ubuntu firewalld service starts automatically while installing firewalld package
-- hosts: repository
+- hosts: firewall:&repository
become: true
become_method: sudo
roles:
- firewall
-- hosts: all:!repository
+- hosts: firewall:!repository
become: true
become_method: sudo
roles:
diff --git a/ansible/playbooks/group_vars/all.yml b/ansible/playbooks/group_vars/all.yml
index 18d39626d4..3a6d785559 100644
--- a/ansible/playbooks/group_vars/all.yml
+++ b/ansible/playbooks/group_vars/all.yml
@@ -11,7 +11,3 @@ kubeconfig:
# https://github.com/ansible/ansible/issues/57189
yum_lock_timeout: 300
-
-global_architecture_alias:
- x86_64: amd64
- aarch64: arm64
diff --git a/ansible/playbooks/kibana.yml b/ansible/playbooks/kibana.yml
deleted file mode 100644
index b47fa3425c..0000000000
--- a/ansible/playbooks/kibana.yml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-# Ansible playbook that makes sure the base items for all nodes are installed
-
-- hosts: all
- gather_facts: true
- tasks: []
-
-- hosts: kibana
- become: true
- become_method: sudo
- roles:
- - kibana
diff --git a/ansible/playbooks/kubernetes_master.yml b/ansible/playbooks/kubernetes_master.yml
index 43db8d129c..f949b3ef30 100644
--- a/ansible/playbooks/kubernetes_master.yml
+++ b/ansible/playbooks/kubernetes_master.yml
@@ -53,9 +53,7 @@
become: true
become_method: sudo
post_tasks:
- - name: Run copy-kubeconfig from kubernetes_master role
+ - name: Run generate-kubeconfig from kubernetes_master role
import_role:
name: kubernetes_master
- tasks_from: copy-kubeconfig
- environment:
- KUBECONFIG: "{{ kubeconfig.remote }}"
+ tasks_from: generate-kubeconfig
diff --git a/ansible/playbooks/opendistro_for_elasticsearch.yml b/ansible/playbooks/opendistro_for_elasticsearch.yml
deleted file mode 100644
index 9ec9a72ed6..0000000000
--- a/ansible/playbooks/opendistro_for_elasticsearch.yml
+++ /dev/null
@@ -1,10 +0,0 @@
----
-# Ansible playbook for installing Elasticsearch
-
-- hosts: opendistro_for_elasticsearch
- become: true
- become_method: sudo
- roles:
- - opendistro_for_elasticsearch
- vars:
- current_group_name: "opendistro_for_elasticsearch"
diff --git a/ansible/playbooks/opensearch.yml b/ansible/playbooks/opensearch.yml
new file mode 100644
index 0000000000..b4a6e188df
--- /dev/null
+++ b/ansible/playbooks/opensearch.yml
@@ -0,0 +1,10 @@
+---
+# Ansible playbook for installing OpenSearch
+
+- hosts: opensearch
+ become: true
+ become_method: sudo
+ roles:
+ - opensearch
+ vars:
+ current_group_name: "opensearch"
diff --git a/ansible/playbooks/opensearch_dashboards.yml b/ansible/playbooks/opensearch_dashboards.yml
new file mode 100644
index 0000000000..0d16452b38
--- /dev/null
+++ b/ansible/playbooks/opensearch_dashboards.yml
@@ -0,0 +1,11 @@
+---
+# Ansible playbook for installing OpenSearch Dashboards
+
+- hosts: repository # to gather facts
+ tasks: []
+
+- hosts: opensearch_dashboards
+ become: true
+
become_method: sudo
+ roles:
+ - opensearch_dashboards
diff --git a/ansible/playbooks/recovery_logging.yml b/ansible/playbooks/recovery_logging.yml
index 796d1c0bae..2a15a98ed2 100644
--- a/ansible/playbooks/recovery_logging.yml
+++ b/ansible/playbooks/recovery_logging.yml
@@ -13,22 +13,15 @@
name: component_vars
- import_role:
name: recovery
- tasks_from: logging_elasticsearch_etc
+ tasks_from: logging_opensearch_conf
- import_role:
name: recovery
- tasks_from: logging_elasticsearch_snapshot
+ tasks_from: logging_opensearch_snapshot
-- hosts: kibana[0]
- gather_facts: true
- become: true
- become_method: sudo
- serial: 1
- tasks:
- - when: specification.components.logging.enabled | default(false)
- block:
+ # OpenSearch Dashboards
- include_vars:
- file: roles/kibana/vars/main.yml
+ file: roles/opensearch_dashboards/vars/main.yml
name: component_vars
- import_role:
name: recovery
- tasks_from: logging_kibana_etc
+ tasks_from: logging_opensearch_dashboards_conf
diff --git a/ansible/playbooks/repository.yml b/ansible/playbooks/repository.yml
index b76cdda5fb..7a7a5e55a4 100644
--- a/ansible/playbooks/repository.yml
+++ b/ansible/playbooks/repository.yml
@@ -1,5 +1,5 @@
---
-# This playbook is empty by purpose, just to enable repository role in configuration/feature-mapping
+# This playbook is empty on purpose, just to enable repository role in configuration/features
- hosts: "!all"
tasks: []
diff --git a/ansible/playbooks/roles/backup/defaults/main.yml b/ansible/playbooks/roles/backup/defaults/main.yml
index 6f454115bb..10b2779a70 100644
--- a/ansible/playbooks/roles/backup/defaults/main.yml
+++ b/ansible/playbooks/roles/backup/defaults/main.yml
@@ -2,6 +2,6 @@
backup_dir: /epibackup
backup_destination_dir: "{{ backup_dir }}/mounted"
backup_destination_host: >-
- {{ groups.repository[0] if (custom_repository_url | default(false)) else (resolved_repository_hostname | default(groups.repository[0])) }}
-elasticsearch_snapshot_repository_name: epiphany
-elasticsearch_snapshot_repository_location: /var/lib/elasticsearch-snapshots
+ "{{ groups.repository[0] if (custom_repository_url | default(false)) else (resolved_repository_hostname | default(groups.repository[0])) }}"
+opensearch_snapshot_repository_name: epiphany
+opensearch_snapshot_repository_location: /var/lib/opensearch-snapshots
diff --git a/ansible/playbooks/roles/backup/tasks/logging_elasticsearch_snapshot.yml b/ansible/playbooks/roles/backup/tasks/logging_elasticsearch_snapshot.yml
deleted file mode 100644
index 6857739ce0..0000000000
--- a/ansible/playbooks/roles/backup/tasks/logging_elasticsearch_snapshot.yml
+++ /dev/null
@@ -1,90 +0,0 @@
----
-- name: Include default vars from opendistro_for_elasticsearch role
- include_vars:
- file: roles/opendistro_for_elasticsearch/defaults/main.yml
- name: odfe
-
-- name: Set helper facts
- set_fact:
- elasticsearch_endpoint: >-
- https://{{ ansible_default_ipv4.address }}:9200
- snapshot_name: >-
- {{ ansible_date_time.iso8601_basic_short | replace('T','-') }}
- vars:
- uri_template: &uri
- client_cert: "{{ odfe.certificates.dirs.certs }}/{{ odfe.certificates.files.admin.cert.filename }}"
- client_key: "{{ odfe.certificates.dirs.certs }}/{{ odfe.certificates.files.admin.key.filename }}"
- validate_certs: false
- body_format: json
-
-- name: Display snapshot name
- debug: var=snapshot_name
-
-- name: Check cluster health
- uri:
- <<: *uri
- url: "{{ elasticsearch_endpoint }}/_cluster/health"
- method: GET
- register: uri_response
- until:
uri_response is success - retries: 12 - delay: 5 - -- name: Ensure snapshot repository is defined - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}" - method: PUT - body: - type: fs - settings: - location: "{{ elasticsearch_snapshot_repository_location }}" - compress: true - -- name: Trigger snapshot creation - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}/{{ snapshot_name }}" - method: PUT - -- name: Wait (up to 12h) for snapshot completion - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}/{{ snapshot_name }}" - method: GET - register: uri_response - until: (uri_response.json.snapshots | selectattr('snapshot', 'equalto', snapshot_name) | first).state == "SUCCESS" - retries: "{{ (12 * 3600 // 10) | int }}" # 12h - delay: 10 - -- name: Find all snapshots - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}/_all" - method: GET - register: uri_response - -- name: Delete old snapshots - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}/{{ item }}" - method: DELETE - loop: >- - {{ uri_response.json.snapshots | map(attribute='snapshot') | reject('equalto', snapshot_name) | list }} - -- name: Create snapshot archive - import_tasks: common/create_snapshot_archive.yml - vars: - snapshot_prefix: "elasticsearch_snapshot" - dirs_to_archive: - - "{{ elasticsearch_snapshot_repository_location }}/" - -- name: Create snapshot checksum - import_tasks: common/create_snapshot_checksum.yml - -- name: Transfer artifacts via rsync - import_tasks: common/download_via_rsync.yml - vars: - artifacts: - - "{{ snapshot_path }}" - - "{{ snapshot_path }}.sha1" diff --git a/ansible/playbooks/roles/backup/tasks/logging_elasticsearch_etc.yml b/ansible/playbooks/roles/backup/tasks/logging_opensearch_conf.yml similarity index 64% rename from ansible/playbooks/roles/backup/tasks/logging_elasticsearch_etc.yml rename to ansible/playbooks/roles/backup/tasks/logging_opensearch_conf.yml index 1fa5c38750..51803a018b 100644 --- a/ansible/playbooks/roles/backup/tasks/logging_elasticsearch_etc.yml +++ b/ansible/playbooks/roles/backup/tasks/logging_opensearch_conf.yml @@ -1,4 +1,14 @@ --- +- name: Include default vars from opensearch role + include_vars: + file: roles/opensearch/defaults/main.yml + name: opensearch_defaults + +- name: Include vars from opensearch role + include_vars: + file: roles/opensearch/vars/main.yml + name: opensearch_vars + - name: Assert that the snapshot_name fact is defined and valid assert: that: @@ -13,9 +23,9 @@ - name: Create snapshot archive import_tasks: common/create_snapshot_archive.yml vars: - snapshot_prefix: "elasticsearch_etc" + snapshot_prefix: "opensearch_conf" dirs_to_archive: - - /etc/elasticsearch/ + - "{{ opensearch_vars.specification.paths.opensearch_conf_dir }}" - name: Create snapshot checksum import_tasks: common/create_snapshot_checksum.yml diff --git a/ansible/playbooks/roles/backup/tasks/logging_kibana_etc.yml b/ansible/playbooks/roles/backup/tasks/logging_opensearch_dashboards_conf.yml similarity index 69% rename from ansible/playbooks/roles/backup/tasks/logging_kibana_etc.yml rename to ansible/playbooks/roles/backup/tasks/logging_opensearch_dashboards_conf.yml index acc84d08b3..98a660b802 100644 --- a/ansible/playbooks/roles/backup/tasks/logging_kibana_etc.yml +++ 
b/ansible/playbooks/roles/backup/tasks/logging_opensearch_dashboards_conf.yml @@ -10,12 +10,17 @@ - name: Display snapshot name debug: var=snapshot_name +- name: Include vars from opensearch_dashboards role + include_vars: + file: roles/opensearch_dashboards/vars/main.yml + name: opensearch_dashboards_vars + - name: Create snapshot archive import_tasks: common/create_snapshot_archive.yml vars: - snapshot_prefix: "kibana_etc" + snapshot_prefix: "opensearch_dashboards_conf_dir" dirs_to_archive: - - /etc/kibana/ + - "{{ opensearch_dashboards_vars.specification.paths.opensearch_dashboards_conf_dir }}" - name: Create snapshot checksum import_tasks: common/create_snapshot_checksum.yml diff --git a/ansible/playbooks/roles/backup/tasks/logging_opensearch_snapshot.yml b/ansible/playbooks/roles/backup/tasks/logging_opensearch_snapshot.yml new file mode 100644 index 0000000000..d7425bde74 --- /dev/null +++ b/ansible/playbooks/roles/backup/tasks/logging_opensearch_snapshot.yml @@ -0,0 +1,96 @@ +--- +- name: Include default vars from opensearch role + include_vars: + file: roles/opensearch/defaults/main.yml + name: opensearch_defaults + +- name: Set helper facts + set_fact: + opensearch_endpoint: >- + https://{{ ansible_default_ipv4.address }}:9200 + snapshot_name: >- + {{ ansible_date_time.iso8601_basic_short | replace('T','-') }} + vars: + uri_template: &uri + client_cert: "{{ opensearch_defaults.certificates.dirs.certs }}/{{ opensearch_defaults.certificates.files.admin.cert.filename }}" + client_key: "{{ opensearch_defaults.certificates.dirs.certs }}/{{ opensearch_defaults.certificates.files.admin.key.filename }}" + validate_certs: false + body_format: json + +- name: Check cluster health + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_cluster/health" + method: GET + return_content: yes + register: cluster_status + until: cluster_status.json.status + retries: 60 + delay: 1 + +- name: Show warning when backup is not supported + when: not cluster_status.json.number_of_nodes == 1 + debug: + msg: "[WARNING] No snapshot backup created as only single-node cluster backup is supported." 
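For orientation, the "Snapshot backup" block below boils down to three calls against OpenSearch's snapshot REST API. A minimal standalone sketch using Python's requests library; the endpoint, certificate paths, repository and snapshot names are illustrative assumptions, not values taken from this patch:

import time

import requests  # third-party; pip install requests

ENDPOINT = "https://localhost:9200"                      # assumed single-node address
REPO, SNAPSHOT = "epiphany", "20220809-053156"           # assumed repository/snapshot names
CERT = ("/path/to/admin.pem", "/path/to/admin-key.pem")  # assumed admin client certificate

# 1. Register (or update) the filesystem snapshot repository.
requests.put(f"{ENDPOINT}/_snapshot/{REPO}", cert=CERT, verify=False,
             json={"type": "fs",
                   "settings": {"location": "/var/lib/opensearch-snapshots",
                                "compress": True}})

# 2. Trigger creation of the snapshot.
requests.put(f"{ENDPOINT}/_snapshot/{REPO}/{SNAPSHOT}", cert=CERT, verify=False)

# 3. Poll until the snapshot reports SUCCESS (the playbook allows up to 12 hours).
while True:
    snapshots = requests.get(f"{ENDPOINT}/_snapshot/{REPO}/{SNAPSHOT}",
                             cert=CERT, verify=False).json()["snapshots"]
    if snapshots[0]["state"] == "SUCCESS":
        break
    time.sleep(10)

The uri tasks below express the same three calls declaratively, using until/retries in place of the polling loop.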
+ +- name: Snapshot backup + when: cluster_status.json.number_of_nodes == 1 # https://github.com/epiphany-platform/epiphany/blob/develop/docs/home/howto/BACKUP.md#logging + block: + - name: Ensure snapshot repository is defined + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}" + method: PUT + body: + type: fs + settings: + location: "{{ opensearch_snapshot_repository_location }}" + compress: true + + - name: Trigger snapshot creation + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}/{{ snapshot_name }}" + method: PUT + + - name: Wait (up to 12h) for snapshot completion + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}/{{ snapshot_name }}" + method: GET + register: uri_response + until: (uri_response.json.snapshots | selectattr('snapshot', 'equalto', snapshot_name) | first).state == "SUCCESS" + retries: "{{ (12 * 3600 // 10) | int }}" # 12h + delay: 10 + + - name: Find all snapshots + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}/_all" + method: GET + register: uri_response + + - name: Delete old snapshots + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}/{{ item }}" + method: DELETE + loop: >- + {{ uri_response.json.snapshots | map(attribute='snapshot') | reject('equalto', snapshot_name) | list }} + + - name: Create snapshot archive + import_tasks: common/create_snapshot_archive.yml + vars: + snapshot_prefix: "opensearch_snapshot" + dirs_to_archive: + - "{{ opensearch_snapshot_repository_location }}/" + + - name: Create snapshot checksum + import_tasks: common/create_snapshot_checksum.yml + + - name: Transfer artifacts via rsync + import_tasks: common/download_via_rsync.yml + vars: + artifacts: + - "{{ snapshot_path }}" + - "{{ snapshot_path }}.sha1" diff --git a/ansible/playbooks/roles/certificate/tasks/install-packages.yml b/ansible/playbooks/roles/certificate/tasks/install-packages.yml index 47a77683de..6927a66bca 100644 --- a/ansible/playbooks/roles/certificate/tasks/install-packages.yml +++ b/ansible/playbooks/roles/certificate/tasks/install-packages.yml @@ -10,4 +10,4 @@ RedHat: - python3-cryptography module_defaults: - yum: { lock_timeout: "{{ yum_lock_timeout }}" } + yum: {lock_timeout: "{{ yum_lock_timeout }}"} diff --git a/ansible/playbooks/roles/download/tasks/list_files.yml b/ansible/playbooks/roles/download/tasks/list_files.yml index 270fe93070..987e5adc52 100644 --- a/ansible/playbooks/roles/download/tasks/list_files.yml +++ b/ansible/playbooks/roles/download/tasks/list_files.yml @@ -1,25 +1,9 @@ --- -- name: Get file listing - uri: - method: GET - url: "{{ repository_url }}/files/?F=0" # F=0 formats the listing as a simple list (not FancyIndexed) - body_format: raw - return_content: true - validate_certs: "{{ validate_certs | default(false, true) | bool }}" # handling "undefined", "null", "empty" and "boolean" values all at once - register: uri_list_files - until: uri_list_files is success - retries: 3 - delay: 2 - become: false +- name: Get files list from the repository + include_tasks: list_requirements.yml + vars: + _requirements: files -# TODO: make it work with yaml or json (instead of html, sic!). 
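One detail worth calling out in the backup block above: retention is simply "keep the snapshot just taken, delete everything else", implemented by mapping the repository listing to snapshot names and rejecting the new name. The loop expression reduces to a plain list filter; a self-contained sketch with made-up data:

    - name: Show which snapshots would be deleted
      debug:
        msg: "{{ snapshots | map(attribute='snapshot') | reject('equalto', keep) | list }}"
      vars:
        keep: "20220809-0531"  # illustrative snapshot_name
        snapshots:             # same shape as uri_response.json.snapshots
          - { snapshot: "20220807-0531", state: "SUCCESS" }
          - { snapshot: "20220808-0531", state: "SUCCESS" }
          - { snapshot: "20220809-0531", state: "SUCCESS" }
    # prints ["20220807-0531", "20220808-0531"]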
-- name: Parse html response and return file listing +- name: Set files in repository as fact set_fact: - list_files_result: >- - {{ lines | select('match', regexp) - | reject('match', '.*Parent Directory.*') - | map('regex_replace', regexp, '\1') - | list }} - vars: - lines: "{{ uri_list_files.content.splitlines() }}" - regexp: '.*<a href="([^"]+)">.*'
- + {{ lines | select('match', regexp) + | reject('match', '.*Parent Directory.*') + | map('regex_replace', regexp, '\1') + | list }} + vars: + lines: "{{ uri_list_files.content.splitlines() }}" + regexp: '.*<a href="([^"]+)">.*'
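The select/reject/map chain keeps only lines containing an anchor tag, drops Apache's "Parent Directory" link, and reduces each survivor to its href value (the anchor-tag pattern shown is a reconstruction; as the TODO notes, this HTML parsing is a stopgap). A self-contained sketch against an illustrative mod_autoindex response:

    - name: Parse a sample listing
      debug:
        msg: >-
          {{ lines | select('match', regexp)
                   | reject('match', '.*Parent Directory.*')
                   | map('regex_replace', regexp, '\1')
                   | list }}
      vars:
        regexp: '.*<a href="([^"]+)">.*'  # assumed pattern, matching the reconstruction above
        lines:
          - '<li><a href="../"> Parent Directory</a></li>'
          - '<li><a href="curl-7.79.1.rpm"> curl-7.79.1.rpm</a></li>'
          - '<li><a href="vim-8.2.tar.gz"> vim-8.2.tar.gz</a></li>'
    # prints ["curl-7.79.1.rpm", "vim-8.2.tar.gz"]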
  • - - {{ hostvars[groups.kibana|intersect(groups.logging)|first]['ansible_hostname'] }} + {{ hostvars[groups.opensearch_dashboards|intersect(groups.logging)|first]['ansible_hostname'] }} when: - not is_upgrade_run - - groups.kibana[0] is defined + - groups.opensearch_dashboards[0] is defined - groups.logging is defined - - groups.kibana | intersect(groups.logging) | length + - groups.opensearch_dashboards | intersect(groups.logging) | length - name: Copy configuration file (filebeat.yml) template: diff --git a/ansible/playbooks/roles/filebeat/tasks/main.yml b/ansible/playbooks/roles/filebeat/tasks/main.yml index 4cdfe32550..4568ff85ee 100644 --- a/ansible/playbooks/roles/filebeat/tasks/main.yml +++ b/ansible/playbooks/roles/filebeat/tasks/main.yml @@ -5,7 +5,7 @@ - name: Load variables from logging role # needed to get passwords for both installation types include_vars: file: roles/logging/vars/main.yml - name: opendistro_for_logging_vars + name: logging_vars when: groups.logging is defined - name: Include installation tasks for Filebeat as DaemonSet for "k8s as cloud service" diff --git a/ansible/playbooks/roles/filebeat/templates/custom-chart-values.yml.j2 b/ansible/playbooks/roles/filebeat/templates/custom-chart-values.yml.j2 index 831897a347..59280d5d70 100644 --- a/ansible/playbooks/roles/filebeat/templates/custom-chart-values.yml.j2 +++ b/ansible/playbooks/roles/filebeat/templates/custom-chart-values.yml.j2 @@ -64,10 +64,10 @@ filebeatConfig: processors: - add_kubernetes_metadata: - in_cluster: true - matchers: - - logs_path: - logs_path: "/var/log/containers/" + in_cluster: true + matchers: + - logs_path: + logs_path: "/var/log/containers/" {% endif %} {# -------------------------- Filebeat modules -------------------------- #} @@ -95,8 +95,8 @@ filebeatConfig: - "https://{{hostvars[host]['ansible_default_ipv4']['address']}}:9200" {% endfor %} - username: logstash - password: {{ "'%s'" % opendistro_for_logging_vars.specification.logstash_password | replace("'","''") }} + username: filebeatservice + password: {{ "'%s'" % logging_vars.specification.filebeatservice_password | replace("'","''") }} {# Controls the verification of certificates #} ssl.verification_mode: none diff --git a/ansible/playbooks/roles/filebeat/templates/filebeat.yml.j2 b/ansible/playbooks/roles/filebeat/templates/filebeat.yml.j2 index a6715edf20..f886e51a75 100644 --- a/ansible/playbooks/roles/filebeat/templates/filebeat.yml.j2 +++ b/ansible/playbooks/roles/filebeat/templates/filebeat.yml.j2 @@ -144,6 +144,12 @@ filebeat.config.modules: # ======================= Elasticsearch template setting ======================= +{% if is_upgrade_run %} +setup.template.overwrite: true +setup.template.append_fields: + - name: log.file.path + type: text +{% endif %} setup.template.settings: index.number_of_shards: 3 #index.codec: best_compression @@ -169,16 +175,21 @@ setup.template.settings: # These settings control loading the sample dashboards to the Kibana index. Loading # the dashboards is disabled by default and can be enabled either by setting the # options here or by using the `setup` command. -{% set dashboards_enabled = is_upgrade_run | ternary(existing_setup_dashboards.enabled, specification.kibana.dashboards.enabled) %} -{% if dashboards_enabled | lower == 'auto' %} - {% if group_names | intersect(['kibana', 'logging']) | count == 2 %} -setup.dashboards.enabled: true - {% else %} +# +# Below logic commented out as a workaround for problem with filebeat till the time OPS team will resolve it. 
+# More info: https://github.com/opensearch-project/OpenSearch-Dashboards/issues/656#issuecomment-978036236 +# A static value is used instead: setup.dashboards.enabled: false - {% endif %} -{% else %} -setup.dashboards.enabled: {{ dashboards_enabled | lower }} -{% endif %} +# {% set dashboards_enabled = is_upgrade_run | ternary(existing_setup_dashboards.enabled, specification.opensearch.dashboards.enabled) %} +# {% if dashboards_enabled | lower == 'auto' %} +# {% if group_names | intersect(['opensearch_dashboards', 'logging']) | count == 2 %} +# setup.dashboards.enabled: true +# {% else %} +#setup.dashboards.enabled: false +# {% endif %} +#{% else %} +#setup.dashboards.enabled: {{ dashboards_enabled | lower }} +#{% endif %} # The Elasticsearch index name. # This setting overwrites the index name defined in the dashboards and index pattern. @@ -186,7 +197,7 @@ setup.dashboards.enabled: {{ dashboards_enabled | lower }} {% if is_upgrade_run %} {% set dashboards_index = 'filebeat-*' if (existing_setup_dashboards.index == 'null') else existing_setup_dashboards.index %} {% else %} - {% set dashboards_index = specification.kibana.dashboards.index %} + {% set dashboards_index = specification.opensearch.dashboards.index %} {% endif %} setup.dashboards.index: "{{ dashboards_index }}" @@ -247,7 +258,7 @@ setup.kibana: {% if setup_kibana_host is defined %} host: {{ setup_kibana_host }} username: kibanaserver - password: {{ "'%s'" % opendistro_for_logging_vars.specification.kibanaserver_password | replace("'","''") }} + password: {{ "'%s'" % logging_vars.specification.kibanaserver_password | replace("'","''") }} {% else %} #host: "localhost:5601" {% endif %} @@ -256,8 +267,8 @@ setup.kibana: {% if existing_setup_kibana.host is defined %} host: {{ existing_setup_kibana.host }} {% else %} - {% if groups.kibana is defined and groups.logging is defined and (groups.kibana | intersect(groups.logging) | count > 0) %} - host: {{ hostvars[groups.kibana | intersect(groups.logging) | first].ansible_hostname }} + {% if groups.opensearch_dashboards is defined and groups.logging is defined and (groups.opensearch_dashboards | intersect(groups.logging) | count > 0) %} + host: {{ hostvars[groups.opensearch_dashboards | intersect(groups.logging) | first].ansible_hostname }} {% else %} #host: "localhost:5601" {% endif %} @@ -303,10 +314,11 @@ output.elasticsearch: {% endfor %} # Authentication credentials - either API key or username/password. 
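The hunk just below swaps the Elasticsearch output credentials: fresh installs now authenticate as a dedicated `filebeatservice` user, while upgrade runs keep the legacy `logstash` user with the password already present on the cluster. Under the fresh-install branch the template would render roughly this (host and password are placeholders):

    output.elasticsearch:
      hosts:
        - "https://10.0.0.5:9200"
      username: filebeatservice
      password: 'example-password'
      # Controls the verification of certificates
      ssl.verification_mode: none

Separating the Filebeat identity from the shared `logstash` account makes it possible to scope and rotate its OpenSearch permissions independently.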
- username: logstash {% if not is_upgrade_run %} - password: {{ "'%s'" % opendistro_for_logging_vars.specification.logstash_password | replace("'","''") }} + username: filebeatservice + password: {{ "'%s'" % logging_vars.specification.filebeatservice_password | replace("'","''") }} {% else %} + username: logstash password: {{ "'%s'" % existing_output_es_password | replace("'","''") }} {% endif %} diff --git a/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-Debian.yml b/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-Debian.yml index be92c1e13b..20a95056a5 100644 --- a/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-Debian.yml +++ b/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-Debian.yml @@ -3,5 +3,6 @@ apt: update_cache: true name: - - containerd.io # provides "runc" + - containerd.io={{ containerd_defaults.containerd_version }}-* # provides "runc" state: present + allow_downgrade: true diff --git a/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-RedHat.yml b/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-RedHat.yml index 99cdc268a5..e6f66c4161 100644 --- a/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-RedHat.yml +++ b/ansible/playbooks/roles/haproxy_runc/tasks/install-packages-RedHat.yml @@ -1,7 +1,8 @@ ---- + - name: Install containerd.io package for RedHat family yum: update_cache: true name: - - containerd.io # provides "runc" + - containerd.io-{{ containerd_defaults.containerd_version }} # provides "runc" state: present + allow_downgrade: true diff --git a/ansible/playbooks/roles/haproxy_runc/tasks/main.yml b/ansible/playbooks/roles/haproxy_runc/tasks/main.yml index 3f7c343529..4a08c223ac 100644 --- a/ansible/playbooks/roles/haproxy_runc/tasks/main.yml +++ b/ansible/playbooks/roles/haproxy_runc/tasks/main.yml @@ -9,6 +9,11 @@ haproxy_dir: "{{ runc_dir }}/{{ haproxy_service }}" haproxy_service_needs_restart: false +- name: Include containerd defaults + include_vars: + file: roles/containerd/defaults/main.yml + name: containerd_defaults + - name: Install required system packages include_tasks: "install-packages-{{ ansible_os_family }}.yml" diff --git a/ansible/playbooks/roles/image_registry/tasks/main.yml b/ansible/playbooks/roles/image_registry/tasks/main.yml index ef16833b7b..e7a122df85 100644 --- a/ansible/playbooks/roles/image_registry/tasks/main.yml +++ b/ansible/playbooks/roles/image_registry/tasks/main.yml @@ -122,24 +122,59 @@ --name {{ epiphany_registry.container_name }} -v {{ epiphany_registry.volume_name }}:/var/lib/registry {{ specification.registry_image.name }} - - name: Set images to load + - name: Define images to unpack set_fact: - generic_and_current_images: >- - {{ specification.images_to_load[ansible_architecture].generic + specification.images_to_load[ansible_architecture].current }} - legacy_images: "{{ specification.images_to_load[ansible_architecture].legacy }}" + current_schema_images: "{{ specification.images_to_load[ansible_architecture].current }}" + generic_schema_images: "{{ specification.images_to_load[ansible_architecture].generic }}" + legacy_schema_images: "{{ specification.images_to_load[ansible_architecture].legacy }}" + + - name: Initialize image facts + set_fact: + requested_images: [] + current_images: [] + generic_images: [] + legacy_images: [] + + - name: Set list of current images to be loaded/pushed + set_fact: + current_images: "{{ current_schema_images | dict_to_list(only_values='True') | flatten }}" + + - name: Set list of generic images to be 
loaded/pushed + set_fact: + generic_images: "{{ generic_schema_images | dict_to_list(only_values='True') | flatten }}" + + - name: Set list of legacy images to be loaded/pushed + set_fact: + legacy_images: "{{ legacy_schema_images | dict_to_list(only_values='True') | flatten }}" + + - name: Merge current and generic images + set_fact: + current_and_generic_images: >- + {{ current_images + generic_images }} + + - name: Get list of available images + include_role: + name: download + tasks_from: list_images.yml + + - name: Filter only requested images + set_fact: # gather only images listed in schema to avoid downloading unknown files + requested_images: "{{ requested_images + [item] }}" + when: "{{ item.file_name in list_images_result }}" + loop: "{{ current_and_generic_images }}" - name: Load generic and current version images vars: docker_image: "{{ item }}" include_tasks: load-image.yml - loop: "{{ generic_and_current_images }}" + loop: "{{ requested_images }}" - name: Push generic and current version images to registry vars: docker_image: "{{ item }}" new_image_tag: "{{ image_registry_address }}/{{ item.name }}" include_tasks: push-image.yml - loop: "{{ generic_and_current_images }}" + loop: "{{ requested_images }}" - name: Load legacy version images to registry when upgrading when: is_upgrade_run diff --git a/ansible/playbooks/roles/kibana/defaults/main.yml b/ansible/playbooks/roles/kibana/defaults/main.yml deleted file mode 100644 index f07c1f3457..0000000000 --- a/ansible/playbooks/roles/kibana/defaults/main.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -kibana_version: - RedHat: "1.13.1" - Debian: "1.13.1" - -# Required and used for upgrade Open Distro for Elasticsearch - Kibana: -specification: - kibana_log_dir: /var/log/kibana diff --git a/ansible/playbooks/roles/kibana/tasks/main.yml b/ansible/playbooks/roles/kibana/tasks/main.yml deleted file mode 100644 index 0ed8bf4be3..0000000000 --- a/ansible/playbooks/roles/kibana/tasks/main.yml +++ /dev/null @@ -1,68 +0,0 @@ ---- -- name: Install Kibana package - package: - name: "{{ _packages[ansible_os_family] }}" - state: present - vars: - _packages: - Debian: - - opendistroforelasticsearch-kibana={{ kibana_version[ansible_os_family] }} - RedHat: - - opendistroforelasticsearch-kibana-{{ kibana_version[ansible_os_family] }} - module_defaults: - yum: {lock_timeout: "{{ yum_lock_timeout }}"} - -- name: Include logging configuration tasks - include_tasks: setup-logging.yml - -- name: Load variables from logging/opendistro_for_elasticsearch role - when: context is undefined or context != "upgrade" - block: - - name: Load variables from logging role - include_vars: - file: roles/logging/vars/main.yml - name: opendistro_for_logging_vars - when: "'logging' in group_names" - - - name: Load variables from opendistro_for_elasticsearch role - include_vars: - file: roles/opendistro_for_elasticsearch/vars/main.yml - name: opendistro_for_data_vars - when: "'opendistro_for_elasticsearch' in group_names" - -- name: Update Kibana configuration file - template: - backup: true - src: kibana.yml.j2 - dest: /etc/kibana/kibana.yml - owner: kibana - group: root - mode: u=rw,go= - register: change_config - -- name: Restart Kibana service - systemd: - name: kibana - state: restarted - when: change_config.changed - -- name: Start kibana service - service: - name: kibana - state: started - enabled: true - -- name: Wait for kibana to start listening - wait_for: - host: "{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}" - port: 5601 - delay: 5 - 
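Back in the image_registry changes above, the `requested_images` loop is a membership filter: a schema image is kept only if its file is actually present in the repository listing. Assuming the same facts (`current_and_generic_images` items carry a `file_name` key and `list_images_result` is a flat list of file names), a compact equivalent would be:

    - name: Filter only requested images (equivalent one-step form)
      set_fact:
        requested_images: >-
          {{ current_and_generic_images
             | selectattr('file_name', 'in', list_images_result)
             | list }}

This form also avoids the quoted Jinja2 condition in `when`, which ansible-lint typically flags.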
-- name: Wait for Kibana to be ready - uri: - url: http://{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}:5601/api/status - method: GET - register: response - until: "'kbn_name' in response and response.status == 200" - retries: 120 - delay: 2 diff --git a/ansible/playbooks/roles/kibana/tasks/setup-logging.yml b/ansible/playbooks/roles/kibana/tasks/setup-logging.yml deleted file mode 100644 index f6f248b8d1..0000000000 --- a/ansible/playbooks/roles/kibana/tasks/setup-logging.yml +++ /dev/null @@ -1,30 +0,0 @@ ---- -- name: Create log directory for Kibana - file: - path: "{{ specification.kibana_log_dir }}" - state: directory - mode: u=rwx,go=rx - -- name: Create logfile for Kibana - copy: - dest: "{{ specification.kibana_log_dir }}/kibana.log" - owner: kibana - group: kibana - mode: u=rw,go=r - force: false - content: "" - -- name: Set permissions on logfile for Kibana - file: - path: "{{ specification.kibana_log_dir }}/kibana.log" - owner: kibana - group: kibana - mode: u=rw,go=r - -- name: Copy logrotate config - template: - dest: /etc/logrotate.d/kibana - owner: root - group: root - mode: u=rw,go=r - src: logrotate.conf.j2 diff --git a/ansible/playbooks/roles/kibana/templates/kibana.yml.j2 b/ansible/playbooks/roles/kibana/templates/kibana.yml.j2 deleted file mode 100644 index e27bf5112d..0000000000 --- a/ansible/playbooks/roles/kibana/templates/kibana.yml.j2 +++ /dev/null @@ -1,64 +0,0 @@ -# {{ ansible_managed }} - -# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). -# You may not use this file except in compliance with the License. -# A copy of the License is located at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# or in the "license" file accompanying this file. This file is distributed -# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either -# express or implied. See the License for the specific language governing -# permissions and limitations under the License. - -# Description: -# Default Kibana configuration for Open Distro. - -server.host: "{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}" -elasticsearch.hosts: -{% if 'logging' in group_names %} - # Logging hosts: - {% for host in groups['logging'] %} - - "https://{{hostvars[host]['ansible_hostname']}}:9200" - {% endfor %} -{% elif 'opendistro_for_elasticsearch' in group_names %} - # Data hosts: - {% for host in groups['opendistro_for_elasticsearch'] %} - - "https://{{hostvars[host]['ansible_hostname']}}:9200" - {% endfor %} -{% endif %} - -elasticsearch.ssl.verificationMode: none -elasticsearch.username: kibanaserver -{% set password = 'kibanaserver' %} -{% if context is undefined or context != 'upgrade' -%} - {# mode: apply -#} - {% if 'logging' in group_names -%} - {% set password = opendistro_for_logging_vars.specification.kibanaserver_password -%} - {% elif 'opendistro_for_elasticsearch' in group_names -%} - {% set password = opendistro_for_data_vars.specification.kibanaserver_password -%} - {% endif %} -{% else -%} - {# mode: upgrade -#} - {% set password = existing_es_password %} -{% endif %} -elasticsearch.password: {{ "'%s'" % password | replace("'","''") }} -elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"] - -# Enables you to specify a file where Kibana stores log output. 
-logging.dest: {{ specification.kibana_log_dir }}/kibana.log - -opendistro_security.multitenancy.enabled: true -opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"] -opendistro_security.readonly_mode.roles: ["kibana_read_only"] - -# Provided with 1.10.1 version: -# https://opendistro.github.io/for-elasticsearch-docs/docs/upgrade/1-10-1/ -# Use this setting if you are running kibana without https -opendistro_security.cookie.secure: false - -newsfeed.enabled: false -telemetry.optIn: false -telemetry.enabled: false diff --git a/ansible/playbooks/roles/kibana/templates/logrotate.conf.j2 b/ansible/playbooks/roles/kibana/templates/logrotate.conf.j2 deleted file mode 100644 index d550d97e19..0000000000 --- a/ansible/playbooks/roles/kibana/templates/logrotate.conf.j2 +++ /dev/null @@ -1,8 +0,0 @@ -{{ specification.kibana_log_dir }}/*.log { - rotate 5 - daily - compress - missingok - notifempty - delaycompress -} diff --git a/ansible/playbooks/roles/kubernetes_common/tasks/extend-kubeadm-config.yml b/ansible/playbooks/roles/kubernetes_common/tasks/extend-kubeadm-config.yml index 7a931cc721..8bd1a787de 100644 --- a/ansible/playbooks/roles/kubernetes_common/tasks/extend-kubeadm-config.yml +++ b/ansible/playbooks/roles/kubernetes_common/tasks/extend-kubeadm-config.yml @@ -5,13 +5,16 @@ - update is defined fail_msg: Variable 'update' must be defined. -- name: Collect kubeadm-config +- name: Include set-cluster-version.yml + include_tasks: set-cluster-version.yml + +- name: Collect kubeadm-config from ConfigMap command: | kubectl get configmap kubeadm-config \ --namespace kube-system \ - --output jsonpath={{ jsonpath }} + --output jsonpath={{ _jsonpath }} vars: - jsonpath: >- + _jsonpath: >- '{.data.ClusterConfiguration}' register: kubeadm_config changed_when: false @@ -24,9 +27,22 @@ original: >- {{ kubeadm_config.stdout | from_yaml }} +- name: Collect kubelet-config from ConfigMap + command: |- + kubectl get cm kubelet-config-{{ cluster_version_major }}.{{ cluster_version_minor }} \ + --namespace kube-system \ + --output=jsonpath={{ _jsonpath }} + vars: + _jsonpath: >- + '{.data.kubelet}' + register: kubelet_config + changed_when: false + - name: Render /etc/kubeadm/kubeadm-config.yml copy: dest: /etc/kubeadm/kubeadm-config.yml mode: u=rw,go= - content: >- - {{ kubeadm_config | to_nice_yaml }} + content: | + {{ kubeadm_config | to_nice_yaml(indent=2) }} + --- + {{ kubelet_config.stdout | from_yaml | to_nice_yaml(indent=2) }} diff --git a/ansible/playbooks/roles/upgrade/tasks/kubernetes/get-cluster-version.yml b/ansible/playbooks/roles/kubernetes_common/tasks/get-cluster-version.yml similarity index 100% rename from ansible/playbooks/roles/upgrade/tasks/kubernetes/get-cluster-version.yml rename to ansible/playbooks/roles/kubernetes_common/tasks/get-cluster-version.yml diff --git a/ansible/playbooks/roles/upgrade/tasks/kubernetes/set-cluster-version.yml b/ansible/playbooks/roles/kubernetes_common/tasks/set-cluster-version.yml similarity index 88% rename from ansible/playbooks/roles/upgrade/tasks/kubernetes/set-cluster-version.yml rename to ansible/playbooks/roles/kubernetes_common/tasks/set-cluster-version.yml index 61e986bcf4..1bedfa2013 100644 --- a/ansible/playbooks/roles/upgrade/tasks/kubernetes/set-cluster-version.yml +++ b/ansible/playbooks/roles/kubernetes_common/tasks/set-cluster-version.yml @@ -1,6 +1,6 @@ --- - name: k8s | Include get-cluster-version.yml - include_tasks: kubernetes/get-cluster-version.yml + include_tasks: get-cluster-version.yml - name: Set cluster version 
as fact set_fact: diff --git a/ansible/playbooks/roles/kubernetes_master/tasks/cni-plugins/canal.yml b/ansible/playbooks/roles/kubernetes_master/tasks/cni-plugins/canal.yml index 6a941e6403..8775efdb67 100644 --- a/ansible/playbooks/roles/kubernetes_master/tasks/cni-plugins/canal.yml +++ b/ansible/playbooks/roles/kubernetes_master/tasks/cni-plugins/canal.yml @@ -38,5 +38,4 @@ - name: Include Canal deployment tasks include_tasks: deployments/deploy-template.yml vars: - canal_arch: "{{ global_architecture_alias[ansible_architecture] }}" file_name: canal.yml.j2 diff --git a/ansible/playbooks/roles/kubernetes_master/tasks/copy-kubeconfig.yml b/ansible/playbooks/roles/kubernetes_master/tasks/generate-kubeconfig.yml similarity index 100% rename from ansible/playbooks/roles/kubernetes_master/tasks/copy-kubeconfig.yml rename to ansible/playbooks/roles/kubernetes_master/tasks/generate-kubeconfig.yml diff --git a/ansible/playbooks/roles/kubernetes_master/templates/calico.yml.j2 b/ansible/playbooks/roles/kubernetes_master/templates/calico.yml.j2 index fdfc1ccf26..bf8dc7a7f2 100644 --- a/ansible/playbooks/roles/kubernetes_master/templates/calico.yml.j2 +++ b/ansible/playbooks/roles/kubernetes_master/templates/calico.yml.j2 @@ -13,10 +13,12 @@ data: typha_service_name: "none" # Configure the backend to use. calico_backend: "bird" + # Configure the MTU to use for workload interfaces and tunnels. # By default, MTU is auto-detected, and explicitly setting this field should not be required. # You can override auto-detection by providing a non-zero value. veth_mtu: "0" + # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- @@ -52,8 +54,10 @@ data: } ] } + --- # Source: calico/templates/kdd-crds.yaml + apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: @@ -92,6 +96,12 @@ spec: 64512]' format: int32 type: integer + bindMode: + description: BindMode indicates whether to listen for BGP connections + on all addresses (None) or only on the node's canonical IP address + Node.Spec.BGP.IPvXAddress (NodeIP). Default behaviour is to listen + for BGP connections on all addresses. + type: string communities: description: Communities is a list of BGP community values and their arbitrary names for tagging routes. @@ -122,6 +132,37 @@ spec: description: 'LogSeverityScreen is the log severity above which logs are sent to the stdout. [Default: INFO]' type: string + nodeMeshMaxRestartTime: + description: Time to allow for software restart for node-to-mesh peerings. When + specified, this is configured as the graceful restart timeout. When + not specified, the BIRD default of 120s is used. This field can + only be set on the default BGPConfiguration instance and requires + that NodeMesh is enabled + type: string + nodeMeshPassword: + description: Optional BGP password for full node-to-mesh peerings. + This field can only be set on the default BGPConfiguration instance + and requires that NodeMesh is enabled + properties: + secretKeyRef: + description: Selects a key of a secret in the node pod's namespace. + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' 
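Stepping back to the kubeadm change above: extend-kubeadm-config.yml now reads both the kubeadm-config ConfigMap and the versioned kubelet-config-X.Y ConfigMap, then renders them as a two-document YAML file, a layout kubeadm accepts for passing ClusterConfiguration and KubeletConfiguration together. The rendered /etc/kubeadm/kubeadm-config.yml would take roughly this shape (API versions and values are illustrative):

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.22.4
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

Carrying the kubelet document along presumably keeps kubelet settings intact through the upgrade flow instead of letting kubeadm fall back to defaults.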
+ type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + type: object nodeToNodeMeshEnabled: description: 'NodeToNodeMeshEnabled sets whether full node to node BGP mesh is enabled. [Default: true]' @@ -195,6 +236,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -239,8 +281,8 @@ spec: in the specific branch of the Node on "bird.cfg". type: boolean maxRestartTime: - description: Time to allow for software restart. When specified, this - is configured as the graceful restart timeout. When not specified, + description: Time to allow for software restart. When specified, + this is configured as the graceful restart timeout. When not specified, the BIRD default of 120s is used. type: string node: @@ -252,6 +294,12 @@ spec: description: Selector for the nodes that should have this peering. When this is set, the Node field must be empty. type: string + numAllowedLocalASNumbers: + description: Maximum number of local AS numbers that are allowed in + the AS path for received routes. This removes BGP loop prevention + and should only be used if absolutely necesssary. + format: int32 + type: integer password: description: Optional BGP password for the peerings generated by this BGPPeer resource. @@ -307,6 +355,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -367,6 +416,270 @@ status: plural: "" conditions: [] storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: caliconodestatuses.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: CalicoNodeStatus + listKind: CalicoNodeStatusList + plural: caliconodestatuses + singular: caliconodestatus + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: CalicoNodeStatusSpec contains the specification for a CalicoNodeStatus + resource. + properties: + classes: + description: Classes declares the types of information to monitor + for this calico/node, and allows for selective status reporting + about certain subsets of information. + items: + type: string + type: array + node: + description: The node name identifies the Calico node instance for + node status. + type: string + updatePeriodSeconds: + description: UpdatePeriodSeconds is the period at which CalicoNodeStatus + should be updated. Set to 0 to disable CalicoNodeStatus refresh. + Maximum update period is one day. 
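The `password` and `nodeMeshPassword` additions above allow BGP sessions to authenticate with a password taken from a Kubernetes Secret. A hypothetical peer using the new field (all names and addresses are made up):

    apiVersion: crd.projectcalico.org/v1
    kind: BGPPeer
    metadata:
      name: rack1-tor
    spec:
      peerIP: 192.0.2.1
      asNumber: 64512
      password:
        secretKeyRef:
          name: bgp-secrets     # Secret in the calico-node pods' namespace, per the schema
          key: rack1-password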
+ format: int32 + type: integer + type: object + status: + description: CalicoNodeStatusStatus defines the observed state of CalicoNodeStatus. + No validation needed for status since it is updated by Calico. + properties: + agent: + description: Agent holds agent status on the node. + properties: + birdV4: + description: BIRDV4 represents the latest observed status of bird4. + properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + birdV6: + description: BIRDV6 represents the latest observed status of bird6. + properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + type: object + bgp: + description: BGP holds node BGP status. + properties: + numberEstablishedV4: + description: The total number of IPv4 established bgp sessions. + type: integer + numberEstablishedV6: + description: The total number of IPv6 established bgp sessions. + type: integer + numberNotEstablishedV4: + description: The total number of IPv4 non-established bgp sessions. + type: integer + numberNotEstablishedV6: + description: The total number of IPv6 non-established bgp sessions. + type: integer + peersV4: + description: PeersV4 represents IPv4 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + peersV6: + description: PeersV6 represents IPv6 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + required: + - numberEstablishedV4 + - numberEstablishedV6 + - numberNotEstablishedV4 + - numberNotEstablishedV6 + type: object + lastUpdated: + description: LastUpdated is a timestamp representing the server time + when CalicoNodeStatus object last updated. It is represented in + RFC3339 form and is in UTC. 
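CalicoNodeStatus, added above, works on request: you create an object naming a node and the classes of information you want, and calico-node periodically fills in the status section (agent, BGP sessions, routes). A minimal example (node and class names assumed; the schema above only constrains classes to strings):

    apiVersion: crd.projectcalico.org/v1
    kind: CalicoNodeStatus
    metadata:
      name: node1-status
    spec:
      node: node1
      classes:
        - Agent
        - BGP
        - Routes
      updatePeriodSeconds: 10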
+ format: date-time + nullable: true + type: string + routes: + description: Routes reports routes known to the Calico BGP daemon + on the node. + properties: + routesV4: + description: RoutesV4 represents IPv4 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. + properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. + type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + routesV6: + description: RoutesV6 represents IPv6 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. + properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. + type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + type: object + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -430,6 +743,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -476,7 +790,7 @@ spec: type: boolean awsSrcDstCheck: description: 'Set source-destination-check on AWS EC2 instances. Accepted - value must be one of "DoNothing", "Enabled" or "Disabled". [Default: + value must be one of "DoNothing", "Enable" or "Disable". [Default: DoNothing]' enum: - DoNothing @@ -510,6 +824,18 @@ spec: description: 'BPFEnabled, if enabled Felix will use the BPF dataplane. [Default: false]' type: boolean + bpfEnforceRPF: + description: 'BPFEnforceRPF enforce strict RPF on all interfaces with + BPF programs regardless of what is the per-interfaces or global + setting. Possible values are Disabled or Strict. [Default: Strict]' + type: string + bpfExtToServiceConnmark: + description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit + mark that is set on connections from an external client to a local + service. This mark allows us to control how packets of that connection + are routed within the host and how is routing intepreted by RPF + check. 
[Default: 0]' + type: integer bpfExternalServiceMode: description: 'BPFExternalServiceMode in BPF mode, controls how connections from outside the cluster to services (node ports and cluster IPs) @@ -520,14 +846,6 @@ spec: node appears to use the IP of the ingress node; this requires a permissive L2 network. [Default: Tunnel]' type: string - bpfExtToServiceConnmark: - description: 'BPFExtToServiceConnmark in BPF mode, controls a - 32bit mark that is set on connections from an external client to - a local service. This mark allows us to control how packets of - that connection are routed within the host and how is routing - intepreted by RPF check. [Default: 0]' - type: integer - bpfKubeProxyEndpointSlicesEnabled: description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls whether Felix's embedded kube-proxy accepts EndpointSlices or not. @@ -550,6 +868,51 @@ spec: logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`. [Default: Off].' type: string + bpfMapSizeConntrack: + description: 'BPFMapSizeConntrack sets the size for the conntrack + map. This map must be large enough to hold an entry for each active + connection. Warning: changing the size of the conntrack map can + cause disruption.' + type: integer + bpfMapSizeIPSets: + description: BPFMapSizeIPSets sets the size for ipsets map. The IP + sets map must be large enough to hold an entry for each endpoint + matched by every selector in the source/destination matches in network + policy. Selectors such as "all()" can result in large numbers of + entries (one entry per endpoint in that case). + type: integer + bpfMapSizeNATAffinity: + type: integer + bpfMapSizeNATBackend: + description: BPFMapSizeNATBackend sets the size for nat back end map. + This is the total number of endpoints. This is mostly more than + the size of the number of services. + type: integer + bpfMapSizeNATFrontend: + description: BPFMapSizeNATFrontend sets the size for nat front end + map. FrontendMap should be large enough to hold an entry for each + nodeport, external IP and each port in each service. + type: integer + bpfMapSizeRoute: + description: BPFMapSizeRoute sets the size for the routes map. The + routes map should be large enough to hold one entry per workload + and a handful of entries per host (enough to cover its own IPs and + tunnel IPs). + type: integer + bpfPSNATPorts: + anyOf: + - type: integer + - type: string + description: 'BPFPSNATPorts sets the range from which we randomly + pick a port if there is a source port collision. This should be + within the ephemeral range as defined by RFC 6056 (1024–65535) and + preferably outside the ephemeral ranges used by common operating + systems. Linux uses 32768–60999, while others mostly use the IANA + defined range 49152–65535. It is not necessarily a problem if this + range overlaps with the operating systems. Both ends of the range + are inclusive. [Default: 20000:29999]' + pattern: ^.* + x-kubernetes-int-or-string: true chainInsertMode: description: 'ChainInsertMode controls whether Felix hooks the kernel''s top-level iptables chains by inserting a rule at the top of the @@ -560,6 +923,15 @@ spec: Calico policy will be bypassed. [Default: insert]' type: string dataplaneDriver: + description: DataplaneDriver filename of the external dataplane driver + to use. Only used if UseInternalDataplaneDriver is set to false. 
+ type: string + dataplaneWatchdogTimeout: + description: 'DataplaneWatchdogTimeout is the readiness/liveness timeout + used for Felix''s (internal) dataplane driver. Increase this value + if you experience spurious non-ready or non-live events when Felix + is under heavy load. Decrease the value to get felix to report non-live + or non-ready more quickly. [Default: 90s]' type: string debugDisableLogDropping: type: boolean @@ -588,9 +960,14 @@ spec: routes, by default this will be RTPROT_BOOT when left blank. type: integer deviceRouteSourceAddress: - description: This is the source address to use on programmed device - routes. By default the source address is left blank, leaving the - kernel to choose the source address used. + description: This is the IPv4 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. + type: string + deviceRouteSourceAddressIPv6: + description: This is the IPv6 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. type: string disableConntrackInvalidCheck: type: boolean @@ -664,6 +1041,14 @@ spec: "true" or "false" will force the feature, empty or omitted values are auto-detected. type: string + floatingIPs: + default: Disabled + description: FloatingIPs configures whether or not Felix will program + floating IP addresses. + enum: + - Enabled + - Disabled + type: string genericXDPEnabled: description: 'GenericXDPEnabled enables Generic XDP so network cards that don''t support XDP offload or driver modes can use XDP. This @@ -701,6 +1086,9 @@ spec: disabled by setting the interval to 0. type: string ipipEnabled: + description: 'IPIPEnabled overrides whether Felix should configure + an IPIP interface on the host. Optional as Felix determines this + based on the existing IP pools. [Default: nil (unset)]' type: boolean ipipMTU: description: 'IPIPMTU is the MTU to set on the tunnel device. See @@ -767,6 +1155,8 @@ spec: usage. [Default: 10s]' type: string ipv6Support: + description: IPv6Support controls whether Felix enables support for + IPv6 (if supported by the in-use dataplane). type: boolean kubeNodePortRanges: description: 'KubeNodePortRanges holds list of port ranges used for @@ -780,6 +1170,12 @@ spec: pattern: ^.* x-kubernetes-int-or-string: true type: array + logDebugFilenameRegex: + description: LogDebugFilenameRegex controls which source code files + have their Debug log output included in the logs. Only logs from + files with names that match the given regular expression are included. The + filter only applies to Debug level logs. + type: string logFilePath: description: 'LogFilePath is the full path to the Felix log. Set to none to disable file logging. [Default: /var/log/calico/felix.log]' @@ -876,6 +1272,12 @@ spec: to false. This reduces the number of metrics reported, reducing Prometheus load. [Default: true]' type: boolean + prometheusWireGuardMetricsEnabled: + description: 'PrometheusWireGuardMetricsEnabled disables wireguard + metrics collection, which the Prometheus client does by default, + when set to false. This reduces the number of metrics reported, + reducing Prometheus load. [Default: true]' + type: boolean removeExternalRoutes: description: Whether or not to remove device routes that have not been programmed by Felix. Disabling this will allow external applications @@ -903,9 +1305,9 @@ spec: routes. 
- CalicoIPAM: the default - use IPAM data to construct routes.' type: string routeTableRange: - description: Calico programs additional Linux route tables for various - purposes. RouteTableRange specifies the indices of the route tables - that Calico should use. + description: Deprecated in favor of RouteTableRanges. Calico programs + additional Linux route tables for various purposes. RouteTableRange + specifies the indices of the route tables that Calico should use. properties: max: type: integer @@ -915,6 +1317,21 @@ spec: - max - min type: object + routeTableRanges: + description: Calico programs additional Linux route tables for various + purposes. RouteTableRanges specifies a set of table index ranges + that Calico should use. Deprecates`RouteTableRange`, overrides `RouteTableRange`. + items: + properties: + max: + type: integer + min: + type: integer + required: + - max + - min + type: object + type: array serviceLoopPrevention: description: 'When service IP advertisement is enabled, prevent routing loops to service IPs that are not in use, by dropping or rejecting @@ -942,12 +1359,22 @@ spec: Felix makes reports. [Default: 86400s]' type: string useInternalDataplaneDriver: + description: UseInternalDataplaneDriver, if true, Felix will use its + internal dataplane programming logic. If false, it will launch + an external dataplane driver and communicate with it over protobuf. type: boolean vxlanEnabled: + description: 'VXLANEnabled overrides whether Felix should create the + VXLAN tunnel device for VXLAN networking. Optional as Felix determines + this based on the existing IP pools. [Default: nil (unset)]' type: boolean vxlanMTU: - description: 'VXLANMTU is the MTU to set on the tunnel device. See - Configuring MTU [Default: 1440]' + description: 'VXLANMTU is the MTU to set on the IPv4 VXLAN tunnel + device. See Configuring MTU [Default: 1410]' + type: integer + vxlanMTUV6: + description: 'VXLANMTUV6 is the MTU to set on the IPv6 VXLAN tunnel + device. See Configuring MTU [Default: 1390]' type: integer vxlanPort: type: integer @@ -957,10 +1384,18 @@ spec: description: 'WireguardEnabled controls whether Wireguard is enabled. [Default: false]' type: boolean + wireguardHostEncryptionEnabled: + description: 'WireguardHostEncryptionEnabled controls whether Wireguard + host-to-host encryption is enabled. [Default: false]' + type: boolean wireguardInterfaceName: description: 'WireguardInterfaceName specifies the name to use for the Wireguard interface. [Default: wg.calico]' type: string + wireguardKeepAlive: + description: 'WireguardKeepAlive controls Wireguard PersistentKeepalive + option. Set 0 to disable. [Default: 0]' + type: string wireguardListeningPort: description: 'WireguardListeningPort controls the listening port used by Wireguard. [Default: 51820]' @@ -973,6 +1408,12 @@ spec: description: 'WireguardRoutingRulePriority controls the priority value to use for the Wireguard routing rule. [Default: 99]' type: integer + workloadSourceSpoofing: + description: WorkloadSourceSpoofing controls whether pods can use + the allowedSourcePrefixes annotation to send traffic with a source + IP address that is not theirs. This is disabled by default. When + set to "Any", pods can request any prefix. + type: string xdpEnabled: description: 'XDPEnabled enables XDP acceleration for suitable untracked incoming deny rules. 
[Default: true]' @@ -993,6 +1434,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -1165,8 +1607,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1391,8 +1833,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1538,8 +1980,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1764,8 +2206,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1847,6 +2289,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -1899,6 +2342,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2006,6 +2450,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2041,8 +2486,16 @@ spec: resource. properties: affinity: + description: Affinity of the block, if this block has one. If set, + it will be of the form "host:". If not set, this block + is not affine to a host. type: string allocations: + description: Array of allocations in-use within this block. nil entries + mean the allocation is free. For non-nil entries at index i, the + index is the ordinal of the allocation within this block and the + value is the index of the associated attributes in the Attributes + array. items: type: integer # TODO: This nullable is manually added in. We should update controller-gen @@ -2050,6 +2503,10 @@ spec: nullable: true type: array attributes: + description: Attributes is an array of arbitrary metadata associated + with allocations in the block. To find attributes for a given allocation, + use the value of the allocation's entry in the Allocations array + as the index of the element in this array. 
items: properties: handle_id: @@ -2061,12 +2518,38 @@ spec: type: object type: array cidr: + description: The block's CIDR. type: string deleted: + description: Deleted is an internal boolean used to workaround a limitation + in the Kubernetes API whereby deletion will not return a conflict + error if the block has been updated. It should not be set manually. type: boolean + sequenceNumber: + default: 0 + description: We store a sequence number that is updated each time + the block is written. Each allocation will also store the sequence + number of the block at the time of its creation. When releasing + an IP, passing the sequence number associated with the allocation + allows us to protect against a race condition and ensure the IP + hasn't been released and re-allocated since the release request. + format: int64 + type: integer + sequenceNumberForAllocation: + additionalProperties: + format: int64 + type: integer + description: Map of allocated ordinal within the block to sequence + number of the block at the time of allocation. Kubernetes does not + allow numerical keys for maps, so the key is cast to a string. + type: object strictAffinity: + description: StrictAffinity on the IPAMBlock is deprecated and no + longer used by the code. Use IPAMConfig StrictAffinity instead. type: boolean unallocated: + description: Unallocated is an ordered list of allocations which are + free in the block. items: type: integer type: array @@ -2086,6 +2569,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2141,6 +2625,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2196,6 +2681,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2229,13 +2715,23 @@ spec: spec: description: IPPoolSpec contains the specification for an IPPool resource. properties: + allowedUses: + description: AllowedUse controls what the IP pool will be used for. If + not specified or empty, defaults to ["Tunnel", "Workload"] for back-compatibility + items: + type: string + type: array blockSize: description: The block size to use for IP address assignments from - this pool. Defaults to 26 for IPv4 and 112 for IPv6. + this pool. Defaults to 26 for IPv4 and 122 for IPv6. type: integer cidr: description: The pool CIDR. type: string + disableBGPExport: + description: 'Disable exporting routes from this IP Pool''s CIDR over + BGP. [Default: false]' + type: boolean disabled: description: When disabled is true, Calico IPAM will not assign addresses from this pool. @@ -2294,6 +2790,61 @@ status: plural: "" conditions: [] storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: ipreservations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPReservation + listKind: IPReservationList + plural: ipreservations + singular: ipreservation + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPReservationSpec contains the specification for an IPReservation + resource. + properties: + reservedCIDRs: + description: ReservedCIDRs is a list of CIDRs and/or IP addresses + that Calico IPAM will exclude from new allocations. + items: + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2396,6 +2947,11 @@ spec: type: string type: object type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer etcdV3CompactionPeriod: description: 'EtcdV3CompactionPeriod is the period between etcdv3 compaction requests. Set to 0 to disable. [Default: 10m]' @@ -2506,6 +3062,11 @@ spec: type: string type: object type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer etcdV3CompactionPeriod: description: 'EtcdV3CompactionPeriod is the period between etcdv3 compaction requests. Set to 0 to disable. [Default: 10m]' @@ -2536,6 +3097,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2697,8 +3259,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -2923,8 +3485,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3070,8 +3632,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3296,8 +3858,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. 
\n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3371,6 +3933,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -3421,6 +3984,8 @@ status: plural: "" conditions: [] storedVersions: [] + +--- --- # Source: calico/templates/calico-kube-controllers-rbac.yaml @@ -3447,10 +4012,10 @@ rules: - get - list - watch - # IPAM resources are manipulated when nodes are deleted. + # IPAM resources are manipulated in response to node and block updates, as well as periodic triggers. - apiGroups: ["crd.projectcalico.org"] resources: - - ippools + - ipreservations verbs: - list - apiGroups: ["crd.projectcalico.org"] @@ -3465,6 +4030,13 @@ rules: - update - delete - watch + # Pools are watched to maintain a mapping of blocks to IP pools. + - apiGroups: ["crd.projectcalico.org"] + resources: + - ippools + verbs: + - list + - watch # kube-controllers manages hostendpoints. - apiGroups: ["crd.projectcalico.org"] resources: @@ -3481,8 +4053,10 @@ rules: - clusterinformations verbs: - get + - list - create - update + - watch # KubeControllersConfiguration is where it gets its config - apiGroups: ["crd.projectcalico.org"] resources: @@ -3510,6 +4084,8 @@ subjects: name: calico-kube-controllers namespace: kube-system --- + +--- # Source: calico/templates/calico-node-rbac.yaml # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. @@ -3518,6 +4094,14 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: + # Used for creating service account tokens to be used by the CNI plugin + - apiGroups: [""] + resources: + - serviceaccounts/token + resourceNames: + - calico-node + verbs: + - create # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: @@ -3532,7 +4116,7 @@ rules: resources: - endpointslices verbs: - - watch + - watch - list - apiGroups: [""] resources: @@ -3589,6 +4173,7 @@ rules: - globalbgpconfigs - bgpconfigurations - ippools + - ipreservations - ipamblocks - globalnetworkpolicies - globalnetworksets @@ -3597,6 +4182,7 @@ rules: - clusterinformations - hostendpoints - blockaffinities + - caliconodestatuses verbs: - get - list @@ -3610,6 +4196,12 @@ rules: verbs: - create - update + # Calico must update some CRDs. + - apiGroups: [ "crd.projectcalico.org" ] + resources: + - caliconodestatuses + verbs: + - update # Calico stores some configuration information on the node. - apiGroups: [""] resources: @@ -3657,6 +4249,7 @@ rules: - daemonsets verbs: - get + --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding @@ -3670,6 +4263,7 @@ subjects: - kind: ServiceAccount name: calico-node namespace: kube-system + --- # Source: calico/templates/calico-node.yaml # This manifest installs the calico-node container, as well @@ -3717,7 +4311,7 @@ spec: # It can be deleted if this is a fresh installation, or if you have already # upgraded to use calico-ipam. 
- name: upgrade-ipam - image: {{ image_registry_address }}/calico/cni:v3.20.3 + image: {{ image_registry_address }}/calico/cni:v3.23.3 command: ["/opt/cni/bin/calico-ipam", "-upgrade"] envFrom: - configMapRef: @@ -3744,7 +4338,7 @@ spec: # This container installs the CNI binaries # and CNI network config file on each node. - name: install-cni - image: {{ image_registry_address }}/calico/cni:v3.20.3 + image: {{ image_registry_address }}/calico/cni:v3.23.3 command: ["/opt/cni/bin/install"] envFrom: - configMapRef: @@ -3782,13 +4376,28 @@ spec: name: cni-net-dir securityContext: privileged: true - # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes - # to communicate with Felix over the Policy Sync API. - - name: flexvol-driver - image: {{ image_registry_address }}/calico/pod2daemon-flexvol:v3.20.3 + # This init container mounts the necessary filesystems needed by the BPF data plane + # i.e. bpf at /sys/fs/bpf and cgroup2 at /run/calico/cgroup. Calico-node initialisation is executed + # in best effort fashion, i.e. no failure for errors, to not disrupt pod creation in iptable mode. + - name: "mount-bpffs" + image: {{ image_registry_address }}/calico/node:v3.23.3 + command: ["calico-node", "-init", "-best-effort"] volumeMounts: - - name: flexvol-driver-host - mountPath: /host/driver + - mountPath: /sys/fs + name: sys-fs + # Bidirectional is required to ensure that the new mount we make at /sys/fs/bpf propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + - mountPath: /var/run/calico + name: var-run-calico + # Bidirectional is required to ensure that the new mount we make at /run/calico/cgroup propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + # Mount /proc/ from host which usually is an init program at /nodeproc. It's needed by mountns binary, + # executed by calico-node, to mount root cgroup2 fs at /run/calico/cgroup to attach CTLB programs correctly. + - mountPath: /nodeproc + name: nodeproc + readOnly: true securityContext: privileged: true containers: @@ -3796,7 +4405,7 @@ spec: # container programs network policy and routes on each # host. - name: calico-node - image: {{ image_registry_address }}/calico/node:v3.20.3 + image: {{ image_registry_address }}/calico/node:v3.23.3 envFrom: - configMapRef: # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode. @@ -3832,6 +4441,9 @@ spec: # Enable or Disable VXLAN on the default IP pool. - name: CALICO_IPV4POOL_VXLAN value: "Never" + # Enable or Disable VXLAN on the default IPv6 IP pool. + - name: CALICO_IPV6POOL_VXLAN + value: "Never" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_IPINIPMTU valueFrom: @@ -3853,8 +4465,10 @@ spec: # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. +{# BEGIN Customized by Epiphany #} - name: CALICO_IPV4POOL_CIDR value: "10.244.0.0/16" +{# END #} # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" @@ -3875,8 +4489,8 @@ spec: preStop: exec: command: - - /bin/calico-node - - -shutdown + - /bin/calico-node + - -shutdown livenessProbe: exec: command: @@ -3916,11 +4530,8 @@ spec: mountPath: /var/run/nodeagent # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the # parent directory. 
- - name: sysfs - mountPath: /sys/fs/ - # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host. - # If the host is known to mount that filesystem already then Bidirectional can be omitted. - mountPropagation: Bidirectional + - name: bpffs + mountPath: /sys/fs/bpf - name: cni-log-dir mountPath: /var/log/calico/cni readOnly: true @@ -3939,10 +4550,18 @@ spec: hostPath: path: /run/xtables.lock type: FileOrCreate - - name: sysfs + - name: sys-fs hostPath: path: /sys/fs/ type: DirectoryOrCreate + - name: bpffs + hostPath: + path: /sys/fs/bpf + type: Directory + # mount /proc at /nodeproc to be used by mount-bpffs initContainer to mount root cgroup2 fs. + - name: nodeproc + hostPath: + path: /proc # Used to install CNI. - name: cni-bin-dir hostPath: @@ -3965,17 +4584,14 @@ spec: hostPath: type: DirectoryOrCreate path: /var/run/nodeagent - # Used to install Flex Volume Driver - - name: flexvol-driver-host - hostPath: - type: DirectoryOrCreate - path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- + apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system + --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers @@ -4009,13 +4625,15 @@ spec: operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule +{# BEGIN Customized by Epiphany #} - key: node-role.kubernetes.io/control-plane effect: NoSchedule +{# END #} serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical containers: - name: calico-kube-controllers - image: {{ image_registry_address }}/calico/kube-controllers:v3.20.3 + image: {{ image_registry_address }}/calico/kube-controllers:v3.23.3 env: # Choose which controllers to run. - name: ENABLED_CONTROLLERS @@ -4037,15 +4655,20 @@ spec: - /usr/bin/check-status - -r periodSeconds: 10 + --- + apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system + --- + # This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict -apiVersion: policy/v1beta1 + +apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: calico-kube-controllers @@ -4057,6 +4680,7 @@ spec: selector: matchLabels: k8s-app: calico-kube-controllers + --- # Source: calico/templates/calico-etcd-secrets.yaml diff --git a/ansible/playbooks/roles/kubernetes_master/templates/canal.yml.j2 b/ansible/playbooks/roles/kubernetes_master/templates/canal.yml.j2 index 4b122337d2..41dbfaa8d3 100644 --- a/ansible/playbooks/roles/kubernetes_master/templates/canal.yml.j2 +++ b/ansible/playbooks/roles/kubernetes_master/templates/canal.yml.j2 @@ -11,17 +11,21 @@ metadata: data: # Typha is disabled. typha_service_name: "none" + # The interface used by canal for host <-> host communication. # If left blank, then the interface is chosen using the node's # default route. canal_iface: "" + # Whether or not to masquerade traffic to destinations not within # the pod network. masquerade: "true" + # Configure the MTU to use for workload interfaces and tunnels. # By default, MTU is auto-detected, and explicitly setting this field should not be required. # You can override auto-detection by providing a non-zero value. veth_mtu: "0" + # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- @@ -58,6 +62,7 @@ data: } ] } + # Flannel network configuration. 
Mounted into the flannel container. net-conf.json: | { @@ -66,8 +71,10 @@ data: "Type": "vxlan" } } + --- # Source: calico/templates/kdd-crds.yaml + apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: @@ -106,6 +113,12 @@ spec: 64512]' format: int32 type: integer + bindMode: + description: BindMode indicates whether to listen for BGP connections + on all addresses (None) or only on the node's canonical IP address + Node.Spec.BGP.IPvXAddress (NodeIP). Default behaviour is to listen + for BGP connections on all addresses. + type: string communities: description: Communities is a list of BGP community values and their arbitrary names for tagging routes. @@ -136,6 +149,37 @@ spec: description: 'LogSeverityScreen is the log severity above which logs are sent to the stdout. [Default: INFO]' type: string + nodeMeshMaxRestartTime: + description: Time to allow for software restart for node-to-mesh peerings. When + specified, this is configured as the graceful restart timeout. When + not specified, the BIRD default of 120s is used. This field can + only be set on the default BGPConfiguration instance and requires + that NodeMesh is enabled + type: string + nodeMeshPassword: + description: Optional BGP password for full node-to-mesh peerings. + This field can only be set on the default BGPConfiguration instance + and requires that NodeMesh is enabled + properties: + secretKeyRef: + description: Selects a key of a secret in the node pod's namespace. + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + type: object nodeToNodeMeshEnabled: description: 'NodeToNodeMeshEnabled sets whether full node to node BGP mesh is enabled. [Default: true]' @@ -209,6 +253,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -253,8 +298,8 @@ spec: in the specific branch of the Node on "bird.cfg". type: boolean maxRestartTime: - description: Time to allow for software restart. When specified, this - is configured as the graceful restart timeout. When not specified, + description: Time to allow for software restart. When specified, + this is configured as the graceful restart timeout. When not specified, the BIRD default of 120s is used. type: string node: @@ -266,6 +311,12 @@ spec: description: Selector for the nodes that should have this peering. When this is set, the Node field must be empty. type: string + numAllowedLocalASNumbers: + description: Maximum number of local AS numbers that are allowed in + the AS path for received routes. This removes BGP loop prevention + and should only be used if absolutely necesssary. + format: int32 + type: integer password: description: Optional BGP password for the peerings generated by this BGPPeer resource. 
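
For reference, the BGPPeer fields documented above can be combined in a single peer definition. A minimal sketch, illustrative only: the peer address, AS number and Secret name are hypothetical, and the password is assumed to take the same secretKeyRef shape shown for nodeMeshPassword earlier in this CRD set.

apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  peerIP: 192.168.1.1            # hypothetical ToR switch address
  asNumber: 64512
  maxRestartTime: 120s           # graceful-restart timeout; BIRD default when unset
  numAllowedLocalASNumbers: 1    # new field added in this version
  password:
    secretKeyRef:
      name: bgp-secrets          # hypothetical Secret in the node pods' namespace
      key: rack1-password
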
@@ -321,6 +372,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -381,6 +433,270 @@ status: plural: "" conditions: [] storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: caliconodestatuses.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: CalicoNodeStatus + listKind: CalicoNodeStatusList + plural: caliconodestatuses + singular: caliconodestatus + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: CalicoNodeStatusSpec contains the specification for a CalicoNodeStatus + resource. + properties: + classes: + description: Classes declares the types of information to monitor + for this calico/node, and allows for selective status reporting + about certain subsets of information. + items: + type: string + type: array + node: + description: The node name identifies the Calico node instance for + node status. + type: string + updatePeriodSeconds: + description: UpdatePeriodSeconds is the period at which CalicoNodeStatus + should be updated. Set to 0 to disable CalicoNodeStatus refresh. + Maximum update period is one day. + format: int32 + type: integer + type: object + status: + description: CalicoNodeStatusStatus defines the observed state of CalicoNodeStatus. + No validation needed for status since it is updated by Calico. + properties: + agent: + description: Agent holds agent status on the node. + properties: + birdV4: + description: BIRDV4 represents the latest observed status of bird4. + properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + birdV6: + description: BIRDV6 represents the latest observed status of bird6. + properties: + lastBootTime: + description: LastBootTime holds the value of lastBootTime + from bird.ctl output. + type: string + lastReconfigurationTime: + description: LastReconfigurationTime holds the value of lastReconfigTime + from bird.ctl output. + type: string + routerID: + description: Router ID used by bird. + type: string + state: + description: The state of the BGP Daemon. + type: string + version: + description: Version of the BGP daemon + type: string + type: object + type: object + bgp: + description: BGP holds node BGP status. 
+ properties: + numberEstablishedV4: + description: The total number of IPv4 established bgp sessions. + type: integer + numberEstablishedV6: + description: The total number of IPv6 established bgp sessions. + type: integer + numberNotEstablishedV4: + description: The total number of IPv4 non-established bgp sessions. + type: integer + numberNotEstablishedV6: + description: The total number of IPv6 non-established bgp sessions. + type: integer + peersV4: + description: PeersV4 represents IPv4 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + peersV6: + description: PeersV6 represents IPv6 BGP peers status on the node. + items: + description: CalicoNodePeer contains the status of BGP peers + on the node. + properties: + peerIP: + description: IP address of the peer whose condition we are + reporting. + type: string + since: + description: Since the state or reason last changed. + type: string + state: + description: State is the BGP session state. + type: string + type: + description: Type indicates whether this peer is configured + via the node-to-node mesh, or via en explicit global or + per-node BGPPeer object. + type: string + type: object + type: array + required: + - numberEstablishedV4 + - numberEstablishedV6 + - numberNotEstablishedV4 + - numberNotEstablishedV6 + type: object + lastUpdated: + description: LastUpdated is a timestamp representing the server time + when CalicoNodeStatus object last updated. It is represented in + RFC3339 form and is in UTC. + format: date-time + nullable: true + type: string + routes: + description: Routes reports routes known to the Calico BGP daemon + on the node. + properties: + routesV4: + description: RoutesV4 represents IPv4 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. + properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. + type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + routesV6: + description: RoutesV6 represents IPv6 routes on the node. + items: + description: CalicoNodeRoute contains the status of BGP routes + on the node. + properties: + destination: + description: Destination of the route. + type: string + gateway: + description: Gateway for the destination. + type: string + interface: + description: Interface for the destination + type: string + learnedFrom: + description: LearnedFrom contains information regarding + where this route originated. 
+ properties: + peerIP: + description: If sourceType is NodeMesh or BGPPeer, IP + address of the router that sent us this route. + type: string + sourceType: + description: Type of the source where a route is learned + from. + type: string + type: object + type: + description: Type indicates if the route is being used for + forwarding or not. + type: string + type: object + type: array + type: object + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -444,6 +760,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -490,7 +807,7 @@ spec: type: boolean awsSrcDstCheck: description: 'Set source-destination-check on AWS EC2 instances. Accepted - value must be one of "DoNothing", "Enabled" or "Disabled". [Default: + value must be one of "DoNothing", "Enable" or "Disable". [Default: DoNothing]' enum: - DoNothing @@ -524,6 +841,18 @@ spec: description: 'BPFEnabled, if enabled Felix will use the BPF dataplane. [Default: false]' type: boolean + bpfEnforceRPF: + description: 'BPFEnforceRPF enforce strict RPF on all interfaces with + BPF programs regardless of what is the per-interfaces or global + setting. Possible values are Disabled or Strict. [Default: Strict]' + type: string + bpfExtToServiceConnmark: + description: 'BPFExtToServiceConnmark in BPF mode, control a 32bit + mark that is set on connections from an external client to a local + service. This mark allows us to control how packets of that connection + are routed within the host and how is routing intepreted by RPF + check. [Default: 0]' + type: integer bpfExternalServiceMode: description: 'BPFExternalServiceMode in BPF mode, controls how connections from outside the cluster to services (node ports and cluster IPs) @@ -534,14 +863,6 @@ spec: node appears to use the IP of the ingress node; this requires a permissive L2 network. [Default: Tunnel]' type: string - bpfExtToServiceConnmark: - description: 'BPFExtToServiceConnmark in BPF mode, controls a - 32bit mark that is set on connections from an external client to - a local service. This mark allows us to control how packets of - that connection are routed within the host and how is routing - intepreted by RPF check. [Default: 0]' - type: integer - bpfKubeProxyEndpointSlicesEnabled: description: BPFKubeProxyEndpointSlicesEnabled in BPF mode, controls whether Felix's embedded kube-proxy accepts EndpointSlices or not. @@ -564,6 +885,51 @@ spec: logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`. [Default: Off].' type: string + bpfMapSizeConntrack: + description: 'BPFMapSizeConntrack sets the size for the conntrack + map. This map must be large enough to hold an entry for each active + connection. Warning: changing the size of the conntrack map can + cause disruption.' + type: integer + bpfMapSizeIPSets: + description: BPFMapSizeIPSets sets the size for ipsets map. The IP + sets map must be large enough to hold an entry for each endpoint + matched by every selector in the source/destination matches in network + policy. Selectors such as "all()" can result in large numbers of + entries (one entry per endpoint in that case). + type: integer + bpfMapSizeNATAffinity: + type: integer + bpfMapSizeNATBackend: + description: BPFMapSizeNATBackend sets the size for nat back end map. 
+ This is the total number of endpoints. This is mostly more than + the size of the number of services. + type: integer + bpfMapSizeNATFrontend: + description: BPFMapSizeNATFrontend sets the size for nat front end + map. FrontendMap should be large enough to hold an entry for each + nodeport, external IP and each port in each service. + type: integer + bpfMapSizeRoute: + description: BPFMapSizeRoute sets the size for the routes map. The + routes map should be large enough to hold one entry per workload + and a handful of entries per host (enough to cover its own IPs and + tunnel IPs). + type: integer + bpfPSNATPorts: + anyOf: + - type: integer + - type: string + description: 'BPFPSNATPorts sets the range from which we randomly + pick a port if there is a source port collision. This should be + within the ephemeral range as defined by RFC 6056 (1024–65535) and + preferably outside the ephemeral ranges used by common operating + systems. Linux uses 32768–60999, while others mostly use the IANA + defined range 49152–65535. It is not necessarily a problem if this + range overlaps with the operating systems. Both ends of the range + are inclusive. [Default: 20000:29999]' + pattern: ^.* + x-kubernetes-int-or-string: true chainInsertMode: description: 'ChainInsertMode controls whether Felix hooks the kernel''s top-level iptables chains by inserting a rule at the top of the @@ -574,6 +940,15 @@ spec: Calico policy will be bypassed. [Default: insert]' type: string dataplaneDriver: + description: DataplaneDriver filename of the external dataplane driver + to use. Only used if UseInternalDataplaneDriver is set to false. + type: string + dataplaneWatchdogTimeout: + description: 'DataplaneWatchdogTimeout is the readiness/liveness timeout + used for Felix''s (internal) dataplane driver. Increase this value + if you experience spurious non-ready or non-live events when Felix + is under heavy load. Decrease the value to get felix to report non-live + or non-ready more quickly. [Default: 90s]' type: string debugDisableLogDropping: type: boolean @@ -602,9 +977,14 @@ spec: routes, by default this will be RTPROT_BOOT when left blank. type: integer deviceRouteSourceAddress: - description: This is the source address to use on programmed device - routes. By default the source address is left blank, leaving the - kernel to choose the source address used. + description: This is the IPv4 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. + type: string + deviceRouteSourceAddressIPv6: + description: This is the IPv6 source address to use on programmed + device routes. By default the source address is left blank, leaving + the kernel to choose the source address used. type: string disableConntrackInvalidCheck: type: boolean @@ -678,6 +1058,14 @@ spec: "true" or "false" will force the feature, empty or omitted values are auto-detected. type: string + floatingIPs: + default: Disabled + description: FloatingIPs configures whether or not Felix will program + floating IP addresses. + enum: + - Enabled + - Disabled + type: string genericXDPEnabled: description: 'GenericXDPEnabled enables Generic XDP so network cards that don''t support XDP offload or driver modes can use XDP. This @@ -715,6 +1103,9 @@ spec: disabled by setting the interval to 0. type: string ipipEnabled: + description: 'IPIPEnabled overrides whether Felix should configure + an IPIP interface on the host. 
Optional as Felix determines this + based on the existing IP pools. [Default: nil (unset)]' type: boolean ipipMTU: description: 'IPIPMTU is the MTU to set on the tunnel device. See @@ -781,6 +1172,8 @@ spec: usage. [Default: 10s]' type: string ipv6Support: + description: IPv6Support controls whether Felix enables support for + IPv6 (if supported by the in-use dataplane). type: boolean kubeNodePortRanges: description: 'KubeNodePortRanges holds list of port ranges used for @@ -794,6 +1187,12 @@ spec: pattern: ^.* x-kubernetes-int-or-string: true type: array + logDebugFilenameRegex: + description: LogDebugFilenameRegex controls which source code files + have their Debug log output included in the logs. Only logs from + files with names that match the given regular expression are included. The + filter only applies to Debug level logs. + type: string logFilePath: description: 'LogFilePath is the full path to the Felix log. Set to none to disable file logging. [Default: /var/log/calico/felix.log]' @@ -890,6 +1289,12 @@ spec: to false. This reduces the number of metrics reported, reducing Prometheus load. [Default: true]' type: boolean + prometheusWireGuardMetricsEnabled: + description: 'PrometheusWireGuardMetricsEnabled disables wireguard + metrics collection, which the Prometheus client does by default, + when set to false. This reduces the number of metrics reported, + reducing Prometheus load. [Default: true]' + type: boolean removeExternalRoutes: description: Whether or not to remove device routes that have not been programmed by Felix. Disabling this will allow external applications @@ -917,9 +1322,9 @@ spec: routes. - CalicoIPAM: the default - use IPAM data to construct routes.' type: string routeTableRange: - description: Calico programs additional Linux route tables for various - purposes. RouteTableRange specifies the indices of the route tables - that Calico should use. + description: Deprecated in favor of RouteTableRanges. Calico programs + additional Linux route tables for various purposes. RouteTableRange + specifies the indices of the route tables that Calico should use. properties: max: type: integer @@ -929,6 +1334,21 @@ spec: - max - min type: object + routeTableRanges: + description: Calico programs additional Linux route tables for various + purposes. RouteTableRanges specifies a set of table index ranges + that Calico should use. Deprecates`RouteTableRange`, overrides `RouteTableRange`. + items: + properties: + max: + type: integer + min: + type: integer + required: + - max + - min + type: object + type: array serviceLoopPrevention: description: 'When service IP advertisement is enabled, prevent routing loops to service IPs that are not in use, by dropping or rejecting @@ -956,12 +1376,22 @@ spec: Felix makes reports. [Default: 86400s]' type: string useInternalDataplaneDriver: + description: UseInternalDataplaneDriver, if true, Felix will use its + internal dataplane programming logic. If false, it will launch + an external dataplane driver and communicate with it over protobuf. type: boolean vxlanEnabled: + description: 'VXLANEnabled overrides whether Felix should create the + VXLAN tunnel device for VXLAN networking. Optional as Felix determines + this based on the existing IP pools. [Default: nil (unset)]' type: boolean vxlanMTU: - description: 'VXLANMTU is the MTU to set on the tunnel device. See - Configuring MTU [Default: 1440]' + description: 'VXLANMTU is the MTU to set on the IPv4 VXLAN tunnel + device. 
See Configuring MTU [Default: 1410]' + type: integer + vxlanMTUV6: + description: 'VXLANMTUV6 is the MTU to set on the IPv6 VXLAN tunnel + device. See Configuring MTU [Default: 1390]' type: integer vxlanPort: type: integer @@ -971,10 +1401,18 @@ spec: description: 'WireguardEnabled controls whether Wireguard is enabled. [Default: false]' type: boolean + wireguardHostEncryptionEnabled: + description: 'WireguardHostEncryptionEnabled controls whether Wireguard + host-to-host encryption is enabled. [Default: false]' + type: boolean wireguardInterfaceName: description: 'WireguardInterfaceName specifies the name to use for the Wireguard interface. [Default: wg.calico]' type: string + wireguardKeepAlive: + description: 'WireguardKeepAlive controls Wireguard PersistentKeepalive + option. Set 0 to disable. [Default: 0]' + type: string wireguardListeningPort: description: 'WireguardListeningPort controls the listening port used by Wireguard. [Default: 51820]' @@ -987,6 +1425,12 @@ spec: description: 'WireguardRoutingRulePriority controls the priority value to use for the Wireguard routing rule. [Default: 99]' type: integer + workloadSourceSpoofing: + description: WorkloadSourceSpoofing controls whether pods can use + the allowedSourcePrefixes annotation to send traffic with a source + IP address that is not theirs. This is disabled by default. When + set to "Any", pods can request any prefix. + type: string xdpEnabled: description: 'XDPEnabled enables XDP acceleration for suitable untracked incoming deny rules. [Default: true]' @@ -1007,6 +1451,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -1179,8 +1624,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1405,8 +1850,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1552,8 +1997,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -1778,8 +2223,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." 
properties: name: description: Name specifies the name of a Kubernetes @@ -1861,6 +2306,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -1913,6 +2359,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2020,6 +2467,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2055,8 +2503,16 @@ spec: resource. properties: affinity: + description: Affinity of the block, if this block has one. If set, + it will be of the form "host:". If not set, this block + is not affine to a host. type: string allocations: + description: Array of allocations in-use within this block. nil entries + mean the allocation is free. For non-nil entries at index i, the + index is the ordinal of the allocation within this block and the + value is the index of the associated attributes in the Attributes + array. items: type: integer # TODO: This nullable is manually added in. We should update controller-gen @@ -2064,6 +2520,10 @@ spec: nullable: true type: array attributes: + description: Attributes is an array of arbitrary metadata associated + with allocations in the block. To find attributes for a given allocation, + use the value of the allocation's entry in the Allocations array + as the index of the element in this array. items: properties: handle_id: @@ -2075,12 +2535,38 @@ spec: type: object type: array cidr: + description: The block's CIDR. type: string deleted: + description: Deleted is an internal boolean used to workaround a limitation + in the Kubernetes API whereby deletion will not return a conflict + error if the block has been updated. It should not be set manually. type: boolean + sequenceNumber: + default: 0 + description: We store a sequence number that is updated each time + the block is written. Each allocation will also store the sequence + number of the block at the time of its creation. When releasing + an IP, passing the sequence number associated with the allocation + allows us to protect against a race condition and ensure the IP + hasn't been released and re-allocated since the release request. + format: int64 + type: integer + sequenceNumberForAllocation: + additionalProperties: + format: int64 + type: integer + description: Map of allocated ordinal within the block to sequence + number of the block at the time of allocation. Kubernetes does not + allow numerical keys for maps, so the key is cast to a string. + type: object strictAffinity: + description: StrictAffinity on the IPAMBlock is deprecated and no + longer used by the code. Use IPAMConfig StrictAffinity instead. type: boolean unallocated: + description: Unallocated is an ordered list of allocations which are + free in the block. items: type: integer type: array @@ -2100,6 +2586,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2155,6 +2642,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2210,6 +2698,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2243,13 +2732,23 @@ spec: spec: description: IPPoolSpec contains the specification for an IPPool resource. 
properties: + allowedUses: + description: AllowedUse controls what the IP pool will be used for. If + not specified or empty, defaults to ["Tunnel", "Workload"] for back-compatibility + items: + type: string + type: array blockSize: description: The block size to use for IP address assignments from - this pool. Defaults to 26 for IPv4 and 112 for IPv6. + this pool. Defaults to 26 for IPv4 and 122 for IPv6. type: integer cidr: description: The pool CIDR. type: string + disableBGPExport: + description: 'Disable exporting routes from this IP Pool''s CIDR over + BGP. [Default: false]' + type: boolean disabled: description: When disabled is true, Calico IPAM will not assign addresses from this pool. @@ -2308,6 +2807,61 @@ status: plural: "" conditions: [] storedVersions: [] + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: (devel) + creationTimestamp: null + name: ipreservations.crd.projectcalico.org +spec: + group: crd.projectcalico.org + names: + kind: IPReservation + listKind: IPReservationList + plural: ipreservations + singular: ipreservation + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: IPReservationSpec contains the specification for an IPReservation + resource. + properties: + reservedCIDRs: + description: ReservedCIDRs is a list of CIDRs and/or IP addresses + that Calico IPAM will exclude from new allocations. + items: + type: string + type: array + type: object + type: object + served: true + storage: true +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2410,6 +2964,11 @@ spec: type: string type: object type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer etcdV3CompactionPeriod: description: 'EtcdV3CompactionPeriod is the period between etcdv3 compaction requests. Set to 0 to disable. [Default: 10m]' @@ -2520,6 +3079,11 @@ spec: type: string type: object type: object + debugProfilePort: + description: DebugProfilePort configures the port to serve memory + and cpu profiles on. If not specified, profiling is disabled. + format: int32 + type: integer etcdV3CompactionPeriod: description: 'EtcdV3CompactionPeriod is the period between etcdv3 compaction requests. Set to 0 to disable. [Default: 10m]' @@ -2550,6 +3114,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -2711,8 +3276,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. 
\n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -2937,8 +3502,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3084,8 +3649,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3310,8 +3875,8 @@ spec: within the selected service(s) will be matched, and only to/from each endpoint's port. \n Services cannot be specified on the same rule as Selector, NotSelector, NamespaceSelector, - Ports, NotPorts, Nets, NotNets or ServiceAccounts. \n - Only valid on egress rules." + Nets, NotNets or ServiceAccounts. \n Ports and NotPorts + can only be specified with Services on ingress rules." properties: name: description: Name specifies the name of a Kubernetes @@ -3385,6 +3950,7 @@ status: plural: "" conditions: [] storedVersions: [] + --- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition @@ -3435,6 +4001,8 @@ status: plural: "" conditions: [] storedVersions: [] + +--- --- # Source: calico/templates/calico-kube-controllers-rbac.yaml @@ -3461,10 +4029,10 @@ rules: - get - list - watch - # IPAM resources are manipulated when nodes are deleted. + # IPAM resources are manipulated in response to node and block updates, as well as periodic triggers. - apiGroups: ["crd.projectcalico.org"] resources: - - ippools + - ipreservations verbs: - list - apiGroups: ["crd.projectcalico.org"] @@ -3479,6 +4047,13 @@ rules: - update - delete - watch + # Pools are watched to maintain a mapping of blocks to IP pools. + - apiGroups: ["crd.projectcalico.org"] + resources: + - ippools + verbs: + - list + - watch # kube-controllers manages hostendpoints. - apiGroups: ["crd.projectcalico.org"] resources: @@ -3495,8 +4070,10 @@ rules: - clusterinformations verbs: - get + - list - create - update + - watch # KubeControllersConfiguration is where it gets its config - apiGroups: ["crd.projectcalico.org"] resources: @@ -3524,6 +4101,8 @@ subjects: name: calico-kube-controllers namespace: kube-system --- + +--- # Source: calico/templates/calico-node-rbac.yaml # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. 
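
The ipreservations RBAC rules added in these hunks back the new IPReservation CRD defined earlier in this template. A minimal sketch of such a resource, with hypothetical CIDRs, for illustration:

apiVersion: crd.projectcalico.org/v1
kind: IPReservation
metadata:
  name: reserved-infra-ips
spec:
  reservedCIDRs:
    - 10.244.5.0/28    # hypothetical block excluded from new allocations
    - 10.244.9.77/32   # a single reserved address
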
@@ -3532,6 +4111,14 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: + # Used for creating service account tokens to be used by the CNI plugin + - apiGroups: [""] + resources: + - serviceaccounts/token + resourceNames: + - canal + verbs: + - create # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: @@ -3546,7 +4133,7 @@ rules: resources: - endpointslices verbs: - - watch + - watch - list - apiGroups: [""] resources: @@ -3603,6 +4190,7 @@ rules: - globalbgpconfigs - bgpconfigurations - ippools + - ipreservations - ipamblocks - globalnetworkpolicies - globalnetworksets @@ -3611,6 +4199,7 @@ rules: - clusterinformations - hostendpoints - blockaffinities + - caliconodestatuses verbs: - get - list @@ -3624,6 +4213,12 @@ rules: verbs: - create - update + # Calico must update some CRDs. + - apiGroups: [ "crd.projectcalico.org" ] + resources: + - caliconodestatuses + verbs: + - update # Calico stores some configuration information on the node. - apiGroups: [""] resources: @@ -3641,6 +4236,7 @@ rules: verbs: - create - update + --- # Flannel ClusterRole # Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml @@ -3692,6 +4288,7 @@ subjects: - kind: ServiceAccount name: canal namespace: kube-system + --- # Source: calico/templates/calico-node.yaml # This manifest installs the canal container, as well @@ -3717,19 +4314,8 @@ spec: labels: k8s-app: canal spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - {{ canal_arch }} + nodeSelector: + kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure canal gets scheduled on all nodes. @@ -3749,7 +4335,7 @@ spec: # This container installs the CNI binaries # and CNI network config file on each node. - name: install-cni - image: {{ image_registry_address }}/calico/cni:v3.20.3 + image: {{ image_registry_address }}/calico/cni:v3.23.3 command: ["/opt/cni/bin/install"] envFrom: - configMapRef: @@ -3757,6 +4343,12 @@ spec: name: kubernetes-services-endpoint optional: true env: + # Set the serviceaccount name to use for the Calico CNI plugin. + # We use canal-node instead of calico-node when using flannel networking. + - name: CALICO_CNI_SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-canal.conflist" @@ -3787,13 +4379,28 @@ spec: name: cni-net-dir securityContext: privileged: true - # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes - # to communicate with Felix over the Policy Sync API. - - name: flexvol-driver - image: {{ image_registry_address }}/calico/pod2daemon-flexvol:v3.20.3 + # This init container mounts the necessary filesystems needed by the BPF data plane + # i.e. bpf at /sys/fs/bpf and cgroup2 at /run/calico/cgroup. Calico-node initialisation is executed + # in best effort fashion, i.e. no failure for errors, to not disrupt pod creation in iptable mode. 
+ - name: "mount-bpffs" + image: {{ image_registry_address }}/calico/node:v3.23.3 + command: ["calico-node", "-init", "-best-effort"] volumeMounts: - - name: flexvol-driver-host - mountPath: /host/driver + - mountPath: /sys/fs + name: sys-fs + # Bidirectional is required to ensure that the new mount we make at /sys/fs/bpf propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + - mountPath: /var/run/calico + name: var-run-calico + # Bidirectional is required to ensure that the new mount we make at /run/calico/cgroup propagates to the host + # so that it outlives the init container. + mountPropagation: Bidirectional + # Mount /proc/ from host which usually is an init program at /nodeproc. It's needed by mountns binary, + # executed by calico-node, to mount root cgroup2 fs at /run/calico/cgroup to attach CTLB programs correctly. + - mountPath: /nodeproc + name: nodeproc + readOnly: true securityContext: privileged: true containers: @@ -3801,7 +4408,7 @@ spec: # container programs network policy and routes on each # host. - name: calico-node - image: {{ image_registry_address }}/calico/node:v3.20.3 + image: {{ image_registry_address }}/calico/node:v3.23.3 envFrom: - configMapRef: # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode. @@ -3822,6 +4429,12 @@ spec: valueFrom: fieldRef: fieldPath: spec.nodeName + # Set the serviceaccount name to use for the Calico CNI plugin. + # We use canal-node instead of calico-node when using flannel networking. + - name: CALICO_CNI_SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName # Don't enable BGP. - name: CALICO_NETWORKING_BACKEND value: "none" @@ -3837,8 +4450,10 @@ spec: # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. +{# BEGIN Customized by Epiphany #} - name: CALICO_IPV4POOL_CIDR value: "10.244.0.0/16" +{# END #} # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" @@ -3859,8 +4474,8 @@ spec: preStop: exec: command: - - /bin/calico-node - - -shutdown + - /bin/calico-node + - -shutdown livenessProbe: exec: command: @@ -3898,18 +4513,15 @@ spec: mountPath: /var/run/nodeagent # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the # parent directory. - - name: sysfs - mountPath: /sys/fs/ - # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host. - # If the host is known to mount that filesystem already then Bidirectional can be omitted. - mountPropagation: Bidirectional + - name: bpffs + mountPath: /sys/fs/bpf - name: cni-log-dir mountPath: /var/log/calico/cni readOnly: true # This container runs flannel using the kube-subnet-mgr backend # for allocating subnets. - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-{{ canal_arch }} + image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.15.1 command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ] securityContext: privileged: true @@ -3953,10 +4565,18 @@ spec: hostPath: path: /run/xtables.lock type: FileOrCreate - - name: sysfs + - name: sys-fs hostPath: path: /sys/fs/ type: DirectoryOrCreate + - name: bpffs + hostPath: + path: /sys/fs/bpf + type: Directory + # mount /proc at /nodeproc to be used by mount-bpffs initContainer to mount root cgroup2 fs. 
+ - name: nodeproc + hostPath: + path: /proc # Used by flannel. - name: flannel-cfg configMap: @@ -3977,17 +4597,14 @@ spec: hostPath: type: DirectoryOrCreate path: /var/run/nodeagent - # Used to install Flex Volume Driver - - name: flexvol-driver-host - hostPath: - type: DirectoryOrCreate - path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- + apiVersion: v1 kind: ServiceAccount metadata: name: canal namespace: kube-system + --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers @@ -4021,13 +4638,15 @@ spec: operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule +{# BEGIN Customized by Epiphany #} - key: node-role.kubernetes.io/control-plane effect: NoSchedule +{# END #} serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical containers: - name: calico-kube-controllers - image: {{ image_registry_address }}/calico/kube-controllers:v3.20.3 + image: {{ image_registry_address }}/calico/kube-controllers:v3.23.3 env: # Choose which controllers to run. - name: ENABLED_CONTROLLERS @@ -4049,15 +4668,20 @@ spec: - /usr/bin/check-status - -r periodSeconds: 10 + --- + apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system + --- + # This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict -apiVersion: policy/v1beta1 + +apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: calico-kube-controllers @@ -4069,6 +4693,7 @@ spec: selector: matchLabels: k8s-app: calico-kube-controllers + --- # Source: calico/templates/calico-etcd-secrets.yaml diff --git a/ansible/playbooks/roles/kubernetes_master/templates/kube-flannel.yml.j2 b/ansible/playbooks/roles/kubernetes_master/templates/kube-flannel.yml.j2 index 9da3743efb..53c99a6a59 100644 --- a/ansible/playbooks/roles/kubernetes_master/templates/kube-flannel.yml.j2 +++ b/ansible/playbooks/roles/kubernetes_master/templates/kube-flannel.yml.j2 @@ -1,5 +1,4 @@ -# Modified according to: -# * https://raw.githubusercontent.com/flannel-io/flannel/v0.14.0/Documentation/kube-flannel.yml +# Based on: https://raw.githubusercontent.com/flannel-io/flannel/v0.14.0/Documentation/kube-flannel.yml --- apiVersion: policy/v1beta1 kind: PodSecurityPolicy @@ -13,14 +12,14 @@ metadata: spec: privileged: false volumes: - - configMap - - secret - - emptyDir - - hostPath + - configMap + - secret + - emptyDir + - hostPath allowedHostPaths: - - pathPrefix: "/etc/cni/net.d" - - pathPrefix: "/etc/kube-flannel" - - pathPrefix: "/run/flannel" + - pathPrefix: "/etc/cni/net.d" + - pathPrefix: "/etc/kube-flannel" + - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: @@ -53,29 +52,29 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: flannel rules: - - apiGroups: ['extensions'] - resources: ['podsecuritypolicies'] - verbs: ['use'] - resourceNames: ['psp.flannel.unprivileged'] - - apiGroups: - - "" - resources: - - pods - verbs: - - get - - apiGroups: - - "" - resources: - - nodes - verbs: - - list - - watch - - apiGroups: - - "" - resources: - - nodes/status - verbs: - - patch +- apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: ['psp.flannel.unprivileged'] +- apiGroups: + - "" + resources: + - pods + verbs: + - get +- apiGroups: + - "" + resources: + - nodes + verbs: + - list + - watch +- apiGroups: + - "" + resources: + - nodes/status + verbs: + - patch --- kind: 
ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 @@ -136,7 +135,7 @@ data: apiVersion: apps/v1 kind: DaemonSet metadata: - name: kube-flannel-ds-amd64 + name: kube-flannel-ds namespace: kube-system labels: tier: node @@ -145,8 +144,10 @@ spec: selector: matchLabels: app: flannel +{# BEGIN Customized by Epiphany #} updateStrategy: type: OnDelete +{# END #} template: metadata: labels: @@ -157,15 +158,11 @@ spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - amd64 + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux hostNetwork: true priorityClassName: system-node-critical tolerations: @@ -174,7 +171,7 @@ spec: serviceAccountName: flannel initContainers: - name: install-cni - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-amd64 + image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0 command: - cp args: @@ -188,7 +185,7 @@ spec: mountPath: /etc/kube-flannel/ containers: - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-amd64 + image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0 command: - /opt/bin/flanneld args: @@ -220,392 +217,12 @@ spec: - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - - name: run - hostPath: - path: /run/flannel - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: kube-flannel-ds-arm64 - namespace: kube-system - labels: - tier: node - app: flannel -spec: - selector: - matchLabels: - app: flannel - template: - metadata: - labels: - tier: node - app: flannel - spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - arm64 - hostNetwork: true - priorityClassName: system-node-critical - tolerations: - - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - initContainers: - - name: install-cni - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-arm64 - command: - - cp - args: - - -f - - /etc/kube-flannel/cni-conf.json - - /etc/cni/net.d/10-flannel.conflist - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - containers: - - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-arm64 - command: - - /opt/bin/flanneld - args: - - --ip-masq - - --kube-subnet-mgr - resources: - requests: - cpu: "100m" - memory: "50Mi" - limits: - cpu: "100m" - memory: "50Mi" - securityContext: - privileged: false - capabilities: - add: ["NET_ADMIN", "NET_RAW"] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run/flannel - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run/flannel - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: kube-flannel-ds-arm - namespace: kube-system - labels: - tier: node - app: flannel -spec: - selector: - matchLabels: 
- app: flannel - template: - metadata: - labels: - tier: node - app: flannel - spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - arm - hostNetwork: true - priorityClassName: system-node-critical - tolerations: - - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - initContainers: - - name: install-cni - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-arm - command: - - cp - args: - - -f - - /etc/kube-flannel/cni-conf.json - - /etc/cni/net.d/10-flannel.conflist - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - containers: - - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-arm - command: - - /opt/bin/flanneld - args: - - --ip-masq - - --kube-subnet-mgr - resources: - requests: - cpu: "100m" - memory: "50Mi" - limits: - cpu: "100m" - memory: "50Mi" - securityContext: - privileged: false - capabilities: - add: ["NET_ADMIN", "NET_RAW"] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run/flannel - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run/flannel - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: kube-flannel-ds-ppc64le - namespace: kube-system - labels: - tier: node - app: flannel -spec: - selector: - matchLabels: - app: flannel - template: - metadata: - labels: - tier: node - app: flannel - spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - ppc64le - hostNetwork: true - priorityClassName: system-node-critical - tolerations: - - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - initContainers: - - name: install-cni - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-ppc64le - command: - - cp - args: - - -f - - /etc/kube-flannel/cni-conf.json - - /etc/cni/net.d/10-flannel.conflist - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - containers: - - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-ppc64le - command: - - /opt/bin/flanneld - args: - - --ip-masq - - --kube-subnet-mgr - resources: - requests: - cpu: "100m" - memory: "50Mi" - limits: - cpu: "100m" - memory: "50Mi" - securityContext: - privileged: false - capabilities: - add: ["NET_ADMIN", "NET_RAW"] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run/flannel - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run/flannel - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: kube-flannel-ds-s390x - namespace: kube-system - labels: - 
tier: node - app: flannel -spec: - selector: - matchLabels: - app: flannel - template: - metadata: - labels: - tier: node - app: flannel - spec: - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/os - operator: In - values: - - linux - - key: kubernetes.io/arch - operator: In - values: - - s390x - hostNetwork: true - priorityClassName: system-node-critical - tolerations: - - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - initContainers: - - name: install-cni - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-s390x - command: - - cp - args: - - -f - - /etc/kube-flannel/cni-conf.json - - /etc/cni/net.d/10-flannel.conflist - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - containers: - - name: kube-flannel - image: {{ image_registry_address }}/quay.io/coreos/flannel:v0.14.0-s390x - command: - - /opt/bin/flanneld - args: - - --ip-masq - - --kube-subnet-mgr - resources: - requests: - cpu: "100m" - memory: "50Mi" - limits: - cpu: "100m" - memory: "50Mi" - securityContext: - privileged: false - capabilities: - add: ["NET_ADMIN", "NET_RAW"] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run/flannel - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run/flannel - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg + - name: run + hostPath: + path: /run/flannel + - name: cni + hostPath: + path: /etc/cni/net.d + - name: flannel-cfg + configMap: + name: kube-flannel-cfg diff --git a/ansible/playbooks/roles/kubernetes_master/templates/kubeadm-config.yml.j2 b/ansible/playbooks/roles/kubernetes_master/templates/kubeadm-config.yml.j2 index a1f4bd1c1a..373dd55863 100644 --- a/ansible/playbooks/roles/kubernetes_master/templates/kubeadm-config.yml.j2 +++ b/ansible/playbooks/roles/kubernetes_master/templates/kubeadm-config.yml.j2 @@ -61,3 +61,4 @@ kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 cgroupDriver: systemd rotateCertificates: true +enableControllerAttachDetach: {{ specification.advanced.enable_controller_attach_detach }} diff --git a/ansible/playbooks/roles/kubernetes_node/templates/kubeadm-join-node.yml.j2 b/ansible/playbooks/roles/kubernetes_node/templates/kubeadm-join-node.yml.j2 index 9da0febf17..d5027362d1 100644 --- a/ansible/playbooks/roles/kubernetes_node/templates/kubeadm-join-node.yml.j2 +++ b/ansible/playbooks/roles/kubernetes_node/templates/kubeadm-join-node.yml.j2 @@ -8,5 +8,4 @@ discovery: - sha256:{{ kubernetes_common.kubeadm_cert_hash }} nodeRegistration: kubeletExtraArgs: - enable-controller-attach-detach: "false" node-labels: {{ specification.node_labels }} diff --git a/ansible/playbooks/roles/logging/tasks/main.yml b/ansible/playbooks/roles/logging/tasks/main.yml index 5671e42791..4c615900a2 100644 --- a/ansible/playbooks/roles/logging/tasks/main.yml +++ b/ansible/playbooks/roles/logging/tasks/main.yml @@ -10,8 +10,8 @@ run_once: true no_log: true # contains sensitive data -- name: Install and configure OpenDistro for Elasticsearch +- name: Install and configure OpenSearch import_role: - name: opendistro_for_elasticsearch + name: opensearch vars: - specification: "{{ logging_vars.specification }}" # to override 
opendistro_for_elasticsearch specification + specification: "{{ logging_vars.specification }}" # to override OpenSearch specification diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/configure-es.yml b/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/configure-es.yml deleted file mode 100644 index f60cf05e27..0000000000 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/configure-es.yml +++ /dev/null @@ -1,263 +0,0 @@ ---- -# This file is meant to be also used by upgrade role - -- name: Ensure snapshot folder exists - file: - path: "{{ specification.paths.repo }}/" - state: directory - owner: elasticsearch - group: elasticsearch - mode: u=rwx,go= - -- name: Provide JVM configuration file - template: - backup: yes - src: jvm.options.j2 - dest: /etc/elasticsearch/jvm.options - owner: root - group: elasticsearch - mode: ug=rw,o= - register: change_jvm_config - vars: - xmx: "{{ specification.jvm_options.Xmx }}" - -- name: Generate certificates - when: not is_upgrade_run # in upgrade mode certs are required at early stage and should be already generated - block: - # Install requirements for Ansible certificate modules - - include_role: - name: certificate - tasks_from: install-packages.yml - - - include_tasks: generate-certs.yml - -- name: Provide Elasticsearch configuration file - template: - backup: yes - src: elasticsearch.yml.j2 - dest: /etc/elasticsearch/elasticsearch.yml - owner: root - group: elasticsearch - mode: ug=rw,o= - register: change_config - vars: - node_cert_filename: - http: >- - {{ existing_es_config['opendistro_security.ssl.http.pemcert_filepath'] if (is_upgrade_run) else - certificates.files.node.cert.filename }} - transport: >- - {{ existing_es_config['opendistro_security.ssl.transport.pemcert_filepath'] if (is_upgrade_run) else - certificates.files.node.cert.filename }} - node_key_filename: - http: >- - {{ existing_es_config['opendistro_security.ssl.http.pemkey_filepath'] if (is_upgrade_run) else - certificates.files.node.key.filename }} - transport: >- - {{ existing_es_config['opendistro_security.ssl.transport.pemkey_filepath'] if (is_upgrade_run) else - certificates.files.node.key.filename }} - root_ca_cert_filename: - http: >- - {{ existing_es_config['opendistro_security.ssl.http.pemtrustedcas_filepath'] if (is_upgrade_run) else - certificates.files.root_ca.cert.filename }} - transport: >- - {{ existing_es_config['opendistro_security.ssl.transport.pemtrustedcas_filepath'] if (is_upgrade_run) else - certificates.files.root_ca.cert.filename }} - _epiphany_subjects: - admin: "{{ certificates.files.admin.cert.subject }}" - node: "{{ certificates.files.node.cert.subject }}" - _epiphany_dn_attributes: - admin: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.admin.keys()) }}" - node: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.node.keys()) }}" - _epiphany_DNs: - admin: >- - {{ _epiphany_dn_attributes.admin | zip(_epiphany_dn_attributes.admin | map('extract', _epiphany_subjects.admin)) - | map('join','=') | join(',') }} - node: >- - {{ _epiphany_dn_attributes.node | zip(_epiphany_dn_attributes.node | map('extract', _epiphany_subjects.node)) - | map('join','=') | join(',') }} - admin_dn: >- - {{ existing_es_config['opendistro_security.authcz.admin_dn'] if (is_upgrade_run) else - [ _epiphany_DNs.admin ] }} - _epiphany_nodes_dn: >- - {%- if groups[current_group_name] | length > 1 -%} - {%- set nodes_to_iterate = ansible_play_hosts_all -%} - {%- else -%} - {%- set nodes_to_iterate = [ 
inventory_hostname ] -%} - {%- endif -%} - {%- for node in nodes_to_iterate -%} - {%- if loop.first -%}[{%- endif -%} - '{{ _epiphany_DNs.node.split(',') | map('regex_replace', '^CN=.+$', 'CN=' + hostvars[node].ansible_nodename) | join(',') }}' - {%- if not loop.last -%},{%- else -%}]{%- endif -%} - {%- endfor -%} - nodes_dn: >- - {{ existing_es_config['opendistro_security.nodes_dn'] if (is_upgrade_run) else - _epiphany_nodes_dn }} - opendistro_security_allow_unsafe_democertificates: "{{ certificates.files.demo.opendistro_security.allow_unsafe_democertificates }}" - - http_port: "{{ is_upgrade_run | ternary(existing_es_config['http.port'], ports.http) }}" - transport_port: "{{ is_upgrade_run | ternary(existing_es_config['transport.port'], ports.transport) }}" - -# When 'opendistro_security.allow_unsafe_democertificates' is set to 'false' all demo certificate files must be removed, -# otherwise elasticsearch service doesn't start. -# For apply mode, demo certificate files are removed based only on their names. For upgrade mode, -# public key fingerprints are checked to protect against unintentional deletion (what takes additional time). - -- name: Remove demo certificate files - include_tasks: - file: "{{ is_upgrade_run | ternary('remove-known-demo-certs.yml', 'remove-demo-certs.yml') }}" - when: not certificates.files.demo.opendistro_security.allow_unsafe_democertificates - -- name: Include log4j patch - include_tasks: patch-log4j.yml - -- name: Restart elasticsearch service - systemd: - name: elasticsearch - state: restarted - register: restart_elasticsearch - when: change_config.changed - or log4j_patch.changed - or change_jvm_config.changed - or install_elasticsearch_package.changed - or (install_opendistro_packages is defined and install_opendistro_packages.changed) - -- name: Enable and start elasticsearch service - systemd: - name: elasticsearch - state: started - enabled: yes - -- name: Change default users - when: not is_upgrade_run - block: - - name: Wait for elasticsearch service to start up - when: restart_elasticsearch.changed - wait_for: - port: 9200 - host: "{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}" - - - name: Set helper facts - set_fact: - elasticsearch_endpoint: https://{{ ansible_default_ipv4.address }}:9200 - vars: - uri_template: &uri - client_cert: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.cert.filename }}" - client_key: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.key.filename }}" - validate_certs: false - body_format: json - - - name: Check if default admin user exists - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/admin" - method: GET - # 404 code is used there as someone can remove admin user on its own. 
- status_code: [200, 404] - register: admin_check_response - until: admin_check_response is success - retries: 60 - delay: 1 - run_once: true - - - name: Set OpenDistro admin password - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/" - method: PATCH - status_code: [200] - body: - - op: "replace" - path: "/admin" - value: - password: "{{ specification.admin_password }}" - reserved: "true" - backend_roles: - - "admin" - description: "Admin user" - register: uri_response - until: uri_response is success - retries: 15 - delay: 1 - run_once: true - when: admin_check_response.status == 200 - - - name: Check if default kibanaserver user exists - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/kibanaserver" - method: GET - status_code: [200] - register: kibanaserver_check_response - until: kibanaserver_check_response is success - retries: 60 - delay: 1 - run_once: true - when: specification.kibanaserver_user_active - - - name: Set OpenDistro kibanaserver password - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/" - method: PATCH - status_code: [200] - body: - - op: "replace" - path: "/kibanaserver" - value: - password: "{{ specification.kibanaserver_password }}" - reserved: "true" - description: "Kibana server user" - register: uri_response - until: uri_response is success - retries: 15 - delay: 1 - run_once: true - when: specification.kibanaserver_user_active - - - name: Check if default logstash user exists - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/logstash" - method: GET - status_code: [200] - register: logstash_check_response - until: logstash_check_response is success - retries: 60 - delay: 1 - run_once: true - when: specification.logstash_user_active - - - name: Set OpenDistro logstash password - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/" - method: PATCH - status_code: [200] - body: - - op: "replace" - path: "/logstash" - value: - password: "{{ specification.logstash_password }}" - reserved: "true" - backend_roles: - - "logstash" - description: "Logstash user" - register: uri_response - until: uri_response is success - retries: 3 - delay: 5 - run_once: true - when: specification.logstash_user_active - - - name: Remove OpenDistro demo users - uri: - <<: *uri - url: "{{ elasticsearch_endpoint }}/_opendistro/_security/api/internalusers/{{ item }}" - method: DELETE - status_code: [200, 404] - register: uri_response - until: uri_response is success - retries: 15 - delay: 1 - run_once: true - loop: "{{ specification.demo_users_to_remove }}" diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-es.yml b/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-es.yml deleted file mode 100644 index 4bed42d55f..0000000000 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-es.yml +++ /dev/null @@ -1,14 +0,0 @@ ---- -- name: Install elasticsearch-oss packages - package: - name: "{{ _packages[ansible_os_family] }}" - state: present - vars: - _packages: - Debian: - - elasticsearch-oss={{ versions[ansible_os_family].elasticsearch_oss }} - RedHat: - - elasticsearch-oss-{{ versions[ansible_os_family].elasticsearch_oss }} - register: install_elasticsearch_package - module_defaults: - yum: { lock_timeout: "{{ yum_lock_timeout }}" } diff --git 
a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-opendistro.yml b/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-opendistro.yml deleted file mode 100644 index d38b2ebcd3..0000000000 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/install-opendistro.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- -# NOTE: Keep in mind compatibility matrix for Open Distro https://opendistro.github.io/for-elasticsearch-docs/docs/install/plugins/#plugin-compatibility -- name: Install opendistro-* packages - package: - name: "{{ _packages[ansible_os_family] }}" - state: present - vars: - _packages: - Debian: - - opendistro-alerting={{ versions[ansible_os_family].opendistro }} - - opendistro-index-management={{ versions[ansible_os_family].opendistro }} - - opendistro-job-scheduler={{ versions[ansible_os_family].opendistro }} - - opendistro-performance-analyzer={{ versions[ansible_os_family].opendistro }} - - opendistro-security={{ versions[ansible_os_family].opendistro }} - - opendistro-sql={{ versions[ansible_os_family].opendistro }} - RedHat: - - opendistro-alerting-{{ versions[ansible_os_family].opendistro }} - - opendistro-index-management-{{ versions[ansible_os_family].opendistro }} - - opendistro-job-scheduler-{{ versions[ansible_os_family].opendistro }} - - opendistro-performance-analyzer-{{ versions[ansible_os_family].opendistro }} - - opendistro-security-{{ versions[ansible_os_family].opendistro }} - - opendistro-sql-{{ versions[ansible_os_family].opendistro }} - register: install_opendistro_packages - module_defaults: - yum: { lock_timeout: "{{ yum_lock_timeout }}" } diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/main.yml b/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/main.yml deleted file mode 100644 index 6860c69c17..0000000000 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/main.yml +++ /dev/null @@ -1,21 +0,0 @@ ---- -- name: Configure OS limits (open files, processes and locked-in-memory address space) - pam_limits: - domain: elasticsearch - limit_type: "{{ item.limit_type }}" - limit_item: "{{ item.limit_item }}" - value: "{{ item.value }}" - loop: - - { limit_type: 'soft', limit_item: 'nofile', value: 65536 } - - { limit_type: 'hard', limit_item: 'nofile', value: 65536 } - - { limit_type: 'soft', limit_item: 'nproc', value: 65536 } - - { limit_type: 'hard', limit_item: 'nproc', value: 65536 } - - { limit_type: 'soft', limit_item: 'memlock', value: unlimited } - - { limit_type: 'hard', limit_item: 'memlock', value: unlimited } - -- include_tasks: install-es.yml - -- include_tasks: install-opendistro.yml - -- name: Include configuration tasks - include_tasks: configure-es.yml diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/patch-log4j.yml b/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/patch-log4j.yml deleted file mode 100644 index 917c2e52d7..0000000000 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/patch-log4j.yml +++ /dev/null @@ -1,68 +0,0 @@ ---- -- name: Log4j patch - block: - - name: "opendistro_for_elasticsearch : Log4j patch | Get archive" - include_role: - name: download - tasks_from: download_file - vars: - file_name: "{{ log4j_file_name }}" - - - name: Log4j patch | Extract archive - unarchive: - dest: /tmp/ - src: "{{ download_directory }}/{{ log4j_file_name }}" - remote_src: true - list_files: true - register: unarchive_list_files - - - name: Log4j patch | Copy new jars - register: log4j_patch - copy: - src: "{{ item.src }}" - 
dest: "{{ item.dest }}" - owner: elasticsearch - group: root - mode: u=rw,g=r,o= - remote_src: true - loop: - - { src: "{{ download_directory }}/{{ log4j_api }}", dest: /usr/share/elasticsearch/lib/ } - - { src: "{{ download_directory }}/{{ log4j_api }}", dest: /usr/share/elasticsearch/performance-analyzer-rca/lib/ } - - { src: "{{ download_directory }}/{{ log4j_api }}", dest: /usr/share/elasticsearch/plugins/opendistro-performance-analyzer/performance-analyzer-rca/lib/ } - - { src: "{{ download_directory }}/{{ log4j_core }}", dest: /usr/share/elasticsearch/lib/ } - - { src: "{{ download_directory }}/{{ log4j_core }}", dest: /usr/share/elasticsearch/performance-analyzer-rca/lib/ } - - { src: "{{ download_directory }}/{{ log4j_core }}", dest: /usr/share/elasticsearch/plugins/opendistro-performance-analyzer/performance-analyzer-rca/lib/ } - - { src: "{{ download_directory }}/{{ log4j_slfj_impl }}", dest: /usr/share/elasticsearch/plugins/opendistro_security/ } - vars: - log4j_api: "{{ unarchive_list_files.files | select('contains', 'log4j-api-2.17.1.jar') | first }}" - log4j_core: "{{ unarchive_list_files.files | select('contains', 'log4j-core-2.17.1.jar') | first }}" - log4j_slfj_impl: "{{ unarchive_list_files.files | select('contains', 'log4j-slf4j-impl-2.17.1.jar') | first }}" - - - name: Log4j patch - cleanup - block: - - name: Log4j patch | Remove old jars - file: - state: absent - path: "{{ item }}" - loop: - - /usr/share/elasticsearch/plugins/opendistro-performance-analyzer/performance-analyzer-rca/lib/log4j-api-2.13.0.jar - - /usr/share/elasticsearch/plugins/opendistro-performance-analyzer/performance-analyzer-rca/lib/log4j-core-2.13.0.jar - - /usr/share/elasticsearch/performance-analyzer-rca/lib/log4j-api-2.13.0.jar - - /usr/share/elasticsearch/performance-analyzer-rca/lib/log4j-core-2.13.0.jar - - /usr/share/elasticsearch/lib/log4j-api-2.11.1.jar - - /usr/share/elasticsearch/lib/log4j-core-2.11.1.jar - - /usr/share/elasticsearch/plugins/opendistro_security/log4j-slf4j-impl-2.11.1.jar - - - name: Log4j patch | Delete temporary dir - file: - dest: "{{ download_directory }}/{{ _archive_root_dir }}" - state: absent - vars: - _archive_root_dir: >- - {{ unarchive_list_files.files | first | dirname }} - -- name: Restart opendistro-performance-analyzer service - systemd: - name: opendistro-performance-analyzer - state: restarted - when: log4j_patch.changed diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/defaults/main.yml b/ansible/playbooks/roles/opensearch/defaults/main.yml similarity index 70% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/defaults/main.yml rename to ansible/playbooks/roles/opensearch/defaults/main.yml index cbde5b2a67..7d62757958 100644 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/defaults/main.yml +++ b/ansible/playbooks/roles/opensearch/defaults/main.yml @@ -1,18 +1,17 @@ --- # This file is meant to be also used by upgrade role - -versions: - RedHat: - elasticsearch_oss: "7.10.2" - opendistro: "1.13.*" - Debian: - elasticsearch_oss: "7.10.2" - opendistro: "1.13.*" +file_name_version: + opensearch: + x86_64: opensearch-1.2.4-linux-x64.tar.gz + aarch64: opensearch-1.2.4-linux-arm64.tar.gz + opensearch_perftop: + x86_64: opensearch-perf-top-1.2.0.0-linux-x64.zip + # Perftop is not supported on ARM (https://github.com/opensearch-project/perftop/issues/26) certificates: dirs: - certs: /etc/elasticsearch - ca_key: /etc/elasticsearch/private - csr: /etc/elasticsearch/csr + certs: /usr/share/opensearch/config + ca_key: 
/usr/share/opensearch/config + csr: /usr/share/opensearch/config dn_attributes_order: ['CN', 'OU', 'O', 'L', 'S', 'C', 'DC'] files: demo: @@ -20,12 +19,12 @@ certificates: cert: root-ca.pem admin: cert: kirk.pem - key: kirk-key.pem + key: kirk-key.pem node: cert: esnode.pem - key: esnode-key.pem - opendistro_security: - allow_unsafe_democertificates: false # if 'false' all demo files must be removed to start Elasticsearch + key: esnode-key.pem + opensearch_security: + allow_unsafe_democertificates: false # if 'false' all demo files must be removed to start OpenSearch common: subject: &common-subject O: Epiphany @@ -58,6 +57,5 @@ certificates: key: filename: epiphany-node-{{ ansible_nodename }}-key.pem ports: - http: 9200 # defaults to range but we want static port - transport: 9300 # defaults to range but we want static port -log4j_file_name: apache-log4j-2.17.1-bin.tar.gz + http: 9200 + transport: 9300 diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/meta/main.yml b/ansible/playbooks/roles/opensearch/meta/main.yml similarity index 100% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/meta/main.yml rename to ansible/playbooks/roles/opensearch/meta/main.yml diff --git a/ansible/playbooks/roles/opensearch/tasks/configure-opensearch.yml b/ansible/playbooks/roles/opensearch/tasks/configure-opensearch.yml new file mode 100644 index 0000000000..205275397e --- /dev/null +++ b/ansible/playbooks/roles/opensearch/tasks/configure-opensearch.yml @@ -0,0 +1,304 @@ +--- +# This file is meant to be also used by upgrade role + +- name: Ensure snapshot folder exists + file: + path: "{{ specification.paths.opensearch_snapshots_dir }}/" + state: directory + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: u=rwx,go= + +- name: Provide JVM configuration file + template: + backup: true + src: jvm.options.j2 + dest: "{{ specification.paths.opensearch_conf_dir }}/jvm.options" + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: ug=rw,o= + register: change_jvm_config + vars: + xmx: "{{ specification.jvm_options.Xmx }}" + +- name: Generate certificates + when: not is_upgrade_run # in upgrade mode certs are required at early stage and should be already generated + block: + - name: Install requirements for Ansible certificate modules + include_role: + name: certificate + tasks_from: install-packages.yml + + - include_tasks: generate-certs.yml + +- name: Provide OpenSearch configuration file + template: + backup: true + src: opensearch.yml.j2 + dest: "{{ specification.paths.opensearch_conf_dir }}/opensearch.yml" + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: ug=rw,o= + register: change_config + vars: + node_cert_filename: + http: >- + {{ existing_es_config['opensearch_security.ssl.http.pemcert_filepath'] if (is_upgrade_run) else + certificates.files.node.cert.filename }} + transport: >- + {{ existing_es_config['opensearch_security.ssl.transport.pemcert_filepath'] if (is_upgrade_run) else + certificates.files.node.cert.filename }} + node_key_filename: + http: >- + {{ existing_es_config['opensearch_security.ssl.http.pemkey_filepath'] if (is_upgrade_run) else + certificates.files.node.key.filename }} + transport: >- + {{ existing_es_config['opensearch_security.ssl.transport.pemkey_filepath'] if (is_upgrade_run) else + certificates.files.node.key.filename }} + root_ca_cert_filename: + http: >- + {{ 
existing_es_config['opensearch_security.ssl.http.pemtrustedcas_filepath'] if (is_upgrade_run) else
+        certificates.files.root_ca.cert.filename }}
+      transport: >-
+        {{ existing_es_config['opensearch_security.ssl.transport.pemtrustedcas_filepath'] if (is_upgrade_run) else
+        certificates.files.root_ca.cert.filename }}
+    _epiphany_subjects:
+      admin: "{{ certificates.files.admin.cert.subject }}"
+      node: "{{ certificates.files.node.cert.subject }}"
+    _epiphany_dn_attributes:
+      admin: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.admin.keys()) }}"
+      node: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.node.keys()) }}"
+    _epiphany_dns:
+      admin: >-
+        {{ _epiphany_dn_attributes.admin | zip(_epiphany_dn_attributes.admin | map('extract', _epiphany_subjects.admin))
+        | map('join','=') | join(',') }}
+      node: >-
+        {{ _epiphany_dn_attributes.node | zip(_epiphany_dn_attributes.node | map('extract', _epiphany_subjects.node))
+        | map('join','=') | join(',') }}
+    admin_dn: >-
+      {{ existing_es_config['opensearch_security.authcz.admin_dn'] if (is_upgrade_run) else
+      [ _epiphany_dns.admin ] }}
+    _epiphany_nodes_dn: >-
+      {%- if groups[current_group_name] | length > 1 -%}
+      {%- set nodes_to_iterate = ansible_play_hosts_all -%}
+      {%- else -%}
+      {%- set nodes_to_iterate = [ inventory_hostname ] -%}
+      {%- endif -%}
+      {%- for node in nodes_to_iterate -%}
+      {%- if loop.first -%}[{%- endif -%}
+      '{{ _epiphany_dns.node.split(',') | map('regex_replace', '^CN=.+$', 'CN=' + hostvars[node].ansible_nodename) | join(',') }}'
+      {%- if not loop.last -%},{%- else -%}]{%- endif -%}
+      {%- endfor -%}
+    nodes_dn: >-
+      {{ existing_es_config['opensearch_security.nodes_dn'] if (is_upgrade_run) else
+      _epiphany_nodes_dn }}
+    opensearch_security_allow_unsafe_democertificates: "{{ certificates.files.demo.opensearch_security.allow_unsafe_democertificates }}"
+    http_port: "{{ is_upgrade_run | ternary(existing_es_config['http.port'], ports.http) }}"
+    transport_port: "{{ is_upgrade_run | ternary(existing_es_config['transport.port'], ports.transport) }}"
+
+# When 'opensearch_security.allow_unsafe_democertificates' is set to 'false', all demo certificate files must be removed,
+# otherwise the opensearch service doesn't start.
+# For apply mode, demo certificate files are removed based only on their names. For upgrade mode,
+# public key fingerprints are checked to protect against unintentional deletion (which takes additional time).
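# A minimal sketch of the fingerprint-guarded removal described above, for
# illustration only (assumptions: the community.crypto collection is available
# and the file exists; the path and SHA-1 value are the demo node certificate
# entries used in remove-known-demo-certs.yml and this role's defaults).
# The complete, stat-guarded implementation lives in remove-known-demo-certs.yml.
- name: Read public key fingerprint of a candidate demo certificate
  community.crypto.x509_certificate_info:
    path: /usr/share/opensearch/config/esnode.pem
  register: _demo_cert_info

- name: Remove the file only when it matches the known demo node certificate
  file:
    path: /usr/share/opensearch/config/esnode.pem
    state: absent
  when: _demo_cert_info.public_key_fingerprints.sha1 == '6e:d8:94:2c:4a:a1:d2:b4:d4:5e:65:0f:66:d6:a9:35:23:a2:77:52'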
+ +- name: Remove demo certificate files + include_tasks: + file: "{{ is_upgrade_run | ternary('remove-known-demo-certs.yml', 'remove-demo-certs.yml') }}" + when: not certificates.files.demo.opensearch_security.allow_unsafe_democertificates + +- name: Restart OpenSearch service + systemd: + name: opensearch + state: restarted + enabled: true + register: restart_opensearch + when: change_config.changed + or change_jvm_config.changed + +- name: Change default users + when: not is_upgrade_run + block: + - name: Wait for opensearch service to start up + when: restart_opensearch.changed + wait_for: + port: "{{ ports.http }}" + host: "{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}" + + - name: Set helper facts + set_fact: + opensearch_endpoint: https://{{ ansible_default_ipv4.address }}:{{ ports.http }} + vars: + uri_template: &uri + client_cert: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.cert.filename }}" + client_key: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.key.filename }}" + validate_certs: false + body_format: json + + - name: Check if default admin user exists + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/admin" + method: GET + # 404 code is used there as someone can remove admin user on its own. + status_code: [200, 404] + register: admin_check_response + until: admin_check_response is success + retries: 60 + delay: 1 + run_once: true + + - name: Create OpenSearch admin user + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/admin" + method: PUT + status_code: [200] + body: + password: "{{ specification.admin_password }}" + reserved: "true" + backend_roles: + - "admin" + description: "Admin user" + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: admin_check_response.status == 404 + + - name: Set OpenSearch admin password + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/admin" + value: + password: "{{ specification.admin_password }}" + reserved: "true" + backend_roles: + - "admin" + description: "Admin user" + register: uri_response + until: uri_response is success + retries: 15 + delay: 1 + run_once: true + when: admin_check_response.status == 200 + + - name: Check if default kibanaserver user exists + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/kibanaserver" + method: GET + status_code: [200, 404] + register: kibanaserver_check_response + when: + - groups.opensearch_dashboards[0] is defined + - inventory_hostname in groups.opensearch_dashboards + + - name: Create default kibanaserver user + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/kibanaserver" + method: PUT + status_code: [200] + body: &kibanaserver_data + password: "{{ specification.kibanaserver_password }}" + reserved: "true" + description: "Demo OpenSearch Dashboards user" + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: + - kibanaserver_check_response.status is defined + - kibanaserver_check_response.status == 404 + + - name: Set kibanaserver user password + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/kibanaserver" + value: + <<: 
*kibanaserver_data + register: uri_response + until: uri_response is success + retries: 15 + delay: 1 + run_once: true + when: + - kibanaserver_check_response.status is defined + - kibanaserver_check_response.status == 200 + + - name: Check if filebeatservice user exists # for re-apply scenario + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/filebeatservice" + method: GET + status_code: [200, 404] + register: filebeatservice_check_response + when: + - groups.logging[0] is defined + - inventory_hostname in groups.logging + + - name: Create dedicated filebeatservice user + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/filebeatservice" + method: PUT + status_code: [200] + body: &filebeatservice_data + password: "{{ specification.filebeatservice_password }}" + reserved: "true" + backend_roles: + - "logstash" + description: "Epiphany user for Filebeat service" + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: + - filebeatservice_check_response.status is defined + - filebeatservice_check_response.status == 404 + + - name: Set filebeatservice user password + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/filebeatservice" + value: + <<: *filebeatservice_data + register: uri_response + until: uri_response is success + retries: 15 + delay: 1 + run_once: true + when: + - filebeatservice_check_response.status is defined + - filebeatservice_check_response.status == 200 + + - name: Remove OpenSearch demo users + uri: + <<: *uri + url: "{{ opensearch_endpoint }}/_opendistro/_security/api/internalusers/{{ item }}" + method: DELETE + status_code: [200, 404] + register: uri_response + until: uri_response is success + retries: 15 + delay: 1 + run_once: true + loop: "{{ specification.demo_users_to_remove }}" diff --git a/ansible/playbooks/roles/opensearch/tasks/configure-sysctl.yml b/ansible/playbooks/roles/opensearch/tasks/configure-sysctl.yml new file mode 100644 index 0000000000..113fdd1797 --- /dev/null +++ b/ansible/playbooks/roles/opensearch/tasks/configure-sysctl.yml @@ -0,0 +1,12 @@ +--- +- name: Set open files limit in sysctl.conf + sysctl: + name: fs.file-max + value: "65536" + state: present + +- name: Set maximum number of memory map areas limit in sysctl.conf + sysctl: + name: vm.max_map_count + value: "262144" + state: present diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/generate-certs.yml b/ansible/playbooks/roles/opensearch/tasks/generate-certs.yml similarity index 77% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/generate-certs.yml rename to ansible/playbooks/roles/opensearch/tasks/generate-certs.yml index 898d6cbe35..e32e40794f 100644 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/generate-certs.yml +++ b/ansible/playbooks/roles/opensearch/tasks/generate-certs.yml @@ -5,40 +5,37 @@ file: state: directory path: "{{ certificates.dirs.ca_key }}" - owner: root - group: elasticsearch - mode: u=rwx,g=rx,o= # elasticsearch.service requires 'rx' for group + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: u=rwx,g=rwx,o= # csr files are kept only for idempotency - name: Create directory for CSR files file: state: directory path: "{{ certificates.dirs.csr }}" - owner: root - group: elasticsearch - mode: u=rwx,g=rx,o= 
# CSR file doesn't contain private key + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: u=rwx,g=rwx,o= # CSR file doesn't contain private key - name: Generate keys and certificates on first node when: inventory_hostname == ansible_play_hosts_all[0] module_defaults: community.crypto.openssl_privatekey: - size: 2048 # based on ODFE docs + size: 2048 # based on ODFE docs type: RSA mode: u=rw,go= - owner: root - group: elasticsearch + owner: "{{ specification.opensearch_os_user }}" format: pkcs8 community.crypto.openssl_csr: mode: u=rw,g=r,o= - owner: root - group: elasticsearch + owner: "{{ specification.opensearch_os_user }}" use_common_name_for_san: false community.crypto.x509_certificate: selfsigned_digest: sha256 ownca_digest: sha256 mode: u=rw,g=r,o= - owner: root - group: elasticsearch + owner: "{{ specification.opensearch_os_user }}" block: # --- Generate CA root certificate --- @@ -48,10 +45,10 @@ return_content: false register: ca_key - - name: Generate CSR for root CA # based on ODFE demo cert (root-ca.pem) + - name: Generate CSR for root CA # based on ODFE demo cert (root-ca.pem) community.crypto.openssl_csr: path: "{{ certificates.dirs.csr }}/{{ certificates.files.root_ca.cert.filename | regex_replace('\\..+$', '.csr') }}" - privatekey_path: "{{ ca_key.filename }}" # 'filename' contains full path + privatekey_path: "{{ ca_key.filename }}" # 'filename' contains full path CN: "{{ certificates.files.root_ca.cert.subject.CN }}" OU: "{{ certificates.files.root_ca.cert.subject.OU }}" O: "{{ certificates.files.root_ca.cert.subject.O }}" @@ -80,14 +77,14 @@ - name: Generate private key for admin certificate community.crypto.openssl_privatekey: path: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.key.filename }}" - format: pkcs8 # specified explicitly since this format is required + format: pkcs8 # specified explicitly since this format is required return_content: false register: admin_key - - name: Generate CSR for admin certificate # based on ODFE demo cert (kirk.pem) + - name: Generate CSR for admin certificate # based on ODFE demo cert (kirk.pem) community.crypto.openssl_csr: path: "{{ certificates.dirs.csr }}/{{ certificates.files.admin.cert.filename | regex_replace('\\..+$', '.csr') }}" - privatekey_path: "{{ admin_key.filename }}" # 'filename' contains full path + privatekey_path: "{{ admin_key.filename }}" # 'filename' contains full path CN: "{{ certificates.files.admin.cert.subject.CN }}" OU: "{{ certificates.files.admin.cert.subject.OU }}" O: "{{ certificates.files.admin.cert.subject.O }}" @@ -122,14 +119,14 @@ module_defaults: copy: owner: root - group: elasticsearch + group: "{{ specification.opensearch_os_group }}" block: - name: Get certificate files from the first host slurp: src: "{{ item }}" delegate_to: "{{ ansible_play_hosts_all[0] }}" register: slurp_certs - no_log: true # sensitive data + no_log: true # sensitive data loop: - "{{ certificates.dirs.ca_key }}/{{ certificates.files.root_ca.key.filename }}" - "{{ certificates.dirs.certs }}/{{ certificates.files.root_ca.cert.filename }}" @@ -139,29 +136,29 @@ - name: Copy CA private key to other hosts copy: content: "{{ slurp_certs.results[0].content | b64decode }}" - dest: "{{ certificates.dirs.ca_key }}/{{ certificates.files.root_ca.key.filename }}" + dest: "{{ certificates.dirs.ca_key }}/{{ certificates.files.root_ca.key.filename }}" mode: u=rw,go= - no_log: true # sensitive data + no_log: true # sensitive data - name: Copy root CA to other hosts 
copy: content: "{{ slurp_certs.results[1].content | b64decode }}" - dest: "{{ certificates.dirs.certs }}/{{ certificates.files.root_ca.cert.filename }}" + dest: "{{ certificates.dirs.certs }}/{{ certificates.files.root_ca.cert.filename }}" mode: u=rw,g=r,o= - name: Copy admin private key to other hosts copy: content: "{{ slurp_certs.results[2].content | b64decode }}" - dest: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.key.filename }}" + dest: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.key.filename }}" mode: u=rw,go= - no_log: true # sensitive data + no_log: true # sensitive data - name: Copy admin certificate to other hosts copy: content: "{{ slurp_certs.results[3].content | b64decode }}" - dest: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.cert.filename }}" + dest: "{{ certificates.dirs.certs }}/{{ certificates.files.admin.cert.filename }}" mode: u=rw,g=r,o= - no_log: true # sensitive data + no_log: true # sensitive data # --- Generate node certificate (each node has its own) --- @@ -171,16 +168,16 @@ format: pkcs8 size: 2048 type: RSA - mode: u=rw,g=r,o= # elasticsearch.service requires 'r' for group - owner: root - group: elasticsearch + mode: u=rw,g=r,o= + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" return_content: false register: node_key -- name: Generate CSR for node certificate # based on ODFE demo cert (esnode.pem) +- name: Generate CSR for node certificate # based on ODFE demo cert (esnode.pem) community.crypto.openssl_csr: path: "{{ certificates.dirs.csr }}/{{ certificates.files.node.cert.filename | regex_replace('\\..+$', '.csr') }}" - privatekey_path: "{{ node_key.filename }}" # 'filename' contains full path + privatekey_path: "{{ node_key.filename }}" # 'filename' contains full path CN: "{{ certificates.files.node.cert.subject.CN }}" OU: "{{ certificates.files.node.cert.subject.OU }}" O: "{{ certificates.files.node.cert.subject.O }}" @@ -199,8 +196,8 @@ subjectAltName: "{{ _dns_list + [ 'IP:' + ansible_default_ipv4.address ] }}" use_common_name_for_san: false mode: u=rw,g=r,o= - owner: root - group: elasticsearch + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" register: node_csr vars: _unique_hostnames: "{{ [ansible_hostname, ansible_nodename, ansible_fqdn] | unique }}" @@ -217,5 +214,5 @@ ownca_not_after: "{{ certificates.files.node.cert.ownca_not_after }}" ownca_digest: sha256 mode: u=rw,go=r - owner: root - group: elasticsearch + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" diff --git a/ansible/playbooks/roles/opensearch/tasks/install-opensearch.yml b/ansible/playbooks/roles/opensearch/tasks/install-opensearch.yml new file mode 100644 index 0000000000..6ed87b4157 --- /dev/null +++ b/ansible/playbooks/roles/opensearch/tasks/install-opensearch.yml @@ -0,0 +1,78 @@ +--- +- name: Download OpenSearch + include_role: + name: download + tasks_from: download_file + vars: + file_name: "{{ file_name_version.opensearch[ansible_architecture] }}" + +- name: Download PerfTop + include_role: + name: download + tasks_from: download_file + vars: + file_name: "{{ file_name_version.opensearch_perftop[ansible_architecture] }}" + when: ansible_architecture == "x86_64" # Perftop is not yet supported on ARM (https://github.com/opensearch-project/perftop/issues/26) + +- name: Prepare tasks group + when: not is_upgrade_run + block: + - name: Ensure OpenSearch service OS group exists + 
group: + name: "{{ specification.opensearch_os_group }}" + state: present + + - name: Ensure OpenSearch service OS user exists + user: + name: "{{ specification.opensearch_os_user }}" + state: present + shell: /bin/bash + groups: "{{ specification.opensearch_os_group }}" + home: "{{ specification.paths.opensearch_home }}" + create_home: true + + - name: Ensure directory structure exists + file: + path: "{{ specification.paths.opensearch_perftop_dir }}" + state: directory + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: u=rwx,go=rx + recurse: true + when: ansible_architecture == "x86_64" # Perftop is not yet supported on ARM (https://github.com/opensearch-project/perftop/issues/26) + + - name: Ensure directory structure exists + file: + path: "{{ item }}" + state: directory + owner: "{{ specification.opensearch_os_user }}" + group: "{{ specification.opensearch_os_group }}" + mode: u=rwx,go=rx + loop: + - "{{ specification.paths.opensearch_log_dir }}" + - "{{ specification.paths.opensearch_conf_dir }}" + - "{{ specification.paths.opensearch_data_dir }}" + - "{{ certificates.dirs.certs }}" + +- name: Extract OpenSearch tar file + unarchive: + src: "{{ download_directory }}/{{ file_name_version.opensearch[ansible_architecture] }}" + dest: "{{ specification.paths.opensearch_home }}" + owner: "{{ specification.opensearch_os_user }}" + remote_src: true + extra_opts: + - --strip-components=1 + +- name: Extract OpenSearch PerfTop tar file + unarchive: + src: "{{ download_directory }}/{{ file_name_version.opensearch_perftop[ansible_architecture] }}" + dest: "{{ specification.paths.opensearch_perftop_dir }}" + owner: "{{ specification.opensearch_os_user }}" + remote_src: true + when: ansible_architecture == "x86_64" # Perftop is not yet supported on ARM (https://github.com/opensearch-project/perftop/issues/26) + +- name: Create opensearch.service unit file + template: + src: roles/opensearch/templates/opensearch.service.j2 + dest: "/etc/systemd/system/opensearch.service" + mode: u=rw,go=r diff --git a/ansible/playbooks/roles/opensearch/tasks/main.yml b/ansible/playbooks/roles/opensearch/tasks/main.yml new file mode 100644 index 0000000000..9fdaff2c44 --- /dev/null +++ b/ansible/playbooks/roles/opensearch/tasks/main.yml @@ -0,0 +1,23 @@ +--- +- name: Configure OS limits (open files, processes and locked-in-memory address space) + pam_limits: + domain: opensearch + limit_type: "{{ item.limit_type }}" + limit_item: "{{ item.limit_item }}" + value: "{{ item.value }}" + loop: + - {limit_type: 'soft', limit_item: 'nofile', value: 65536} + - {limit_type: 'hard', limit_item: 'nofile', value: 65536} + - {limit_type: 'soft', limit_item: 'nproc', value: 65536} + - {limit_type: 'hard', limit_item: 'nproc', value: 65536} + - {limit_type: 'soft', limit_item: 'memlock', value: unlimited} + - {limit_type: 'hard', limit_item: 'memlock', value: unlimited} + +- name: Tune the system settings + include_tasks: configure-sysctl.yml + +- name: Include installation tasks + include_tasks: install-opensearch.yml + +- name: Include configuration tasks + include_tasks: configure-opensearch.yml diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/remove-demo-certs.yml b/ansible/playbooks/roles/opensearch/tasks/remove-demo-certs.yml similarity index 100% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/remove-demo-certs.yml rename to ansible/playbooks/roles/opensearch/tasks/remove-demo-certs.yml diff --git 
a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/remove-known-demo-certs.yml b/ansible/playbooks/roles/opensearch/tasks/remove-known-demo-certs.yml similarity index 73% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/remove-known-demo-certs.yml rename to ansible/playbooks/roles/opensearch/tasks/remove-known-demo-certs.yml index 55e0f8d07d..077adc1211 100644 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/tasks/remove-known-demo-certs.yml +++ b/ansible/playbooks/roles/opensearch/tasks/remove-known-demo-certs.yml @@ -6,12 +6,12 @@ vars: demo_files: certs: - - { filename: "{{ certificates.files.demo.admin.cert }}", public_key_sha1_fingerprint: 53:01:c4:6a:c8:9c:dd:ab:1d:2d:d9:9a:a9:c6:01:43:38:66:2c:ee } - - { filename: "{{ certificates.files.demo.node.cert }}", public_key_sha1_fingerprint: 6e:d8:94:2c:4a:a1:d2:b4:d4:5e:65:0f:66:d6:a9:35:23:a2:77:52 } - - { filename: "{{ certificates.files.demo.root_ca.cert }}", public_key_sha1_fingerprint: 4c:8a:cc:d1:9f:a5:23:6f:4a:9d:d3:bb:8f:0d:05:ab:5b:e3:f4:59 } + - {filename: "{{ certificates.files.demo.admin.cert }}", public_key_sha1_fingerprint: 53:01:c4:6a:c8:9c:dd:ab:1d:2d:d9:9a:a9:c6:01:43:38:66:2c:ee} + - {filename: "{{ certificates.files.demo.node.cert }}", public_key_sha1_fingerprint: 6e:d8:94:2c:4a:a1:d2:b4:d4:5e:65:0f:66:d6:a9:35:23:a2:77:52} + - {filename: "{{ certificates.files.demo.root_ca.cert }}", public_key_sha1_fingerprint: 4c:8a:cc:d1:9f:a5:23:6f:4a:9d:d3:bb:8f:0d:05:ab:5b:e3:f4:59} keys: - - { filename: "{{ certificates.files.demo.admin.key }}", public_key_sha1_fingerprint: 53:01:c4:6a:c8:9c:dd:ab:1d:2d:d9:9a:a9:c6:01:43:38:66:2c:ee } - - { filename: "{{ certificates.files.demo.node.key }}", public_key_sha1_fingerprint: 6e:d8:94:2c:4a:a1:d2:b4:d4:5e:65:0f:66:d6:a9:35:23:a2:77:52 } + - {filename: "{{ certificates.files.demo.admin.key }}", public_key_sha1_fingerprint: 53:01:c4:6a:c8:9c:dd:ab:1d:2d:d9:9a:a9:c6:01:43:38:66:2c:ee} + - {filename: "{{ certificates.files.demo.node.key }}", public_key_sha1_fingerprint: 6e:d8:94:2c:4a:a1:d2:b4:d4:5e:65:0f:66:d6:a9:35:23:a2:77:52} block: - name: Check if known demo certificates exist stat: @@ -60,5 +60,5 @@ label: "{{ item.filename }}" vars: _query: "[*].{ filename: item, public_key_sha1_fingerprint: public_key_fingerprints.sha1 }" - _demo_certs: "{{ _demo_certs_info.results | json_query(_query) }}" + _demo_certs: "{{ _demo_certs_info.results | json_query(_query) }}" _demo_cert_keys: "{{ _demo_cert_keys_info.results | json_query(_query) }}" diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/templates/jvm.options.j2 b/ansible/playbooks/roles/opensearch/templates/jvm.options.j2 similarity index 81% rename from ansible/playbooks/roles/opendistro_for_elasticsearch/templates/jvm.options.j2 rename to ansible/playbooks/roles/opensearch/templates/jvm.options.j2 index e91e6b6635..75beba6b52 100644 --- a/ansible/playbooks/roles/opendistro_for_elasticsearch/templates/jvm.options.j2 +++ b/ansible/playbooks/roles/opensearch/templates/jvm.options.j2 @@ -51,7 +51,7 @@ 14-:-XX:InitiatingHeapOccupancyPercent=30 ## JVM temporary directory --Djava.io.tmpdir=${ES_TMPDIR} +-Djava.io.tmpdir=${OPENSEARCH_TMPDIR} ## heap dumps @@ -61,25 +61,20 @@ # specify an alternative path for heap dumps; ensure the directory exists and # has sufficient space --XX:HeapDumpPath=/var/lib/elasticsearch +-XX:HeapDumpPath=/var/lib/opensearch # specify an alternative path for JVM fatal error logs --XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 
+-XX:ErrorFile=/var/log/opensearch/hs_err_pid%p.log
 
 ## JDK 8 GC logging
 
 8:-XX:+PrintGCDetails
 8:-XX:+PrintGCDateStamps
 8:-XX:+PrintTenuringDistribution
 8:-XX:+PrintGCApplicationStoppedTime
-8:-Xloggc:/var/log/elasticsearch/gc.log
+8:-Xloggc:/var/log/opensearch/gc.log
 8:-XX:+UseGCLogFileRotation
 8:-XX:NumberOfGCLogFiles=32
 8:-XX:GCLogFileSize=64m
 
 # JDK 9+ GC logging
-9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
-
-## OpenDistro Performance Analyzer
--Dclk.tck=100
--Djdk.attach.allowAttachSelf=true
--Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy
+9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/opensearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
diff --git a/ansible/playbooks/roles/opensearch/templates/opensearch.service.j2 b/ansible/playbooks/roles/opensearch/templates/opensearch.service.j2
new file mode 100644
index 0000000000..a886e79dd1
--- /dev/null
+++ b/ansible/playbooks/roles/opensearch/templates/opensearch.service.j2
@@ -0,0 +1,51 @@
+[Unit]
+Description=OpenSearch
+Wants=network-online.target
+After=network-online.target
+
+[Service]
+RuntimeDirectory=opensearch
+PrivateTmp=true
+
+WorkingDirectory={{ specification.paths.opensearch_home }}
+
+User={{ specification.opensearch_os_user }}
+Group={{ specification.opensearch_os_group }}
+
+ExecStart={{ specification.paths.opensearch_home }}/bin/opensearch -p {{ specification.paths.opensearch_home }}/opensearch.pid -q
+
+StandardOutput=journal
+StandardError=inherit
+
+# Specifies the maximum file descriptor number that can be opened by this process
+LimitNOFILE=65536
+
+# Specifies the memory lock settings
+LimitMEMLOCK=infinity
+
+# Specifies the maximum number of processes
+LimitNPROC=4096
+
+# Specifies the maximum size of virtual memory
+LimitAS=infinity
+
+# Specifies the maximum file size
+LimitFSIZE=infinity
+
+# Disable timeout logic and wait until process is stopped
+TimeoutStopSec=0
+
+# SIGTERM signal is used to stop the Java process
+KillSignal=SIGTERM
+
+# Send the signal only to the JVM rather than its control group
+KillMode=process
+
+# Java process is never killed
+SendSIGKILL=no
+
+# When a JVM receives a SIGTERM signal it exits with code 143
+SuccessExitStatus=143
+
+[Install]
+WantedBy=multi-user.target
diff --git a/ansible/playbooks/roles/opendistro_for_elasticsearch/templates/elasticsearch.yml.j2 b/ansible/playbooks/roles/opensearch/templates/opensearch.yml.j2
similarity index 54%
rename from ansible/playbooks/roles/opendistro_for_elasticsearch/templates/elasticsearch.yml.j2
rename to ansible/playbooks/roles/opensearch/templates/opensearch.yml.j2
index 0214fcc7d0..7ad196396b 100644
--- a/ansible/playbooks/roles/opendistro_for_elasticsearch/templates/elasticsearch.yml.j2
+++ b/ansible/playbooks/roles/opensearch/templates/opensearch.yml.j2
@@ -1,16 +1,10 @@
 #jinja2: lstrip_blocks: True
 # {{ ansible_managed }}
-# ======================== Elasticsearch Configuration =========================
+# ======================== OpenSearch Configuration =========================
 #
-# NOTE: Elasticsearch comes with reasonable defaults for most settings.
-# Before you set out to tweak and tune the configuration, make sure you
-# understand what are you trying to accomplish and the consequences.
-#
-# The primary way of configuring a node is via this file. This template lists
-# the most important settings you may want to configure for a production cluster.
-#
-# Please consult the documentation for further information on configuration options:
-# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
+# ------------------- Legacy Clients Compatibility Flag -------------------------
+# https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/
+compatibility.override_main_response_version: true
 #
 # ---------------------------------- Cluster -----------------------------------
 #
@@ -32,15 +26,15 @@ node.name: {{ ansible_hostname }}
 #
 # Path to directory where to store the data (separate multiple locations by comma):
 #
-path.data: {{ specification.paths.data }}
+path.data: {{ specification.paths.opensearch_data_dir }}
 #
 # Path to directory where the shared storage should be mounted:
 #
-path.repo: {{ specification.paths.repo }}
+path.repo: {{ specification.paths.opensearch_snapshots_dir }}
 #
 # Path to log files:
 #
-path.logs: {{ specification.paths.logs }}
+path.logs: {{ specification.paths.opensearch_log_dir }}
 #
 # ----------------------------------- Memory -----------------------------------
 #
@@ -52,7 +46,7 @@ path.logs: {{ specification.paths.logs }}
 # on the system and that the owner of the process is allowed to use this
 # limit.
 #
-# Elasticsearch performs poorly when the system is swapping the memory.
+# OpenSearch performs poorly when the system is swapping the memory.
 #
 # ---------------------------------- Network -----------------------------------
 #
@@ -76,9 +70,9 @@ transport.port: {{ transport_port }}
 # The default list of hosts is ["127.0.0.1", "[::1]"]
 #
 {% if groups[current_group_name] | length > 1 -%}
-discovery.seed_hosts: [{% for host in groups[current_group_name] %}"{{hostvars[host]['ansible_default_ipv4']['address']}}"{%- if not loop.last -%},{% endif %}{% endfor %}]
+discovery.seed_hosts: [{% for host in groups[current_group_name] %}"{{ hostvars[host]['ansible_hostname'] }}"{%- if not loop.last -%},{% endif %}{% endfor %}]
 {% else %}
-discovery.seed_hosts: ["{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}"]
+discovery.seed_hosts: ["{{ ansible_hostname }}"]
 {% endif %}
 #
 # Bootstrap the cluster using an initial set of master-eligible nodes:
@@ -87,7 +81,7 @@ discovery.seed_hosts: ["{{ ansible_default_ipv4.address | default(ansible_all_ip
 cluster.initial_master_nodes: []
 {% else %}
 {% if groups[current_group_name] | length > 1 %}
-cluster.initial_master_nodes: [{% for host in groups[current_group_name] %}"{{hostvars[host]['ansible_hostname']}}"{%- if not loop.last -%},{% endif %}{% endfor %}]
+cluster.initial_master_nodes: [{% for host in groups[current_group_name] %}"{{ hostvars[host]['ansible_hostname'] }}"{%- if not loop.last -%},{% endif %}{% endfor %}]
 {% else %}
 cluster.initial_master_nodes: ["{{ ansible_hostname }}"]
 {% endif %}
@@ -109,33 +103,35 @@ cluster.initial_master_nodes: ["{{ ansible_hostname }}"]
 #
 #action.destructive_requires_name: true
 
-######## Start OpenDistro for Elasticsearch Security Configuration ########
+######## OpenSearch Security Configuration ########
 # WARNING: revise all the lines below before you go into production
-opendistro_security.ssl.transport.pemcert_filepath: {{ node_cert_filename.transport }}
-opendistro_security.ssl.transport.pemkey_filepath: {{ node_key_filename.transport }}
-opendistro_security.ssl.transport.pemtrustedcas_filepath: {{ root_ca_cert_filename.transport }}
-opendistro_security.ssl.transport.enforce_hostname_verification: {{ specification.opendistro_security.ssl.transport.enforce_hostname_verification | lower }}
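# Illustrative note, assuming the defaults earlier in this patch: the removed
# opendistro_security.* keys in this hunk carry bare file names, which the
# security plugin resolved relative to the Elasticsearch config directory,
# while the plugins.security.* replacements later in this hunk render absolute
# paths by prefixing certificates.dirs.certs / certificates.dirs.ca_key (both
# /usr/share/opensearch/config here). With the node certificate file name from
# this role's defaults, a rendered line would look roughly like:
#
#   plugins.security.ssl.transport.pemcert_filepath: "/usr/share/opensearch/config/epiphany-node-<nodename>.pem"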
-opendistro_security.ssl.http.enabled: true -opendistro_security.ssl.http.pemcert_filepath: {{ node_cert_filename.http }} -opendistro_security.ssl.http.pemkey_filepath: {{ node_key_filename.http }} -opendistro_security.ssl.http.pemtrustedcas_filepath: {{ root_ca_cert_filename.http }} -opendistro_security.allow_unsafe_democertificates: {{ opendistro_security_allow_unsafe_democertificates | lower }} -opendistro_security.allow_default_init_securityindex: true -opendistro_security.authcz.admin_dn: +plugins.security.ssl.transport.pemcert_filepath: "{{ certificates.dirs.certs }}/{{ node_cert_filename.transport }}" +plugins.security.ssl.transport.pemkey_filepath: "{{ certificates.dirs.ca_key }}/{{ node_key_filename.transport }}" +plugins.security.ssl.transport.pemtrustedcas_filepath: "{{ certificates.dirs.certs }}/{{ root_ca_cert_filename.transport }}" +plugins.security.ssl.transport.enforce_hostname_verification: {{ specification.opensearch_security.ssl.transport.enforce_hostname_verification | lower }} +plugins.security.ssl.http.enabled: true +plugins.security.ssl.http.pemcert_filepath: "{{ certificates.dirs.certs }}/{{ node_cert_filename.http }}" +plugins.security.ssl.http.pemkey_filepath: "{{ certificates.dirs.ca_key }}/{{ node_key_filename.http }}" +plugins.security.ssl.http.pemtrustedcas_filepath: "{{ certificates.dirs.certs }}/{{ root_ca_cert_filename.http }}" +plugins.security.allow_unsafe_democertificates: {{ opensearch_security_allow_unsafe_democertificates | lower }} +plugins.security.allow_default_init_securityindex: true +plugins.security.authcz.admin_dn: {% for dn in admin_dn %} - '{{ dn }}' {% endfor %} {% if nodes_dn | count > 0 %} -opendistro_security.nodes_dn: +plugins.security.nodes_dn: {% for dn in nodes_dn %} - '{{ dn }}' {% endfor %} {% endif %} -opendistro_security.audit.type: internal_elasticsearch -opendistro_security.enable_snapshot_restore_privilege: true -opendistro_security.check_snapshot_restore_write_privileges: true -opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"] +{% if specification.opensearch_security.audit.type is defined and specification.opensearch_security.audit.type|length %} +plugins.security.audit.type: {{ specification.opensearch_security.audit.type }} +{% endif %} +plugins.security.enable_snapshot_restore_privilege: true +plugins.security.check_snapshot_restore_write_privileges: true +plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"] cluster.routing.allocation.disk.threshold_enabled: false node.max_local_storage_nodes: 3 -######## End OpenDistro for Elasticsearch Security Configuration ######## +######## End OpenSearch Security Configuration ######## diff --git a/ansible/playbooks/roles/opensearch_dashboards/defaults/main.yml b/ansible/playbooks/roles/opensearch_dashboards/defaults/main.yml new file mode 100644 index 0000000000..cdda7d4123 --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/defaults/main.yml @@ -0,0 +1,7 @@ +--- +file_name_version: + opensearch_dashboards: + x86_64: opensearch-dashboards-1.2.0-linux-x64.tar.gz + aarch64: opensearch-dashboards-1.2.0-linux-arm64.tar.gz +opensearch_api_port: 9200 +java: "{{ es_java | default('java-1.8.0-openjdk.x86_64') }}" diff --git a/ansible/playbooks/roles/opensearch_dashboards/handlers/main.yml b/ansible/playbooks/roles/opensearch_dashboards/handlers/main.yml new file mode 100644 index 0000000000..ded1b9a7a3 --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/handlers/main.yml @@ -0,0 +1,6 @@ +--- +- 
name: Restart dashboards + systemd: + name: opensearch-dashboards + state: restarted + enabled: true diff --git a/ansible/playbooks/roles/opensearch_dashboards/tasks/dashboards.yml b/ansible/playbooks/roles/opensearch_dashboards/tasks/dashboards.yml new file mode 100644 index 0000000000..a59541c278 --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/tasks/dashboards.yml @@ -0,0 +1,55 @@ +--- +- name: Download OpenSearch dashboards + include_role: + name: download + tasks_from: download_file + vars: + file_name: "{{ file_name_version.opensearch_dashboards[ansible_architecture] }}" + +- name: Create OpenSearch Dashboards OS group + group: + name: "{{ specification.dashboards_os_group }}" + state: present + +- name: Create OpenSearch Dashboards OS user + user: + name: "{{ specification.dashboards_os_user }}" + state: present + shell: /bin/bash + group: "{{ specification.dashboards_os_group }}" + home: "{{ specification.paths.dashboards_home }}" + +- name: Extract OpenSearch Dashboards tar file + unarchive: + src: "{{ download_directory }}/{{ file_name_version.opensearch_dashboards[ansible_architecture] }}" + dest: "{{ specification.paths.dashboards_home }}" + owner: "{{ specification.dashboards_os_user }}" + remote_src: true + extra_opts: + - --strip-components=1 + +# If opensearch-dashboards is enabled for the 'logging' and 'opensearch' groups, form the dashboards cluster # based on which of those groups the host belongs to +- name: Set opensearch dashboards hosts as fact + set_fact: + opensearch_nodes_dashboards: |- + {%- set current_host_group = groups[(group_names | intersect(['logging', 'opensearch'])) | first] -%} + {%- set hosts = groups['opensearch_dashboards'] | intersect(current_host_group) -%} + {%- for item in hosts -%} + https://{{ item }}:{{ opensearch_api_port }}{%- if not loop.last -%}","{%- endif -%} + {%- endfor -%} + +- name: Copy configuration file + template: + src: opensearch_dashboards.yml.j2 + dest: "{{ specification.paths.dashboards_conf_dir }}/opensearch_dashboards.yml" + owner: "{{ specification.dashboards_os_user }}" + group: "{{ specification.dashboards_os_user }}" + mode: u=rw,go=r + backup: true + +- name: Create opensearch-dashboards.service unit file + template: + src: opensearch-dashboards.service.j2 + dest: /etc/systemd/system/opensearch-dashboards.service + mode: u=rw,go=r diff --git a/ansible/playbooks/roles/opensearch_dashboards/tasks/main.yml b/ansible/playbooks/roles/opensearch_dashboards/tasks/main.yml new file mode 100644 index 0000000000..ed9fc2a3cb --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/tasks/main.yml @@ -0,0 +1,19 @@ +--- +- name: Include dashboards installation + include_tasks: dashboards.yml + +- name: Make sure OpenSearch Dashboards is started + service: + name: opensearch-dashboards + state: started + enabled: true + +- name: Get all the installed dashboards plugins + command: "{{ specification.paths.dashboards_plugin_bin_path }} list" + become: true + become_user: "{{ specification.dashboards_os_user }}" + register: list_plugins + +- name: Show all the installed dashboards plugins + debug: + msg: "{{ list_plugins.stdout }}" diff --git a/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch-dashboards.service.j2 b/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch-dashboards.service.j2 new file mode 100644 index 0000000000..ee4ec7dd67 --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch-dashboards.service.j2 @@ -0,0 +1,48 @@ +[Unit] +Description=OpenSearch 
Dashboards +Wants=network-online.target +After=network-online.target + +[Service] +RuntimeDirectory=opensearch-dashboards +PrivateTmp=true + +WorkingDirectory={{ specification.paths.dashboards_home }} + +User={{ specification.dashboards_os_user }} +Group={{ specification.dashboards_os_user }} + +ExecStart={{ specification.paths.dashboards_home }}/bin/opensearch-dashboards -q + +StandardOutput=journal +StandardError=inherit + +# Specifies the maximum file descriptor number that can be opened by this process +LimitNOFILE=65536 + +# Specifies the maximum number of processes +LimitNPROC=4096 + +# Specifies the maximum size of virtual memory +LimitAS=infinity + +# Specifies the maximum file size +LimitFSIZE=infinity + +# Disable timeout logic and wait until process is stopped +TimeoutStopSec=0 + +# SIGTERM signal is used to stop the Java process +KillSignal=SIGTERM + +# Send the signal only to the JVM rather than its control group +KillMode=process + +# Java process is never killed +SendSIGKILL=no + +# When a JVM receives a SIGTERM signal it exits with code 143 +SuccessExitStatus=143 + +[Install] +WantedBy=multi-user.target diff --git a/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch_dashboards.yml.j2 b/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch_dashboards.yml.j2 new file mode 100644 index 0000000000..49d0e5885c --- /dev/null +++ b/ansible/playbooks/roles/opensearch_dashboards/templates/opensearch_dashboards.yml.j2 @@ -0,0 +1,13 @@ +server.port: 5601 +server.host: "{{ inventory_hostname }}" +opensearch.hosts: ["{{ opensearch_nodes_dashboards }}"] +opensearch.ssl.verificationMode: none +opensearch.username: "{{ specification.dashboards_user }}" +opensearch.password: "{{ specification.dashboards_user_password }}" +opensearch.requestHeadersWhitelist: [ authorization,securitytenant ] + +opensearch_security.multitenancy.enabled: true +opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"] +opensearch_security.readonly_mode.roles: ["kibana_read_only"] +# Use this setting if you are running dashboards without https +opensearch_security.cookie.secure: false diff --git a/ansible/playbooks/roles/preflight/defaults/main.yml b/ansible/playbooks/roles/preflight/defaults/main.yml index 32591a7e38..1f1b8035b6 100644 --- a/ansible/playbooks/roles/preflight/defaults/main.yml +++ b/ansible/playbooks/roles/preflight/defaults/main.yml @@ -27,30 +27,31 @@ unsupported_roles: - distro: Ubuntu arch: aarch64 roles: - - repository - - kafka + - applications + - elasticsearch_curator + - filebeat - firewall + - grafana + - haproxy + - helm - image_registry + - jmx_exporter + - kafka + - kafka_exporter - kubernetes_master - kubernetes_node - - helm - - zookeeper - - haproxy - logging - - elasticsearch_curator - - opendistro_for_elasticsearch - - elasticsearch - - kibana - - filebeat - - prometheus - - grafana - node_exporter - - jmx_exporter - - rabbitmq - - kafka_exporter + - opensearch + - opensearch_dashboards - postgresql - postgres_exporter - - applications + - prometheus + - rabbitmq + - repository + - rook + - zookeeper + - distro: AlmaLinux arch: x86_64 roles: [] # all supported @@ -58,36 +59,37 @@ unsupported_roles: arch: aarch64 roles: - elasticsearch_curator + - rook - distro: RedHat arch: x86_64 roles: [] # all supported - distro: RedHat arch: aarch64 roles: - - repository - - kafka + - applications + - elasticsearch_curator + - filebeat - firewall + - grafana + - haproxy + - helm - image_registry + - jmx_exporter + - kafka + - kafka_exporter - 
kubernetes_master - kubernetes_node - - helm - - zookeeper - - haproxy - logging - - elasticsearch_curator - - opendistro_for_elasticsearch - - elasticsearch - - kibana - - filebeat - - prometheus - - grafana - node_exporter - - jmx_exporter - - rabbitmq - - kafka_exporter + - opensearch + - opensearch_dashboards - postgresql - postgres_exporter - - applications + - prometheus + - rabbitmq + - repository + - rook + - zookeeper unsupported_postgres_extensions: x86_64: [] diff --git a/ansible/playbooks/roles/preflight/tasks/main.yml b/ansible/playbooks/roles/preflight/tasks/main.yml index 7aba8eca42..beaaefae13 100644 --- a/ansible/playbooks/roles/preflight/tasks/main.yml +++ b/ansible/playbooks/roles/preflight/tasks/main.yml @@ -1,7 +1,4 @@ --- -- include_tasks: upgrade-pre-common.yml - when: is_upgrade_run - - include_tasks: common/main.yml - include_tasks: apply.yml diff --git a/ansible/playbooks/roles/preflight/tasks/upgrade-pre-common.yml b/ansible/playbooks/roles/preflight/tasks/upgrade-pre-common.yml deleted file mode 100644 index 8c3fbd6520..0000000000 --- a/ansible/playbooks/roles/preflight/tasks/upgrade-pre-common.yml +++ /dev/null @@ -1,7 +0,0 @@ -# In version 2.0.0 we switched from RHEL 7 to 8 but only 'epicli apply' is supported so far. -- name: Check whether OS family is supported - assert: - that: ansible_os_family != 'RedHat' - fail_msg: >- - In this version 'epicli upgrade' is supported only for Ubuntu - success_msg: OS family check passed diff --git a/ansible/playbooks/roles/rabbitmq/tasks/install-packages-redhat.yml b/ansible/playbooks/roles/rabbitmq/tasks/install-packages-redhat.yml index 144428fa29..d3bc280466 100644 --- a/ansible/playbooks/roles/rabbitmq/tasks/install-packages-redhat.yml +++ b/ansible/playbooks/roles/rabbitmq/tasks/install-packages-redhat.yml @@ -1,6 +1,6 @@ --- - name: Install packages - yum: + dnf: name: - logrotate - "{{ versions.redhat.erlang_package[ansible_architecture] }}" diff --git a/ansible/playbooks/roles/recovery/defaults/main.yml b/ansible/playbooks/roles/recovery/defaults/main.yml index 88be45c8a6..e105375aa7 100644 --- a/ansible/playbooks/roles/recovery/defaults/main.yml +++ b/ansible/playbooks/roles/recovery/defaults/main.yml @@ -2,5 +2,5 @@ recovery_dir: /epibackup recovery_source_dir: "{{ recovery_dir }}/mounted" recovery_source_host: "{{ groups.repository[0] if (custom_repository_url | default(false)) else (resolved_repository_hostname | default(groups.repository[0])) }}" -elasticsearch_snapshot_repository_name: epiphany -elasticsearch_snapshot_repository_location: /var/lib/elasticsearch-snapshots +opensearch_snapshot_repository_name: epiphany +opensearch_snapshot_repository_location: /var/lib/opensearch-snapshots diff --git a/ansible/playbooks/roles/recovery/tasks/logging_kibana_etc.yml b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_conf.yml similarity index 62% rename from ansible/playbooks/roles/recovery/tasks/logging_kibana_etc.yml rename to ansible/playbooks/roles/recovery/tasks/logging_opensearch_conf.yml index 3792303795..3b50d75ca1 100644 --- a/ansible/playbooks/roles/recovery/tasks/logging_kibana_etc.yml +++ b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_conf.yml @@ -1,8 +1,13 @@ --- +- name: Include vars from opensearch role + include_vars: + file: roles/opensearch/vars/main.yml + name: opensearch_vars + - name: Find snapshot archive import_tasks: common/find_snapshot_archive.yml vars: - snapshot_prefix: "kibana_etc" + snapshot_prefix: "opensearch_conf" snapshot_name: "{{ 
specification.components.logging.snapshot_name }}" - name: Transfer the archive via rsync @@ -15,24 +20,24 @@ - name: Verify snapshot checksum import_tasks: common/verify_snapshot_checksum.yml -- name: Stop kibana service +- name: Stop OpenSearch service systemd: - name: kibana + name: opensearch state: stopped - name: Clear directories import_tasks: common/clear_directories.yml vars: dirs_to_clear: - - /etc/kibana/ + - "{{ opensearch_vars.specification.paths.opensearch_conf_dir }}" - name: Extract the archive unarchive: - dest: /etc/kibana/ + dest: "{{ opensearch_vars.specification.paths.opensearch_conf_dir }}" src: "{{ recovery_dir }}/{{ snapshot_path | basename }}" remote_src: true -- name: Start kibana service +- name: Start OpenSearch service systemd: - name: kibana + name: opensearch state: started diff --git a/ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_etc.yml b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_dashboards_conf.yml similarity index 59% rename from ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_etc.yml rename to ansible/playbooks/roles/recovery/tasks/logging_opensearch_dashboards_conf.yml index 7c81954bf5..fcbfcd0f2e 100644 --- a/ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_etc.yml +++ b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_dashboards_conf.yml @@ -1,8 +1,13 @@ --- +- name: Include vars from opensearch_dashboards role + include_vars: + file: roles/opensearch_dashboards/vars/main.yml + name: opensearch_dashboards_vars + - name: Find snapshot archive import_tasks: common/find_snapshot_archive.yml vars: - snapshot_prefix: "elasticsearch_etc" + snapshot_prefix: "opsd_conf_dir" snapshot_name: "{{ specification.components.logging.snapshot_name }}" - name: Transfer the archive via rsync @@ -15,24 +20,24 @@ - name: Verify snapshot checksum import_tasks: common/verify_snapshot_checksum.yml -- name: Stop elasticsearch service +- name: Stop opensearch-dashboards service systemd: - name: elasticsearch + name: opensearch-dashboards state: stopped - name: Clear directories import_tasks: common/clear_directories.yml vars: dirs_to_clear: - - /etc/elasticsearch/ + - "{{ opensearch_dashboards_vars.specification.paths.opsd_conf_dir }}" - name: Extract the archive unarchive: - dest: /etc/elasticsearch/ + dest: "{{ opensearch_dashboards_vars.specification.paths.opsd_conf_dir }}" src: "{{ recovery_dir }}/{{ snapshot_path | basename }}" remote_src: true -- name: Start elasticsearch service +- name: Start opensearch-dashboards service systemd: - name: elasticsearch + name: opensearch-dashboards state: started diff --git a/ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_snapshot.yml b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_snapshot.yml similarity index 66% rename from ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_snapshot.yml rename to ansible/playbooks/roles/recovery/tasks/logging_opensearch_snapshot.yml index f1fa9bf15f..19ca6645c6 100644 --- a/ansible/playbooks/roles/recovery/tasks/logging_elasticsearch_snapshot.yml +++ b/ansible/playbooks/roles/recovery/tasks/logging_opensearch_snapshot.yml @@ -1,12 +1,12 @@ --- -- name: Include default vars from opendistro_for_elasticsearch role +- name: Include default vars from opensearch role include_vars: - file: roles/opendistro_for_elasticsearch/defaults/main.yml + file: roles/opensearch/defaults/main.yml name: odfe - name: Set helper facts set_fact: - elasticsearch_endpoint: >- + opensearch_endpoint: >- https://{{ 
ansible_default_ipv4.address }}:9200 vars: uri_template: &uri @@ -18,7 +18,7 @@ - name: Check cluster health uri: <<: *uri - url: "{{ elasticsearch_endpoint }}/_cluster/health" + url: "{{ opensearch_endpoint }}/_cluster/health" method: GET register: uri_response until: uri_response is success @@ -28,7 +28,7 @@ - name: Find snapshot archive import_tasks: common/find_snapshot_archive.yml vars: - snapshot_prefix: "elasticsearch_snapshot" + snapshot_prefix: "opensearch_snapshot" snapshot_name: "{{ specification.components.logging.snapshot_name }}" - name: Transfer the archive via rsync @@ -45,38 +45,38 @@ import_tasks: common/clear_directories.yml vars: dirs_to_clear: - - "{{ elasticsearch_snapshot_repository_location }}/" + - "{{ opensearch_snapshot_repository_location }}/" - name: Extract the archive unarchive: - dest: "{{ elasticsearch_snapshot_repository_location }}/" + dest: "{{ opensearch_snapshot_repository_location }}/" src: "{{ recovery_dir }}/{{ snapshot_path | basename }}" remote_src: true - name: Change snapshot directory permissions file: - path: "{{ elasticsearch_snapshot_repository_location }}/" - owner: elasticsearch - group: elasticsearch + path: "{{ opensearch_snapshot_repository_location }}/" + owner: opensearch + group: opensearch recurse: true - name: Reconstruct the snapshot_name set_fact: snapshot_name: >- - {{ snapshot_path | basename | regex_replace('^elasticsearch_snapshot_(.*).tar.gz$', '\1') }} + {{ snapshot_path | basename | regex_replace('^opensearch_snapshot_(.*).tar.gz$', '\1') }} -- debug: var=snapshot_name - -- name: Ensure all kibana and filebeat instances are stopped, then restore the snapshot +- name: Display snapshot name + debug: var=snapshot_name +- name: Ensure all OpenSearch Dashboards and filebeat instances are stopped, then restore the snapshot block: - - name: Stop all kibana instances + - name: Stop all OpenSearch Dashboards instances delegate_to: "{{ item }}" systemd: - name: kibana + name: opensearch-dashboards state: stopped enabled: false - loop: "{{ groups.kibana | default([]) }}" + loop: "{{ groups.opensearch_dashboards | default([]) }}" - name: Stop all filebeat instances delegate_to: "{{ item }}" @@ -89,29 +89,29 @@ - name: Close all indices uri: <<: *uri - url: "{{ elasticsearch_endpoint }}/_all/_close" + url: "{{ opensearch_endpoint }}/_all/_close" method: POST - name: Delete all indices uri: <<: *uri - url: "{{ elasticsearch_endpoint }}/_all" + url: "{{ opensearch_endpoint }}/_all" method: DELETE - name: Restore the snapshot uri: <<: *uri - url: "{{ elasticsearch_endpoint }}/_snapshot/{{ elasticsearch_snapshot_repository_name }}/{{ snapshot_name }}/_restore" + url: "{{ opensearch_endpoint }}/_snapshot/{{ opensearch_snapshot_repository_name }}/{{ snapshot_name }}/_restore" method: POST always: - - name: Start all kibana instances + - name: Start all OpenSearch Dashboards instances delegate_to: "{{ item }}" systemd: - name: kibana + name: opensearch-dashboards state: started enabled: true - loop: "{{ groups.kibana | default([]) }}" + loop: "{{ groups.opensearch_dashboards | default([]) }}" - name: Start all filebeat instances delegate_to: "{{ item }}" diff --git a/ansible/playbooks/roles/repository/defaults/main.yml b/ansible/playbooks/roles/repository/defaults/main.yml index 5a0a7db45d..95b9f60c6b 100644 --- a/ansible/playbooks/roles/repository/defaults/main.yml +++ b/ansible/playbooks/roles/repository/defaults/main.yml @@ -1,4 +1,5 @@ --- download_requirements_dir: "/var/tmp/epi-download-requirements" -download_requirements_script: "{{ 
download_requirements_dir }}/download-requirements.py" download_requirements_flag: "{{ download_requirements_dir }}/download-requirements-done.flag" +download_requirements_manifest: "{{ download_requirements_dir }}/manifest.yml" +download_requirements_script: "{{ download_requirements_dir }}/download-requirements.py" diff --git a/ansible/playbooks/roles/repository/files/download-requirements/download-requirements.py b/ansible/playbooks/roles/repository/files/download-requirements/download-requirements.py index 4b8d1e55cd..5fabc5c7a6 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/download-requirements.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/download-requirements.py @@ -9,7 +9,7 @@ from src.command.toolchain import TOOLCHAINS from src.config.config import Config -from src.error import DownloadRequirementsError +from src.error import DownloadRequirementsException def install_missing_modules(config: Config): @@ -79,7 +79,7 @@ def main(argv: List[str]) -> int: time_end = datetime.datetime.now() - time_begin logging.info(f'Total execution time: {str(time_end).split(".")[0]}') - except DownloadRequirementsError: + except DownloadRequirementsException: return 1 return 0 diff --git a/ansible/playbooks/roles/repository/files/download-requirements/repositories/aarch64/redhat/redhat.yml b/ansible/playbooks/roles/repository/files/download-requirements/repositories/aarch64/redhat/redhat.yml new file mode 100644 index 0000000000..2d952bc43a --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/repositories/aarch64/redhat/redhat.yml @@ -0,0 +1,70 @@ +--- +repositories: + elastic-6: + id: elastic-6 + data: | + name=Elastic repository for 6.x packages + baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum + gpgcheck=1 + enabled=1 + autorefresh=1 + type=rpm-md + gpg_keys: + - https://artifacts.elastic.co/GPG-KEY-elasticsearch + + elasticsearch-7: + id: elasticsearch-7.x + data: | + name=Elasticsearch repository for 7.x packages + baseurl=https://artifacts.elastic.co/packages/oss-7.x/yum + gpgcheck=1 + enabled=1 + autorefresh=1 + type=rpm-md + gpg_keys: + - https://artifacts.elastic.co/GPG-KEY-elasticsearch + + elasticsearch-curator-5: + id: curator-5 + data: | + name=CentOS/RHEL 7 repository for Elasticsearch Curator 5.x packages + baseurl=https://packages.elastic.co/curator/5/centos/7 + gpgcheck=1 + enabled=1 + gpg_keys: + - https://packages.elastic.co/GPG-KEY-elasticsearch + + kubernetes: + id: kubernetes + data: | + name=Kubernetes + baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch + enabled=1 + gpgcheck=1 + repo_gpgcheck=1 + gpg_keys: + - https://packages.cloud.google.com/yum/doc/yum-key.gpg + - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg + + postgresql-13: + id: pgdg13 + data: | + name=PostgreSQL 13 for RHEL/CentOS $releasever - $basearch + baseurl=https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch + enabled=1 + gpgcheck=1 + module_hotfixes=true + gpg_keys: + - https://download.postgresql.org/pub/repos/yum/RPM-GPG-KEY-PGDG + + rabbitmq: + id: rabbitmq-server + data: | + name=rabbitmq-rpm + baseurl=https://packagecloud.io/rabbitmq/rabbitmq-server/el/7/$basearch + gpgcheck=1 + repo_gpgcheck=1 + sslcacert=/etc/pki/tls/certs/ca-bundle.crt + enabled=1 + gpg_keys: + - https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey diff --git a/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/debian/debian.yml 
b/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/debian/debian.yml index aaa42da37d..447d31536d 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/debian/debian.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/debian/debian.yml @@ -24,10 +24,6 @@ repositories: content: 'deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main' key: 'https://artifacts.elastic.co/GPG-KEY-elasticsearch' - opendistroforelasticsearch: - content: 'deb https://d3g5vo6xdbdb9a.cloudfront.net/apt stable main' - key: 'https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch' - # postgresql pgdg: content: 'deb http://apt.postgresql.org/pub/repos/apt focal-pgdg main' diff --git a/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/redhat/redhat.yml b/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/redhat/redhat.yml index d040640e1b..2d952bc43a 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/redhat/redhat.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/repositories/x86_64/redhat/redhat.yml @@ -46,19 +46,6 @@ repositories: - https://packages.cloud.google.com/yum/doc/yum-key.gpg - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg - opendistroforelasticsearch: - id: opendistroforelasticsearch-artifacts-repo - data: | - name=Release RPM artifacts of OpenDistroForElasticsearch - baseurl=https://d3g5vo6xdbdb9a.cloudfront.net/yum/noarch/ - enabled=1 - gpgcheck=1 - repo_gpgcheck=1 - autorefresh=1 - type=rpm-md - gpg_keys: - - https://d3g5vo6xdbdb9a.cloudfront.net/GPG-KEY-opendistroforelasticsearch - postgresql-13: id: pgdg13 data: | diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/cranes.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/cranes.yml new file mode 100644 index 0000000000..34b5f28d52 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/cranes.yml @@ -0,0 +1,4 @@ +--- +cranes: + 'https://github.com/google/go-containerregistry/releases/download/v0.11.0/go-containerregistry_Linux_arm64.tar.gz': + sha256: 84653ec8297389ded927f120ea5bc2703423046f64c56b538197875f81ba4cd6 diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/files.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/files.yml new file mode 100644 index 0000000000..d15eb814e8 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/files.yml @@ -0,0 +1,57 @@ +--- +files: + # --- Exporters --- + 'https://github.com/danielqsj/kafka_exporter/releases/download/v1.4.0/kafka_exporter-1.4.0.linux-arm64.tar.gz': + sha256: 95ff0c723f3cdb6967b54c0208a5d0e67ad59dc53c1907a401cb8a448e53ec96 + deps: [kafka-exporter] + + 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar': + sha256: 0ddc6834f854c03d5795305193c1d33132a24fbd406b4b52828602f5bc30777e + deps: [kafka] + + 'https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-arm64.tar.gz': + sha256: f19f35175f87d41545fa7d4657e834e3a37c1fe69f3bf56bc031a256117764e7 + deps: [node-exporter] + + 
'https://github.com/prometheus-community/postgres_exporter/releases/download/v0.10.0/postgres_exporter-0.10.0.linux-arm64.tar.gz': + sha256: 82a1a4e07c7140f8e55532dbbdfea3bbba33dafc7ef0a221601bb2fd5359ff03 + deps: [postgres-exporter] + + # --- Misc --- + 'https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz': + sha256: 175a4134efc569a586d58916cd16ce70f868b13dea2b5a3d12a67b1395d59f98 + deps: [kafka] + + 'https://archive.apache.org/dist/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz': + sha256: c35ed6786d59b73920243f1a324d24c2ddfafb379041d7a350cc9a341c52caf3 + deps: [zookeeper] + + 'https://github.com/prometheus/alertmanager/releases/download/v0.23.0/alertmanager-0.23.0.linux-arm64.tar.gz': + sha256: afa44f350797032ceb714598900cfdddbf81d6ef03d2ecbfc0221cc2cb28a6b9 + deps: [prometheus] + + 'https://github.com/prometheus/prometheus/releases/download/v2.31.1/prometheus-2.31.1.linux-arm64.tar.gz': + sha256: a7b4694b96cbf38b63ca92d05a6d3a2cf6df50a85a4d2a3fe2d758a65dcbec3b + deps: [prometheus] + + 'https://get.helm.sh/helm-v3.2.0-linux-arm64.tar.gz': + sha256: cd11f0ed12a658f3b78392528814350a508d2c53d8da7f04145909e94bda10f1 + deps: [helm] + + # --- Helm charts --- + 'https://charts.bitnami.com/bitnami/node-exporter-2.3.17.tgz': + sha256: ec586fabb775a4f05510386899cf348391523c89ff5a1d4097b0592e675ade7f + deps: [kubernetes-master, k8s-as-cloud-service] + + 'https://helm.elastic.co/helm/filebeat/filebeat-7.12.1.tgz': + sha256: 5838058fe06372390dc335900a7707109cc7287a84164ca245d395af1f9c0a79 + deps: [kubernetes-master, k8s-as-cloud-service] + + # --- OpenSearch Bundle --- + 'https://artifacts.opensearch.org/releases/bundle/opensearch/1.2.4/opensearch-1.2.4-linux-arm64.tar.gz': + sha256: 5e8cd13ad1831e4a286a54334505c16c43ce8e50981100eea4eb18f79d3e63a5 + deps: [logging, opensearch] + + 'https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/1.2.0/opensearch-dashboards-1.2.0-linux-arm64.tar.gz': + sha256: 1f668d98f4670f1b88f03b19d30b2cc44ec439a7b2edff1a48034717d594cfe1 + deps: [logging, opensearch] diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/images.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/images.yml new file mode 100644 index 0000000000..0547d37dc8 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/images.yml @@ -0,0 +1,65 @@ +--- +images: + haproxy: + 'haproxy:2.2.2-alpine': + sha1: 2fd3bd554ee6e126a1b1e5055ed0349beac81ffc + + image-registry: + 'registry:2.8.0': + sha1: c6cdd738fbdd2efaef9588b439fc3a7c0c090368 + allow_mismatch: true + + applications: + 'epiphanyplatform/keycloak:14.0.0': + sha1: c73be9d38580fc9819bec8942e2d2569196547e1 + + 'rabbitmq:3.8.9': + sha1: 854d06bae1ee7e3a94570e1ce104618430988f57 + + kubernetes-master: + # for HA configuration + 'haproxy:2.2.2-alpine': + sha1: 2fd3bd554ee6e126a1b1e5055ed0349beac81ffc + + 'kubernetesui/dashboard:v2.3.1': + sha1: 5dc90f59af4643952d5a728213983e6f3d884895 + + 'kubernetesui/metrics-scraper:v1.0.7': + sha1: 90b57b399e7ed44bad422e7d9572bfd6d737724a + # K8s + # v1.22.4 + 'k8s.gcr.io/kube-apiserver:v1.22.4': + sha1: 6e101cfa4384346b45701e6dda5591a41fa5776d + + 'k8s.gcr.io/kube-controller-manager:v1.22.4': + sha1: 6561280956af24f9547dabde5758a4091558e771 + + 'k8s.gcr.io/kube-scheduler:v1.22.4': + sha1: e224852d58ab649f3145cae3ed4f2926e66117bf + + 'k8s.gcr.io/kube-proxy:v1.22.4': + sha1: 5e5e4032f3f6464ede1a4a85854013d0801c8eff + + 
'k8s.gcr.io/coredns/coredns:v1.8.4': + sha1: e5d7de1974e8f331892d9587a5312d4cdda04bb2 + + 'k8s.gcr.io/etcd:3.5.0-0': + sha1: fb1975ba3fc696fa7530c0752d15abe8ea23e80d + + 'k8s.gcr.io/pause:3.5': + sha1: 8f3be7cc532c25b01a2e5e0943b4d55bce0b0f1c + + 'quay.io/coreos/flannel:v0.14.0': + sha1: 098ec78af9bf3a70afbd1a9743ff352f72fb036d + + 'quay.io/coreos/flannel:v0.15.1': + sha1: 378513c030a1d42754abb5160016e6b1ae2a1c64 + + 'calico/cni:v3.23.3': + sha1: 7539e19d46f4c5786f9a5cbcf8234ece7920c01d + + 'calico/kube-controllers:v3.23.3': + sha1: 9161bba287310c872b4580e35dd7f630ce1e9963 + + 'calico/node:v3.23.3': + sha1: c0fb935d1d50fca65cd75eb8836c86a963ca8dc7 diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/almalinux-8/packages.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/almalinux-8/packages.yml new file mode 100644 index 0000000000..9027c32189 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/almalinux-8/packages.yml @@ -0,0 +1,11 @@ +--- +# Distribution specific packages + +prereq-packages: + # prereq-packages are downloaded without dependencies because of air-gapped mode (dnf localinstall) + - almalinux-logos-httpd # for httpd + +packages: + from_repo: [] + multiple_versioned: [] + from_url: {} diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/packages.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/packages.yml new file mode 100644 index 0000000000..109458f77d --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/aarch64/redhat/packages.yml @@ -0,0 +1,204 @@ +--- +# Common packages for RedHat OS family + +prereq-packages: + # prereq-packages are downloaded without dependencies because of air-gapped mode (dnf localinstall) + # --- createrepo --- + - 'createrepo_c' + - 'createrepo_c-libs' + - 'drpm' + # --- httpd --- + - 'apr' + - 'apr-util' + - 'apr-util-bdb' # weak + - 'apr-util-openssl' # weak + - 'httpd' + - 'httpd-filesystem' + - 'httpd-tools' + - 'mailcap' + - 'mod_http2' + +packages: + from_repo: + - 'audit' # for docker-ce + - 'bash-completion' + - 'ca-certificates' + - 'cifs-utils' + - 'conntrack-tools' # for kubelet + - 'containerd.io-1.5.11' + - 'container-selinux' + - 'copy-jdk-configs' # for java-1.8.0-openjdk-headless + - 'cri-tools' + - 'cups-libs' # for java-1.8.0-openjdk-headless + - 'curl' + - 'dejavu-sans-fonts' # for grafana + - 'docker-ce-20.10.8' + - 'docker-ce-cli-20.10.8' + - 'docker-ce-rootless-extras-20.10.8' + - 'elasticsearch-oss-7.10.2' # for opendistroforelasticsearch & logging roles + - 'ethtool' + - 'filebeat-7.12.1' + - 'firewalld' + - 'fontconfig' # for grafana + - 'fping' + - 'fuse-overlayfs' # for docker-ce-rootless-extras + - 'fuse3' # for docker-ce-rootless-extras + - 'gnutls' # for cifs-utils + - 'gssproxy' # for nfs-utils + - 'htop' + - 'iftop' + - 'ipset' # for firewalld + - 'iptables' # for iptables-ebtables + - 'iptables-ebtables' + - 'java-1.8.0-openjdk-headless' + - 'jq' + - 'libibverbs' # for libpcap + - 'libini_config' # for nfs-utils + - 'libpcap' # for iftop & iptables + - 'libX11' # for grafana + - 'libxcb' # for grafana + - 'libXcursor' # for grafana + - 'libXt' # for grafana + - 'logrotate' + - 'lua' # for java-1.8.0-openjdk-headless + - 'mcpp' # for grafana + - 'net-tools' + - 'nfs-utils' + - 'nmap-ncat' + - 'nss' # for 
java-1.8.0-openjdk-headless + - 'nss-softokn' # for nss + - 'ntsysv' # for python36 + - 'openssl' + - 'perl' # for vim + - 'perl-Getopt-Long' # for vim + - 'perl-libs' # for vim + - 'perl-Pod-Perldoc' # for vim + - 'perl-Pod-Simple' # for vim + - 'perl-Pod-Usage' # for vim + - 'pgaudit15_13-1.5.0' + - 'pkgconf' # for bash-completion + - 'pkgconf-pkg-config' # for bash-completion + - 'policycoreutils' + - 'postgresql13-server' + - 'python3-cffi' # for python3-cryptography + - 'python3-cryptography' + - 'python3-firewall' # for firewalld + - 'python3-ldb' # for cifs-utils + - 'python3-libselinux' + - 'python3-lxml' # for java-1.8.0-openjdk-headless + - 'python3-nftables' # for python3-firewall + - 'python3-pip' # for python36 + - 'python3-policycoreutils' # for container-selinux + - 'python3-psycopg2' + - 'python3-pycparser' # for python3-cryptography + - 'python3-slip-dbus' # for firewalld + - 'python36' # there is no python3 package + - 'quota' # for nfs-utils + - 'rabbitmq-server-3.8.9' + - 'rdma-core' + - 'rsync' + - 'samba-client' + - 'samba-client-libs' # for samba-client + - 'samba-common' + - 'samba-libs' # for cifs-utils + - 'sssd' # needed for samba packages installation + - 'sssd-client' # needed for sssd upgrade + - 'sssd-common' # needed for sssd upgrade + - 'sssd-ad' # needed for sssd upgrade + - 'sssd-ipa' # needed for sssd upgrade + - 'sssd-kcm' # needed for sssd upgrade + - 'sssd-krb5' # needed for sssd upgrade + - 'sssd-ldap' # needed for sssd upgrade + - 'sssd-proxy' # needed for sssd upgrade + - 'slirp4netns' # for docker-ce-rootless-extras + - 'sysstat' + - 'tar' + - 'telnet' + - 'tmux' + - 'urw-base35-fonts' # for grafana + - 'unzip' + - 'vim-common' # for vim + - 'vim-enhanced' + - 'wget' + - 'xorg-x11-font-utils' # for grafana + - 'xorg-x11-server-utils' # for grafana + # Erlang dependencies: + - 'SDL' + - 'adwaita-icon-theme' + - 'at-spi2-atk' + - 'at-spi2-core' + - 'atk' + - 'cairo' + - 'cairo-gobject' + - 'colord-libs' + - 'dejavu-fonts-common' + - 'dejavu-sans-mono-fonts' + - 'fribidi' + - 'gdk-pixbuf2' + - 'gdk-pixbuf2-modules' + - 'glib-networking' + - 'graphite2' + - 'gsettings-desktop-schemas' + - 'gtk-update-icon-cache' + - 'gtk3' + - 'harfbuzz' + - 'hicolor-icon-theme' + - 'jasper-libs' + - 'jbigkit-libs' + - 'lcms2' + - 'libICE' + - 'libSM' + - 'libX11-xcb' + - 'libXau' + - 'libXcomposite' + - 'libXdamage' + - 'libXext' + - 'libXfixes' + - 'libXft' + - 'libXi' + - 'libXinerama' + - 'libXrandr' + - 'libXrender' + - 'libXtst' + - 'libXxf86vm' + - 'libdatrie' + - 'libdrm' + - 'libepoxy' + - 'libglvnd' + - 'libglvnd-glx' + - 'libjpeg-turbo' + - 'libmodman' + - 'libmspack' + - 'libproxy' + - 'libsoup' + - 'libthai' + - 'libtiff' + - 'libtool-ltdl' + - 'libwayland-client' + - 'libwayland-cursor' + - 'libwayland-egl' + - 'libxshmfence' + - 'mesa-libGL' + - 'mesa-libGLU' + - 'mesa-libglapi' + - 'pango' + - 'pixman' + - 'rest' + - 'unixODBC' + - 'wxBase3' + - 'wxGTK3' + - 'wxGTK3-gl' + + multiple_versioned: + # K8s v1.22.4 + - 'kubeadm-1.22.4' + - 'kubectl-1.22.4' + - 'kubelet-1.22.4' + + from_url: + # Github repository for erlang rpm is used since packagecloud repository is limited to a certain number of versions and erlang package from erlang-solutions repository is much more complex and bigger + 'http://packages.erlang-solutions.com/erlang/rpm/centos/8/aarch64/esl-erlang_23.1.5-1~centos~8_arm64.rpm': + sha256: 9c135d300a66fe399a764da0070e49b8cd5a356516d2904dd0521f80da1a1ecb + # Grafana package is not downloaded from repository since it was not reliable (issue 
#2449) + 'https://dl.grafana.com/oss/release/grafana-8.3.2-1.aarch64.rpm': + sha256: a05354a9363decc3a2b036a58f827e0a4d086791ba73d7cc4b9f05afb592f4d1 diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/cranes.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/cranes.yml index f249ac6581..b6aeb9aced 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/cranes.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/cranes.yml @@ -1,4 +1,4 @@ --- cranes: - 'https://github.com/google/go-containerregistry/releases/download/v0.4.1/go-containerregistry_Linux_x86_64.tar.gz': - sha256: def1364f9483d133ccc6b1c4876f59a653d024c8866d96ecda026561d38c349b + 'https://github.com/google/go-containerregistry/releases/download/v0.11.0/go-containerregistry_Linux_x86_64.tar.gz': + sha256: 3cec40eb0fac2e6ed4b71de682ae562d15819ab92145e4f669b57baf04797adb diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/debian/ubuntu-20.04/packages.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/debian/ubuntu-20.04/packages.yml index bac12af123..46cab38ff6 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/debian/ubuntu-20.04/packages.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/debian/ubuntu-20.04/packages.yml @@ -14,7 +14,8 @@ packages: - 'docker-ce-cli=5:20.10.8*' - 'docker-ce-rootless-extras=5:20.10.8*' - 'ebtables' - # for opendistroforelasticsearch & logging roles + + # for opensearch & logging roles - 'elasticsearch-oss=7.10.2*' # Erlang packages must be compatible with RabbitMQ version. 
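These Debian requirements pin packages with apt-style versions plus a trailing wildcard (for example 'filebeat=7.12.1*' in the next hunk). The reworked apt_cache.py later in this patch strips that wildcard before checking `apt-cache policy` for a matching candidate version. Below is a minimal standalone sketch of that resolution logic; `run_apt_cache_policy` and `resolve_pinned_version` are hypothetical names for illustration only, not the repository's actual implementation:

    import subprocess
    from typing import List


    def run_apt_cache_policy(package: str) -> str:
        # Hypothetical helper; the real code goes through the Command wrapper in src/command.
        result = subprocess.run(['apt-cache', 'policy', package],
                                capture_output=True, text=True, check=True)
        return result.stdout


    def resolve_pinned_version(pin: str) -> str:
        """Resolve a 'name=version*' pin, e.g. 'filebeat=7.12.1*' -> '7.12.1'."""
        package, _, version = pin.partition('=')
        version = version.rstrip('*')  # same normalization apt_cache.py applies
        lines: List[str] = run_apt_cache_policy(package).split('\n')
        if version:
            # confirm that the wanted version is actually available
            for line in lines:
                if version in line:
                    return version
        # otherwise fall back to the candidate version reported by apt
        for line in lines:
            if 'Candidate:' in line:
                return line.split('Candidate:')[-1].strip()
        raise ValueError(f'no installable candidate found for {pin!r}')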
@@ -39,7 +40,7 @@ packages: - 'erlang-tools=1:23.1.5*' - 'erlang-xmerl=1:23.1.5*' - 'ethtool' - - 'filebeat=7.9.2*' + - 'filebeat=7.12.1*' - 'firewalld' - 'fping' - 'gnupg2' @@ -57,13 +58,6 @@ packages: # for nfs-common - 'libtirpc3' - - 'opendistro-alerting=1.13.1*' - - 'opendistro-index-management=1.13.1*' - - 'opendistro-job-scheduler=1.13.0*' - - 'opendistro-performance-analyzer=1.13.0*' - - 'opendistro-security=1.13.1*' - - 'opendistro-sql=1.13.0*' - - 'opendistroforelasticsearch-kibana=1.13.1*' - 'openjdk-8-jre-headless' - 'openssl' - 'postgresql-13' diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/files.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/files.yml index cdd00db475..4b4df4decf 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/files.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/files.yml @@ -3,44 +3,67 @@ files: # --- Exporters --- 'https://github.com/danielqsj/kafka_exporter/releases/download/v1.4.0/kafka_exporter-1.4.0.linux-amd64.tar.gz': sha256: ffda682e82daede726da8719257a088f8e23dcaa4e2ac8b2b2748a129aea85f0 + deps: [kafka-exporter] 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar': sha256: 0ddc6834f854c03d5795305193c1d33132a24fbd406b4b52828602f5bc30777e + deps: [kafka] 'https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz': sha256: 68f3802c2dd3980667e4ba65ea2e1fb03f4a4ba026cca375f15a0390ff850949 + deps: [node-exporter] 'https://github.com/prometheus-community/postgres_exporter/releases/download/v0.10.0/postgres_exporter-0.10.0.linux-amd64.tar.gz': sha256: 1d1a008c5e29673b404a9ce119b7516fa59974aeda2f47d4a0446d102abce8a1 + deps: [postgres-exporter] # --- Misc --- 'https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz': sha256: 175a4134efc569a586d58916cd16ce70f868b13dea2b5a3d12a67b1395d59f98 + deps: [kafka] 'https://archive.apache.org/dist/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz': sha256: c35ed6786d59b73920243f1a324d24c2ddfafb379041d7a350cc9a341c52caf3 + deps: [zookeeper] 'https://github.com/prometheus/alertmanager/releases/download/v0.23.0/alertmanager-0.23.0.linux-amd64.tar.gz': sha256: 77793c4d9bb92be98f7525f8bc50cb8adb8c5de2e944d5500e90ab13918771fc + deps: [prometheus] 'https://github.com/prometheus/prometheus/releases/download/v2.31.1/prometheus-2.31.1.linux-amd64.tar.gz': sha256: 7852dc11cfaa039577c1804fe6f082a07c5eb06be50babcffe29214aedf318b3 + deps: [prometheus] 'https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz': sha256: 4c3fd562e64005786ac8f18e7334054a24da34ec04bbd769c206b03b8ed6e457 - - 'https://archive.apache.org/dist/logging/log4j/2.17.1/apache-log4j-2.17.1-bin.tar.gz': - sha256: b876c20c9d318d77a39c0c2e095897b2bb1cd100c7859643f8c7c8b0fc6d5961 + deps: [helm] # --- Helm charts --- 'https://charts.bitnami.com/bitnami/node-exporter-2.3.17.tgz': sha256: ec586fabb775a4f05510386899cf348391523c89ff5a1d4097b0592e675ade7f + deps: [kubernetes-master, k8s-as-cloud-service] - 'https://helm.elastic.co/helm/filebeat/filebeat-7.9.2.tgz': - sha256: 5140b4c4473ca33a0af4c3f70545dcc89735c0a179d974ebc150f1f28ac229ab + 'https://helm.elastic.co/helm/filebeat/filebeat-7.12.1.tgz': + sha256: 5838058fe06372390dc335900a7707109cc7287a84164ca245d395af1f9c0a79 + deps: [kubernetes-master, k8s-as-cloud-service] 
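Each file in these requirement lists now carries a `deps` tag naming the features that need it, which is what lets the downloader fetch only the artifacts an installation actually uses. A rough sketch of how such tags could drive the filtering, under the assumption that entries without `deps` are always kept; `select_required_files` is a hypothetical helper, not the repository's actual code:

    from typing import Any, Dict, List

    import yaml  # PyYAML, which download-requirements already uses


    def select_required_files(files_yaml: str,
                              enabled_features: List[str]) -> Dict[str, Dict[str, Any]]:
        """Keep only download entries whose `deps` intersect the enabled features."""
        files: Dict[str, Dict[str, Any]] = yaml.safe_load(files_yaml)['files']
        enabled = set(enabled_features)
        selected: Dict[str, Dict[str, Any]] = {}
        for url, meta in files.items():
            deps = meta.get('deps')
            if deps is None or enabled & set(deps):  # untagged entries kept unconditionally
                selected[url] = meta
        return selected

With this kind of filter, enabling only a feature such as opensearch-dashboards would keep just the entries tagged with it; each kept file's sha256 is still verified after download.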
'https://charts.rook.io/release/rook-ceph-v1.8.8.tgz': sha256: f67e474dedffd4004f3a0b7b40112694a7f1c2b1a0048b03b3083d0a01e86b14 + deps: [kubernetes-master] 'https://charts.rook.io/release/rook-ceph-cluster-v1.8.8.tgz': sha256: df4e1f2125af41fb84c72e4d12aa0cb859dddd4f37b3d5979981bd092040bd16 + deps: [kubernetes-master] + + # --- OpenSearch Bundle --- + 'https://artifacts.opensearch.org/releases/bundle/opensearch/1.2.4/opensearch-1.2.4-linux-x64.tar.gz': + sha256: d40f2696623b6766aa235997e2847a6c661a226815d4ba173292a219754bd8a8 + deps: [logging, opensearch] + + 'https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/1.2.0/opensearch-dashboards-1.2.0-linux-x64.tar.gz': + sha256: 14623798e61be6913e2a218d6ba3e308e5036359d7bda58482ad2f1340aa3c85 + deps: [opensearch-dashboards] + + 'https://github.com/opensearch-project/perftop/releases/download/1.2.0.0/opensearch-perf-top-1.2.0.0-linux-x64.zip': + sha256: e8f9683976001a8cf59a9f86da5caafa10b88643315f0af2baa93a9354d41e2b + deps: [logging, opensearch] diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/images.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/images.yml index f7d48d5ccf..da74c59157 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/images.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/images.yml @@ -1,191 +1,209 @@ --- images: - 'haproxy:2.2.2-alpine': - sha1: dff8993b065b7f7846adb553548bcdcfcd1b6e8e + haproxy: + 'haproxy:2.2.2-alpine': + sha1: dff8993b065b7f7846adb553548bcdcfcd1b6e8e - 'kubernetesui/dashboard:v2.3.1': - sha1: 8c8a4ac7a643f9c5dd9e5d22876c434187312db8 + image-registry: + 'registry:2.8.0': + sha1: 89795c17099199c752d02ad8797c1d4565a08aff + allow_mismatch: true - 'kubernetesui/metrics-scraper:v1.0.7': - sha1: 5a0052e2afd3eef3ae638be21938b29b1d608ebe + applications: + 'bitnami/pgpool:4.2.4': + sha1: 66741f3cf4a508bd1f80e2965b0086a4c0fc3580 - 'registry:2.8.0': - sha1: 89795c17099199c752d02ad8797c1d4565a08aff - allow_mismatch: true + 'bitnami/pgbouncer:1.16.0': + sha1: f2e37eecbf9aed44d5566f06dcc101c1ba9edff9 - # applications - 'bitnami/pgpool:4.2.4': - sha1: 66741f3cf4a508bd1f80e2965b0086a4c0fc3580 + 'epiphanyplatform/keycloak:14.0.0': + sha1: b59d75a967cedd3a4cf5867eced2fb5dff52f60e - 'bitnami/pgbouncer:1.16.0': - sha1: f2e37eecbf9aed44d5566f06dcc101c1ba9edff9 + 'rabbitmq:3.8.9': + sha1: c64408bf5bb522f47d5323652dd5e60560dcb5bc - 'epiphanyplatform/keycloak:14.0.0': - sha1: b59d75a967cedd3a4cf5867eced2fb5dff52f60e + kubernetes-master: + # for HA configuration + 'haproxy:2.2.2-alpine': + sha1: dff8993b065b7f7846adb553548bcdcfcd1b6e8e - 'rabbitmq:3.8.9': - sha1: c64408bf5bb522f47d5323652dd5e60560dcb5bc + 'kubernetesui/dashboard:v2.3.1': + sha1: 8c8a4ac7a643f9c5dd9e5d22876c434187312db8 - # K8s - # v1.18.6 - 'k8s.gcr.io/kube-apiserver:v1.18.6': - sha1: 164968226f4617abaa31e6108ed9034a1e302f4f + 'kubernetesui/metrics-scraper:v1.0.7': + sha1: 5a0052e2afd3eef3ae638be21938b29b1d608ebe - 'k8s.gcr.io/kube-controller-manager:v1.18.6': - sha1: ebea3fecab9e5693d31438fa37dc4d02c6914d67 + # K8s + # v1.18.6 + 'k8s.gcr.io/kube-apiserver:v1.18.6': + sha1: 164968226f4617abaa31e6108ed9034a1e302f4f - 'k8s.gcr.io/kube-scheduler:v1.18.6': - sha1: 183d29c4fdcfda7478d08240934fdb6845e2e3ec + 'k8s.gcr.io/kube-controller-manager:v1.18.6': + sha1: ebea3fecab9e5693d31438fa37dc4d02c6914d67 - 'k8s.gcr.io/kube-proxy:v1.18.6': - sha1: 
62da886e36efff0c03a16e19c1442a1c3040fbf1 + 'k8s.gcr.io/kube-scheduler:v1.18.6': + sha1: 183d29c4fdcfda7478d08240934fdb6845e2e3ec - 'k8s.gcr.io/coredns:1.6.7': - sha1: 76615ffabb22fd4fb3d562cb6ebcd243f8826e48 + 'k8s.gcr.io/kube-proxy:v1.18.6': + sha1: 62da886e36efff0c03a16e19c1442a1c3040fbf1 - 'k8s.gcr.io/etcd:3.4.3-0': - sha1: 6ee82ddb1bbc7f1831c42046612b8bcfbb171b45 + 'k8s.gcr.io/coredns:1.6.7': + sha1: 76615ffabb22fd4fb3d562cb6ebcd243f8826e48 - 'quay.io/coreos/flannel:v0.12.0-amd64': - sha1: 3516522e779373983992095e61eb6615edd50d1f + 'k8s.gcr.io/etcd:3.4.3-0': + sha1: 6ee82ddb1bbc7f1831c42046612b8bcfbb171b45 - 'quay.io/coreos/flannel:v0.12.0': - sha1: 2cb6ce8f1361886225526767c4a0422c039453c8 + 'quay.io/coreos/flannel:v0.12.0-amd64': + sha1: 3516522e779373983992095e61eb6615edd50d1f - 'calico/cni:v3.15.0': - sha1: aa59f624c223bc398a42c7ba9e628e8143718e58 + 'quay.io/coreos/flannel:v0.12.0': + sha1: 2cb6ce8f1361886225526767c4a0422c039453c8 - 'calico/kube-controllers:v3.15.0': - sha1: f8921f5d67ee7db1c619aa9fdb74114569684ceb + 'calico/cni:v3.15.0': + sha1: aa59f624c223bc398a42c7ba9e628e8143718e58 - 'calico/node:v3.15.0': - sha1: b15308e1aa8b9c56253c142e4361e47125bb4ac5 + 'calico/kube-controllers:v3.15.0': + sha1: f8921f5d67ee7db1c619aa9fdb74114569684ceb - 'calico/pod2daemon-flexvol:v3.15.0': - sha1: dd1a6525bde05937a28e3d9176b826162ae489af + 'calico/node:v3.15.0': + sha1: b15308e1aa8b9c56253c142e4361e47125bb4ac5 - # v1.19.15 - 'k8s.gcr.io/kube-apiserver:v1.19.15': - sha1: e01c8d778e4e693a0ea09cdbbe041a65cf070c6f + 'calico/pod2daemon-flexvol:v3.15.0': + sha1: dd1a6525bde05937a28e3d9176b826162ae489af - 'k8s.gcr.io/kube-controller-manager:v1.19.15': - sha1: d1f5cc6a861b2259861fb78b2b83e9a07b788e31 + # v1.19.15 + 'k8s.gcr.io/kube-apiserver:v1.19.15': + sha1: e01c8d778e4e693a0ea09cdbbe041a65cf070c6f - 'k8s.gcr.io/kube-scheduler:v1.19.15': - sha1: b07fdd17205bc071ab108851d245689642244f92 + 'k8s.gcr.io/kube-controller-manager:v1.19.15': + sha1: d1f5cc6a861b2259861fb78b2b83e9a07b788e31 - 'k8s.gcr.io/kube-proxy:v1.19.15': - sha1: 9e2e7a8d40840bbade3a1f2dc743b9226491b6c2 + 'k8s.gcr.io/kube-scheduler:v1.19.15': + sha1: b07fdd17205bc071ab108851d245689642244f92 - # v1.20.12 - 'k8s.gcr.io/kube-apiserver:v1.20.12': - sha1: bbb037b9452db326aaf09988cee080940f3c418a + 'k8s.gcr.io/kube-proxy:v1.19.15': + sha1: 9e2e7a8d40840bbade3a1f2dc743b9226491b6c2 - 'k8s.gcr.io/kube-controller-manager:v1.20.12': - sha1: 4a902578a0c548edec93e0f4afea8b601fa54b93 + # v1.20.12 + 'k8s.gcr.io/kube-apiserver:v1.20.12': + sha1: bbb037b9452db326aaf09988cee080940f3c418a - 'k8s.gcr.io/kube-scheduler:v1.20.12': - sha1: ed5ceb21d0f5bc350db69550fb7feac7a6f1e50b + 'k8s.gcr.io/kube-controller-manager:v1.20.12': + sha1: 4a902578a0c548edec93e0f4afea8b601fa54b93 - 'k8s.gcr.io/kube-proxy:v1.20.12': - sha1: f937aba709f52be88360361230840e7bca756b2e + 'k8s.gcr.io/kube-scheduler:v1.20.12': + sha1: ed5ceb21d0f5bc350db69550fb7feac7a6f1e50b - 'k8s.gcr.io/coredns:1.7.0': - sha1: 5aa15f4cb942885879955b98a0a824833d9f66eb + 'k8s.gcr.io/kube-proxy:v1.20.12': + sha1: f937aba709f52be88360361230840e7bca756b2e - 'k8s.gcr.io/pause:3.2': - sha1: ae4799e1a1ec9cd0dda8ab643b6e50c9fe505fef + 'k8s.gcr.io/coredns:1.7.0': + sha1: 5aa15f4cb942885879955b98a0a824833d9f66eb - # v1.21.7 - 'k8s.gcr.io/kube-apiserver:v1.21.7': - sha1: edb26859b3485808716982deccd90ca420828649 + 'k8s.gcr.io/pause:3.2': + sha1: ae4799e1a1ec9cd0dda8ab643b6e50c9fe505fef - 'k8s.gcr.io/kube-controller-manager:v1.21.7': - sha1: 9abf1841da5b113b377c1471880198259ec2d246 + # v1.21.7 + 
'k8s.gcr.io/kube-apiserver:v1.21.7': + sha1: edb26859b3485808716982deccd90ca420828649 - 'k8s.gcr.io/kube-scheduler:v1.21.7': - sha1: 996d25351afc96a10e9008c04418db07a99c76b7 + 'k8s.gcr.io/kube-controller-manager:v1.21.7': + sha1: 9abf1841da5b113b377c1471880198259ec2d246 - 'k8s.gcr.io/kube-proxy:v1.21.7': - sha1: 450af22a892ffef276d4d58332b7817a1dde34e7 + 'k8s.gcr.io/kube-scheduler:v1.21.7': + sha1: 996d25351afc96a10e9008c04418db07a99c76b7 - 'k8s.gcr.io/coredns/coredns:v1.8.0': - sha1: 03114a98137e7cc2dcf4983b919e6b93ac8d1189 + 'k8s.gcr.io/kube-proxy:v1.21.7': + sha1: 450af22a892ffef276d4d58332b7817a1dde34e7 - 'k8s.gcr.io/etcd:3.4.13-0': - sha1: d37a2efafcc4aa86e6dc497e87e80b5d7f326115 + 'k8s.gcr.io/coredns/coredns:v1.8.0': + sha1: 03114a98137e7cc2dcf4983b919e6b93ac8d1189 - 'k8s.gcr.io/pause:3.4.1': - sha1: 7f57ae28d733f99c0aab8f4e27d4b0c034cd0c04 + 'k8s.gcr.io/etcd:3.4.13-0': + sha1: d37a2efafcc4aa86e6dc497e87e80b5d7f326115 - # v1.22.4 - 'k8s.gcr.io/kube-apiserver:v1.22.4': - sha1: 2bf4ddb2e1f1530cf55ebaf8e8d0c56ad378b9ec + 'k8s.gcr.io/pause:3.4.1': + sha1: 7f57ae28d733f99c0aab8f4e27d4b0c034cd0c04 - 'k8s.gcr.io/kube-controller-manager:v1.22.4': - sha1: 241924fa3dc4671fe6644402f7beb60028c02c71 + # v1.22.4 + 'k8s.gcr.io/kube-apiserver:v1.22.4': + sha1: 2bf4ddb2e1f1530cf55ebaf8e8d0c56ad378b9ec - 'k8s.gcr.io/kube-scheduler:v1.22.4': - sha1: 373e2939072b03cf5b1e115820b7fb6b749b0ebb + 'k8s.gcr.io/kube-controller-manager:v1.22.4': + sha1: 241924fa3dc4671fe6644402f7beb60028c02c71 - 'k8s.gcr.io/kube-proxy:v1.22.4': - sha1: fecfb88509a430c29267a99b83f60f4a7c333583 + 'k8s.gcr.io/kube-scheduler:v1.22.4': + sha1: 373e2939072b03cf5b1e115820b7fb6b749b0ebb - 'k8s.gcr.io/coredns/coredns:v1.8.4': - sha1: 69c8e14ac3941fd5551ff22180be5f4ea2742d7f + 'k8s.gcr.io/kube-proxy:v1.22.4': + sha1: fecfb88509a430c29267a99b83f60f4a7c333583 - 'k8s.gcr.io/etcd:3.5.0-0': - sha1: 9d9ee2df54a201dcc9c7a10ea763b9a5dce875f1 + 'k8s.gcr.io/coredns/coredns:v1.8.4': + sha1: 69c8e14ac3941fd5551ff22180be5f4ea2742d7f - 'k8s.gcr.io/pause:3.5': - sha1: bf3e3420df62f093f94c41d2b7a62b874dcbfc28 + 'k8s.gcr.io/etcd:3.5.0-0': + sha1: 9d9ee2df54a201dcc9c7a10ea763b9a5dce875f1 - 'quay.io/coreos/flannel:v0.14.0-amd64': - sha1: cff47465996a51de4632b53abf1fca873f147027 + 'k8s.gcr.io/pause:3.5': + sha1: bf3e3420df62f093f94c41d2b7a62b874dcbfc28 - 'quay.io/coreos/flannel:v0.14.0': - sha1: a487a36f7b31677e50e74b96b944f27fbce5ac13 + 'quay.io/coreos/flannel:v0.14.0-amd64': + sha1: cff47465996a51de4632b53abf1fca873f147027 - 'calico/cni:v3.20.3': - sha1: 95e4cf79e92715b13e500a0efcfdb65590de1e04 + 'quay.io/coreos/flannel:v0.14.0': + sha1: a487a36f7b31677e50e74b96b944f27fbce5ac13 - 'calico/kube-controllers:v3.20.3': - sha1: 5769bae60830abcb3c5d97eb86b8f9938a587b2d + 'calico/cni:v3.20.3': + sha1: 95e4cf79e92715b13e500a0efcfdb65590de1e04 - 'calico/node:v3.20.3': - sha1: cc3c8727ad30b4850e8d0042681342a4f2351eff + 'calico/kube-controllers:v3.20.3': + sha1: 5769bae60830abcb3c5d97eb86b8f9938a587b2d - 'calico/pod2daemon-flexvol:v3.20.3': - sha1: 97c1b7ac90aa5a0f5c52e7f137549e598ff80f3e + 'calico/node:v3.20.3': + sha1: cc3c8727ad30b4850e8d0042681342a4f2351eff - # --- Rook --- - 'k8s.gcr.io/sig-storage/csi-attacher:v3.4.0': - sha1: f076bd75359c6449b965c48eb8bad96c6d40790d + 'calico/pod2daemon-flexvol:v3.20.3': + sha1: 97c1b7ac90aa5a0f5c52e7f137549e598ff80f3e - 'k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0': - sha1: 129eb73c8e118e5049fee3d273b2d477c547e080 + 'quay.io/coreos/flannel:v0.15.1': + sha1: 465ed6de051d9ae9e589b9039326e34cea999ac5 - 
'k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0': - sha1: 2b45e5a3432cb89f3aec59584c1fa92c069e7a38 + 'calico/cni:v3.23.3': + sha1: 6bd0ee90316b2dcd0575f0a6a756a3cd976d4819 - 'k8s.gcr.io/sig-storage/csi-resizer:v1.4.0': - sha1: ce5c57454254c195762c1f58e1d902d7e81ea669 + 'calico/kube-controllers:v3.23.3': + sha1: fd643d783279e76ede70361e6246a4cdd99f4221 - 'k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1': - sha1: be1cf43617eea007629c0eb99149a99b6498f889 + 'calico/node:v3.23.3': + sha1: 85a3499d4fb7e93ab6334beed57b635d09f02d1f - 'quay.io/ceph/ceph:v16.2.7': - sha1: fe9b7802c67e19111f83ffe4754ab62df66fd417 - allow_mismatch: true + rook: + 'k8s.gcr.io/sig-storage/csi-attacher:v3.4.0': + sha1: f076bd75359c6449b965c48eb8bad96c6d40790d - 'quay.io/cephcsi/cephcsi:v3.5.1': - sha1: 51dee9ea8ad76fb95ebd16f951e8ffaaaba95eb6 + 'k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0': + sha1: 129eb73c8e118e5049fee3d273b2d477c547e080 - 'quay.io/csiaddons/k8s-sidecar:v0.2.1': - sha1: f0fd757436ac5075910c460c1991ff67c4774d09 + 'k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0': + sha1: 2b45e5a3432cb89f3aec59584c1fa92c069e7a38 - 'quay.io/csiaddons/volumereplication-operator:v0.3.0': - sha1: d3cd17f14fcbf09fc6c8c2c5c0419f098f87a70f + 'k8s.gcr.io/sig-storage/csi-resizer:v1.4.0': + sha1: ce5c57454254c195762c1f58e1d902d7e81ea669 - 'rook/ceph:v1.8.8': - sha1: f34039b17b18f5a855b096d48ff787b4013615e4 + 'k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1': + sha1: be1cf43617eea007629c0eb99149a99b6498f889 + + 'quay.io/ceph/ceph:v16.2.7-20220510': + sha1: 3cdc34eb3f2c5af8de5ad121c7b1bb176cca811a + + 'quay.io/cephcsi/cephcsi:v3.5.1': + sha1: 51dee9ea8ad76fb95ebd16f951e8ffaaaba95eb6 + + 'quay.io/csiaddons/k8s-sidecar:v0.2.1': + sha1: f0fd757436ac5075910c460c1991ff67c4774d09 + + 'quay.io/csiaddons/volumereplication-operator:v0.3.0': + sha1: d3cd17f14fcbf09fc6c8c2c5c0419f098f87a70f + + 'rook/ceph:v1.8.8': + sha1: f34039b17b18f5a855b096d48ff787b4013615e4 diff --git a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/redhat/packages.yml b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/redhat/packages.yml index 4013c4e9d3..f663404bfd 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/redhat/packages.yml +++ b/ansible/playbooks/roles/repository/files/download-requirements/requirements/x86_64/redhat/packages.yml @@ -7,6 +7,7 @@ prereq-packages: - 'createrepo_c' - 'createrepo_c-libs' - 'drpm' + - 'python3-createrepo_c' # installed by RHEL 7 in-place upgrade # --- httpd --- - 'apr' - 'apr-util' @@ -36,9 +37,9 @@ packages: - 'docker-ce-cli-20.10.8' - 'docker-ce-rootless-extras-20.10.8' - 'elasticsearch-curator-5.8.3' - - 'elasticsearch-oss-7.10.2' # for opendistroforelasticsearch & logging roles + - 'elasticsearch-oss-7.10.2' # for opensearch & logging roles - 'ethtool' - - 'filebeat-7.9.2' + - 'filebeat-7.12.1' - 'firewalld' - 'fontconfig' # for grafana - 'fping' @@ -68,14 +69,7 @@ packages: - 'nmap-ncat' - 'nss' # for java-1.8.0-openjdk-headless - 'nss-softokn' # for nss - # Open Distro for Elasticsearch plugins are installed individually to not download them twice in different versions (as dependencies of opendistroforelasticsearch package) - - 'opendistro-alerting-1.13.1.*' - - 'opendistro-index-management-1.13.1.*' - - 'opendistro-job-scheduler-1.13.0.*' - - 'opendistro-performance-analyzer-1.13.0.*' - - 'opendistro-security-1.13.1.*' - - 'opendistro-sql-1.13.0.*' - - 'opendistroforelasticsearch-kibana-1.13.1' # kibana 
has shorter version
+  - 'ntsysv' # for python36
   - 'openssl'
   - 'perl' # for vim
   - 'perl-Getopt-Long' # for vim
@@ -110,6 +104,15 @@ packages:
   - 'samba-client-libs' # for samba-client
   - 'samba-common'
   - 'samba-libs' # for cifs-utils
+  - 'sssd' # needed for samba packages installation
+  - 'sssd-client' # needed for sssd upgrade
+  - 'sssd-common' # needed for sssd upgrade
+  - 'sssd-ad' # needed for sssd upgrade
+  - 'sssd-ipa' # needed for sssd upgrade
+  - 'sssd-kcm' # needed for sssd upgrade
+  - 'sssd-krb5' # needed for sssd upgrade
+  - 'sssd-ldap' # needed for sssd upgrade
+  - 'sssd-proxy' # needed for sssd upgrade
   - 'slirp4netns' # for docker-ce-rootless-extras
   - 'sysstat'
   - 'tar'
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/apt.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt.py
similarity index 100%
rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/apt.py
rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt.py
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/apt_cache.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt_cache.py
similarity index 67%
rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/apt_cache.py
rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt_cache.py
index 02146000bf..e0b3f7b173 100644
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/command/apt_cache.py
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt_cache.py
@@ -26,6 +26,7 @@ def __get_package_candidate_version(self, package: str, version: str = '') -> st
         output_lines: List[str] = policy_output.split('\n')
         if version:  # confirm that the wanted version is available
+            version = version.rstrip('*')
             for line in output_lines:
                 if version in line:
                     return version
@@ -44,7 +45,7 @@ def get_package_info(self, package: str, version: str = '') -> Dict[str, str]:
         :param version: optional argument to use specific `package`'s version
         :returns: structured cached `package` info
         """
-        show_args: List[str] = ['show', package]
+        show_args: List[str] = ['show', f'{package}={version}' if version else package]
         show_output = self.run(show_args).stdout
         version_info: str = ''
@@ -65,11 +66,44 @@ def get_package_info(self, package: str, version: str = '') -> Dict[str, str]:
         return info
-    def get_package_dependencies(self, package: str) -> List[str]:
+    def __parse_apt_cache_depends(self, stdout: str) -> List[str]:
+        """
+        Parse output from `apt-cache depends`.
+        For deps with alternatives, only the first package is chosen.
+        Virtual packages are replaced by the first candidate.
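Before the docstring and implementation continue below, the parsing rules just stated can be made concrete with a minimal standalone sketch (not part of the patch; the output sample and package names are illustrative, modelled on the unit-test data added later in this patch):

    from typing import List

    def parse_depends(stdout: str) -> List[str]:
        # Mirrors __parse_apt_cache_depends: '|Depends:' opens an alternative
        # group (the first entry wins), '<...>' marks a virtual package whose
        # first concrete candidate on the following line is taken instead.
        alternative_found = is_alternative = virt_pkg_found = False
        deps: List[str] = []
        for dep in stdout.strip().splitlines():
            dep = dep.replace(' ', '')  # remove white spaces
            if virt_pkg_found and not is_alternative:
                deps.append(dep)  # first candidate replaces the virtual package
                virt_pkg_found = False
            if 'Depends:' in dep:
                is_alternative = alternative_found
                alternative_found = dep.startswith('|Depends:')
                virt_pkg_found = '<' in dep and '>' in dep
                if not virt_pkg_found and not is_alternative:
                    deps.append(dep.split('Depends:')[-1])
        return deps

    # Hypothetical `apt-cache depends rabbitmq-server` output:
    sample = '''rabbitmq-server
      Depends: adduser
     |Depends: erlang-base
      Depends: erlang-base-hipe
      Depends: <python3>
        python3'''
    print(parse_depends(sample))  # ['adduser', 'erlang-base', 'python3']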
+
+        :param stdout: output from `apt-cache depends` command
+        :returns: required dependencies
+        """
+        alternative_found: bool = False
+        is_alternative: bool = False
+        virt_pkg_found: bool = False
+        deps: List[str] = []
+        for dep in stdout.strip().splitlines():
+
+            dep = dep.replace(' ', '')  # remove white spaces
+
+            if virt_pkg_found and not is_alternative:
+                deps.append(dep)  # pick first from the list
+                virt_pkg_found = False
+
+            if 'Depends:' in dep:  # dependency found
+                is_alternative = alternative_found
+                alternative_found = dep.startswith('|Depends:')
+                virt_pkg_found = '<' in dep and '>' in dep
+
+                if not virt_pkg_found and not is_alternative:
+                    dep = dep.split('Depends:')[-1]  # remove "Depends:" prefix
+                    deps.append(dep)
+
+        return deps
+
+    def get_package_dependencies(self, package: str, version: str = '') -> List[str]:
         """
         Interface for `apt-cache depends`
         :param package: for which dependencies will be gathered
+        :param version: optional argument to use specific `package`'s version
         :returns: all required dependencies for `package`
         """
         args: List[str] = ['depends',
@@ -80,38 +114,7 @@ def get_package_dependencies(self, package: str) -> List[str]:
                            '--no-recommends',
                            '--no-suggests',
                            '--no-conflicts',
                            '--no-breaks',
                            '--no-replaces',
                            '--no-enhances',
                            '--no-pre-depends',
-                           package]
+                           f'{package}={version}' if version else package]
         raw_output = self.run(args).stdout
-
-        virt_pkg: bool = False  # True - virtual package detected, False - otherwise
-        virt_pkgs: List[str] = []  # cached virtual packages options
-        deps: List[str] = []
-        for dep in raw_output.split('\n'):
-            if not dep:  # skip empty lines
-                continue
-
-            dep = dep.replace(' ', '')  # remove white spaces
-
-            if virt_pkg:
-                virt_pkgs.append(dep)  # cache virtual package option
-
-            if '<' in dep and '>' in dep:  # virtual package, more than one dependency to choose
-                virt_pkg = True
-                continue
-
-            if 'Depends:' in dep:  # new dependency found
-                virt_pkg = False
-
-                if virt_pkgs:  # previous choices cached
-                    # avoid conflicts by choosing only non-cached dependency:
-                    if not any(map(lambda elem: elem in deps, virt_pkgs)):
-                        deps.append(virt_pkgs[0].split('Depends:')[-1])  # pick first from the list
-                    virt_pkgs.clear()
-
-            dep = dep.split('Depends:')[-1]  # remove "Depends:
-
-            if not virt_pkg and dep != package:  # avoid adding package itself
-                deps.append(dep)
-
-        return list(set(deps))
+        return self.__parse_apt_cache_depends(raw_output)
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/apt_key.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt_key.py
similarity index 100%
rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/apt_key.py
rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/debian/apt_key.py
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_config_manager.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_config_manager.py
deleted file mode 100644
index 3926910c1f..0000000000
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_config_manager.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from src.command.command import Command
-
-
-class DnfConfigManager(Command):
-    """
-    Interface for `dnf config-manager`
-    """
-
-    def __init__(self, retries: int):
-        super().__init__('dnf', retries)
-
-    def add_repo(self, repo: str):
-        self.run(['config-manager', '--add-repo', repo])
-
-    def disable_repo(self, repo: str):
-        self.run(['config-manager', '--set-disabled', repo])
-
-    def
enable_repo(self, repo: str): - self.run(['config-manager', '--set-enabled', repo]) diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf.py similarity index 63% rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf.py rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf.py index 45eaf1032c..b4db5124f9 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf.py @@ -4,24 +4,41 @@ from src.error import CriticalError -class Dnf(Command): +class DnfBase(Command): """ - Interface for `dnf` + Base class for `dnf` interfaces """ def __init__(self, retries: int): super().__init__('dnf', retries) - def update(self, enablerepo: str = None, - package: str = None, - disablerepo: str = None, + def _filter_non_critical_errors(self, stderr: str) -> str: + output_lines = [line for line in stderr.split('\n') + if not line.startswith('Failed to set locale, defaulting to')] + + return '\n'.join(output_lines) + + +class Dnf(DnfBase): + """ + Interface for `dnf` + """ + + def update(self, package: str = '', + disablerepo: str = '', + enablerepo: str = '', + ignore_already_installed_error: bool = False, + releasever: str = '', assume_yes: bool = True): """ Interface for `dnf update` - :param enablerepo: :param package: :param disablerepo: + :param enablerepo: + :param ignore_already_installed_error: if set to True, + `The same or higher version of {package} is already installed` error is ignored + :param releasever: :param assume_yes: if set to True, -y flag will be used """ update_parameters: List[str] = ['update'] @@ -29,25 +46,35 @@ def update(self, enablerepo: str = None, if assume_yes: update_parameters.append('-y') - if package is not None: + if package: update_parameters.append(package) - if disablerepo is not None: + if disablerepo: update_parameters.append(f'--disablerepo={disablerepo}') - if enablerepo is not None: + if enablerepo: update_parameters.append(f'--enablerepo={enablerepo}') + if releasever: + update_parameters.append(f'--releasever={releasever}') + proc = self.run(update_parameters) if 'error' in proc.stdout: raise CriticalError( f'Found an error. dnf update failed for package `{package}`, reason: `{proc.stdout}`') - if proc.stderr: + + filtered_stderr: str = self._filter_non_critical_errors(proc.stderr) + + if filtered_stderr: + if (ignore_already_installed_error + and all(string in filtered_stderr for string in + ('The same or higher version', 'is already installed, cannot update it.'))): + return + raise CriticalError( f'dnf update failed for packages `{package}`, reason: `{proc.stderr}`') - def install(self, package: str, assume_yes: bool = True): """ @@ -60,13 +87,13 @@ def install(self, package: str, proc = self.run(['install', no_ask, package], accept_nonzero_returncode=True) if proc.returncode != 0: - if not 'does not update' in proc.stdout: # trying to reinstall package with url + if 'does not update' not in proc.stdout: # trying to reinstall package with url raise CriticalError(f'dnf install failed for `{package}`, reason `{proc.stdout}`') if 'error' in proc.stdout: raise CriticalError( f'Found an error. 
dnf install failed for package `{package}`, reason: `{proc.stdout}`') - if proc.stderr: + if self._filter_non_critical_errors(proc.stderr): raise CriticalError( f'dnf install failed for package `{package}`, reason: `{proc.stderr}`') @@ -81,29 +108,32 @@ def remove(self, package: str, no_ask: str = '-y' if assume_yes else '' self.run(['remove', no_ask, package]) + def __get_repo_ids(self, repoinfo_extra_args: List[str] = None) -> List[str]: + repoinfo_args: List[str] = ['--quiet', '-y'] + + if repoinfo_extra_args: + repoinfo_args.extend(repoinfo_extra_args) + + output = self.run(['repoinfo'] + repoinfo_args).stdout + repo_ids: List[str] = [] + + for line in output.splitlines(): + if 'Repo-id' in line: # e.g. `Repo-id : epel` + repo_ids.append(line.split(':')[1].strip()) + + return repo_ids + def is_repo_enabled(self, repo: str) -> bool: - output = self.run(['repolist', - '--enabled', - '--quiet', - '-y']).stdout - if repo in output: + enabled_repos = self.__get_repo_ids() + + if repo in enabled_repos: return True return False - def find_rhel_repo_id(self, patterns: List[str]) -> List[str]: - output = self.run(['repolist', - '--all', - '--quiet', - '-y']).stdout - - repos: List[str] = [] - for line in output.split('\n'): - for pattern in patterns: - if pattern in line: - repos.append(pattern) - - return repos + def are_repos_enabled(self, repos: List[str]) -> bool: + enabled_repos: List[str] = self.__get_repo_ids() + return all(repo in enabled_repos for repo in repos) def accept_keys(self): # to accept import of repo's GPG key (for repo_gpgcheck=1) diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_config_manager.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_config_manager.py new file mode 100644 index 0000000000..2d75b12db8 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_config_manager.py @@ -0,0 +1,33 @@ +from src.command.command import Command +from src.error import DnfVariableNotfound + + +class DnfConfigManager(Command): + """ + Interface for `dnf config-manager` + """ + + def __init__(self, retries: int): + super().__init__('dnf', retries) + + def add_repo(self, repo: str): + self.run(['config-manager', '--add-repo', repo]) + + def disable_repo(self, repo: str): + self.run(['config-manager', '--set-disabled', repo]) + + def enable_repo(self, repo: str): + self.run(['config-manager', '--set-enabled', repo]) + + def get_variable(self, name: str) -> str: + process = self.run(['config-manager', '--dump-variables']) + variables = [x for x in process.stdout.splitlines() if '=' in x] + value = None + + for var in variables: + chunks = var.split('=', maxsplit=1) + if name == chunks[0].strip(): + value = chunks[1].strip() + return value + + raise DnfVariableNotfound(f'Variable not found: {name}') diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_download.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_download.py similarity index 86% rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_download.py rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_download.py index 24f59df960..c6240b49c1 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_download.py +++ 
b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_download.py @@ -1,18 +1,15 @@ from pathlib import Path from typing import List -from src.command.command import Command +from src.command.redhat.dnf import DnfBase from src.error import CriticalError -class DnfDownload(Command): +class DnfDownload(DnfBase): """ Interface for `dnf download` """ - def __init__(self, retries: int): - super().__init__('dnf', retries) - def download_packages(self, packages: List[str], archlist: List[str], destdir: Path, @@ -38,6 +35,6 @@ def download_packages(self, packages: List[str], if 'error' in process.stdout: raise CriticalError( f'Found an error. dnf download failed for packages `{packages}`, reason: `{process.stdout}`') - if process.stderr: + if self._filter_non_critical_errors(process.stderr): raise CriticalError( f'dnf download failed for packages `{packages}`, reason: `{process.stderr}`') diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_repoquery.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_repoquery.py similarity index 100% rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/dnf_repoquery.py rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/dnf_repoquery.py diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/rpm.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/rpm.py similarity index 100% rename from ansible/playbooks/roles/repository/files/download-requirements/src/command/rpm.py rename to ansible/playbooks/roles/repository/files/download-requirements/src/command/redhat/rpm.py diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/command/toolchain.py b/ansible/playbooks/roles/repository/files/download-requirements/src/command/toolchain.py index 5aba587f13..7c2488c175 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/src/command/toolchain.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/command/toolchain.py @@ -1,17 +1,17 @@ import logging from typing import Dict -from src.command.apt import Apt -from src.command.apt_cache import AptCache -from src.command.apt_key import AptKey from src.command.crane import Crane -from src.command.dnf_repoquery import DnfRepoquery -from src.command.rpm import Rpm +from src.command.debian.apt import Apt +from src.command.debian.apt_cache import AptCache +from src.command.debian.apt_key import AptKey +from src.command.redhat.dnf import Dnf +from src.command.redhat.dnf_config_manager import DnfConfigManager +from src.command.redhat.dnf_download import DnfDownload +from src.command.redhat.dnf_repoquery import DnfRepoquery +from src.command.redhat.rpm import Rpm from src.command.tar import Tar from src.command.wget import Wget -from src.command.dnf import Dnf -from src.command.dnf_config_manager import DnfConfigManager -from src.command.dnf_download import DnfDownload from src.config.os_type import OSFamily diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/config/config.py b/ansible/playbooks/roles/repository/files/download-requirements/src/config/config.py index b4c9d53b43..7ca30f1529 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/src/config/config.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/config/config.py @@ -4,10 +4,10 @@ from 
itertools import chain from os import uname from pathlib import Path -from typing import List +from typing import Any, Dict, List, Set from src.config.os_type import OSArch, OSConfig, OSType, SUPPORTED_OS_TYPES -from src.error import CriticalError +from src.error import CriticalError, OldManifestVersion class Config: @@ -17,23 +17,28 @@ def __init__(self, argv: List[str]): self.dest_files: Path self.dest_grafana_dashboards: Path self.dest_images: Path + self.dest_manifest: Path self.dest_packages: Path self.distro_subdir: Path self.is_log_file_enabled: bool self.log_file: Path + self.log_level: int self.os_arch: OSArch self.os_type: OSType self.pyyaml_installed: bool = False self.repo_path: Path self.repos_backup_file: Path self.reqs_path: Path - self.rerun: bool + self.rerun: bool = False self.retries: int self.script_path: Path + self.verbose_mode: bool self.was_backup_created: bool = False self.__add_args(argv) + self.__LINE_SIZE: int = 50 # used in printing + if not self.rerun: self.__log_info_summary() @@ -43,8 +48,7 @@ def __log_info_summary(self): """ lines: List[str] = ['Info summary:'] - LINE_SIZE: int = 50 - lines.append('-' * LINE_SIZE) + lines.append('-' * self.__LINE_SIZE) lines.append(f'OS Arch: {self.os_arch.value}') lines.append(f'OS Type: {self.os_type.os_name}') @@ -56,12 +60,16 @@ def __log_info_summary(self): lines.append(f'- packages: {str(self.dest_packages)}') lines.append(f'Repos backup file: {str(self.repos_backup_file)}') + if self.dest_manifest: + lines.append(f'Manifest used: {str(self.dest_manifest.absolute())}') + if self.is_log_file_enabled: lines.append(f'Log file location: {str(self.log_file.absolute())}') + lines.append(f'Verbose mode: {self.verbose_mode}') lines.append(f'Retries count: {self.retries}') - lines.append('-' * LINE_SIZE) + lines.append('-' * self.__LINE_SIZE) logging.info('\n'.join(lines)) @@ -93,6 +101,12 @@ def __create_parser(self) -> ArgumentParser: parser.add_argument('--no-logfile', action='store_true', dest='no_logfile', help='no logfile will be created') + parser.add_argument('--verbose', '-v', action='store_true', dest='verbose', + help='more verbose output will be provided') + + parser.add_argument('--manifest', '-m', metavar='MANIFEST_PATH', type=Path, action='store', dest='manifest', + help='manifest file generated by epicli') + # offline mode rerun options: parser.add_argument('--rerun', action='store_true', dest='rerun', default=False, help=SUPPRESS) @@ -144,18 +158,19 @@ def __setup_logger(self, log_level: str, log_file: Path, no_logfile: bool): 'info': logging.INFO, 'debug': logging.DEBUG } + self.log_level = log_levels[log_level.lower()] log_format = '%(asctime)s [%(levelname)s]: %(message)s' # add stdout logger: - logging.basicConfig(stream=sys.stdout, level=log_levels[log_level.lower()], + logging.basicConfig(stream=sys.stdout, level=self.log_level, format=log_format) # add log file: if not no_logfile: root_logger = logging.getLogger() file_handler = logging.FileHandler(log_file) - file_handler.setLevel(log_levels[log_level.lower()]) + file_handler.setLevel(self.log_level) file_handler.setFormatter(logging.Formatter(fmt=log_format)) root_logger.addHandler(file_handler) @@ -195,7 +210,130 @@ def __add_args(self, argv: List[str]): self.repos_backup_file = Path(args['repos_backup_file']) self.retries = args['retries'] self.is_log_file_enabled = False if args['no_logfile'] else True + self.dest_manifest = args['manifest'] or None + self.verbose_mode = True if self.log_level == logging.DEBUG else args['verbose'] # offline mode 
self.rerun = args['rerun'] self.pyyaml_installed = args['pyyaml_installed'] + + def __print_parsed_manifest_data(self, requirements: Dict[str, Any], manifest: Dict[str, Any]): + lines: List[str] = ['Manifest summary:'] + + lines.append('-' * self.__LINE_SIZE) + + lines.append('Components requested:') + for component in manifest['requested-components']: + lines.append(f'- {component}') + + lines.append('') + + lines.append('Features requested:') + for feature in manifest['requested-features']: + lines.append(f'- {feature}') + + for reqs in [('files', 'Files'), + ('grafana-dashboards', 'Dashboards')]: + reqs_to_download = sorted(requirements[reqs[0]]) + if reqs_to_download: + lines.append('') + lines.append(f'{reqs[1]} to download:') + for req_to_download in reqs_to_download: + lines.append(f'- {req_to_download}') + + images = requirements['images'] + images_to_print: List[str] = [] + for image_category in images: + for image in images[image_category]: + images_to_print.append(image) + + if images_to_print: + lines.append('') + lines.append('Images to download:') + for image in sorted(images_to_print): + lines.append(f'- {image}') + + lines.append('-' * self.__LINE_SIZE) + + logging.info('\n'.join(lines)) + + def __filter_files(self, requirements: Dict[str, Any], + manifest: Dict[str, Any]): + """ + See :func:`~config.Config.__filter_manifest` + """ + files = requirements['files'] + files_to_exclude: List[str] = [] + for file in files: + deps = files[file]['deps'] + if all(dep not in manifest['requested-features'] for dep in deps) and deps != 'default': + files_to_exclude.append(file) + + if files_to_exclude: + requirements['files'] = {url: data for url, data in files.items() if url not in files_to_exclude} + + def __filter_images(self, requirements: Dict[str, Any], manifest: Dict[str, Any]): + """ + See :func:`~config.Config.__filter_manifest` + """ + # prepare image groups: + images = requirements['images'] + images_to_download: Dict[str, Dict] = {} + selected_images: Set[str] = set() + for image_group in images: + images_to_download[image_group] = {} + + if len(manifest['requested-images']): # if image-registry document used: + for image_group in images: + for image, data in images[image_group].items(): + if image in manifest['requested-images'] and image not in selected_images: + images_to_download[image_group][image] = data + selected_images.add(image) + else: # otherwise check features used: + for image_group in images: + if image_group in manifest['requested-features']: + for image, data in images[image_group].items(): + if image not in selected_images: + images_to_download[image_group][image] = data + selected_images.add(image) + + if images_to_download: + requirements['images'] = images_to_download + + def __filter_manifest(self, requirements: Dict[str, Any], + manifest: Dict[str, Any]): + """ + Filter entries in the `requirements` based on the parsed `manifest` documents. + + :param requirements: parsed requirements which will be filtered based on the `manifest` content + :param manifest: parsed documents which will be used to filter `requirements` + """ + if 'grafana' not in manifest['requested-features']: + requirements['grafana-dashboards'] = [] + + self.__filter_files(requirements, manifest) + self.__filter_images(requirements, manifest) + + def read_manifest(self, requirements: Dict[str, Any]): + """ + Construct ManifestReader and parse only required data. 
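The effect of this filtering is easiest to see in a small self-contained sketch; it reproduces only the `deps`-intersection rule of `__filter_files` above, and the file URLs and feature names are invented (the `read_manifest` docstring continues below):

    from typing import Any, Dict

    def filter_files(requirements: Dict[str, Any], manifest: Dict[str, Any]) -> None:
        # Keep a file only if at least one of its deps is a requested feature
        # (or it is marked as a 'default' dependency).
        files = requirements['files']
        excluded = [url for url, data in files.items()
                    if all(dep not in manifest['requested-features'] for dep in data['deps'])
                    and data['deps'] != 'default']
        if excluded:
            requirements['files'] = {url: data for url, data in files.items()
                                     if url not in excluded}

    reqs = {'files': {'https://example.com/grafana.tar.gz': {'deps': ['grafana']},
                      'https://example.com/rabbitmq.tar.gz': {'deps': ['rabbitmq']}}}
    filter_files(reqs, {'requested-features': ['rabbitmq']})
    print(list(reqs['files']))  # ['https://example.com/rabbitmq.tar.gz']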
+
+        Entries that are not needed will be removed from `requirements`.
+
+        :param requirements: parsed requirements which will be filtered based on the manifest output
+        """
+        if not self.dest_manifest:
+            return
+
+        # Needs to be imported here as libyaml might be missing on the OS,
+        # which could cause a crash on config.py import.
+        from src.config.manifest_reader import ManifestReader
+
+        mreader = ManifestReader(self.dest_manifest, self.os_arch)
+        try:
+            manifest = mreader.parse_manifest()
+            self.__filter_manifest(requirements, manifest)
+
+            if self.verbose_mode:
+                self.__print_parsed_manifest_data(requirements, manifest)
+        except OldManifestVersion:
+            pass  # old manifest used, cannot optimize download time
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/config/manifest_reader.py b/ansible/playbooks/roles/repository/files/download-requirements/src/config/manifest_reader.py
new file mode 100644
index 0000000000..3276aec6be
--- /dev/null
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/config/manifest_reader.py
@@ -0,0 +1,116 @@
+from pathlib import Path
+from typing import Any, Callable, Dict, List, Set
+
+import yaml
+
+from src.config.os_type import OSArch
+from src.config.version import Version
+from src.error import CriticalError, OldManifestVersion
+
+
+def load_yaml_file_all(filename: Path) -> List[Any]:
+    try:
+        with open(filename, encoding="utf-8") as req_handler:
+            return list(yaml.safe_load_all(req_handler))
+    except yaml.YAMLError as yaml_err:
+        raise CriticalError(f'Failed loading: `{yaml_err}`') from yaml_err
+    except Exception as err:
+        raise CriticalError(f'Failed loading: `{filename}`') from err
+
+
+def load_yaml_file(filename: Path) -> Any:
+    return load_yaml_file_all(filename)[0]
+
+
+class ManifestReader:
+    """
+    Load the manifest file and call defined parser methods to process required documents.
+    The main entry point is :func:`~manifest_reader.ManifestReader.parse_manifest`, which returns the formatted manifest output.
+    """
+
+    def __init__(self, dest_manifest: Path, arch: OSArch):
+        self.__dest_manifest = dest_manifest
+        self.__os_arch: str = arch.value
+
+        self.__k8s_as_cloud_service: bool = False
+
+        self.__requested_components: Set = set()
+        self.__requested_features: Set = set()
+        self.__requested_images: Set = set()
+
+    def __parse_cluster_doc(self, cluster_doc: Dict):
+        """
+        Parse the `epiphany-cluster` document and extract only the used components.
+
+        :param cluster_doc: handler to an `epiphany-cluster` document
+        :raises:
+            :class:`OldManifestVersion`: raised when an old manifest version is used
+        """
+        if Version(cluster_doc['version']) < Version('2.0.1'):
+            raise OldManifestVersion(cluster_doc['version'])
+
+        try:
+            self.__k8s_as_cloud_service = cluster_doc['specification']['cloud']['k8s_as_cloud_service']
+        except KeyError:
+            self.__k8s_as_cloud_service = False
+
+        components = cluster_doc['specification']['components']
+        for component in components:
+            if components[component]['count'] > 0:
+                self.__requested_components.add(component)
+
+    def __parse_feature_mappings_doc(self, feature_mappings_doc: Dict):
+        """
+        Parse the `configuration/feature-mappings` document and extract only the used features (based on the `epiphany-cluster` doc).
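A condensed sketch of what this parser and the feature-mappings parser below extract; component and feature names are invented, and in the real flow the mappings come from the `configuration/feature-mappings` document:

    # Components with count > 0 are considered requested:
    components = {'kubernetes_master': {'count': 1}, 'monitoring': {'count': 0}}
    requested_components = {name for name, spec in components.items() if spec['count'] > 0}
    print(requested_components)  # {'kubernetes_master'}

    # Each requested component is then expanded into its mapped features:
    mappings = {'kubernetes_master': ['kubernetes-master', 'node-exporter'],
                'monitoring': ['prometheus', 'grafana']}
    requested_features = {feature
                          for component in mappings.keys() & requested_components
                          for feature in mappings[component]}
    print(sorted(requested_features))  # ['kubernetes-master', 'node-exporter']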
+ + :param feature_mappings_doc: handler to a `configuration/feature-mappings` document + """ + mappings = feature_mappings_doc['specification']['mappings'] + for mapping in mappings.keys() & self.__requested_components: + for feature in mappings[mapping]: + self.__requested_features.add(feature) + + if self.__k8s_as_cloud_service: + self.__requested_features.add('k8s-as-cloud-service') + + def __parse_image_registry_doc(self, image_registry_doc: Dict): + """ + Parse `configuration/image-registry` document and extract only used images. + + :param image_registry_doc: handler to a `configuration/image-registry` document + """ + self.__requested_images.add(image_registry_doc['specification']['registry_image']['name']) + + target_arch_images = image_registry_doc['specification']['images_to_load'][self.__os_arch] + for target_images in target_arch_images: + features = target_arch_images[target_images] + for feature in features: + for image in features[feature]: + self.__requested_images.add(image['name']) + + def parse_manifest(self) -> Dict[str, Any]: + """ + Load the manifest file, call parsers on required docs and return formatted output. + """ + required_docs: Set[str] = {'epiphany-cluster', 'configuration/feature-mappings'} + parse_doc: Dict[str, Callable] = { + 'epiphany-cluster': self.__parse_cluster_doc, + 'configuration/feature-mappings': self.__parse_feature_mappings_doc, + 'configuration/image-registry': self.__parse_image_registry_doc + } + + parsed_docs: Set[str] = set() + for manifest_doc in load_yaml_file_all(self.__dest_manifest): + try: + kind: str = manifest_doc['kind'] + parse_doc[kind](manifest_doc) + parsed_docs.add(kind) + except KeyError: + pass + + if len(parsed_docs) < len(required_docs): + raise CriticalError(f'ManifestReader - could not find document(s): {parsed_docs ^ required_docs}') + + return {'requested-components': sorted(list(self.__requested_components)), + 'requested-features': sorted(list(self.__requested_features)), + 'requested-images': sorted(list(self.__requested_images))} diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/config/os_type.py b/ansible/playbooks/roles/repository/files/download-requirements/src/config/os_type.py index cb0cd14f5a..f95f938c6a 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/src/config/os_type.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/config/os_type.py @@ -5,7 +5,7 @@ class OSArch(Enum): """ Supported architecture types """ X86_64 = 'x86_64' - ARM64 = 'arm64' + ARM64 = 'aarch64' class OSFamily(Enum): @@ -48,5 +48,7 @@ def os_aliases(self) -> List[str]: OSType.RHEL, OSType.Ubuntu ], - OSArch.ARM64: [] + OSArch.ARM64: [ + OSType.Almalinux + ] } diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/config/version.py b/ansible/playbooks/roles/repository/files/download-requirements/src/config/version.py new file mode 100644 index 0000000000..fd0de4a2ce --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/src/config/version.py @@ -0,0 +1,24 @@ +class Version: + """ + Type used in comparing epiphany version. + """ + + def __init__(self, ver: str): + major, minor, patch = ver.split('.') + self.major: int = int(major) + self.minor: int = int(minor) + self.patch: int = int(''.join(filter(lambda char: char.isdigit(), patch))) # handle `1dev`, `2rc`, etc. 
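Putting it together, the dictionary returned by `parse_manifest()` and consumed by `Config.read_manifest` might look like this (values invented):

    manifest = {
        'requested-components': ['kubernetes_master', 'postgresql'],
        'requested-features': ['kubernetes-master', 'postgresql', 'repository'],
        'requested-images': ['registry:2.8.0'],
    }
    # A non-empty 'requested-images' list makes __filter_images select exact
    # images; when it is empty, images are selected per requested feature group.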
+
+    def __lt__(self, rhs):
+        if self.major < rhs.major:
+            return True
+
+        if self.major == rhs.major:
+            if self.minor < rhs.minor:
+                return True
+
+            if self.minor == rhs.minor:
+                if self.patch < rhs.patch:
+                    return True
+
+        return False
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/error.py b/ansible/playbooks/roles/repository/files/download-requirements/src/error.py
index 0c6db7cbf7..6029e014b7 100644
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/error.py
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/error.py
@@ -1,7 +1,13 @@
 import logging
-class DownloadRequirementsError(Exception):
+class DownloadRequirementsException(Exception):
+    """
+    Base class for all exceptions raised during the script runtime.
+    """
+
+
+class DownloadRequirementsError(DownloadRequirementsException):
     """
     Base class for all non-standard errors raised during a script run.
     """
@@ -10,6 +16,15 @@ def __init__(self, msg: str):
         logging.error(msg)
+class DownloadRequirementsWarning(DownloadRequirementsException):
+    """
+    Base class for all non-critical issues raised during a script run.
+    """
+    def __init__(self, msg: str):
+        super().__init__()
+        logging.warning(msg)
+
+
 class CriticalError(DownloadRequirementsError):
     """
     Raised when there was an error that could not be fixed by
@@ -17,6 +32,12 @@ class CriticalError(DownloadRequirementsError):
     """
+class DnfVariableNotfound(CriticalError):
+    """
+    Raised when a DNF variable was not found.
+    """
+
+
 class PackageNotfound(CriticalError):
     """
     Raised when there was no package found by the query tool.
@@ -29,4 +50,12 @@ class ChecksumMismatch(DownloadRequirementsError):
     """
     def __init__(self, msg: str):
         super().__init__(f'{msg} - download failed due to checksum mismatch, '
-                         'WARNING someone might have replaced the file')
+                         'WARNING someone might have replaced the file.')
+
+
+class OldManifestVersion(DownloadRequirementsWarning):
+    """
+    Raised when an old manifest version is used.
+    """
+    def __init__(self, version: str):
+        super().__init__(f'Old manifest version used: `{version}`, no optimization will be performed.')
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/base_mode.py b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/base_mode.py
index 3ae4ed2fba..535b652371 100644
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/base_mode.py
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/base_mode.py
@@ -4,25 +4,14 @@
 from pathlib import Path
 from typing import Any, Dict
-import yaml
-
 from src.command.toolchain import Toolchain, TOOLCHAINS
 from src.config.config import Config, OSArch
+from src.config.manifest_reader import load_yaml_file
 from src.crypt import SHA_ALGORITHMS
 from src.downloader import Downloader
 from src.error import CriticalError, ChecksumMismatch
-def load_yaml_file(filename: Path) -> Any:
-    try:
-        with open(filename, encoding="utf-8") as req_handler:
-            return yaml.safe_load(req_handler)
-    except yaml.YAMLError as yaml_err:
-        raise CriticalError(f'Failed loading: `{yaml_err}`') from yaml_err
-    except Exception as err:
-        raise CriticalError(f'Failed loading: `{filename}`') from err
-
-
 class BaseMode:
     """
     An abstract class for running specific operations on target OS.
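Returning to the `Version` helper added above: it strips non-digit suffixes from the patch field, which gives the ordering below (mirroring the unit tests added later in this patch; the import path assumes the layout introduced here):

    from src.config.version import Version

    assert Version('1.2.4') < Version('2.3.0dev')  # major decides first
    assert Version('1.2.4') < Version('1.3.0dev')  # then minor
    assert Version('1.2.4') < Version('1.2.5dev')  # '5dev' -> patch 5
    assert not Version('1.2.4') < Version('1.1.0')
    # Caveat: all digits in the patch field are concatenated, so a hypothetical
    # '1.2.1rc2' would parse as patch 12 rather than patch 1.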
@@ -35,6 +24,7 @@ def __init__(self, config: Config): self._repositories: Dict[str, Dict] = self.__parse_repositories() self._requirements: Dict[str, Any] = self.__parse_requirements() self._tools: Toolchain = TOOLCHAINS[self._cfg.os_type.os_family](self._cfg.retries) + self._cfg.read_manifest(self._requirements) def __parse_repositories(self) -> Dict[str, Dict]: """ @@ -191,12 +181,19 @@ def _download_images(self): Download images under `self._requirements['images']` using Crane. """ platform: str = 'linux/amd64' if self._cfg.os_arch == OSArch.X86_64 else 'linux/arm64' - downloader: Downloader = Downloader(self._requirements['images'], + images = self._requirements['images'] + + images_to_download: Dict[str, Dict] = {} + for image_group in images: # kubernetes-master, rabbitmq, etc. + for image, data in images[image_group].items(): + images_to_download[image] = data + + downloader: Downloader = Downloader(images_to_download, 'sha1', self._tools.crane.pull, {'platform': platform}) - for image in self._requirements['images']: + for image in images_to_download: url, version = image.split(':') filename = Path(f'{url.split("/")[-1]}-{version}.tar') # format: image_version.tar @@ -209,9 +206,15 @@ def _cleanup(self): """ pass - def _clean_up_repository_files(self): + def _cleanup_packages(self): """ - Additional routines before unpacking backup to remove repository files under the /etc directory. + Remove installed packages. + """ + pass + + def _remove_repository_files(self): + """ + Additional routines before unpacking backup to remove all repository files under the /etc directory. """ pass @@ -221,7 +224,7 @@ def __restore_repositories(self): """ if self._cfg.repos_backup_file.exists() and self._cfg.repos_backup_file.stat().st_size: logging.info('Restoring repository files...') - self._clean_up_repository_files() + self._remove_repository_files() self._tools.tar.unpack(filename=self._cfg.repos_backup_file, directory=Path('/'), absolute_names=True, @@ -239,10 +242,12 @@ def run(self): """ # add required directories self._cfg.dest_files.mkdir(exist_ok=True, parents=True) - self._cfg.dest_grafana_dashboards.mkdir(exist_ok=True, parents=True) self._cfg.dest_images.mkdir(exist_ok=True, parents=True) self._cfg.dest_packages.mkdir(exist_ok=True, parents=True) + if self._requirements['grafana-dashboards']: + self._cfg.dest_grafana_dashboards.mkdir(exist_ok=True, parents=True) + # provides tar which is required for backup logging.info('Installing base packages...') self._install_base_packages() @@ -269,9 +274,10 @@ def run(self): self.__download_files(self._requirements['files'], self._cfg.dest_files) logging.info('Done downloading files.') - logging.info('Downloading grafana dashboards...') - self.__download_grafana_dashboards() - logging.info('Done downloading grafana dashboards.') + if self._requirements['grafana-dashboards']: + logging.info('Downloading grafana dashboards...') + self.__download_grafana_dashboards() + logging.info('Done downloading grafana dashboards.') logging.info('Downloading Crane...') self.__download_crane() @@ -291,4 +297,9 @@ def run(self): self._cleanup() logging.info('Done running cleanup.') + # requires tar but has to be run after cleanup self.__restore_repositories() + + logging.info('Cleaning up installed packages...') + self._cleanup_packages() + logging.info('Done cleaning up installed packages.') diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/debian_family_mode.py 
b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/debian_family_mode.py
index e01b66a60c..686adfc871 100644
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/debian_family_mode.py
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/debian_family_mode.py
@@ -18,8 +18,8 @@ def __init__(self, config: Config):
         self.__installed_packages: List[str] = []
 
     def __create_repo_paths(self):
-        for repo in self._repositories.keys():
-            self._repositories[repo]['path'] = Path('/etc/apt/sources.list.d') / f'{repo}.list'
+        for repo_id, repo_item in self._repositories.items():
+            repo_item['path'] = Path('/etc/apt/sources.list.d') / f'{repo_id}.list'
 
     def _create_backup_repositories(self):
         if not self._cfg.repos_backup_file.exists():
@@ -39,25 +39,26 @@ def _install_base_packages(self):
         # install prerequisites which might be missing
         installed_packages = self._tools.apt.list_installed_packages()
+        # Ensure ca-certificates package is in the latest version
+        self._tools.apt.install('ca-certificates')
+
         for package in ['wget', 'gpg', 'curl', 'tar']:
             if package not in installed_packages:
-                self._tools.apt.install(package, assume_yes=True)
+                self._tools.apt.install(package)
                 self.__installed_packages.append(package)
                 logging.info(f'- {package}')
 
     def _add_third_party_repositories(self):
         # add third party keys
-        for repo in self._repositories:
-            data = self._repositories[repo]
+        for repo, data in self._repositories.items():
             key_file = Path(f'/tmp/{repo}')
             self._tools.wget.download(data['key'], key_file)
             self._tools.apt_key.add(key_file)
 
         # create repo files
-        for repo in self._repositories:
-            data = self._repositories[repo]
-            with data['path'].open(mode='a') as repo_handler:
-                repo_handler.write(data['content'])
+        for repo_file in self._repositories.values():
+            with repo_file['path'].open(mode='a') as repo_handler:
+                repo_handler.write(repo_file['content'])
 
         self._tools.apt.update()
 
@@ -84,16 +85,17 @@ def _download_packages(self):
             except ValueError:
                 package_base_name = package
 
-            package_info = self._tools.apt_cache.get_package_info(package_base_name, version.strip('*'))
+            package_info = self._tools.apt_cache.get_package_info(package_base_name, version)
+            fetched_version = package_info['Version']
             # Files downloaded by `apt download` cannot have custom names
             # and they always start with a package name + versioning and other info.
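One detail worth spelling out before the filename lookup continues below: Debian package versions may carry an epoch (`1:`), and `apt download` percent-encodes that colon in the file it writes, which is what the `replace(':', '%3a')` step accounts for. A tiny sketch with an invented package and version:

    fetched_version = '1:24.0.2-1'  # epoch-prefixed Debian version
    fetched_version_quoted = fetched_version.replace(':', '%3a')
    print(f'erlang-base_{fetched_version_quoted}_amd64.deb')
    # -> erlang-base_1%3a24.0.2-1_amd64.deb (filename as saved by `apt download`)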
             # Find if there is a file corresponding with its package name
+            fetched_version_quoted = fetched_version.replace(':', '%3a')
             try:
-                version = package_info['Version'].split(':')[-1]
                 found_pkg: Path = [pkg_file for pkg_file in self._cfg.dest_packages.iterdir() if
                                    pkg_file.name.startswith(f'{package_info["Package"]}_') and
-                                   version in pkg_file.name][0]
+                                   fetched_version_quoted in pkg_file.name][0]
 
                 if SHA_ALGORITHMS['sha256'](found_pkg) == package_info['SHA256']:
                     logging.debug(f'- {package} - checksum ok, skipped')
@@ -103,12 +105,12 @@
                 pass  # package not found
 
             # resolve dependencies for target package and if needed, download them first
-            deps: List[str] = self._tools.apt_cache.get_package_dependencies(package_base_name)
+            deps: List[str] = self._tools.apt_cache.get_package_dependencies(package_base_name, fetched_version)
             packages_to_download.extend(deps)
-            packages_to_download.append(package)
+            packages_to_download.append(f'{package_base_name}={fetched_version}' if version else package_base_name)
 
-        for package in set(packages_to_download):
+        for package in sorted(set(packages_to_download)):
             logging.info(f'- {package}')
             self._tools.apt.download(package)
 
@@ -123,9 +125,12 @@ def _download_grafana_dashboard(self, dashboard: str, output_file: Path):
     def _download_crane_binary(self, url: str, dest: Path):
         self._tools.wget.download(url, dest)
 
-    def _clean_up_repository_files(self):
-        for repofile in Path('/etc/apt/sources.list.d').iterdir():
-            repofile.unlink()
+    def _remove_repository_files(self):
+        logging.debug('Removing files from /etc/apt/sources.list.d...')
+        for repo_file in Path('/etc/apt/sources.list.d').iterdir():
+            logging.debug(f'- {repo_file.name}')
+            repo_file.unlink()
+        logging.debug('Done removing files.')
 
     def _cleanup(self):
         # cleaning up 3rd party repositories
@@ -133,6 +138,6 @@ def _cleanup(self):
         if data['path'].exists():
             data['path'].unlink()
 
-        # remove installed packages
+    def _cleanup_packages(self):
         for package in self.__installed_packages:
             self._tools.apt.remove(package)
diff --git a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/red_hat_family_mode.py b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/red_hat_family_mode.py
index 7dcbe26deb..34d0df11b3 100644
--- a/ansible/playbooks/roles/repository/files/download-requirements/src/mode/red_hat_family_mode.py
+++ b/ansible/playbooks/roles/repository/files/download-requirements/src/mode/red_hat_family_mode.py
@@ -6,6 +6,7 @@
 from src.command.command import Command
 from src.config.config import Config
+from src.config.os_type import OSArch
 from src.mode.base_mode import BaseMode, load_yaml_file
 
 
@@ -18,16 +19,16 @@ def __init__(self, config: Config):
         super().__init__(config)
         self.__all_queried_packages: Set[str] = set()
         self.__archs: List[str] = [config.os_arch.value, 'noarch']
-        self.__base_packages: List[str] = ['curl', 'python3-dnf-plugins-core', 'wget']
+        self.__base_packages: List[str] = ['curl', 'python3-dnf-plugins-core', 'wget', 'tar']
+        self.__dnf_cache_dir: Path = Path('/var/cache/dnf')
         self.__installed_packages: List[str] = []
-        self.__dnf_cache_path: Path = Path('/var/cache/dnf')
 
         try:
             dnf_config = configparser.ConfigParser()
-            with Path('/etc/dnf/dnf.conf').open() as dnf_config_file:
+            with Path('/etc/dnf/dnf.conf').open(encoding='utf-8') as dnf_config_file:
                 dnf_config.read(dnf_config_file)
-            self.__dnf_cache_path = Path(dnf_config['main']['cachedir'])
+            self.__dnf_cache_dir = Path(dnf_config['main']['cachedir'])
         except
FileNotFoundError:
             logging.debug('RedHatFamilyMode.__init__(): dnf config file not found')
         except configparser.Error as e:
@@ -49,25 +50,36 @@ def _create_backup_repositories(self):
         logging.debug('Done.')
 
     def _install_base_packages(self):
+        # Ensure `dnf config-manager` command is available
+        if not self._tools.rpm.is_package_installed('dnf-plugins-core'):
+            self._tools.dnf.install('dnf-plugins-core')
+            self.__installed_packages.append('dnf-plugins-core')
         # Bug in RHEL 8.4 https://bugzilla.redhat.com/show_bug.cgi?id=2004853
-        self._tools.dnf.update(package='libmodulemd')
+        releasever = '8' if self._tools.dnf_config_manager.get_variable('releasever') == '8.4' else None
+        self._tools.dnf.update(package='libmodulemd', releasever=releasever)
 
-        # some packages are from EPEL repo
-        # make sure that we reinstall it before proceeding
-        if self._tools.rpm.is_package_installed('epel-release'):
-            if not self._tools.dnf.is_repo_enabled('epel') or not self._tools.dnf.is_repo_enabled('epel-modular'):
-                self._tools.dnf.remove('epel-release')
+        # epel-release package is re-installed when the repos it provides are not enabled
+        epel_package_initially_present: bool = self._tools.rpm.is_package_installed('epel-release')
 
-        self._tools.dnf.install('https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm')
-        self.__installed_packages.append('epel-release')
+        if epel_package_initially_present and not self._tools.dnf.are_repos_enabled(['epel', 'epel-modular']):
+            self._tools.dnf.remove('epel-release')
+
+        # some packages are from EPEL repo, ensure the latest version
+        if not self._tools.rpm.is_package_installed('epel-release'):
+            self._tools.dnf.install('https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm')
+
+            if not epel_package_initially_present:
+                self.__installed_packages.append('epel-release')
+        else:
+            self._tools.dnf.update('https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm',
+                                   ignore_already_installed_error=True)
 
         self.__remove_dnf_cache_for_custom_repos()
-        self._tools.dnf.makecache(True)
+        self._tools.dnf.makecache(timer=True)
 
-        # tar does not come by default from image.
We install it, but don't want to remove it - if not self._tools.rpm.is_package_installed('tar'): - self._tools.dnf.install('tar') + # Ensure ca-certificates package is in the latest version + self._tools.dnf.install('ca-certificates') for package in self.__base_packages: if not self._tools.rpm.is_package_installed(package): @@ -98,35 +110,48 @@ def _add_third_party_repositories(self): self._tools.dnf_config_manager.add_repo('https://download.docker.com/linux/centos/docker-ce.repo') self._tools.dnf.accept_keys() - for repo in ['https://dl.2ndquadrant.com/default/release/get/10/rpm', # for repmgr - 'https://dl.2ndquadrant.com/default/release/get/13/rpm']: - Command('curl', self._cfg.retries, [repo]) | Command('bash', self._cfg.retries) # curl {repo} | bash + # repmgr is supported only with x86_64 architecture + if self._cfg.os_arch == OSArch.X86_64: + for repo in ['https://dl.2ndquadrant.com/default/release/get/10/rpm', # for repmgr + 'https://dl.2ndquadrant.com/default/release/get/13/rpm']: + Command('curl', self._cfg.retries, [repo]) | Command('bash', self._cfg.retries) # curl {repo} | bash - # script adds 2 repositories, only 1 is required - for repo in ['2ndquadrant-dl-default-release-pg10-debug', - '2ndquadrant-dl-default-release-pg13-debug']: - self._tools.dnf_config_manager.disable_repo(repo) + # script adds 2 repositories, only 1 is required + for repo in ['2ndquadrant-dl-default-release-pg10-debug', + '2ndquadrant-dl-default-release-pg13-debug']: + self._tools.dnf_config_manager.disable_repo(repo) self._tools.dnf.makecache(False, True) def __remove_dnf_cache_for_custom_repos(self): # clean metadata for upgrades (when the same package can be downloaded from changed repo) - repocaches: List[str] = list(self.__dnf_cache_path.iterdir()) + cache_paths: List[Path] = list(self.__dnf_cache_dir.iterdir()) + + def get_matched_paths(repo_id: str, paths: List[Path]) -> List[Path]: + return [path for path in paths if path.name.startswith(repo_id)] - id_names = [ + repo_ids = [ '2ndquadrant', 'docker-ce', 'epel', - ] + [self._repositories[key]['id'] for key in self._repositories.keys()] + ] + [repo['id'] for repo in self._repositories.values()] + + matched_cache_paths: List[Path] = [] + + for repo_id in repo_ids: + matched_cache_paths.extend(get_matched_paths(repo_id, cache_paths)) - for repocache in repocaches: - matched_ids = [repocache.name.startswith(repo_name) for repo_name in id_names] - if any(matched_ids): + if matched_cache_paths: + matched_cache_paths.sort() + logging.debug(f'Removing DNF cache files from {self.__dnf_cache_dir}...') + + for path in matched_cache_paths: + logging.debug(f'- {path.name}') try: - if repocache.is_dir(): - shutil.rmtree(str(repocache)) + if path.is_dir(): + shutil.rmtree(str(path)) else: - repocache.unlink() + path.unlink() except FileNotFoundError: logging.debug('__remove_dnf_cache_for_custom_repos: cache directory already removed') @@ -212,14 +237,17 @@ def _download_grafana_dashboard(self, dashboard: str, output_file: Path): def _download_crane_binary(self, url: str, dest: Path): self._tools.wget.download(url, dest, additional_params=False) - def _clean_up_repository_files(self): - for repofile in Path('/etc/yum.repos.d').iterdir(): - repofile.unlink() + def _remove_repository_files(self): + logging.debug('Removing files from /etc/yum.repos.d...') + for repo_file in Path('/etc/yum.repos.d').iterdir(): + logging.debug(f'- {repo_file.name}') + repo_file.unlink() + logging.debug('Done removing files.') def _cleanup(self): - # remove installed packages + 
self.__remove_dnf_cache_for_custom_repos() + + def _cleanup_packages(self): for package in self.__installed_packages: if self._tools.rpm.is_package_installed(package): self._tools.dnf.remove(package) - - self.__remove_dnf_cache_for_custom_repos() diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt.py similarity index 97% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt.py index fa91bb4e44..086a9aeb32 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt.py @@ -1,6 +1,6 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.apt import Apt +from src.command.debian.apt import Apt def test_interface_update(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_cache.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_cache.py new file mode 100644 index 0000000000..8f3459f680 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_cache.py @@ -0,0 +1,40 @@ +import subprocess +from unittest.mock import Mock, patch + +import pytest +from src.command.debian.apt_cache import AptCache + +from tests.data.apt_cache import APT_CACHE_DEPENDS_RABBITMQ_STDOUT, APT_CACHE_DEPENDS_SOLR_STDOUT +from tests.mocks.command_run_mock import CommandRunMock + + +def test_interface_get_package_dependencies(mocker): + ''' Check argument construction for `apt-cache depends` ''' + with CommandRunMock(mocker, AptCache(1).get_package_dependencies, {'package': 'vim'}) as call_args: + assert call_args == ['apt-cache', + 'depends', + '--no-recommends', + '--no-suggests', + '--no-conflicts', + '--no-breaks', + '--no-replaces', + '--no-enhances', + '--no-pre-depends', + 'vim'] + + +APT_CACHE_DEPENDS_DATA = [ + ('tar', 'tar\n', []), + ('rabbitmq-server', APT_CACHE_DEPENDS_RABBITMQ_STDOUT, ['adduser', 'erlang-base', 'erlang-crypto', 'python3']), + ('solr-common', APT_CACHE_DEPENDS_SOLR_STDOUT, ['curl', 'debconf', 'default-jre-headless', 'libjs-jquery'])] + +@pytest.mark.parametrize('PACKAGE_NAME, CMD_STDOUT, EXPECTED_DEPS', APT_CACHE_DEPENDS_DATA) +def test_get_package_dependencies_return_value(PACKAGE_NAME, CMD_STDOUT, EXPECTED_DEPS): + mock_completed_proc = Mock(spec=subprocess.CompletedProcess) + mock_completed_proc.returncode = 0 + mock_completed_proc.stdout = CMD_STDOUT + + with patch('src.command.command.subprocess.run') as mock_run: + mock_run.return_value = mock_completed_proc + return_value = AptCache(1).get_package_dependencies(package=PACKAGE_NAME) + assert return_value == EXPECTED_DEPS diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_key.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_key.py similarity index 88% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_key.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_key.py index 980398399d..d247466d68 100644 --- 
a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_key.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/debian/test_apt_key.py @@ -2,7 +2,7 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.apt_key import AptKey +from src.command.debian.apt_key import AptKey def test_interface_add(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf.py similarity index 72% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf.py index 983789d0a7..78c1aa6d87 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf.py @@ -1,6 +1,6 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.dnf import Dnf +from src.command.redhat.dnf import Dnf def test_interface_update(mocker): @@ -25,12 +25,6 @@ def test_interface_remove(mocker): def test_interface_is_repo_enabled(mocker): - ''' Check argument construction for `dnf repolist enabled` ''' + ''' Check argument construction for `dnf repoinfo enabled` ''' with CommandRunMock(mocker, Dnf(1).is_repo_enabled, {'repo': 'some_repo'}) as call_args: - assert call_args == ['dnf' , 'repolist', '--enabled', '--quiet', '-y'] - - -def test_interface_find_rhel_repo_id(mocker): - ''' Check argument construction for `dnf repolist all` ''' - with CommandRunMock(mocker, Dnf(1).find_rhel_repo_id, {'patterns': ['pat1', 'pat2']}) as call_args: - assert call_args == ['dnf' , 'repolist', '--all', '--quiet', '-y'] + assert call_args == ['dnf' , 'repoinfo', '--quiet', '-y'] diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_base.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_base.py new file mode 100644 index 0000000000..a40a92a167 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_base.py @@ -0,0 +1,12 @@ +from src.command.redhat.dnf import DnfBase + + +def test_filter_non_critical_errors(): + STDERR = '\n'.join([ + '1st line', + 'Failed to set locale, defaulting to C.UTF-8', + '3rd line']) + + base = DnfBase(1) + output = base._filter_non_critical_errors(STDERR) + assert output == "1st line\n3rd line" diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_config_manager.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_config_manager.py similarity index 93% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_config_manager.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_config_manager.py index 9835bebef9..ae9ab2dc47 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_config_manager.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_config_manager.py @@ -1,6 +1,6 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.dnf_config_manager import 
DnfConfigManager +from src.command.redhat.dnf_config_manager import DnfConfigManager def test_interface_add_repo(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_download.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_download.py similarity index 94% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_download.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_download.py index 4b689d6ffb..911aeee576 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_download.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_download.py @@ -2,7 +2,7 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.dnf_download import DnfDownload +from src.command.redhat.dnf_download import DnfDownload def test_interface_download_packages(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_repoquery.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_repoquery.py similarity index 96% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_repoquery.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_repoquery.py index 158509576d..44a9b94dc9 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_dnf_repoquery.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_dnf_repoquery.py @@ -1,6 +1,6 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.dnf_repoquery import DnfRepoquery +from src.command.redhat.dnf_repoquery import DnfRepoquery def test_interface_query(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_rpm.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_rpm.py similarity index 96% rename from ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_rpm.py rename to ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_rpm.py index 57baf41eb7..1423889d44 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_rpm.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/redhat/test_rpm.py @@ -1,6 +1,6 @@ from tests.mocks.command_run_mock import CommandRunMock -from src.command.rpm import Rpm +from src.command.redhat.rpm import Rpm def test_interface_is_package_installed(mocker): diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_cache.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_cache.py deleted file mode 100644 index 4a7008f476..0000000000 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/command/test_apt_cache.py +++ /dev/null @@ -1,18 +0,0 @@ -from tests.mocks.command_run_mock import CommandRunMock - -from src.command.apt_cache import AptCache - - -def test_interface_get_package_dependencies(mocker): - ''' Check argument construction for `apt-cache depends` ''' - with CommandRunMock(mocker, AptCache(1).get_package_dependencies, {'package': 
'vim'}) as call_args: - assert call_args == ['apt-cache', - 'depends', - '--no-recommends', - '--no-suggests', - '--no-conflicts', - '--no-breaks', - '--no-replaces', - '--no-enhances', - '--no-pre-depends', - 'vim'] diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_config.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_config.py new file mode 100644 index 0000000000..816f3799f3 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_config.py @@ -0,0 +1,75 @@ +from pathlib import Path +import logging + +import pytest +import yaml + +from src.config.config import Config +from tests.data.config import ( + DASHBOARD_REQUIREMENTS, + EXPECTED_VERBOSE_DASHBOARD_OUTPUT, + EXPECTED_VERBOSE_FILE_OUTPUT, + EXPECTED_VERBOSE_IMAGE_NO_DOCUMENT_OUTPUT, + EXPECTED_VERBOSE_IMAGE_OUTPUT, + EXPECTED_VERBOSE_K8S_AS_CLOUD_SERVICE_OUTPUT, + EXPECTED_VERBOSE_OUTPUT, + FILE_REQUIREMENTS, + IMAGE_REQUIREMENTS +) +from tests.data.manifest_reader import ( + INPUT_MANIFEST_FEATURE_MAPPINGS, + INPUT_MANIFEST_IMAGES_NO_DOCUMENT, + INPUT_MANIFEST_WITH_DASHBOARDS, + INPUT_MANIFEST_WITH_IMAGES, + INPUT_MANIFEST_WITH_K8S_AS_CLOUD_SERVICE +) +from src.config.os_type import OSArch + + +@pytest.mark.parametrize('INPUT_DOC, EXPECTED_OUTPUT_DOC, REQUIREMENTS', + [ + (INPUT_MANIFEST_FEATURE_MAPPINGS, EXPECTED_VERBOSE_FILE_OUTPUT, FILE_REQUIREMENTS), + (INPUT_MANIFEST_FEATURE_MAPPINGS, EXPECTED_VERBOSE_OUTPUT, DASHBOARD_REQUIREMENTS), + (INPUT_MANIFEST_WITH_DASHBOARDS, EXPECTED_VERBOSE_DASHBOARD_OUTPUT, DASHBOARD_REQUIREMENTS), + (INPUT_MANIFEST_WITH_IMAGES, EXPECTED_VERBOSE_IMAGE_OUTPUT, IMAGE_REQUIREMENTS), + (INPUT_MANIFEST_IMAGES_NO_DOCUMENT, EXPECTED_VERBOSE_IMAGE_NO_DOCUMENT_OUTPUT, IMAGE_REQUIREMENTS), + (INPUT_MANIFEST_WITH_K8S_AS_CLOUD_SERVICE, EXPECTED_VERBOSE_K8S_AS_CLOUD_SERVICE_OUTPUT, FILE_REQUIREMENTS) + ]) +def test_manifest_verbose_output(INPUT_DOC: str, + EXPECTED_OUTPUT_DOC: str, + REQUIREMENTS: str, + mocker, caplog): + """ + Check output produced when running download-requirements script with the `-v|--verbose` flag and with provided `-m|--manifest` + + :param INPUT_DOC: yaml doc which will be parsed by the ManifestReader + :param EXPECTED_OUTPUT_DOC: expected output to be printed by the `Config` class, then tested against the parsed `INPUT_DOC` + :param REQUIREMENTS: yaml doc containing requirements passed to `Config`'s read_manifest() + """ + MANIFEST = { + 'files': '', + 'grafana-dashboards': '', + 'images': '' + } + + mocker.patch('src.config.manifest_reader.load_yaml_file_all', return_value=yaml.safe_load_all(INPUT_DOC)) + caplog.set_level(logging.INFO) + + # mock Config's init methods: + Config._Config__add_args = lambda *args: None + Config._Config__log_info_summary = lambda *args: None + + config = Config([]) + + # mock required config data: + config.dest_manifest = Path('/some/path') + config.os_arch = OSArch.X86_64 + config.verbose_mode = True + + req_key, doc = tuple(yaml.safe_load(REQUIREMENTS).items())[0] + MANIFEST[req_key] = doc + config.read_manifest(MANIFEST) + + log_output = f'\n{"".join(caplog.messages)}\n' + + assert log_output == EXPECTED_OUTPUT_DOC diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_manifest_reader.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_manifest_reader.py new file mode 100644 index 0000000000..933e5f99db --- /dev/null +++ 
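test_config.py above bypasses Config's initialisation by assigning to the name-mangled attributes _Config__add_args and _Config__log_info_summary. That works because double-underscore names are rewritten at class-definition time; a self-contained illustration of the mechanism (hypothetical class, not the real Config):

    class Config:
        def __init__(self):
            self.__add_args()  # resolved as self._Config__add_args()

        def __add_args(self):
            raise RuntimeError('real argument parsing')

    # Tests can stub the 'private' method through its mangled name:
    Config._Config__add_args = lambda self: None
    Config()  # no longer raises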
b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_manifest_reader.py @@ -0,0 +1,26 @@ +from pathlib import Path + +import pytest +import yaml + +from src.config.manifest_reader import ManifestReader +from src.config.os_type import OSArch +from tests.data.manifest_reader import (EXPECTED_FEATURE_MAPPINGS, + EXPECTED_FEATURE_MAPPINGS_WITH_DASHBOARDS, + EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_ARM64, + EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_X86_64, + INPUT_MANIFEST_FEATURE_MAPPINGS, + INPUT_MANIFEST_WITH_DASHBOARDS, + INPUT_MANIFEST_WITH_IMAGES) + +@pytest.mark.parametrize('INPUT_DOC, EXPECTED_OUTPUT_DOC, OS_ARCH', + [(INPUT_MANIFEST_FEATURE_MAPPINGS, EXPECTED_FEATURE_MAPPINGS, OSArch.X86_64), + (INPUT_MANIFEST_WITH_DASHBOARDS, EXPECTED_FEATURE_MAPPINGS_WITH_DASHBOARDS, OSArch.X86_64), + (INPUT_MANIFEST_WITH_IMAGES, EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_X86_64, OSArch.X86_64), + (INPUT_MANIFEST_WITH_IMAGES, EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_ARM64, OSArch.ARM64)]) +def test_parse_manifest(INPUT_DOC, EXPECTED_OUTPUT_DOC, OS_ARCH, mocker): + ''' Check manifest file parsing ''' + mocker.patch('src.config.manifest_reader.load_yaml_file_all', return_value=yaml.safe_load_all(INPUT_DOC)) + + mreader = ManifestReader(Path('/some/path'), OS_ARCH) + assert mreader.parse_manifest() == EXPECTED_OUTPUT_DOC diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_version.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_version.py new file mode 100644 index 0000000000..0e9e22bd09 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/config/test_version.py @@ -0,0 +1,17 @@ +from src.config.version import Version + + +def test_version_major(): + assert Version('1.2.4') < Version('2.3.0dev') + + +def test_version_minor(): + assert Version('1.2.4') < Version('1.3.0dev') + + +def test_version_patch(): + assert Version('1.2.4') < Version('1.2.5dev') + + +def test_version_not(): + assert not (Version('1.2.4') < Version('1.1.0')) diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/data/apt_cache.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/apt_cache.py new file mode 100644 index 0000000000..39f2b59d6d --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/apt_cache.py @@ -0,0 +1,32 @@ +APT_CACHE_DEPENDS_RABBITMQ_STDOUT = ''' +rabbitmq-server + Depends: adduser + |Depends: erlang-base + Depends: erlang-base-hipe + Depends: erlang-crypto + Depends: + python3 + dummy +''' + +APT_CACHE_DEPENDS_SOLR_STDOUT = ''' +solr-common + Depends: curl + Depends: debconf + |Depends: default-jre-headless + |Depends: + default-jre-headless + openjdk-11-jre-headless + openjdk-13-jre-headless + openjdk-16-jre-headless + openjdk-17-jre-headless + openjdk-8-jre-headless + Depends: + default-jre-headless + openjdk-11-jre-headless + openjdk-13-jre-headless + openjdk-16-jre-headless + openjdk-17-jre-headless + openjdk-8-jre-headless + Depends: libjs-jquery +''' diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/data/config.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/config.py new file mode 100644 index 0000000000..246aee3be9 --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/config.py @@ -0,0 +1,478 @@ +FILE_REQUIREMENTS = """ +files: + # --- Exporters --- + 
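test_version.py above only fixes an ordering contract: numeric components win, and a trailing pre-release tag such as 'dev' is tolerated. A hedged sketch of a comparable class satisfying those asserts (the real one is src/config/version.py; how it orders a bare version against its own 'dev' variant is not covered by these tests and is left out here):

    import re

    class Version:
        def __init__(self, ver: str):
            # keep only the leading MAJOR.MINOR.PATCH digits: '2.3.0dev' -> (2, 3, 0)
            self.parts = tuple(int(g) for g in
                               re.match(r'(\d+)\.(\d+)\.(\d+)', ver).groups())

        def __lt__(self, other: 'Version') -> bool:
            return self.parts < other.parts

    assert Version('1.2.4') < Version('2.3.0dev')
    assert not (Version('1.2.4') < Version('1.1.0'))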
'https://github.com/danielqsj/kafka_exporter/releases/download/v1.4.0/kafka_exporter-1.4.0.linux-amd64.tar.gz': + sha256: ffda682e82daede726da8719257a088f8e23dcaa4e2ac8b2b2748a129aea85f0 + deps: [kafka-exporter] + + 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar': + sha256: 0ddc6834f854c03d5795305193c1d33132a24fbd406b4b52828602f5bc30777e + deps: [kafka] + + 'https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz': + sha256: 68f3802c2dd3980667e4ba65ea2e1fb03f4a4ba026cca375f15a0390ff850949 + deps: [node-exporter] + + 'https://github.com/prometheus-community/postgres_exporter/releases/download/v0.10.0/postgres_exporter-0.10.0.linux-amd64.tar.gz': + sha256: 1d1a008c5e29673b404a9ce119b7516fa59974aeda2f47d4a0446d102abce8a1 + deps: [postgres-exporter] + + # --- Misc --- + 'https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz': + sha256: 175a4134efc569a586d58916cd16ce70f868b13dea2b5a3d12a67b1395d59f98 + deps: [kafka] + + 'https://archive.apache.org/dist/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz': + sha256: c35ed6786d59b73920243f1a324d24c2ddfafb379041d7a350cc9a341c52caf3 + deps: [zookeeper] + + 'https://github.com/prometheus/alertmanager/releases/download/v0.23.0/alertmanager-0.23.0.linux-amd64.tar.gz': + sha256: 77793c4d9bb92be98f7525f8bc50cb8adb8c5de2e944d5500e90ab13918771fc + deps: [prometheus] + + 'https://github.com/prometheus/prometheus/releases/download/v2.31.1/prometheus-2.31.1.linux-amd64.tar.gz': + sha256: 7852dc11cfaa039577c1804fe6f082a07c5eb06be50babcffe29214aedf318b3 + deps: [prometheus] + + 'https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz': + sha256: 4c3fd562e64005786ac8f18e7334054a24da34ec04bbd769c206b03b8ed6e457 + deps: [helm] + + # --- Helm charts --- + 'https://charts.bitnami.com/bitnami/node-exporter-2.3.17.tgz': + sha256: ec586fabb775a4f05510386899cf348391523c89ff5a1d4097b0592e675ade7f + deps: [kubernetes-master, k8s-as-cloud-service] + + 'https://helm.elastic.co/helm/filebeat/filebeat-7.12.1.tgz': + sha256: 5838058fe06372390dc335900a7707109cc7287a84164ca245d395af1f9c0a79 + deps: [kubernetes-master, k8s-as-cloud-service] + + 'https://charts.rook.io/release/rook-ceph-v1.8.8.tgz': + sha256: f67e474dedffd4004f3a0b7b40112694a7f1c2b1a0048b03b3083d0a01e86b14 + deps: [kubernetes-master] + + 'https://charts.rook.io/release/rook-ceph-cluster-v1.8.8.tgz': + sha256: df4e1f2125af41fb84c72e4d12aa0cb859dddd4f37b3d5979981bd092040bd16 + deps: [kubernetes-master] + + # --- OpenSearch Bundle --- + 'https://artifacts.opensearch.org/releases/bundle/opensearch/1.2.4/opensearch-1.2.4-linux-x64.tar.gz': + sha256: d40f2696623b6766aa235997e2847a6c661a226815d4ba173292a219754bd8a8 + deps: [logging, opensearch] + + 'https://artifacts.opensearch.org/releases/bundle/opensearch-dashboards/1.2.0/opensearch-dashboards-1.2.0-linux-x64.tar.gz': + sha256: 14623798e61be6913e2a218d6ba3e308e5036359d7bda58482ad2f1340aa3c85 + deps: [opensearch-dashboards] + + 'https://github.com/opensearch-project/perftop/releases/download/1.2.0.0/opensearch-perf-top-1.2.0.0-linux-x64.zip': + sha256: e8f9683976001a8cf59a9f86da5caafa10b88643315f0af2baa93a9354d41e2b + deps: [logging, opensearch] +""" + + +DASHBOARD_REQUIREMENTS = """ +grafana-dashboards: + grafana_dashboard_7249: + url: 'https://grafana.com/api/dashboards/7249/revisions/1/download' + sha256: 41cc2794b1cc9fc537baf045fee12d086d23632b4c8b2e88985274bb9862e731 + 
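Every entry in the requirements documents above carries a sha256 (files, dashboards) or sha1 (images) checksum. A generic sketch of the verification such fields enable; the actual enforcement code is outside this patch, so this is illustration only:

    import hashlib
    from pathlib import Path

    def verify_checksum(path: Path, expected: str, algorithm: str = 'sha256') -> bool:
        # hash the downloaded artifact and compare against the manifest value
        digest = hashlib.new(algorithm, path.read_bytes()).hexdigest()
        return digest == expected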
grafana_dashboard_315: + url: 'https://grafana.com/api/dashboards/315/revisions/3/download' + sha256: ee46dd6e68a9950aa78e8c88ae5e565c8ebde6cbdbe08972a70f06c5486618fb + grafana_dashboard_11074: + url: 'https://grafana.com/api/dashboards/11074/revisions/9/download' + sha256: 151b23305da46eab84930e99175e1c07e375af73dbbb4b8f501ca25f5ac62785 + grafana_dashboard_405: + url: 'https://grafana.com/api/dashboards/405/revisions/8/download' + sha256: 97675027cbd5b7241e93a2b598654c4b466bc909eeb6358ba123d500094d913c + grafana_dashboard_455: + url: 'https://grafana.com/api/dashboards/455/revisions/2/download' + sha256: c66b91ab8d258b0dc005d3ee4dac3a5634a627c79cc8053875f76ab1e369a362 + grafana_dashboard_9628: + url: 'https://grafana.com/api/dashboards/9628/revisions/7/download' + sha256: c64cc38ad9ebd7af09551ee83e669a38f62a76e7c80929af5668a5852732b376 + grafana_dashboard_4279: + url: 'https://grafana.com/api/dashboards/4279/revisions/4/download' + sha256: 74d47be868da52c145240ab5586d91ace9e9218ca775af988f9d60e501907a25 + grafana_dashboard_1860: + url: 'https://grafana.com/api/dashboards/1860/revisions/23/download' + sha256: 225faab8bf35c1723af14d4c069882ccb92b455d1941c6b1cf3d95a1576c13d7 + grafana_dashboard_7589: + url: 'https://grafana.com/api/dashboards/7589/revisions/5/download' + sha256: cf020e14465626360418e8b5746818c80d77c0301422f3060879fddc099c2151 + grafana_dashboard_789: + url: 'https://grafana.com/api/dashboards/789/revisions/1/download' + sha256: 6a9b4bdc386062287af4f7d56781103a2e45a51813596a65f03c1ae1d4d3e919 + grafana_dashboard_179: + url: 'https://grafana.com/api/dashboards/179/revisions/7/download' + sha256: 8d67350ff74e715fb1463f2406f24a73377357d90344f8200dad9d1b2a8133c2 + grafana_dashboard_6663: + url: 'https://grafana.com/api/dashboards/6663/revisions/1/download' + sha256: d544d88069e1b793ff3d8f6970df641ad9a66217e69b629621e1ecbb2f06aa05 + grafana_dashboard_10991: + url: 'https://grafana.com/api/dashboards/10991/revisions/11/download' + sha256: 66340fa3256d432287cba75ab5177eb058c77afa7d521a75d58099f95b1bff50 +""" + +IMAGE_REQUIREMENTS = """ +images: + haproxy: + 'haproxy:2.2.2-alpine': + sha1: dff8993b065b7f7846adb553548bcdcfcd1b6e8e + + image-registry: + 'registry:2.8.0': + sha1: 89795c17099199c752d02ad8797c1d4565a08aff + allow_mismatch: true + + applications: + 'bitnami/pgpool:4.2.4': + sha1: 66741f3cf4a508bd1f80e2965b0086a4c0fc3580 + + 'bitnami/pgbouncer:1.16.0': + sha1: f2e37eecbf9aed44d5566f06dcc101c1ba9edff9 + + 'epiphanyplatform/keycloak:14.0.0': + sha1: b59d75a967cedd3a4cf5867eced2fb5dff52f60e + + 'rabbitmq:3.8.9': + sha1: c64408bf5bb522f47d5323652dd5e60560dcb5bc + + kubernetes-master: + 'haproxy:2.2.2-alpine': + sha1: dff8993b065b7f7846adb553548bcdcfcd1b6e8e + + 'kubernetesui/dashboard:v2.3.1': + sha1: 8c8a4ac7a643f9c5dd9e5d22876c434187312db8 + + 'kubernetesui/metrics-scraper:v1.0.7': + sha1: 5a0052e2afd3eef3ae638be21938b29b1d608ebe + + # K8s + # v1.18.6 + 'k8s.gcr.io/kube-apiserver:v1.18.6': + sha1: 164968226f4617abaa31e6108ed9034a1e302f4f + + 'k8s.gcr.io/kube-controller-manager:v1.18.6': + sha1: ebea3fecab9e5693d31438fa37dc4d02c6914d67 + + 'k8s.gcr.io/kube-scheduler:v1.18.6': + sha1: 183d29c4fdcfda7478d08240934fdb6845e2e3ec + + 'k8s.gcr.io/kube-proxy:v1.18.6': + sha1: 62da886e36efff0c03a16e19c1442a1c3040fbf1 + + 'k8s.gcr.io/coredns:1.6.7': + sha1: 76615ffabb22fd4fb3d562cb6ebcd243f8826e48 + + 'k8s.gcr.io/etcd:3.4.3-0': + sha1: 6ee82ddb1bbc7f1831c42046612b8bcfbb171b45 + + 'quay.io/coreos/flannel:v0.12.0-amd64': + sha1: 3516522e779373983992095e61eb6615edd50d1f + + 
'quay.io/coreos/flannel:v0.12.0': + sha1: 2cb6ce8f1361886225526767c4a0422c039453c8 + + 'calico/cni:v3.15.0': + sha1: aa59f624c223bc398a42c7ba9e628e8143718e58 + + 'calico/kube-controllers:v3.15.0': + sha1: f8921f5d67ee7db1c619aa9fdb74114569684ceb + + 'calico/node:v3.15.0': + sha1: b15308e1aa8b9c56253c142e4361e47125bb4ac5 + + 'calico/pod2daemon-flexvol:v3.15.0': + sha1: dd1a6525bde05937a28e3d9176b826162ae489af + + # v1.19.15 + 'k8s.gcr.io/kube-apiserver:v1.19.15': + sha1: e01c8d778e4e693a0ea09cdbbe041a65cf070c6f + + 'k8s.gcr.io/kube-controller-manager:v1.19.15': + sha1: d1f5cc6a861b2259861fb78b2b83e9a07b788e31 + + 'k8s.gcr.io/kube-scheduler:v1.19.15': + sha1: b07fdd17205bc071ab108851d245689642244f92 + + 'k8s.gcr.io/kube-proxy:v1.19.15': + sha1: 9e2e7a8d40840bbade3a1f2dc743b9226491b6c2 + + # v1.20.12 + 'k8s.gcr.io/kube-apiserver:v1.20.12': + sha1: bbb037b9452db326aaf09988cee080940f3c418a + + 'k8s.gcr.io/kube-controller-manager:v1.20.12': + sha1: 4a902578a0c548edec93e0f4afea8b601fa54b93 + + 'k8s.gcr.io/kube-scheduler:v1.20.12': + sha1: ed5ceb21d0f5bc350db69550fb7feac7a6f1e50b + + 'k8s.gcr.io/kube-proxy:v1.20.12': + sha1: f937aba709f52be88360361230840e7bca756b2e + + 'k8s.gcr.io/coredns:1.7.0': + sha1: 5aa15f4cb942885879955b98a0a824833d9f66eb + + 'k8s.gcr.io/pause:3.2': + sha1: ae4799e1a1ec9cd0dda8ab643b6e50c9fe505fef + + # v1.21.7 + 'k8s.gcr.io/kube-apiserver:v1.21.7': + sha1: edb26859b3485808716982deccd90ca420828649 + + 'k8s.gcr.io/kube-controller-manager:v1.21.7': + sha1: 9abf1841da5b113b377c1471880198259ec2d246 + + 'k8s.gcr.io/kube-scheduler:v1.21.7': + sha1: 996d25351afc96a10e9008c04418db07a99c76b7 + + 'k8s.gcr.io/kube-proxy:v1.21.7': + sha1: 450af22a892ffef276d4d58332b7817a1dde34e7 + + 'k8s.gcr.io/coredns/coredns:v1.8.0': + sha1: 03114a98137e7cc2dcf4983b919e6b93ac8d1189 + + 'k8s.gcr.io/etcd:3.4.13-0': + sha1: d37a2efafcc4aa86e6dc497e87e80b5d7f326115 + + 'k8s.gcr.io/pause:3.4.1': + sha1: 7f57ae28d733f99c0aab8f4e27d4b0c034cd0c04 + + # v1.22.4 + 'k8s.gcr.io/kube-apiserver:v1.22.4': + sha1: 2bf4ddb2e1f1530cf55ebaf8e8d0c56ad378b9ec + + 'k8s.gcr.io/kube-controller-manager:v1.22.4': + sha1: 241924fa3dc4671fe6644402f7beb60028c02c71 + + 'k8s.gcr.io/kube-scheduler:v1.22.4': + sha1: 373e2939072b03cf5b1e115820b7fb6b749b0ebb + + 'k8s.gcr.io/kube-proxy:v1.22.4': + sha1: fecfb88509a430c29267a99b83f60f4a7c333583 + + 'k8s.gcr.io/coredns/coredns:v1.8.4': + sha1: 69c8e14ac3941fd5551ff22180be5f4ea2742d7f + + 'k8s.gcr.io/etcd:3.5.0-0': + sha1: 9d9ee2df54a201dcc9c7a10ea763b9a5dce875f1 + + 'k8s.gcr.io/pause:3.5': + sha1: bf3e3420df62f093f94c41d2b7a62b874dcbfc28 + + 'quay.io/coreos/flannel:v0.14.0-amd64': + sha1: cff47465996a51de4632b53abf1fca873f147027 + + 'quay.io/coreos/flannel:v0.14.0': + sha1: a487a36f7b31677e50e74b96b944f27fbce5ac13 + + 'calico/cni:v3.20.3': + sha1: 95e4cf79e92715b13e500a0efcfdb65590de1e04 + + 'calico/kube-controllers:v3.20.3': + sha1: 5769bae60830abcb3c5d97eb86b8f9938a587b2d + + 'calico/node:v3.20.3': + sha1: cc3c8727ad30b4850e8d0042681342a4f2351eff + + 'calico/pod2daemon-flexvol:v3.20.3': + sha1: 97c1b7ac90aa5a0f5c52e7f137549e598ff80f3e + + 'k8s.gcr.io/sig-storage/csi-attacher:v3.4.0': + sha1: f076bd75359c6449b965c48eb8bad96c6d40790d + + 'k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0': + sha1: 129eb73c8e118e5049fee3d273b2d477c547e080 + + 'k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0': + sha1: 2b45e5a3432cb89f3aec59584c1fa92c069e7a38 + + 'k8s.gcr.io/sig-storage/csi-resizer:v1.4.0': + sha1: ce5c57454254c195762c1f58e1d902d7e81ea669 + + 
'k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1': + sha1: be1cf43617eea007629c0eb99149a99b6498f889 + + 'quay.io/ceph/ceph:v16.2.7': + sha1: fe9b7802c67e19111f83ffe4754ab62df66fd417 + allow_mismatch: true + + 'quay.io/cephcsi/cephcsi:v3.5.1': + sha1: 51dee9ea8ad76fb95ebd16f951e8ffaaaba95eb6 + + 'quay.io/csiaddons/k8s-sidecar:v0.2.1': + sha1: f0fd757436ac5075910c460c1991ff67c4774d09 + + 'quay.io/csiaddons/volumereplication-operator:v0.3.0': + sha1: d3cd17f14fcbf09fc6c8c2c5c0419f098f87a70f + + rook: + 'rook/ceph:v1.8.8': + sha1: f34039b17b18f5a855b096d48ff787b4013615e4 +""" + + +EXPECTED_VERBOSE_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- kafka +- repository + +Features requested: +- filebeat +- firewall +- image-registry +- jmx-exporter +- kafka +- kafka-exporter +- node-exporter +- repository +- zookeeper +-------------------------------------------------- +""" + + +EXPECTED_VERBOSE_DASHBOARD_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- monitoring +- repository + +Features requested: +- filebeat +- firewall +- grafana +- image-registry +- node-exporter +- prometheus +- repository + +Dashboards to download: +- grafana_dashboard_10991 +- grafana_dashboard_11074 +- grafana_dashboard_179 +- grafana_dashboard_1860 +- grafana_dashboard_315 +- grafana_dashboard_405 +- grafana_dashboard_4279 +- grafana_dashboard_455 +- grafana_dashboard_6663 +- grafana_dashboard_7249 +- grafana_dashboard_7589 +- grafana_dashboard_789 +- grafana_dashboard_9628 +-------------------------------------------------- +""" + + +EXPECTED_VERBOSE_FILE_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- kafka +- repository + +Features requested: +- filebeat +- firewall +- image-registry +- jmx-exporter +- kafka +- kafka-exporter +- node-exporter +- repository +- zookeeper + +Files to download: +- https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz +- https://archive.apache.org/dist/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz +- https://github.com/danielqsj/kafka_exporter/releases/download/v1.4.0/kafka_exporter-1.4.0.linux-amd64.tar.gz +- https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz +- https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar +-------------------------------------------------- +""" + + +EXPECTED_VERBOSE_IMAGE_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- repository + +Features requested: +- filebeat +- firewall +- image-registry +- node-exporter +- repository + +Images to download: +- bitnami/pgpool:4.2.4 +- epiphanyplatform/keycloak:14.0.0 +- haproxy:2.2.2-alpine +- k8s.gcr.io/coredns/coredns:v1.8.0 +- k8s.gcr.io/etcd:3.4.13-0 +- k8s.gcr.io/kube-apiserver:v1.21.7 +- k8s.gcr.io/kube-apiserver:v1.22.4 +- k8s.gcr.io/kube-controller-manager:v1.21.7 +- k8s.gcr.io/kube-controller-manager:v1.22.4 +- k8s.gcr.io/kube-proxy:v1.21.7 +- k8s.gcr.io/kube-scheduler:v1.21.7 +- k8s.gcr.io/pause:3.4.1 +- kubernetesui/dashboard:v2.3.1 +- kubernetesui/metrics-scraper:v1.0.7 +- rabbitmq:3.8.9 +- registry:2.8.0 +-------------------------------------------------- +""" + + +EXPECTED_VERBOSE_IMAGE_NO_DOCUMENT_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- kafka 
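Comparing EXPECTED_VERBOSE_FILE_OUTPUT with FILE_REQUIREMENTS shows the implied reduction: a file is listed when at least one of its deps is among the requested features. A sketch of that selection, assuming src/config/config.py does something equivalent (not shown in this patch):

    import yaml

    def files_to_download(file_requirements: str, requested_features: set) -> list:
        files = yaml.safe_load(file_requirements)['files']
        # keep a URL when any of its declared deps was requested
        return sorted(url for url, meta in files.items()
                      if requested_features.intersection(meta['deps']))

For the kafka + repository manifest this yields exactly the five URLs listed above (kafka, zookeeper, kafka_exporter, node_exporter and the JMX agent jar).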
+- rabbitmq +- repository + +Features requested: +- filebeat +- firewall +- image-registry +- jmx-exporter +- kafka +- kafka-exporter +- node-exporter +- rabbitmq +- repository +- zookeeper + +Images to download: +- registry:2.8.0 +-------------------------------------------------- +""" + + +EXPECTED_VERBOSE_K8S_AS_CLOUD_SERVICE_OUTPUT = """ +Manifest summary: +-------------------------------------------------- +Components requested: +- repository + +Features requested: +- filebeat +- firewall +- image-registry +- k8s-as-cloud-service +- node-exporter +- repository + +Files to download: +- https://charts.bitnami.com/bitnami/node-exporter-2.3.17.tgz +- https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz +- https://helm.elastic.co/helm/filebeat/filebeat-7.12.1.tgz +-------------------------------------------------- +""" diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/data/manifest_reader.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/manifest_reader.py new file mode 100644 index 0000000000..04c0758aae --- /dev/null +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/data/manifest_reader.py @@ -0,0 +1,438 @@ +FEATURE_MAPPINGS = """ +--- +kind: configuration/feature-mappings +title: Feature mapping to roles +name: default +specification: + mappings: + kafka: + - zookeeper + - jmx-exporter + - kafka + - kafka-exporter + - node-exporter + - filebeat + - firewall + rabbitmq: + - rabbitmq + - node-exporter + - filebeat + - firewall + logging: + - logging + - opensearch-dashboards + - node-exporter + - filebeat + - firewall + load_balancer: + - haproxy + - node-exporter + - filebeat + - firewall + monitoring: + - prometheus + - grafana + - node-exporter + - filebeat + - firewall + postgresql: + - postgresql + - postgres-exporter + - node-exporter + - filebeat + - firewall + custom: + - repository + - image-registry + - kubernetes-master + - node-exporter + - filebeat + - rabbitmq + - postgresql + - prometheus + - grafana + - node-exporter + - logging + - firewall + - rook + single_machine: + - repository + - image-registry + - kubernetes-master + - helm + - applications + - rabbitmq + - postgresql + - firewall + - rook + kubernetes_master: + - kubernetes-master + - helm + - applications + - rook + - node-exporter + - filebeat + - firewall + kubernetes_node: + - kubernetes-node + - node-exporter + - filebeat + - firewall + opensearch: + - opensearch + - node-exporter + - filebeat + - firewall + repository: + - repository + - image-registry + - firewall + - filebeat + - node-exporter +version: 2.0.1dev +provider: azure +""" + + +INPUT_MANIFEST_FEATURE_MAPPINGS = f""" +kind: epiphany-cluster +title: Epiphany cluster Config +provider: any +name: default +specification: + name: new_cluster + admin_user: + name: operations + key_path: /shared/.ssh/epiphany-operations/id_rsa + components: + repository: + count: 1 + kubernetes_master: + count: 0 + kubernetes_node: + count: 0 + logging: + count: 0 + monitoring: + count: 0 + kafka: + count: 2 + postgresql: + count: 0 + load_balancer: + count: 0 + rabbitmq: + count: 0 + opensearch: + count: 0 +version: 2.0.1dev +{FEATURE_MAPPINGS} +""" + + +INPUT_MANIFEST_WITH_DASHBOARDS = f""" +kind: epiphany-cluster +title: Epiphany cluster Config +provider: any +name: default +specification: + name: new_cluster + admin_user: + name: operations + key_path: /shared/.ssh/epiphany-operations/id_rsa + components: + repository: 
+ count: 1 + kubernetes_master: + count: 0 + kubernetes_node: + count: 0 + logging: + count: 0 + monitoring: + count: 1 + kafka: + count: 0 + postgresql: + count: 0 + load_balancer: + count: 0 + rabbitmq: + count: 0 + opensearch: + count: 0 +version: 2.0.1dev +{FEATURE_MAPPINGS} +""" + + +INPUT_MANIFEST_WITH_IMAGES = f""" +kind: epiphany-cluster +title: Epiphany cluster Config +provider: any +name: default +specification: + name: new_cluster + admin_user: + name: operations + key_path: /shared/.ssh/epiphany-operations/id_rsa + components: + repository: + count: 1 + kubernetes_master: + count: 0 + kubernetes_node: + count: 0 + logging: + count: 0 + monitoring: + count: 0 + kafka: + count: 0 + postgresql: + count: 0 + load_balancer: + count: 0 + rabbitmq: + count: 0 + opensearch: + count: 0 +version: 2.0.1dev +{FEATURE_MAPPINGS} +--- +kind: configuration/image-registry +title: Epiphany image registry +name: default +specification: + description: Local registry with Docker images + registry_image: + name: registry:2.8.0 + file_name: registry-2.8.0.tar + images_to_load: + x86_64: + generic: + applications: + - name: epiphanyplatform/keycloak:14.0.0 + file_name: keycloak-14.0.0.tar + - name: rabbitmq:3.8.9 + file_name: rabbitmq-3.8.9.tar + - name: bitnami/pgpool:4.2.4 + file_name: pgpool-4.2.4.tar + kubernetes-master: + - name: kubernetesui/dashboard:v2.3.1 + file_name: dashboard-v2.3.1.tar + - name: kubernetesui/metrics-scraper:v1.0.7 + file_name: metrics-scraper-v1.0.7.tar + current: + haproxy: + - name: haproxy:2.2.2-alpine + file_name: haproxy-2.2.2-alpine.tar + kubernetes-master: + - name: k8s.gcr.io/kube-apiserver:v1.22.4 + file_name: kube-apiserver-v1.22.4.tar + - name: k8s.gcr.io/kube-controller-manager:v1.22.4 + file_name: kube-controller-manager-v1.22.4.tar + legacy: + kubernetes-master: + - name: k8s.gcr.io/kube-apiserver:v1.21.7 + file_name: kube-apiserver-v1.21.7.tar + - name: k8s.gcr.io/kube-controller-manager:v1.21.7 + file_name: kube-controller-manager-v1.21.7.tar + - name: k8s.gcr.io/kube-proxy:v1.21.7 + file_name: kube-proxy-v1.21.7.tar + - name: k8s.gcr.io/kube-scheduler:v1.21.7 + file_name: kube-scheduler-v1.21.7.tar + - name: k8s.gcr.io/coredns/coredns:v1.8.0 + file_name: coredns-v1.8.0.tar + - name: k8s.gcr.io/etcd:3.4.13-0 + file_name: etcd-3.4.13-0.tar + - name: k8s.gcr.io/pause:3.4.1 + file_name: pause-3.4.1.tar + aarch64: + generic: + applications: + - name: epiphanyplatform/keycloak:14.0.0 + file_name: keycloak-14.0.0.tar + - name: rabbitmq:3.8.9 + file_name: rabbitmq-3.8.9.tar + kubernetes-master: + - name: kubernetesui/dashboard:v2.3.1 + file_name: dashboard-v2.3.1.tar + - name: kubernetesui/metrics-scraper:v1.0.7 + file_name: metrics-scraper-v1.0.7.tar + current: + haproxy: + - name: haproxy:2.2.2-alpine + file_name: haproxy-2.2.2-alpine.tar + kubernetes-master: + - name: k8s.gcr.io/kube-apiserver:v1.22.4 + file_name: kube-apiserver-v1.22.4.tar + legacy: + kubernetes-master: + - name: k8s.gcr.io/kube-apiserver:v1.21.7 + file_name: kube-apiserver-v1.21.7.tar + - name: k8s.gcr.io/kube-scheduler:v1.21.7 + file_name: kube-scheduler-v1.21.7.tar + - name: k8s.gcr.io/coredns/coredns:v1.8.0 + file_name: coredns-v1.8.0.tar + - name: k8s.gcr.io/etcd:3.4.13-0 + file_name: etcd-3.4.13-0.tar + - name: k8s.gcr.io/pause:3.4.1 + file_name: pause-3.4.1.tar +version: 2.0.1dev +provider: any +""" + + +INPUT_MANIFEST_IMAGES_NO_DOCUMENT = f""" +kind: epiphany-cluster +title: Epiphany cluster Config +provider: any +name: default +specification: + name: new_cluster + admin_user: + name: 
operations + key_path: /shared/.ssh/epiphany-operations/id_rsa + components: + repository: + count: 1 + kubernetes_master: + count: 0 + kubernetes_node: + count: 0 + logging: + count: 0 + monitoring: + count: 0 + kafka: + count: 2 + postgresql: + count: 0 + load_balancer: + count: 0 + rabbitmq: + count: 1 + opensearch: + count: 0 +version: 2.0.1dev +{FEATURE_MAPPINGS} +""" + + +INPUT_MANIFEST_WITH_K8S_AS_CLOUD_SERVICE = f""" +kind: epiphany-cluster +title: Epiphany cluster Config +provider: any +name: default +specification: + name: new_cluster + admin_user: + name: operations + key_path: /shared/.ssh/epiphany-operations/id_rsa + cloud: + k8s_as_cloud_service: true + components: + repository: + count: 1 + kubernetes_master: + count: 0 + kubernetes_node: + count: 0 + logging: + count: 0 + monitoring: + count: 0 + kafka: + count: 0 + postgresql: + count: 0 + load_balancer: + count: 0 + rabbitmq: + count: 0 + opensearch: + count: 0 +version: 2.0.1dev +{FEATURE_MAPPINGS} +""" + + +EXPECTED_FEATURE_MAPPINGS = { + 'requested-components': ['kafka', 'repository'], + 'requested-features': ['filebeat', + 'firewall', + 'image-registry', + 'jmx-exporter', + 'kafka', + 'kafka-exporter', + 'node-exporter', + 'repository', + 'zookeeper'], + 'requested-images': [] +} + + +EXPECTED_FEATURE_MAPPINGS_WITH_DASHBOARDS = { + 'requested-components': ['monitoring', 'repository'], + 'requested-features': ['filebeat', + 'firewall', + 'grafana', + 'image-registry', + 'node-exporter', + 'prometheus', + 'repository'], + 'requested-images': [] +} + + +EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_X86_64 = { + 'requested-components': ['repository'], + 'requested-features': ['filebeat', + 'firewall', + 'image-registry', + 'node-exporter', + 'repository'], + 'requested-images': [ + 'bitnami/pgpool:4.2.4', + 'epiphanyplatform/keycloak:14.0.0', + 'haproxy:2.2.2-alpine', + 'k8s.gcr.io/coredns/coredns:v1.8.0', + 'k8s.gcr.io/etcd:3.4.13-0', + 'k8s.gcr.io/kube-apiserver:v1.21.7', + 'k8s.gcr.io/kube-apiserver:v1.22.4', + 'k8s.gcr.io/kube-controller-manager:v1.21.7', + 'k8s.gcr.io/kube-controller-manager:v1.22.4', + 'k8s.gcr.io/kube-proxy:v1.21.7', + 'k8s.gcr.io/kube-scheduler:v1.21.7', + 'k8s.gcr.io/pause:3.4.1', + 'kubernetesui/dashboard:v2.3.1', + 'kubernetesui/metrics-scraper:v1.0.7', + 'rabbitmq:3.8.9', + 'registry:2.8.0' + ] +} + + +EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_ARM64 = { + 'requested-components': ['repository'], + 'requested-features': ['filebeat', + 'firewall', + 'image-registry', + 'node-exporter', + 'repository'], + 'requested-images': [ + 'epiphanyplatform/keycloak:14.0.0', + 'haproxy:2.2.2-alpine', + 'k8s.gcr.io/coredns/coredns:v1.8.0', + 'k8s.gcr.io/etcd:3.4.13-0', + 'k8s.gcr.io/kube-apiserver:v1.21.7', + 'k8s.gcr.io/kube-apiserver:v1.22.4', + 'k8s.gcr.io/kube-scheduler:v1.21.7', + 'k8s.gcr.io/pause:3.4.1', + 'kubernetesui/dashboard:v2.3.1', + 'kubernetesui/metrics-scraper:v1.0.7', + 'rabbitmq:3.8.9', + 'registry:2.8.0' + ] +} diff --git a/ansible/playbooks/roles/repository/files/download-requirements/tests/mocks/command_run_mock.py b/ansible/playbooks/roles/repository/files/download-requirements/tests/mocks/command_run_mock.py index 7922980232..a4cf5193f1 100644 --- a/ansible/playbooks/roles/repository/files/download-requirements/tests/mocks/command_run_mock.py +++ b/ansible/playbooks/roles/repository/files/download-requirements/tests/mocks/command_run_mock.py @@ -27,22 +27,20 @@ def __enter__(self) -> List[str]: """ :return: list of arguments passed to the subprocess.run() function """ - mock = Mock() - 
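EXPECTED_FEATURE_MAPPINGS_WITH_IMAGES_X86_64/ARM64 above are the per-architecture flattening of the image-registry document: every generic/current/legacy bucket contributes its image names, deduplicated and sorted, plus the registry image itself. A sketch of that reduction (assumed; the real code is in the manifest reader):

    def requested_images(images_to_load: dict, arch: str, registry_image: str) -> list:
        names = {registry_image}
        for bucket in images_to_load[arch].values():      # generic / current / legacy
            for images in bucket.values():                # per-feature image lists
                names.update(image['name'] for image in images)
        return sorted(names)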
mock.returncode = 0 + mock_completed_proc = Mock(spec=subprocess.CompletedProcess) + mock_completed_proc.returncode = 0 - self.__mocker.patch('src.command.command.subprocess.run', side_effect=lambda args, encoding, stdout, stderr: mock) - - spy = self.__mocker.spy(subprocess, 'run') + mock_run = self.__mocker.patch('src.command.command.subprocess.run', return_value=mock_completed_proc) try: if self.__args: self.__func(**self.__args) else: self.__func() - except Exception: + except Exception: # pylint: disable=broad-except pass - return spy.call_args[0][0] + return mock_run.call_args[0][0] def __exit__(self, *args): pass diff --git a/ansible/playbooks/roles/repository/files/server/RedHat/create-repository.sh b/ansible/playbooks/roles/repository/files/server/RedHat/create-repository.sh index 7c262c3ba7..89a43832f7 100644 --- a/ansible/playbooks/roles/repository/files/server/RedHat/create-repository.sh +++ b/ansible/playbooks/roles/repository/files/server/RedHat/create-repository.sh @@ -4,7 +4,13 @@ epi_repo_server_path=$1 # /var/www/html/epirepo is the default is_offline_mode=$2 if [[ "$is_offline_mode" == "true" ]]; then - dnf localinstall --cacheonly --disablerepo='*' -y $(ls "${epi_repo_server_path}"/packages/repo-prereqs/*.rpm) + if dnf list installed python3-createrepo_c; then # install all + readarray -t prereq_packages < <(find "${epi_repo_server_path}/packages/repo-prereqs/" -type f -name \*.rpm) + else # skip python3-createrepo_c + readarray -t prereq_packages < <(find "${epi_repo_server_path}/packages/repo-prereqs/" -type f -name \*.rpm \ + ! -name python3-createrepo_c\*) + fi + dnf localinstall --cacheonly --disablerepo='*' -y "${prereq_packages[@]}" else dnf install -y httpd createrepo fi diff --git a/ansible/playbooks/roles/repository/library/__init__.py b/ansible/playbooks/roles/repository/library/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/ansible/playbooks/roles/repository/library/filter_credentials.py b/ansible/playbooks/roles/repository/library/filter_credentials.py new file mode 100644 index 0000000000..dced8574db --- /dev/null +++ b/ansible/playbooks/roles/repository/library/filter_credentials.py @@ -0,0 +1,154 @@ +#!/usr/bin/python + +from __future__ import (absolute_import, division, print_function) + +from hashlib import sha256 +from pathlib import Path +from typing import Callable +import yaml + +__metaclass__ = type + + +DOCUMENTATION = r""" +--- +module: filter_credentials + +short_description: Module for filtering sensitive data stored in manifest.yml file + +options: + src: + description: Path to the manifest file that will be filtered + required: true + type: str + dest: + description: Path to the newly created, filtered manifest + required: false + type: str +""" + +EXAMPLES = r""" +# Pass in a manifest file without modifying the original file +- name: Filter out manifest file and set result to stdout + filter_credentials: + src: /some/where/manifest.yml + +# Pass in a manifest file and save it to `dest` location +- name: Filter out manifest file and save it as a new file + filter_credentials: + src: /some/where/manifest.yml + dest: /some/other/place/manifest.yml +""" + + +from ansible.module_utils.basic import AnsibleModule + + +def _get_hash(filepath: Path) -> str: + # calculate sha256 for `filepath` + with filepath.open(mode='rb') as file_handler: + hashgen = sha256() + hashgen.update(file_handler.read()) + return hashgen.hexdigest() + + +def _filter_common(docs: list[dict]): + # remove admin user info from epiphany-cluster doc: 
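The CommandRunMock change above replaces the side_effect-plus-spy arrangement with a single patched mock: the object returned by mocker.patch records call_args itself, and giving the return value spec=subprocess.CompletedProcess keeps attribute access honest. The same pattern with plain unittest.mock (pytest-mock delegates to it):

    import subprocess
    from unittest.mock import Mock, patch

    def run_quietly():
        subprocess.run(['echo', 'hi'], capture_output=True)

    completed = Mock(spec=subprocess.CompletedProcess, returncode=0)
    with patch('subprocess.run', return_value=completed) as mock_run:
        run_quietly()
        assert mock_run.call_args[0][0] == ['echo', 'hi']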
+ try: + del next(filter(lambda doc: doc['kind'] == 'epiphany-cluster', docs))['specification']['admin_user'] + except KeyError: + pass # ok, key already doesn't exist + + +def _filter_aws(docs: list[dict]): + _filter_common(docs) + + # filter epiphany-cluster doc + epiphany_cluster = next(filter(lambda doc: doc['kind'] == 'epiphany-cluster', docs)) + + try: + del epiphany_cluster['specification']['cloud']['credentials'] + except KeyError: + pass # ok, key already doesn't exist + + +def _filter_azure(docs: list[dict]): + _filter_common(docs) + + # filter epiphany-cluster doc + epiphany_cluster = next(filter(lambda doc: doc['kind'] == 'epiphany-cluster', docs)) + try: + del epiphany_cluster['specification']['cloud']['subscription_name'] + except KeyError: + pass # ok, key already doesn't exist + + +def _get_filtered_manifest(manifest_path: Path) -> str: + """ + Load the manifest file and remove any sensitive data. + + :param manifest_path: manifest file which will be loaded + :returns: filtered manifset + """ + docs = yaml.safe_load_all(manifest_path.open()) + filtered_docs = [doc for doc in docs if doc['kind'] in ['epiphany-cluster', + 'configuration/feature-mappings', + 'configuration/features', + 'configuration/image-registry']] + + FILTER_DATA: dict[str, Callable] = { + 'any': _filter_common, + 'azure': _filter_azure, + 'aws': _filter_aws + } + + FILTER_DATA[filtered_docs[0]['provider']](filtered_docs) + + return yaml.dump_all(filtered_docs) + +def run_module(): + # define available arguments/parameters a user can pass to the module + module_args = dict( + src=dict(type=str, required=True), + dest=dict(type=str, required=False, default=None), + ) + + # seed the result dict in the object + result = dict( + changed=False, + manifest='' + ) + + # create ansible module + module = AnsibleModule( + argument_spec=module_args, + supports_check_mode=True + ) + + input_manifest = Path(module.params['src']) + output_manifest = Path(module.params['dest']) if module.params['dest'] else None + + manifest = _get_filtered_manifest(input_manifest) + + if not module.params['dest']: # to stdout + result['manifest'] = manifest + else: # write to a new location + orig_hash_value = _get_hash(input_manifest) # hash value prior to change + + with output_manifest.open(mode='w', encoding='utf-8') as output_manifest_file: + output_manifest_file.write(manifest) + + new_hash_value = _get_hash(output_manifest) # hash value post change + + if orig_hash_value != new_hash_value: + result['changed'] = True + + module.exit_json(**result) + + +def main(): + run_module() + + +if __name__ == '__main__': + main() diff --git a/ansible/playbooks/roles/repository/library/tests/__init__.py b/ansible/playbooks/roles/repository/library/tests/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/ansible/playbooks/roles/repository/library/tests/data/filter_credentials_data.py b/ansible/playbooks/roles/repository/library/tests/data/filter_credentials_data.py new file mode 100644 index 0000000000..84c9c5864b --- /dev/null +++ b/ansible/playbooks/roles/repository/library/tests/data/filter_credentials_data.py @@ -0,0 +1,267 @@ +CLUSTER_DOC_ANY = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'any', + 'name': 'default', + 'specification': { + 'name': 'test', + 'admin_user': { + 'name': 'operations', + 'key_path': '/shared/.ssh/epiphany-operations/id_rsa'}, + 'components': { + 'repository': { + 'count': 1, + 'machines': ['default-repository']}, + 'kubernetes_master': { + 'count': 
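_filter_common above packs find-and-delete into one line. Unrolled on sample data it does this (illustration only):

    docs = [
        {'kind': 'configuration/feature-mappings'},
        {'kind': 'epiphany-cluster',
         'specification': {'admin_user': {'name': 'operations'}, 'name': 'test'}},
    ]

    # next(...) returns the first matching document (a reference, so the
    # deletion mutates the list element in place)
    cluster = next(doc for doc in docs if doc['kind'] == 'epiphany-cluster')
    del cluster['specification']['admin_user']

    assert docs[1]['specification'] == {'name': 'test'}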
1, + 'machines': ['default-k8s-master1']}, + 'kubernetes_node': { + 'count': 2, + 'machines': ['default-k8s-node1', 'default-k8s-node2']}, + 'logging': { + 'count': 1, + 'machines': ['default-logging']}, + 'monitoring': { + 'count': 1, + 'machines': ['default-monitoring']}, + 'kafka': { + 'count': 2, + 'machines': ['default-kafka1', 'default-kafka2']}, + 'postgresql': { + 'count': 1, + 'machines': ['default-postgresql']}, + 'load_balancer': { + 'count': 1, + 'machines': ['default-loadbalancer']}, + 'rabbitmq': { + 'count': 1, + 'machines': ['default-rabbitmq']}, + 'opensearch': { + 'count': 1, + 'machines': ['default-opensearch']} + } + }, + 'version': '2.0.1dev' +} + + +EXPECTED_CLUSTER_DOC_ANY = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'any', + 'name': 'default', + 'specification': { + 'name': 'test', + 'components': { + 'repository': { + 'count': 1, + 'machines': ['default-repository']}, + 'kubernetes_master': { + 'count': 1, + 'machines': ['default-k8s-master1']}, + 'kubernetes_node': { + 'count': 2, + 'machines': ['default-k8s-node1', 'default-k8s-node2']}, + 'logging': { + 'count': 1, + 'machines': ['default-logging']}, + 'monitoring': { + 'count': 1, + 'machines': ['default-monitoring']}, + 'kafka': { + 'count': 2, + 'machines': ['default-kafka1', 'default-kafka2']}, + 'postgresql': { + 'count': 1, + 'machines': ['default-postgresql']}, + 'load_balancer': { + 'count': 1, + 'machines': ['default-loadbalancer']}, + 'rabbitmq': { + 'count': 1, + 'machines': ['default-rabbitmq']}, + 'opensearch': { + 'count': 1, + 'machines': ['default-opensearch']} + } + }, + 'version': '2.0.1dev' +} + + +CLUSTER_DOC_AZURE = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'azure', + 'name': 'default', + 'specification': { + 'name': 'test', + 'prefix': 'prefix', + 'admin_user': { + 'name': 'operations', + 'key_path': '/shared/.ssh/epiphany-operations/id_rsa'}, + 'cloud': { + 'subscription_name': 'YOUR-SUB-NAME', + 'k8s_as_cloud_service': False, + 'use_public_ips': False, + 'default_os_image': 'default'}, + 'components': { + 'repository': {'count': 1}, + 'kubernetes_master': {'count': 1}, + 'kubernetes_node': {'count': 2}, + 'logging': {'count': 1}, + 'monitoring': {'count': 1}, + 'kafka': {'count': 2}, + 'postgresql': {'count': 1}, + 'load_balancer': {'count': 1}, + 'rabbitmq': {'count': 1}, + 'opensearch': {'count': 1} + } + }, + 'version': '2.0.1dev' +} + + +EXPECTED_CLUSTER_DOC_AZURE = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'azure', + 'name': 'default', + 'specification': { + 'name': 'test', + 'prefix': 'prefix', + 'cloud': { + 'k8s_as_cloud_service': False, + 'use_public_ips': False, + 'default_os_image': 'default'}, + 'components': { + 'repository': {'count': 1}, + 'kubernetes_master': {'count': 1}, + 'kubernetes_node': {'count': 2}, + 'logging': {'count': 1}, + 'monitoring': {'count': 1}, + 'kafka': {'count': 2}, + 'postgresql': {'count': 1}, + 'load_balancer': {'count': 1}, + 'rabbitmq': {'count': 1}, + 'opensearch': {'count': 1} + } + }, + 'version': '2.0.1dev' +} + +CLUSTER_DOC_AWS = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'aws', + 'name': 'default', + 'specification': { + 'name': 'test', + 'prefix': 'prefix', + 'admin_user': { + 'name': 'ubuntu', + 'key_path': '/shared/.ssh/epiphany-operations/id_rsa'}, + 'cloud': { + 'k8s_as_cloud_service': False, + 'use_public_ips': False, + 'credentials': { + 'access_key_id': 
'XXXX-XXXX-XXXX', + 'secret_access_key': 'XXXXXXXXXXXXXXXX'}, + 'default_os_image': 'default' + }, + 'components': { + 'repository': {'count': 1}, + 'kubernetes_master': {'count': 1}, + 'kubernetes_node': {'count': 2}, + 'logging': {'count': 1}, + 'monitoring': {'count': 1}, + 'kafka': {'count': 2}, + 'postgresql': {'count': 1}, + 'load_balancer': {'count': 1}, + 'rabbitmq': {'count': 1}, + 'opensearch': {'count': 1} + } + }, + 'version': '2.0.1dev' +} + + +EXPECTED_CLUSTER_DOC_AWS = { + 'kind': 'epiphany-cluster', + 'title': 'Epiphany cluster Config', + 'provider': 'aws', + 'name': 'default', + 'specification': { + 'name': 'test', + 'prefix': 'prefix', + 'cloud': { + 'k8s_as_cloud_service': False, + 'use_public_ips': False, + 'default_os_image': 'default' + }, + 'components': { + 'repository': {'count': 1}, + 'kubernetes_master': {'count': 1}, + 'kubernetes_node': {'count': 2}, + 'logging': {'count': 1}, + 'monitoring': {'count': 1}, + 'kafka': {'count': 2}, + 'postgresql': {'count': 1}, + 'load_balancer': {'count': 1}, + 'rabbitmq': {'count': 1}, + 'opensearch': {'count': 1} + } + }, + 'version': '2.0.1dev' +} + + +COMMON_DOCS = [ + { + 'kind': 'configuration/feature-mappings', + 'title': 'Feature mapping to components', + 'name': 'default' + }, + { + 'kind': 'configuration/image-registry', + 'title': 'Epiphany image registry', + 'name': 'default' + } +] + + +NOT_NEEDED_DOCS = [ + { + 'kind': 'infrastructure/machine', + 'provider': 'any', + 'name': 'default-loadbalancer', + 'specification': { + 'hostname': 'loadbalancer', + 'ip': '192.168.100.110' + }, + 'version': '2.0.1dev' + }, + { + 'kind': 'infrastructure/machine', + 'provider': 'any', + 'name': 'default-rabbitmq', + 'specification': { + 'hostname': 'rabbitmq', + 'ip': '192.168.100.111' + }, + 'version': '2.0.1dev' + }, + { + 'kind': 'infrastructure/machine', + 'provider': 'any', + 'name': 'default-opensearch', + 'specification': { + 'hostname': 'opensearch', + 'ip': '192.168.100.112' + }, + 'version': '2.0.1dev' + } +] + + +MANIFEST_WITH_ADDITIONAL_DOCS = [ CLUSTER_DOC_ANY ] + COMMON_DOCS + NOT_NEEDED_DOCS diff --git a/ansible/playbooks/roles/repository/library/tests/test_filter_credentials.py b/ansible/playbooks/roles/repository/library/tests/test_filter_credentials.py new file mode 100644 index 0000000000..2d4c350a4a --- /dev/null +++ b/ansible/playbooks/roles/repository/library/tests/test_filter_credentials.py @@ -0,0 +1,41 @@ +from copy import deepcopy # make sure that objects used in tests don't get damaged in between test cases +from pathlib import Path + +import pytest + +from library.filter_credentials import _get_filtered_manifest + +from library.tests.data.filter_credentials_data import ( + CLUSTER_DOC_ANY, + CLUSTER_DOC_AWS, + CLUSTER_DOC_AZURE, + EXPECTED_CLUSTER_DOC_ANY, + EXPECTED_CLUSTER_DOC_AWS, + EXPECTED_CLUSTER_DOC_AZURE, + MANIFEST_WITH_ADDITIONAL_DOCS +) + + +@pytest.mark.parametrize('CLUSTER_DOC, EXPECTED_OUTPUT_DOC', + [(CLUSTER_DOC_ANY, EXPECTED_CLUSTER_DOC_ANY), + (CLUSTER_DOC_AZURE, EXPECTED_CLUSTER_DOC_AZURE), + (CLUSTER_DOC_AWS, EXPECTED_CLUSTER_DOC_AWS)]) +def test_epiphany_cluster_doc_filtering(CLUSTER_DOC, EXPECTED_OUTPUT_DOC, mocker): + # Ignore yaml parsing, work on python objects: + mocker.patch('library.filter_credentials.yaml.safe_load_all', return_value=[deepcopy(CLUSTER_DOC)]) + mocker.patch('library.filter_credentials.yaml.dump_all', side_effect=lambda docs: docs) + mocker.patch('library.filter_credentials.Path.open') + + assert _get_filtered_manifest(Path('')) == [EXPECTED_OUTPUT_DOC] + + 
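The tests above stub both ends of the YAML boundary with identity functions, so the filter logic is exercised on plain Python structures instead of serialised text. The trick in isolation (standalone unittest.mock version of what mocker.patch does here):

    from unittest.mock import patch
    import yaml

    def roundtrip(raw):
        return yaml.dump_all(list(yaml.safe_load_all(raw)))

    with patch.object(yaml, 'safe_load_all', side_effect=lambda d: d), \
         patch.object(yaml, 'dump_all', side_effect=lambda d: d):
        docs = [{'kind': 'epiphany-cluster'}]
        assert roundtrip(docs) == docs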
+def test_not_needed_docs_filtering(mocker): + # Ignore yaml parsing, work on python objects: + mocker.patch('library.filter_credentials.yaml.safe_load_all', return_value=deepcopy(MANIFEST_WITH_ADDITIONAL_DOCS)) + mocker.patch('library.filter_credentials.yaml.dump_all', side_effect=lambda docs: docs) + mocker.patch('library.filter_credentials.Path.open') + + EXPECTED_DOCS = ['epiphany-cluster', 'configuration/feature-mappings', 'configuration/image-registry'] + FILTERED_DOCS = [doc['kind'] for doc in _get_filtered_manifest(Path(''))] + + assert FILTERED_DOCS == EXPECTED_DOCS diff --git a/ansible/playbooks/roles/repository/tasks/RedHat/install-packages.yml b/ansible/playbooks/roles/repository/tasks/RedHat/install-packages.yml index ad59e47391..c56e0b70fc 100644 --- a/ansible/playbooks/roles/repository/tasks/RedHat/install-packages.yml +++ b/ansible/playbooks/roles/repository/tasks/RedHat/install-packages.yml @@ -21,6 +21,7 @@ - python36 - python3-pyyaml - rsync # for Ansible (synchronize module) + - tar state: present register: result retries: 3 diff --git a/ansible/playbooks/roles/repository/tasks/check-whether-to-run-download.yml b/ansible/playbooks/roles/repository/tasks/check-whether-to-run-download.yml index f0d1fc4452..d10e7420bc 100644 --- a/ansible/playbooks/roles/repository/tasks/check-whether-to-run-download.yml +++ b/ansible/playbooks/roles/repository/tasks/check-whether-to-run-download.yml @@ -7,15 +7,44 @@ path: "{{ download_requirements_flag }}" register: stat_flag_file -- name: Remove download-requirements-done.flag file if expired +- name: Remove download-requirements flag file if expired file: path: "{{ download_requirements_flag }}" state: absent + register: remove_download_requirements_flag when: - stat_flag_file.stat.exists - (ansible_date_time.epoch|int - stat_flag_file.stat.mtime) > (60 * specification.download_done_flag_expire_minutes) - name: Check whether to run download script - stat: - path: "{{ download_requirements_flag }}" - register: stat_flag_file + when: + - stat_flag_file.stat.exists + - not remove_download_requirements_flag.changed + block: + - name: Load download-requirements flag file + slurp: + path: "{{ download_requirements_flag }}" + register: slurp_download_requirements_flag + + - name: Get checksum of remote input manifest file + when: + - not full_download + - input_manifest_path + stat: + path: "{{ download_requirements_manifest }}" + get_checksum: true + get_attributes: false + get_mime: false + checksum_algorithm: sha1 + register: stat_remote_manifest + +# Skip download script when flag file exists and checksums are equal +- name: Set skip_download_requirements_script fact + set_fact: + skip_download_requirements_script: >- + {{ True if slurp_download_requirements_flag.content is defined and ( + not (slurp_download_requirements_flag.content | b64decode | from_yaml).manifest_sha1 + or (stat_remote_manifest.stat.checksum is defined and + stat_remote_manifest.stat.checksum == (slurp_download_requirements_flag.content | b64decode | from_yaml).manifest_sha1) + ) + else False }} diff --git a/ansible/playbooks/roles/repository/tasks/clean-up-epirepo.yml b/ansible/playbooks/roles/repository/tasks/clean-up-epirepo.yml index e9b26fd3ba..545613544e 100644 --- a/ansible/playbooks/roles/repository/tasks/clean-up-epirepo.yml +++ b/ansible/playbooks/roles/repository/tasks/clean-up-epirepo.yml @@ -51,17 +51,45 @@ file: roles/image_registry/vars/main.yml name: image_registry_vars +- name: Define images to unpack + set_fact: + current_schema_images: "{{ 
image_registry_vars.specification.images_to_load[ansible_architecture].current }}" + generic_schema_images: "{{ image_registry_vars.specification.images_to_load[ansible_architecture].generic }}" + legacy_schema_images: "{{ image_registry_vars.specification.images_to_load[ansible_architecture].legacy }}" + +- name: Initialize image facts + set_fact: + current_images: [] + generic_images: [] + legacy_images: [] + +- name: Set list of current images to be loaded/pushed + set_fact: + current_images: "{{ current_schema_images | dict_to_list(only_values='True') | flatten }}" + +- name: Set list of generic images to be loaded/pushed + set_fact: + generic_images: "{{ generic_schema_images | dict_to_list(only_values='True') | flatten }}" + +- name: Set list of legacy images to be loaded/pushed + set_fact: + legacy_images: "{{ legacy_schema_images | dict_to_list(only_values='True') | flatten }}" + +- name: Merge current, legacy and generic images + set_fact: + all_images: >- + {{ current_images + generic_images + legacy_images }} + - name: Remove old images from epirepo file: state: absent path: "{{ _apache_epirepo_path }}/images/{{ item }}" vars: images_found: "{{ files_in_epirepo.results[1].files | map(attribute='path') | map('basename') }}" - images_to_load: "{{ image_registry_vars.specification.images_to_load[ansible_architecture] }}" - images_to_preserve: "{{ images_to_load | json_query('*[].file_name') + [ image_registry_vars.specification.registry_image.file_name ] }}" + images_to_preserve: "{{ all_images | json_query('[*].file_name') + [ image_registry_vars.specification.registry_image.file_name ] }}" # images to remove since they may have the same filename but different content (e.g. jboss/keycloak vs epiphanyplatform/keycloak), # to be optimized (checksums) - replaced_images: "{{ images_to_load | json_query('*[]') | selectattr('name', 'match', 'epiphanyplatform/') + replaced_images: "{{ all_images | json_query('[*]') | selectattr('name', 'match', 'epiphanyplatform/') | map(attribute='file_name') }}" images_to_remove: "{{ images_found | difference(images_to_preserve) + replaced_images }}" loop: "{{ images_to_remove }}" diff --git a/ansible/playbooks/roles/repository/tasks/copy-download-requirements.yml b/ansible/playbooks/roles/repository/tasks/copy-download-requirements.yml index d6cb17c4f8..43650fd901 100644 --- a/ansible/playbooks/roles/repository/tasks/copy-download-requirements.yml +++ b/ansible/playbooks/roles/repository/tasks/copy-download-requirements.yml @@ -55,6 +55,20 @@ dest: "{{ download_requirements_dir }}/{{ item }}" loop: "{{ _files }}" + - name: Manifest handling + when: not full_download and input_manifest_path + block: + - name: Filter sensitive data from the manifest + filter_credentials: + src: "{{ input_manifest_path }}" + dest: /tmp/filtered_manifest.yml + delegate_to: localhost + + - name: Copy the manifest file + synchronize: + src: /tmp/filtered_manifest.yml + dest: "{{ download_requirements_dir }}/manifest.yml" + - name: Copy RedHat family specific download requirements file synchronize: src: "download-requirements/{{ _family_packages }}" diff --git a/ansible/playbooks/roles/repository/tasks/download-requirements.yml b/ansible/playbooks/roles/repository/tasks/download-requirements.yml index 534f23675f..ae0d7f1c93 100644 --- a/ansible/playbooks/roles/repository/tasks/download-requirements.yml +++ b/ansible/playbooks/roles/repository/tasks/download-requirements.yml @@ -2,6 +2,25 @@ # download-requirements-done.flag file is used to avoid re-downloading requirements 
(to save time) # this is to be optimized in the future +- name: |- + Run download-requirements script, this can take a long time (optimized with manifest) + You can check progress on repository host with: journalctl -f -t download-requirements.py + shell: >- + set -o pipefail && + "{{ download_requirements_script }}" \ + /var/www/html/epirepo \ + "{{ download_requirements_os_name }}" \ + --manifest "{{ download_requirements_manifest }}" \ + --no-logfile \ + --repos-backup-file /var/tmp/enabled-system-repos.tar \ + --verbose |& + tee >(systemd-cat --identifier=download-requirements.py) + args: + executable: /bin/bash + when: + - not full_download + - input_manifest_path + - name: |- Run download-requirements script, this can take a long time You can check progress on repository host with: journalctl -f -t download-requirements.py @@ -10,13 +29,29 @@ "{{ download_requirements_script }}" \ /var/www/html/epirepo \ "{{ download_requirements_os_name }}" \ + --no-logfile \ --repos-backup-file /var/tmp/enabled-system-repos.tar \ - --no-logfile |& + --verbose |& tee >(systemd-cat --identifier=download-requirements.py) args: executable: /bin/bash + when: full_download or not input_manifest_path + +# This is to check whether input configuration has changed +- name: Get checksum of remote input manifest file + when: + - not full_download + - input_manifest_path + stat: + path: "{{ download_requirements_manifest }}" + get_checksum: true + get_attributes: false + get_mime: false + checksum_algorithm: sha1 + register: stat_remote_manifest - name: Create flag file to not re-download requirements next time - file: - path: "{{ download_requirements_flag }}" - state: touch + copy: + dest: "{{ download_requirements_flag }}" + content: > + manifest_sha1: {{ stat_remote_manifest.stat.checksum if stat_remote_manifest.stat.checksum is defined else 'null' }} diff --git a/ansible/playbooks/roles/repository/tasks/setup.yml b/ansible/playbooks/roles/repository/tasks/setup.yml index 545a8ee568..54b93cca5d 100644 --- a/ansible/playbooks/roles/repository/tasks/setup.yml +++ b/ansible/playbooks/roles/repository/tasks/setup.yml @@ -59,7 +59,7 @@ - not custom_repository_url - inventory_hostname in target_repository_hostnames -- include_tasks: check-whether-to-run-download.yml # sets 'stat_flag_file' +- include_tasks: check-whether-to-run-download.yml # sets 'skip_download_requirements_script' when: - not offline_mode - not custom_repository_url @@ -94,7 +94,7 @@ - inventory_hostname in target_repository_hostnames - custom_repository_url or offline_mode or - not stat_flag_file.stat.exists # do not clean up when skipping download + not skip_download_requirements_script # do not clean up when skipping download - name: |- Copy requirements for offline installation to repository host, this can take a long time @@ -116,11 +116,13 @@ - not offline_mode - not custom_repository_url - inventory_hostname in target_repository_hostnames - - not stat_flag_file.stat.exists + - not skip_download_requirements_script - name: Set up repositories include_tasks: "{{ ansible_os_family }}/setup.yml" - name: Include Helm repository creation include_tasks: "create-helm-repo.yml" - when: inventory_hostname == target_repository_hostnames[0] + when: + - inventory_hostname == target_repository_hostnames[0] + - (groups['kubernetes_master'] is defined and inventory_hostname in groups['kubernetes_master']) diff --git a/ansible/playbooks/roles/rook/tasks/main.yml b/ansible/playbooks/roles/rook/tasks/main.yml index 74cf6d5853..acdbff636b 100644 --- 
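Taken together, the flag-file changes above mean: the flag now stores the sha1 of the manifest used for the last successful download, and the download script is skipped only when the flag exists and either pins no manifest or pins the same checksum as the current one. The decision restated as a Python sketch (the authoritative logic is the set_fact expression in check-whether-to-run-download.yml; this is a simplified illustration):

    import hashlib
    from pathlib import Path

    def should_skip_download(flag: Path, manifest: Path) -> bool:
        if not flag.exists():
            return False
        recorded = None
        for line in flag.read_text().splitlines():
            if line.startswith('manifest_sha1:'):
                value = line.split(':', 1)[1].strip()
                recorded = None if value == 'null' else value
        if recorded is None:
            return True   # flag present but no manifest pinned -> skip
        return hashlib.sha1(manifest.read_bytes()).hexdigest() == recorded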
a/ansible/playbooks/roles/rook/tasks/main.yml +++ b/ansible/playbooks/roles/rook/tasks/main.yml @@ -1,8 +1,6 @@ --- - name: Prepare configuration and upgrade/install Rook Helm chart when: specification.enabled - become: true - run_once: true block: - name: RedHat fix | Create helm's binary symlink file: diff --git a/ansible/playbooks/roles/upgrade/defaults/main.yml b/ansible/playbooks/roles/upgrade/defaults/main.yml index e7e0a5f77a..1695625cdb 100644 --- a/ansible/playbooks/roles/upgrade/defaults/main.yml +++ b/ansible/playbooks/roles/upgrade/defaults/main.yml @@ -1,24 +1,10 @@ --- -logging: +opensearch: upgrade_config: custom_admin_certificate: - cert_path: /etc/elasticsearch/custom-admin.pem - key_path: /etc/elasticsearch/custom-admin-key.pem - -opendistro_for_elasticsearch: - upgrade_config: - custom_admin_certificate: - cert_path: /etc/elasticsearch/custom-admin.pem - key_path: /etc/elasticsearch/custom-admin-key.pem - - certs_migration: - demo_DNs: - admin: CN=kirk,OU=client,O=client,L=test,C=de - node: CN=node-0.example.com,OU=node,O=node,L=test,DC=de - dual_root_ca: - filename: demo2epiphany-certs-migration-root-CAs.pem - - upgrade_state_file_path: /etc/elasticsearch/epicli-upgrade-started.state + cert_path: /etc/elasticsearch/epiphany-admin.pem + key_path: /etc/elasticsearch/epiphany-admin-key.pem + upgrade_state_file_path: /var/lib/epiphany/upgrade/state/opensearch-upgrade.uncompleted kubernetes: upgrade_state_file_path: /var/lib/epiphany/upgrade/state/kubernetes-{{ ver }}.uncompleted diff --git a/ansible/playbooks/roles/upgrade/tasks/elasticsearch-curator.yml b/ansible/playbooks/roles/upgrade/tasks/elasticsearch-curator.yml index f7731c3218..81af709f8f 100644 --- a/ansible/playbooks/roles/upgrade/tasks/elasticsearch-curator.yml +++ b/ansible/playbooks/roles/upgrade/tasks/elasticsearch-curator.yml @@ -24,6 +24,6 @@ - name: Update elasticsearch-curator package include_role: name: elasticsearch_curator - tasks_from: install-es-curator-{{ ansible_os_family }} # update only package and do not change configured cron jobs + tasks_from: install-ops-curator-{{ ansible_os_family }} # update only package and do not change configured cron jobs when: - curator_defaults.curator_version is version(ansible_facts.packages['elasticsearch-curator'][0].version, '>') diff --git a/ansible/playbooks/roles/upgrade/tasks/filebeat.yml b/ansible/playbooks/roles/upgrade/tasks/filebeat.yml index 978a8e0f3b..33ce0a97e8 100644 --- a/ansible/playbooks/roles/upgrade/tasks/filebeat.yml +++ b/ansible/playbooks/roles/upgrade/tasks/filebeat.yml @@ -74,11 +74,13 @@ dest: /etc/filebeat/filebeat.yml.bak_{{ ansible_facts.packages['filebeat'][0].version }} mode: u=rw,go= - - import_role: + - name: Install Filebeat as system service + import_role: name: filebeat tasks_from: install-filebeat-as-system-service - - import_role: + - name: Configure auditd + import_role: name: filebeat tasks_from: configure-auditd @@ -93,7 +95,8 @@ _filebeat_existing_config: "{{ _filebeat_config_yml.content | b64decode | from_yaml }}" no_log: true - - import_role: + - name: Configure Filebeat + import_role: name: filebeat tasks_from: configure-filebeat vars: diff --git a/ansible/playbooks/roles/upgrade/tasks/kibana.yml b/ansible/playbooks/roles/upgrade/tasks/kibana.yml deleted file mode 100644 index c8e3baab72..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/kibana.yml +++ /dev/null @@ -1,47 +0,0 @@ ---- -- name: Kibana | Get information about installed packages as facts - package_facts: - manager: auto - when: ansible_facts.packages 
is undefined - -# Kibana is upgraded only when there is no 'kibana-oss' package (replaced by 'opendistroforelasticsearch-kibana' since v0.5). -# This condition has been added to not fail when 'epicli upgrade' is run for Epiphany v0.4 cluster. -# We cannot upgrade Kibana to v7 having Elasticsearch v6. -- name: Upgrade Kibana - when: ansible_facts.packages['kibana-oss'] is undefined - block: - - name: Kibana | Assert that opendistroforelasticsearch-kibana package is installed - assert: - that: ansible_facts.packages['opendistroforelasticsearch-kibana'] is defined - fail_msg: opendistroforelasticsearch-kibana package not found, nothing to upgrade - quiet: true - - - name: Kibana | Load defaults from kibana role - include_vars: - file: roles/kibana/defaults/main.yml - name: kibana_defaults - - - name: Kibana | Print versions - debug: - msg: - - "Installed version: {{ ansible_facts.packages['opendistroforelasticsearch-kibana'][0].version }}" - - "Target version: {{ kibana_defaults.kibana_version[ansible_os_family] }}" - - - name: Upgrade Kibana - when: - - kibana_defaults.kibana_version[ansible_os_family] - is version(ansible_facts.packages['opendistroforelasticsearch-kibana'][0].version, '>=') - block: - - name: Kibana | Slurp /etc/kibana/kibana.yml - slurp: - src: /etc/kibana/kibana.yml - register: _kibana_config_yml - no_log: true - - - name: Kibana | Upgrade - import_role: - name: kibana - vars: - context: upgrade - existing_es_password: >- - {{ (_kibana_config_yml.content | b64decode | from_yaml)['elasticsearch.password'] }} diff --git a/ansible/playbooks/roles/upgrade/tasks/kubernetes/patch-kubelet-cm.yml b/ansible/playbooks/roles/upgrade/tasks/kubernetes/patch-kubelet-cm.yml index 5af0aa3094..a63ee7d19a 100644 --- a/ansible/playbooks/roles/upgrade/tasks/kubernetes/patch-kubelet-cm.yml +++ b/ansible/playbooks/roles/upgrade/tasks/kubernetes/patch-kubelet-cm.yml @@ -1,6 +1,8 @@ --- - name: k8s/kubelet-cm | Include set-cluster-version.yml - include_tasks: set-cluster-version.yml # sets cluster_version + import_role: + name: kubernetes_common + tasks_from: set-cluster-version.yml # sets cluster_version - name: k8s/kubelet-cm | Get kubelet config from ConfigMap command: |- diff --git a/ansible/playbooks/roles/upgrade/tasks/kubernetes/upgrade-master0.yml b/ansible/playbooks/roles/upgrade/tasks/kubernetes/upgrade-master0.yml index 8d447738ef..541c3feddd 100644 --- a/ansible/playbooks/roles/upgrade/tasks/kubernetes/upgrade-master0.yml +++ b/ansible/playbooks/roles/upgrade/tasks/kubernetes/upgrade-master0.yml @@ -38,8 +38,10 @@ delay: 30 changed_when: false - - name: k8s/master0 | Include set-cluster-version.yml - include_tasks: set-cluster-version.yml # sets cluster_version + - name: k8s/master0 | Include set-cluster-version.yml + import_role: + name: kubernetes_common + tasks_from: set-cluster-version.yml # sets cluster_version # Retries needed for HA deployment (random failures) - name: k8s/master0 | Add k8s annotation for containerd diff --git a/ansible/playbooks/roles/upgrade/tasks/kubernetes/verify-upgrade.yml b/ansible/playbooks/roles/upgrade/tasks/kubernetes/verify-upgrade.yml index 2455749f82..fae5c45f69 100644 --- a/ansible/playbooks/roles/upgrade/tasks/kubernetes/verify-upgrade.yml +++ b/ansible/playbooks/roles/upgrade/tasks/kubernetes/verify-upgrade.yml @@ -6,8 +6,10 @@ - name: k8s/verify | Include wait-for-kube-apiserver.yml include_tasks: utils/wait-for-kube-apiserver.yml - - name: k8s/verify | Include set-cluster-version.yml - include_tasks: set-cluster-version.yml + - name: 
k8s/verify | Include set-cluster-version.yml + import_role: + name: kubernetes_common + tasks_from: set-cluster-version.yml # sets cluster_version - name: k8s/verify | Verify cluster version assert: diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-01.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-01.yml deleted file mode 100644 index b3f14e4137..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-01.yml +++ /dev/null @@ -1,52 +0,0 @@ ---- -- name: ODFE | Get information about installed packages as facts - package_facts: - manager: auto - when: ansible_facts.packages is undefined - -- name: ODFE | Assert that elasticsearch-oss package is installed - assert: - that: ansible_facts.packages['elasticsearch-oss'] is defined - fail_msg: elasticsearch-oss package not found, nothing to upgrade - quiet: true - -- name: ODFE | Include defaults from opendistro_for_elasticsearch role - include_vars: - file: roles/opendistro_for_elasticsearch/defaults/main.yml - name: odfe_defaults - -- name: ODFE | Patch log4j - include_role: - name: opendistro_for_elasticsearch - tasks_from: patch-log4j - when: odfe_defaults.log4j_file_name is defined - -- name: Restart elasticsearch service - systemd: - name: elasticsearch - state: restarted - register: restart_elasticsearch - when: odfe_defaults.log4j_file_name is defined and log4j_patch.changed - -- name: ODFE | Print elasticsearch-oss versions - debug: - msg: - - "Installed version: {{ ansible_facts.packages['elasticsearch-oss'][0].version }}" - - "Target version: {{ odfe_defaults.versions[ansible_os_family].elasticsearch_oss }}" - -# If state file exists it means the previous run failed -- name: ODFE | Check if upgrade state file exists - stat: - path: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" - get_attributes: false - get_checksum: false - get_mime: false - register: stat_upgrade_state_file - -- name: ODFE | Upgrade Elasticsearch and ODFE plugins (part 1/2) - include_tasks: opendistro_for_elasticsearch/upgrade-elasticsearch-01.yml - when: _target_version is version(ansible_facts.packages['elasticsearch-oss'][0].version, '>') - or (_target_version is version(ansible_facts.packages['elasticsearch-oss'][0].version, '==') - and stat_upgrade_state_file.stat.exists) - vars: - _target_version: "{{ odfe_defaults.versions[ansible_os_family].elasticsearch_oss }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-02.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-02.yml deleted file mode 100644 index 2b3f304465..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch-02.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -# If state file exists, it means upgrade has been started by the previous play and should be continued -- name: ODFE | Check if upgrade state file exists - stat: - path: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" - get_attributes: false - get_checksum: false - get_mime: false - register: stat_upgrade_state_file - -- name: ODFE | Upgrade Elasticsearch and ODFE plugins (part 2/2) - include_tasks: opendistro_for_elasticsearch/upgrade-elasticsearch-02.yml - when: stat_upgrade_state_file.stat.exists diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-01.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-01.yml deleted file mode 100644 index 806c09a3d0..0000000000 --- 
a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-01.yml +++ /dev/null @@ -1,71 +0,0 @@ ---- -# ================================================================================================= -# Migration from demo certs to generated by Epiphany -# ------------------------------------------------------------------------------------------------- -# A) Parallel part (all nodes at the same time) - THIS FILE -# 1. Assert API access using demo cert (done in pre-migration part) -# 2. Generate Epiphany certs (done in pre-migration part) -# 3. Save cluster status to file (done in pre-migration part) -# 4. Create dual root CA file for the migration (demo + Epiphany root CAs concatenated), needed temporarily -# 5. Patch the following properties in existing elasticsearch.yml: -# a) opendistro_security.authcz.admin_dn - add Epiphany admin cert -# b) opendistro_security.nodes_dn - by default not present, add all Epiphany node certs -# c) opendistro_security.ssl.http.pemtrustedcas_filepath - replace demo root CA with the dual root CA file -# d) opendistro_security.ssl.transport.pemtrustedcas_filepath - replace demo root CA with the dual root CA file -# B) Serial part (node by node) - tasks from migrate-from-demo-certs-02.yml - -# Create dual root CA transitional file -- include_tasks: utils/create-dual-cert-file.yml - vars: - certs_to_concatenate: - - "{{ (certificates.dirs.certs, certificates.files.demo.root_ca.cert) | path_join }}" - - "{{ (certificates.dirs.certs, certificates.files.root_ca.cert.filename) | path_join }}" - target_path: "{{ (certificates.dirs.certs, opendistro_for_elasticsearch.certs_migration.dual_root_ca.filename) | path_join }}" - -- name: ODFE | Load /etc/elasticsearch/elasticsearch.yml - slurp: - src: /etc/elasticsearch/elasticsearch.yml - register: _elasticsearch_yml - -- name: OFDE | Patch /etc/elasticsearch/elasticsearch.yml (switch to dual root CA) - copy: - dest: /etc/elasticsearch/elasticsearch.yml - content: "{{ _patched_content | to_nice_yaml }}" - mode: u=rw,g=rw,o= - owner: root - group: elasticsearch - backup: true - vars: - _epiphany_subjects: - admin: "{{ certificates.files.admin.cert.subject }}" - node: "{{ certificates.files.node.cert.subject }}" - _epiphany_dn_attributes: - admin: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.admin.keys()) }}" - node: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.node.keys()) }}" - _epiphany_DNs: - admin: >- - {{ _epiphany_dn_attributes.admin | zip(_epiphany_dn_attributes.admin | map('extract', _epiphany_subjects.admin)) - | map('join','=') | join(',') }} - node: >- - {{ _epiphany_dn_attributes.node | zip(_epiphany_dn_attributes.node | map('extract', _epiphany_subjects.node)) - | map('join','=') | join(',') }} - _epiphany_nodes_dn: >- - {%- for node in ansible_play_hosts_all -%} - {%- if loop.first -%}[{%- endif -%} - '{{ _epiphany_DNs.node.split(',') | map('regex_replace', '^CN=.+$', 'CN=' + hostvars[node].ansible_nodename) | join(',') }}' - {%- if not loop.last -%},{%- else -%}]{%- endif -%} - {%- endfor -%} - _old_content: >- - {{ _elasticsearch_yml.content | b64decode | from_yaml }} - _updated_settings: - opendistro_security.authcz.admin_dn: >- - {{ _old_content['opendistro_security.authcz.admin_dn'] | default([]) | map('replace', ', ', ',') - | union([opendistro_for_elasticsearch.certs_migration.demo_DNs.admin] + [_epiphany_DNs.admin]) }} - opendistro_security.nodes_dn: >- - {{ _old_content['opendistro_security.nodes_dn'] | default([]) - 
| union([opendistro_for_elasticsearch.certs_migration.demo_DNs.node] + _epiphany_nodes_dn) }} - - opendistro_security.ssl.http.pemtrustedcas_filepath: "{{ opendistro_for_elasticsearch.certs_migration.dual_root_ca.filename }}" - opendistro_security.ssl.transport.pemtrustedcas_filepath: "{{ opendistro_for_elasticsearch.certs_migration.dual_root_ca.filename }}" - _patched_content: >- - {{ _old_content | combine(_updated_settings) }} diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-02.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-02.yml deleted file mode 100644 index 223f6968df..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-02.yml +++ /dev/null @@ -1,115 +0,0 @@ ---- -# ================================================================================================= -# Migration from demo certs to generated by Epiphany -# ------------------------------------------------------------------------------------------------- -# A) Parallel part (all nodes at the same time) - tasks from migrate-from-demo-certs-01.yml -# B) Serial part (node by node) - THIS FILE -# 1. Prepare cluster for a node restart (disable shard allocation) -# 2. Restart all nodes one by one waiting for yellow cluster status after each restart -# 3. Patch elasticsearch.yml to use Epiphany node cert instead of demo (all nodes) -# 4. Restart all nodes one by one waiting for yellow cluster status after each restart -# 5. Re-enable shard allocation -# 6. Wait for green/yellow cluster status -# 7. Test API access using Epiphany admin cert (all nodes) -# 8. Update API related facts to use Epiphany admin cert instead of demo -# 9. Reload config file - -- when: inventory_hostname == ansible_play_hosts_all[0] # run once - block: - # Prepare cluster for a node restart - - include_tasks: utils/prepare-cluster-for-node-restart.yml - - # Restart all nodes (special flow: run once but in loop for each host) - - include_tasks: - file: utils/restart-node.yml - apply: - delegate_to: "{{ target_inventory_hostname }}" - delegate_facts: true - loop: "{{ ansible_play_hosts_all }}" - loop_control: - loop_var: target_inventory_hostname - - # Patch elasticsearch.yml to use Epiphany node cert (all hosts) - - - name: ODFE | Load /etc/elasticsearch/elasticsearch.yml - slurp: - src: /etc/elasticsearch/elasticsearch.yml - register: _elasticsearch_yml - delegate_to: "{{ target_inventory_hostname }}" - loop: "{{ ansible_play_hosts_all }}" - loop_control: - loop_var: target_inventory_hostname - - - name: OFDE | Patch /etc/elasticsearch/elasticsearch.yml (switch to Epiphany node certificates) - copy: - dest: /etc/elasticsearch/elasticsearch.yml - content: "{{ _patched_content | to_nice_yaml }}" - mode: u=rw,g=rw,o= - owner: root - group: elasticsearch - backup: true - delegate_to: "{{ target_inventory_hostname }}" - delegate_facts: true - loop: "{{ ansible_play_hosts_all }}" - loop_control: - index_var: loop_index0 - loop_var: target_inventory_hostname - vars: - _node_hostname: "{{ hostvars[target_inventory_hostname].ansible_nodename }}" - _epiphany_node_cert: - cert_filename: "{{ certificates.files.node.cert.filename | replace(ansible_nodename, _node_hostname) }}" - key_filename: "{{ certificates.files.node.key.filename | replace(ansible_nodename, _node_hostname) }}" - _old_content: >- - {{ _elasticsearch_yml.results[loop_index0].content | b64decode | from_yaml }} - _updated_settings: - 
opendistro_security.ssl.http.pemcert_filepath: "{{ _epiphany_node_cert.cert_filename }}" - opendistro_security.ssl.http.pemkey_filepath: "{{ _epiphany_node_cert.key_filename }}" - opendistro_security.ssl.transport.pemcert_filepath: "{{ _epiphany_node_cert.cert_filename }}" - opendistro_security.ssl.transport.pemkey_filepath: "{{ _epiphany_node_cert.key_filename }}" - _patched_content: >- - {{ _old_content | combine(_updated_settings) }} - - # Restart all nodes (special flow: run once but in loop for each host) - - include_tasks: - file: utils/restart-node.yml - apply: - delegate_to: "{{ target_inventory_hostname }}" - delegate_facts: true - loop: "{{ ansible_play_hosts_all }}" - loop_control: - loop_var: target_inventory_hostname - - # Re-enable shard allocation - - include_tasks: utils/enable-shard-allocation.yml - - # Wait for shard allocation (for 'green' status at least 2 nodes must be already upgraded) - - include_tasks: utils/wait-for-shard-allocation.yml - - # Test API access using Epiphany admin cert (all nodes) - - include_tasks: - file: utils/assert-api-access.yml - apply: - delegate_to: "{{ target_inventory_hostname }}" - delegate_facts: true - loop: "{{ ansible_play_hosts_all }}" - loop_control: - loop_var: target_inventory_hostname - vars: - es_api: - cert_type: Epiphany - cert_path: &epi_cert_path "{{ (certificates.dirs.certs, certificates.files.admin.cert.filename) | path_join }}" - key_path: &epi_key_path "{{ (certificates.dirs.certs, certificates.files.admin.key.filename) | path_join }}" - url: "{{ hostvars[target_inventory_hostname].es_api.url }}" - fail_msg: API access test failed. - -- name: Update API related facts to use Epiphany admin certificate instead of demo - set_fact: - es_api: "{{ es_api | combine(_es_api) }}" - vars: - _es_api: - cert_type: Epiphany - cert_path: *epi_cert_path - key_path: *epi_key_path - -# Reload config file to preserve patched settings (sets 'existing_config' fact) -- include_tasks: utils/get-config-from-files.yml diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-non-clustered.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-non-clustered.yml deleted file mode 100644 index addd327aa3..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/migrate-from-demo-certs-non-clustered.yml +++ /dev/null @@ -1,77 +0,0 @@ ---- -- name: ODFE | Load /etc/elasticsearch/elasticsearch.yml - slurp: - src: /etc/elasticsearch/elasticsearch.yml - register: _elasticsearch_yml - -- name: OFDE | Patch /etc/elasticsearch/elasticsearch.yml (switch to generated certificates) - copy: - dest: /etc/elasticsearch/elasticsearch.yml - content: "{{ _patched_content | to_nice_yaml }}" - mode: u=rw,g=rw,o= - owner: root - group: elasticsearch - backup: true - vars: - _epiphany_subjects: - admin: "{{ certificates.files.admin.cert.subject }}" - node: "{{ certificates.files.node.cert.subject }}" - _epiphany_dn_attributes: - admin: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.admin.keys()) }}" - node: "{{ certificates.dn_attributes_order | intersect(_epiphany_subjects.node.keys()) }}" - _epiphany_DNs: - admin: >- - {{ _epiphany_dn_attributes.admin | zip(_epiphany_dn_attributes.admin | map('extract', _epiphany_subjects.admin)) - | map('join','=') | join(',') }} - node: >- - {{ _epiphany_dn_attributes.node | zip(_epiphany_dn_attributes.node | map('extract', _epiphany_subjects.node)) - | map('join','=') | join(',') }} - 
_old_content: >- - {{ _elasticsearch_yml.content | b64decode | from_yaml }} - _updated_settings: - opendistro_security.authcz.admin_dn: >- - {{ _old_content['opendistro_security.authcz.admin_dn'] | default([]) | map('replace', ', ', ',') - | union([_epiphany_DNs.admin]) }} - opendistro_security.nodes_dn: >- - {{ _old_content['opendistro_security.nodes_dn'] | default([]) - | union([_epiphany_DNs.node]) }} - - opendistro_security.ssl.http.pemcert_filepath: "{{ certificates.files.node.cert.filename }}" - opendistro_security.ssl.http.pemkey_filepath: "{{ certificates.files.node.key.filename }}" - opendistro_security.ssl.transport.pemcert_filepath: "{{ certificates.files.node.cert.filename }}" - opendistro_security.ssl.transport.pemkey_filepath: "{{ certificates.files.node.key.filename }}" - - opendistro_security.ssl.http.pemtrustedcas_filepath: "{{ certificates.files.root_ca.cert.filename }}" - opendistro_security.ssl.transport.pemtrustedcas_filepath: "{{ certificates.files.root_ca.cert.filename }}" - - _patched_content: >- - {{ _old_content | combine(_updated_settings) }} - -- include_tasks: - file: utils/restart-node.yml - vars: - target_inventory_hostname: "{{ inventory_hostname }}" - skip_waiting_for_node: true # because after restart demo certificate stops working - -# Test API access using Epiphany admin cert -- include_tasks: - file: utils/assert-api-access.yml - vars: - es_api: - cert_type: Epiphany - cert_path: &epi_cert_path "{{ (certificates.dirs.certs, certificates.files.admin.cert.filename) | path_join }}" - key_path: &epi_key_path "{{ (certificates.dirs.certs, certificates.files.admin.key.filename) | path_join }}" - url: "{{ hostvars[inventory_hostname].es_api.url }}" - fail_msg: API access test failed. - -- name: Update API related facts to use Epiphany admin certificate instead of demo - set_fact: - es_api: "{{ es_api | combine(_es_api) }}" - vars: - _es_api: - cert_type: Epiphany - cert_path: *epi_cert_path - key_path: *epi_key_path - -# Reload config file to preserve patched settings (sets 'existing_config' fact) -- include_tasks: utils/get-config-from-files.yml diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-01.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-01.yml deleted file mode 100644 index e709502eda..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-01.yml +++ /dev/null @@ -1,157 +0,0 @@ ---- -# This file contains only pre-upgrade tasks that can be run in parallel on all hosts - -- name: ODFE | Create upgrade state file - become: true - file: - path: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" - state: touch - mode: u=rw,g=r,o= - -- name: ODFE | Ensure elasticsearch service is running - systemd: - name: elasticsearch - enabled: yes - state: started - register: elasticsearch_state - -# Sets 'existing_config' fact -- include_tasks: utils/get-config-from-files.yml - -- name: ODFE | Set common facts - set_fact: - certificates: "{{ odfe_defaults.certificates }}" - es_host: "{{ existing_config.main['network.host'] | default('_local_') }}" - es_http_port: "{{ existing_config.main['http.port'] | default(odfe_defaults.ports.http) }}" - es_transport_port: "{{ existing_config.main['transport.port'] | default(odfe_defaults.ports.transport) }}" - es_clustered: "{{ (existing_config.main['discovery.seed_hosts'] | length > 1) | ternary(True, False) }}" - es_node_name: "{{ existing_config.main['node.name'] }}" - 
-- name: ODFE | Wait for elasticsearch service to start up - wait_for: - port: "{{ es_transport_port }}" - host: "{{ es_host if (es_host is not regex('^_.+_$')) else '0.0.0.0' }}" # 0.0.0.0 means any IP - when: elasticsearch_state.changed - -# This block requires elasticsearch service to be running -- name: Get host address when special value is used # e.g. '_site_' - when: es_host is regex('^_.+_$') - block: - - name: Gather facts on listening ports - community.general.listen_ports_facts: - - - name: Get host address based on transport port - set_fact: - es_host: "{{ ansible_facts.tcp_listen | selectattr('port', '==', es_transport_port|int) - | map(attribute='address') | reject('match', '::') | first }}" - -# NOTE: We need admin certificate for passwordless administrative access to REST API (since we don't know admin's password) - -- include_role: - name: certificate - tasks_from: install-packages # requirements for Ansible certificate modules - -- name: ODFE | Get information on root CA certificate - community.crypto.x509_certificate_info: - # 'pemtrustedcas_filepath' is a relative path - path: "{{ ('/etc/elasticsearch', existing_config.main['opendistro_security.ssl.transport.pemtrustedcas_filepath']) | path_join }}" - register: _root_ca_info - -- name: ODFE | Check if demo or Epiphany certificates are in use # self-signed - set_fact: - _is_demo_cert_in_use: "{{ 'True' if _root_ca_info.subject.commonName == 'Example Com Inc. Root CA' else 'False' }}" - _is_epiphany_cert_in_use: "{{ 'True' if _root_ca_info.subject.commonName == 'Epiphany Managed ODFE Root CA' else 'False' }}" - -# For custom admin cert (non-demo and non-Epiphany), we use workaround (upgrade_config.custom_admin_certificate). -# The workaround should be replaced after implementing task #2127. 
-- name: ODFE | Set API access facts - set_fact: - es_api: - cert_path: "{{ _cert_path[_cert_type] }}" - cert_type: "{{ _cert_type }}" - key_path: "{{ _key_path[_cert_type] }}" - url: https://{{ es_host }}:{{ es_http_port }} - vars: - _cert_type: >- - {{ 'demo' if (_is_demo_cert_in_use) else - 'Epiphany' if (_is_epiphany_cert_in_use) else - 'custom' }} - _cert_path: - custom: "{{ lookup('vars', current_group_name).upgrade_config.custom_admin_certificate.cert_path }}" # defaults are not available via hostvars - demo: "{{ (certificates.dirs.certs, certificates.files.demo.admin.cert) | path_join }}" - Epiphany: "{{ (certificates.dirs.certs, certificates.files.admin.cert.filename) | path_join }}" - _key_path: - custom: "{{ lookup('vars', current_group_name).upgrade_config.custom_admin_certificate.key_path }}" - demo: "{{ (certificates.dirs.certs, certificates.files.demo.admin.key) | path_join }}" - Epiphany: "{{ (certificates.dirs.certs, certificates.files.admin.key.filename) | path_join }}" - -- include_tasks: utils/assert-cert-files-exist.yml - -# ================================================================================================= -# FLOW -# ------------------------------------------------------------------------------------------------- -# NOTE: For clustered nodes it's recommended to disable shard allocation for the cluster before restarting a node (https://www.elastic.co/guide/en/elasticsearch/reference/current/restart-cluster.html#restart-cluster-rolling) -# -# if cert_type == 'demo': -# Test API access -# Genereate Epiphany self-signed certs -# Save cluster status to file -# Run certificates migration procedure for all nodes when 'es_clustered is true' -# // Subtasks of the migration procedure: -# Test API access -# Update API related facts to use Epiphany admin certificate instead of demo -# if cert_type == 'Epiphany': -# Genereate Epiphany self-signed certs - to re-new certs if expiration date differs -# Test API access -# Save cluster status to file -# if cert_type == 'custom': -# Test API access -# Save cluster status to file -# Run upgrade (removes known demo certificate files) -# if cert_type == 'Epiphany': -# Remove dual root CA file (created as part of the migration, needed until all nodes are upgraded) -# ================================================================================================= - -# Test API access (demo or custom certs) -- include_tasks: utils/assert-api-access.yml - when: es_api.cert_type in ['demo', 'custom'] - vars: - _fail_msg: - common: Test of accessing API with TLS authentication failed. - custom: >- - It looks like you use custom certificates. - Please refer to 'Open Distro for Elasticsearch upgrade' section of How-To docs. - demo: >- - It looks like you use demo certificates but your configuration might be incorrect or unsupported. - fail_msg: "{{ _fail_msg.common }} {{ _fail_msg[es_api.cert_type] }}" - -- name: Generate self-signed certificates - include_role: - name: opendistro_for_elasticsearch - tasks_from: generate-certs - when: es_api.cert_type != 'custom' - -# Test API access (Epiphany certs) -- include_tasks: utils/assert-api-access.yml - when: es_api.cert_type == 'Epiphany' - vars: - fail_msg: >- - Test of accessing API with TLS authentication failed. - It looks like you use certificates generated by Epiphany but your configuration might be incorrect or an unexpected error occurred. 
- -# Save cluster health status before upgrade to file -- include_tasks: utils/save-initial-cluster-status.yml - -# Run migration procedure - the first (parallel) part for clustered installation -- include_tasks: migrate-from-demo-certs-01.yml - when: - - es_api.cert_type == 'demo' - - es_clustered # rolling upgrade only for clustered installation - -# Run migration procedure for non-clustered installation -- include_tasks: migrate-from-demo-certs-non-clustered.yml - when: - - es_api.cert_type == 'demo' - - not es_clustered - -# Next tasks are run in serial mode in the next play diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-02.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-02.yml deleted file mode 100644 index 237f34d4d2..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-elasticsearch-02.yml +++ /dev/null @@ -1,109 +0,0 @@ ---- -# This file contains flow that cannot be run in parallel on multiple hosts because of rolling upgrades. -# It's run after upgrade-elasticsearch-01.yml so some facts are already set. - -# Run migration procedure - the second (serial) part -- include_tasks: opendistro_for_elasticsearch/migrate-from-demo-certs-02.yml - when: - - es_api.cert_type == 'demo' - - es_clustered # rolling upgrade only for clustered installation - -- name: ODFE | Print API facts - debug: - var: es_api - tags: [ never, debug ] # only runs when debug or never tag requested - -- name: ODFE | Prepare cluster for rolling upgrade - include_tasks: opendistro_for_elasticsearch/utils/prepare-cluster-for-node-restart.yml - when: es_clustered - -- name: ODFE | Stop elasticsearch service - systemd: - name: elasticsearch - state: stopped - -- name: ODFE | Include Elasticsearch installation tasks - include_role: - name: opendistro_for_elasticsearch - tasks_from: install-es.yml - -- name: ODFE | Include Elasticsearch configuration tasks - include_role: - name: opendistro_for_elasticsearch - tasks_from: configure-es.yml - vars: - _old: "{{ existing_config.main }}" - # Keep the same data structure as for apply mode - specification: - jvm_options: "{{ existing_config.jvm_options }}" - cluster_name: "{{ _old['cluster.name'] }}" - clustered: "{{ 'True' if _old['discovery.seed_hosts'] | length > 1 else 'False' }}" - paths: - data: "{{ _old['path.data'] }}" - repo: "{{ _old['path.repo'] | default('/var/lib/elasticsearch-snapshots') }}" # absent in Epiphany v0.6 thus we use default - logs: "{{ _old['path.logs'] }}" - opendistro_security: - ssl: - transport: - enforce_hostname_verification: "{{ _old['opendistro_security.ssl.transport.enforce_hostname_verification'] }}" - - _demo_DNs: - admin: "{{ opendistro_for_elasticsearch.certs_migration.demo_DNs.admin }}" - node: "{{ opendistro_for_elasticsearch.certs_migration.demo_DNs.node }}" - _dual_root_ca_filename: "{{ opendistro_for_elasticsearch.certs_migration.dual_root_ca.filename }}" - _epiphany_root_ca_filename: "{{ certificates.files.root_ca.cert.filename }}" - _updated_existing_config: - opendistro_security.authcz.admin_dn: "{{ _old['opendistro_security.authcz.admin_dn'] | reject('search', _demo_DNs.admin) }}" - opendistro_security.nodes_dn: "{{ _old['opendistro_security.nodes_dn'] | default([]) | reject('search', _demo_DNs.node) }}" - opendistro_security.ssl.http.pemtrustedcas_filepath: >- - {{ _old['opendistro_security.ssl.http.pemtrustedcas_filepath'] | replace(_dual_root_ca_filename, _epiphany_root_ca_filename) }} - 
opendistro_security.ssl.transport.pemtrustedcas_filepath: >- - {{ _old['opendistro_security.ssl.transport.pemtrustedcas_filepath'] | replace(_dual_root_ca_filename, _epiphany_root_ca_filename) }} - - http.port: "{{ _old['http.port'] | default(odfe_defaults.ports.http) }}" - transport.port: "{{ _old['transport.port'] | default(odfe_defaults.ports.transport) }}" - - existing_es_config: "{{ _old | combine(_updated_existing_config) }}" - -- name: ODFE | Include upgrade plugins tasks - include_tasks: opendistro_for_elasticsearch/upgrade-plugins.yml - -# Restart elasticsearch service (unconditionally to ensure this task is not skipped in case of rerunning after interruption) -- include_tasks: opendistro_for_elasticsearch/utils/restart-node.yml - vars: - daemon_reload: true # opendistro-performance-analyzer provides opendistro-performance-analyzer.service - target_inventory_hostname: "{{ inventory_hostname }}" - -# Post-upgrade tasks - -- name: Re-enable shard allocation - when: es_clustered - block: - - include_tasks: opendistro_for_elasticsearch/utils/enable-shard-allocation.yml - - - include_tasks: opendistro_for_elasticsearch/utils/wait-for-shard-allocation.yml - -# Read cluster health status from before the upgrade -- name: Load upgrade state file - slurp: - src: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" - register: slurp_upgrade_state_file - -# Verify cluster status -- include_tasks: opendistro_for_elasticsearch/utils/wait-for-cluster-status.yml - when: not es_clustered or - (es_clustered and inventory_hostname == ansible_play_hosts_all[-1]) # for 'green' status at least 2 nodes must be already upgraded - vars: - initial_status: "{{ (slurp_upgrade_state_file.content | b64decode | from_json)['status'] }}" - expected_status: "{{ [ initial_status, 'green'] | unique }}" - -- name: ODFE | Remove dual root CA temporary file - file: - path: "{{ (certificates.dirs.certs, opendistro_for_elasticsearch.certs_migration.dual_root_ca.filename) | path_join }}" - state: absent - when: es_api.cert_type == 'Epiphany' - -- name: ODFE | Remove upgrade state file - file: - path: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" - state: absent diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-plugins.yml b/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-plugins.yml deleted file mode 100644 index 80e34e6382..0000000000 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/upgrade-plugins.yml +++ /dev/null @@ -1,18 +0,0 @@ ---- -- name: ODFE plugins | Assert that opendistro-* packages are installed - assert: - that: ansible_facts.packages['{{ item }}'] is defined - fail_msg: "Missing package to upgrade: {{ item }}" - quiet: true - loop: - - opendistro-alerting - - opendistro-index-management - - opendistro-job-scheduler - - opendistro-performance-analyzer - - opendistro-security - - opendistro-sql - -- name: ODFE plugins | Upgrade opendistro-* packages - include_role: - name: opendistro_for_elasticsearch - tasks_from: install-opendistro.yml diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch.yml new file mode 100644 index 0000000000..ecff2dcec3 --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch.yml @@ -0,0 +1,39 @@ +--- +- name: OpenSearch | Get information about installed packages as facts + package_facts: + manager: auto + when: ansible_facts.packages is undefined + +- name: OpenSearch | Run migration from ODFE + 
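# Run only when the legacy 'elasticsearch-oss' package is present, i.e. there is an existing ODFE installation to migrate. +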
when: + - ansible_facts.packages['elasticsearch-oss'] is defined + block: + - name: OpenSearch | Include defaults from OpenSearch role + include_vars: + name: opensearch_defaults + file: roles/opensearch/defaults/main.yml + + - name: OpenSearch | Include vars from OpenSearch role + include_vars: + name: opensearch_variables + file: roles/opensearch/vars/main.yml + + - name: OpenSearch | Run ODFE pre-migration tasks + include_role: + name: upgrade + tasks_from: opensearch/pre-migrate + + - name: OpenSearch | Run ODFE migration tasks + include_role: + name: upgrade + tasks_from: opensearch/migrate-odfe + + - name: OpenSearch | Run Kibana migration tasks + include_role: + name: upgrade + tasks_from: opensearch/migrate-kibana + + - name: OpenSearch | Cleanup + include_role: + name: upgrade + tasks_from: opensearch/cleanup diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch/cleanup.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/cleanup.yml new file mode 100644 index 0000000000..6401d689bc --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/cleanup.yml @@ -0,0 +1,24 @@ +--- +- name: OpenSearch | Get information about installed packages as facts + package_facts: + manager: auto + when: ansible_facts.packages is undefined + +- name: Remove Kibana package + when: ansible_facts.packages['kibana'] is defined + package: + name: kibana + state: absent + +- name: Remove Elasticsearch package + when: ansible_facts.packages['elasticsearch-oss'] is defined + package: + name: elasticsearch-oss + state: absent + +# All other ODFE plugins are removed as dependencies of the packages above +- name: Remove ODFE Kibana plugin + when: ansible_facts.packages['opendistroforelasticsearch-kibana'] is defined + package: + name: opendistroforelasticsearch-kibana + state: absent diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-kibana.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-kibana.yml new file mode 100644 index 0000000000..77f755d64d --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-kibana.yml @@ -0,0 +1,111 @@ +--- +- name: Kibana migration | Load defaults from OpenSearch Dashboards role + include_vars: + name: os_dashboards_defaults + file: roles/opensearch_dashboards/defaults/main.yml + +- name: Kibana migration | Load vars from OpenSearch Dashboards role + include_vars: + name: os_dashboards_variables + file: roles/opensearch_dashboards/vars/main.yml + +- name: Kibana migration | Download OpenSearch Dashboards binary + include_role: + name: download + tasks_from: download_file + vars: + file_name: >- + {{ os_dashboards_defaults.file_name_version.opensearch_dashboards[ansible_architecture] }} + +- name: Kibana migration | Create OpenSearch Dashboards OS group + group: + name: "{{ os_dashboards_variables.specification.dashboards_os_group }}" + state: present + +- name: Kibana migration | Create OpenSearch Dashboards OS user + user: + name: "{{ os_dashboards_variables.specification.dashboards_os_user }}" + state: present + shell: /bin/bash + group: "{{ os_dashboards_variables.specification.dashboards_os_group }}" + home: "{{ os_dashboards_variables.specification.paths.dashboards_home }}" + create_home: false + +- name: Kibana migration | Create OpenSearch Dashboards directories + file: + path: "{{ item }}" + state: directory + owner: "{{ os_dashboards_variables.specification.dashboards_os_user }}" + group: "{{ os_dashboards_variables.specification.dashboards_os_group }}" + mode: ug=rwx,o=rx + loop: + - "{{ 
os_dashboards_variables.specification.paths.dashboards_log_dir }}" + - "{{ os_dashboards_variables.specification.paths.dashboards_home }}" + +- name: Kibana migration | Extract the tar file + unarchive: + src: "{{ download_directory }}/{{ os_dashboards_defaults.file_name_version.opensearch_dashboards[ansible_architecture] }}" + dest: "{{ os_dashboards_variables.specification.paths.dashboards_home }}" + owner: "{{ os_dashboards_variables.specification.dashboards_os_user }}" + group: "{{ os_dashboards_variables.specification.dashboards_os_group }}" + remote_src: true + extra_opts: + - --strip-components=1 + +- name: Kibana migration | Clone Kibana settings + copy: + src: /etc/kibana/kibana.yml + dest: "{{ os_dashboards_variables.specification.paths.dashboards_conf_dir }}/opensearch_dashboards.yml" + remote_src: true + owner: "{{ os_dashboards_variables.specification.dashboards_os_user }}" + group: "{{ os_dashboards_variables.specification.dashboards_os_group }}" + mode: ug=rw,o= + +- name: Kibana migration | Port Kibana settings to OpenSearch Dashboards + replace: + path: "{{ os_dashboards_variables.specification.paths.dashboards_conf_dir }}/opensearch_dashboards.yml" + regexp: "{{ item.1 }}" + replace: "{{ item.2 }}" + loop: + - { 1: "elasticsearch", 2: "opensearch" } + - { 1: "/kibana", 2: "/opensearch-dashboards" } + - { 1: "opendistro_security", 2: "opensearch_security" } + # OpenSearch Dashboards reports the following 3 Kibana variables as unrecognized + - { 1: "newsfeed.enabled", 2: "#newsfeed.enabled" } + - { 1: "telemetry.optIn", 2: "#telemetry.optIn" } + - { 1: "telemetry.enabled", 2: "#telemetry.enabled" } + +- name: Kibana migration | Create OpenSearch Dashboards service + template: + src: roles/opensearch_dashboards/templates/opensearch-dashboards.service.j2 + dest: /etc/systemd/system/opensearch-dashboards.service + mode: u=rw,go=r + vars: + specification: "{{ os_dashboards_variables.specification }}" + +- name: Kibana migration | Stop Kibana service + systemd: + name: kibana + enabled: false + state: stopped + +- name: Kibana migration | Ensure OpenSearch Dashboards service is started + service: + name: opensearch-dashboards + state: started + enabled: true + +- name: Kibana migration | Get all the installed dashboards plugins + command: "{{ os_dashboards_variables.specification.paths.dashboards_plugin_bin_path }} list" + become: false # This command cannot be run as the root user + register: list_plugins + +- name: Kibana migration | Show all the installed dashboards plugins + debug: + msg: "{{ list_plugins.stdout }}" + +- name: Kibana migration | Prevent Filebeat API access problem # Workaround for https://github.com/opensearch-project/OpenSearch-Dashboards/issues/656 + replace: + path: /etc/filebeat/filebeat.yml + regexp: "setup.dashboards.enabled: true" + replace: "setup.dashboards.enabled: false" diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe-serial.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe-serial.yml new file mode 100644 index 0000000000..bceb94c888 --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe-serial.yml @@ -0,0 +1,114 @@ +--- +# The tasks below need to be run serially +- name: ODFE migration | Stop Elasticsearch service + systemd: + name: elasticsearch + enabled: false + state: stopped + register: elasticsearch_state + +- name: ODFE migration | Install OpenSearch binaries + include_tasks: roles/opensearch/tasks/install-opensearch.yml + vars: + specification: "{{ 
opensearch_variables.specification }}" + file_name_version: "{{ opensearch_defaults.file_name_version }}" + +- name: ODFE migration | Copy Elasticsearch directories to OpenSearch directories + copy: + src: "{{ item.1 }}" + dest: "{{ item.2 }}" + remote_src: true + owner: "{{ opensearch_variables.specification.opensearch_os_user }}" + group: "{{ opensearch_variables.specification.opensearch_os_group }}" + mode: u=rw,go=r + loop: + - { + 1: "/var/lib/elasticsearch-snapshots/", + 2: "{{ opensearch_variables.specification.paths.opensearch_snapshots_dir }}/", + } + - { + 1: "/var/lib/elasticsearch/", + 2: "{{ opensearch_variables.specification.paths.opensearch_data_dir }}", + } + +- name: ODFE migration | Prepare a list of Elasticsearch certs and keys + find: + paths: "/etc/elasticsearch/" + patterns: "*pem" + register: pem_files + +- name: ODFE migration | Copy a list of certs and keys to OpenSearch directories + copy: + src: "{{ item.path }}" + dest: "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}" + remote_src: true + with_items: "{{ pem_files.files }}" + +- name: ODFE migration | Clone JVM configuration file + copy: + src: /etc/elasticsearch/jvm.options + dest: "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}/jvm.options" + remote_src: true + owner: root + group: opensearch + mode: ug=rw,o= + backup: true + +- name: ODFE migration | Update JVM configuration file + replace: + path: "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}/jvm.options" + regexp: "{{ item.1 }}" + replace: "{{ item.2 }}" + loop: + - { 1: 'elasticsearch', 2: 'opensearch' } + - { 1: '\${ES_TMPDIR}', 2: '${OPENSEARCH_TMPDIR}' } + +- name: ODFE migration | Clone main configuration file + copy: + src: /etc/elasticsearch/elasticsearch.yml + dest: "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}/opensearch.yml" + remote_src: true + owner: root + group: opensearch + mode: ug=rw,o= + backup: true + +- name: ODFE migration | Update main configuration file + replace: + path: "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}/opensearch.yml" + regexp: "{{ item.1 }}" + replace: "{{ item.2 }}" + loop: + - { 1: "elasticsearch", 2: "opensearch" } + - { 1: "EpiphanyElastic", 2: "EpiphanyOpensearch" } + - { 1: "opendistro_security.", 2: "plugins.security." 
} + +- name: Set fact with batch_metrics_enabled.conf path + set_fact: + _batch_metrics_enabled: >- + /usr/share/elasticsearch/data/batch_metrics_enabled.conf + +- name: Check if batch_metrics_enabled.conf exists + stat: + path: "{{ _batch_metrics_enabled }}" + register: batch_metrics_enabled + +# TODO: make this configurable +- name: Create batch_metrics_enabled.conf + copy: + dest: "{{ _batch_metrics_enabled }}" + content: "false" + when: not batch_metrics_enabled.stat.exists + +- name: ODFE migration | Start OpenSearch service + systemd: + name: opensearch + state: started + enabled: true + register: restart_opensearch + +- name: ODFE migration | Wait for OpenSearch to be reachable + wait_for: + port: "{{ opensearch_defaults.ports.http }}" + host: "{{ ansible_default_ipv4.address | default(ansible_all_ipv4_addresses[0]) }}" + sleep: 6 diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe.yml new file mode 100644 index 0000000000..cb21e7f396 --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/migrate-odfe.yml @@ -0,0 +1,203 @@ +--- +- name: OpenSearch | Get information about installed packages as facts + package_facts: + manager: auto + when: ansible_facts.packages is undefined + +- name: OpenSearch | Print Elasticsearch and OpenSearch versions + debug: + msg: + - "Elasticsearch version currently installed: {{ ansible_facts.packages['elasticsearch-oss'][0].version }}" + - "OpenSearch version to be installed: {{ opensearch_defaults.file_name_version.opensearch[ansible_architecture].split('-')[1] }}" + +- name: ODFE migration | Ensure elasticsearch cluster is up and running + systemd: + name: elasticsearch + enabled: true + state: started + register: elasticsearch_state + +- name: ODFE migration | Set existing_config facts + include_tasks: opensearch/utils/get-config-from-files.yml + +- name: ODFE migration | Set common facts + set_fact: + es_host: "{{ existing_config.main['network.host'] | default('_local_') }}" + es_http_port: "{{ existing_config.main['http.port'] | default(opensearch_defaults.ports.http) }}" + es_transport_port: "{{ existing_config.main['transport.port'] | default(opensearch_defaults.ports.transport) }}" + es_clustered: "{{ (existing_config.main['discovery.seed_hosts'] | length > 1) | ternary(True, False) }}" + es_node_name: "{{ existing_config.main['node.name'] }}" + override_main_response_version_exist: + - "{{ existing_config.main['compatibility.override_main_response_version'] | default(false) }}" + +- name: ODFE migration | Prepare ODFE to OpenSearch migration + include_tasks: + file: opensearch/utils/prepare-cluster-for-node-restart.yml + apply: + delegate_to: "{{ target_inventory_hostname }}" + delegate_facts: true + loop: "{{ groups.logging | default([]) }}" + loop_control: + loop_var: target_inventory_hostname + vars: + es_api: + cert_type: Epiphany + cert_path: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + key_path: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + url: https://{{ es_host }}:{{ es_http_port }} + fail_msg: API access test failed + +- name: ODFE migration | Run core migration tasks individually on each node + include_tasks: + file: opensearch/migrate-odfe-serial.yml + apply: + delegate_to: "{{ target_hostname }}" + delegate_facts: true + loop: "{{ groups.logging | default([]) }}" + loop_control: + loop_var: target_hostname + run_once: true + +- name: ODFE migration | Check if default admin user 
exists + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers/admin" + method: GET + # 404 is also accepted since the admin user may have been removed manually. + status_code: [200, 404] + validate_certs: false + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + register: admin_check_response + until: admin_check_response is success + retries: 60 + delay: 1 + run_once: true + +- name: ODFE migration | Set OpenSearch admin password + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/admin" + value: + password: "{{ opensearch_variables.specification.admin_password }}" + reserved: "true" + backend_roles: + - "admin" + description: "Admin user" + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + body_format: json + validate_certs: false + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: admin_check_response.status == 200 + +- name: ODFE migration | Check if kibanaserver user exists + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers/kibanaserver" + method: GET + # 404 is also accepted since the kibanaserver user may have been removed manually. + status_code: [200, 404] + validate_certs: false + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + register: kibanaserver_check_response + until: kibanaserver_check_response is success + retries: 60 + delay: 1 + run_once: true + +- name: ODFE migration | Set kibanaserver user password + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/kibanaserver" + value: + password: "{{ opensearch_variables.specification.kibanaserver_password }}" + reserved: "true" + description: "kibanaserver user" + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + body_format: json + validate_certs: false + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: kibanaserver_check_response.status == 200 + +- name: ODFE migration | Check if logstash user exists + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers/logstash" + method: GET + # 404 is also accepted since the logstash user may have been removed manually. 
+ status_code: [200, 404] + validate_certs: false + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + register: logstash_check_response + until: logstash_check_response is success + retries: 60 + delay: 1 + run_once: true + +- name: ODFE migration | Set logstash user password + uri: + url: "https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_opendistro/_security/api/internalusers" + method: PATCH + status_code: [200] + body: + - op: "replace" + path: "/logstash" + value: + password: "{{ opensearch_variables.specification.logstash_password }}" + reserved: "true" + backend_roles: + - "logstash" + description: "Logstash user" + client_cert: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + client_key: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + body_format: json + validate_certs: false + register: uri_response + until: uri_response is success + retries: 5 + delay: 1 + run_once: true + when: logstash_check_response.status == 200 + +- name: ODFE migration | Check the OpenSearch status + command: curl https://{{ inventory_hostname }}:{{ opensearch_defaults.ports.http }}/_cluster/health?pretty -u 'admin:{{ opensearch_variables.specification.admin_password }}' -k + register: opensearch_status + +- name: ODFE migration | Show the OpenSearch status + debug: + msg: "{{ opensearch_status.stdout }}" + failed_when: "'number_of_nodes' not in opensearch_status.stdout" + +- name: ODFE migration | Re-enable shard allocation for the cluster + include_tasks: + file: opensearch/utils/enable-shard-allocation.yml + apply: + delegate_to: "{{ target_inventory_hostname }}" + delegate_facts: true + loop: "{{ ansible_play_hosts_all }}" + loop_control: + loop_var: target_inventory_hostname + vars: + es_api: + cert_type: Epiphany + cert_path: "{{ opensearch.upgrade_config.custom_admin_certificate.cert_path }}" + key_path: "{{ opensearch.upgrade_config.custom_admin_certificate.key_path }}" + url: https://{{ es_host }}:{{ es_http_port }} + fail_msg: API access test failed. 
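+# Example of a manual post-migration check (hypothetical host and password placeholders, +# assuming the default HTTP port), mirroring the status check task above: +#   curl -k -u 'admin:<admin_password>' https://<host>:9200/_cluster/health?pretty 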
diff --git a/ansible/playbooks/roles/upgrade/tasks/opensearch/pre-migrate.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/pre-migrate.yml new file mode 100644 index 0000000000..2f349f3cdf --- /dev/null +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/pre-migrate.yml @@ -0,0 +1,27 @@ +--- +- name: OpenSearch | Ensure OpenSearch service OS group exists + group: + name: "{{ opensearch_variables.specification.opensearch_os_group }}" + state: present + +- name: OpenSearch | Ensure OpenSearch service OS user exists + user: + name: "{{ opensearch_variables.specification.opensearch_os_user }}" + state: present + shell: /bin/bash + groups: "{{ opensearch_variables.specification.opensearch_os_group }}" + home: "{{ opensearch_variables.specification.paths.opensearch_home }}" + create_home: true + +- name: OpenSearch | Ensure directory structure exists + file: + path: "{{ item }}" + state: directory + owner: "{{ opensearch_variables.specification.opensearch_os_user }}" + group: "{{ opensearch_variables.specification.opensearch_os_group }}" + mode: u=rwx,go=rx + loop: + - "{{ opensearch_variables.specification.paths.opensearch_log_dir }}" + - "{{ opensearch_variables.specification.paths.opensearch_conf_dir }}" + - "{{ opensearch_variables.specification.paths.opensearch_data_dir }}" + - "{{ opensearch_defaults.certificates.dirs.certs }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-api-access.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-api-access.yml similarity index 85% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-api-access.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-api-access.yml index b9d36e1d9f..c99c75ad72 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-api-access.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-api-access.yml @@ -1,5 +1,5 @@ --- -- name: ODFE | Assert input parameters +- name: OpenSearch | Assert input parameters assert: that: - es_api.cert_path is defined @@ -13,7 +13,7 @@ # Sets 'test_api_access' - include_tasks: test-api-access.yml -- name: ODFE | Assert API access +- name: OpenSearch | Assert API access assert: that: test_api_access.status == 200 fail_msg: diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-cert-files-exist.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-cert-files-exist.yml similarity index 89% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-cert-files-exist.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-cert-files-exist.yml index a4ad4f4f60..8166ad52af 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/assert-cert-files-exist.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/assert-cert-files-exist.yml @@ -1,5 +1,5 @@ --- -- name: ODFE | Assert input parameters +- name: OpenSearch | Assert input parameters assert: that: - es_api.cert_path is defined @@ -8,7 +8,7 @@ - es_api.key_path is defined quiet: true -- name: ODFE | Get info on files +- name: OpenSearch | Get info on files stat: path: "{{ item }}" get_attributes: false @@ -20,7 +20,7 @@ - "{{ es_api.key_path }}" # Specific case for custom certificates (we don't know the paths so they have to be specified manually) -- name: ODFE | Assert files exist +- name: OpenSearch | Assert files exist 
assert: that: stat_result.stat.exists fail_msg: "{{ _custom_cert_fail_msg if (es_api.cert_type == 'custom') else _common_fail_msg }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/create-dual-cert-file.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/create-dual-cert-file.yml similarity index 68% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/create-dual-cert-file.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/create-dual-cert-file.yml index 01946b94f6..316078d694 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/create-dual-cert-file.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/create-dual-cert-file.yml @@ -3,16 +3,16 @@ # - certs_to_concatenate # - target_path -- name: ODFE | Read certificates to concatenate +- name: OpenSearch | Read certificates to concatenate slurp: src: "{{ item }}" register: _files loop: "{{ certs_to_concatenate }}" -- name: ODFE | Create dual root CA transitional file for migration +- name: OpenSearch | Create dual root CA transitional file for migration copy: dest: "{{ target_path }}" content: "{{ _files.results | map(attribute='content') | map('b64decode') | join('') }}" mode: u=rw,g=r,o= owner: root - group: elasticsearch + group: opensearch diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/enable-shard-allocation.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/enable-shard-allocation.yml similarity index 88% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/enable-shard-allocation.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/enable-shard-allocation.yml index 8394d69fa2..4978f10a5a 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/enable-shard-allocation.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/enable-shard-allocation.yml @@ -4,7 +4,7 @@ # - es_api.cert_path # - es_api.key_path -- name: ODFE | Enable shard allocation for the cluster +- name: OpenSearch | Enable shard allocation for the cluster uri: url: "{{ es_api.url }}/_cluster/settings" method: PUT diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-cluster-health.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-cluster-health.yml similarity index 89% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-cluster-health.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-cluster-health.yml index 9c0079f468..fae3164ded 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-cluster-health.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-cluster-health.yml @@ -4,7 +4,7 @@ # - es_api.cert_path # - es_api.key_path -- name: ODFE | Get cluster health +- name: OpenSearch | Get cluster health uri: url: "{{ es_api.url }}/_cluster/health" method: GET diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-config-from-files.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-config-from-files.yml similarity index 69% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-config-from-files.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-config-from-files.yml index 814087368c..8678908038 100644 --- 
a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/get-config-from-files.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/get-config-from-files.yml @@ -1,17 +1,17 @@ --- # Sets facts on existing configuration -- name: ODFE | Load /etc/elasticsearch/elasticsearch.yml +- name: OpenSearch | Load /etc/elasticsearch/elasticsearch.yml slurp: src: /etc/elasticsearch/elasticsearch.yml register: _elasticsearch_yml -- name: ODFE | Get Xmx value from /etc/elasticsearch/jvm.options +- name: OpenSearch | Get Xmx value from /etc/elasticsearch/jvm.options command: grep -oP '(?<=^-Xmx)\d+[kKmMgG]?' /etc/elasticsearch/jvm.options register: _grep_xmx changed_when: false -- name: ODFE | Set existing configuration facts +- name: OpenSearch | Set existing configuration facts set_fact: existing_config: main: "{{ _elasticsearch_yml.content | b64decode | from_yaml }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/prepare-cluster-for-node-restart.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/prepare-cluster-for-node-restart.yml similarity index 89% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/prepare-cluster-for-node-restart.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/prepare-cluster-for-node-restart.yml index 34bebc59cb..514c282258 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/prepare-cluster-for-node-restart.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/prepare-cluster-for-node-restart.yml @@ -11,12 +11,12 @@ module_defaults: uri: client_cert: "{{ es_api.cert_path }}" - client_key: "{{ es_api.key_path }}" + client_key: "{{ es_api.key_path }}" validate_certs: false body_format: json block: # It's safe to run this task many times regardless of the state - - name: ODFE | Disable shard allocation for the cluster + - name: OpenSearch | Disable shard allocation for the cluster uri: url: "{{ es_api.url }}/_cluster/settings" method: PUT @@ -35,7 +35,7 @@ # In epicli 0.7.x there is ES 7.3.2 but this step is optional. 
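# For orientation: the renamed utils in this role implement the usual rolling-restart
# sequence for an Elasticsearch/OpenSearch cluster. prepare-cluster-for-node-restart.yml
# disables shard allocation and attempts a flush, restart-node.yml restarts the service
# and waits for the transport/http ports, wait-for-node-to-join.yml polls _cat/nodes,
# and enable-shard-allocation.yml with wait-for-shard-allocation.yml revert the setting
# and wait for the cluster to recover. Below is a minimal sketch of the settings call
# behind the "Disable shard allocation" task above, assuming the commonly used
# 'primaries' value (the actual request body is outside this hunk; client_cert,
# client_key, validate_certs and body_format are inherited from the module_defaults
# defined earlier in this file):
#
#   - name: OpenSearch | Disable shard allocation for the cluster
#     uri:
#       url: "{{ es_api.url }}/_cluster/settings"
#       method: PUT
#       body:
#         persistent:
#           cluster.routing.allocation.enable: primaries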
- name: Handle flush failure block: - - name: ODFE | Perform a synced flush (optional step) + - name: OpenSearch | Perform a synced flush (optional step) uri: url: "{{ es_api.url }}/_flush" method: POST @@ -46,7 +46,7 @@ retries: 120 delay: 1 rescue: - - name: ODFE | Print warning + - name: OpenSearch | Print warning debug: msg: - "WARNING: flush command failed" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/restart-node.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/restart-node.yml similarity index 74% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/restart-node.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/restart-node.yml index c6348f7ee9..ee5c496756 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/restart-node.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/restart-node.yml @@ -10,18 +10,18 @@ # - daemon_reload # - skip_waiting_for_status -- name: ODFE | Restart elasticsearch service +- name: OpenSearch | Restart elasticsearch service systemd: - name: elasticsearch + name: opensearch state: restarted daemon_reload: "{{ daemon_reload | default(omit) }}" -- name: ODFE | Wait for Elasticsearch transport port to become available +- name: OpenSearch | Wait for Elasticsearch transport port to become available wait_for: port: "{{ es_transport_port }}" host: "{{ hostvars[target_inventory_hostname].es_host }}" -- name: ODFE | Wait for Elasticsearch http port to become available +- name: OpenSearch | Wait for Elasticsearch http port to become available wait_for: port: "{{ es_http_port }}" host: "{{ hostvars[target_inventory_hostname].es_host }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/save-initial-cluster-status.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/save-initial-cluster-status.yml similarity index 58% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/save-initial-cluster-status.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/save-initial-cluster-status.yml index 9050c7799a..cd6253396c 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/save-initial-cluster-status.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/save-initial-cluster-status.yml @@ -1,7 +1,7 @@ --- -- name: ODFE | Get size of upgrade state file +- name: OpenSearch | Get size of upgrade state file stat: - path: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" + path: "{{ opensearch.upgrade_state_file_path }}" get_attributes: false get_checksum: false get_mime: false @@ -12,7 +12,7 @@ block: - include_tasks: get-cluster-health.yml - - name: ODFE | Save cluster health to upgrade state file + - name: OpenSearch | Save cluster health to upgrade state file copy: content: "{{ cluster_health.json }}" - dest: "{{ opendistro_for_elasticsearch.upgrade_state_file_path }}" + dest: "{{ opensearch.upgrade_state_file_path }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/test-api-access.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/test-api-access.yml similarity index 83% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/test-api-access.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/test-api-access.yml index 8d8495e525..cb8e49d961 100644 --- 
a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/test-api-access.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/test-api-access.yml @@ -5,7 +5,7 @@ # - es_api.key_path # - es_api.url -- name: ODFE | Test API access using {{ es_api.cert_type }} certificate +- name: OpenSearch | Test API access using {{ es_api.cert_type }} certificate uri: client_cert: "{{ es_api.cert_path }}" client_key: "{{ es_api.key_path }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-cluster-status.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-cluster-status.yml similarity index 93% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-cluster-status.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-cluster-status.yml index 496198a4a0..78615ea41c 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-cluster-status.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-cluster-status.yml @@ -5,7 +5,7 @@ # - es_api.key_path # - expected_status (type: list, e.g. [ 'green', 'yellow' ]) -- name: ODFE | Wait for '{{ expected_status | join("' or '") }}' cluster health status +- name: OpenSearch | Wait for '{{ expected_status | join("' or '") }}' cluster health status uri: url: "{{ es_api.url }}/_cluster/health" client_cert: "{{ es_api.cert_path }}" diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-node-to-join.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-node-to-join.yml similarity index 88% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-node-to-join.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-node-to-join.yml index fcb039654c..82bf3ef35c 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-node-to-join.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-node-to-join.yml @@ -6,7 +6,7 @@ # - target_inventory_hostname # - hostvars[target_inventory_hostname].es_node_name -- name: ODFE | Wait for Elasticsearch node to join the cluster +- name: OpenSearch | Wait for Elasticsearch node to join the cluster uri: url: "{{ es_api.url }}/_cat/nodes?h=name" method: GET diff --git a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-shard-allocation.yml b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-shard-allocation.yml similarity index 95% rename from ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-shard-allocation.yml rename to ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-shard-allocation.yml index 0175d1b2d5..2517d57286 100644 --- a/ansible/playbooks/roles/upgrade/tasks/opendistro_for_elasticsearch/utils/wait-for-shard-allocation.yml +++ b/ansible/playbooks/roles/upgrade/tasks/opensearch/utils/wait-for-shard-allocation.yml @@ -4,7 +4,7 @@ # - es_api.cert_path # - es_api.key_path -- name: ODFE | Wait for the cluster to finish shard allocation +- name: OpenSearch | Wait for the cluster to finish shard allocation uri: url: "{{ es_api.url }}/_cluster/health" method: GET diff --git a/ansible/playbooks/rook.yml b/ansible/playbooks/rook.yml index 95b75a8a0f..3765916e0f 100644 --- a/ansible/playbooks/rook.yml +++ b/ansible/playbooks/rook.yml @@ -4,7 +4,9 @@ gather_facts: true tasks: [] -- 
hosts: kubernetes_master[0] +# rook is not supported when k8s_as_cloud_service == True +- hosts: rook + run_once: true become: true become_method: sudo roles: diff --git a/ansible/playbooks/upgrade.yml b/ansible/playbooks/upgrade.yml index 4823d706e6..907a14e296 100644 --- a/ansible/playbooks/upgrade.yml +++ b/ansible/playbooks/upgrade.yml @@ -136,77 +136,32 @@ environment: KUBECONFIG: "{{ kubeconfig.local }}" +# Currently, the upgrade of opensearch/logging instances is disabled # === logging === -# Some pre-upgrade tasks can be run in parallel (what saves time) while others must be run in serial (to support rolling upgrades). -# Such a separation in Ansible can be applied only at play level thus we have two plays below. - -# play 1/2: pre-upgrade parallel tasks -- hosts: logging - become: true - become_method: sudo - tasks: - - include_role: - name: upgrade - tasks_from: opendistro_for_elasticsearch-01 - when: "'logging' in upgrade_components or upgrade_components|length == 0" - vars: - current_group_name: logging - -# play 2/2: serial tasks -- hosts: logging - become: true - become_method: sudo - gather_facts: false # gathered by previous play - serial: 1 - tasks: - - include_role: - name: upgrade - tasks_from: opendistro_for_elasticsearch-02 - when: "'logging' in upgrade_components or upgrade_components|length == 0" - vars: - current_group_name: logging - -# === opendistro_for_elasticsearch === - -# Some pre-upgrade tasks can be run in parallel (what saves time) while others must be run in serial (to support rolling upgrades). -# Such a separation in Ansible can be applied only at play level thus we have two plays below. - -# play 1/2: parallel tasks -- hosts: opendistro_for_elasticsearch - become: true - become_method: sudo - tasks: - - include_role: - name: upgrade - tasks_from: opendistro_for_elasticsearch-01 - when: "'opendistro_for_elasticsearch' in upgrade_components or upgrade_components|length == 0" - vars: - current_group_name: opendistro_for_elasticsearch - -# play 2/2: serial tasks -- hosts: opendistro_for_elasticsearch - become: true - become_method: sudo - gather_facts: false # gathered by previous play - serial: 1 - tasks: - - include_role: - name: upgrade - tasks_from: opendistro_for_elasticsearch-02 - when: "'opendistro_for_elasticsearch' in upgrade_components or upgrade_components|length == 0" - vars: - current_group_name: opendistro_for_elasticsearch - -- hosts: kibana - become: true - become_method: sudo - serial: 1 - tasks: - - import_role: - name: upgrade - tasks_from: kibana - when: "'kibana' in upgrade_components or upgrade_components|length == 0" +# - hosts: logging +# become: true +# become_method: sudo +# tasks: +# - include_role: +# name: upgrade +# tasks_from: opensearch +# when: "'logging' in upgrade_components or upgrade_components|length == 0" +# vars: +# current_group_name: logging + +# === opensearch === + +# - hosts: opensearch +# become: true +# become_method: sudo +# tasks: +# - include_role: +# name: upgrade +# tasks_from: opensearch +# when: "'opensearch' in upgrade_components or upgrade_components|length == 0" +# vars: +# current_group_name: opensearch - hosts: grafana become: true @@ -316,9 +271,7 @@ - include_role: name: postgresql tasks_from: upgrade/nodes/common/ensure-ansible-requirements - when: - - ansible_os_family == 'Debian' - - "'postgresql' in upgrade_components or upgrade_components|length == 0" + when: "'postgresql' in upgrade_components or upgrade_components|length == 0" # step 2: upgrade repmgr - include_role: diff --git 
a/ci/ansible/playbooks/os/rhel/upgrade-release.yml b/ci/ansible/playbooks/os/rhel/upgrade-release.yml index 27bc725f42..03bbe4522d 100644 --- a/ci/ansible/playbooks/os/rhel/upgrade-release.yml +++ b/ci/ansible/playbooks/os/rhel/upgrade-release.yml @@ -2,19 +2,23 @@ # Based on https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/upgrading_from_rhel_7_to_rhel_8/index # and partially on https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/redhat/redhat-in-place-upgrade -# This play requires a Leapp metadata archive from the Red Hat portal which cannot be shared publicly. -# Local path to this archive must be provided via 'leapp_archive' variable, for example: +# Requirements: +# - Leapp metadata archive from the Red Hat portal, which cannot be shared publicly. +# - Epiphany manifest with credentials (for 'aws' provider only). +# - System attached to RHUI repositories or Red Hat subscription (for 'non_cloud' provider only). -# ansible-playbook -e leapp_archive=/absolute/path/leapp-data15.tar.gz +# Usage: +# ansible-playbook -e leapp_archive=/absolute/path/leapp-data16.tar.gz -e epiphany_manifest=/shared/build/aws/manifest.yml -# Requirements: -# - System attached to RHUI repositories or Red Hat subscription ('non_cloud' provider) +# Note: +# On AWS, this playbook: +# - creates or overwrites the local '/root/.aws/credentials' file (backing up any existing one) +# - suspends the ReplaceUnhealthy process for auto scaling groups +# - disables auto-recovery for all instances # Limitations: # - Ansible connection as root is not supported (PermitRootLogin) -# TODO: -# Fix issue when running on AWS on repository host deployed with epicli 1.0.2 (instance auto-recovery) - name: In-place RHEL release upgrade hosts: "{{ target | default('all') }}" @@ -22,7 +26,7 @@ vars: versions: required: {major: '7', full: '7.9'} # minimal version from which upgrade is supported - target: {major: '8', full: '8.5'} + target: {major: '8', full: '8.4'} leapp_dependencies: packages: @@ -112,6 +116,15 @@ debug: var: provider + - name: Update repository certificates + when: provider == "azure" + yum: + enablerepo: rhui-microsoft-azure-rhel7 + disablerepo: "*" + state: latest + update_only: true + name: "*" + - name: Register SELinux state set_fact: pre_upgrade_selinux_facts: "{{ ansible_facts.selinux }}" @@ -121,23 +134,198 @@ register: pre_upgrade_enabled_repositories changed_when: false - - name: Ensure repositories that provide leapp utility are enabled - ini_file: - path: "{{ item.repo_file }}" - section: "{{ item.name }}" - option: enabled - value: 1 - mode: u=rw,go=r - loop: "{{ leapp_dependencies.repos[provider] }}" + # Disable legacy containerd plugin to avoid an error (modprobe: FATAL: Module aufs not found) - - name: Update repository certificates - when: provider == "azure" - yum: - enablerepo: rhui-microsoft-azure-rhel7 - disablerepo: "*" - state: latest - update_only: true - name: "*" + - name: Check if /etc/containerd/config.toml file exists + stat: + path: /etc/containerd/config.toml + get_attributes: false + get_checksum: false + get_mime: false + register: stat_containerd_config + + - name: Disable aufs plugin + when: stat_containerd_config.stat.exists + block: + - name: Get disabled_plugins + command: grep -oPz '(?s)^disabled_plugins\s*=\s*\[.*?\]' /etc/containerd/config.toml # TOML allows line breaks inside arrays + changed_when: false + register: grep_disabled_plugins + failed_when: grep_disabled_plugins.rc > 1 + + - name: Set plugins to be disabled + set_fact: + plugins_to_disable: "{{ 
_disabled_plugins | union(['aufs']) }}" + vars: + _disabled_plugins: >- + {{ (grep_disabled_plugins.stdout | regex_replace('\s*=', ':') | from_yaml).disabled_plugins | default('[]') }} + + - name: Disable aufs plugin (update array) + replace: # handles multi-line array + path: /etc/containerd/config.toml + regexp: ^disabled_plugins\s*=\s*\[[^\]]*?\] + replace: disabled_plugins = {{ plugins_to_disable | string }} + backup: true + register: update_containerd_config_option + + - name: Disable aufs plugin (add array) + lineinfile: + path: /etc/containerd/config.toml + line: disabled_plugins = {{ plugins_to_disable | string }} + backup: true + register: add_containerd_config_option + when: grep_disabled_plugins.rc == 1 + + - name: Restart containerd service + systemd: + name: containerd.service + state: restarted + when: update_containerd_config_option.changed + or add_containerd_config_option.changed + + # AWS: Disable instance auto-recovery + + - name: Suspend ReplaceUnhealthy process for auto scaling groups and disable auto-recovery + when: provider == 'aws' + run_once: true + delegate_to: localhost + block: + - name: Load Epiphany manifest + slurp: + src: "{{ epiphany_manifest }}" + register: slurp_epiphany_manifest + + - name: Set cloud properties + vars: + _cluster_doc: >- + {{ slurp_epiphany_manifest['content'] | b64decode | from_yaml_all + | selectattr('kind', '==', 'epiphany-cluster') + | first }} + block: + - name: Set cloud facts + set_fact: + aws_config_dir: "{{ '~root' | expanduser }}/.aws" + aws_region: "{{ _cluster_doc.specification.cloud.region }}" + cluster_name: "{{ _cluster_doc.specification.name }}" + cluster_full_name: "{{ _cluster_doc.specification.prefix }}-{{ _cluster_doc.specification.name }}" + + - name: Create AWS configuration directory + file: + path: "{{ aws_config_dir }}" + state: directory + mode: u=rwx,go=rx + + - name: Check if AWS credentials file exists + stat: + path: "{{ aws_config_dir }}/{{ item }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_aws_credentials_file + loop: + - credentials + - credentials.rhel-7-upgrade.bak + + - name: Back up AWS credentials file + when: + - stat_aws_credentials_file.results[0].stat.exists + - not stat_aws_credentials_file.results[1].stat.exists + copy: + src: "{{ aws_config_dir }}/credentials" + dest: "{{ aws_config_dir }}/credentials.rhel-7-upgrade.bak" + remote_src: true + mode: preserve + no_log: true + + - name: Create AWS credentials file + copy: + dest: "{{ aws_config_dir }}/credentials" + content: | + [default] + aws_access_key_id = {{ _cluster_doc.specification.cloud.credentials.key }} + aws_secret_access_key = {{ _cluster_doc.specification.cloud.credentials.secret }} + mode: u=rw,go= + no_log: true + + - name: Find auto scaling groups + community.aws.ec2_asg_info: + name: "{{ cluster_full_name }}" + region: "{{ aws_region }}" + register: cluster_asgs + + - name: Reconfigure ASGs to suspend HealthCheck and ReplaceUnhealthy processes + when: cluster_asgs.results | count > 0 + block: + - name: Set facts on ASGs + set_fact: + asg_facts: "{{ cluster_asgs.results | json_query(_query) }}" + vars: + _query: '[].{auto_scaling_group_name: auto_scaling_group_name, instances: instances, suspended_processes: suspended_processes}' + + - name: Set path to file with original configuration of ASGs + set_fact: + asg_config_file_path: "{{ playbook_dir }}/{{ cluster_full_name }}-asg-config.yml" + + - name: Check if backup of original configuration of ASGs exists + stat: + path: "{{ 
asg_config_file_path }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_asg_config_yml + + - name: Back up configuration of auto scaling groups + when: not stat_asg_config_yml.stat.exists + become: false + copy: + dest: "{{ asg_config_file_path }}" + mode: u=rw,g=r,o= + content: | + # This file is managed by Ansible and is needed to restore original configuration. DO NOT EDIT. + {{ asg_facts | to_nice_yaml(indent=2) }} + + - name: Suspend HealthCheck and ReplaceUnhealthy processes + community.aws.ec2_asg: + name: "{{ item.auto_scaling_group_name }}" + suspend_processes: "{{ item.suspended_processes | union(['HealthCheck', 'ReplaceUnhealthy']) }}" + region: "{{ aws_region }}" + loop_control: + label: "{{ item.auto_scaling_group_name }}" + loop: >- + {{ cluster_asgs.results }} + + # Ansible modules don't support `ec2 modify-instance-maintenance-options` command so we use AWS cli + - name: Ensure pip3 + block: + - name: Check if pip3 is present + command: pip3 --version + register: check_pip3 + changed_when: false + failed_when: false + + - name: Install pip3 + command: python3 -m ensurepip + when: check_pip3.rc != 0 + + - name: Install AWS cli + pip: + name: awscli + register: install_awscli + + - name: Find cluster instances + community.aws.ec2_instance_info: + filters: + "tag:cluster_name": "{{ cluster_name }}" + instance-state-name: ['running'] + region: "{{ aws_region }}" + register: cluster_instances + + - name: Disable auto-recovery for all instances + command: >- + aws ec2 modify-instance-maintenance-options + --instance-id {{ item }} --auto-recovery disabled --region {{ aws_region }} + loop: >- + {{ cluster_instances.instances | map(attribute='instance_id') }} - &UPDATE_ALL_PACKAGES name: Update all packages in current major version @@ -146,8 +334,14 @@ name: "*" state: latest # noqa: package-latest - - &REBOOT_SYSTEM_AFTER_UPDATE - name: Reboot system after update # to load kernel from latest minor version if any + - name: Check if reboot is needed + command: needs-restarting --reboothint # exit code 1 means reboot is required + changed_when: false + register: needs_restarting + failed_when: needs_restarting.rc > 1 + + - name: Reboot system after update # to load kernel from latest minor version if any + when: needs_restarting.rc == 1 reboot: msg: Reboot initiated by Ansible due to update connect_timeout: "{{ reboot.connect_timeout }}" @@ -166,20 +360,34 @@ that: ansible_distribution_version is version(versions.required.full, '>=') quiet: true + - name: Get information on installed packages + package_facts: + manager: rpm + + - name: Ensure repositories that provide leapp utility are enabled + community.general.ini_file: + path: "{{ item.repo_file }}" + section: "{{ item.name }}" + option: enabled + value: 1 + mode: u=rw,go=r + no_extra_spaces: true + loop: "{{ leapp_dependencies.repos[provider] }}" + - name: Install packages that provide the leapp utility - package: + yum: name: "{{ leapp_dependencies.packages[provider] }}" state: present - name: Copy leapp metadata archive copy: src: "{{ leapp_archive }}" - dest: /etc/leapp/files/leapp-data15.tar.gz + dest: /etc/leapp/files/{{ leapp_archive | basename }} mode: preserve - name: Unarchive leapp metadata unarchive: - src: /etc/leapp/files/leapp-data15.tar.gz + src: /etc/leapp/files/{{ leapp_archive | basename }} dest: /etc/leapp/files/ remote_src: true @@ -245,13 +453,37 @@ name: "{{ installed_kernel_devel_packages[:-1] }}" # keep the last item state: absent + - name: Set PostgreSQL version + 
set_fact: + postgresql_version: >- + {{ '10' if ansible_facts.packages['postgresql10-server'] is defined else + '13' if ansible_facts.packages['postgresql13-server'] is defined else + 'null' }} + + - name: Add PostgreSQL repository + when: postgresql_version != 'null' + yum_repository: + name: pgdg{{ postgresql_version }} + file: pgdg{{ postgresql_version }}-rhel-{{ versions.target.full }} + description: PostgreSQL {{ postgresql_version }} for RHEL/CentOS {{ versions.target.full }} - $basearch + baseurl: https://download.postgresql.org/pub/repos/yum/{{ postgresql_version }}/redhat/rhel-{{ versions.target.full }}-$basearch + gpgkey: https://download.postgresql.org/pub/repos/yum/RPM-GPG-KEY-PGDG + enabled: false + gpgcheck: true + module_hotfixes: true + exclude: # prevent auto-upgrade from 4.0.6-1.el7 + - repmgr10 + - repmgr_10 + ### UPGRADE ### - name: Provide leapp answer about pam_pkcs11_module removal command: leapp answer --add --section remove_pam_pkcs11_module_check.confirm=True - name: Start leapp upgrade - command: leapp upgrade {{ '--no-rhsm' if provider != 'non_cloud' }} + command: >- + leapp upgrade --target {{ versions.target.full }} {{ '--no-rhsm' if provider != 'non_cloud' }} + {{ '--enablerepo pgdg' ~ postgresql_version if postgresql_version != 'null' }} - name: Reboot system to complete leapp upgrade procedure reboot: @@ -278,11 +510,11 @@ - *REFRESH_ANSIBLE_FACTS - - name: Assert major version + - name: Assert target version assert: that: - - ansible_distribution_version is version('8.4','=') - - ansible_kernel is version('4.18.0-305','>=') + - ansible_distribution_version is version(versions.target.full, '=') + - ansible_kernel is version('4.18.0-305', '>=') quiet: true - name: Verify subscription status for non_cloud machines @@ -299,19 +531,20 @@ - name: Check that upgraded version remains correctly subscribed assert: that: - - subscription_version.stdout == "8.4" + - subscription_version.stdout == versions.target.full - subscription_status.stdout == "Subscribed" quiet: true ### POST-UPGRADE -- CLEANUP ### - name: Remove packages from the dnf exclude list # populated by leapp during upgrade - ini_file: + community.general.ini_file: path: /etc/dnf/dnf.conf section: main option: exclude value: '' mode: u=rw,go=r + no_extra_spaces: true ## Remove Leapp @@ -325,6 +558,13 @@ path: /etc/leapp state: absent + - name: Remove PostgreSQL repository + when: postgresql_version != 'null' + yum_repository: + name: pgdg{{ postgresql_version }} + file: pgdg{{ postgresql_version }}-rhel-{{ versions.target.full }} + state: absent + ## Remove RHEL 7 packages - name: Remove remaining RHEL 7 packages @@ -378,22 +618,69 @@ ## Fix failed services - - name: Azure specific block - when: provider == 'azure' - block: - - name: Gather service facts - service_facts: ~ + - name: Gather service facts + service_facts: ~ - - &SET_FAILED_SERVICES_FACT - name: Set list of failed services - set_fact: - failed_services: "{{ ansible_facts.services | json_query('*[] | [?(@.status==`failed`)].name') }}" + - &SET_FAILED_SERVICES_FACT + name: Set list of failed services + set_fact: + failed_services: "{{ ansible_facts.services | json_query('*[] | [?(@.status==`failed`)].name') }}" - - name: Print failed services - when: failed_services | count > 0 - debug: - var: failed_services + - name: Print failed services + when: failed_services | count > 0 + debug: + var: failed_services + - name: Fix repmgr service + when: failed_services | select('match', 'repmgr[0-9]{2}\.service') + block: + - name: Set repmgr service 
name + set_fact: + repmgr_service_name: >- + {{ failed_services | select('match', 'repmgr[0-9]{2}\.service') | first }} + + # upstream node must be running before repmgrd can start + - name: Search for PostgreSQL primary node + become_user: postgres + # command prints primary/standby + shell: |- + set -o pipefail && \ + repmgr node status | grep -ioP '(?<=Role:).+' | xargs + changed_when: false + register: pg_node_role + failed_when: pg_node_role.rc != 0 or pg_node_role.stdout == "" + + - name: Wait for PostgreSQL primary node to be reachable + when: pg_node_role.stdout == 'primary' + wait_for: + port: 5432 + timeout: 30 + + - name: Restart repmgr service + when: pg_node_role.stdout == 'standby' + systemd: + name: "{{ repmgr_service_name }}" + state: restarted + + - name: Fix filebeat service + when: "'filebeat.service' in failed_services" + block: + - name: Wait for Kibana port + when: groups.kibana[0] is defined + delegate_to: "{{ groups.kibana[0] }}" + wait_for: + host: "{{ hostvars[groups.kibana.0].ansible_default_ipv4.address }}" + port: 5601 + timeout: 30 + + - name: Restart filebeat service + systemd: + name: filebeat + state: restarted + + - name: Azure specific block + when: provider == 'azure' + block: - name: Fix cloud-init.service when: "'cloud-init.service' in failed_services" block: @@ -433,6 +720,12 @@ systemd: name: cloud-init state: restarted + # On K8s master with Calico CNI plugin there is error in first attempt: + # duplicate mac found! both 'cali770930d50fa' and 'cali67622b483b3' have mac 'ee:ee:ee:ee:ee:ee' + register: restart_cloud_init + until: restart_cloud_init is succeeded + retries: 1 + delay: 1 - name: Restore cloud-init config file when: cloud_init_cfg_ssh_deletekeys.changed @@ -483,7 +776,8 @@ ## Verify services - - name: Gather service facts + - name: Refresh service facts + when: failed_services | count > 0 service_facts: ~ - *SET_FAILED_SERVICES_FACT @@ -524,36 +818,87 @@ register: result failed_when: result.rc > 1 # May return code 1 even when correctly subscribed if system purpose is not defined - ### POST-UPGRADE -- UPDATE TO TARGET VERSION ### + # download-requirements.py fails if releasever = 8.4 (2ndQuadrant repo) + - name: Remove releasever DNF variable + file: + path: /etc/dnf/vars/releasever # file created by upgrade + state: absent - - name: Update release to 8.5 + # AWS: Resume HealthCheck process + + - name: Resume HealthCheck process for auto scaling groups + when: provider == 'aws' + run_once: true + delegate_to: localhost block: - - name: Determine if 8.5 is the latest version - command: dnf --releasever 8.5 list kernel # will fail when 8.5 is not the latest minor release - register: is_8_5_latest_version_available - changed_when: false - failed_when: is_8_5_latest_version_available.rc > 1 + - name: Check if file with original configuration of ASGs exists + stat: + path: "{{ asg_config_file_path }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_asg_config_yml + + - name: Restore original configuration except for ReplaceUnhealthy process + when: stat_asg_config_yml.stat.exists + block: + - name: Load original configuration from backup + slurp: + src: "{{ asg_config_file_path }}" + register: slurp_asg_config_yml + + - name: Set ASG settings to restore + set_fact: + asgs_to_restore: "{{ slurp_asg_config_yml['content'] | b64decode | from_yaml }}" + + - name: Resume HealthCheck process + community.aws.ec2_asg: + name: "{{ item.auto_scaling_group_name }}" + suspend_processes: "{{ item.suspended_processes | 
union(['ReplaceUnhealthy']) }}" + region: "{{ aws_region }}" + loop_control: + label: "{{ item.auto_scaling_group_name }}" + loop: "{{ asgs_to_restore }}" + + - name: Remove backup of original configuration of ASGs + file: + path: "{{ asg_config_file_path }}" + state: absent - - name: Unlock release to latest # mirrors will not point to 8.5 when it is the latest - when: is_8_5_latest_version_available.rc == 1 + - name: Remove AWS credentials file file: - path: /etc/dnf/vars/releasever + path: "{{ aws_config_dir }}/credentials" state: absent - - name: Set release to 8.5 - when: is_8_5_latest_version_available.rc == 0 # 8.5 is not the latest - copy: - content: '8.5' - dest: /etc/dnf/vars/releasever - mode: u=rw,go=r - - - *UPDATE_ALL_PACKAGES - - - *REBOOT_SYSTEM_AFTER_UPDATE # 8.5 brings a new kernel update + - name: Restore AWS credentials file + vars: + _backup_path: "{{ aws_config_dir }}/credentials.rhel-7-upgrade.bak" + block: + - name: Check if backup of AWS credentials file exists + stat: + path: "{{ _backup_path }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_aws_credentials_file_backup + + - name: Restore AWS credentials file + when: stat_aws_credentials_file_backup.stat.exists + copy: + src: "{{ _backup_path }}" + dest: "{{ aws_config_dir }}/credentials" + remote_src: true + mode: preserve + no_log: true - - *REFRESH_ANSIBLE_FACTS + - name: Remove backup of AWS credentials file + when: stat_aws_credentials_file_backup.stat.exists + file: + path: "{{ _backup_path }}" + state: absent - - name: Assert update to 8.5 succeeded - assert: - that: ansible_distribution_version is version(versions.target.full, '=') - quiet: true + - name: Uninstall AWS cli + when: install_awscli.changed + pip: + name: awscli + state: absent diff --git a/ci/pipelines/build.yaml b/ci/pipelines/build.yaml new file mode 100755 index 0000000000..3993a88dc8 --- /dev/null +++ b/ci/pipelines/build.yaml @@ -0,0 +1,79 @@ +--- +trigger: + branches: + include: + - develop + +pr: + branches: + include: + - develop + +pool: + name: $(agentPoolName) + +variables: + repository: epicli + tags: $(Build.SourceBranchName).$(Build.BuildId) + +jobs: + - job: Run_unit_tests + displayName: Run unit tests + steps: + - task: UsePythonVersion@0 + displayName: Use Python 3.10 + # To be compatible with the epicli's parent image (python:3.10-slim). 
+ inputs: + versionSpec: 3.10 + + - task: Bash@3 + displayName: Install Python dependencies + inputs: + targetType: inline + script: | + python3 -m pip install boto3 click jinja2 jsonschema pytest pytest_mock python-json-logger pyyaml \ + ruamel.yaml setuptools twine wheel + + - task: Bash@3 + displayName: Run unit tests + continueOnError: true + inputs: + targetType: inline + script: | + pytest --junit-xml=unit_tests_results.xml + + - task: PublishTestResults@2 + displayName: Publish test results + inputs: + testResultsFiles: unit_tests_results.xml + searchFolder: $(System.DefaultWorkingDirectory) + failTaskOnFailedTests: true + testRunTitle: Python unit tests for epicli + + - job: Build_epicli_image + displayName: Build an epicli image + dependsOn: + - Run_unit_tests + steps: + - task: CmdLine@2 + displayName: Display image tags + inputs: + script: | + echo Image tags: $(containerRegistry)/$(repository):$(tags) + + - task: Docker@2 + displayName: Build the image + inputs: + containerRegistry: $(containerRegistry) + repository: $(repository) + command: build + Dockerfile: Dockerfile + tags: $(tags) + + - task: Docker@2 + displayName: Push the image + inputs: + containerRegistry: $(containerRegistry) + repository: $(repository) + command: push + tags: $(tags) diff --git a/ci/pipelines/linters.yaml b/ci/pipelines/linters.yaml new file mode 100755 index 0000000000..37528a38b8 --- /dev/null +++ b/ci/pipelines/linters.yaml @@ -0,0 +1,151 @@ +--- +trigger: + branches: + include: + - develop + +pr: + branches: + include: + - develop + +pool: + name: $(agentPoolName) + +variables: + ansible_lint_error_threshold: 338 + pylint_score_cli_threshold: 9.50 + pylint_score_tests_threshold: 9.78 + rubocop_linter_threshold: 183 + +jobs: + - job: Run_linters + displayName: Run linters + steps: + - task: UsePythonVersion@0 + displayName: Use Python 3.10 + # To be compatible with the epicli's parent image (python:3.10-slim). + inputs: + versionSpec: 3.10 + + - task: Bash@3 + displayName: Install Ansible Lint and its dependencies + # Installing Ansible 5.2.0 to be compatible with the epicli image. 
+ inputs: + targetType: inline + script: | + python3 -m pip install --upgrade ansible==5.2.0 ansible-lint ansible-lint-junit==0.16 lxml pip setuptools + + - task: Bash@3 + displayName: Run Ansible Lint + inputs: + targetType: inline + script: | + set -e + if ansible-lint -p ansible --show-relpath --nocolor 1> ansible_lint_stdout 2> ansible_lint_stderr \ + || grep 'violation(s) that are fatal' ansible_lint_stderr; then + # Suppress the next line when the "load-failure" bug in ansible-lint is solved + # https://github.com/ansible/ansible-lint/issues/2217 + sed -i '/load-failure/d' ansible_lint_stdout + error_count=$(wc -l < ansible_lint_stdout) + # Convert to junit + ansible-lint-junit ansible_lint_stdout -o ansible_lint_output.xml + test $error_count -le $(ansible_lint_error_threshold) + else + exit 1 + fi + + - task: PublishTestResults@2 + displayName: Publish Ansible Lint test results + inputs: + testResultsFiles: ansible_lint_output.xml + searchFolder: $(System.DefaultWorkingDirectory) + testRunTitle: Ansible Lint test results + + - task: Bash@3 + displayName: Install Pylint and its dependencies + inputs: + targetType: inline + script: | + # epicli deps: click + python3 -m pip install --upgrade pylint pylint-fail-under pylint-junit \ + click + + - task: Bash@3 + displayName: Run Pylint on CLI code + inputs: + targetType: inline + script: | + python3 -m pylint ./cli \ + --rcfile .pylintrc \ + --fail-under=$(pylint_score_cli_threshold) \ + --output cli_code_results.xml + + - task: PublishTestResults@2 + displayName: Publish Pylint test results for CLI Code + inputs: + testResultsFiles: cli_code_results.xml + searchFolder: $(System.DefaultWorkingDirectory) + testRunTitle: Pylint test results for CLI Code + + - task: Bash@3 + displayName: Run Pylint on test code + inputs: + targetType: inline + script: | + python3 -m pylint ./tests \ + --rcfile .pylintrc \ + --fail-under=$(pylint_score_tests_threshold) \ + --output test_code_results.xml \ + --disable=F0401 # Disable import-error checking + + - task: PublishTestResults@2 + displayName: Publish Pylint test results for test code + inputs: + testResultsFiles: test_code_results.xml + searchFolder: $(System.DefaultWorkingDirectory) + testRunTitle: Pylint test results for test code + + - task: Bash@3 + displayName: Install Rubocop and its dependencies + inputs: + targetType: inline + script: | + set -e + apt-get -y update + apt-get -y install rubygems + gem install rubocop-ast:1.17.0 rubocop:1.28.2 rubocop-junit_formatter + + - task: Bash@3 + displayName: Run Rubocop linter on test code + inputs: + targetType: inline + script: | + rubocop ./tests \ + -c .rubocop.yml \ + --require rubocop/formatter/junit_formatter \ + --format RuboCop::Formatter::JUnitFormatter \ + --out rubocop_results.xml \ + --fail-level error + + - task: Bash@3 + displayName: Assert number of linter failures + inputs: + targetType: inline + script: | + set -e + # Fetch number of detected failures from results file, then test if it does not exceed the declared threshold + # rubocop_linter_threshold is set based on latest linter results performed after code cleaning + detected_failures=$( \ + grep --only-matching 'failures=.[0-9]*.' 
rubocop_results.xml | \ + grep --only-matching '[0-9]*') + echo "Number of detected failures: $detected_failures" + echo "Failures threshold value: $(rubocop_linter_threshold)" + test $detected_failures -le $(rubocop_linter_threshold) + + - task: PublishTestResults@2 + displayName: Publish Rubocop linting test results + inputs: + testResultsFiles: rubocop_results.xml + searchFolder: $(System.DefaultWorkingDirectory) + testRunTitle: Rubocop linting test results diff --git a/cli/epicli.py b/cli/epicli.py index fbf55163f5..86b7373a69 100644 --- a/cli/epicli.py +++ b/cli/epicli.py @@ -21,6 +21,7 @@ from cli.src.commands.Test import Test from cli.src.commands.Upgrade import Upgrade from cli.src.Config import Config, SUPPORTED_OS +from cli.src.helpers.argparse_helpers import comma_separated_type from cli.src.helpers.build_io import get_output_path, save_to_file from cli.src.helpers.cli_helpers import prompt_for_password, query_yes_no from cli.src.helpers.time_helpers import format_time @@ -38,30 +39,32 @@ def main(): formatter_class=argparse.RawTextHelpFormatter) # setup some root arguments - parser.add_argument('--version', action='version', help='Shows the CLI version', version=VERSION) - parser.add_argument('--licenses', action='version', - help='Shows the third party packages and their licenses the CLI is using.', - version=json.dumps(LICENSES, indent=4)) + parser.add_argument('--auto-approve', dest='auto_approve', action="store_true", + help='Auto approve any user input queries asked by epicli.') + parser.add_argument('--licenses', action='version', version=json.dumps(LICENSES, indent=4), + help='Shows the third party packages and their licenses the CLI is using.') + parser.add_argument('--log-count', dest='log_count', type=str, + help='Rollover count where each CLI run will generate a new log.') + parser.add_argument('--log-date-format', dest='log_date_format', type=str, + help='''Format for the logging date/time. Uses the default Python strftime formatting, +more information here: https://docs.python.org/3.7/library/time.html#time.strftime''') parser.add_argument('-l', '--log-file', dest='log_name', type=str, - help='The name of the log file written to the output directory') + help='The name of the log file written to the output directory.') parser.add_argument('--log-format', dest='log_format', type=str, help='''Format for the logging string. Uses the default Python log formatting, more information here: https://docs.python.org/3.7/library/logging.html''') - parser.add_argument('--log-date-format', dest='log_date_format', type=str, - help='''Format for the logging date/time. Uses the default Python strftime formatting, -more information here: https://docs.python.org/3.7/library/time.html#time.strftime''') - parser.add_argument('--log-count', dest='log_count', type=str, - help='Roleover count where each CLI run will generate a new log.') - parser.add_argument('--log-type', choices=['plain', 'json'], default='plain', - dest='log_type', action='store', help='''Type of logs that will be written to the output file. + parser.add_argument('--log-type', choices=['plain', 'json'], default='plain', dest='log_type', action='store', + help='''Type of logs that will be written to the output file. 
Currently supported formats are plain text or JSON''') + parser.add_argument('--no-color', dest='no_color', action="store_true", + help='Disables output coloring.') parser.add_argument('--validate-certs', choices=['true', 'false'], default='true', action='store', dest='validate_certs', help='''[Experimental]: Disables certificate checks for certain Ansible operations which might have issues behind proxies (https://github.com/ansible/ansible/issues/32750). Should NOT be used in production for security reasons.''') - parser.add_argument('--auto-approve', dest='auto_approve', action="store_true", - help='Auto approve any user input queries asked by Epicli') + parser.add_argument('--version', action='version', version=VERSION, + help='Shows the CLI version.') # set debug verbosity level. def debug_level(x): @@ -113,14 +116,15 @@ config.log_type = args.log_type config.log_count = args.log_count config.validate_certs = True if args.validate_certs == 'true' else False - if 'offline_requirements' in args and not args.offline_requirements is None: + if 'offline_requirements' in args and args.offline_requirements is not None: config.offline_requirements = args.offline_requirements - if 'wait_for_pods' in args and not args.wait_for_pods is None: + if 'wait_for_pods' in args and args.wait_for_pods is not None: config.wait_for_pods = args.wait_for_pods if 'upgrade_components' in args and args.upgrade_components: config.upgrade_components = args.upgrade_components config.debug = args.debug config.auto_approve = args.auto_approve + config.no_color = args.no_color or os.getenv('NO_COLOR', '') != '' try: return args.func(args) @@ -213,6 +217,8 @@ def apply_parser(subparsers): help='Number of pings after which Ansible will fail.') optional.add_argument('--ansible-forks', dest='ansible_forks', type=int, required=False, action='store', default=10, help='Sets the number of forks in ansible.cfg.') + optional.add_argument('--full-download', dest='full_download', required=False, action='store_true', default=False, + help='When used, epicli will download all the available requirements for each feature supported.') sub_parser._action_groups.append(optional) def run_apply(args): @@ -262,12 +268,12 @@ def upgrade_parser(subparsers): 'jmx_exporter', 'kafka', 'kafka_exporter', - 'kibana', + 'opensearch_dashboards', 'kubernetes', 'load_balancer', 'logging', 'node_exporter', - 'opendistro_for_elasticsearch', + 'opensearch', 'postgresql', 'postgres_exporter', 'prometheus', @@ -275,18 +281,6 @@ 'zookeeper', ]) - def comma_separated_type(choices): - """Return a function that splits and checks comma-separated values.""" - def splitarg(arg): - values = arg.replace(' ','').lower().split(',') - for value in values: - if value not in choices: - raise argparse.ArgumentTypeError( - 'invalid choice: {!r} (choose from {})' - .format(value, ', '.join(map(repr, choices)))) - return values - return splitarg - #required required.add_argument('-b', '--build', dest='build_directory', type=str, required=True, help='Absolute path to directory with build artifacts.') @@ -306,6 +300,8 @@ help='Number of pings after which Ansible will fail.') optional.add_argument('--ansible-forks', dest='ansible_forks', type=int, required=False, action='store', default=10, help='Sets the number of forks in ansible.cfg.') + optional.add_argument('--full-download', dest='full_download', required=False, action='store_true', default=False, + help='When used, epicli will download all the 
available requirements for each feature supported.') sub_parser._action_groups.append(optional) def run_upgrade(args): @@ -328,15 +324,24 @@ def test_parser(subparsers): help='Absolute path to directory with build artifacts.') #optional - group_list = '{' + ', '.join(SpecCommand.get_spec_groups()) + '}' - optional.add_argument('-g', '--group', choices=SpecCommand.get_spec_groups(), default='all', action='store', dest='group', required=False, metavar=group_list, - help='Group of tests to be run, e.g. kafka.') + TEST_GROUPS = SpecCommand.get_spec_groups() + include_choices = ['all'] + TEST_GROUPS + + optional.add_argument('-e', '--exclude', type=comma_separated_type(choices=TEST_GROUPS), + dest='excluded_groups', required=False, + help='Group of tests to be skipped, e.g. -e kafka,kafka_exporter.') + optional.add_argument('-i', '--include', default='all', type=comma_separated_type(choices=include_choices), + dest='included_groups', required=False, + help='Group of tests to be run, e.g. -i kafka,kafka_exporter.') + optional.add_argument('-k', '--kubeconfig-remote-path', type=os.path.abspath, + dest='kubeconfig_remote_path', required=False, + help='Absolute path to kubeconfig file on K8s master host, e.g. /etc/kubernetes/admin.conf.') sub_parser._action_groups.append(optional) def run_test(args): experimental_query() adjust_paths_from_build(args) - with Test(args) as cmd: + with Test(args, TEST_GROUPS) as cmd: return cmd.test() sub_parser.set_defaults(func=run_test) diff --git a/cli/licenses.py b/cli/licenses.py index 93eaa36c2d..ae17bca280 100644 --- a/cli/licenses.py +++ b/cli/licenses.py @@ -14,7 +14,7 @@ }, { "Name": "ansible-core", - "Version": "2.12.1", + "Version": "2.12.6", "Summary": "Radically simple IT automation", "Home-page": "https://ansible.com/", "Author": "Ansible, Inc.", @@ -120,7 +120,7 @@ }, { "Name": "azure-common", - "Version": "1.1.27", + "Version": "1.1.28", "Summary": "Microsoft Azure Client Library for Python (Common)", "Home-page": "https://github.com/Azure/azure-sdk-for-python", "Author": "Microsoft Corporation", @@ -131,7 +131,7 @@ }, { "Name": "azure-core", - "Version": "1.21.1", + "Version": "1.24.0", "Summary": "Microsoft Azure Core Library for Python", "Home-page": "https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core", "Author": "Microsoft Corporation", @@ -154,9 +154,10 @@ "Summary": "Azure Data Lake Store Filesystem Client Library for Python", "Home-page": "https://github.com/Azure/azure-data-lake-store-python", "Author": "Microsoft Corporation", - "License": "Other", + "License": "MIT License", "License URL": "https://api.github.com/repos/azure/azure-data-lake-store-python/license", - "License repo": "\ufeffThe MIT License (MIT)\n\nCopyright (c) 2016 Microsoft\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND 
NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" + "License repo": "\ufeffThe MIT License (MIT)\n\nCopyright (c) 2016 Microsoft\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n", + "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, { "Name": "azure-graphrbac", @@ -171,7 +172,7 @@ }, { "Name": "azure-identity", - "Version": "1.7.1", + "Version": "1.10.0", "Summary": "Microsoft Azure Identity Library for Python", "Home-page": "https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity", "Author": "Microsoft Corporation", @@ -401,7 +402,7 @@ }, { "Name": "azure-mgmt-cosmosdb", - "Version": "7.0.0b2", + "Version": "7.0.0b6", "Summary": "Microsoft Azure Cosmos DB Management Client Library for Python", "Home-page": "https://github.com/Azure/azure-sdk-for-python", "Author": "Microsoft Corporation", @@ -929,8 +930,8 @@ }, { "Name": "azure-mgmt-sqlvirtualmachine", - "Version": "1.0.0b1", - "Summary": "Microsoft Azure Sqlvirtualmachine Management Client Library for Python", + "Version": "1.0.0b2", + "Summary": "Microsoft Azure Sql Virtual Machine Management Client Library for Python", "Home-page": "https://github.com/Azure/azure-sdk-for-python", "Author": "Microsoft Corporation", "License": "MIT License", @@ -951,7 +952,7 @@ }, { "Name": "azure-mgmt-synapse", - "Version": "2.1.0b4", + "Version": "2.1.0b5", "Summary": "Microsoft Azure Synapse Management Client Library for Python", "Home-page": "https://github.com/Azure/azure-sdk-for-python", "Author": "Microsoft Corporation", @@ -1061,7 +1062,7 @@ }, { "Name": "bcrypt", - "Version": "3.2.0", + "Version": "3.2.2", "Summary": "Modern password hashing for your software and your servers", "Home-page": "https://github.com/pyca/bcrypt/", "Author": "The Python Cryptographic Authority developers", @@ -1072,7 +1073,7 @@ }, { "Name": "boto3", - "Version": "1.20.43", + "Version": "1.23.10", "Summary": "The AWS SDK for Python", "Home-page": "https://github.com/boto/boto3", "Author": "Amazon Web Services", @@ -1083,7 +1084,7 @@ }, { "Name": "botocore", - "Version": "1.23.43", + "Version": "1.26.10", "Summary": "Low-level, data-driven core of boto 3.", "Home-page": "https://github.com/boto/botocore", "Author": "Amazon Web Services", @@ -1094,11 +1095,13 @@ }, { "Name": "certifi", - "Version": "2021.10.8", + "Version": "2022.5.18.1", "Summary": "Python package for providing Mozilla's CA Bundle.", - "Home-page": "https://certifiio.readthedocs.io/en/latest/", + "Home-page": "https://github.com/certifi/python-certifi", "Author": "Kenneth Reitz", - "License": "MPL-2.0" + "License": "Other", + "License URL": "https://api.github.com/repos/certifi/python-certifi/license", + "License repo": "This package contains a modified version of ca-bundle.crt:\n\nca-bundle.crt -- Bundle of CA Root Certificates\n\nCertificate data from Mozilla as of: Thu Nov 3 19:04:19 2011#\nThis is a bundle of X.509 certificates of public Certificate Authorities\n(CA). These were automatically extracted from Mozilla's root certificates\nfile (certdata.txt). 
This file can be found in the mozilla source tree:\nhttps://hg.mozilla.org/mozilla-central/file/tip/security/nss/lib/ckfw/builtins/certdata.txt\nIt contains the certificates in PEM format and therefore\ncan be directly used with curl / libcurl / php_curl, or with\nan Apache+mod_ssl webserver for SSL client authentication.\nJust configure this file as the SSLCACertificateFile.#\n\n***** BEGIN LICENSE BLOCK *****\nThis Source Code Form is subject to the terms of the Mozilla Public License,\nv. 2.0. If a copy of the MPL was not distributed with this file, You can obtain\none at http://mozilla.org/MPL/2.0/.\n\n***** END LICENSE BLOCK *****\n@(#) $RCSfile: certdata.txt,v $ $Revision: 1.80 $ $Date: 2011/11/03 15:11:58 $\n" }, { "Name": "cffi", @@ -1121,7 +1124,7 @@ }, { "Name": "charset-normalizer", - "Version": "2.0.10", + "Version": "2.0.12", "Summary": "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.", "Home-page": "https://github.com/ousret/charset_normalizer", "Author": "Ahmed TAHRI @Ousret", @@ -1130,6 +1133,14 @@ "License repo": "MIT License\n\nCopyright (c) 2019 TAHRI Ahmed R.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.", "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, + { + "Name": "click", + "Version": "8.1.3", + "Summary": "Composable command line interface toolkit", + "Home-page": "https://palletsprojects.com/p/click/", + "Author": "Armin Ronacher", + "License": "BSD-3-Clause" + }, { "Name": "colorama", "Version": "0.4.4", @@ -1143,7 +1154,7 @@ }, { "Name": "cryptography", - "Version": "36.0.1", + "Version": "37.0.2", "Summary": "cryptography is a package which provides cryptographic recipes and primitives to Python developers.", "Home-page": "https://github.com/pyca/cryptography", "Author": "The Python Cryptographic Authority and individual contributors", @@ -1175,9 +1186,9 @@ }, { "Name": "fabric", - "Version": "2.6.0", + "Version": "2.7.0", "Summary": "High level SSH command execution", - "Home-page": "http://fabfile.org", + "Home-page": "https://fabfile.org", "Author": "Jeff Forcier", "License": "BSD" }, @@ -1202,9 +1213,9 @@ }, { "Name": "invoke", - "Version": "1.6.0", + "Version": "1.7.1", "Summary": "Pythonic task execution", - "Home-page": "http://docs.pyinvoke.org", + "Home-page": "https://pyinvoke.org", "Author": "Jeff Forcier", "License": "BSD" }, @@ -1232,7 +1243,7 @@ }, { "Name": "Jinja2", - "Version": "3.0.3", + "Version": "3.1.2", "Summary": "A very fast and expressive template engine.", "Home-page": "https://palletsprojects.com/p/jinja/", "Author": "Armin Ronacher", @@ -1240,7 +1251,7 @@ }, { "Name": "jmespath", - "Version": "0.10.0", + "Version": "1.0.0", "Summary": "JSON Matching Expressions", "Home-page": "https://github.com/jmespath/jmespath.py", "Author": "James Saryerwinnie", @@ -1261,12 +1272,12 @@ }, { "Name": "jsonschema", - "Version": "4.4.0", + "Version": "4.5.1", "Summary": "An implementation of JSON Schema validation for Python", - "Home-page": "https://github.com/Julian/jsonschema", + "Home-page": "https://github.com/python-jsonschema/jsonschema", "Author": "Julian Berman", "License": "MIT License", - "License URL": "https://api.github.com/repos/julian/jsonschema/license", + "License URL": "https://api.github.com/repos/python-jsonschema/jsonschema/license", "License repo": "Copyright (c) 2013 Julian Berman\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n", "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, @@ -1283,7 +1294,7 @@ }, { "Name": "MarkupSafe", - "Version": "2.0.1", + "Version": "2.1.1", "Summary": "Safely add untrusted strings to HTML/XML markup.", "Home-page": "https://palletsprojects.com/p/markupsafe/", "Author": "Armin Ronacher", @@ -1299,7 +1310,7 @@ }, { "Name": "msal", - "Version": "1.16.0", + "Version": "1.17.0", "Summary": "The Microsoft Authentication Library (MSAL) for Python library enables your app to access the Microsoft Cloud by supporting authentication of users with Microsoft Azure Active Directory accounts (AAD) and Microsoft Accounts (MSA) using industry standard OAuth2 and OpenID Connect.", "Home-page": "https://github.com/AzureAD/microsoft-authentication-library-for-python", "Author": "Microsoft Corporation", @@ -1331,7 +1342,7 @@ }, { "Name": "oauthlib", - "Version": "3.1.1", + "Version": "3.2.0", "Summary": "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic", "Home-page": "https://github.com/oauthlib/oauthlib", "Author": "The OAuthlib Community", @@ -1352,7 +1363,7 @@ }, { "Name": "paramiko", - "Version": "2.9.2", + "Version": "2.11.0", "Summary": "SSH2 protocol library", "Home-page": "https://paramiko.org", "Author": "Jeff Forcier", @@ -1360,12 +1371,12 @@ }, { "Name": "pathlib2", - "Version": "2.3.6", + "Version": "2.3.7.post1", "Summary": "Object-oriented filesystem paths", - "Home-page": "https://github.com/mcmtroffaes/pathlib2", + "Home-page": "https://github.com/jazzband/pathlib2", "Author": "Matthias C. M. Troffaes", "License": "MIT License", - "License URL": "https://api.github.com/repos/mcmtroffaes/pathlib2/license", + "License URL": "https://api.github.com/repos/jazzband/pathlib2/license", "License repo": "The MIT License (MIT)\n\nCopyright (c) 2014-2017 Matthias C. M. 
Troffaes\nCopyright (c) 2012-2014 Antoine Pitrou and contributors\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n", "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, @@ -1389,7 +1400,7 @@ }, { "Name": "psutil", - "Version": "5.9.0", + "Version": "5.9.1", "Summary": "Cross-platform lib for process and system monitoring in Python.", "Home-page": "https://github.com/giampaolo/psutil", "Author": "Giampaolo Rodola", @@ -1421,7 +1432,7 @@ }, { "Name": "Pygments", - "Version": "2.11.2", + "Version": "2.12.0", "Summary": "Pygments is a syntax highlighting package written in Python.", "Home-page": "https://pygments.org/", "Author": "Georg Brandl", @@ -1429,13 +1440,13 @@ }, { "Name": "PyJWT", - "Version": "2.3.0", + "Version": "2.4.0", "Summary": "JSON Web Token implementation in Python", "Home-page": "https://github.com/jpadilla/pyjwt", "Author": "Jose Padilla", "License": "MIT License", "License URL": "https://api.github.com/repos/jpadilla/pyjwt/license", - "License repo": "The MIT License (MIT)\n\nCopyright (c) 2015 Jos\u00e9 Padilla\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n", + "License repo": "The MIT License (MIT)\n\nCopyright (c) 2015-2022 Jos\u00e9 Padilla\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n", "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, { @@ -1451,7 +1462,7 @@ }, { "Name": "pyOpenSSL", - "Version": "21.0.0", + "Version": "22.0.0", "Summary": "Python wrapper module around the OpenSSL library", "Home-page": "https://pyopenssl.org/", "Author": "The pyOpenSSL developers", @@ -1459,14 +1470,11 @@ }, { "Name": "pyparsing", - "Version": "3.0.7", - "Summary": "Python parsing module", - "Home-page": "https://github.com/pyparsing/pyparsing/", - "Author": "Paul McGuire", - "License": "MIT License", - "License URL": "https://api.github.com/repos/pyparsing/pyparsing/license", - "License repo": "Permission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\n\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\nCLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\nTORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\nSOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n", - "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission 
notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" + "Version": "3.0.9", + "Summary": "pyparsing module - Classes and methods to define and execute parsing grammars", + "Home-page": "", + "Author": "", + "License": "" }, { "Name": "pyrsistent", @@ -1520,7 +1528,7 @@ }, { "Name": "requests-oauthlib", - "Version": "1.3.0", + "Version": "1.3.1", "Summary": "OAuthlib authentication support for Requests.", "Home-page": "https://github.com/requests/requests-oauthlib", "Author": "Kenneth Reitz", @@ -1558,7 +1566,7 @@ }, { "Name": "ruamel.yaml", - "Version": "0.17.20", + "Version": "0.17.21", "Summary": "ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order", "Home-page": "https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree", "Author": "Anthon van der Neut", @@ -1566,7 +1574,7 @@ }, { "Name": "s3transfer", - "Version": "0.5.0", + "Version": "0.5.2", "Summary": "An Amazon S3 Transfer Manager", "Home-page": "https://github.com/boto/s3transfer", "Author": "Amazon Web Services", @@ -1629,9 +1637,17 @@ "License repo": "Copyright (c) 2011-2020 Sergey Astanin and contributors\n\nPermission is hereby granted, free of charge, to any person obtaining\na copy of this software and associated documentation files (the\n\"Software\"), to deal in the Software without restriction, including\nwithout limitation the rights to use, copy, modify, merge, publish,\ndistribute, sublicense, and/or sell copies of the Software, and to\npermit persons to whom the Software is furnished to do so, subject to\nthe following conditions:\n\nThe above copyright notice and this permission notice shall be\nincluded in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\nLIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\nOF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\nWITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n", "License text": "MIT License\n\nCopyright (c) [year] [fullname]\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n" }, + { + "Name": "typing_extensions", + "Version": "4.2.0", + "Summary": "Backported and Experimental Type Hints for Python 3.7+", + "Home-page": "", + "Author": "", + "License": "" + }, { "Name": "urllib3", - "Version": "1.26.8", + "Version": "1.26.9", "Summary": "HTTP library with thread-safe connection pooling, file post, and more.", "Home-page": "https://urllib3.readthedocs.io/", "Author": "Andrey Petrov", @@ -1647,18 +1663,18 @@ }, { "Name": "wrapt", - "Version": "1.13.3", + "Version": "1.14.1", "Summary": "Module for decorators, wrappers and monkey patching.", "Home-page": "https://github.com/GrahamDumpleton/wrapt", "Author": "Graham Dumpleton", "License": "BSD 2-Clause \"Simplified\" License", "License URL": "https://api.github.com/repos/grahamdumpleton/wrapt/license", - "License repo": "Copyright (c) 2013-2019, Graham Dumpleton\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\nLIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\nCONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\nSUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\nINTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\nCONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\nARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGE.\n", + "License repo": "Copyright (c) 2013-2022, Graham Dumpleton\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\nLIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\nCONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\nSUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\nINTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\nCONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\nARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGE.\n", "License text": "BSD 2-Clause License\n\nCopyright (c) [year], [fullname]\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n" }, { "Name": "xmltodict", - "Version": "0.12.0", + "Version": "0.13.0", "Summary": "Makes working with XML feel like you are working with JSON", "Home-page": "https://github.com/martinblech/xmltodict", "Author": "Martin Blech", diff --git a/cli/src/Config.py b/cli/src/Config.py index 989a78a91e..4de058aa33 100644 --- a/cli/src/Config.py +++ b/cli/src/Config.py @@ -1,5 +1,6 @@ import os from os.path import expanduser +from pathlib import Path from typing import Dict, List @@ -7,7 +8,7 @@ SUPPORTED_OS: Dict[str, List[str]] = { - 'almalinux-8': ['x86_64'], + 'almalinux-8': ['x86_64','aarch64'], 'rhel-8': ['x86_64'], 'ubuntu-20.04': ['x86_64'] } @@ -32,6 +33,8 @@ def __init__(self): self._log_count = 10 self._log_type = 'plain' + self._full_download: bool = False + self._input_manifest_path: Path = None self._validate_certs = True self._debug = 0 self._auto_approve = False @@ -39,6 +42,23 @@ def __init__(self): self._wait_for_pods = False self._upgrade_components = [] self._vault_password_location = os.path.join(expanduser("~"), '.epicli/vault.cfg') + self._no_color: bool = False + + @property + def full_download(self) -> bool: + return self._full_download + + @full_download.setter + def full_download(self, full_download: bool): + self._full_download = full_download + + @property + def input_manifest_path(self) -> Path: + return self._input_manifest_path + + @input_manifest_path.setter + def input_manifest_path(self, input_manifest_path: Path): + self._input_manifest_path = input_manifest_path @property def docker_cli(self): @@ -59,7 +79,7 @@ def log_file(self): @log_file.setter def log_file(self, log_file): - if not log_file is None: + if log_file is not None: self._log_file = log_file @property @@ -68,7 +88,7 @@ def log_format(self): @log_format.setter def log_format(self, log_format): - if not log_format is None: + if log_format is not None: self._log_format = log_format @property @@ -77,7 +97,7 @@ def log_date_format(self): @log_date_format.setter def log_date_format(self, log_date_format): - if not log_date_format is None: + if log_date_format is not None: self._log_date_format = log_date_format @property @@ -86,7 +106,7 @@ def log_count(self): @log_count.setter def log_count(self, log_count): - if not log_count is None: + if log_count is not None: self._log_count = log_count @property @@ -95,7 +115,7 @@ def log_type(self): @log_type.setter def log_type(self, log_type): - if not log_type is None: + if log_type is not None: if log_type in LOG_TYPES: self._log_type = log_type else: @@ -107,7 +127,7 @@ def validate_certs(self): @validate_certs.setter def validate_certs(self, validate_certs): - if not validate_certs is None: + if validate_certs is not None: self._validate_certs = validate_certs @property @@ -116,7 +136,7 @@ def debug(self): @debug.setter def debug(self, debug): - if not debug is None: + if debug is not None: self._debug = debug @property @@ -125,7 +145,7 @@ def auto_approve(self): @auto_approve.setter def auto_approve(self, auto_approve): - if not 
auto_approve is None: + if auto_approve is not None: self._auto_approve = auto_approve @property @@ -134,7 +154,7 @@ def vault_password_location(self): @vault_password_location.setter def vault_password_location(self, vault_password_location): - if not vault_password_location is None: + if vault_password_location is not None: self._vault_password_location = vault_password_location @property @@ -143,7 +163,7 @@ def offline_requirements(self): @offline_requirements.setter def offline_requirements(self, offline_requirements): - if not offline_requirements is None: + if offline_requirements is not None: if not os.path.isdir(offline_requirements): raise Exception(f'offline_requirements path "{offline_requirements}" is not a valid path.') @@ -158,7 +178,7 @@ def wait_for_pods(self): @wait_for_pods.setter def wait_for_pods(self, wait_for_pods): - if not wait_for_pods is None: + if wait_for_pods is not None: self._wait_for_pods = wait_for_pods @property @@ -169,6 +189,14 @@ def upgrade_components(self): def upgrade_components(self, upgrade_components): self._upgrade_components = upgrade_components + @property + def no_color(self) -> bool: + return self._no_color + + @no_color.setter + def no_color(self, no_color: bool): + self._no_color = no_color + instance = None def __new__(cls): diff --git a/cli/src/Log.py b/cli/src/Log.py index 18b8107cf8..81cb5977fd 100644 --- a/cli/src/Log.py +++ b/cli/src/Log.py @@ -3,6 +3,7 @@ import os import threading +import click from pythonjsonlogger import jsonlogger from cli.src.Config import Config @@ -10,18 +11,12 @@ class ColorFormatter(logging.Formatter): - grey = '\x1b[38;21m' - yellow = '\x1b[33;21m' - red = '\x1b[31;21m' - bold_red = '\x1b[31;1m' - reset = '\x1b[0m' - FORMATS = { - logging.DEBUG: grey + 'format' + reset, - logging.INFO: grey + 'format' + reset, - logging.WARNING: yellow + 'format' + reset, - logging.ERROR: red + 'format' + reset, - logging.CRITICAL: bold_red + 'format' + reset + logging.DEBUG: click.style('format', fg='bright_black'), # grey + logging.INFO: click.style('format'), + logging.WARNING: click.style('format', fg='yellow'), + logging.ERROR: click.style('format', fg='red'), + logging.CRITICAL: click.style('format', fg='red', bold=True) } def format(self, record): @@ -32,6 +27,24 @@ def format(self, record): return formatter.format(record) +class UncolorFormatter(logging.Formatter): + """ + Formatter that removes ANSI styling information (escape sequences). + """ + def format(self, record: logging.LogRecord) -> str: + return click.unstyle(super().format(record)) + + +class UncolorJsonFormatter(jsonlogger.JsonFormatter): + """ + JSON formatter that removes ANSI styling information (escape sequences). 
+ """ + def format(self, record: logging.LogRecord) -> str: + if isinstance(record.msg, str): + record.msg = click.unstyle(record.msg) + return super().format(record) + + class Log: class __LogBase: stream_handler = None @@ -42,8 +55,9 @@ def __init__(self): # create stream handler with color formatter self.stream_handler = logging.StreamHandler() - color_formatter = ColorFormatter() - self.stream_handler.setFormatter(color_formatter) + formatter = logging.Formatter(config.log_format, + datefmt=config.log_date_format) if config.no_color else ColorFormatter() + self.stream_handler.setFormatter(formatter) # create file handler log_path = os.path.join(get_output_path(), config.log_file) @@ -55,10 +69,10 @@ def __init__(self): # attach propper formatter to file_handler (plain|json) if config.log_type == 'plain': - file_formatter = logging.Formatter(config.log_format, datefmt=config.log_date_format) + file_formatter = UncolorFormatter(config.log_format, datefmt=config.log_date_format) self.file_handler.setFormatter(file_formatter) elif config.log_type == 'json': - json_formatter = jsonlogger.JsonFormatter(config.log_format, datefmt=config.log_date_format) + json_formatter = UncolorJsonFormatter(config.log_format, datefmt=config.log_date_format) self.file_handler.setFormatter(json_formatter) @@ -80,27 +94,38 @@ def __init__(self, logger_name): threading.Thread.__init__(self) self.logger = Log(logger_name) self.daemon = False - self.fdRead, self.fdWrite = os.pipe() - self.pipeReader = os.fdopen(self.fdRead) + self.fd_read, self.fd_write = os.pipe() + self.pipe_reader = os.fdopen(self.fd_read) self.start() - self.errorStrings = ['error', 'Error', 'ERROR', 'fatal', 'FAILED'] - self.warningStrings = ['warning', 'warning', 'WARNING'] - self.stderrstrings = [] + self.error_strings = ['error', 'Error', 'ERROR', 'fatal', 'FAILED'] + self.warning_strings = ['warning', 'warning', 'WARNING'] + self.output_error_lines = [] def fileno(self): - return self.fdWrite + return self.fd_write def run(self): - for line in iter(self.pipeReader.readline, ''): + """Run thread logging everything.""" + colored_loggers = ['AnsibleCommand', 'SpecCommand', 'TerraformCommand'] + logger_short_name = self.logger.name.split('.')[-1] + with_error_detection = logger_short_name in ['TerraformCommand'] + with_level_detection = logger_short_name not in colored_loggers + + for line in iter(self.pipe_reader.readline, ''): line = line.strip('\n') - if any([substring in line for substring in self.errorStrings]): - self.stderrstrings.append(line) - self.logger.error(line) - elif any([substring in line for substring in self.warningStrings]): + if with_error_detection and any(string in line for string in self.error_strings): + self.output_error_lines.append(line) + if with_level_detection: + if any(string in line for string in self.error_strings): + self.logger.error(line) + elif any(string in line for string in self.warning_strings): self.logger.warning(line) + else: + self.logger.info(line) else: self.logger.info(line) - self.pipeReader.close() + + self.pipe_reader.close() def close(self): - os.close(self.fdWrite) + os.close(self.fd_write) diff --git a/cli/src/ansible/AnsibleCommand.py b/cli/src/ansible/AnsibleCommand.py index 858951a572..57102f3d1d 100644 --- a/cli/src/ansible/AnsibleCommand.py +++ b/cli/src/ansible/AnsibleCommand.py @@ -2,17 +2,20 @@ import subprocess import time +from click import style from cli.src.Config import Config from cli.src.Log import Log, LogPipe -ansible_verbosity = ['NONE','-v','-vv','-vvv','-vvvv'] + 
+ANSIBLE_VERBOSITY = ['NONE','-v','-vv','-vvv','-vvvv'] +HIGHLIGHTED = {'fg': 'bright_blue'} class AnsibleCommand: def __init__(self, working_directory=os.path.dirname(__file__)): self.logger = Log(__name__) self.working_directory = working_directory - + self.colors_enabled = not Config().no_color def run_task(self, hosts, inventory, module, args=None): cmd = ['ansible'] @@ -28,33 +31,36 @@ def run_task(self, hosts, inventory, module, args=None): cmd.append(hosts) if Config().debug > 0: - cmd.append(ansible_verbosity[Config().debug]) + cmd.append(ANSIBLE_VERBOSITY[Config().debug]) - self.logger.info('Running: "' + ' '.join(module) + '"') + self.logger.info(f'Running: "{style(module, **HIGHLIGHTED) if self.colors_enabled else module}"') logpipe = LogPipe(__name__) with subprocess.Popen(cmd, stdout=logpipe, stderr=logpipe) as sp: logpipe.close() if sp.returncode != 0: - raise Exception('Error running: "' + ' '.join(cmd) + '"') - else: - self.logger.info('Done running "' + ' '.join(cmd) + '"') + raise Exception(f'Error running: "{" ".join(cmd)}"') - def run_task_with_retries(self, inventory, module, hosts, retries, timeout=10, args=None): + self.logger.info(f'Done running "{" ".join(cmd)}"') + + def run_task_with_retries(self, inventory, module, hosts, retries, delay=10, args=None): for i in range(retries): try: self.run_task(hosts=hosts, inventory=inventory, module=module, args=args) break - except Exception as e: + except Exception as e: # pylint: disable=broad-except self.logger.error(e) self.logger.warning('Retry running task: ' + str(i + 1) + '/' + str(retries)) - time.sleep(timeout) + time.sleep(delay) else: raise Exception(f'Failed running task after {str(retries)} retries') - def run_playbook(self, inventory, playbook_path, vault_file=None): + def run_playbook(self, inventory, playbook_path, vault_file=None, extra_vars:list[str]=None): + """ + :param extra_vars: playbook's variables passed via `--extra-vars` option + """ cmd = ['ansible-playbook'] if inventory is not None and len(inventory) > 0: @@ -63,12 +69,16 @@ def run_playbook(self, inventory, playbook_path, vault_file=None): if vault_file is not None: cmd.extend(["--vault-password-file", vault_file]) + if extra_vars: + for var in extra_vars: + cmd.extend(['--extra-vars', var]) + cmd.append(playbook_path) if Config().debug > 0: - cmd.append(ansible_verbosity[Config().debug]) + cmd.append(ANSIBLE_VERBOSITY[Config().debug]) - self.logger.info('Running: "' + ' '.join(playbook_path) + '"') + self.logger.info(f'Running: "{style(playbook_path, **HIGHLIGHTED) if self.colors_enabled else playbook_path}"') logpipe = LogPipe(__name__) with subprocess.Popen(cmd, stdout=logpipe, stderr=logpipe) as sp: @@ -76,18 +86,19 @@ def run_playbook(self, inventory, playbook_path, vault_file=None): if sp.returncode != 0: raise Exception('Error running: "' + ' '.join(cmd) + '"') - else: - self.logger.info('Done running "' + ' '.join(cmd) + '"') - def run_playbook_with_retries(self, inventory, playbook_path, retries, timeout=10): + self.logger.info('Done running "' + ' '.join(cmd) + '"') + + def run_playbook_with_retries(self, inventory, playbook_path, retries, delay=10, extra_vars:list[str]=None): for i in range(retries): try: self.run_playbook(inventory=inventory, - playbook_path=playbook_path) + playbook_path=playbook_path, + extra_vars=extra_vars) break - except Exception as e: + except Exception as e: # pylint: disable=broad-except self.logger.error(e) self.logger.warning('Retry running playbook: ' + str(i + 1) + '/' + str(retries)) - 
time.sleep(timeout) + time.sleep(delay) else: raise Exception(f'Failed running playbook after {str(retries)} retries') diff --git a/cli/src/ansible/AnsibleConfigFileCreator.py b/cli/src/ansible/AnsibleConfigFileCreator.py index 056a2a62b4..afdabe8f35 100644 --- a/cli/src/ansible/AnsibleConfigFileCreator.py +++ b/cli/src/ansible/AnsibleConfigFileCreator.py @@ -1,6 +1,7 @@ import os from collections import OrderedDict +from cli.src.Config import Config from cli.src.helpers.build_io import save_ansible_config_file from cli.src.Step import Step @@ -50,6 +51,8 @@ def process_ansible_options(self): self.add_setting('defaults', 'forks', self.ansible_options['forks']) # Ubuntu 18 upgraded to 20 has Python 2 and 3 so 'auto_legacy' doesn't work for PostgreSQL Ansible modules self.add_setting('defaults', 'interpreter_python', 'auto') + if not Config().no_color: + self.add_setting('defaults', 'force_color', 'true') def create(self): self.logger.info('Creating ansible.cfg') diff --git a/cli/src/ansible/AnsibleInventoryCreator.py b/cli/src/ansible/AnsibleInventoryCreator.py index d964e28d01..6738fd5270 100644 --- a/cli/src/ansible/AnsibleInventoryCreator.py +++ b/cli/src/ansible/AnsibleInventoryCreator.py @@ -43,12 +43,14 @@ def get_inventory(self): return self.group_duplicated(inventory) def get_roles_for_feature(self, component_key): - features_map = select_single(self.config_docs, lambda x: x.kind == 'configuration/feature-mapping') - return features_map.specification.roles_mapping[component_key] + features_map = select_single(self.config_docs, lambda x: x.kind == 'configuration/feature-mappings') + feature_roles = features_map.specification.mappings[component_key] + enabled_roles = self.get_enabled_roles() + return [role for role in feature_roles if role in enabled_roles] def get_available_roles(self): - features_map = select_single(self.config_docs, lambda x: x.kind == 'configuration/feature-mapping') - return features_map.specification.available_roles + features = select_single(self.config_docs, lambda x: x.kind == 'configuration/features') + return features.specification.features def get_enabled_roles(self): roles = self.get_available_roles() diff --git a/cli/src/ansible/AnsibleVarsGenerator.py b/cli/src/ansible/AnsibleVarsGenerator.py index d44fd8086a..502baa5627 100644 --- a/cli/src/ansible/AnsibleVarsGenerator.py +++ b/cli/src/ansible/AnsibleVarsGenerator.py @@ -73,7 +73,7 @@ def generate(self): # are not compatible with the new ones, defaults are used for template processing roles_with_defaults = [ 'grafana', 'haproxy', 'image_registry', 'jmx_exporter', 'kafka', 'kafka_exporter', - 'kibana', 'logging', 'node_exporter', 'postgres_exporter', + 'logging', 'node_exporter', 'opensearch', 'opensearch_dashboards', 'postgres_exporter', 'postgresql', 'prometheus', 'rabbitmq', 'repository' ] # now lets add any external configs we want to load @@ -141,15 +141,19 @@ def write_role_manifest_vars(self, ansible_dir, role, kind): self.write_role_vars(ansible_dir, role, document, vars_file_name='manifest.yml') def populate_group_vars(self, ansible_dir): + input_manifest_path: str = str(Config().input_manifest_path.absolute()) if Config().input_manifest_path else '' + main_vars = ObjDict() main_vars['admin_user'] = self.cluster_model.specification.admin_user - main_vars['validate_certs'] = Config().validate_certs - main_vars['offline_requirements'] = Config().offline_requirements - main_vars['wait_for_pods'] = Config().wait_for_pods + main_vars['epiphany_version'] = VERSION + main_vars['input_manifest_path'] = 
input_manifest_path main_vars['is_upgrade_run'] = self.is_upgrade_run + main_vars['offline_requirements'] = Config().offline_requirements main_vars['roles_with_generated_vars'] = sorted(self.roles_with_generated_vars) main_vars['upgrade_components'] = Config().upgrade_components - main_vars['epiphany_version'] = VERSION + main_vars['validate_certs'] = Config().validate_certs + main_vars['wait_for_pods'] = Config().wait_for_pods + main_vars['full_download'] = Config().full_download # Consider to move this to the provider level. if self.cluster_model.provider != 'any': diff --git a/cli/src/commands/Apply.py b/cli/src/commands/Apply.py index 49b5466ee4..b29d46867f 100644 --- a/cli/src/commands/Apply.py +++ b/cli/src/commands/Apply.py @@ -1,6 +1,9 @@ import os import sys +from pathlib import Path +from typing import Dict +from cli.src.Config import Config from cli.src.ansible.AnsibleRunner import AnsibleRunner from cli.src.helpers.build_io import (get_build_path, get_inventory_path, get_manifest_path, load_inventory, @@ -42,14 +45,8 @@ def __init__(self, input_data): self.infrastructure_docs = [] self.all_docs = [] - - def __enter__(self): - return self - - - def __exit__(self, exc_type, exc_value, traceback): - pass - + Config().full_download = input_data.full_download + Config().input_manifest_path = Path(self.file) def load_documents(self): # Load the input docs from the input @@ -213,23 +210,26 @@ def assert_no_master_downscale(self): raise Exception("ControlPlane downscale is not supported yet. Please revert your 'kubernetes_master' count to previous value or increase it to scale up Kubernetes.") + def __load_configuration_doc(self, kind: str) -> Dict: + doc = select_first(self.input_docs, lambda x: x.kind == kind) + if not doc: + return load_schema_obj(schema_types.DEFAULT, 'common', kind) + + with DefaultMerger([doc]) as doc_merger: + return doc_merger.run()[0] + def assert_no_postgres_nodes_number_change(self): - feature_mapping = select_first(self.input_docs, lambda x: x.kind == 'configuration/feature-mapping') - if feature_mapping: - with DefaultMerger([feature_mapping]) as doc_merger: - feature_mapping = doc_merger.run() - feature_mapping = feature_mapping[0] - else: - feature_mapping = load_schema_obj(schema_types.DEFAULT, 'common', 'configuration/feature-mapping') + feature_mappings = self.__load_configuration_doc('configuration/feature-mappings') + features = self.__load_configuration_doc('configuration/features') components = self.cluster_model.specification.components if self.inventory: next_postgres_node_count = 0 prev_postgres_node_count = len(self.inventory.list_hosts(pattern='postgresql')) - postgres_available = [x for x in feature_mapping.specification.available_roles if x.name == 'postgresql'] + postgres_available = [x for x in features.specification.features if x.name == 'postgresql'] if postgres_available[0].enabled: - for key, roles in feature_mapping.specification.roles_mapping.items(): - if ('postgresql') in roles and key in components: + for key, features in feature_mappings.specification.mappings.items(): + if ('postgresql') in features and key in components: next_postgres_node_count = next_postgres_node_count + components[key].count if prev_postgres_node_count > 0 and prev_postgres_node_count != next_postgres_node_count: diff --git a/cli/src/commands/Init.py b/cli/src/commands/Init.py index a69cefe5a5..f11db038aa 100644 --- a/cli/src/commands/Init.py +++ b/cli/src/commands/Init.py @@ -49,6 +49,9 @@ def init(self): config_docs = config_appender.run() docs = 
[*config_docs, *infra_docs] + else: + with ConfigurationAppender(docs) as config_appender: + config_appender.add_feature_mappings() # set the provider and version for all docs for doc in docs: diff --git a/cli/src/commands/Test.py b/cli/src/commands/Test.py index 9effc7f33d..de4dee6edf 100644 --- a/cli/src/commands/Test.py +++ b/cli/src/commands/Test.py @@ -1,17 +1,24 @@ import os -from cli.src.helpers.build_io import (ANSIBLE_INVENTORY_FILE, SPEC_OUTPUT_DIR, - load_manifest) +from cli.src.ansible.AnsibleCommand import AnsibleCommand +from cli.src.helpers.build_io import (SPEC_OUTPUT_DIR, + get_inventory_path_for_build, load_inventory, load_manifest) from cli.src.helpers.doc_list_helpers import select_single -from cli.src.spec.SpecCommand import SpecCommand +from cli.src.spec.SpecCommand import SPEC_TESTS_PATH, SpecCommand from cli.src.Step import Step class Test(Step): - def __init__(self, input_data): + def __init__(self, input_data, test_groups): super().__init__(__name__) self.build_directory = input_data.build_directory - self.group = input_data.group + self.inventory_path = get_inventory_path_for_build(self.build_directory) + self.excluded_groups = input_data.excluded_groups + self.included_groups = input_data.included_groups + self.kubeconfig_remote_path = input_data.kubeconfig_remote_path + self.all_groups = test_groups + self.available_groups = self.__get_available_test_groups() + self.selected_groups = self.__get_selected_test_groups() def __enter__(self): super().__enter__() @@ -20,16 +27,75 @@ def __enter__(self): def __exit__(self, exc_type, exc_value, traceback): pass + def __get_inventory_groups(self, append_implicit: bool) -> list[str]: + """ + Get list of groups from Ansible inventory + + :param append_implicit: if True, `common` group, which is not present in inventory, is appended + """ + inventory_groups = load_inventory(self.inventory_path).list_groups() + if append_implicit: + inventory_groups.append('common') + return inventory_groups + + def __get_available_test_groups(self) -> list[str]: + """ + Get list of all test groups that can be run + """ + inventory_groups = self.__get_inventory_groups(True) + return [group for group in self.all_groups if group in inventory_groups] + + def __get_selected_test_groups(self) -> list[str]: + """ + Get list of test groups selected to be run + """ + selected_groups = ['all'] if 'all' in self.included_groups else self.included_groups + + # exclude test groups + if self.excluded_groups: + included_groups = self.available_groups if 'all' in self.included_groups else self.included_groups + selected_groups = [group for group in included_groups if group not in self.excluded_groups] + + return selected_groups + + def __is_env_preparation_needed(self) -> bool: + """ + Check whether additional actions are needed in order to run selected test groups + """ + if self.kubeconfig_remote_path: + kubectl_groups = ['applications', 'kubernetes_master'] + if any(group in kubectl_groups for group in self.selected_groups): + return True + if 'all' in self.selected_groups and any(group in kubectl_groups for group in self.available_groups): + return True + + return False + + def __prepare_env(self): + if self.__is_env_preparation_needed(): + playbook_path = str(SPEC_TESTS_PATH) + '/pre_run/ansible/kubernetes_master/copy-kubeconfig.yml' + ansible_command = AnsibleCommand() + ansible_command.run_playbook(inventory=self.inventory_path, + playbook_path=playbook_path, + extra_vars=[f'kubeconfig_remote_path={self.kubeconfig_remote_path}']) + + def 
__clean_up_env(self): + if self.__is_env_preparation_needed(): + playbook_path = str(SPEC_TESTS_PATH) + '/post_run/ansible/kubernetes_master/undo-copy-kubeconfig.yml' + ansible_command = AnsibleCommand() + ansible_command.run_playbook(inventory=self.inventory_path, playbook_path=playbook_path) + def test(self): + """ + Run spec tests for selected groups + """ + if not self.selected_groups: + raise Exception('No test group specified to run') + # get manifest documents docs = load_manifest(self.build_directory) cluster_model = select_single(docs, lambda x: x.kind == 'epiphany-cluster') - # get inventory - path_to_inventory = os.path.join(self.build_directory, ANSIBLE_INVENTORY_FILE) - if not os.path.isfile(path_to_inventory): - raise Exception(f'No "{ANSIBLE_INVENTORY_FILE}" inside the build directory: "{self.build_directory}"') - # get admin user admin_user = cluster_model.specification.admin_user if not os.path.isfile(admin_user.key_path): @@ -40,8 +106,15 @@ def test(self): if not os.path.exists(spec_output): os.makedirs(spec_output) - # run the spec tests + if 'all' not in self.selected_groups: + self.logger.info(f'Selected test groups: {", ".join(self.selected_groups)}') + + self.__prepare_env() + + # run tests spec_command = SpecCommand() - spec_command.run(spec_output, path_to_inventory, admin_user.name, admin_user.key_path, self.group) + spec_command.run(spec_output, self.inventory_path, admin_user.name, admin_user.key_path, self.selected_groups) + + self.__clean_up_env() return 0 diff --git a/cli/src/commands/Upgrade.py b/cli/src/commands/Upgrade.py index b8a685493c..585f1a1d02 100644 --- a/cli/src/commands/Upgrade.py +++ b/cli/src/commands/Upgrade.py @@ -1,7 +1,9 @@ import os import re import time +from pathlib import Path +from cli.src.Config import Config from cli.src.ansible.AnsibleCommand import AnsibleCommand from cli.src.ansible.AnsibleRunner import AnsibleRunner from cli.src.helpers.build_io import copy_files_recursively @@ -23,6 +25,9 @@ def __init__(self, input_data): self.input_docs = [] self.ping_retries: int = input_data.ping_retries + Config().full_download = input_data.full_download + Config().input_manifest_path = Path(self.file) if self.file else None + def __enter__(self): super().__enter__() return self diff --git a/cli/src/helpers/argparse_helpers.py b/cli/src/helpers/argparse_helpers.py new file mode 100644 index 0000000000..80eb1199a0 --- /dev/null +++ b/cli/src/helpers/argparse_helpers.py @@ -0,0 +1,14 @@ +import argparse + + +# Used by multiple epicli parsers +def comma_separated_type(choices): + """Return a function that splits and checks comma-separated values.""" + def split_arg(arg): + values = arg.replace(' ', '').lower().split(',') + for value in values: + if value not in choices: + raise argparse.ArgumentTypeError( + f'invalid choice: {value!r} (choose from {", ".join([repr(choice) for choice in choices])})') + return values + return split_arg diff --git a/cli/src/providers/aws/APIProxy.py b/cli/src/providers/aws/APIProxy.py index 39fd43b374..92848fd24c 100644 --- a/cli/src/providers/aws/APIProxy.py +++ b/cli/src/providers/aws/APIProxy.py @@ -10,8 +10,10 @@ def __init__(self, cluster_model, config_docs=[]): self.cluster_model = cluster_model self.config_docs = config_docs credentials = self.cluster_model.specification.cloud.credentials - self.session = boto3.session.Session(aws_access_key_id=credentials.key, - aws_secret_access_key=credentials.secret, + session_token = credentials.session_token if credentials.session_token else None + self.session = 
boto3.session.Session(aws_access_key_id=credentials.access_key_id, + aws_secret_access_key=credentials.secret_access_key, + aws_session_token=session_token, region_name=self.cluster_model.specification.cloud.region) def __enter__(self): @@ -61,7 +63,7 @@ def get_ips_for_feature(self, component_key): def login(self, env=None): # Pass to match the interface of the 'aws' provider APIProxy. For 'was' provider we already login with - # key and secret when we create the BOTO3 session. + # key_id and secret when we create the BOTO3 session. pass def get_image_id(self, os_full_name): diff --git a/cli/src/providers/azure/InfrastructureBuilder.py b/cli/src/providers/azure/InfrastructureBuilder.py index 5f9e35e896..0df01f7e60 100644 --- a/cli/src/providers/azure/InfrastructureBuilder.py +++ b/cli/src/providers/azure/InfrastructureBuilder.py @@ -79,6 +79,7 @@ def run(self): item.specification.address_prefix == subnet_definition['address_pool']) if subnet is None: + subnet_nsg_association_name = '' subnet = self.get_subnet(subnet_definition, component_key, 0) infrastructure.append(subnet) @@ -93,6 +94,7 @@ def run(self): nsg.specification.name, 0) infrastructure.append(subnet_nsg_association) + subnet_nsg_association_name = subnet_nsg_association.specification.name availability_set = None if 'availability_set' in component_value: @@ -118,7 +120,7 @@ def run(self): vm_config, subnet.specification.name, public_ip_name, - subnet_nsg_association.specification.name, + subnet_nsg_association_name, index) infrastructure.append(network_interface) diff --git a/cli/src/schema/ConfigurationAppender.py b/cli/src/schema/ConfigurationAppender.py index 40f00f76c2..d0d1de0bc6 100644 --- a/cli/src/schema/ConfigurationAppender.py +++ b/cli/src/schema/ConfigurationAppender.py @@ -1,3 +1,5 @@ +from typing import Callable, Dict, List + from cli.src.helpers.config_merger import merge_with_defaults from cli.src.helpers.data_loader import load_schema_obj, schema_types from cli.src.helpers.doc_list_helpers import select_first, select_single @@ -6,45 +8,60 @@ class ConfigurationAppender(Step): - REQUIRED_DOCS = ['epiphany-cluster', 'configuration/feature-mapping', 'configuration/shared-config'] + REQUIRED_DOCS = ['epiphany-cluster', + 'configuration/features', + 'configuration/feature-mappings', + 'configuration/shared-config'] def __init__(self, input_docs): super().__init__(__name__) - self.cluster_model = select_single(input_docs, lambda x: x.kind == 'epiphany-cluster') - self.input_docs = input_docs + self.__cluster_model: Dict = select_single(input_docs, lambda x: x.kind == 'epiphany-cluster') + self.__input_docs: List[Dict] = input_docs - def run(self): - configuration_docs = [] + def __append_config(self, config_docs: List[Dict], document: Dict): + document['version'] = VERSION + config_docs.append(document) + + def __add_doc(self, config_docs: List[Dict], document_kind: str): + doc = select_first(self.__input_docs, lambda x, kind=document_kind: x.kind == kind) + if doc is None: + doc = load_schema_obj(schema_types.DEFAULT, 'common', document_kind) + self.logger.info(f'Adding: {doc.kind}') + + self.__append_config(config_docs, doc) + + def __feature_selector(self, feature_key: str, config_selector: str) -> Callable: + return lambda x, key=feature_key, selector=config_selector: x.kind == f'configuration/{key}' and x.name == selector - def append_config(doc): - doc['version'] = VERSION - configuration_docs.append(doc) + def add_feature_mappings(self): + feature_mappings: List[Dict] = [] + self.__add_doc(feature_mappings, 
'configuration/feature-mappings')
+
+        if feature_mappings is not None:
+            self.__input_docs.append(feature_mappings[0])
+
+    def run(self):
+        configuration_docs: List[Dict] = []
         for document_kind in ConfigurationAppender.REQUIRED_DOCS:
-            doc = select_first(self.input_docs, lambda x: x.kind == document_kind)
-            if doc is None:
-                doc = load_schema_obj(schema_types.DEFAULT, 'common', document_kind)
-                self.logger.info("Adding: " + doc.kind)
-                append_config(doc)
-            else:
-                append_config(doc)
-
-        for component_key, component_value in self.cluster_model.specification.components.items():
+            self.__add_doc(configuration_docs, document_kind)
+
+        for component_key, component_value in self.__cluster_model.specification.components.items():
             if component_value.count < 1:
                 continue
-            features_map = select_first(configuration_docs, lambda x: x.kind == 'configuration/feature-mapping')
+            feature_mappings = select_first(configuration_docs, lambda x: x.kind == 'configuration/feature-mappings')
             config_selector = component_value.configuration
-            for feature_key in features_map.specification.roles_mapping[component_key]:
-                config = select_first(self.input_docs, lambda x: x.kind == 'configuration/' + feature_key and x.name == config_selector)
-                if config is not None:
-                    append_config(config)
-                if config is None:
-                    config = select_first(configuration_docs, lambda
-                                          x: x.kind == 'configuration/' + feature_key and x.name == config_selector)
-                    if config is None:
-                        config = merge_with_defaults('common', 'configuration/' + feature_key, config_selector, self.input_docs)
-                        self.logger.info("Adding: " + config.kind)
-                    append_config(config)
+            for feature_key in feature_mappings.specification.mappings[component_key]:
+                first_input_docs_config = select_first(self.__input_docs, self.__feature_selector(feature_key, config_selector))
+                if first_input_docs_config is not None:
+                    self.__append_config(configuration_docs, first_input_docs_config)
+                else:
+                    first_config = select_first(configuration_docs, self.__feature_selector(feature_key, config_selector))
+
+                    if first_config is None:
+                        merged_config = merge_with_defaults('common', f'configuration/{feature_key}', config_selector, self.__input_docs)
+                        self.logger.info(f'Adding: {merged_config.kind}')
+                        self.__append_config(configuration_docs, merged_config)

         return configuration_docs
diff --git a/cli/src/spec/SpecCommand.py b/cli/src/spec/SpecCommand.py
index 6a372aa672..bcc19fc90f 100644
--- a/cli/src/spec/SpecCommand.py
+++ b/cli/src/spec/SpecCommand.py
@@ -1,12 +1,14 @@
 import os
 import shutil
 import subprocess
+from pathlib import Path
 from subprocess import PIPE, Popen

+from cli.src.Config import Config
 from cli.src.helpers.data_loader import BASE_DIR
 from cli.src.Log import Log, LogPipe

-SPEC_TEST_PATH = BASE_DIR + '/tests/spec'
+SPEC_TESTS_PATH = Path(BASE_DIR).resolve() / 'tests' / 'spec'

 class SpecCommand:
     def __init__(self):
@@ -22,13 +24,13 @@ def check_dependencies(self):
         if shutil.which('ruby') is None or shutil.which('gem') is None:
             raise Exception(error_str)

-        p = subprocess.Popen(['gem', 'query', '--local'], stdout=PIPE)
+        p = subprocess.Popen(['gem', 'list', '--local'], stdout=PIPE)
         out, err = p.communicate()

         if all(n in out.decode('utf-8') for n in required_gems) is False:
             raise Exception(error_str)

-    def run(self, spec_output, inventory, user, key, group):
+    def run(self, spec_output, inventory, user, key, groups):
         self.check_dependencies()

         env = os.environ.copy()
@@ -37,12 +39,15 @@ def run(self, spec_output, inventory, user, key, group):
         env['user'] = user
         env['keypath'] = key
-        cmd = f'rake inventory={inventory} user={user} keypath={key} spec_output={spec_output} spec:{group}'
+        if not Config().no_color:
+            env['rspec_extra_opts'] = '--force-color'
+
+        cmd = f'rake inventory={inventory} user={user} keypath={key} spec_output={spec_output} spec:{" spec:".join(groups)}'

         self.logger.info(f'Running: "{cmd}"')

         logpipe = LogPipe(__name__)
-        with Popen(cmd.split(' '), cwd=SPEC_TEST_PATH, env=env, stdout=logpipe, stderr=logpipe) as sp:
+        with Popen(cmd.split(' '), cwd=SPEC_TESTS_PATH, env=env, stdout=logpipe, stderr=logpipe) as sp:
             logpipe.close()

         if sp.returncode != 0:
@@ -52,11 +57,9 @@ def run(self, spec_output, inventory, user, key, groups):

     @staticmethod
-    def get_spec_groups():
-        listdir = os.listdir(f'{SPEC_TEST_PATH}/spec')
-        groups = ['all']
-        for entry in listdir:
-            if os.path.isdir(f'{SPEC_TEST_PATH}/spec/{entry}'):
-                groups = groups + [entry]
-        sorted(groups, key=str.lower)
-        return groups
+    def get_spec_groups() -> list[str]:
+        """Get test groups based on directories."""
+        groups_path = SPEC_TESTS_PATH / 'spec'
+        groups = [str(item.name) for item in groups_path.iterdir() if item.is_dir()]
+
+        return sorted(groups)
diff --git a/cli/src/terraform/TerraformCommand.py b/cli/src/terraform/TerraformCommand.py
index c330f4f698..3cc943e952 100644
--- a/cli/src/terraform/TerraformCommand.py
+++ b/cli/src/terraform/TerraformCommand.py
@@ -45,7 +45,8 @@ def run(self, command, env, auto_approve=False, auto_retries=1):
         if auto_approve:
             cmd.append('-auto-approve')

-        cmd.append('-no-color')
+        if Config().no_color:
+            cmd.append('-no-color')

         cmd = ' '.join(cmd)
         self.logger.info(f'Running: "{cmd}"')
@@ -60,7 +61,7 @@ def run(self, command, env, auto_approve=False, auto_retries=1):
         with subprocess.Popen(cmd, stdout=logpipe, stderr=logpipe, env=env, shell=True) as sp:
             logpipe.close()
         retries = retries + 1
-        do_retry = next((True for s in logpipe.stderrstrings if 'RetryableError' in s), False)
+        do_retry = next((True for line in logpipe.output_error_lines if 'RetryableError' in line), False)

         if do_retry and retries <= auto_retries:
             self.logger.warning(f'Terraform failed with "RetryableError" error. Retry: {str(retries)}/{str(auto_retries)}')
diff --git a/docs/architecture/logical-view.md b/docs/architecture/logical-view.md
index 47d9acde34..ab3a65c922 100644
--- a/docs/architecture/logical-view.md
+++ b/docs/architecture/logical-view.md
@@ -51,14 +51,14 @@ Source | Purpose
 /var/log/zookeeper/version-2/* | Zookeeper's logs
 Containers | Kubernetes components that run in a container

-`Filebeat`, unlike `Grafana`, pushes data to database (`Elasticsearch`) instead of pulling them.
+`Filebeat`, unlike `Grafana`, pushes data to the database (`OpenSearch`) instead of pulling it.

 [Read more](https://www.elastic.co/products/beats/filebeat) about `Filebeat`.

-### Elasticsearch
+### OpenSearch

-`Elasticsearch` is highly scalable and full-text search enabled analytics engine. Epiphany Platform uses it for storage and analysis of logs.
+`OpenSearch` is a highly scalable, full-text-search-enabled analytics engine. Epiphany Platform uses it for storage and analysis of logs.

-[Read more](https://www.elastic.co/guide/en/elasticsearch/reference/7.x/index.html)
+[Read more](https://opensearch.org/docs/latest)

 ### Elasticsearch Curator

@@ -66,11 +66,11 @@ Containers | Kubernetes components that run in a container

 [Read more](https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/index.html)

-### Kibana
+### OpenSearch Dashboards

-`Kibana` like `Grafana` is used in Epiphany for visualization, in addition it has full text search capabilities. `Kibana` uses `Elasticsearch` as datasource for logs, it allows to create full text queries, dashboards and analytics that are performed on logs.
+`OpenSearch Dashboards`, like `Grafana`, is used in Epiphany for visualization. It uses `OpenSearch` as the datasource for logs and allows creating full-text queries, dashboards and analytics on top of them.

-[Read more](https://www.elastic.co/products/kibana)
+[Read more](https://opensearch.org/docs/latest/dashboards/index/)

 ## Computing
diff --git a/docs/architecture/process-view.md b/docs/architecture/process-view.md
index 366bb2ee83..a124c7fd16 100644
--- a/docs/architecture/process-view.md
+++ b/docs/architecture/process-view.md
@@ -24,8 +24,8 @@ metrics from different kinds of exporters.

 ## Logging

-Epiphany uses `Elasticsearch` as key-value database with `Filebeat` for gathering logs and `Kibana` as user interface to write queries and analyze logs.
+Epiphany uses `OpenSearch` as a key-value database with `Filebeat` for gathering logs and `OpenSearch Dashboards` as the user interface to write queries and analyze logs.

 ![Logging process view](diagrams/process-view/logging-process-view.svg)

-`Filebeat` gathers OS and application logs and ships them to `Elasticsearch`. Queries from `Kibana` are run against `Elasticsearch` key-value database.
+`Filebeat` gathers OS and application logs and ships them to `OpenSearch`. Queries from `OpenSearch Dashboards` are run against the `OpenSearch` key-value database.
\ No newline at end of file
diff --git a/docs/assets/images/lifecycle.png b/docs/assets/images/lifecycle.png
deleted file mode 100644
index 54e9c87f0b..0000000000
--- a/docs/assets/images/lifecycle.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:76fdaca8ec741b24864480886f7c929cd17f928f7e91d9abf23e8aadec1273b3
-size 12045
diff --git a/docs/changelogs/CHANGELOG-0.5.md b/docs/changelogs/CHANGELOG-0.5.md
index 9f1a8f9e36..9acb3929a2 100644
--- a/docs/changelogs/CHANGELOG-0.5.md
+++ b/docs/changelogs/CHANGELOG-0.5.md
@@ -82,7 +82,7 @@
 - [#381](https://github.com/epiphany-platform/epiphany/issues/381) - Add AWS EC2 Root Volume encryption
 - [#782](https://github.com/epiphany-platform/epiphany/issues/781) - All disks encryption documentation - AWS
 - [#782](https://github.com/epiphany-platform/epiphany/issues/782) - All disks encryption documentation - Azure
-- [#784](https://github.com/epiphany-platform/epiphany/issues/784) - Switch to Open Distro for Elasticsearch
+- [#784](https://github.com/epiphany-platform/epiphany/issues/784) - Switch to Open Distro for ElasticSearch
   - [Data storage](/docs/home/howto/DATABASES.md#how-to-start-working-with-opendistro-for-elasticsearch)
   - [Centralized logging](/docs/home/howto/LOGGING.md#centralized-logging-setup)
diff --git a/docs/changelogs/CHANGELOG-2.0.md b/docs/changelogs/CHANGELOG-2.0.md
index e47bac6ce5..003a6e4dc9 100644
--- a/docs/changelogs/CHANGELOG-2.0.md
+++ b/docs/changelogs/CHANGELOG-2.0.md
@@ -1,5 +1,58 @@
+
 # Changelog 2.0

+## [2.0.1] YYYY-MM-DD
+
+### Added
+
+- [#2932](https://github.com/epiphany-platform/epiphany/issues/2932) - Support `epicli upgrade` for RHEL/AlmaLinux 8
+- [#3057](https://github.com/epiphany-platform/epiphany/issues/3057) - Additional AWS authentication option
+- [#3101](https://github.com/epiphany-platform/epiphany/issues/3101) - Add support for ARM architecture for AlmaLinux 8
+- [#3105](https://github.com/epiphany-platform/epiphany/issues/3105) - Add manifest file parsing
+- [#3131](https://github.com/epiphany-platform/epiphany/issues/3131) - Optimize Grafana dashboards downloading
+- [#3116](https://github.com/epiphany-platform/epiphany/issues/3116) - Optimize files downloading
+- [#3106](https://github.com/epiphany-platform/epiphany/issues/3106) - Add image-registry configuration reading
+- [#3140](https://github.com/epiphany-platform/epiphany/issues/3140) - Allow to disable OpenSearch audit logs
+- [#3218](https://github.com/epiphany-platform/epiphany/issues/3218) - Add support for original output coloring
+- [#3079](https://github.com/epiphany-platform/epiphany/issues/3079) - OpenSearch improvement - add dedicated user for Filebeat
+- [#3207](https://github.com/epiphany-platform/epiphany/issues/3207) - Add filtering mechanism for the sensitive data
+
+### Fixed
+
+- [#3153](https://github.com/epiphany-platform/epiphany/issues/3153) - AlmaLinux 8.5 installation fails resolving dependencies
+- [#3164](https://github.com/epiphany-platform/epiphany/issues/3164) - Specify version and allow containerd.io package downgrade in haproxy_runc role
+- [#3179](https://github.com/epiphany-platform/epiphany/issues/3179) - terraform fails when `use_network_security_groups` is set to `false`
+- [#3165](https://github.com/epiphany-platform/epiphany/issues/3165) - download-requirements.py may fail due to expired certificate
+- [#3189](https://github.com/epiphany-platform/epiphany/issues/3189) - Fix configuration/feature-mapping enabling
+- [#3152](https://github.com/epiphany-platform/epiphany/issues/3152) - Use a stable tag for the quay.io/ceph/ceph:v16.2.7 image
+- [#3209](https://github.com/epiphany-platform/epiphany/issues/3209) - [Ubuntu] download-requirements.py ignores package version when resolving dependencies
+- [#3231](https://github.com/epiphany-platform/epiphany/issues/3231) - epicli may fail on "Download image haproxy-2.2.2-alpine.tar" task
+- [#3210](https://github.com/epiphany-platform/epiphany/issues/3210) - [Ubuntu] download-requirements.py downloads redundant package dependencies
+- [#3190](https://github.com/epiphany-platform/epiphany/issues/3190) - Enable configuration of kubelet enable-controller-attach-detach argument via input manifest
+
+### Updated
+
+- [#3080](https://github.com/epiphany-platform/epiphany/issues/3080) - update Filebeat to the latest compatible version with OpenSearch
+- [#2982](https://github.com/epiphany-platform/epiphany/issues/2982) - Using AKS and EKS Terraform configuration directly with Epiphany.
+- [#2870](https://github.com/epiphany-platform/epiphany/issues/2870) - OpenDistro for ElasticSearch replaced by OpenSearch
+- [#3163](https://github.com/epiphany-platform/epiphany/issues/3163) - Upgrade Python dependencies
+- [#3097](https://github.com/epiphany-platform/epiphany/issues/3097) - Split available_roles and roles_mapping into separate yaml documents
+- [#3229](https://github.com/epiphany-platform/epiphany/issues/3229) - Update crane to v0.11.0
+
+### Deprecated
+
+- Support for Modules:
+  [Azure Basic Infrastructure](https://github.com/epiphany-platform/m-azure-basic-infrastructure) (AzBI) module
+  [Azure AKS](https://github.com/epiphany-platform/m-azure-kubernetes-service) (AzKS) module
+  [AWS Basic Infrastructure](https://github.com/epiphany-platform/m-aws-basic-infrastructure) (AwsBI) module
+  [AWS EKS](https://github.com/epiphany-platform/m-aws-kubernetes-service) (AwsKS) module
+
+### Breaking changes
+
+- Schema `configuration/feature-mapping` changed. The document was split into two separate docs: `configuration/features` and `configuration/feature-mappings`.
+
+- AWS credentials configuration parameters are renamed from `specification.cloud.credentials.key` and `specification.cloud.credentials.secret` to `specification.cloud.credentials.access_key_id` and `specification.cloud.credentials.secret_access_key`.
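To make the two breaking changes above concrete, here is a minimal before/after sketch. The document kinds and field names come from the entries above; the specific feature and mapping entries are placeholders, not the full default set:

```yaml
# Before (2.0.0): a single document holding both concerns
kind: configuration/feature-mapping
specification:
  roles_mapping:
    kubernetes_master:
      - repository
---
# After (2.0.1): feature on/off switches...
kind: configuration/features
specification:
  features:
    - name: repository
      enabled: yes
---
# ...and component-to-feature mappings live in separate documents
kind: configuration/feature-mappings
specification:
  mappings:
    kubernetes_master:
      - repository
---
# Renamed AWS credential parameters
kind: epiphany-cluster
specification:
  cloud:
    credentials:
      access_key_id: xxxx      # previously: key
      secret_access_key: xxxx  # previously: secret
```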
+
 ## [2.0.0] 2022-05-09

 ### Added
@@ -54,7 +107,7 @@
 - [#2803](https://github.com/epiphany-platform/epiphany/issues/2803) - Refactor: rename 'kafka_var' setting
 - [#2995](https://github.com/epiphany-platform/epiphany/issues/2995) - Update expired RHUI client certificate before installing any RHEL packages
 - [#3049](https://github.com/epiphany-platform/epiphany/issues/3049) - HAProxy upgrade fails on re-run trying to remove haproxy_exporter
-- [#3006](https://github.com/epiphany-platform/epiphany/issues/3006) - install 'containerd.io=1.4.12-*' failed, when upgrade from v1.3.0 to 2.0.0dev
+- [#3006](https://github.com/epiphany-platform/epiphany/issues/3006) - install `containerd.io=1.4.12-*` failed, when upgrade from v1.3.0 to 2.0.0dev
 - [#3065](https://github.com/epiphany-platform/epiphany/issues/3065) - Flag `delete_os_disk_on_termination` has no effect when removing cluster

 ### Updated
diff --git a/docs/home/ARM.md b/docs/home/ARM.md
index 1fc7b36ad1..c0b6d15bf8 100644
--- a/docs/home/ARM.md
+++ b/docs/home/ARM.md
@@ -1,8 +1,4 @@
 # ARM

-### **NOTE**
----------------
-ARM Architecture is not supported in Epiphany 2.0.0. This feature will be added to platform in version 2.0.1
--------------

 From Epiphany v1.1.0 preliminary support for the ```arm64``` architecture was added. As the ```arm64``` architecture is relatively new to the datacenter at the time of writing only a subset of providers, operating systems, components and applications are supported. Support will be extended in the future when there is a need for it.
@@ -14,7 +10,7 @@ Besides making sure that the selected providers, operating systems, components a

 ### Providers

-| Provider | CentOS 7.x | RedHat 7.x | Ubuntu 18.04 |
+| Provider | AlmaLinux 8.4 | RedHat 8.x | Ubuntu 20.04 |
 | - | - | - | - |
 | Any | :heavy_check_mark: | :x: | :x: |
 | AWS | :heavy_check_mark: | :x: | :x: |
@@ -22,7 +18,7 @@ Besides making sure that the selected providers, operating systems, components a

 ### Components

-| Component | CentOS 7.x | RedHat 7.x | Ubuntu 18.04 |
+| Component | AlmaLinux 8.4 | RedHat 8.x | Ubuntu 20.04 |
 | - | - | - | - |
 | repository | :heavy_check_mark: | :x: | :x: |
 | kubernetes_master | :heavy_check_mark: | :x: | :x: |
@@ -33,11 +29,12 @@ Besides making sure that the selected providers, operating systems, components a
 | monitoring | :heavy_check_mark: | :x: | :x: |
 | load_balancer | :heavy_check_mark: | :x: | :x: |
 | postgresql | :heavy_check_mark: | :x: | :x: |
-| opendistro_for_elasticsearch | :heavy_check_mark: | :x: | :x: |
+| opensearch | :heavy_check_mark: | :x: | :x: |
 | single_machine | :heavy_check_mark: | :x: | :x: |

 ***Notes***

+- ```Rook/Ceph Cluster Storage``` is not supported on ```arm64``` and needs to be disabled in feature-mapping (see the sketch after these notes).
 - For the ```postgresql``` component the ```pgpool``` and ```pgbouncer``` extensions for load-balancing and replication are not yet supported on ```arm64```. These should be disabled in the ```postgressql``` and ```applications``` configurations.
 - While not defined in any of the component configurations, the ```elasticsearch_curator``` role is currently not supported on ```arm64``` and should be removed from the ```feature-mapping``` configuration if defined.
 - If you want to download ```arm64``` requirements from an ```x86_64``` machine, you can try to use a container as described [here](./howto/CLUSTER.md#downloading-offline-requirements-with-a-docker-container).
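As referenced in the first note above, here is a minimal sketch of what disabling Rook for an ```arm64``` cluster could look like. The feature name `rook` and the `enabled` switch are assumptions based on the `configuration/features` schema introduced by this patch; verify both against the output of `epicli init ... --full`:

```yaml
kind: configuration/features
title: "Features to be enabled/disabled"
name: default
specification:
  features:
    # ... other features ...
    - name: rook      # assumed feature name; Rook/Ceph is not supported on arm64
      enabled: no
```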
@@ -67,7 +64,7 @@ provider: any
 title: Epiphany cluster Config
 specification:
   prefix: arm
-  name: centos
+  name: almalinux
   admin_user:
     key_path: /shared/ssh/id_rsa
     name: admin
@@ -96,9 +93,9 @@ specification:
     rabbitmq:
       count: 2
       machine: rabbitmq-machine-arm
-    opendistro_for_elasticsearch:
+    opensearch:
       count: 1
-      machine: opendistro-machine-arm
+      machine: opensearch-machine-arm
     repository:
       count: 1
       machine: repository-machine-arm
@@ -168,7 +165,7 @@ specification:
       ip: x.x.x.x
 ---
 kind: infrastructure/virtual-machine
-name: opendistro-machine-arm
+name: opensearch-machine-arm
 provider: any
 based_on: logging-machine
 specification:
@@ -210,7 +207,7 @@ name: default
 specification:
   applications:
   - name: auth-service # requires PostgreSQL to be installed in cluster
-    enabled: yes
+    enabled: true
     image_path: epiphanyplatform/keycloak:9.0.0
     use_local_image_registry: true
     #image_pull_secret_name: regcred
@@ -227,7 +224,7 @@ specification:
       user: auth-db-user
       password: PASSWORD_TO_CHANGE
   - name: rabbitmq
-    enabled: yes
+    enabled: true
    image_path: rabbitmq:3.8.9
     use_local_image_registry: true
     #image_pull_secret_name: regcred # optional
@@ -258,7 +255,7 @@ specification:

 ### ```AWS``` provider

 - Important is to specify the correct ```arm64``` machine type for component which can be found [here](https://aws.amazon.com/ec2/instance-types/a1/).
-- Important is to specify the correct ```arm64``` OS image which currently is only ```CentOS 7.9.2009 aarch64```.
+- Important is to specify the correct ```arm64``` OS image, which currently must be ```AlmaLinux OS 8.4.20211015 aarch64``` or newer.

 ```yaml
 ---
@@ -268,14 +265,14 @@ provider: aws
 title: Epiphany cluster Config
 specification:
   prefix: arm
-  name: centos
+  name: almalinux
   admin_user:
     key_path: /shared/ssh/testenvs/id_rsa
-    name: centos
+    name: ec2-user
   cloud:
     credentials:
-      key: xxxx
-      secret: xxxx
+      access_key_id: xxxx
+      secret_access_key: xxxx
     region: eu-west-1
     use_public_ips: true
   components:
@@ -319,9 +316,9 @@ specification:
       machine: rabbitmq-machine-arm
       subnets:
       - address_pool: 10.1.8.0/24
-    opendistro_for_elasticsearch:
+    opensearch:
       count: 1
-      machine: opendistro-machine-arm
+      machine: opensearch-machine-arm
       subnets:
       - address_pool: 10.1.10.0/24
     repository:
@@ -335,7 +332,7 @@ title: "Virtual Machine Infra"
 provider: aws
 name: default
 specification:
-  os_full_name: CentOS 7.9.2009 aarch64
+  os_full_name: AlmaLinux OS 8.4.20211015 aarch64
 ---
 kind: infrastructure/virtual-machine
 name: kafka-machine-arm
@@ -394,7 +391,7 @@ specification:
     size: a1.medium
 ---
 kind: infrastructure/virtual-machine
-name: opendistro-machine-arm
+name: opensearch-machine-arm
 provider: aws
 based_on: logging-machine
 specification:
diff --git a/docs/home/COMPONENTS.md b/docs/home/COMPONENTS.md
index 4f0edec445..a5e7693f35 100644
--- a/docs/home/COMPONENTS.md
+++ b/docs/home/COMPONENTS.md
@@ -10,21 +10,17 @@ Note that versions are default versions and can be changed in certain cases thro
 | Kubernetes Dashboard | 2.3.1 | https://github.com/kubernetes/dashboard | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | Kubernetes metrics-scraper | 1.0.7 | https://github.com/kubernetes-sigs/dashboard-metrics-scraper | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
 | containerd | 1.5.11 | https://github.com/containerd/containerd | [Apache License 2.0](https://github.com/containerd/containerd/blob/main/LICENSE) |
-| Calico | 3.20.3 | https://github.com/projectcalico/calico | [Apache License
2.0](https://www.apache.org/licenses/LICENSE-2.0) | +| Calico | 3.23.3 | https://github.com/projectcalico/calico | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Flannel | 0.14.0 | https://github.com/coreos/flannel/ | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | -| Canal | 3.20.3 | https://github.com/projectcalico/calico | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | +| Canal | 3.23.3 | https://github.com/projectcalico/calico | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Coredns | 1.8.4 | https://github.com/coredns/coredns | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Kafka | 2.8.1 | https://github.com/apache/kafka | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Zookeeper | 3.5.8 | https://github.com/apache/zookeeper | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | RabbitMQ | 3.8.9 | https://github.com/rabbitmq/rabbitmq-server | [Mozilla Public License](https://www.mozilla.org/en-US/MPL/) | | Docker CE | 20.10.8 | https://docs.docker.com/engine/release-notes/ | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | KeyCloak | 14.0.0 | https://github.com/keycloak/keycloak | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | -| Elasticsearch OSS | 7.10.2 | https://github.com/elastic/elasticsearch | https://github.com/elastic/elasticsearch/blob/master/LICENSE.txt | -| Elasticsearch Curator OSS | 5.8.3 | https://github.com/elastic/curator | https://github.com/elastic/curator/blob/master/LICENSE.txt | -| Opendistro for Elasticsearch | 1.13.x | https://opendistro.github.io/for-elasticsearch/ | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | -| Opendistro for Elasticsearch Kibana | 1.13.1 | https://opendistro.github.io/for-elasticsearch-docs/docs/kibana/ | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | -| Filebeat | 7.9.2 | https://github.com/elastic/beats | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | -| Filebeat Helm Chart | 7.9.2 | https://github.com/elastic/helm-charts | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | +| Filebeat | 7.12.1 | https://github.com/elastic/beats | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | +| Filebeat Helm Chart | 7.12.1 | https://github.com/elastic/helm-charts | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Prometheus | 2.31.1 | https://github.com/prometheus/prometheus | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Grafana | 8.3.2 | https://github.com/grafana/grafana | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | | Node Exporter | 1.3.1 | https://github.com/prometheus/node_exporter | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) | @@ -47,15 +43,17 @@ Note that versions are default versions and can be changed in certain cases thro | ------------------------- | ------- | ----------------------------------------------------- | ----------------------------------------------------------------- | | Terraform | 1.1.3 | https://www.terraform.io/ | [Mozilla Public License 2.0](https://github.com/hashicorp/terraform/blob/master/LICENSE) | | Terraform AzureRM provider | 2.91.0 | 
https://github.com/terraform-providers/terraform-provider-azurerm | [Mozilla Public License 2.0](https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/LICENSE) | -| Terraform AWS provider | 3.71.0 | https://github.com/terraform-providers/terraform-provider-aws | [Mozilla Public License 2.0](https://github.com/terraform-providers/terraform-provider-aws/blob/master/LICENSE) | -| Crane | 0.4.1 | https://github.com/google/go-containerregistry/tree/main/cmd/crane | [Apache License 2.0](https://github.com/google/go-containerregistry/blob/main/LICENSE) | +| Terraform AWS provider | 3.71.0 | https://github.com/terraform-providers/terraform-provider-aws | [Mozilla Public License 2.0](https://github.com/terraform-providers/terraform-provider-aws/blob/master/LICENSE) | +| Crane | 0.11.0 | https://github.com/google/go-containerregistry/tree/main/cmd/crane | [Apache License 2.0](https://github.com/google/go-containerregistry/blob/main/LICENSE) | +| Git | latest | https://github.com/git/git | [GNU GENERAL PUBLIC LICENSE Version 2](https://github.com/git/git/blob/master/COPYING) | +| aws-cli | 2.0.30 | https://github.com/aws/aws-cli | [Apache License 2.0](https://github.com/aws/aws-cli/blob/develop/LICENSE.txt) | ## Epicli Python dependencies | Component | Version | Repo/Website | License | | --------- | ------- | ------------ | ------- | | adal | 1.2.7 | https://github.com/AzureAD/azure-activedirectory-library-for-python | [Other](https://api.github.com/repos/azuread/azure-activedirectory-library-for-python/license) | -| ansible-core | 2.12.1 | https://ansible.com/ | GPLv3+ | +| ansible-core | 2.12.6 | https://ansible.com/ | GPLv3+ | | ansible | 5.2.0 | https://ansible.com/ | GPLv3+ | | antlr4-python3-runtime | 4.7.2 | http://www.antlr.org | BSD | | applicationinsights | 0.11.10 | https://github.com/Microsoft/ApplicationInsights-Python | [MIT License](https://api.github.com/repos/microsoft/applicationinsights-python/license) | @@ -66,12 +64,12 @@ Note that versions are default versions and can be changed in certain cases thro | azure-cli-core | 2.32.0 | https://github.com/Azure/azure-cli | [MIT License](https://api.github.com/repos/azure/azure-cli/license) | | azure-cli-telemetry | 1.0.6 | https://github.com/Azure/azure-cli | [MIT License](https://api.github.com/repos/azure/azure-cli/license) | | azure-cli | 2.32.0 | https://github.com/Azure/azure-cli | [MIT License](https://api.github.com/repos/azure/azure-cli/license) | -| azure-common | 1.1.27 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | -| azure-core | 1.21.1 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core | MIT License | +| azure-common | 1.1.28 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | +| azure-core | 1.24.0 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core | MIT License | | azure-cosmos | 3.2.0 | https://github.com/Azure/azure-documentdb-python | [MIT License](https://api.github.com/repos/azure/azure-documentdb-python/license) | -| azure-datalake-store | 0.0.52 | https://github.com/Azure/azure-data-lake-store-python | [Other](https://api.github.com/repos/azure/azure-data-lake-store-python/license) 
| +| azure-datalake-store | 0.0.52 | https://github.com/Azure/azure-data-lake-store-python | [MIT License](https://api.github.com/repos/azure/azure-data-lake-store-python/license) | | azure-graphrbac | 0.60.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | -| azure-identity | 1.7.1 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity | MIT License | +| azure-identity | 1.10.0 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity | MIT License | | azure-keyvault-administration | 4.0.0b3 | https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-administration | MIT License | | azure-keyvault-keys | 4.5.0b4 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/keyvault/azure-keyvault-keys | MIT License | | azure-keyvault | 1.1.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | @@ -93,7 +91,7 @@ Note that versions are default versions and can be changed in certain cases thro | azure-mgmt-containerregistry | 8.2.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-containerservice | 16.1.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-core | 1.3.0 | https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-mgmt-core | MIT License | -| azure-mgmt-cosmosdb | 7.0.0b2 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | +| azure-mgmt-cosmosdb | 7.0.0b6 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-databoxedge | 1.0.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-datalake-analytics | 0.2.1 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-datalake-nspkg | 3.0.1 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | @@ -141,9 +139,9 @@ Note that versions are default versions and can be changed in certain cases thro | azure-mgmt-servicelinker | 1.0.0b1 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-signalr | 1.0.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-sql | 3.0.1 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | -| azure-mgmt-sqlvirtualmachine | 1.0.0b1 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | +| azure-mgmt-sqlvirtualmachine | 1.0.0b2 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | 
azure-mgmt-storage | 19.0.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | -| azure-mgmt-synapse | 2.1.0b4 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | +| azure-mgmt-synapse | 2.1.0b5 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-trafficmanager | 0.51.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-mgmt-web | 4.0.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-multiapi-storage | 0.7.0 | https://github.com/Azure/azure-multiapi-storage-python | [MIT License](https://api.github.com/repos/azure/azure-multiapi-storage-python/license) | @@ -153,68 +151,69 @@ Note that versions are default versions and can be changed in certain cases thro | azure-synapse-artifacts | 0.10.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-synapse-managedprivateendpoints | 0.3.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | | azure-synapse-spark | 0.2.0 | https://github.com/Azure/azure-sdk-for-python | [MIT License](https://api.github.com/repos/azure/azure-sdk-for-python/license) | -| bcrypt | 3.2.0 | https://github.com/pyca/bcrypt/ | [Apache License 2.0](https://api.github.com/repos/pyca/bcrypt/license) | -| boto3 | 1.20.43 | https://github.com/boto/boto3 | [Apache License 2.0](https://api.github.com/repos/boto/boto3/license) | -| botocore | 1.23.43 | https://github.com/boto/botocore | [Apache License 2.0](https://api.github.com/repos/boto/botocore/license) | -| certifi | 2021.10.8 | https://certifiio.readthedocs.io/en/latest/ | MPL-2.0 | +| bcrypt | 3.2.2 | https://github.com/pyca/bcrypt/ | [Apache License 2.0](https://api.github.com/repos/pyca/bcrypt/license) | +| boto3 | 1.23.10 | https://github.com/boto/boto3 | [Apache License 2.0](https://api.github.com/repos/boto/boto3/license) | +| botocore | 1.26.10 | https://github.com/boto/botocore | [Apache License 2.0](https://api.github.com/repos/boto/botocore/license) | +| certifi | 2022.5.18.1 | https://github.com/certifi/python-certifi | [Other](https://api.github.com/repos/certifi/python-certifi/license) | | cffi | 1.15.0 | http://cffi.readthedocs.org | MIT | | chardet | 3.0.4 | https://github.com/chardet/chardet | [GNU Lesser General Public License v2.1](https://api.github.com/repos/chardet/chardet/license) | -| charset-normalizer | 2.0.10 | https://github.com/ousret/charset_normalizer | [MIT License](https://api.github.com/repos/ousret/charset_normalizer/license) | +| charset-normalizer | 2.0.12 | https://github.com/ousret/charset_normalizer | [MIT License](https://api.github.com/repos/ousret/charset_normalizer/license) | +| click | 8.1.3 | https://palletsprojects.com/p/click/ | BSD-3-Clause | | colorama | 0.4.4 | https://github.com/tartley/colorama | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/tartley/colorama/license) | -| cryptography | 36.0.1 | 
https://github.com/pyca/cryptography | [Other](https://api.github.com/repos/pyca/cryptography/license) | +| cryptography | 37.0.2 | https://github.com/pyca/cryptography | [Other](https://api.github.com/repos/pyca/cryptography/license) | | Deprecated | 1.2.13 | https://github.com/tantale/deprecated | [MIT License](https://api.github.com/repos/tantale/deprecated/license) | | Antergos Linux | 2015.10 (ISO-Rolling) | https://github.com/python-distro/distro | [Apache License 2.0](https://api.github.com/repos/python-distro/distro/license) | -| fabric | 2.6.0 | http://fabfile.org | BSD | +| fabric | 2.7.0 | https://fabfile.org | BSD | | humanfriendly | 10.0 | https://humanfriendly.readthedocs.io | MIT | | idna | 3.3 | https://github.com/kjd/idna | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/kjd/idna/license) | -| invoke | 1.6.0 | http://docs.pyinvoke.org | BSD | +| invoke | 1.7.1 | https://pyinvoke.org | BSD | | isodate | 0.6.1 | https://github.com/gweis/isodate/ | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/gweis/isodate/license) | | javaproperties | 0.5.2 | https://github.com/jwodder/javaproperties | [MIT License](https://api.github.com/repos/jwodder/javaproperties/license) | -| Jinja2 | 3.0.3 | https://palletsprojects.com/p/jinja/ | BSD-3-Clause | -| jmespath | 0.10.0 | https://github.com/jmespath/jmespath.py | [Other](https://api.github.com/repos/jmespath/jmespath.py/license) | +| Jinja2 | 3.1.2 | https://palletsprojects.com/p/jinja/ | BSD-3-Clause | +| jmespath | 1.0.0 | https://github.com/jmespath/jmespath.py | [Other](https://api.github.com/repos/jmespath/jmespath.py/license) | | jsondiff | 1.3.1 | https://github.com/ZoomerAnalytics/jsondiff | [MIT License](https://api.github.com/repos/zoomeranalytics/jsondiff/license) | -| jsonschema | 4.4.0 | https://github.com/Julian/jsonschema | [MIT License](https://api.github.com/repos/julian/jsonschema/license) | +| jsonschema | 4.5.1 | https://github.com/python-jsonschema/jsonschema | [MIT License](https://api.github.com/repos/python-jsonschema/jsonschema/license) | | knack | 0.9.0 | https://github.com/microsoft/knack | [MIT License](https://api.github.com/repos/microsoft/knack/license) | -| MarkupSafe | 2.0.1 | https://palletsprojects.com/p/markupsafe/ | BSD-3-Clause | +| MarkupSafe | 2.1.1 | https://palletsprojects.com/p/markupsafe/ | BSD-3-Clause | | msal-extensions | 0.3.1 | https://github.com/AzureAD/microsoft-authentication-extensions-for-python | MIT | -| msal | 1.16.0 | https://github.com/AzureAD/microsoft-authentication-library-for-python | [Other](https://api.github.com/repos/azuread/microsoft-authentication-library-for-python/license) | +| msal | 1.17.0 | https://github.com/AzureAD/microsoft-authentication-library-for-python | [Other](https://api.github.com/repos/azuread/microsoft-authentication-library-for-python/license) | | msrest | 0.6.21 | https://github.com/Azure/msrest-for-python | [MIT License](https://api.github.com/repos/azure/msrest-for-python/license) | | msrestazure | 0.6.4 | https://github.com/Azure/msrestazure-for-python | [MIT License](https://api.github.com/repos/azure/msrestazure-for-python/license) | -| oauthlib | 3.1.1 | https://github.com/oauthlib/oauthlib | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/oauthlib/oauthlib/license) | +| oauthlib | 3.2.0 | 
https://github.com/oauthlib/oauthlib | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/oauthlib/oauthlib/license) | | packaging | 20.9 | https://github.com/pypa/packaging | [Other](https://api.github.com/repos/pypa/packaging/license) | -| paramiko | 2.9.2 | https://paramiko.org | LGPL | -| pathlib2 | 2.3.6 | https://github.com/mcmtroffaes/pathlib2 | [MIT License](https://api.github.com/repos/mcmtroffaes/pathlib2/license) | +| paramiko | 2.11.0 | https://paramiko.org | LGPL | +| pathlib2 | 2.3.7.post1 | https://github.com/jazzband/pathlib2 | [MIT License](https://api.github.com/repos/jazzband/pathlib2/license) | | pkginfo | 1.8.2 | https://code.launchpad.net/~tseaver/pkginfo/trunk | MIT | | portalocker | 1.7.1 | https://github.com/WoLpH/portalocker | [Other](https://api.github.com/repos/wolph/portalocker/license) | -| psutil | 5.9.0 | https://github.com/giampaolo/psutil | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/giampaolo/psutil/license) | +| psutil | 5.9.1 | https://github.com/giampaolo/psutil | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/giampaolo/psutil/license) | | pycparser | 2.21 | https://github.com/eliben/pycparser | [Other](https://api.github.com/repos/eliben/pycparser/license) | | PyGithub | 1.55 | https://github.com/pygithub/pygithub | [GNU Lesser General Public License v3.0](https://api.github.com/repos/pygithub/pygithub/license) | -| Pygments | 2.11.2 | https://pygments.org/ | BSD License | -| PyJWT | 2.3.0 | https://github.com/jpadilla/pyjwt | [MIT License](https://api.github.com/repos/jpadilla/pyjwt/license) | +| Pygments | 2.12.0 | https://pygments.org/ | BSD License | +| PyJWT | 2.4.0 | https://github.com/jpadilla/pyjwt | [MIT License](https://api.github.com/repos/jpadilla/pyjwt/license) | | PyNaCl | 1.4.0 | https://github.com/pyca/pynacl/ | [Apache License 2.0](https://api.github.com/repos/pyca/pynacl/license) | -| pyOpenSSL | 21.0.0 | https://pyopenssl.org/ | Apache License, Version 2.0 | -| pyparsing | 3.0.7 | https://github.com/pyparsing/pyparsing/ | [MIT License](https://api.github.com/repos/pyparsing/pyparsing/license) | +| pyOpenSSL | 22.0.0 | https://pyopenssl.org/ | Apache License, Version 2.0 | +| pyparsing | 3.0.9 | https://github.com/pyparsing/pyparsing/ | [MIT License](https://api.github.com/repos/pyparsing/pyparsing/license) | | pyrsistent | 0.18.1 | http://github.com/tobgu/pyrsistent/ | [MIT License](https://api.github.com/repos/tobgu/pyrsistent/license) | | PySocks | 1.7.1 | https://github.com/Anorov/PySocks | [Other](https://api.github.com/repos/anorov/pysocks/license) | | python-dateutil | 2.8.2 | https://github.com/dateutil/dateutil | [Other](https://api.github.com/repos/dateutil/dateutil/license) | | python-json-logger | 2.0.2 | http://github.com/madzak/python-json-logger | [BSD 2-Clause "Simplified" License](https://api.github.com/repos/madzak/python-json-logger/license) | | PyYAML | 6.0 | https://pyyaml.org/ | MIT | -| requests-oauthlib | 1.3.0 | https://github.com/requests/requests-oauthlib | [ISC License](https://api.github.com/repos/requests/requests-oauthlib/license) | +| requests-oauthlib | 1.3.1 | https://github.com/requests/requests-oauthlib | [ISC License](https://api.github.com/repos/requests/requests-oauthlib/license) | | requests | 2.27.1 | https://requests.readthedocs.io | Apache 2.0 | | resolvelib | 
0.5.5 | https://github.com/sarugaku/resolvelib | [ISC License](https://api.github.com/repos/sarugaku/resolvelib/license) | | ruamel.yaml.clib | 0.2.6 | https://sourceforge.net/p/ruamel-yaml-clib/code/ci/default/tree | MIT | -| ruamel.yaml | 0.17.20 | https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree | MIT license | -| s3transfer | 0.5.0 | https://github.com/boto/s3transfer | [Apache License 2.0](https://api.github.com/repos/boto/s3transfer/license) | +| ruamel.yaml | 0.17.21 | https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree | MIT license | +| s3transfer | 0.5.2 | https://github.com/boto/s3transfer | [Apache License 2.0](https://api.github.com/repos/boto/s3transfer/license) | | scp | 0.13.6 | https://github.com/jbardin/scp.py | [Other](https://api.github.com/repos/jbardin/scp.py/license) | | semver | 2.13.0 | https://github.com/python-semver/python-semver | [BSD 3-Clause "New" or "Revised" License](https://api.github.com/repos/python-semver/python-semver/license) | | six | 1.16.0 | https://github.com/benjaminp/six | [MIT License](https://api.github.com/repos/benjaminp/six/license) | | sshtunnel | 0.1.5 | https://github.com/pahaz/sshtunnel | [MIT License](https://api.github.com/repos/pahaz/sshtunnel/license) | | tabulate | 0.8.9 | https://github.com/astanin/python-tabulate | [MIT License](https://api.github.com/repos/astanin/python-tabulate/license) | -| urllib3 | 1.26.8 | https://urllib3.readthedocs.io/ | MIT | +| typing_extensions | 4.2.0 | https://github.com/python/typing_extensions | [Other](https://github.com/python/typing_extensions/blob/main/LICENSE) | +| urllib3 | 1.26.9 | https://urllib3.readthedocs.io/ | MIT | | websocket-client | 0.56.0 | https://github.com/websocket-client/websocket-client.git | BSD | -| wrapt | 1.13.3 | https://github.com/GrahamDumpleton/wrapt | [BSD 2-Clause "Simplified" License](https://api.github.com/repos/grahamdumpleton/wrapt/license) | -| xmltodict | 0.12.0 | https://github.com/martinblech/xmltodict | [MIT License](https://api.github.com/repos/martinblech/xmltodict/license) | -| PyYAML | 6.0 | https://github.com/yaml/pyyaml | [MIT License](https://github.com/yaml/pyyaml/blob/master/LICENSE) | +| wrapt | 1.14.1 | https://github.com/GrahamDumpleton/wrapt | [BSD 2-Clause "Simplified" License](https://api.github.com/repos/grahamdumpleton/wrapt/license) | +| xmltodict | 0.13.0 | https://github.com/martinblech/xmltodict | [MIT License](https://api.github.com/repos/martinblech/xmltodict/license) | ## Predefined Grafana dashboards diff --git a/docs/home/DEVELOPMENT.md b/docs/home/DEVELOPMENT.md index 678d05f8f7..3b06869d04 100644 --- a/docs/home/DEVELOPMENT.md +++ b/docs/home/DEVELOPMENT.md @@ -173,13 +173,13 @@ The serverspec tests are integrated in Epicli. To run them you can extend the la "pythonPath": "${config:python.pythonPath}", "env": { "PYTHONPATH": "${workspaceFolder}" }, "console": "integratedTerminal", - "args": ["test", "-b", "${workspaceFolder}/clusters/buildfolder/", "-g", "postgresql"] + "args": ["test", "-b", "${workspaceFolder}/clusters/buildfolder/", "-i", "kafka,postgresql"] }, ... ``` -Where the ```-b``` argument points to the build folder of a cluster. The ```-g``` argument can be used to execute a subset of tests and is optional. Omitting ```-g``` will execute all tests. +Where the ```-b``` argument points to the build folder of a cluster. 
The ```-i``` argument can be used to execute a subset of tests and is optional. Omitting ```-i``` will execute all tests. ## Epicli Python dependencies diff --git a/docs/home/HOWTO.md b/docs/home/HOWTO.md index f2edd42acd..ed218e5f19 100644 --- a/docs/home/HOWTO.md +++ b/docs/home/HOWTO.md @@ -34,8 +34,8 @@ - [How to configure scalable Prometheus setup](./howto/MONITORING.md#how-to-configure-scalable-prometheus-setup) - [Import and create Grafana dashboards](./howto/MONITORING.md#import-and-create-grafana-dashboards) - [How to setup default admin password and user in Grafana](./howto/MONITORING.md#how-to-setup-default-admin-password-and-user-in-grafana) - - [How to configure Kibana - Open Distro](./howto/MONITORING.md#how-to-configure-kibana---open-distro) - - [How to configure default user passwords for Kibana - Open Distro, Open Distro for Elasticsearch and Filebeat](./howto/MONITORING.md#how-to-configure-default-user-passwords-for-kibana---open-distro-open-distro-for-elasticsearch-and-filebeat) + - [How to configure OpenSearch Dashboards](./howto/MONITORING.md#how-to-configure-opensearch-dashboards) + - [How to configure default passwords for service users in OpenSearch Dashboards, OpenSearch and Filebeat](./howto/MONITORING.md#how-to-configure-default-passwords-for-service-users-in-opensearch-dashboards-opensearch-and-filebeat) - [How to configure scalable Prometheus setup](./howto/MONITORING.md#how-to-configure-scalable-prometheus-setup) - [How to configure Azure additional monitoring and alerting](./howto/MONITORING.md#how-to-configure-azure-additional-monitoring-and-alerting) - [How to configure AWS additional monitoring and alerting](./howto/MONITORING.md#how-to-configure-aws-additional-monitoring-and-alerting) @@ -59,6 +59,7 @@ - [Run apply after upgrade](./howto/UPGRADE.md#run-apply-after-upgrade) - [Kubernetes applications](./howto/UPGRADE.md#kubernetes-applications) - [Kafka upgrade](./howto/UPGRADE.md#how-to-upgrade-kafka) + - [Migration from Open Distro for Elasticsearch to OpenSearch](./howto/UPGRADE.md#migration-from-open-distro-for-elasticsearch--kibana-to-opensearch-and-opensearch-dashboards) - [Open Distro for Elasticsearch upgrade](./howto/UPGRADE.md#open-distro-for-elasticsearch-upgrade) - [Node exporter upgrade](./howto/UPGRADE.md#node-exporter-upgrade) - [RabbitMQ upgrade](./howto/UPGRADE.md#rabbitmq-upgrade) @@ -67,6 +68,7 @@ - [Terraform upgrade from Epiphany 1.x to 2.x](./howto/UPGRADE.md#terraform-upgrade-from-epiphany-1.x-to-2.x) - [Security](./howto/SECURITY.md) + - [How to run epicli with temporary AWS credentials](./howto/SECURITY.md#how-to-run-epicli-with-temporary-aws-credentials) - [How to add/remove additional users to/from OS](./howto/SECURITY.md#how-to-addremove-additional-users-tofrom-os) - [How to enable/disable Epiphany service user](./howto/SECURITY.md#how-to-enabledisable-epiphany-service-user) - [How to use TLS/SSL certificate with HA Proxy](./howto/SECURITY.md#how-to-use-tlsssl-certificate-with-ha-proxy) @@ -113,6 +115,7 @@ - [Centralized logging setup](./howto/LOGGING.md#centralized-logging-setup) - [How to add multiline support for Filebeat logs](./howto/LOGGING.md#how-to-add-multiline-support-for-filebeat-logs) - [How to deploy Filebeat as Daemonset in K8s](./howto/LOGGING.md#how-to-deploy-filebeat-as-daemonset-in-k8s) + - [Audit logs](./howto/LOGGING.md#audit-logs) - [Maintenance](./howto/MAINTENANCE.md) - [Verification of service state](./howto/MAINTENANCE.md#verification-of-service-state) @@ -126,7 +129,7 @@ - [AWS Security 
groups](./howto/SECURITY_GROUPS.md#aws-security-groups) - [AWS Security groups full yaml file](./howto/SECURITY_GROUPS.md#aws-setting-groups-full-yaml-file) -- [Modules](./howto/MODULES.md) +- [K8S-Modules](./howto/K8S_MODULES.md) - [Repository](./howto/REPOSITORY.md) - [Introduction](./howto/REPOSITORY.md#introduction) diff --git a/docs/home/LIFECYCLE.md b/docs/home/LIFECYCLE.md index 6623f4535b..d3bd36cb14 100644 --- a/docs/home/LIFECYCLE.md +++ b/docs/home/LIFECYCLE.md @@ -33,8 +33,40 @@ The LTS version will be released once a year and will be supported for up to 3 y | [1.1.x STS](../changelogs/CHANGELOG-1.1.md) | 30 Jun 2021 | 1.1.0 | 30 Jun 2021 | 30 Dec 2021 | | [1.2.x STS](../changelogs/CHANGELOG-1.2.md) | 30 Sep 2021 | 1.2.0 | 30 Sep 2021 | 30 Mar 2022 | | [1.3.x STS](../changelogs/CHANGELOG-1.3.md) | 19 Jan 2022 | 1.3.0 | 19 Jan 2022 | 30 Jun 2022 | -| 2.0.x LTS | est. 01 Apr 2022 | - | - | est. 01 Apr 2025 | +| [2.0.x LTS](../changelogs/CHANGELOG-2.0.md) | 09 May 2022 | 2.0.0 | 09 May 2022 | 09 May 2025 | +| :arrow_right: 2.0.1 LTS | est. 31 Aug 2022 | --- | --- | 09 May 2025 | +| :arrow_right: 2.0.2 LTS | est. 31 Oct 2022 | --- | --- | 09 May 2025 | -![lifecycle](../assets/images/lifecycle.png) - -source: [LIFECYCLE_GANTT.md](LIFECYCLE_GANTT.md) +```mermaid +gantt +title Epiphany Platform lifecycle +dateFormat YYYY-MM-DD +section 0.2.x +0.2.x support cycle :a, 2019-02-19, 2020-04-06 +section 0.3.x +0.3.x support cycle :a, 2019-08-02, 2020-07-01 +section 0.4.x +0.4.x support cycle :a, 2019-10-11, 2020-10-22 +section 0.5.x +0.5.x support cycle :a, 2020-01-17, 2021-01-02 +section 0.6.x +0.6.x support cycle :a, 2020-04-06, 2021-04-01 +section 0.7.x +0.7.x support cycle :a, 2020-07-01, 2021-06-30 +section 0.8.x +0.8.x support cycle :a, 2020-10-22, 2021-09-30 +section 0.9.x +0.9.x support cycle :a, 2021-01-19, 2021-12-30 +section 1.0.x +1.0.x support cycle (LTS - 3 years) :crit, 2021-04-01, 2024-04-01 +section 1.1.x +1.1.x - 6 months :a, 2021-06-30, 2021-12-30 +section 1.2.x +1.2.x - 6 months :a, 2021-09-30, 2022-03-30 +section 1.3.x +1.3.x - 6 months :active, 2022-01-19, 2022-06-30 +section 2.0.x +2.0.x support cycle (LTS - 3 years) :crit, 2022-05-09, 2025-05-09 +2.0.1 patch for LTS :crit, 2022-08-31, 2025-05-09 +2.0.2 patch for LTS :crit, 2022-10-31, 2025-05-09 +``` diff --git a/docs/home/LIFECYCLE_GANTT.md b/docs/home/LIFECYCLE_GANTT.md deleted file mode 100644 index ff5d446d66..0000000000 --- a/docs/home/LIFECYCLE_GANTT.md +++ /dev/null @@ -1,40 +0,0 @@ -# Epiphany Platform lifecycle - Gantt chart - -```mermaid -gantt -title Epiphany Platform lifecycle -dateFormat YYYY-MM-DD -section 0.2.x -0.2.x support cycle :done, 2019-02-19, 2020-04-06 -section 0.3.x -0.3.x support cycle :done, 2019-08-02, 2020-07-01 -section 0.4.x -0.4.x support cycle :done, 2019-10-11, 2020-10-22 -section 0.5.x -0.5.x support cycle :done, 2020-01-17, 2021-01-02 -section 0.6.x -0.6.x support cycle :done, 2020-04-06, 2021-04-01 -section 0.7.x -0.7.x support cycle :done, 2020-07-01, 2021-06-30 -section 0.8.x -0.8.x support cycle :done, 2020-10-22, 2021-09-30 -section 0.9.x -0.9.x support cycle :active, 2021-01-19, 2021-12-30 -section 1.0.x -1.0.x support cycle (LTS - 3 years) :crit, 2021-04-01, 2024-04-01 -section 1.1.x -1.1.x - 6 months :active, 2021-06-30, 2021-12-30 -section 1.2.x -1.2.x - 6 months :active, 2021-09-30, 2022-03-30 -section 1.3.x -1.3.x - 6 months :active, 2022-01-19, 2022-06-30 -section 2.0.x -2.0.x support cycle (LTS - 3 years) :crit, 2022-03-30, 2025-03-30 -``` - -This is a source for the 
image used in [LIFECYCLE.md](LIFECYCLE.md) file. -Currently, Github doesn't support it natively (but feature request was made: [link](https://github.community/t/feature-request-support-mermaid-markdown-graph-diagrams-in-md-files/1922) ). - -Extensions for browsers: -- [Chrome](https://chrome.google.com/webstore/detail/github-%2B-mermaid/goiiopgdnkogdbjmncgedmgpoajilohe) -- [Firefox](https://addons.mozilla.org/en-US/firefox/addon/github-mermaid) diff --git a/docs/home/RESOURCES.md b/docs/home/RESOURCES.md index 03dac4c716..75adb34694 100644 --- a/docs/home/RESOURCES.md +++ b/docs/home/RESOURCES.md @@ -42,8 +42,8 @@ Here are some materials concerning Epiphany tooling and cluster components - bot 2. [RabbitMQ](https://www.rabbitmq.com/) - [RabbitMQ Getting started](https://www.rabbitmq.com/getstarted.html) 5. Central logging - 1. [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) - 2. [Kibana](https://www.elastic.co/guide/en/kibana/current/index.html) + 1. [OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/index/) + 2. [OpenSearch](https://opensearch.org/docs/latest) 3. [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/index.html) - Beats platform reference(https://www.elastic.co/guide/en/beats/libbeat/current/index.html) 6. Load Balancing diff --git a/docs/home/SECURITY.md b/docs/home/SECURITY.md index e66633969e..df2fe12407 100644 --- a/docs/home/SECURITY.md +++ b/docs/home/SECURITY.md @@ -11,8 +11,12 @@ changes made in settings of your antivirus/antimalware solution. ## Contents -- [Users and roles created by Epiphany](#users-and-roles-created-by-epiphany) -- [Ports used by components in Epiphany](#ports-used-by-components-in-epiphany) +- [Security related information](#security-related-information) + - [Contents](#contents) + - [Users and roles created by epiphany](#users-and-roles-created-by-epiphany) + - [Ports used by components in Epiphany](#ports-used-by-components-in-epiphany) + - [Connection protocols and ciphers used by components in Epiphany](#connection-protocols-and-ciphers-used-by-components-in-epiphany) + - [Notes](#notes) ### Users and roles created by epiphany @@ -61,15 +65,15 @@ different values. The list does not include ports that are bound to the loopback - 9093 - encrypted communication (if TLS/SSL is enabled) - unconfigurable random port from ephemeral range - JMX (for local access only), see note [[1]](#notes) -5. Elasticsearch: +5. OpenSearch: - - 9200 - Elasticsearch REST communication - - 9300 - Elasticsearch nodes communication + - 9200 - OpenSearch REST communication + - 9300 - OpenSearch nodes communication - 9600 - Performance Analyzer (REST API) -6. Kibana: +6. OpenSearch Dashboards: - - 5601 - Kibana web UI + - 5601 - OpenSearch Dashboards web UI 7. Prometheus: diff --git a/docs/home/howto/BACKUP.md b/docs/home/howto/BACKUP.md index 45ee9378dc..14f84c7d06 100644 --- a/docs/home/howto/BACKUP.md +++ b/docs/home/howto/BACKUP.md @@ -125,11 +125,11 @@ Recovery includes all backed up files Logging backup includes: -- Elasticsearch database snapshot -- Elasticsearch configuration ``/etc/elasticsearch/`` -- Kibana configuration ``/etc/kibana/`` +- OpenSearch database snapshot +- OpenSearch configuration ``/usr/share/opensearch/config/`` +- OpenSearch Dashboards configuration ``/usr/share/opensearch-dashboards/config/`` -Only single-node Elasticsearch backup is supported. Solution for multi-node Elasticsearch cluster will be added in +Only single-node OpenSearch backup is supported. 
Solution for multi-node OpenSearch cluster will be added in future release. ### Monitoring diff --git a/docs/home/howto/CLUSTER.md b/docs/home/howto/CLUSTER.md index 5bd2fd1f0d..aff0aa2160 100644 --- a/docs/home/howto/CLUSTER.md +++ b/docs/home/howto/CLUSTER.md @@ -57,10 +57,9 @@ Disable: 2. Prepend `kubernetes_master` mapping (or any other mapping if you don't deploy Kubernetes) with: ```yaml - kind: configuration/feature-mapping + kind: configuration/feature-mappings specification: - ... - roles_mapping: + mappings: ... kubernetes_master: - repository @@ -290,12 +289,12 @@ specification: kubernetes_node: count: 2 --- -kind: configuration/feature-mapping -title: "Feature mapping to roles" +kind: configuration/feature-mappings +title: "Feature mapping to components" provider: name: default specification: - roles_mapping: + mappings: kubernetes_master: - repository - image-registry @@ -396,13 +395,13 @@ To set up the cluster do the following steps from the provisioning machine: cloud: region: eu-west-2 credentials: - key: aws_key - secret: aws_secret + access_key_id: aws_key + secret_access_key: aws_secret use_public_ips: false default_os_image: default ``` - The [region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html) lets you chose the optimal place to deploy your cluster. The `key` and `secret` are needed by Terraform and can be generated in the AWS console. More information about that [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) + The [region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html) lets you chose the optimal place to deploy your cluster. The `access_key_id` and `secret_access_key` are needed by Terraform and can be generated in the AWS console. More information about that [here](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) Azure: @@ -571,7 +570,7 @@ specification: count: 0 rabbitmq: count: 0 - opendistro_for_elasticsearch: + opensearch: count: 0 single_machine: count: 1 @@ -671,40 +670,44 @@ specification: Epiphany gives you the ability to define custom components. This allows you to define a custom set of roles for a component you want to use in your cluster. It can be useful when you for example want to maximize usage of the available machines you have at your disposal. -The first thing you will need to do is define it in the `configuration/feature-mapping` configuration. To get this configuration you can run `epicli init ... --full` command. In the `available_roles` roles section you can see all the available roles that Epiphany provides. The `roles_mapping` is where all the Epiphany components are defined and were you need to add your custom components. +The first thing you will need to do is define it in the `configuration/features` and the `configuration/feature-mappings` configurations. To get these configurations you can run `epicli init ... --full` command. In the `configuration/features` doc you can see all the available features that Epiphany provides. The `configuration/feature-mappings` doc is where all the Epiphany components are defined and where you can add your custom components. -Below are parts of an example `configuration/feature-mapping` were we define a new `single_machine_new` component. 
We want to use Kafka instead of RabbitMQ and don`t need applications and postgres since we don't want a Keycloak deployment: +Below are parts of an example `configuration/features` and `configuration/feature-mappings` docs where we define a new `single_machine_new` component. We want to use Kafka instead of RabbitMQ and don't need applications and postgres since we don't want a Keycloak deployment: ```yaml -kind: configuration/feature-mapping -title: Feature mapping to roles +kind: configuration/features +title: "Features to be enabled/disabled" name: default specification: - available_roles: # All entries here represent the available roles within Epiphany - - name: repository - enabled: yes - - name: firewall - enabled: yes - - name: image-registry - ... - roles_mapping: # All entries here represent the default components provided with Epiphany - ... + features: # All entries here represent the available features within Epiphany + - name: repository + enabled: yes + - name: firewall + enabled: yes + - name: image-registry + ... +--- +kind: configuration/feature-mappings +title: "Feature mapping to components" +name: default +specification: + mappings: # All entries here represent the default components provided with Epiphany single_machine: - - repository - - image-registry - - kubernetes-master - - applications - - rabbitmq - - postgresql - - firewall + - repository + - image-registry + - kubernetes-master + - applications + - rabbitmq + - postgresql + - firewall # Below is the new single_machine_new definition single_machine_new: - - repository - - image-registry - - kubernetes-master - - kafka - - firewall - ... + - repository + - image-registry + - kubernetes-master + - kafka + - firewall + ... ``` Once defined the new `single_machine_new` can be used inside the `epiphany-cluster` configuration: @@ -753,7 +756,7 @@ Kubernetes master | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check Kubernetes node | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | [#1580](https://github.com/epiphany-platform/epiphany/issues/1580) Kafka | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | --- Load Balancer | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | --- -Opendistro for elasticsearch | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | --- +OpenSearch | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | --- Postgresql | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: | [#1577](https://github.com/epiphany-platform/epiphany/issues/1577) RabbitMQ | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | [#1578](https://github.com/epiphany-platform/epiphany/issues/1578), [#1309](https://github.com/epiphany-platform/epiphany/issues/1309) RabbitMQ K8s | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | [#1486](https://github.com/epiphany-platform/epiphany/issues/1486) diff --git a/docs/home/howto/DATABASES.md b/docs/home/howto/DATABASES.md index f07752058f..bd03185fdd 100644 --- a/docs/home/howto/DATABASES.md +++ b/docs/home/howto/DATABASES.md @@ -453,11 +453,10 @@ Properly configured application (kubernetes service) to use fully HA configurati PostgreSQL native replication is now deprecated and removed. Use [PostgreSQL HA replication with repmgr](#how-to-set-up-postgresql-ha-replication-with-repmgr-cluster) instead. 
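+To quickly verify the state of a repmgr-based replication cluster, you can query repmgr itself. A minimal sketch (assuming repmgr is installed as in a default Epiphany PostgreSQL HA deployment and that the configuration file path is `/etc/repmgr.conf`; both may differ in your environment):
+
+```shell
+# Show all nodes registered in the repmgr cluster and their roles
+# (run on any PostgreSQL node as the postgres user).
+sudo -u postgres repmgr -f /etc/repmgr.conf cluster show
+```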
-## How to start working with OpenDistro for Elasticsearch
+## How to start working with OpenSearch
-OpenDistro for Elasticsearch -is [an Apache 2.0-licensed distribution of Elasticsearch enhanced with enterprise security, alerting, SQL](https://opendistro.github.io/for-elasticsearch/). -In order to start working with OpenDistro change machines count to value greater than 0 in your cluster configuration:
+OpenSearch is the [successor](https://opendistro.github.io/for-elasticsearch-docs/) of the Open Distro for Elasticsearch project. Epiphany provides an [automated solution](./UPGRADE.md#migration-from-open-distro-for-elasticsearch--kibana-to-opensearch-and-opensearch-dashboards) for migrating your existing ODFE installation to OpenSearch.
+On the other hand, if you just plan to start working with OpenSearch, change the machines count to a value greater than 0 in your cluster configuration:
 ```yaml kind: epiphany-cluster
@@ -473,35 +472,36 @@ specification: ... logging: count: 1 - opendistro_for_elasticsearch: + opensearch: count: 2 ```
-**Installation with more than one node will always be clustered** - Option to configure the non-clustered installation of more than one node for Open Distro is not supported.
+**Installation with more than one node will always be clustered** - Option to configure the non-clustered installation of more than one node for OpenSearch is not supported.
 ```yaml -kind: configuration/opendistro-for-elasticsearch -title: OpenDistro for Elasticsearch Config +kind: configuration/opensearch +title: OpenSearch Config name: default specification: - cluster_name: EpiphanyElastic + cluster_name: EpiphanyOpenSearch ```
-By default, Kibana is deployed only for `logging` component. If you want to deploy Kibana -for `opendistro_for_elasticsearch` you have to modify feature mapping. Use below configuration in your manifest.
+By default, OpenSearch Dashboards (previously Kibana) is deployed only for the `logging` component. If you want to deploy it +for the `opensearch` component, you have to:
+- modify the feature mapping by adding `opensearch-dashboards` under the `opensearch` component (see the configuration below)
+- set up the `kibanaserver` user and its password in `configuration/opensearch`, see [Opensearch user and password configuration](./MONITORING.md#opensearch-component)
 ```yaml -kind: configuration/feature-mapping -title: "Feature mapping to roles" +kind: configuration/feature-mappings +title: "Feature mapping to components" name: default specification: - roles_mapping: - opendistro_for_elasticsearch: - - opendistro-for-elasticsearch + mappings: + opensearch: - node-exporter - filebeat - firewall - - kibana + - opensearch-dashboards ```
-Filebeat running on `opendistro_for_elasticsearch` hosts will always point to centralized logging hosts (./LOGGING.md).
+Filebeat running on `opensearch` hosts will always point to centralized logging hosts ([more info](./LOGGING.md)).
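+
+After the machines are deployed, a quick way to verify the installation is to query the OpenSearch cluster health endpoint on the REST port (9200). A minimal sketch, assuming the default self-signed TLS certificates and the `admin` user; the hostname and password are placeholders:
+
+```shell
+# Query cluster health over HTTPS; -k skips verification of the
+# default self-signed certificate.
+curl -k -u admin:<admin_password> 'https://<opensearch_host>:9200/_cluster/health?pretty'
+```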
diff --git a/docs/home/howto/K8S_MODULES.md b/docs/home/howto/K8S_MODULES.md new file mode 100644 index 0000000000..3e6e405748 --- /dev/null +++ b/docs/home/howto/K8S_MODULES.md
@@ -0,0 +1,11 @@ +# K8S_MODULES + +Previous modules approach that included: +* [Azure Basic Infrastructure](https://github.com/epiphany-platform/m-azure-basic-infrastructure) (AzBI) module +* [Azure AKS](https://github.com/epiphany-platform/m-azure-kubernetes-service) (AzKS) module +* [AWS Basic Infrastructure](https://github.com/epiphany-platform/m-aws-basic-infrastructure) (AwsBI) module +* [AWS EKS](https://github.com/epiphany-platform/m-aws-kubernetes-service) (AwsKS) module
+
+has been deprecated - Epiphany will no longer support usage of these modules.
+
+For Kubernetes services (AKS and EKS) that can be used alongside Epiphany, please refer to [k8s-modules](https://github.com/epiphany-platform/k8s-modules).
diff --git a/docs/home/howto/KUBERNETES.md b/docs/home/howto/KUBERNETES.md index c3d3d452b1..cc4ea04daf 100644 --- a/docs/home/howto/KUBERNETES.md +++ b/docs/home/howto/KUBERNETES.md
@@ -143,15 +143,15 @@ specification: count: 2 ```
-2. Enable `applications` in feature-mapping in initial configuration manifest.
+2. Enable `applications` in `configuration/features` in the initial configuration manifest.
 ```yaml --- -kind: configuration/feature-mapping -title: Feature mapping to roles +kind: configuration/features +title: "Features to be enabled/disabled" name: default specification: - available_roles: + features: - name: applications enabled: true ```
diff --git a/docs/home/howto/LOGGING.md b/docs/home/howto/LOGGING.md index e419b2a543..983c4613d9 100644 --- a/docs/home/howto/LOGGING.md +++ b/docs/home/howto/LOGGING.md
@@ -1,119 +1,114 @@ # Centralized logging setup
-For centralized logging Epiphany uses [OpenDistro for Elasticsearch](https://opendistro.github.io/for-elasticsearch/). -In order to enable centralized logging, be sure that `count` property for `logging` feature is greater than 0 in your
+For centralized logging Epiphany uses the [OpenSearch](https://opensearch.org/) stack - an open-source successor[1] of the Elasticsearch & Kibana projects.
+
+In order to enable centralized logging, be sure to set the `count` property for the `logging` feature to a value greater than 0 in your
 configuration manifest.
 ```yaml kind: epiphany-cluster -... +[...] specification: - ... + [...] components: kubernetes_master: count: 1 kubernetes_node: count: 0 - ... + [...] logging: - count: 1 - ... + count: 1 # <<------ + [...] ```
 ## Default feature mapping for logging
+The example below shows the default feature mapping for logging:
 ```yaml -... -logging: - - logging - - kibana - - node-exporter - - filebeat - - firewall +[...] +mappings: +[...] + logging: + - logging + - opensearch-dashboards + - node-exporter + - filebeat + - firewall ... ```
-The `logging` role replaced `elasticsearch` role. This change was done to enable Elasticsearch usage also for data
+The `logging` role has replaced the `elasticsearch` role. This change was done to enable Elasticsearch usage also for data
 storage - not only for logs as it was till 0.5.0.
-Default configuration of `logging` and `opendistro_for_elasticsearch` roles is identical ( -./DATABASES.md#how-to-start-working-with-opendistro-for-elasticsearch).
To modify configuration of centralized logging -adjust and use the following defaults in your manifest:
+The default configuration of the `logging` and `opensearch` roles is identical (more info [here](./DATABASES.md#how-to-start-working-with-opensearch)). To modify the configuration of centralized logging, +adjust the following default values in your manifest to your needs:
 ```yaml +[...] kind: configuration/logging title: Logging Config name: default specification: - cluster_name: EpiphanyElastic + cluster_name: EpiphanyOpensearch clustered: True paths: - data: /var/lib/elasticsearch - repo: /var/lib/elasticsearch-snapshots - logs: /var/log/elasticsearch + data: /var/lib/opensearch + repo: /var/lib/opensearch-snapshots + logs: /var/log/opensearch ```
-## How to manage Opendistro for Elasticsearch data
+## How to manage OpenSearch data
-Elasticsearch stores data using JSON documents, and an Index is a collection of documents. As in every database, it's -crucial to correctly maintain data in this one. It's almost impossible to deliver database configuration which will fit -to every type of project and data stored in. Epiphany deploys preconfigured Opendistro Elasticsearch, but this -configuration may not meet user requirements. Before going to production, configuration should be tailored to the -project needs. All configuration tips and tricks are available -in [official documentation](https://opendistro.github.io/for-elasticsearch-docs/).
+OpenSearch stores data using JSON documents, and an index is a collection of documents. As in every database, it's crucial to correctly maintain the data stored in it. It's almost impossible to deliver a database configuration that fits every type of project and data. Epiphany deploys a preconfigured OpenSearch instance, but this configuration may not meet every user's requirements. That's why, before going to production, the stack configuration should be tailored to the project's needs. All configuration tips and tricks are available in the [official documentation](https://opensearch.org/docs/latest).
-The main and most important decisions to take before you deploy cluster are:
+The main and most important decisions to take before you deploy the cluster are:
-1) How many Nodes are needed -2) How big machines and/or storage data disks need to be used
+- how many nodes are needed
+- how big the machines and/or storage data disks need to be
-These parameters are defined in yaml file, and it's important to create a big enough cluster.
+These parameters can be defined in the manifest yaml file. It is important to create a big enough cluster.
 ```yaml specification: + [..] components: logging: - count: 1 # Choose number of nodes + count: 1 # Choose the number of nodes that suits your needs + machines: + - logging-machine-n + [..] --- kind: infrastructure/virtual-machine title: "Virtual Machine Infra" -name: logging-machine +name: logging-machine-n specification: - size: Standard_DS2_v2 # Choose machine size + size: Standard_DS2_v2 # Choose a VM size that suits your needs ```
-If it's required to have Elasticsearch which works in cluster formation configuration, except setting up more than one -machine in yaml config file please acquaint dedicated -support [article](https://opendistro.github.io/for-elasticsearch-docs/docs/elasticsearch/cluster/) and adjust -Elasticsearch configuration file.
+If you need an OpenSearch instance which works in a cluster formation, besides setting up more than one machine in the yaml config file, please consult the dedicated support [article](https://opensearch.org/docs/latest/troubleshoot/index/) and adjust the OpenSearch configuration file accordingly.
-At this moment Opendistro for Elasticsearch does not support plugin similar -to [ILM](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html), log rotation -is possible only by configuration created in Index State Management.
+We also strongly encourage you to get familiar with the plugins and policies available along with OpenSearch, including the following:
-`ISM - Index State Management` - is a plugin that provides users and administrative panel to monitor the indices and -apply policies at different index stages. ISM lets users automate periodic, administrative operations by triggering them -based on index age, size, or number of documents. Using the ISM plugin, can define policies that automatically handle -index rollovers or deletions. ISM is installed with Opendistro by default - user does not have to enable this. Official -documentation is available -in [Opendistro for Elasticsearch website](https://opendistro.github.io/for-elasticsearch-docs/docs/im/ism/).
+`ISM - Index State Management` - is a plugin that provides users and administrators with a panel to monitor indices and apply policies at different index stages. ISM lets users automate periodic, administrative operations by triggering them based on index age, size, or number of documents. Using the ISM plugin, you can define policies that automatically handle index rollovers or deletions. Official plugin documentation is available [here](https://opensearch.org/docs/latest/im-plugin/ism/index/).
 To reduce the consumption of disk resources, every index you created should use -well-designed [policy](https://opendistro.github.io/for-elasticsearch-docs/docs/im/ism/policies/). +well-designed [policy](https://opensearch.org/docs/latest/im-plugin/ism/policies/).
 Among others these two index actions might save machine from filling up disk space:
-[`Index Rollover`](https://opendistro.github.io/for-elasticsearch-docs/docs/im/ism/policies/#rollover) - rolls an alias +[`Index Rollover`](https://opensearch.org/docs/latest/im-plugin/ism/policies/#rollover) - rolls an alias to a new index. Set up correctly max index size / age or minimum number of documents to keep index size in requirements framework.
-[`Index Deletion`](https://opendistro.github.io/for-elasticsearch-docs/docs/im/ism/policies/#delete) - deletes indexes +[`Index Deletion`](https://opensearch.org/docs/latest/im-plugin/ism/policies/#delete) - deletes indexes managed by policy
-Combining these actions, adapting them to data amount and specification users are able to create policy which will -maintain data in cluster for example: to secure node from fulfilling disk space.
+Combining these actions and adapting them to the data amount and specification, users are able to create a policy which will +maintain their data in the cluster, for example to prevent a node from filling up its disk space.
-There is example of policy below. Be aware that this is only example, and it needs to be adjusted to environment needs.
+There is an example of such a policy below. Be aware that this is only an example, and as every example it needs to be adjusted to the actual environment's needs.
 ```json { @@ -181,64 +176,64 @@ There is example of policy below.
Be aware that this is only example, and it nee } ``` -Example above shows configuration with rollover daily or when index achieve 1 GB size. Indexes older than 14 days will +Example above shows configuration with rollover index policy on a daily basis or when the index achieve 1 GB size. Indexes older than 14 days will be deleted. States and conditionals could be combined. Please -see [policies](https://opendistro.github.io/for-elasticsearch-docs/docs/im/ism/policies/) documentation for more +see [policies](https://opensearch.org/docs/latest/im-plugin/ism/policies/) documentation for more details. -`Apply Policy` +#### Apply Policy -To apply policy use similar API request as presented below: +To apply a policy you can use similar API request as presented below: -``` -PUT _template/template_01 +```sh +PUT _index_template/ism_rollover ``` ```json { "index_patterns": ["filebeat*"], "settings": { - "opendistro.index_state_management.rollover_alias": "filebeat" - "opendistro.index_state_management.policy_id": "epi_policy" + "plugins.index_state_management.rollover_alias": "filebeat" + "plugins.index_state_management.policy_id": "epi_policy" } } ``` After applying this policy, every new index created under this one will apply to it. There is also possibility to apply -policy to already existing policies by assigning them to policy in Index Management Kibana panel. +policy to already existing policies by assigning them to policy in dashboard Index Management panel. -## How to export Kibana reports to CSV format +## How to export Dashboards reports -Since v1.0 Epiphany provides the possibility to export reports from Kibana to CSV, PNG or PDF using the Open Distro for -Elasticsearch Kibana reports feature. +Since v1.0 Epiphany provides the possibility to export reports from Kibana to CSV, PNG or PDF using the Open Distro for Elasticsearch Kibana reports feature. And after migrating from Elastic stack to OpenSearch stack you can make use of the OpenSearch Reporting feature to achieve this and more. -Check more details about the plugin and how to export reports in the -[documentation](https://opendistro.github.io/for-elasticsearch-docs/docs/kibana/reporting) +Check more details about the OpenSearch Reports plugin and how to export reports in the +[documentation](https://github.com/opensearch-project/dashboards-reports/blob/main/README.md#opensearch-dashboards-reports). -`Note: Currently in Open Distro for Elasticsearch Kibana the following plugins are installed and enabled by default: security, alerting, anomaly detection, index management, query workbench, notebooks, reports, alerting, gantt chart plugins.` +Notice: Currently in the OpenSearch stack the following plugins are installed and enabled by default: security, alerting, anomaly detection, index management, query workbench, notebooks, reports, alerting, gantt chart plugins. -You can easily check enabled default plugins for Kibana using the following command on the logging machine: -`./bin/kibana-plugin list` in Kibana directory. +You can easily check enabled default plugins for Dashboards component using the following command on the logging machine: +`./bin/opensearch-dashboards-plugin list` in directory where you've installed _opensearch-dashboards_. --- ## How to add multiline support for Filebeat logs -In order to properly handle multilines in files harvested by Filebeat you have to provide `multiline` definition in the -configuration manifest. 
Using the following code you will be able to specify which lines are part of a single event.
+In order to properly handle multiline outputs in files harvested by Filebeat you have to provide a `multiline` definition in the cluster configuration manifest. Using the following code you will be able to specify which lines are part of a single event.
 By default, postgresql block is provided, you can use it as example:
 ```yaml +[..] postgresql_input: multiline: pattern: >- '^\d{4}-\d{2}-\d{2} ' negate: true match: after +[..] ```
-Supported inputs: `common_input`,`postgresql_input`,`container_input`
+Supported inputs: `common_input`, `postgresql_input`, `container_input`.
 More details about multiline options you can find in the [official documentation](https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html)
@@ -253,19 +248,44 @@ specification: k8s_as_cloud_service: true ```
-## How to use default Kibana dashboards
+## How to use default OpenSearch dashboards
+---
+This feature is not working in the current version of OpenSearch, so `setup.dashboards.enabled` is set to _false_ as a workaround.
+
+---
 It is possible to configure `setup.dashboards.enabled` and `setup.dashboards.index` Filebeat settings using `specification.kibana.dashboards` key in `configuration/filebeat` doc.
-When `specification.kibana.dashboards.enabled` is set to `auto`, the corresponding setting in Filebeat configuration file will be set to `true` only if Kibana is configured to be present on the host.
+When `specification.kibana.dashboards.enabled` is set to `auto`, the corresponding setting in the Filebeat configuration file will be set to `true` only if the OpenSearch Dashboards component is configured to be present on the host.
 Other possible values are `true` and `false`.
 Default configuration:
-``` +```yaml specification: - kibana: +[..] + opensearch: dashboards: enabled: auto index: filebeat-* ```
-Note: Setting `specification.kibana.dashboards.enabled` to `true` not providing Kibana will result in a Filebeat crash.
+Notice: Setting `specification.kibana.dashboards.enabled` to `true` without providing OpenSearch Dashboards will result in a Filebeat crash.
+
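+Before restarting Filebeat after changing the dashboards or multiline settings, it may help to validate the rendered configuration to avoid the crash mentioned above. A minimal sketch, assuming Filebeat is installed as a system service with its configuration in the default location (the path may differ on your hosts):
+
+```shell
+# Verify that the generated Filebeat configuration parses correctly
+filebeat test config -c /etc/filebeat/filebeat.yml
+
+# Check that Filebeat can reach its configured output
+filebeat test output -c /etc/filebeat/filebeat.yml
+```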
+ +---
+[1] More information about migrating from Elasticsearch & Kibana to OpenSearch & OpenSearch Dashboards can be found [here](./UPGRADE.md#migration-from-open-distro-for-elasticsearch--kibana-to-opensearch-and-opensearch-dashboards).
+
+## Audit logs
+
+There is an [option](https://opensearch.org/docs/latest/security-plugin/audit-logs/) to enable
+OpenSearch audit logs, which is switched on by default in Epiphany using the following configuration part:
+
+```yaml +kind: configuration/logging +specification: + opensearch_security: + audit: + type: internal_opensearch +```
+
+Use the empty string value to switch audit logging off.
diff --git a/docs/home/howto/MAINTENANCE.md b/docs/home/howto/MAINTENANCE.md index 52cc3de205..f42ead43c6 100644 --- a/docs/home/howto/MAINTENANCE.md +++ b/docs/home/howto/MAINTENANCE.md
@@ -121,12 +121,12 @@ To check status of Node Exporter, use the command: status prometheus-node-exporter ```
-#### - Elasticsearch
+#### - OpenSearch
-To check status of Elasticsearch, use the command:
+To check the status of OpenSearch, use the command:
 ```shell -systemct status elasticsearch +systemctl status opensearch ```
 Check if service is listening on 9200 (API communication port):
@@ -141,7 +141,7 @@ Check if service is listening on 9300 (nodes communication port): netstat -antup | grep 9300 ```
-Check status of Elasticsearch cluster:
+Check the status of the OpenSearch cluster:
 ```shell :9200/_cluster/health
diff --git a/docs/home/howto/MODULES.md b/docs/home/howto/MODULES.md index 05e7df4727..bf5adfbc80 100644 --- a/docs/home/howto/MODULES.md +++ b/docs/home/howto/MODULES.md
@@ -108,12 +108,12 @@ AWS: rabbitmq: count: 0 --- - kind: configuration/feature-mapping - title: Feature mapping to roles + kind: configuration/feature-mappings + title: "Feature mapping to components" name: your-cluster-name # <----- make unified with other places and build directory name provider: any specification: - roles_mapping: + mappings: repository: - repository - image-registry
diff --git a/docs/home/howto/MONITORING.md b/docs/home/howto/MONITORING.md index 3f2917c2d0..a5181bfd75 100644 --- a/docs/home/howto/MONITORING.md +++ b/docs/home/howto/MONITORING.md
@@ -11,10 +11,10 @@ Grafana: - [How to setup default admin password and user in Grafana](#how-to-setup-default-admin-password-and-user-in-grafana) - [Import and create Grafana dashboards](#import-and-create-grafana-dashboards)
-Kibana:
+OpenSearch Dashboards:
-- [How to configure Kibana](#how-to-configure-kibana) -- [How to configure default user password in Kibana](#how-to-configure-default-user-password-in-kibana)
+- [How to configure OpenSearch Dashboards](#how-to-configure-opensearch-dashboards)
+- [How to configure default passwords for service users in OpenSearch Dashboards, OpenSearch and Filebeat](#how-to-configure-default-passwords-for-service-users-in-opensearch-dashboards-opensearch-and-filebeat)
 RabbitMQ:
@@ -231,103 +231,98 @@ When dashboard creation or import succeeds you will see it on your dashboard list.
 *Note: For some dashboards, there is no data to visualize until there is traffic activity for the monitored component.*
-# Kibana
+# OpenSearch Dashboards
-Kibana is an free and open frontend application that sits on top of the Elastic Stack, providing search and data visualization capabilities for data indexed in Elasticsearch. For more informations about Kibana please refer to [the official website](https://www.elastic.co/what-is/kibana).
+
+OpenSearch Dashboards (a Kibana counterpart) is an open-source search and analytics visualization layer. It also serves as a user interface for many of the OpenSearch project's plugins. For more information, please refer to [the official website](https://opensearch.org/docs/latest/dashboards/index/).
-## How to configure Kibana - Open Distro
+## How to configure OpenSearch Dashboards
-In order to start viewing and analyzing logs with Kibana, you first need to add an index pattern for Filebeat according to the following steps:
+In order to start viewing and analyzing logs with the Dashboards tool, you first need to add an index pattern for Filebeat according to the following procedure:
-1. Goto the `Management` tab -2. Select `Index Patterns` -3. On the first step define as index pattern:
+1. Go to the `Stack Management` tab
+2. Select `Index Patterns` --> `Create index pattern`
+3. Define an index pattern:
    `filebeat-*`
-   Click next.
+   and click next.
 4. Configure the time filter field if desired by selecting `@timestamp`. This field represents the time that events occurred or were processed. You can choose not to have a time field, but you will not be able to narrow down your data by a time range.
-This filter pattern can now be used to query the Elasticsearch indices.
+This filter pattern can now be used to query the OpenSearch indices.
-By default Kibana adjusts the UTC time in `@timestamp` to the browser's local timezone. This can be changed in `Management` > `Advanced Settings` > `Timezone for date formatting`.
+By default, OpenSearch Dashboards adjusts the UTC time in `@timestamp` to the browser's local timezone. This can be changed in `Stack Management` > `Advanced Settings` > `Timezone for date formatting`.
-## How to configure default user passwords for Kibana - Open Distro, Open Distro for Elasticsearch and Filebeat
+## How to configure default passwords for service users in OpenSearch Dashboards, OpenSearch and Filebeat
-To configure admin password for Kibana - Open Distro and Open Distro for Elasticsearch you need to follow the procedure below. -There are separate procedures for `logging` and `opendistro-for-elasticsearch` roles since most of the times for `opendistro-for-elasticsearch`, `kibanaserver` and `logstash` users are not required to be present.
+Epiphany provides two components that include an OpenSearch installation: `logging` (which by default includes OpenSearch Dashboards as well) and `opensearch`.
+In order to learn more about both components, please look through the documentation:
+- [logging](./LOGGING.md#centralized-logging-setup)
+- [opensearch](./DATABASES.md#how-to-start-working-with-opensearch)
+
+To configure the admin password for OpenSearch Dashboards (previously Kibana) and OpenSearch, you need to follow the procedure below.
 ### Logging component
-#### - Logging role
+#### Logging role
+
+The default users configured by Epiphany for the `logging` role are:
+- `kibanaserver`[1] - needed by the default Epiphany installation of Dashboards
+- `filebeatservice` - needed by the default Epiphany installation of Filebeat
+Note that the `logstash` user from earlier versions of Epiphany has been replaced by the dedicated `filebeatservice` user.
+
+**We strongly advise setting a different password for each user.**
-By default Epiphany removes users that are listed in `demo_users_to_remove` section of `configuration/logging` doc. -By default, `kibanaserver` user (needed by default Epiphany installation of Kibana) and `logstash` (needed by default Epiphany -installation of Filebeat) are not removed.
If you want to perform configuration by Epiphany, set `kibanaserver_user_active` to `true` -for `kibanaserver` user or `logstash_user_active` for `logstash` user. For `logging` role, those settings are already set to `true` by default. -We strongly advice to set different password for each user.
+Additionally, Epiphany removes users that are listed in the `demo_users_to_remove` section of the `configuration/logging` manifest document.
-To change `admin` user's password, change value for `admin_password` key. For `kibanaserver` and `logstash`, change values -for `kibanaserver_password` and `logstash_password` keys respectively. Changes from logging role will be propagated to Kibana -and Filebeat configuration.
+To change the `admin` user's password, you need to change the value for the `admin_password` key (see the example below). For `kibanaserver` and `filebeatservice`, you need to change the values for the `kibanaserver_password` and `filebeatservice_password` keys respectively. Changes from the logging role will be propagated to the OpenSearch Dashboards and Filebeat configuration accordingly.
 ```yaml kind: configuration/logging title: Logging Config name: default specification: - ... + [...] admin_password: YOUR_PASSWORD kibanaserver_password: YOUR_PASSWORD - kibanaserver_user_active: true - logstash_password: YOUR_PASSWORD - logstash_user_active: true + filebeatservice_password: PASSWORD_TO_CHANGE demo_users_to_remove: - kibanaro - readall + - logstash - snapshotrestore ```
-#### - Kibana role
-
-To set password of `kibanaserver` user, which is used by Kibana for communication with Open Distro Elasticsearch backend follow the procedure -described in [Logging role](#-logging-role).
+### OpenSearch component
-#### - Filebeat role
+The default user provided by Epiphany for the OpenSearch role is `admin`. Additionally, Epiphany removes all demo users except the `admin` user.
+Those users are listed in the `demo_users_to_remove` section of the `configuration/opensearch` manifest doc (see the example below).
+To change the `admin` user's password, change the value for the `admin_password` key.
-To set password of `logstash` user, which is used by Filebeat for communication with Open Distro Elasticsearch backend follow the procedure described -in [Logging role](#-logging-role).
+**We strongly advise setting a different password for the admin user.**
-### Open Distro for Elasticsearch component
-
-By default Epiphany removes all demo users except `admin` user. Those users are listed in `demo_users_to_remove` section -of `configuration/opendistro-for-elasticsearch` doc. If you want to keep `kibanaserver` user (needed by default Epiphany installation of Kibana), -you need to remove it from `demo_users_to_remove` list and set `kibanaserver_user_active` to `true` in order to change the default password. -We strongly advice to set different password for each user.
-
-To change `admin` user's password, change value for `admin_password` key. For `kibanaserver` and `logstash`, change values for `kibanaserver_password` -and `logstash_password` keys respectively.
+Note that adding the `opensearch-dashboards` mapping in `configuration/feature-mappings` under the `opensearch` component requires commenting out the `kibanaserver` user in the `demo_users_to_remove` section (as presented in the configuration below). This step should be followed by changing the default password for the `kibanaserver` user by modifying the value for the `kibanaserver_password` key.
```yaml -kind: configuration/opendistro-for-elasticsearch -title: Open Distro for Elasticsearch Config +kind: configuration/opensearch +title: OpenSearch Config name: default specification: - ... + [...] admin_password: YOUR_PASSWORD - kibanaserver_password: YOUR_PASSWORD - kibanaserver_user_active: false - logstash_password: YOUR_PASSWORD - logstash_user_active: false + kibanaserver_password: YOUR_PASSWORD demo_users_to_remove: - kibanaro - readall - snapshotrestore - logstash - - kibanaserver + # - kibanaserver ```
-### Upgrade of Elasticsearch, Kibana and Filebeat
+### Upgrade of OpenSearch, OpenSearch Dashboards and Filebeat
+
+Keep in mind that during the upgrade process Epiphany takes the `kibanaserver` (for Dashboards) and `logstash` (for Filebeat) user passwords and re-applies them to the upgraded configuration of Filebeat and OpenSearch Dashboards. If these passwords differ from what was set up before the upgrade, keep these changes in mind the next time you log in.
+
+The Epiphany upgrade of the OpenSearch, OpenSearch Dashboards or Filebeat components will fail if the `kibanaserver` or `logstash` usernames were changed in the configuration of OpenSearch, OpenSearch Dashboards or Filebeat beforehand.
-During upgrade Epiphany takes `kibanaserver` (for Kibana) and `logstash` (for Filebeat) user passwords and re-applies them to upgraded configuration of Filebeat and Kibana. Epiphany upgrade of Open Distro, Kibana or Filebeat will fail if `kibanaserver` or `logstash` usernames were changed in configuration of Kibana, Filebeat or Open Distro for Elasticsearch.
+[1] For backward compatibility, some naming conventions (i.e. the kibanaserver user name) are still present within the new (OpenSearch) platform, though they will be phased out in the future. As a result, the Epiphany stack is also still using these names.
 # HAProxy
diff --git a/docs/home/howto/RETENTION.md b/docs/home/howto/RETENTION.md index 6ae5b8d87f..753b84ab42 100644 --- a/docs/home/howto/RETENTION.md +++ b/docs/home/howto/RETENTION.md
@@ -1,7 +1,7 @@ An Epiphany cluster has a number of components which log, collect and retain data. To make sure that these do not exceed the usable storage of the machines they running on, the following configurations are available.
-## Elasticsearch
+## OpenSearch
 TODO
diff --git a/docs/home/howto/SECURITY.md b/docs/home/howto/SECURITY.md index 6546bd2622..c9cf5ee3c3 100644 --- a/docs/home/howto/SECURITY.md +++ b/docs/home/howto/SECURITY.md
@@ -1,3 +1,29 @@
+## How to run epicli with temporary AWS credentials
+
+There is a [possibility](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
+to generate temporary AWS credentials to manage resources.
+
+To use this feature, first create temporary credentials:
+
+```shell
+aws sts get-session-token --duration-seconds <seconds> --serial-number <mfa-device-arn> --token-code <code>
+```
+
+Then these credentials can be used in the Epiphany config:
+
+```yaml +kind: epiphany-cluster +title: Epiphany cluster Config +provider: aws +name: default +specification: + cloud: + credentials: + access_key_id: + secret_access_key: + session_token: +```
+
 ## How to enable/disable Epiphany service user
 There are a few ways to perform such task in Linux.
To lock/unlock any user you can use the standard tools, such diff --git a/docs/home/howto/SECURITY_GROUPS.md b/docs/home/howto/SECURITY_GROUPS.md index d9f84a09f3..271aea315c 100644 --- a/docs/home/howto/SECURITY_GROUPS.md +++ b/docs/home/howto/SECURITY_GROUPS.md @@ -244,8 +244,8 @@ specification: cloud: region: eu-central-1 credentials: - key: YOUR_AWS_KEY - secret: YOUR_AWS_SECRET + access_key_id: YOUR_AWS_KEY + secret_access_key: YOUR_AWS_SECRET use_public_ips: true components: repository: @@ -278,7 +278,7 @@ specification: count: 0 rabbitmq: count: 0 - opendistro_for_elasticsearch: + opensearch: count: 0 single_machine: count: 0 diff --git a/docs/home/howto/UPGRADE.md b/docs/home/howto/UPGRADE.md index 676c33f9ff..ca3d700944 100644 --- a/docs/home/howto/UPGRADE.md +++ b/docs/home/howto/UPGRADE.md @@ -1,5 +1,45 @@ # Upgrade +- [Upgrade](#upgrade) + - [Introduction](#introduction) + - [Online upgrade](#online-upgrade) + - [Online prerequisites](#online-prerequisites) + - [Start the online upgrade](#start-the-online-upgrade) + - [Offline upgrade](#offline-upgrade) + - [Offline prerequisites](#offline-prerequisites) + - [Start the offline upgrade](#start-the-offline-upgrade) + - [Additional parameters](#additional-parameters) + - [Run *apply* after *upgrade*](#run-apply-after-upgrade) + - [Kubernetes applications](#kubernetes-applications) + - [How to upgrade Kafka](#how-to-upgrade-kafka) + - [Kafka upgrade](#kafka-upgrade) + - [ZooKeeper upgrade](#zookeeper-upgrade) + - [Migration from Open Distro for Elasticsearch & Kibana to OpenSearch and OpenSearch Dashboards](#migration-from-open-distro-for-elasticsearch--kibana-to-opensearch-and-opensearch-dashboards) + - [Open Distro for Elasticsearch upgrade](#open-distro-for-elasticsearch-upgrade) + - [Node exporter upgrade](#node-exporter-upgrade) + - [RabbitMQ upgrade](#rabbitmq-upgrade) + - [Kubernetes upgrade](#kubernetes-upgrade) + - [Prerequisites](#prerequisites) + - [PostgreSQL upgrade](#postgresql-upgrade) + - [Versions](#versions) + - [Prerequisites](#prerequisites-1) + - [Upgrade](#upgrade-1) + - [Manual actions](#manual-actions) + - [Post-upgrade processing](#post-upgrade-processing) + - [Statistics](#statistics) + - [Delete old cluster](#delete-old-cluster) + - [Terraform upgrade from Epiphany 1.x to 2.x](#terraform-upgrade-from-epiphany-1x-to-2x) + - [Azure](#azure) + - [v0.12.6 => v0.13.x](#v0126--v013x) + - [v0.13.x => v0.14.x](#v013x--v014x) + - [v0.14.x => v1.0.x](#v014x--v10x) + - [v1.0.x => v1.1.3](#v10x--v113) + - [AWS](#aws) + - [v0.12.6 => v0.13.x](#v0126--v013x-1) + - [v0.13.x => v0.14.x](#v013x--v014x-1) + - [v0.14.x => v1.0.x](#v014x--v10x-1) + - [v1.0.x => v1.1.3](#v10x--v113-1) + ## Introduction From Epicli 0.4.2 and up the CLI has the ability to perform upgrades on certain components on a cluster. The components @@ -51,10 +91,10 @@ Your airgapped existing cluster should meet the following requirements: 3. The cluster machines/vm`s should be accessible through SSH with a set of SSH keys you provided and configured on each machine yourself. 4. A provisioning machine that: - - Has access to the SSH keys - - Has access to the build output from when the cluster was first created. - - Is on the same network as your cluster machines - - Has Epicli 0.4.2 or up running. +- Has access to the SSH keys +- Has access to the build output from when the cluster was first created. +- Is on the same network as your cluster machines +- Has Epicli 0.4.2 or up running. *Note. 
To run Epicli check the [Prerequisites](./PREREQUISITES.md)*
 ### Start the online upgrade
@@ -86,10 +126,10 @@ Your airgapped existing cluster should meet the following requirements: - Runs the same distribution as the airgapped cluster machines/vm`s (AlmaLinux 8, RedHat 8, Ubuntu 20.04) - Has access to the internet. 5. A provisioning machine that: - - Has access to the SSH keys - - Has access to the build output from when the cluster was first created. - - Is on the same network as your cluster machines - - Has Epicli 0.4.2 or up running. +- Has access to the SSH keys +- Has access to the build output from when the cluster was first created. +- Is on the same network as your cluster machines +- Has Epicli 0.4.2 or up running.
 --- **NOTE**
@@ -200,18 +240,18 @@ specification: count: 1 rabbitmq: count: 0 - opendistro_for_elasticsearch: + opensearch: count: 0 name: clustername prefix: 'prefix' title: Epiphany cluster Config --- -kind: configuration/feature-mapping -title: Feature mapping to roles +kind: configuration/feature-mappings +title: "Feature mapping to components" provider: azure name: default specification: - roles_mapping: + mappings: kubernetes_master: - kubernetes-master - helm
@@ -260,18 +300,30 @@ then start with the rest **one by one**.
 More detailed information about ZooKeeper you can find in [ZooKeeper documentation](https://cwiki.apache.org/confluence/display/ZOOKEEPER).
-## Open Distro for Elasticsearch upgrade
+## Migration from Open Distro for Elasticsearch & Kibana to OpenSearch and OpenSearch Dashboards
 --- **NOTE**
-Before upgrade procedure make sure you have a data backup!
+Make sure you have a backup before proceeding to the migration steps described below!
 ---
+Following the decision of Elastic NV[1] to cease the open-source options available for Elasticsearch and Kibana and to release them under the Elastic license (more info [here](https://github.com/epiphany-platform/epiphany/issues/2870)), the Epiphany team decided to implement a mechanism of automatic migration from Elasticsearch 7.10.2 to OpenSearch 1.2.4.
+
+It is important to remember that while the new platform makes an effort to continue to support a broad set of third-party tools (e.g. Beats), there can be some drawbacks or even malfunctions, as not everything has been tested or explicitly added to the OpenSearch compatibility scope[2].
+Additionally, some components (e.g. Elasticsearch Curator) and some embedded service accounts (e.g. *kibanaserver*) can still be found in the OpenSearch environment, but they will be phased out.
+
+Keep in mind that for the current version of OpenSearch and OpenSearch Dashboards it is necessary to include the `filebeat` component along with the logging one in order to implement the workaround for the *Kibana API not available* [bug](https://github.com/opensearch-project/OpenSearch-Dashboards/issues/656#issuecomment-978036236).
+
+Upgrading ESS/ODFE versions not shipped with previous Epiphany releases is not supported. If your environment is customized, it needs to be standardized (as described in [this](https://opensearch.org/docs/latest/upgrade-to/upgrade-to/#upgrade-paths) table) prior to running the migration.
+
+Migration of Elasticsearch Curator is not supported. More info on the use of Curator in an OpenSearch environment can be found [here](https://github.com/opensearch-project/OpenSearch/issues/1352).
+
+[1] https://www.elastic.co/pricing/faq/licensing#what-are-the-key-changes-being-made-to-the-elastic-license
+
+[2] https://opensearch.org/docs/latest/clients/agents-and-ingestion-tools/index/
-Since Epiphany v1.0.0 we provide upgrade elasticsearch-oss package to v7.10.2 and opendistro-\* plugins package to -v1.13.\*. Upgrade will be performed automatically when the upgrade procedure detects your `logging` -, `opendistro_for_elasticsearch` or `kibana` hosts.
+The upgrade will be performed automatically when the upgrade procedure detects your `logging`, `opensearch` or `kibana` hosts.
 Upgrade of Elasticsearch uses API calls (GET, PUT, POST) which requires an admin TLS certificate. By default, Epiphany generates self-signed certificates for this purpose but if you use your own, you have to provide the admin certificate's
@@ -284,7 +336,7 @@ logging: cert_path: /etc/elasticsearch/custom-admin.pem key_path: /etc/elasticsearch/custom-admin-key.pem
-opendistro_for_elasticsearch:
+opensearch:
 upgrade_config: custom_admin_certificate: cert_path: /etc/elasticsearch/custom-admin.pem
diff --git a/docs/home/howto/kubernetes/PERSISTENT_STORAGE.md b/docs/home/howto/kubernetes/PERSISTENT_STORAGE.md index 57a02eaae7..5d4801871c 100644 --- a/docs/home/howto/kubernetes/PERSISTENT_STORAGE.md +++ b/docs/home/howto/kubernetes/PERSISTENT_STORAGE.md
@@ -27,6 +27,15 @@ To add Rook/Ceph support in Epiphany you need to add to your cluster configuration:
 Adding the storage is described below in separate sections for Azure, AWS and on premise environments. Rook/Ceph configuration in Epiphany is described after add disk paragraphs.
+#### Parameter `enable_controller_attach_detach`
+
+Rook requires the Kubelet parameter `--enable-controller-attach-detach` to be set to `true`. From Epiphany v2.0.1 this parameter is set to `true` by default. Users who would like to change its value can do so by modifying the `specification.advanced.enable_controller_attach_detach` setting in the `configuration/kubernetes-master` doc (see the sketch after this note).
+*Note*: In Epiphany v2.0.0 the `--enable-controller-attach-detach` parameter is set to `false` by default. In order to change its value, manual steps on each affected Kubernetes node are required:
+- modify the file `/var/lib/kubelet/kubeadm-flags.env` by removing the attach-detach flag
+- add the flag to the `/var/lib/kubelet/config.yaml` file and set its value to `true`
+- restart kubelet with `systemctl restart kubelet`
+See [Set Kubelet parameters via a config file](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/) for more information about Kubelet parameters.
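+
+For reference, a minimal sketch of the relevant part of the `configuration/kubernetes-master` doc (only the keys discussed here are shown; the remaining defaults come from `epicli init ... --full` and the `title` value is illustrative):
+
+```yaml
+kind: configuration/kubernetes-master
+title: Kubernetes Master Config
+name: default
+specification:
+  advanced:
+    # Rook needs controller-based attach/detach; this corresponds to
+    # the Kubelet 'enableControllerAttachDetach' setting.
+    enable_controller_attach_detach: true
+```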
+ #### Create disks for Rook/Ceph Cluster Storage - Azure To create Rook/Ceph Cluster Storage on Azure first you need to add empty disk resource to Kubernetes cluster in key `specification.additional_disks`, under `kind: infrastructure/virtual-machine` for configuration of kubernetes node machine: diff --git a/pytest.ini b/pytest.ini index 042744a4e9..0d7b30de6e 100644 --- a/pytest.ini +++ b/pytest.ini @@ -5,4 +5,5 @@ filterwarnings = ignore:The distutils package is deprecated:DeprecationWarning:packaging.tags testpaths = tests/unit/ + ansible/playbooks/roles/repository/library/tests/ ansible/playbooks/roles/repository/files/download-requirements/tests/ diff --git a/schema/any/defaults/configuration/minimal-cluster-config.yml b/schema/any/defaults/configuration/minimal-cluster-config.yml index 02a57099b1..7252b49dcf 100644 --- a/schema/any/defaults/configuration/minimal-cluster-config.yml +++ b/schema/any/defaults/configuration/minimal-cluster-config.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: any @@ -46,10 +47,10 @@ specification: count: 1 machines: - default-rabbitmq - opendistro_for_elasticsearch: + opensearch: count: 1 machines: - - default-opendistro + - default-opensearch --- kind: infrastructure/machine provider: any @@ -130,7 +131,7 @@ specification: --- kind: infrastructure/machine provider: any -name: default-opendistro +name: default-opensearch specification: - hostname: opendistro # YOUR-MACHINE-HOSTNAME + hostname: opensearch # YOUR-MACHINE-HOSTNAME ip: 192.168.100.112 # YOUR-MACHINE-IP diff --git a/schema/any/defaults/epiphany-cluster.yml b/schema/any/defaults/epiphany-cluster.yml index 27a3014ac2..f3dc9a0a08 100644 --- a/schema/any/defaults/epiphany-cluster.yml +++ b/schema/any/defaults/epiphany-cluster.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: any @@ -41,7 +42,7 @@ specification: count: 0 machines: [] configuration: default - opendistro_for_elasticsearch: + opensearch: count: 0 machines: [] configuration: default diff --git a/schema/aws/defaults/configuration/minimal-cluster-config.yml b/schema/aws/defaults/configuration/minimal-cluster-config.yml index 629e1b5675..620773fde2 100644 --- a/schema/aws/defaults/configuration/minimal-cluster-config.yml +++ b/schema/aws/defaults/configuration/minimal-cluster-config.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: aws @@ -12,8 +13,8 @@ specification: k8s_as_cloud_service: False use_public_ips: False # When not using public IPs you have to provide connectivity via private IPs (VPN) credentials: - key: XXXX-XXXX-XXXX - secret: XXXXXXXXXXXXXXXX + access_key_id: XXXX-XXXX-XXXX + secret_access_key: XXXXXXXXXXXXXXXX default_os_image: default components: repository: @@ -34,5 +35,5 @@ specification: count: 1 rabbitmq: count: 1 - opendistro_for_elasticsearch: + opensearch: count: 1 diff --git a/schema/aws/defaults/epiphany-cluster.yml b/schema/aws/defaults/epiphany-cluster.yml index f50a21cb6d..24bb238e33 100644 --- a/schema/aws/defaults/epiphany-cluster.yml +++ b/schema/aws/defaults/epiphany-cluster.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: aws @@ -14,8 +15,9 @@ specification: use_public_ips: False # When not using public IPs you have to provide connectivity via private IPs (VPN) region: eu-west-2 credentials: # todo change it to get credentials from vault - key: 3124-4124-4124 - secret: DADFAFHCJHCAUYEAk + access_key_id: 3124-4124-4124 + 
secret_access_key: DADFAFHCJHCAUYEAk + session_token: '' network: use_network_security_groups: True default_os_image: default @@ -76,7 +78,7 @@ specification: subnets: - address_pool: 10.1.8.0/24 availability_zone: eu-west-2a - opendistro_for_elasticsearch: + opensearch: count: 0 machine: logging-machine configuration: default diff --git a/schema/aws/defaults/infrastructure/cloud-os-image-defaults.yml b/schema/aws/defaults/infrastructure/cloud-os-image-defaults.yml index caace6d377..b631e9e4a0 100644 --- a/schema/aws/defaults/infrastructure/cloud-os-image-defaults.yml +++ b/schema/aws/defaults/infrastructure/cloud-os-image-defaults.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/cloud-os-image-defaults title: "Cloud OS Image Defaults" name: default diff --git a/schema/aws/defaults/infrastructure/default-security-group.yml b/schema/aws/defaults/infrastructure/default-security-group.yml index d3676a9b96..3d5785499d 100644 --- a/schema/aws/defaults/infrastructure/default-security-group.yml +++ b/schema/aws/defaults/infrastructure/default-security-group.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/default-security-group title: "Default Security Group Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/efs-storage.yml b/schema/aws/defaults/infrastructure/efs-storage.yml index 56e62d5ccf..68d76c4f5f 100644 --- a/schema/aws/defaults/infrastructure/efs-storage.yml +++ b/schema/aws/defaults/infrastructure/efs-storage.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/efs-storage title: "Elastic File System Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/internet-gateway.yml b/schema/aws/defaults/infrastructure/internet-gateway.yml index a00f42eff8..433c17cee5 100644 --- a/schema/aws/defaults/infrastructure/internet-gateway.yml +++ b/schema/aws/defaults/infrastructure/internet-gateway.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/internet-gateway title: "Internet Gateway Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/public-key.yml b/schema/aws/defaults/infrastructure/public-key.yml index 28efb65fc2..e570dab976 100644 --- a/schema/aws/defaults/infrastructure/public-key.yml +++ b/schema/aws/defaults/infrastructure/public-key.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/public-key title: "Public Key" provider: aws diff --git a/schema/aws/defaults/infrastructure/resource-group.yml b/schema/aws/defaults/infrastructure/resource-group.yml index 6f8cbaa3f0..a65b690fc4 100644 --- a/schema/aws/defaults/infrastructure/resource-group.yml +++ b/schema/aws/defaults/infrastructure/resource-group.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/resource-group title: "Resource Group" provider: aws diff --git a/schema/aws/defaults/infrastructure/route-table-association.yml b/schema/aws/defaults/infrastructure/route-table-association.yml index 39d15037d4..6dd321db97 100644 --- a/schema/aws/defaults/infrastructure/route-table-association.yml +++ b/schema/aws/defaults/infrastructure/route-table-association.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/route-table-association title: Route Table Association Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/route-table.yml b/schema/aws/defaults/infrastructure/route-table.yml index 48743eeb1a..1f45ef1d66 100644 --- a/schema/aws/defaults/infrastructure/route-table.yml +++ b/schema/aws/defaults/infrastructure/route-table.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/route-table title: "Route Table Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/security-group-rule.yml 
b/schema/aws/defaults/infrastructure/security-group-rule.yml index 0f35d847f3..2da9a4eb2f 100644 --- a/schema/aws/defaults/infrastructure/security-group-rule.yml +++ b/schema/aws/defaults/infrastructure/security-group-rule.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/security-group-rule title: "Default Group Rule" provider: aws diff --git a/schema/aws/defaults/infrastructure/security-group.yml b/schema/aws/defaults/infrastructure/security-group.yml index 71c2db4070..bc1044b636 100644 --- a/schema/aws/defaults/infrastructure/security-group.yml +++ b/schema/aws/defaults/infrastructure/security-group.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/security-group title: "Security Group Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/subnet.yml b/schema/aws/defaults/infrastructure/subnet.yml index 84406b72af..4f77b5b672 100644 --- a/schema/aws/defaults/infrastructure/subnet.yml +++ b/schema/aws/defaults/infrastructure/subnet.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/subnet title: "Subnet Config" provider: aws diff --git a/schema/aws/defaults/infrastructure/virtual-machine.yml b/schema/aws/defaults/infrastructure/virtual-machine.yml index 7e27b4ebfa..68dbd5ff1b 100644 --- a/schema/aws/defaults/infrastructure/virtual-machine.yml +++ b/schema/aws/defaults/infrastructure/virtual-machine.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/virtual-machine title: "Virtual Machine Infra" provider: aws @@ -364,8 +365,8 @@ specification: destination_port_range: "9300" source_address_prefix: "10.1.0.0/20" destination_address_prefix: "0.0.0.0/0" - - name: Kibana - description: Allow Kibana + - name: OpenSearchDashboards + description: Allow OpenSearch Dashboards direction: Inbound protocol: "Tcp" destination_port_range: "5601" diff --git a/schema/aws/defaults/infrastructure/vpc.yml b/schema/aws/defaults/infrastructure/vpc.yml index b706448b9e..1d51c22d02 100644 --- a/schema/aws/defaults/infrastructure/vpc.yml +++ b/schema/aws/defaults/infrastructure/vpc.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/vpc title: "VPC Config" provider: aws diff --git a/schema/aws/validation/infrastructure/default-security-group.yml b/schema/aws/validation/infrastructure/default-security-group.yml index d1c291a417..7d97d9b11e 100644 --- a/schema/aws/validation/infrastructure/default-security-group.yml +++ b/schema/aws/validation/infrastructure/default-security-group.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Default-security-group specification schema" description: "Default-security-group specification schema" @@ -6,7 +7,7 @@ properties: name: type: string cluster_name: - type: string + type: string vpc_name: type: string rules: diff --git a/schema/aws/validation/infrastructure/efs-storage.yml b/schema/aws/validation/infrastructure/efs-storage.yml index d69719885d..b627ec3070 100644 --- a/schema/aws/validation/infrastructure/efs-storage.yml +++ b/schema/aws/validation/infrastructure/efs-storage.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Efs-storage specification schema" description: "Efs-storage specification schema" diff --git a/schema/aws/validation/infrastructure/internet-gateway.yml b/schema/aws/validation/infrastructure/internet-gateway.yml index fe6a32b2bc..b743c2abba 100644 --- a/schema/aws/validation/infrastructure/internet-gateway.yml +++ b/schema/aws/validation/infrastructure/internet-gateway.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Internet-gateway specification schema" description: "Internet-gateway specification schema" diff --git 
a/schema/aws/validation/infrastructure/public-key.yml b/schema/aws/validation/infrastructure/public-key.yml index 27a9f7bbca..f54a3f94cf 100644 --- a/schema/aws/validation/infrastructure/public-key.yml +++ b/schema/aws/validation/infrastructure/public-key.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Public-key specification schema" description: "Public-key specification schema" diff --git a/schema/aws/validation/infrastructure/resource-group.yml b/schema/aws/validation/infrastructure/resource-group.yml index 9d38a83a81..f86ac8f9b7 100644 --- a/schema/aws/validation/infrastructure/resource-group.yml +++ b/schema/aws/validation/infrastructure/resource-group.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Resource-group specification schema" description: "Resource-group specification schema" diff --git a/schema/aws/validation/infrastructure/route-table-association.yml b/schema/aws/validation/infrastructure/route-table-association.yml index e6e0279040..87be70c00c 100644 --- a/schema/aws/validation/infrastructure/route-table-association.yml +++ b/schema/aws/validation/infrastructure/route-table-association.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Route-table-association specification schema" description: "Route-table-association specification schema" diff --git a/schema/aws/validation/infrastructure/route-table.yml b/schema/aws/validation/infrastructure/route-table.yml index 44cd565ec3..2d13954e13 100644 --- a/schema/aws/validation/infrastructure/route-table.yml +++ b/schema/aws/validation/infrastructure/route-table.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Route-table specification schema" description: "Route-table specification schema" diff --git a/schema/aws/validation/infrastructure/security-group-rule.yml b/schema/aws/validation/infrastructure/security-group-rule.yml index 5d6e4fdefa..e31245511c 100644 --- a/schema/aws/validation/infrastructure/security-group-rule.yml +++ b/schema/aws/validation/infrastructure/security-group-rule.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Security-group-rule specification schema" description: "Security-group-rule specification schema" diff --git a/schema/aws/validation/infrastructure/security-group.yml b/schema/aws/validation/infrastructure/security-group.yml index 9023bca08f..e05e3195cb 100644 --- a/schema/aws/validation/infrastructure/security-group.yml +++ b/schema/aws/validation/infrastructure/security-group.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Security-group specification schema" description: "Security-group specification schema" diff --git a/schema/aws/validation/infrastructure/subnet.yml b/schema/aws/validation/infrastructure/subnet.yml index 0d49f682cd..75e479bbc1 100644 --- a/schema/aws/validation/infrastructure/subnet.yml +++ b/schema/aws/validation/infrastructure/subnet.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Subnet specification schema" description: "Subnet specification schema" diff --git a/schema/aws/validation/infrastructure/virtual-machine.yml b/schema/aws/validation/infrastructure/virtual-machine.yml index 6669f35ceb..5cd57c9371 100644 --- a/schema/aws/validation/infrastructure/virtual-machine.yml +++ b/schema/aws/validation/infrastructure/virtual-machine.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Virtual machine specification schema" description: "Virtual machine specification schema" diff --git a/schema/aws/validation/infrastructure/vpc.yml b/schema/aws/validation/infrastructure/vpc.yml index 6af678a4c7..64ff0d8f88 
100644 --- a/schema/aws/validation/infrastructure/vpc.yml +++ b/schema/aws/validation/infrastructure/vpc.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Vpc specification schema" description: "Vpc specification schema" diff --git a/schema/azure/defaults/configuration/minimal-cluster-config.yml b/schema/azure/defaults/configuration/minimal-cluster-config.yml index ecb4d2b695..dee1136b0a 100644 --- a/schema/azure/defaults/configuration/minimal-cluster-config.yml +++ b/schema/azure/defaults/configuration/minimal-cluster-config.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: azure @@ -32,5 +33,5 @@ specification: count: 1 rabbitmq: count: 1 - opendistro_for_elasticsearch: + opensearch: count: 1 diff --git a/schema/azure/defaults/epiphany-cluster.yml b/schema/azure/defaults/epiphany-cluster.yml index 6e5026e9a7..c066f29f0a 100644 --- a/schema/azure/defaults/epiphany-cluster.yml +++ b/schema/azure/defaults/epiphany-cluster.yml @@ -1,3 +1,4 @@ +--- kind: epiphany-cluster title: "Epiphany cluster Config" provider: azure @@ -76,7 +77,7 @@ specification: configuration: default subnets: - address_pool: 10.1.8.0/24 - opendistro_for_elasticsearch: + opensearch: count: 0 alt_component_name: '' machine: logging-machine diff --git a/schema/azure/defaults/infrastructure/availability-set.yml b/schema/azure/defaults/infrastructure/availability-set.yml index 66c5b9937e..f7ab017c29 100644 --- a/schema/azure/defaults/infrastructure/availability-set.yml +++ b/schema/azure/defaults/infrastructure/availability-set.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/availability-set title: "Availability Set Config" provider: azure diff --git a/schema/azure/defaults/infrastructure/cloud-init-custom-data.yml b/schema/azure/defaults/infrastructure/cloud-init-custom-data.yml index 960039cdef..ed75e9a09d 100644 --- a/schema/azure/defaults/infrastructure/cloud-init-custom-data.yml +++ b/schema/azure/defaults/infrastructure/cloud-init-custom-data.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/cloud-init-custom-data title: cloud-init user-data provider: azure diff --git a/schema/azure/defaults/infrastructure/cloud-os-image-defaults.yml b/schema/azure/defaults/infrastructure/cloud-os-image-defaults.yml index f867b5dc3c..9355439e08 100644 --- a/schema/azure/defaults/infrastructure/cloud-os-image-defaults.yml +++ b/schema/azure/defaults/infrastructure/cloud-os-image-defaults.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/cloud-os-image-defaults title: "Cloud OS Image Defaults" name: default diff --git a/schema/azure/defaults/infrastructure/network-interface-security-group-association.yml b/schema/azure/defaults/infrastructure/network-interface-security-group-association.yml index bdbefcb464..cad79028e5 100644 --- a/schema/azure/defaults/infrastructure/network-interface-security-group-association.yml +++ b/schema/azure/defaults/infrastructure/network-interface-security-group-association.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/network-interface-security-group-association title: "Network Interface Security Group Association" provider: azure diff --git a/schema/azure/defaults/infrastructure/network-interface.yml b/schema/azure/defaults/infrastructure/network-interface.yml index 678b4ba417..ac812ff06b 100644 --- a/schema/azure/defaults/infrastructure/network-interface.yml +++ b/schema/azure/defaults/infrastructure/network-interface.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/network-interface title: "Network Interface Config" provider: azure @@ -9,5 +10,5 @@ specification: 
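The Azure cluster defaults above rename the `opendistro_for_elasticsearch` component key to `opensearch`, so a user config carried over from an earlier release would no longer line up with the defaults. A quick pre-flight scan for the legacy key is cheap insurance; this is illustrative only and assumes the components sit under `specification.components`, which the excerpt here does not show:

```python
# Illustrative only: flag the pre-rename component key in a user cluster doc.
# Assumes components live under specification.components (not shown in this diff).
import yaml

RENAMED = {"opendistro_for_elasticsearch": "opensearch"}

def legacy_components(doc: dict) -> list:
    comps = doc.get("specification", {}).get("components", {})
    return [f"{old} -> {new}" for old, new in RENAMED.items() if old in comps]

doc = yaml.safe_load("""
kind: epiphany-cluster
specification:
  components:
    opendistro_for_elasticsearch:
      count: 1
""")
print(legacy_components(doc))  # ['opendistro_for_elasticsearch -> opensearch']
```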
ip_configuration_name: SET_BY_AUTOMATION subnet_name: SET_BY_AUTOMATION use_public_ip: SET_BY_AUTOMATION - public_ip_name: SET_BY_AUTOMATION - enable_accelerated_networking: SET_BY_AUTOMATION \ No newline at end of file + public_ip_name: SET_BY_AUTOMATION + enable_accelerated_networking: SET_BY_AUTOMATION diff --git a/schema/azure/defaults/infrastructure/network-security-group.yml b/schema/azure/defaults/infrastructure/network-security-group.yml index 659a73906b..fffae4c132 100644 --- a/schema/azure/defaults/infrastructure/network-security-group.yml +++ b/schema/azure/defaults/infrastructure/network-security-group.yml @@ -1,7 +1,8 @@ +--- kind: infrastructure/network-security-group title: "Security Group Config" provider: azure name: default specification: name: SET_BY_AUTOMATION - rules: [] \ No newline at end of file + rules: [] diff --git a/schema/azure/defaults/infrastructure/public-ip.yml b/schema/azure/defaults/infrastructure/public-ip.yml index e949e3dbe6..dc4ca62991 100644 --- a/schema/azure/defaults/infrastructure/public-ip.yml +++ b/schema/azure/defaults/infrastructure/public-ip.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/public-ip title: "Public IP Config" provider: azure @@ -6,4 +7,4 @@ specification: name: SET_BY_AUTOMATION allocation_method: SET_BY_AUTOMATION idle_timeout_in_minutes: SET_BY_AUTOMATION - sku: SET_BY_AUTOMATION \ No newline at end of file + sku: SET_BY_AUTOMATION diff --git a/schema/azure/defaults/infrastructure/resource-group.yml b/schema/azure/defaults/infrastructure/resource-group.yml index 12ce00d718..78df17b5e8 100644 --- a/schema/azure/defaults/infrastructure/resource-group.yml +++ b/schema/azure/defaults/infrastructure/resource-group.yml @@ -1,7 +1,8 @@ +--- kind: infrastructure/resource-group title: "Resource Group" provider: azure name: default specification: name: SET_BY_AUTOMATION - region: SET_BY_AUTOMATION \ No newline at end of file + region: SET_BY_AUTOMATION diff --git a/schema/azure/defaults/infrastructure/storage-share.yml b/schema/azure/defaults/infrastructure/storage-share.yml index deb08ad6c1..1ff7384990 100644 --- a/schema/azure/defaults/infrastructure/storage-share.yml +++ b/schema/azure/defaults/infrastructure/storage-share.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/storage-share title: "Azure shared storage" provider: azure @@ -5,4 +6,4 @@ name: default specification: name: SET_BY_AUTOMATION storage_account_name: SET_BY_AUTOMATION - quota: 50 \ No newline at end of file + quota: 50 diff --git a/schema/azure/defaults/infrastructure/subnet-network-security-group-association.yml b/schema/azure/defaults/infrastructure/subnet-network-security-group-association.yml index db65c30cd8..f90f8cfe92 100644 --- a/schema/azure/defaults/infrastructure/subnet-network-security-group-association.yml +++ b/schema/azure/defaults/infrastructure/subnet-network-security-group-association.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/subnet-network-security-group-association title: "Subnet Network Security Group Association" provider: azure diff --git a/schema/azure/defaults/infrastructure/subnet.yml b/schema/azure/defaults/infrastructure/subnet.yml index 73cb551c0e..0c78d66e3f 100644 --- a/schema/azure/defaults/infrastructure/subnet.yml +++ b/schema/azure/defaults/infrastructure/subnet.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/subnet title: "Subnet Config" provider: azure @@ -5,4 +6,4 @@ name: default specification: name: SET_BY_AUTOMATION address_prefix: SET_BY_AUTOMATION - security_group_name: SET_BY_AUTOMATION \ No newline at end of file + 
security_group_name: SET_BY_AUTOMATION diff --git a/schema/azure/defaults/infrastructure/virtual-machine.yml b/schema/azure/defaults/infrastructure/virtual-machine.yml index cd5f43129e..a04e61ad44 100644 --- a/schema/azure/defaults/infrastructure/virtual-machine.yml +++ b/schema/azure/defaults/infrastructure/virtual-machine.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/virtual-machine title: "Virtual Machine Infra" provider: azure @@ -389,8 +390,8 @@ specification: destination_port_range: "9300" source_address_prefix: "10.1.0.0/20" destination_address_prefix: "0.0.0.0/0" - - name: Kibana - description: Allow Kibana + - name: OpenSearchDashboards + description: Allow OpenSearch Dashboards priority: 203 direction: Inbound access: Allow diff --git a/schema/azure/defaults/infrastructure/vnet.yml b/schema/azure/defaults/infrastructure/vnet.yml index a437c3d5b2..7d76c924c7 100644 --- a/schema/azure/defaults/infrastructure/vnet.yml +++ b/schema/azure/defaults/infrastructure/vnet.yml @@ -1,3 +1,4 @@ +--- kind: infrastructure/vnet title: "VNET Config" provider: azure diff --git a/schema/azure/validation/infrastructure/availability-set.yml b/schema/azure/validation/infrastructure/availability-set.yml index 2210c6d43c..0e9f3e04fe 100644 --- a/schema/azure/validation/infrastructure/availability-set.yml +++ b/schema/azure/validation/infrastructure/availability-set.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Availability-set specification schema" description: "Availability-set specification schema" diff --git a/schema/azure/validation/infrastructure/cloud-init-custom-data.yml b/schema/azure/validation/infrastructure/cloud-init-custom-data.yml index 4f49590041..d1ad6e0658 100644 --- a/schema/azure/validation/infrastructure/cloud-init-custom-data.yml +++ b/schema/azure/validation/infrastructure/cloud-init-custom-data.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Cloud-init-custom-data specification schema" description: "Cloud-init-custom-data specification schema" diff --git a/schema/azure/validation/infrastructure/network-interface-security-group-association.yml b/schema/azure/validation/infrastructure/network-interface-security-group-association.yml index 860886fe44..1df79ec307 100644 --- a/schema/azure/validation/infrastructure/network-interface-security-group-association.yml +++ b/schema/azure/validation/infrastructure/network-interface-security-group-association.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Network-interface-security-group-association specification schema" description: "Network-interface-security-group-association specification schema" diff --git a/schema/azure/validation/infrastructure/network-interface.yml b/schema/azure/validation/infrastructure/network-interface.yml index 65371cfd3a..f043828b9c 100644 --- a/schema/azure/validation/infrastructure/network-interface.yml +++ b/schema/azure/validation/infrastructure/network-interface.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Network-interface specification schema" description: "Network-interface specification schema" diff --git a/schema/azure/validation/infrastructure/network-security-group.yml b/schema/azure/validation/infrastructure/network-security-group.yml index 91bd2c5f5b..1da730758c 100644 --- a/schema/azure/validation/infrastructure/network-security-group.yml +++ b/schema/azure/validation/infrastructure/network-security-group.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Network-security-group specification schema" description: "Network-security-group 
specification schema" diff --git a/schema/azure/validation/infrastructure/public-ip.yml b/schema/azure/validation/infrastructure/public-ip.yml index cde20e7287..37d438b9f9 100644 --- a/schema/azure/validation/infrastructure/public-ip.yml +++ b/schema/azure/validation/infrastructure/public-ip.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Public-ip specification schema" description: "Public-ip specification schema" diff --git a/schema/azure/validation/infrastructure/resource-group.yml b/schema/azure/validation/infrastructure/resource-group.yml index 39887bdc14..1b6c0d1729 100644 --- a/schema/azure/validation/infrastructure/resource-group.yml +++ b/schema/azure/validation/infrastructure/resource-group.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Resource-group specification schema" description: "Resource-group specification schema" diff --git a/schema/azure/validation/infrastructure/storage-share.yml b/schema/azure/validation/infrastructure/storage-share.yml index a09439d678..c6e646307f 100644 --- a/schema/azure/validation/infrastructure/storage-share.yml +++ b/schema/azure/validation/infrastructure/storage-share.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Storage-share specification schema" description: "Storage-share specification schema" diff --git a/schema/azure/validation/infrastructure/subnet-network-security-group-association.yml b/schema/azure/validation/infrastructure/subnet-network-security-group-association.yml index 6932697d04..f7fffaa45e 100644 --- a/schema/azure/validation/infrastructure/subnet-network-security-group-association.yml +++ b/schema/azure/validation/infrastructure/subnet-network-security-group-association.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Subnet-network-security-group-association specification schema" description: "Subnet-network-security-group-association specification schema" diff --git a/schema/azure/validation/infrastructure/subnet.yml b/schema/azure/validation/infrastructure/subnet.yml index 1c19b3187a..05ad8d50ae 100644 --- a/schema/azure/validation/infrastructure/subnet.yml +++ b/schema/azure/validation/infrastructure/subnet.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Subnet specification schema" description: "Subnet specification schema" diff --git a/schema/azure/validation/infrastructure/virtual-machine.yml b/schema/azure/validation/infrastructure/virtual-machine.yml index f44e742dcb..fd11fb0440 100644 --- a/schema/azure/validation/infrastructure/virtual-machine.yml +++ b/schema/azure/validation/infrastructure/virtual-machine.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Virtual-machine specification schema" description: "Virtual-machine specification schema" diff --git a/schema/azure/validation/infrastructure/vnet.yml b/schema/azure/validation/infrastructure/vnet.yml index db1a88b6d9..4349bc3d55 100644 --- a/schema/azure/validation/infrastructure/vnet.yml +++ b/schema/azure/validation/infrastructure/vnet.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Vnet specification schema" description: "Vnet specification schema" diff --git a/schema/common/defaults/configuration/applications.yml b/schema/common/defaults/configuration/applications.yml index 1ca1bbf561..96e8308043 100644 --- a/schema/common/defaults/configuration/applications.yml +++ b/schema/common/defaults/configuration/applications.yml @@ -1,3 +1,4 @@ +--- kind: configuration/applications title: "Kubernetes Applications Config" name: default diff --git a/schema/common/defaults/configuration/backup.yml 
b/schema/common/defaults/configuration/backup.yml index 2a5467b4ad..f5b0be9bee 100644 --- a/schema/common/defaults/configuration/backup.yml +++ b/schema/common/defaults/configuration/backup.yml @@ -1,3 +1,4 @@ +--- kind: configuration/backup title: Backup Config name: default diff --git a/schema/common/defaults/configuration/elasticsearch-curator.yml b/schema/common/defaults/configuration/elasticsearch-curator.yml index ce5aee6831..13b3b5fcb0 100644 --- a/schema/common/defaults/configuration/elasticsearch-curator.yml +++ b/schema/common/defaults/configuration/elasticsearch-curator.yml @@ -1,3 +1,4 @@ +--- kind: configuration/elasticsearch-curator title: Elasticsearch Curator name: default diff --git a/schema/common/defaults/configuration/feature-mapping.yml b/schema/common/defaults/configuration/feature-mappings.yml similarity index 54% rename from schema/common/defaults/configuration/feature-mapping.yml rename to schema/common/defaults/configuration/feature-mappings.yml index 72d4ad09b0..df1b5a816e 100644 --- a/schema/common/defaults/configuration/feature-mapping.yml +++ b/schema/common/defaults/configuration/feature-mappings.yml @@ -1,58 +1,9 @@ -kind: configuration/feature-mapping -title: "Feature mapping to roles" +--- +kind: configuration/feature-mappings +title: "Feature mapping to components" name: default specification: - available_roles: - - name: repository - enabled: true - - name: firewall - enabled: true - - name: image-registry - enabled: true - - name: kubernetes-master - enabled: true - - name: kubernetes-node - enabled: true - - name: helm - enabled: true - - name: logging - enabled: true - - name: opendistro-for-elasticsearch - enabled: true - - name: elasticsearch-curator - enabled: true - - name: kibana - enabled: true - - name: filebeat - enabled: true - - name: prometheus - enabled: true - - name: grafana - enabled: true - - name: node-exporter - enabled: true - - name: jmx-exporter - enabled: true - - name: zookeeper - enabled: true - - name: kafka - enabled: true - - name: rabbitmq - enabled: true - - name: kafka-exporter - enabled: true - - name: postgresql - enabled: true - - name: postgres-exporter - enabled: true - - name: haproxy - enabled: true - - name: applications - enabled: true - - name: rook - enabled: true - - roles_mapping: + mappings: kafka: - zookeeper - jmx-exporter @@ -68,7 +19,7 @@ specification: - firewall logging: - logging - - kibana + - opensearch-dashboards - node-exporter - filebeat - firewall @@ -126,8 +77,8 @@ specification: - node-exporter - filebeat - firewall - opendistro_for_elasticsearch: - - opendistro-for-elasticsearch + opensearch: + - opensearch - node-exporter - filebeat - firewall diff --git a/schema/common/defaults/configuration/features.yml b/schema/common/defaults/configuration/features.yml new file mode 100644 index 0000000000..091223414c --- /dev/null +++ b/schema/common/defaults/configuration/features.yml @@ -0,0 +1,54 @@ +--- +kind: configuration/features +title: "Features to be enabled/disabled" +name: default +specification: + features: + - name: repository + enabled: true + - name: firewall + enabled: true + - name: image-registry + enabled: true + - name: kubernetes-master + enabled: true + - name: kubernetes-node + enabled: true + - name: helm + enabled: true + - name: logging + enabled: true + - name: opensearch + enabled: true + - name: elasticsearch-curator + enabled: true + - name: opensearch-dashboards + enabled: true + - name: filebeat + enabled: true + - name: prometheus + enabled: true + - name: grafana + 
enabled: true + - name: node-exporter + enabled: true + - name: jmx-exporter + enabled: true + - name: zookeeper + enabled: true + - name: kafka + enabled: true + - name: rabbitmq + enabled: true + - name: kafka-exporter + enabled: true + - name: postgresql + enabled: true + - name: postgres-exporter + enabled: true + - name: haproxy + enabled: true + - name: applications + enabled: true + - name: rook + enabled: true diff --git a/schema/common/defaults/configuration/filebeat.yml b/schema/common/defaults/configuration/filebeat.yml index 22f77ff9a8..d7b7a063ce 100644 --- a/schema/common/defaults/configuration/filebeat.yml +++ b/schema/common/defaults/configuration/filebeat.yml @@ -1,8 +1,9 @@ +--- kind: configuration/filebeat title: Filebeat name: default specification: - kibana: + opensearch: dashboards: index: filebeat-* enabled: auto diff --git a/schema/common/defaults/configuration/firewall.yml b/schema/common/defaults/configuration/firewall.yml index 8a9d66493c..b4b85bc11a 100644 --- a/schema/common/defaults/configuration/firewall.yml +++ b/schema/common/defaults/configuration/firewall.yml @@ -1,3 +1,4 @@ +--- kind: configuration/firewall title: OS level firewall name: default @@ -45,7 +46,7 @@ specification: enabled: true ports: - 9308/tcp - kibana: + opensearch_dashboards: enabled: true ports: - 5601/tcp @@ -71,7 +72,7 @@ specification: enabled: true ports: - 9100/tcp - opendistro_for_elasticsearch: + opensearch: enabled: true ports: - 9200/tcp diff --git a/schema/common/defaults/configuration/grafana.yml b/schema/common/defaults/configuration/grafana.yml index f958ff6195..9999d5ebec 100644 --- a/schema/common/defaults/configuration/grafana.yml +++ b/schema/common/defaults/configuration/grafana.yml @@ -1,3 +1,4 @@ +--- kind: configuration/grafana title: "Grafana" name: default diff --git a/schema/common/defaults/configuration/haproxy.yml b/schema/common/defaults/configuration/haproxy.yml index 4e34b495f8..820a36c5fe 100644 --- a/schema/common/defaults/configuration/haproxy.yml +++ b/schema/common/defaults/configuration/haproxy.yml @@ -1,3 +1,4 @@ +--- kind: configuration/haproxy title: "HAProxy" name: default diff --git a/schema/common/defaults/configuration/helm-charts.yml b/schema/common/defaults/configuration/helm-charts.yml index 2be4d5c997..dd106ee308 100644 --- a/schema/common/defaults/configuration/helm-charts.yml +++ b/schema/common/defaults/configuration/helm-charts.yml @@ -1,3 +1,4 @@ +--- kind: configuration/helm-charts title: "Helm charts" name: default diff --git a/schema/common/defaults/configuration/helm.yml b/schema/common/defaults/configuration/helm.yml index 54b1c5f010..789ed79f52 100644 --- a/schema/common/defaults/configuration/helm.yml +++ b/schema/common/defaults/configuration/helm.yml @@ -1,3 +1,4 @@ +--- kind: configuration/helm title: "Helm" name: default diff --git a/schema/common/defaults/configuration/image-registry.yml b/schema/common/defaults/configuration/image-registry.yml index 61aeb50f2a..c66d46129f 100644 --- a/schema/common/defaults/configuration/image-registry.yml +++ b/schema/common/defaults/configuration/image-registry.yml @@ -1,3 +1,4 @@ +--- kind: configuration/image-registry title: "Epiphany image registry" name: default @@ -9,242 +10,195 @@ specification: images_to_load: x86_64: generic: - - name: "epiphanyplatform/keycloak:14.0.0" - file_name: keycloak-14.0.0.tar - - name: "rabbitmq:3.8.9" - file_name: rabbitmq-3.8.9.tar - - name: "kubernetesui/dashboard:v2.3.1" - file_name: dashboard-v2.3.1.tar - - name: 
"kubernetesui/metrics-scraper:v1.0.7" - file_name: metrics-scraper-v1.0.7.tar - # postgres - - name: bitnami/pgpool:4.2.4 - file_name: pgpool-4.2.4.tar - - name: bitnami/pgbouncer:1.16.0 - file_name: pgbouncer-1.16.0.tar - # ceph - - name: "rook/ceph:v1.8.8" - file_name: ceph-v1.8.8.tar + applications: + - name: "epiphanyplatform/keycloak:14.0.0" + file_name: keycloak-14.0.0.tar + - name: "rabbitmq:3.8.9" + file_name: rabbitmq-3.8.9.tar + - name: "bitnami/pgpool:4.2.4" + file_name: pgpool-4.2.4.tar + - name: "bitnami/pgbouncer:1.16.0" + file_name: pgbouncer-1.16.0.tar + kubernetes-master: + - name: "kubernetesui/dashboard:v2.3.1" + file_name: dashboard-v2.3.1.tar + - name: "kubernetesui/metrics-scraper:v1.0.7" + file_name: metrics-scraper-v1.0.7.tar + rook: + - name: "k8s.gcr.io/sig-storage/csi-attacher:v3.4.0" + file_name: csi-attacher-v3.4.0.tar + - name: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0" + file_name: csi-node-driver-registrar-v2.5.0.tar + - name: "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0" + file_name: csi-provisioner-v3.1.0.tar + - name: "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0" + file_name: csi-resizer-v1.4.0.tar + - name: "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1" + file_name: csi-snapshotter-v5.0.1.tar + - name: "quay.io/ceph/ceph:v16.2.7-20220510" + file_name: ceph-v16.2.7-20220510.tar + - name: "quay.io/cephcsi/cephcsi:v3.5.1" + file_name: cephcsi-v3.5.1.tar + - name: "quay.io/csiaddons/k8s-sidecar:v0.2.1" + file_name: k8s-sidecar-v0.2.1.tar + - name: "quay.io/csiaddons/volumereplication-operator:v0.3.0" + file_name: volumereplication-operator-v0.3.0.tar + - name: "rook/ceph:v1.8.8" + file_name: ceph-v1.8.8.tar current: - - name: "haproxy:2.2.2-alpine" - file_name: haproxy-2.2.2-alpine.tar - # K8s v1.22.4 - Epiphany 1.3 - # https://github.com/kubernetes/kubernetes/blob/v1.22.4/build/dependencies.yaml - - name: "k8s.gcr.io/kube-apiserver:v1.22.4" - file_name: kube-apiserver-v1.22.4.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.22.4" - file_name: kube-controller-manager-v1.22.4.tar - - name: "k8s.gcr.io/kube-proxy:v1.22.4" - file_name: kube-proxy-v1.22.4.tar - - name: "k8s.gcr.io/kube-scheduler:v1.22.4" - file_name: kube-scheduler-v1.22.4.tar - - name: "k8s.gcr.io/coredns/coredns:v1.8.4" - file_name: coredns-v1.8.4.tar - - name: "k8s.gcr.io/etcd:3.5.0-0" - file_name: etcd-3.5.0-0.tar - - name: "k8s.gcr.io/pause:3.5" - file_name: pause-3.5.tar - - name: "k8s.gcr.io/sig-storage/csi-attacher:v3.4.0" - file_name: csi-attacher-v3.4.0.tar - - name: "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.5.0" - file_name: csi-node-driver-registrar-v2.5.0.tar - - name: "k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0" - file_name: csi-provisioner-v3.1.0.tar - - name: "k8s.gcr.io/sig-storage/csi-resizer:v1.4.0" - file_name: csi-resizer-v1.4.0.tar - - name: "k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1" - file_name: csi-snapshotter-v5.0.1.tar - # flannel - - name: "quay.io/coreos/flannel:v0.14.0-amd64" - file_name: flannel-v0.14.0-amd64.tar - - name: "quay.io/coreos/flannel:v0.14.0" - file_name: flannel-v0.14.0.tar - - name: "quay.io/ceph/ceph:v16.2.7" - file_name: ceph-v16.2.7.tar - - name: "quay.io/cephcsi/cephcsi:v3.5.1" - file_name: cephcsi-v3.5.1.tar - - name: "quay.io/csiaddons/k8s-sidecar:v0.2.1" - file_name: k8s-sidecar-v0.2.1.tar - - name: "quay.io/csiaddons/volumereplication-operator:v0.3.0" - file_name: volumereplication-operator-v0.3.0.tar - # canal & calico - - name: "calico/cni:v3.20.3" - file_name: cni-v3.20.3.tar - - name: 
"calico/kube-controllers:v3.20.3" - file_name: kube-controllers-v3.20.3.tar - - name: "calico/node:v3.20.3" - file_name: node-v3.20.3.tar - - name: "calico/pod2daemon-flexvol:v3.20.3" - file_name: pod2daemon-flexvol-v3.20.3.tar + haproxy: + - name: "haproxy:2.2.2-alpine" + file_name: haproxy-2.2.2-alpine.tar + kubernetes-master: + # K8s v1.22.4 - Epiphany 1.3 + # https://github.com/kubernetes/kubernetes/blob/v1.22.4/build/dependencies.yaml + - name: "k8s.gcr.io/kube-apiserver:v1.22.4" + file_name: kube-apiserver-v1.22.4.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.22.4" + file_name: kube-controller-manager-v1.22.4.tar + - name: "k8s.gcr.io/kube-proxy:v1.22.4" + file_name: kube-proxy-v1.22.4.tar + - name: "k8s.gcr.io/kube-scheduler:v1.22.4" + file_name: kube-scheduler-v1.22.4.tar + - name: "k8s.gcr.io/coredns/coredns:v1.8.4" + file_name: coredns-v1.8.4.tar + - name: "k8s.gcr.io/etcd:3.5.0-0" + file_name: etcd-3.5.0-0.tar + - name: "k8s.gcr.io/pause:3.5" + file_name: pause-3.5.tar + # flannel + - name: "quay.io/coreos/flannel:v0.14.0" + file_name: flannel-v0.14.0.tar + # flannel for canal - Epiphany 2.0.1 + - name: "quay.io/coreos/flannel:v0.15.1" + file_name: flannel-v0.15.1.tar + # canal & calico - Epiphany 2.0.1 + - name: "calico/cni:v3.23.3" + file_name: cni-v3.23.3.tar + - name: "calico/kube-controllers:v3.23.3" + file_name: kube-controllers-v3.23.3.tar + - name: "calico/node:v3.23.3" + file_name: node-v3.23.3.tar legacy: - # K8s v1.21.7 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.21.7" - file_name: kube-apiserver-v1.21.7.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.21.7" - file_name: kube-controller-manager-v1.21.7.tar - - name: "k8s.gcr.io/kube-proxy:v1.21.7" - file_name: kube-proxy-v1.21.7.tar - - name: "k8s.gcr.io/kube-scheduler:v1.21.7" - file_name: kube-scheduler-v1.21.7.tar - - name: "k8s.gcr.io/coredns/coredns:v1.8.0" - file_name: coredns-v1.8.0.tar - - name: "k8s.gcr.io/etcd:3.4.13-0" - file_name: etcd-3.4.13-0.tar - - name: "k8s.gcr.io/pause:3.4.1" - file_name: pause-3.4.1.tar - # K8s v1.20.12 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.20.12" - file_name: kube-apiserver-v1.20.12.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.20.12" - file_name: kube-controller-manager-v1.20.12.tar - - name: "k8s.gcr.io/kube-proxy:v1.20.12" - file_name: kube-proxy-v1.20.12.tar - - name: "k8s.gcr.io/kube-scheduler:v1.20.12" - file_name: kube-scheduler-v1.20.12.tar - - name: "k8s.gcr.io/coredns:1.7.0" - file_name: coredns-1.7.0.tar - - name: "k8s.gcr.io/pause:3.2" - file_name: pause-3.2.tar - # K8s v1.19.15 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.19.15" - file_name: kube-apiserver-v1.19.15.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.19.15" - file_name: kube-controller-manager-v1.19.15.tar - - name: "k8s.gcr.io/kube-proxy:v1.19.15" - file_name: kube-proxy-v1.19.15.tar - - name: "k8s.gcr.io/kube-scheduler:v1.19.15" - file_name: kube-scheduler-v1.19.15.tar - # K8s v1.18.6 - Epiphany 0.7.1 - 1.2 - - name: "k8s.gcr.io/kube-apiserver:v1.18.6" - file_name: kube-apiserver-v1.18.6.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.18.6" - file_name: kube-controller-manager-v1.18.6.tar - - name: "k8s.gcr.io/kube-proxy:v1.18.6" - file_name: kube-proxy-v1.18.6.tar - - name: "k8s.gcr.io/kube-scheduler:v1.18.6" - file_name: kube-scheduler-v1.18.6.tar - - name: "k8s.gcr.io/coredns:1.6.7" - file_name: coredns-1.6.7.tar - - name: "k8s.gcr.io/etcd:3.4.3-0" - file_name: 
etcd-3.4.3-0.tar - # flannel - - name: "quay.io/coreos/flannel:v0.12.0-amd64" - file_name: flannel-v0.12.0-amd64.tar - - name: "quay.io/coreos/flannel:v0.12.0" - file_name: flannel-v0.12.0.tar - # canal & calico - - name: "calico/cni:v3.15.0" - file_name: cni-v3.15.0.tar - - name: "calico/kube-controllers:v3.15.0" - file_name: kube-controllers-v3.15.0.tar - - name: "calico/node:v3.15.0" - file_name: node-v3.15.0.tar - - name: "calico/pod2daemon-flexvol:v3.15.0" - file_name: pod2daemon-flexvol-v3.15.0.tar + kubernetes-master: + # CNI plugins - Epiphany 1.3 - 2.0.0 + - name: "quay.io/coreos/flannel:v0.14.0-amd64" + file_name: flannel-v0.14.0-amd64.tar + - name: "calico/cni:v3.20.3" + file_name: cni-v3.20.3.tar + - name: "calico/kube-controllers:v3.20.3" + file_name: kube-controllers-v3.20.3.tar + - name: "calico/node:v3.20.3" + file_name: node-v3.20.3.tar + - name: "calico/pod2daemon-flexvol:v3.20.3" + file_name: pod2daemon-flexvol-v3.20.3.tar + # K8s v1.21.7 - Epiphany 1.3 (transitional version) + - name: "k8s.gcr.io/kube-apiserver:v1.21.7" + file_name: kube-apiserver-v1.21.7.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.21.7" + file_name: kube-controller-manager-v1.21.7.tar + - name: "k8s.gcr.io/kube-proxy:v1.21.7" + file_name: kube-proxy-v1.21.7.tar + - name: "k8s.gcr.io/kube-scheduler:v1.21.7" + file_name: kube-scheduler-v1.21.7.tar + - name: "k8s.gcr.io/coredns/coredns:v1.8.0" + file_name: coredns-v1.8.0.tar + - name: "k8s.gcr.io/etcd:3.4.13-0" + file_name: etcd-3.4.13-0.tar + - name: "k8s.gcr.io/pause:3.4.1" + file_name: pause-3.4.1.tar + # K8s v1.20.12 - Epiphany 1.3 (transitional version) + - name: "k8s.gcr.io/kube-apiserver:v1.20.12" + file_name: kube-apiserver-v1.20.12.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.20.12" + file_name: kube-controller-manager-v1.20.12.tar + - name: "k8s.gcr.io/kube-proxy:v1.20.12" + file_name: kube-proxy-v1.20.12.tar + - name: "k8s.gcr.io/kube-scheduler:v1.20.12" + file_name: kube-scheduler-v1.20.12.tar + - name: "k8s.gcr.io/coredns:1.7.0" + file_name: coredns-1.7.0.tar + - name: "k8s.gcr.io/pause:3.2" + file_name: pause-3.2.tar + # K8s v1.19.15 - Epiphany 1.3 (transitional version) + - name: "k8s.gcr.io/kube-apiserver:v1.19.15" + file_name: kube-apiserver-v1.19.15.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.19.15" + file_name: kube-controller-manager-v1.19.15.tar + - name: "k8s.gcr.io/kube-proxy:v1.19.15" + file_name: kube-proxy-v1.19.15.tar + - name: "k8s.gcr.io/kube-scheduler:v1.19.15" + file_name: kube-scheduler-v1.19.15.tar + # K8s v1.18.6 - Epiphany 0.7.1 - 1.2 + - name: "k8s.gcr.io/kube-apiserver:v1.18.6" + file_name: kube-apiserver-v1.18.6.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.18.6" + file_name: kube-controller-manager-v1.18.6.tar + - name: "k8s.gcr.io/kube-proxy:v1.18.6" + file_name: kube-proxy-v1.18.6.tar + - name: "k8s.gcr.io/kube-scheduler:v1.18.6" + file_name: kube-scheduler-v1.18.6.tar + - name: "k8s.gcr.io/coredns:1.6.7" + file_name: coredns-1.6.7.tar + - name: "k8s.gcr.io/etcd:3.4.3-0" + file_name: etcd-3.4.3-0.tar + # flannel + - name: "quay.io/coreos/flannel:v0.12.0-amd64" + file_name: flannel-v0.12.0-amd64.tar + - name: "quay.io/coreos/flannel:v0.12.0" + file_name: flannel-v0.12.0.tar + # canal & calico + - name: "calico/cni:v3.15.0" + file_name: cni-v3.15.0.tar + - name: "calico/kube-controllers:v3.15.0" + file_name: kube-controllers-v3.15.0.tar + - name: "calico/node:v3.15.0" + file_name: node-v3.15.0.tar + - name: "calico/pod2daemon-flexvol:v3.15.0" + file_name: 
pod2daemon-flexvol-v3.15.0.tar aarch64: generic: - - name: "epiphanyplatform/keycloak:14.0.0" - file_name: keycloak-14.0.0.tar - - name: "rabbitmq:3.8.9" - file_name: rabbitmq-3.8.9.tar - - name: "kubernetesui/dashboard:v2.3.1" - file_name: dashboard-v2.3.1.tar - - name: "kubernetesui/metrics-scraper:v1.0.7" - file_name: metrics-scraper-v1.0.7.tar + applications: + - name: "epiphanyplatform/keycloak:14.0.0" + file_name: keycloak-14.0.0.tar + - name: "rabbitmq:3.8.9" + file_name: rabbitmq-3.8.9.tar + kubernetes-master: + - name: "kubernetesui/dashboard:v2.3.1" + file_name: dashboard-v2.3.1.tar + - name: "kubernetesui/metrics-scraper:v1.0.7" + file_name: metrics-scraper-v1.0.7.tar current: - - name: "haproxy:2.2.2-alpine" - file_name: haproxy-2.2.2-alpine.tar - # K8s v1.21.7 - Epiphany 1.3 - - name: "k8s.gcr.io/kube-apiserver:v1.22.4" - file_name: kube-apiserver-v1.22.4.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.22.4" - file_name: kube-controller-manager-v1.22.4.tar - - name: "k8s.gcr.io/kube-proxy:v1.22.4" - file_name: kube-proxy-v1.22.4.tar - - name: "k8s.gcr.io/kube-scheduler:v1.22.4" - file_name: kube-scheduler-v1.22.4.tar - - name: "k8s.gcr.io/coredns/coredns:v1.8.4" - file_name: coredns-v1.8.4.tar - - name: "k8s.gcr.io/etcd:3.5.0-0" - file_name: etcd-3.5.0-0.tar - - name: "k8s.gcr.io/pause:3.5" - file_name: pause-3.5.tar - # flannel - - name: "quay.io/coreos/flannel:v0.14.0-arm64" - file_name: flannel-v0.14.0-arm64.tar - - name: "quay.io/coreos/flannel:v0.14.0" - file_name: flannel-v0.14.0.tar - # canal & calico - - name: "calico/cni:v3.20.3" - file_name: cni-v3.20.3.tar - - name: "calico/kube-controllers:v3.20.3" - file_name: kube-controllers-v3.20.3.tar - - name: "calico/node:v3.20.3" - file_name: node-v3.20.3.tar - - name: "calico/pod2daemon-flexvol:v3.20.3" - file_name: pod2daemon-flexvol-v3.20.3.tar - legacy: - # K8s v1.21.7 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.21.7" - file_name: kube-apiserver-v1.21.7.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.21.7" - file_name: kube-controller-manager-v1.21.7.tar - - name: "k8s.gcr.io/kube-proxy:v1.21.7" - file_name: kube-proxy-v1.21.7.tar - - name: "k8s.gcr.io/kube-scheduler:v1.21.7" - file_name: kube-scheduler-v1.21.7.tar - - name: "k8s.gcr.io/coredns/coredns:v1.8.0" - file_name: coredns-v1.8.0.tar - - name: "k8s.gcr.io/etcd:3.4.13-0" - file_name: etcd-3.4.13-0.tar - - name: "k8s.gcr.io/pause:3.4.1" - file_name: pause-3.4.1.tar - # K8s v1.20.12 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.20.12" - file_name: kube-apiserver-v1.20.12.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.20.12" - file_name: kube-controller-manager-v1.20.12.tar - - name: "k8s.gcr.io/kube-proxy:v1.20.12" - file_name: kube-proxy-v1.20.12.tar - - name: "k8s.gcr.io/kube-scheduler:v1.20.12" - file_name: kube-scheduler-v1.20.12.tar - - name: "k8s.gcr.io/coredns:1.7.0" - file_name: coredns-1.7.0.tar - - name: "k8s.gcr.io/pause:3.2" - file_name: pause-3.2.tar - # K8s v1.19.15 - Epiphany 1.3 (transitional version) - - name: "k8s.gcr.io/kube-apiserver:v1.19.15" - file_name: kube-apiserver-v1.19.15.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.19.15" - file_name: kube-controller-manager-v1.19.15.tar - - name: "k8s.gcr.io/kube-proxy:v1.19.15" - file_name: kube-proxy-v1.19.15.tar - - name: "k8s.gcr.io/kube-scheduler:v1.19.15" - file_name: kube-scheduler-v1.19.15.tar - # K8s v1.18.6 - Epiphany 0.7.1 - 1.2 - - name: "k8s.gcr.io/kube-apiserver:v1.18.6" - file_name: 
kube-apiserver-v1.18.6.tar - - name: "k8s.gcr.io/kube-controller-manager:v1.18.6" - file_name: kube-controller-manager-v1.18.6.tar - - name: "k8s.gcr.io/kube-proxy:v1.18.6" - file_name: kube-proxy-v1.18.6.tar - - name: "k8s.gcr.io/kube-scheduler:v1.18.6" - file_name: kube-scheduler-v1.18.6.tar - - name: "k8s.gcr.io/coredns:1.6.7" - file_name: coredns-1.6.7.tar - - name: "k8s.gcr.io/etcd:3.4.3-0" - file_name: etcd-3.4.3-0.tar - # flannel - - name: "quay.io/coreos/flannel:v0.12.0-arm64" - file_name: flannel-v0.12.0-arm64.tar - - name: "quay.io/coreos/flannel:v0.12.0" - file_name: flannel-v0.12.0.tar - # canal & calico - - name: "calico/cni:v3.15.0" - file_name: cni-v3.15.0.tar - - name: "calico/kube-controllers:v3.15.0" - file_name: kube-controllers-v3.15.0.tar - - name: "calico/node:v3.15.0" - file_name: node-v3.15.0.tar - - name: "calico/pod2daemon-flexvol:v3.15.0" - file_name: pod2daemon-flexvol-v3.15.0.tar + haproxy: + - name: "haproxy:2.2.2-alpine" + file_name: haproxy-2.2.2-alpine.tar + kubernetes-master: + # K8s v1.22.4 - Epiphany 1.3 + - name: "k8s.gcr.io/kube-apiserver:v1.22.4" + file_name: kube-apiserver-v1.22.4.tar + - name: "k8s.gcr.io/kube-controller-manager:v1.22.4" + file_name: kube-controller-manager-v1.22.4.tar + - name: "k8s.gcr.io/kube-proxy:v1.22.4" + file_name: kube-proxy-v1.22.4.tar + - name: "k8s.gcr.io/kube-scheduler:v1.22.4" + file_name: kube-scheduler-v1.22.4.tar + - name: "k8s.gcr.io/coredns/coredns:v1.8.4" + file_name: coredns-v1.8.4.tar + - name: "k8s.gcr.io/etcd:3.5.0-0" + file_name: etcd-3.5.0-0.tar + - name: "k8s.gcr.io/pause:3.5" + file_name: pause-3.5.tar + # flannel + - name: "quay.io/coreos/flannel:v0.14.0" + file_name: flannel-v0.14.0.tar + # flannel for canal - Epiphany 2.0.1 + - name: "quay.io/coreos/flannel:v0.15.1" + file_name: flannel-v0.15.1.tar + # canal & calico - Epiphany 2.0.1 + - name: "calico/cni:v3.23.3" + file_name: cni-v3.23.3.tar + - name: "calico/kube-controllers:v3.23.3" + file_name: kube-controllers-v3.23.3.tar + - name: "calico/node:v3.23.3" + file_name: node-v3.23.3.tar + legacy: {} # arm64 on AlmaLinux added in 2.0.1 diff --git a/schema/common/defaults/configuration/jmx-exporter.yml b/schema/common/defaults/configuration/jmx-exporter.yml index 4d44105bf2..b211ef7f7b 100644 --- a/schema/common/defaults/configuration/jmx-exporter.yml +++ b/schema/common/defaults/configuration/jmx-exporter.yml @@ -1,3 +1,4 @@ +--- kind: configuration/jmx-exporter title: "JMX exporter" name: default diff --git a/schema/common/defaults/configuration/kafka-exporter.yml b/schema/common/defaults/configuration/kafka-exporter.yml index 720b9c9c85..9bec681a96 100644 --- a/schema/common/defaults/configuration/kafka-exporter.yml +++ b/schema/common/defaults/configuration/kafka-exporter.yml @@ -1,3 +1,4 @@ +--- kind: configuration/kafka-exporter title: "Kafka exporter" name: default diff --git a/schema/common/defaults/configuration/kafka.yml b/schema/common/defaults/configuration/kafka.yml index 30feadfdf9..3f6e121c7d 100644 --- a/schema/common/defaults/configuration/kafka.yml +++ b/schema/common/defaults/configuration/kafka.yml @@ -1,3 +1,4 @@ +--- kind: configuration/kafka title: "Kafka" name: default diff --git a/schema/common/defaults/configuration/kibana.yml b/schema/common/defaults/configuration/kibana.yml deleted file mode 100644 index bea9fbb13b..0000000000 --- a/schema/common/defaults/configuration/kibana.yml +++ /dev/null @@ -1,5 +0,0 @@ -kind: configuration/kibana -title: "Kibana" -name: default -specification: - kibana_log_dir: /var/log/kibana diff 
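The image-registry rework above nests the previously flat `images_to_load` lists one level deeper: architecture, then generic/current/legacy, then the feature the images belong to. That grouping is what lets a downloader pull only the images needed by enabled features. A sketch of walking the new layout, assuming the shape shown in the defaults:

```python
# Sketch of walking the regrouped images_to_load tree (arch -> category ->
# feature -> images), assuming the shape from the defaults above.
import yaml

doc = yaml.safe_load("""
specification:
  images_to_load:
    x86_64:
      current:
        haproxy:
          - name: "haproxy:2.2.2-alpine"
            file_name: haproxy-2.2.2-alpine.tar
        kubernetes-master:
          - name: "k8s.gcr.io/pause:3.5"
            file_name: pause-3.5.tar
""")

def images_for(doc, arch, features):
    for category in doc["specification"]["images_to_load"][arch].values():
        for feature, images in category.items():
            if feature in features:
                yield from images

for image in images_for(doc, "x86_64", {"kubernetes-master"}):
    print(image["name"], "->", image["file_name"])  # only the pause image here
```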
--git a/schema/common/defaults/configuration/kubernetes-master.yml b/schema/common/defaults/configuration/kubernetes-master.yml index fb987ec7a7..fd1d52b351 100644 --- a/schema/common/defaults/configuration/kubernetes-master.yml +++ b/schema/common/defaults/configuration/kubernetes-master.yml @@ -1,3 +1,4 @@ +--- kind: configuration/kubernetes-master title: Kubernetes Master Config name: default @@ -41,6 +42,7 @@ specification: hostname: 127.0.0.1 # change if you want a custom port port: 6443 + enable_controller_attach_detach: true # image_registry_secrets: # - email: emaul@domain.com # name: secretname diff --git a/schema/common/defaults/configuration/kubernetes-node.yml b/schema/common/defaults/configuration/kubernetes-node.yml index 8a143fb06e..62cfbdd9dd 100644 --- a/schema/common/defaults/configuration/kubernetes-node.yml +++ b/schema/common/defaults/configuration/kubernetes-node.yml @@ -1,3 +1,4 @@ +--- kind: configuration/kubernetes-node title: Kubernetes Node Config name: default diff --git a/schema/common/defaults/configuration/logging.yml b/schema/common/defaults/configuration/logging.yml index be687c2e65..5adfa60107 100644 --- a/schema/common/defaults/configuration/logging.yml +++ b/schema/common/defaults/configuration/logging.yml @@ -1,24 +1,31 @@ +--- kind: configuration/logging title: Logging Config name: default specification: - cluster_name: EpiphanyElastic + cluster_name: EpiphanyOpenSearch + opensearch_os_user: opensearch + opensearch_os_group: opensearch admin_password: PASSWORD_TO_CHANGE kibanaserver_password: PASSWORD_TO_CHANGE - kibanaserver_user_active: true - logstash_password: PASSWORD_TO_CHANGE - logstash_user_active: true + filebeatservice_password: PASSWORD_TO_CHANGE demo_users_to_remove: - - kibanaro - - readall - - snapshotrestore + - kibanaro + - readall + - snapshotrestore + - logstash paths: - data: /var/lib/elasticsearch - repo: /var/lib/elasticsearch-snapshots - logs: /var/log/elasticsearch + opensearch_home: /usr/share/opensearch + opensearch_conf_dir: /usr/share/opensearch/config + opensearch_log_dir: /var/log/opensearch + opensearch_snapshots_dir: /var/lib/opensearch-snapshots + opensearch_data_dir: /var/lib/opensearch + opensearch_perftop_dir: /usr/share/opensearch/perftop jvm_options: - Xmx: 1g # see https://www.elastic.co/guide/en/elasticsearch/reference/7.9/heap-size.html - opendistro_security: + Xmx: 1g + opensearch_security: + audit: + type: internal_opensearch # https://opensearch.org/docs/latest/security-plugin/audit-logs ssl: transport: enforce_hostname_verification: true diff --git a/schema/common/defaults/configuration/node-exporter.yml b/schema/common/defaults/configuration/node-exporter.yml index 4393708abd..22007472fa 100644 --- a/schema/common/defaults/configuration/node-exporter.yml +++ b/schema/common/defaults/configuration/node-exporter.yml @@ -1,3 +1,4 @@ +--- kind: configuration/node-exporter title: "Node exporter" name: default diff --git a/schema/common/defaults/configuration/opendistro-for-elasticsearch.yml b/schema/common/defaults/configuration/opendistro-for-elasticsearch.yml deleted file mode 100644 index 9f3979d722..0000000000 --- a/schema/common/defaults/configuration/opendistro-for-elasticsearch.yml +++ /dev/null @@ -1,27 +0,0 @@ -kind: configuration/opendistro-for-elasticsearch -title: Open Distro for Elasticsearch Config -name: default -specification: - cluster_name: EpiphanyElastic - clustered: true - admin_password: PASSWORD_TO_CHANGE - kibanaserver_password: PASSWORD_TO_CHANGE - kibanaserver_user_active: false - 
logstash_password: PASSWORD_TO_CHANGE - logstash_user_active: false - demo_users_to_remove: - - kibanaro - - readall - - snapshotrestore - - logstash - - kibanaserver - paths: - data: /var/lib/elasticsearch - repo: /var/lib/elasticsearch-snapshots - logs: /var/log/elasticsearch - jvm_options: - Xmx: 1g # see https://www.elastic.co/guide/en/elasticsearch/reference/7.9/heap-size.html - opendistro_security: - ssl: - transport: - enforce_hostname_verification: true diff --git a/schema/common/defaults/configuration/opensearch-dashboards.yml b/schema/common/defaults/configuration/opensearch-dashboards.yml new file mode 100644 index 0000000000..60d2a1c6d3 --- /dev/null +++ b/schema/common/defaults/configuration/opensearch-dashboards.yml @@ -0,0 +1,14 @@ +--- +kind: configuration/opensearch-dashboards +title: "OpenSearch-Dashboards" +name: default +specification: + dashboards_os_user: opensearch_dashboards + dashboards_os_group: opensearch_dashboards + dashboards_user: kibanaserver + dashboards_user_password: PASSWORD_TO_CHANGE + paths: + dashboards_home: /usr/share/opensearch-dashboards + dashboards_conf_dir: /usr/share/opensearch-dashboards/config + dashboards_plugin_bin_path: /usr/share/opensearch-dashboards/bin/opensearch-dashboards-plugin + dashboards_log_dir: /var/log/opensearch-dashboards diff --git a/schema/common/defaults/configuration/opensearch.yml b/schema/common/defaults/configuration/opensearch.yml new file mode 100644 index 0000000000..7ebaf1390c --- /dev/null +++ b/schema/common/defaults/configuration/opensearch.yml @@ -0,0 +1,31 @@ +--- +kind: configuration/opensearch +title: OpenSearch Config +name: default +specification: + cluster_name: EpiphanyOpenSearch + opensearch_os_user: opensearch + opensearch_os_group: opensearch + admin_password: PASSWORD_TO_CHANGE + kibanaserver_password: PASSWORD_TO_CHANGE + demo_users_to_remove: + - kibanaro + - readall + - snapshotrestore + - logstash + - kibanaserver + paths: + opensearch_home: /usr/share/opensearch + opensearch_conf_dir: /usr/share/opensearch/config + opensearch_log_dir: /var/log/opensearch + opensearch_snapshots_dir: /var/lib/opensearch-snapshots + opensearch_data_dir: /var/lib/opensearch + opensearch_perftop_dir: /usr/share/opensearch/perftop + jvm_options: + Xmx: 1g + opensearch_security: + audit: + type: internal_opensearch # https://opensearch.org/docs/latest/security-plugin/audit-logs + ssl: + transport: + enforce_hostname_verification: true diff --git a/schema/common/defaults/configuration/postgres-exporter.yml b/schema/common/defaults/configuration/postgres-exporter.yml index d9a59ac022..d7b6d438e5 100644 --- a/schema/common/defaults/configuration/postgres-exporter.yml +++ b/schema/common/defaults/configuration/postgres-exporter.yml @@ -1,3 +1,4 @@ +--- kind: configuration/postgres-exporter title: Postgres exporter name: default diff --git a/schema/common/defaults/configuration/postgresql.yml b/schema/common/defaults/configuration/postgresql.yml index 6cffb6280f..9b1d6a4acf 100644 --- a/schema/common/defaults/configuration/postgresql.yml +++ b/schema/common/defaults/configuration/postgresql.yml @@ -1,3 +1,4 @@ +--- kind: configuration/postgresql title: PostgreSQL name: default diff --git a/schema/common/defaults/configuration/prometheus.yml b/schema/common/defaults/configuration/prometheus.yml index 3d2cb3d54d..413cb6e1c7 100644 --- a/schema/common/defaults/configuration/prometheus.yml +++ b/schema/common/defaults/configuration/prometheus.yml @@ -1,3 +1,4 @@ +--- kind: configuration/prometheus title: "Prometheus" name: 
default diff --git a/schema/common/defaults/configuration/rabbitmq.yml b/schema/common/defaults/configuration/rabbitmq.yml index 7ed9512e48..f9a27b6570 100644 --- a/schema/common/defaults/configuration/rabbitmq.yml +++ b/schema/common/defaults/configuration/rabbitmq.yml @@ -1,3 +1,4 @@ +--- kind: configuration/rabbitmq title: "RabbitMQ" name: default diff --git a/schema/common/defaults/configuration/recovery.yml b/schema/common/defaults/configuration/recovery.yml index fc6da4f091..1ee8fede0c 100644 --- a/schema/common/defaults/configuration/recovery.yml +++ b/schema/common/defaults/configuration/recovery.yml @@ -1,3 +1,4 @@ +--- kind: configuration/recovery title: Recovery Config name: default diff --git a/schema/common/defaults/configuration/repository.yml b/schema/common/defaults/configuration/repository.yml index fdac785a3d..eca58548fa 100644 --- a/schema/common/defaults/configuration/repository.yml +++ b/schema/common/defaults/configuration/repository.yml @@ -1,3 +1,4 @@ +--- kind: configuration/repository title: "Epiphany requirements repository" name: default diff --git a/schema/common/defaults/configuration/rook.yml b/schema/common/defaults/configuration/rook.yml index 408a47e102..b442b31843 100644 --- a/schema/common/defaults/configuration/rook.yml +++ b/schema/common/defaults/configuration/rook.yml @@ -35,4 +35,4 @@ specification: image: rook/ceph:v1.8.8 cephClusterSpec: cephVersion: - image: quay.io/ceph/ceph:v16.2.7 + image: quay.io/ceph/ceph:v16.2.7-20220510 diff --git a/schema/common/defaults/configuration/shared-config.yml b/schema/common/defaults/configuration/shared-config.yml index 219a653414..e127bfdc90 100644 --- a/schema/common/defaults/configuration/shared-config.yml +++ b/schema/common/defaults/configuration/shared-config.yml @@ -1,3 +1,4 @@ +--- kind: configuration/shared-config title: "Shared configuration that will be visible to all roles" name: default diff --git a/schema/common/defaults/configuration/zookeeper.yml b/schema/common/defaults/configuration/zookeeper.yml index d701491680..79b393e9d4 100644 --- a/schema/common/defaults/configuration/zookeeper.yml +++ b/schema/common/defaults/configuration/zookeeper.yml @@ -1,3 +1,4 @@ +--- kind: configuration/zookeeper title: "Zookeeper" name: default diff --git a/schema/common/validation/configuration/applications.yml b/schema/common/validation/configuration/applications.yml index 4867cad707..a4c09ea4a3 100644 --- a/schema/common/validation/configuration/applications.yml +++ b/schema/common/validation/configuration/applications.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Application specification schema" description: "Application specification schema" diff --git a/schema/common/validation/configuration/backup.yml b/schema/common/validation/configuration/backup.yml index 23ecbc6797..2c00762919 100644 --- a/schema/common/validation/configuration/backup.yml +++ b/schema/common/validation/configuration/backup.yml @@ -1,3 +1,4 @@ +--- $schema: 'http://json-schema.org/draft-07/schema#' type: object required: diff --git a/schema/common/validation/configuration/elasticsearch-curator.yml b/schema/common/validation/configuration/elasticsearch-curator.yml index 906b0afeba..c0393e750e 100644 --- a/schema/common/validation/configuration/elasticsearch-curator.yml +++ b/schema/common/validation/configuration/elasticsearch-curator.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Elasticsearch-curator specification schema" description: "Elasticsearch-curator specification schema" diff --git 
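The logging defaults above drop the Elasticsearch-era keys (`data`/`repo`/`logs`, the `logstash_*` pair) in favor of `opensearch_*` equivalents. For anyone carrying a customized logging doc forward, the key correspondence is roughly as below; this is an illustrative mapping, not the project's upgrade logic, and note the directory values themselves also move:

```python
# Rough key correspondence between the old logging paths and the new
# OpenSearch-based ones; illustrative only, not epicli's upgrade code.
PATH_RENAMES = {
    "data": "opensearch_data_dir",      # /var/lib/elasticsearch -> /var/lib/opensearch
    "repo": "opensearch_snapshots_dir", # /var/lib/elasticsearch-snapshots -> /var/lib/opensearch-snapshots
    "logs": "opensearch_log_dir",       # /var/log/elasticsearch -> /var/log/opensearch
}

def migrate_paths(old_paths: dict) -> dict:
    # Keys are renamed; values are kept as-is and still need reviewing by hand.
    return {PATH_RENAMES.get(key, key): value for key, value in old_paths.items()}

print(migrate_paths({"data": "/var/lib/elasticsearch"}))
```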
a/schema/common/validation/configuration/feature-mapping.yml b/schema/common/validation/configuration/feature-mappings.yml similarity index 75% rename from schema/common/validation/configuration/feature-mapping.yml rename to schema/common/validation/configuration/feature-mappings.yml index 85b954b095..a1d919e4a8 100644 --- a/schema/common/validation/configuration/feature-mapping.yml +++ b/schema/common/validation/configuration/feature-mappings.yml @@ -1,18 +1,10 @@ +--- "$id": "#/specification" -title: "Feature-mapping specification schema" -description: "Feature-mapping specification schema" +title: "Feature-mappings specification schema" +description: "Feature-mappings specification schema" type: object properties: - available_roles: - type: array - items: - type: object - properties: - name: - type: string - enabled: - type: boolean - roles_mapping: + mappings: type: object properties: kafka: @@ -55,7 +47,7 @@ properties: type: array items: type: string - opendistro_for_elasticsearch: + opensearch: type: array items: type: string diff --git a/schema/common/validation/configuration/features.yml b/schema/common/validation/configuration/features.yml new file mode 100644 index 0000000000..91df885b09 --- /dev/null +++ b/schema/common/validation/configuration/features.yml @@ -0,0 +1,15 @@ +--- +"$id": "#/specification" +title: "Features to be enabled/disabled schema" +description: "Features to be enabled/disabled schema" +type: object +properties: + features: + type: array + items: + type: object + properties: + name: + type: string + enabled: + type: boolean diff --git a/schema/common/validation/configuration/filebeat.yml b/schema/common/validation/configuration/filebeat.yml index 02c7af95dc..9696d9937e 100644 --- a/schema/common/validation/configuration/filebeat.yml +++ b/schema/common/validation/configuration/filebeat.yml @@ -1,9 +1,10 @@ +--- "$id": "#/specification" title: "Filebeat specification schema" description: "Filebeat specification schema" type: object properties: - kibana: + opensearch: type: object properties: dashboards: diff --git a/schema/common/validation/configuration/firewall.yml b/schema/common/validation/configuration/firewall.yml index 82148a9453..b28f83e4f9 100644 --- a/schema/common/validation/configuration/firewall.yml +++ b/schema/common/validation/configuration/firewall.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Firewall specification schema" description: "Firewall specification schema" @@ -89,7 +90,7 @@ properties: type: array items: type: string - kibana: + opensearch_dashboards: type: object properties: enabled: @@ -134,7 +135,7 @@ properties: type: array items: type: string - opendistro_for_elasticsearch: + opensearch: type: object properties: enabled: diff --git a/schema/common/validation/configuration/grafana.yml b/schema/common/validation/configuration/grafana.yml index d107fe4d41..706981a001 100644 --- a/schema/common/validation/configuration/grafana.yml +++ b/schema/common/validation/configuration/grafana.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Grafana specification schema" description: "Grafana specification schema" diff --git a/schema/common/validation/configuration/haproxy.yml b/schema/common/validation/configuration/haproxy.yml index 96d45b470f..f9031e4291 100644 --- a/schema/common/validation/configuration/haproxy.yml +++ b/schema/common/validation/configuration/haproxy.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Haproxy specification schema" description: "Haproxy specification schema" diff --git 
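With `available_roles`/`roles_mapping` split into the `configuration/features` and `configuration/feature-mappings` documents, resolving what actually runs on a component becomes an intersection of the component's mapped features with the globally enabled set. A simplified sketch under that assumption, using the shapes from the defaults:

```python
# Simplified sketch: resolve enabled features for one component from the two
# split documents; assumes the shapes shown in the defaults above.
import yaml

features_doc = yaml.safe_load("""
specification:
  features:
    - name: opensearch
      enabled: true
    - name: firewall
      enabled: false
""")

mappings_doc = yaml.safe_load("""
specification:
  mappings:
    opensearch:
      - opensearch
      - node-exporter
      - filebeat
      - firewall
""")

enabled = {f["name"] for f in features_doc["specification"]["features"] if f["enabled"]}
component = "opensearch"
to_run = [f for f in mappings_doc["specification"]["mappings"][component] if f in enabled]
print(to_run)  # ['opensearch'] - the other mapped features are not enabled here
```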
a/schema/common/validation/configuration/helm.yml b/schema/common/validation/configuration/helm.yml index 777489b9e0..95de586fc9 100644 --- a/schema/common/validation/configuration/helm.yml +++ b/schema/common/validation/configuration/helm.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Helm specification schema" description: "Helm specification schema" diff --git a/schema/common/validation/configuration/image-registry.yml b/schema/common/validation/configuration/image-registry.yml index aaf08f8f03..a08d12c701 100644 --- a/schema/common/validation/configuration/image-registry.yml +++ b/schema/common/validation/configuration/image-registry.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Image-registry specification schema" description: "Image-registry specification schema" @@ -19,54 +20,72 @@ properties: type: object properties: generic: - type: array - items: - type: object - properties: - name: - type: string - file_name: - type: string + type: object + properties: + [applications, rabbitmq, kubernetes-master, postgresql, rook]: + type: array + items: + type: object + properties: + name: + type: string + file_name: + type: string current: - type: array - items: - type: object - properties: - name: - type: string - file_name: - type: string + type: object + properties: + [haproxy, kubernetes-master]: + type: array + items: + type: object + properties: + name: + type: string + file_name: + type: string legacy: - type: array - items: - type: object - properties: - name: - type: string - file_name: - type: string + type: object + properties: + kubernetes-master: + type: array + items: + type: object + properties: + name: + type: string + file_name: + type: string aarch64: type: object properties: generic: - type: array - items: - type: object - properties: - name: - type: string - file_name: - type: string + type: object + properties: + [applications, rabbitmq, kubernetes-master]: + type: array + items: + type: object + properties: + name: + type: string + file_name: + type: string current: - type: array - items: - type: object - properties: - name: - type: string - file_name: - type: string + type: object + properties: + [haproxy, kubernetes-master]: + type: array + items: + type: object + properties: + name: + type: string + file_name: + type: string legacy: - type: array - items: - items: {} + type: object + properties: + kubernetes-master: + type: array + items: + items: {} diff --git a/schema/common/validation/configuration/jmx-exporter.yml b/schema/common/validation/configuration/jmx-exporter.yml index b9edd66fd4..73f4ece24b 100644 --- a/schema/common/validation/configuration/jmx-exporter.yml +++ b/schema/common/validation/configuration/jmx-exporter.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Jmx-exporter specification schema" description: "Jmx-exporter specification schema" diff --git a/schema/common/validation/configuration/kafka-exporter.yml b/schema/common/validation/configuration/kafka-exporter.yml index e98fe687f1..cdb415fae5 100644 --- a/schema/common/validation/configuration/kafka-exporter.yml +++ b/schema/common/validation/configuration/kafka-exporter.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Kafka-exporter specification schema" description: "Kafka-exporter specification schema" diff --git a/schema/common/validation/configuration/kafka.yml b/schema/common/validation/configuration/kafka.yml index ebbd14ba68..311b90bc73 100644 --- a/schema/common/validation/configuration/kafka.yml +++ 
b/schema/common/validation/configuration/kafka.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Kafka specification schema" description: "Kafka specification schema" diff --git a/schema/common/validation/configuration/kibana.yml b/schema/common/validation/configuration/kibana.yml deleted file mode 100644 index 17b77c2e15..0000000000 --- a/schema/common/validation/configuration/kibana.yml +++ /dev/null @@ -1,7 +0,0 @@ -"$id": "#/specification" -title: "Kibana specification schema" -description: "Kibana specification schema" -type: object -properties: - kibana_log_dir: - type: string diff --git a/schema/common/validation/configuration/kubernetes-master.yml b/schema/common/validation/configuration/kubernetes-master.yml index 717c2efc95..0986ae8778 100644 --- a/schema/common/validation/configuration/kubernetes-master.yml +++ b/schema/common/validation/configuration/kubernetes-master.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "K8s-master specification schema" description: "K8s-master specification schema" @@ -134,6 +135,8 @@ properties: - api_server required: - local + enable_controller_attach_detach: + type: boolean required: - api_server_args - controller_manager_args diff --git a/schema/common/validation/configuration/kubernetes-node.yml b/schema/common/validation/configuration/kubernetes-node.yml index c203483d16..33d71d3ecb 100644 --- a/schema/common/validation/configuration/kubernetes-node.yml +++ b/schema/common/validation/configuration/kubernetes-node.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "K8s-nodes specification schema" description: "K8s-nodes specification schema" diff --git a/schema/common/validation/configuration/logging.yml b/schema/common/validation/configuration/logging.yml index 2a434160a0..72d1c92d4e 100644 --- a/schema/common/validation/configuration/logging.yml +++ b/schema/common/validation/configuration/logging.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Logging specification schema" description: "Logging specification schema" @@ -5,36 +6,48 @@ type: object properties: cluster_name: type: string + opensearch_os_user: + type: string + opensearch_os_group: + type: string admin_password: type: string kibanaserver_password: type: string - kibanaserver_user_active: - type: boolean - logstash_password: + filebeatservice_password: type: string - logstash_user_active: - type: boolean demo_users_to_remove: type: array - items: {} + items: + type: string paths: type: object properties: - data: + opensearch_home: + type: string + opensearch_conf_dir: + type: string + opensearch_log_dir: type: string - repo: + opensearch_snapshots_dir: type: string - logs: + opensearch_data_dir: + type: string + opensearch_perftop_dir: type: string jvm_options: type: object properties: Xmx: type: string - opendistro_security: + opensearch_security: type: object properties: + audit: + type: object + properties: + type: + type: string ssl: type: object properties: diff --git a/schema/common/validation/configuration/node-exporter.yml b/schema/common/validation/configuration/node-exporter.yml index 2d65589397..06defe4eae 100644 --- a/schema/common/validation/configuration/node-exporter.yml +++ b/schema/common/validation/configuration/node-exporter.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Node-exporter specification schema" description: "Node-exporter specification schema" diff --git a/schema/common/validation/configuration/opensearch-dashboards.yml b/schema/common/validation/configuration/opensearch-dashboards.yml new file mode 100644 
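The kubernetes-master hunks add `enable_controller_attach_detach` to both the defaults (`true`) and the validation schema. The name suggests it drives the kubelet's `enableControllerAttachDetach` setting, which decides whether the attach/detach controller in kube-controller-manager manages volume attachments instead of the kubelet itself; the mapping to Epiphany's templates is an assumption, not something this patch shows. A hedged sketch of emitting that kubelet config field:

```python
# Hedged sketch: rendering the KubeletConfiguration field this toggle most
# likely drives; the Epiphany-side wiring is assumed, not taken from this patch.
import yaml

enable_controller_attach_detach = True  # from configuration/kubernetes-master

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "enableControllerAttachDetach": enable_controller_attach_detach,
}
print(yaml.safe_dump(kubelet_config, sort_keys=False))
```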
index 0000000000..510b38e3e0 --- /dev/null +++ b/schema/common/validation/configuration/opensearch-dashboards.yml @@ -0,0 +1,25 @@ +--- +kind: configuration/opensearch-dashboards +title: "OpenSearch Dashboards specification schema" +name: default +type: object +properties: + dashboards_os_user: + type: string + dashboards_os_group: + type: string + dashboards_user: + type: string + dashboards_user_password: + type: string + paths: + type: object + properties: + dashboards_home: + type: string + dashboards_conf_dir: + type: string + dashboards_plugin_bin_path: + type: string + dashboards_log_dir: + type: string diff --git a/schema/common/validation/configuration/opendistro-for-elasticsearch.yml b/schema/common/validation/configuration/opensearch.yml similarity index 57% rename from schema/common/validation/configuration/opendistro-for-elasticsearch.yml rename to schema/common/validation/configuration/opensearch.yml index 3992bc36ab..ad3ebcd0b8 100644 --- a/schema/common/validation/configuration/opendistro-for-elasticsearch.yml +++ b/schema/common/validation/configuration/opensearch.yml @@ -1,22 +1,19 @@ +--- "$id": "#/specification" -title: "Opendistro-for-elasticsearch specification schema" -description: "Opendistro-for-elasticsearch specification schema" +title: "opensearch schema" +description: "OpenSearch specification schema" type: object properties: cluster_name: type: string - clustered: - type: boolean + opensearch_os_user: + type: string + opensearch_os_group: + type: string admin_password: type: string kibanaserver_password: type: string - kibanaserver_user_active: - type: boolean - logstash_password: - type: string - logstash_user_active: - type: boolean demo_users_to_remove: type: array items: @@ -24,20 +21,31 @@ properties: paths: type: object properties: - data: + opensearch_home: + type: string + opensearch_conf_dir: type: string - repo: + opensearch_log_dir: type: string - logs: + opensearch_snapshots_dir: + type: string + opensearch_data_dir: + type: string + opensearch_perftop_dir: type: string jvm_options: type: object properties: Xmx: type: string - opendistro_security: + opensearch_security: type: object properties: + audit: + type: object + properties: + type: + type: string ssl: type: object properties: diff --git a/schema/common/validation/configuration/postgres-exporter.yml b/schema/common/validation/configuration/postgres-exporter.yml index e4b9227047..229f937f5d 100644 --- a/schema/common/validation/configuration/postgres-exporter.yml +++ b/schema/common/validation/configuration/postgres-exporter.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Postgres-exporter specification schema" description: "Postgres-exporter specification schema" diff --git a/schema/common/validation/configuration/postgresql.yml b/schema/common/validation/configuration/postgresql.yml index 0f3f0344c7..6e2ef5f130 100644 --- a/schema/common/validation/configuration/postgresql.yml +++ b/schema/common/validation/configuration/postgresql.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Postgresql specification schema" description: "Postgresql specification schema" diff --git a/schema/common/validation/configuration/prometheus.yml b/schema/common/validation/configuration/prometheus.yml index 88d84eba2f..b453476563 100644 --- a/schema/common/validation/configuration/prometheus.yml +++ b/schema/common/validation/configuration/prometheus.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Prometheus specification schema" description: "Prometheus specification schema" diff 
--git a/schema/common/validation/configuration/rabbitmq.yml b/schema/common/validation/configuration/rabbitmq.yml index 6368c623b9..27dda8a07a 100644 --- a/schema/common/validation/configuration/rabbitmq.yml +++ b/schema/common/validation/configuration/rabbitmq.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Rabbitmq specification schema" description: "Rabbitmq specification schema" diff --git a/schema/common/validation/configuration/recovery.yml b/schema/common/validation/configuration/recovery.yml index 6225f7c269..298064684f 100644 --- a/schema/common/validation/configuration/recovery.yml +++ b/schema/common/validation/configuration/recovery.yml @@ -1,3 +1,4 @@ +--- $schema: 'http://json-schema.org/draft-07/schema#' type: object required: diff --git a/schema/common/validation/configuration/repository.yml b/schema/common/validation/configuration/repository.yml index 319bbe7b42..aba2eae989 100644 --- a/schema/common/validation/configuration/repository.yml +++ b/schema/common/validation/configuration/repository.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Repository specification schema" description: "Repository specification schema" diff --git a/schema/common/validation/configuration/rook.yml b/schema/common/validation/configuration/rook.yml index cc829d56f8..87063193a5 100644 --- a/schema/common/validation/configuration/rook.yml +++ b/schema/common/validation/configuration/rook.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Rook specification schema" description: "Rook specification schema" diff --git a/schema/common/validation/configuration/shared-config.yml b/schema/common/validation/configuration/shared-config.yml index 0dfb1a8f9b..908861f488 100644 --- a/schema/common/validation/configuration/shared-config.yml +++ b/schema/common/validation/configuration/shared-config.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Shared-config specification schema" description: "Shared-config specification schema" diff --git a/schema/common/validation/configuration/zookeeper.yml b/schema/common/validation/configuration/zookeeper.yml index eab86b2f04..50d9048ffe 100644 --- a/schema/common/validation/configuration/zookeeper.yml +++ b/schema/common/validation/configuration/zookeeper.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Zookeeper specification schema" description: "Zookeeper specification schema" diff --git a/schema/common/validation/core/base.yml b/schema/common/validation/core/base.yml index 654cd892d9..ca1196be37 100644 --- a/schema/common/validation/core/base.yml +++ b/schema/common/validation/core/base.yml @@ -1,33 +1,25 @@ -{ - "$schema": "http://json-schema.org/draft-07/schema#", - "type": "object", - "required": [ - "provider", # always keep this in first place in the index! - "kind", - "name", - "title", - "version", - "specification" - ], - "properties": { - "kind": { - "$id": "#/properties/kind", - "type": "string", - "title": "The Kind Schema", - "default": "", - "pattern": "" - }, - "title": { - "$ref": "#/definitions/title" - }, - "name": { - "$ref": "#/definitions/name" - }, - "provider": { - "$ref": "#/definitions/provider" - }, - "version": { - "$ref": "#/definitions/version" - } - } -} \ No newline at end of file +--- +"$schema": http://json-schema.org/draft-07/schema# +type: object +required: +- provider # always keep this in first place in the index! 
+- kind +- name +- title +- version +- specification +properties: + kind: + "$id": "#/properties/kind" + type: string + title: The Kind Schema + default: '' + pattern: '' + title: + "$ref": "#/definitions/title" + name: + "$ref": "#/definitions/name" + provider: + "$ref": "#/definitions/provider" + version: + "$ref": "#/definitions/version" diff --git a/schema/common/validation/core/definitions.yml b/schema/common/validation/core/definitions.yml index 99e6efa038..e18ac8dfc2 100644 --- a/schema/common/validation/core/definitions.yml +++ b/schema/common/validation/core/definitions.yml @@ -1,3 +1,4 @@ +--- name: type: string title: The Name Schema @@ -34,4 +35,4 @@ unvalidated_specification: type: - 'object' - 'null' - title: The Specification Schema \ No newline at end of file + title: The Specification Schema diff --git a/schema/common/validation/epiphany-cluster.yml b/schema/common/validation/epiphany-cluster.yml index 4637b5a908..793bf8c76e 100644 --- a/schema/common/validation/epiphany-cluster.yml +++ b/schema/common/validation/epiphany-cluster.yml @@ -1,3 +1,4 @@ +--- "$id": "#/specification" title: "Cluster specification schema" description: "The main cluster specification" @@ -79,19 +80,25 @@ properties: type: object title: The Credentials Schema required: - - key - - secret + - access_key_id + - secret_access_key properties: - key: - "$id": "#/properties/specification/properties/cloud/properties/credentials/properties/key" + access_key_id: + "$id": "#/properties/specification/properties/cloud/properties/credentials/properties/access_key_id" type: string - title: The Key Schema + title: The Key Id Schema pattern: "^(.*)$" - secret: - "$id": "#/properties/specification/properties/cloud/properties/credentials/properties/secret" + secret_access_key: + "$id": "#/properties/specification/properties/cloud/properties/credentials/properties/secret_access_key" type: string title: The Secret Schema pattern: "^(.*)$" + session_token: + "$id": "#/properties/specification/properties/cloud/properties/credentials/properties/session_token" + type: string + title: The session token + description: "Session token cannot contain whitespaces" + pattern: "^[^\\s]*$" network: "$id": "#/properties/specification/properties/cloud/properties/network" type: object @@ -137,9 +144,9 @@ properties: properties: kubernetes_master: properties: - count: { type: integer, enum: [0] } + count: {type: integer, enum: [0]} then: properties: kubernetes_node: properties: - count: { type: integer, enum: [0] } + count: {type: integer, enum: [0]} diff --git a/terraform/aws/epiphany-cluster.j2 b/terraform/aws/epiphany-cluster.j2 index 152e2aee0c..1b42d231b3 100644 --- a/terraform/aws/epiphany-cluster.j2 +++ b/terraform/aws/epiphany-cluster.j2 @@ -20,7 +20,10 @@ terraform { } provider "aws" { - access_key = "{{ specification.cloud.credentials.key }}" - secret_key = "{{ specification.cloud.credentials.secret }}" + access_key = "{{ specification.cloud.credentials.access_key_id }}" + secret_key = "{{ specification.cloud.credentials.secret_access_key }}" +{% if specification.cloud.credentials.session_token is defined %} + token = "{{ specification.cloud.credentials.session_token }}" +{% endif %} region = "{{ specification.cloud.region }}" } diff --git a/tests/spec/Rakefile b/tests/spec/Rakefile index f334dd5699..7c5372e8fa 100644 --- a/tests/spec/Rakefile +++ b/tests/spec/Rakefile @@ -1,6 +1,7 @@ -require 'rake' require 'net/ssh' +require 'rake' require 'rspec/core/rake_task' +require 'yaml' unless ENV['inventory'] print "ERROR: 
Inventory file must be specified by 'inventory' environment variable\n" @@ -31,21 +32,23 @@ all_hosts = {} ungrouped_hosts = {} current_group = nil -File::open(ENV['inventory']) do |f| - while line = f.gets +File.open(ENV['inventory']) do |f| + while (line = f.gets) md = line.match(/^([^#]+)/) # matches lines not starting with a '#' character next unless md + line = md[0] - if line =~ /^\[([^\]]+)\]/ - current_group = $1 - elsif line =~ /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/ # regex for IP address - host_ip = $1 + case line + when /^\[([^\]]+)\]/ + current_group = Regexp.last_match(1) + when /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/ # regex for IP address + host_ip = Regexp.last_match(1) host_name = line.split.first if current_group groups[current_group] ||= {} - groups[current_group][host_ip] = host_name # e.g. "zookeeper"=>{"192.168.0.1"=>"vm-01.localdomain"} + groups[current_group][host_ip] = host_name # e.g. "zookeeper"=>{"192.168.0.1"=>"vm-01.localdomain"} else ungrouped_hosts[host_ip] = host_name end @@ -55,14 +58,10 @@ File::open(ENV['inventory']) do |f| end # Add hosts for 'common' group (not present in inventory) -if all_hosts.length - groups['common'] = all_hosts -end +groups['common'] = all_hosts if all_hosts.length -groups.keys.each do |group| - if !File.directory?("spec/#{group}") || Dir.empty?("spec/#{group}") - groups.delete(group) - end +groups.each_key do |group| + groups.delete(group) if !File.directory?("spec/#{group}") || Dir.empty?("spec/#{group}") end # Re-order postgres hosts to put primary node at the start of the host listing. @@ -72,51 +71,63 @@ end # pg_primary_node_host: primary node host. # pg_primary_node_ip: primary node ip. # pg_last_node_host: last standby node host (if present) -if groups.has_key?("postgresql") - if groups["postgresql"].size > 1 +if groups.key?('postgresql') + if groups['postgresql'].size > 1 pg_primary = {} pg_standy = {} - groups["postgresql"].keys.each do |host| + groups['postgresql'].each_key do |host| Net::SSH.start(host, ENV['user'], keys: [ENV['keypath']], use_agent: false) do |ssh| - result = ssh.exec!("sudo su - postgres -c \"repmgr node check --role\"") - if result.include? "primary" - pg_primary[host] = groups["postgresql"][host] + result = ssh.exec!('sudo su - postgres -c "repmgr node check --role"') + if result.include? 
'primary' + pg_primary[host] = groups['postgresql'][host] else - pg_standy[host] = groups["postgresql"][host] + pg_standy[host] = groups['postgresql'][host] end end end - groups["postgresql"] = pg_primary.merge(pg_standy) - ENV['pg_last_node_host'] = groups["postgresql"].values[-1] + groups['postgresql'] = pg_primary.merge(pg_standy) + ENV['pg_last_node_host'] = groups['postgresql'].values[-1] end - ENV['pg_primary_node_host'] = groups["postgresql"].values[0] - ENV['pg_primary_node_ip'] = groups["postgresql"].keys[0] + ENV['pg_primary_node_host'] = groups['postgresql'].values[0] + ENV['pg_primary_node_ip'] = groups['postgresql'].keys[0] end -puts groups - -task :spec => 'spec:all' -task :default => :spec +task spec: 'spec:all' +task default: :spec namespace :spec do - task :all => groups.keys.map {|group| 'spec:' + group } - task :default => :all + task all: groups.keys.map { |group| "spec:#{group}" } + task default: :all # Tasks for groups - groups.keys.each do |group| - task group.to_sym => groups[group].keys.map {|host| 'spec:' + group + ':' + host } - groups[group].keys.each do |host| + groups.each_key do |group| + task group.to_sym => groups[group].keys.map { |host| "spec:#{group}:#{host}" } + groups[group].each_key do |host| desc "Run tests for group '#{group}'" - task_name = group + ':' + host - RSpec::Core::RakeTask.new(task_name) do |t| + task_name = "#{group}:#{host}" + RSpec::Core::RakeTask.new(task_name) do |task| ENV['TARGET_HOST'] = host - puts "Testing " + task_name - t.pattern = "spec/#{group}/*_spec.rb" - t.fail_on_error = true # to detect RuntimeError (when error occured outside of example) - t.rspec_opts = "--format documentation --format RspecJunitFormatter " \ - "--out " + ENV['spec_output'] + - Time.now.strftime("%Y-%m-%d_%H-%M-%S") + "_#{group}_#{groups[group][host]}.xml" - end + puts '---' + puts "Testing #{task_name}" + task.pattern = "spec/#{group}/*_spec.rb" + task.fail_on_error = true # to detect RuntimeError (when an error occurred outside of an example) + task.rspec_opts = '--format documentation --format RspecJunitFormatter ' \ + '--out ' + ENV['spec_output'] + + Time.now.strftime('%Y-%m-%d_%H-%M-%S') + "_#{group}_#{groups[group][host]}.xml" + # Append extra options + task.rspec_opts += " #{ENV['rspec_extra_opts']}" if ENV['rspec_extra_opts'] + end + end + end + + # Print selected groups + selected_groups = {} + if Rake.application.top_level_tasks.include? 'spec:all' + selected_groups = groups + else + groups.each_key do |group| + selected_groups[group] = groups[group] if Rake.application.top_level_tasks.include? "spec:#{group}" end end + puts selected_groups.to_yaml.gsub("---\n", '') end diff --git a/tests/spec/post_run/ansible/kubernetes_master/undo-copy-kubeconfig.yml b/tests/spec/post_run/ansible/kubernetes_master/undo-copy-kubeconfig.yml new file mode 100644 index 0000000000..567e6fecfc --- /dev/null +++ b/tests/spec/post_run/ansible/kubernetes_master/undo-copy-kubeconfig.yml @@ -0,0 +1,40 @@ +# Serverspec tests require $HOME/.kube/config on kubernetes_master hosts. +# This playbook reverts temporary changes.
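
An editorial aside before the playbook body continues: the Rakefile refactor above parses an Ansible INI inventory into a group => {ip => hostname} mapping before generating per-host RSpec tasks. A minimal Python sketch of that parsing step, simplified to grouped hosts only (the inventory file name and helper name are illustrative, not part of the patch):

import re

def parse_inventory(path: str) -> dict:
    """Build {group: {ip: hostname}} from an Ansible INI inventory."""
    groups: dict = {}
    current_group = None
    with open(path) as inventory:
        for raw_line in inventory:
            line = raw_line.split('#', 1)[0].strip()  # drop comments
            if not line:
                continue
            header = re.match(r'^\[([^\]]+)\]', line)
            if header:  # a [group] section header
                current_group = header.group(1)
                continue
            ip = re.search(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})', line)
            if ip and current_group:
                # e.g. {"zookeeper": {"192.168.0.1": "vm-01.localdomain"}}
                groups.setdefault(current_group, {})[ip.group(1)] = line.split()[0]
    return groups
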
+ +- hosts: kubernetes_master + gather_facts: false + vars: + undo_file_path: ~/.copy-kubeconfig-undo.yml + tasks: + - name: Check if undo file exists + stat: + path: "{{ undo_file_path }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_undo_file + + - name: Undo temporary changes + when: stat_undo_file.stat.exists + block: + - name: Load undo file + slurp: + src: "{{ undo_file_path }}" + register: slurp_undo_file + + - name: Set paths to remove + set_fact: + kubeconfig_paths: "{{ slurp_undo_file['content'] | b64decode | from_yaml }}" + + - name: Remove paths + file: + path: "{{ item.path }}" + state: absent + loop_control: + label: "{{ item.path }}" + loop: "{{ kubeconfig_paths | selectattr('stat.exists', '==', false) }}" + + - name: Clean up undo file + file: + path: "{{ undo_file_path }}" + state: absent diff --git a/tests/spec/pre_run/ansible/kubernetes_master/copy-kubeconfig.yml b/tests/spec/pre_run/ansible/kubernetes_master/copy-kubeconfig.yml new file mode 100644 index 0000000000..79296da7b9 --- /dev/null +++ b/tests/spec/pre_run/ansible/kubernetes_master/copy-kubeconfig.yml @@ -0,0 +1,68 @@ +# Serverspec tests require $HOME/.kube/config on kubernetes_master hosts. +# This playbook should only make changes that can be reverted. + +- hosts: kubernetes_master + gather_facts: false + vars: + undo_file_path: ~/.copy-kubeconfig-undo.yml + module_defaults: + stat: + get_attributes: false + get_checksum: false + get_mime: false + tasks: + - name: Assert kubeconfig_remote_path variable is defined + assert: + that: kubeconfig_remote_path is defined + quiet: true + + - name: Get info on remote user + setup: + gather_subset: min + become_user: "{{ ansible_user }}" + become: true + + - name: Check if paths exist + stat: + path: "{{ item }}" + get_attributes: false + get_checksum: false + get_mime: false + register: stat_kubeconfig_paths + loop: + - "{{ ansible_facts.user_dir }}/.kube" + - "{{ ansible_facts.user_dir }}/.kube/config" + + - name: Check if undo file exists + stat: + path: "{{ undo_file_path }}" + register: stat_undo_file + + - name: Save info on initial state to file # to undo changes + when: + - not stat_undo_file.stat.exists + - not stat_kubeconfig_paths.results[1].stat.exists + copy: + dest: "{{ undo_file_path }}" + mode: u=rw,g=r,o= + content: | + # This file is managed by Ansible and is needed to restore original state. DO NOT EDIT. + {{ stat_kubeconfig_paths.results | json_query('[].{path: item, stat: stat}') | to_nice_yaml(indent=2) }} + + - name: Create ~/.kube directory + when: not stat_kubeconfig_paths.results[0].stat.exists + file: + path: "{{ ansible_facts.user_dir }}/.kube" + state: directory + mode: u=rwx,go= + + - name: Copy kubeconfig file + when: not stat_kubeconfig_paths.results[1].stat.exists + become: true + copy: + src: "{{ kubeconfig_remote_path }}" + dest: "{{ ansible_facts.user_dir }}/.kube/config" + remote_src: true + owner: "{{ ansible_facts.user_uid }}" + group: "{{ ansible_facts.user_gid }}" + mode: u=rw,go= diff --git a/tests/spec/spec/filebeat/filebeat_spec.rb b/tests/spec/spec/filebeat/filebeat_spec.rb index d4783ae5f2..31db49c081 100644 --- a/tests/spec/spec/filebeat/filebeat_spec.rb +++ b/tests/spec/spec/filebeat/filebeat_spec.rb @@ -8,13 +8,13 @@ # Configurable passwords for ES users were introduced in v0.10.0. # For testing upgrades, we use default passwords for now but they should be read from filebeat.yml (remote host). 
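
An editorial aside on the two kubeconfig playbooks above: they implement a create-and-undo pattern, recording which paths already existed before creating anything so the post_run playbook deletes only what the pre_run playbook added. A minimal Python sketch of that bookkeeping (the undo file name, JSON format, and paths are illustrative, not the patch's YAML mechanism):

import json
import os

UNDO_FILE = os.path.expanduser('~/.copy-kubeconfig-undo.json')
PATHS = [os.path.expanduser('~/.kube'), os.path.expanduser('~/.kube/config')]

def save_initial_state() -> None:
    """Record which paths existed before any changes were made."""
    if not os.path.exists(UNDO_FILE):
        state = {path: os.path.exists(path) for path in PATHS}
        with open(UNDO_FILE, 'w') as undo:
            json.dump(state, undo)

def undo_changes() -> None:
    """Remove only the paths that did not exist initially."""
    if not os.path.exists(UNDO_FILE):
        return
    with open(UNDO_FILE) as undo:
        state = json.load(undo)
    # remove deepest paths first, and only the ones we created
    for path in sorted(state, key=len, reverse=True):
        if not state[path] and os.path.exists(path):
            os.rmdir(path) if os.path.isdir(path) else os.remove(path)
    os.remove(UNDO_FILE)
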
-es_logstash_user_password = readDataYaml('configuration/logging')['specification']['logstash_password'] || 'logstash' -es_logstash_user_is_active = readDataYaml('configuration/logging')['specification']['logstash_user_active'] -es_logstash_user_is_active = true if es_logstash_user_is_active.nil? +es_filebeat_user_password = readDataYaml('configuration/logging')['specification']['filebeatservice_password'] || 'PASSWORD_TO_CHANGE' +es_filebeat_user_is_active = !listInventoryHosts('logging').empty? + +filebeat_user = upgradeRun? ? 'logstash' : 'filebeatservice' es_kibanaserver_user_password = readDataYaml('configuration/logging')['specification']['kibanaserver_password'] || 'kibanaserver' -es_kibanaserver_user_is_active = readDataYaml('configuration/logging')['specification']['kibanaserver_user_active'] -es_kibanaserver_user_is_active = true if es_kibanaserver_user_is_active.nil? +es_kibanaserver_user_is_active = !listInventoryHosts('logging').empty? es_api_port = 9200 kibana_api_port = 5601 @@ -44,11 +44,11 @@ end end -if es_logstash_user_is_active +if es_filebeat_user_is_active listInventoryHosts('logging').each do |val| describe 'Check the connection to the Elasticsearch hosts' do let(:disable_sudo) { false } - describe command("curl -k -u logstash:#{es_logstash_user_password} -o /dev/null -s -w '%{http_code}' https://#{val}:#{es_api_port}") do + describe command("curl -k -u #{filebeat_user}:#{es_filebeat_user_password} -o /dev/null -s -w '%{http_code}' https://#{val}:#{es_api_port}") do it 'is expected to be equal' do expect(subject.stdout.to_i).to eq 200 end diff --git a/tests/spec/spec/kibana/kibana_spec.rb b/tests/spec/spec/kibana/kibana_spec.rb deleted file mode 100644 index b6f79e839b..0000000000 --- a/tests/spec/spec/kibana/kibana_spec.rb +++ /dev/null @@ -1,89 +0,0 @@ -require 'spec_helper' - -# Configurable passwords for ES users were introduced in v0.10.0. -# For testing upgrades, we use the default password for now but it should be read from kibana.yml (remote host). -es_kibanaserver_user_password = readDataYaml('configuration/logging')['specification']['kibanaserver_password'] || 'kibanaserver' -es_kibanaserver_user_is_active = readDataYaml('configuration/logging')['specification']['kibanaserver_user_active'] -es_kibanaserver_user_is_active = true if es_kibanaserver_user_is_active.nil? 
- -es_api_port = 9200 -kibana_default_port = 5601 - -describe 'Check if Kibana package is installed' do - describe package('opendistroforelasticsearch-kibana') do - it { should be_installed } - end -end - -describe 'Check if Kibana service is running' do - describe service('kibana') do - it { should be_enabled } - it { should be_running } - end -end - -describe 'Check if Kibana user exists' do - describe group('kibana') do - it { should exist } - end - describe user('kibana') do - it { should exist } - it { should belong_to_group 'kibana' } - end -end - -describe 'Check Kibana directories and config files' do - describe file('/etc/kibana') do - it { should exist } - it { should be_a_directory } - end - describe file('/etc/kibana/kibana.yml') do - it { should exist } - it { should be_a_file } - end - describe file('/etc/logrotate.d/kibana') do - it { should exist } - it { should be_a_file } - end -end - -describe 'Check if non-empty Kibana log file exists' do - describe command('find /var/log/kibana -maxdepth 1 -name kibana.log* -size +0 -type f | wc -l') do - its(:exit_status) { should eq 0 } - its('stdout.to_i') { should > 0 } - end -end - -if es_kibanaserver_user_is_active - listInventoryHosts('logging').each do |val| - describe 'Check the connection to the Elasticsearch hosts' do - let(:disable_sudo) { false } - describe command("curl -k -u kibanaserver:#{es_kibanaserver_user_password} -o /dev/null -s -w '%{http_code}' https://#{val}:#{es_api_port}") do - it 'is expected to be equal' do - expect(subject.stdout.to_i).to eq 200 - end - end - end - end - - listInventoryHosts('kibana').each do |val| - describe 'Check Kibana app HTTP status code' do - let(:disable_sudo) { false } - describe command("curl -u kibanaserver:#{es_kibanaserver_user_password} -o /dev/null -s -w '%{http_code}' http://#{val}:#{kibana_default_port}/app/kibana") do - it 'is expected to be equal' do - expect(subject.stdout.to_i).to eq 200 - end - end - end - end -end - -listInventoryHosts('kibana').each do |val| - describe 'Check Kibana health' do - let(:disable_sudo) { false } - describe command("curl http://#{val}:#{kibana_default_port}/api/status") do - its(:stdout_as_json) { should include('status' => include('overall' => include('state' => 'green'))) } - its(:exit_status) { should eq 0 } - end - end -end diff --git a/tests/spec/spec/logging/logging_spec.rb b/tests/spec/spec/logging/logging_spec.rb index 617f74bc59..da2b9997e6 100644 --- a/tests/spec/spec/logging/logging_spec.rb +++ b/tests/spec/spec/logging/logging_spec.rb @@ -1,41 +1,55 @@ require 'spec_helper' - # Configurable passwords for ES users were introduced in v0.10.0. # For testing upgrades, we use the default password for now but we're going to switch to TLS auth. 
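
An editorial aside, not part of the patch: the deleted Kibana spec above and the logging spec below both validate services through authenticated HTTP probes (the `curl -k -u user:password` commands). A minimal Python equivalent of the cluster-health probe, with host and credentials as placeholders and certificate checks disabled just as `curl -k` does:

import base64
import json
import ssl
import urllib.request

def cluster_status(host: str, user: str = 'admin', password: str = 'admin',
                   port: int = 9200) -> str:
    """Return the cluster health status reported by the REST API."""
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE  # skip cert validation, like `curl -k`
    request = urllib.request.Request(f'https://{host}:{port}/_cluster/health')
    token = base64.b64encode(f'{user}:{password}'.encode()).decode()
    request.add_header('Authorization', f'Basic {token}')
    with urllib.request.urlopen(request, context=context) as response:
        return json.load(response)['status']  # 'green', 'yellow' or 'red'

# the specs pass when the cluster is at least partially allocated:
assert cluster_status('localhost') in ('green', 'yellow')
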
+es_kibanaserver_user_password = readDataYaml('configuration/logging')['specification']['kibanaserver_password'] || 'kibanaserver' es_admin_password = readDataYaml('configuration/logging')['specification']['admin_password'] || 'admin' - es_rest_api_port = 9200 es_transport_port = 9300 +opensearch_dashboards_port = 5601 + +describe 'Check if opensearch service is running' do + describe service('opensearch') do + it { should be_enabled } + it { should be_running } + end +end -describe 'Check if Elasticsearch service is running' do - describe service('elasticsearch') do +describe 'Check if opensearch-dashboards service is running' do + describe service('opensearch-dashboards') do it { should be_enabled } it { should be_running } end end -describe 'Check if elasticsearch user exists' do - describe group('elasticsearch') do +describe 'Check if opensearch user exists' do + describe group('opensearch') do it { should exist } end - describe user('elasticsearch') do + describe user('opensearch') do it { should exist } - it { should belong_to_group 'elasticsearch' } + it { should belong_to_group 'opensearch' } + end +end +describe 'Check if opensearch_dashboards user exists' do + describe group('opensearch_dashboards') do + it { should exist } + end + describe user('opensearch_dashboards') do + it { should exist } + it { should belong_to_group 'opensearch_dashboards' } end end - describe 'Check Elasticsearch directories and config files' do let(:disable_sudo) { false } - describe file('/etc/elasticsearch') do + describe file('/usr/share/opensearch') do it { should exist } it { should be_a_directory } end - describe file('/etc/elasticsearch/elasticsearch.yml') do + describe file('/usr/share/opensearch/config/opensearch.yml') do it { should exist } it { should be_a_file } end end - describe 'Check if the ports are open' do let(:disable_sudo) { false } describe port(es_rest_api_port) do @@ -55,10 +69,7 @@ end end end -end - -listInventoryHosts('logging').each do |val| - describe 'Check Elasticsearch health' do + describe 'Check OpenSearch health' do let(:disable_sudo) { false } describe command("curl -k -u admin:#{es_admin_password} https://#{val}:#{es_rest_api_port}/_cluster/health?pretty=true") do its(:stdout_as_json) { should include('status' => /green|yellow/) } @@ -66,4 +77,19 @@ its(:exit_status) { should eq 0 } end end + describe 'Check OpenSearch Dashboards HTTP status code' do + let(:disable_sudo) { false } + describe command("curl -u kibanaserver:#{es_kibanaserver_user_password} -o /dev/null -s -w '%{http_code}' http://#{val}:#{opensearch_dashboards_port}/app/login") do + it 'is expected to be equal' do + expect(subject.stdout.to_i).to eq 200 + end + end + end + describe 'Check OpenSearch Dashboards health' do + let(:disable_sudo) { false } + describe command("curl http://#{val}:#{opensearch_dashboards_port}/api/status") do + its(:stdout_as_json) { should include('status' => include('overall' => include('state' => 'green'))) } + its(:exit_status) { should eq 0 } + end + end end diff --git a/tests/spec/spec/opensearch/opensearch_spec.rb b/tests/spec/spec/opensearch/opensearch_spec.rb new file mode 100644 index 0000000000..41ea711aa0 --- /dev/null +++ b/tests/spec/spec/opensearch/opensearch_spec.rb @@ -0,0 +1,63 @@ +require 'spec_helper' +# Configurable passwords for ES users were introduced in v0.10.0. +# For testing upgrades, we use the default password for now but we're going to switch to TLS auth.
+es_admin_password = readDataYaml('configuration/opensearch')['specification']['admin_password'] || 'admin' +es_rest_api_port = 9200 +es_transport_port = 9300 + +describe 'Check if opensearch service is running' do + describe service('opensearch') do + it { should be_enabled } + it { should be_running } + end +end + +describe 'Check if opensearch user exists' do + describe group('opensearch') do + it { should exist } + end + describe user('opensearch') do + it { should exist } + it { should belong_to_group 'opensearch' } + end +end + +describe 'Check OpenSearch directories and config files' do + let(:disable_sudo) { false } + describe file('/usr/share/opensearch') do + it { should exist } + it { should be_a_directory } + end + describe file('/usr/share/opensearch/config/opensearch.yml') do + it { should exist } + it { should be_a_file } + end +end +describe 'Check if the ports are open' do + let(:disable_sudo) { false } + describe port(es_rest_api_port) do + it { should be_listening } + end + describe port(es_transport_port) do + it { should be_listening } + end +end + +listInventoryHosts('opensearch').each do |val| + describe 'Check OpenSearch nodes status codes' do + let(:disable_sudo) { false } + describe command("curl -k -u admin:#{es_admin_password} -o /dev/null -s -w '%{http_code}' https://#{val}:#{es_rest_api_port}") do + it 'is expected to be equal' do + expect(subject.stdout.to_i).to eq 200 + end + end + end + describe 'Check OpenSearch health' do + let(:disable_sudo) { false } + describe command("curl -k -u admin:#{es_admin_password} https://#{val}:#{es_rest_api_port}/_cluster/health?pretty=true") do + its(:stdout_as_json) { should include('status' => /green|yellow/) } + its(:stdout_as_json) { should include('number_of_nodes' => countInventoryHosts('opensearch')) } + its(:exit_status) { should eq 0 } + end + end +end diff --git a/tests/spec/spec/postgresql/postgresql_spec.rb b/tests/spec/spec/postgresql/postgresql_spec.rb index 6d7caeb269..7aa270b74d 100644 --- a/tests/spec/spec/postgresql/postgresql_spec.rb +++ b/tests/spec/spec/postgresql/postgresql_spec.rb @@ -479,7 +479,8 @@ def get_elasticsearch_query(message_pattern:, size: 20, with_sort: true) _source: ['message', '@timestamp'], query: { query_string: { - query: "log.file.path:(\\/var\\/log\\/postgresql\\/postgresql\\-13\\-main.log OR \\/var\\/log\\/postgresql\\/postgresql.log) AND message:#{message_pattern} AND @timestamp:[now-30m TO now]" + query: 'log.file.path:(\\/var\\/log\\/postgresql\\/postgresql\\-13\\-main.log OR \\/var\\/log\\/postgresql\\/postgresql.log)'\ + " AND message:#{message_pattern} AND message:\"LOG: AUDIT\" AND @timestamp:[now-30m TO now]" } }, size: size diff --git a/tests/unit/helpers/constants.py b/tests/unit/helpers/constants.py index aca283fd94..312a7ca47a 100644 --- a/tests/unit/helpers/constants.py +++ b/tests/unit/helpers/constants.py @@ -53,8 +53,8 @@ 'use_service_principal': False, 'region': 'West Europe', 'credentials': { - 'key': '1111-1111-1111', - 'secret': 'XXXXXXXXXXXXXXX' + 'access_key_id': '1111-1111-1111', + 'secret_access_key': 'XXXXXXXXXXXXXXX' }, 'default_os_image': 'default' }, diff --git a/tests/unit/helpers/test_data_loader.py b/tests/unit/helpers/test_data_loader.py index d8239d36b1..e486043c15 100644 --- a/tests/unit/helpers/test_data_loader.py +++ b/tests/unit/helpers/test_data_loader.py @@ -31,8 +31,8 @@ 'use_public_ips': False, 'credentials': { - 'key': 'XXXX-XXXX-XXXX', - 'secret': 'XXXXXXXXXXXXXXXX' + 'access_key_id': 'XXXX-XXXX-XXXX', + 'secret_access_key': 
'XXXXXXXXXXXXXXXX' }, 'default_os_image': 'default' }, @@ -47,7 +47,7 @@ 'postgresql': {'count': 1}, 'load_balancer': {'count': 1}, 'rabbitmq': {'count': 1}, - 'opendistro_for_elasticsearch': {'count': 1} + 'opensearch': {'count': 1} } } } diff --git a/tests/unit/providers/data/APIProxy_data.py b/tests/unit/providers/data/APIProxy_data.py old mode 100644 new mode 100755 index 99bbfe7fd9..35ab9a0ac6 --- a/tests/unit/providers/data/APIProxy_data.py +++ b/tests/unit/providers/data/APIProxy_data.py @@ -88,8 +88,9 @@ def CLUSTER_MODEL(provider: str) -> ObjDict: 'default_os_image': 'default', 'hostname_domain_extension': '', 'credentials': { - 'key': 'key', - 'secret': 'secret' + 'access_key_id': 'key', + 'secret_access_key': 'secret', + 'session_token': 'token' } }, 'components': {