This repository has been archived by the owner on Dec 9, 2020. It is now read-only.

start of nested stacks #41

Closed
wants to merge 2 commits

Conversation

detiber (Contributor) commented Oct 17, 2016

No description provided.

detiber (Contributor, Author) commented Oct 17, 2016

@cooktheryan working branch for nested stack development.

cooktheryan (Contributor) commented

@detiber

I know this is super MVP, but is there a way we can provide these as inputs in the template, or override them at run time?

[rcook@localhost cloudformation]$ grep -i "s3" ./* -R
./files/nested/three_master_infra_asg_node_asg_no_bastion.yaml:    Default: 'https://s3.amazonaws.com/openshift-cloudformation-templates/vpc/default.yaml'
./files/nested/three_master_infra_asg_node_asg_no_bastion.yaml:    Default: 'https://s3.amazonaws.com/openshift-cloudformation-templates/security-groups/default.yaml'
./files/nested/three_master_infra_asg_node_asg_no_bastion.yaml:    Default: 'https://s3.amazonaws.com/openshift-cloudformation-templates/iam-profiles/default.yaml'
./files/nested/three_master_infra_asg_node_asg_no_bastion.yaml:    Default: 'https://s3.amazonaws.com/openshift-cloudformation-templates/control-plane/default.yaml'

detiber (Contributor, Author) commented Oct 24, 2016

@cooktheryan indeed, they are already exposed as parameters in the template; we just need to pass them through template_parameters for the cloudformation task.
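
For reference, a minimal sketch of what that could look like in the cloudformation task. The parameter names and the template_bucket variable below are illustrative placeholders, not the actual names defined in the template:

- name: Create the nested OpenShift stack
  cloudformation:
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ region }}"
    template: files/nested/three_master_infra_asg_node_asg_no_bastion.yaml
    template_parameters:
      # Hypothetical parameter names -- check the template's Parameters section for the real ones
      VpcTemplateUrl: "https://s3.amazonaws.com/{{ template_bucket }}/vpc/default.yaml"
      SecurityGroupsTemplateUrl: "https://s3.amazonaws.com/{{ template_bucket }}/security-groups/default.yaml"
      IamProfilesTemplateUrl: "https://s3.amazonaws.com/{{ template_bucket }}/iam-profiles/default.yaml"
      ControlPlaneTemplateUrl: "https://s3.amazonaws.com/{{ template_bucket }}/control-plane/default.yaml"

With something along these lines, the S3 locations can be overridden per run via extra-vars instead of relying on the defaults baked into the template.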

stuartbfox commented

Any movement on this? Not being able to specify the number of nodes at deploy time, via either an ASG or jinja rendering, is a bit of a blocker.

cooktheryan (Contributor) commented

@stuartbfox @detiber has mainly been driving this, but other items have popped up in his schedule.

From a high level, though, I believe the first iteration will not include a bastion, if that makes any difference.

detiber (Contributor, Author) commented Nov 14, 2016

@stuartbfox @cooktheryan I hope to pick this back up later this week.

cooktheryan (Contributor) commented

@detiber you may want to swap in the ec2_zones_by_region.py you patched up for me. I was failing very early in the playbook run, during VPC creation, due to EC2-Classic.

cooktheryan (Contributor) commented

All items launched as expected

detiber (Contributor, Author) commented Nov 21, 2016

@cooktheryan I think the latest commit addresses the ec2_zones_by_region issue.

stevekuznetsov commented

@detiber Do you think it would be wise to use the k8s master and minion roles here as the base for our master and node roles, then add on top a whitelist of permissions we see OpenShift needing that the Kubernetes roles don't grant?

- ':'
- - 'HTTPS'
  - Ref: MasterApiPort
  - /healthz/ready


What's going on here? Why is the target https:443/healthz/ready? Don't we need a hostname?

detiber (Contributor, Author) replied

No, there is no need to specify the hostname. This specifies an HTTPS health check using port 443 and the path /healthz/ready. While it looks odd, it is the correct format for an ELB health check target.
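
For context, the fully expanded health check usually looks something like the CloudFormation fragment below; the target string encodes protocol, port, and path as <protocol>:<port><path>. The interval/threshold values are illustrative, not taken from this PR:

# Fragment of an AWS::ElasticLoadBalancing::LoadBalancer resource (illustrative values)
HealthCheck:
  # ELB health check target, e.g. HTTPS:443/healthz/ready -- no hostname needed,
  # the check runs against each registered instance
  Target: !Join ['', ['HTTPS:', !Ref MasterApiPort, '/healthz/ready']]
  Interval: '10'
  Timeout: '5'
  HealthyThreshold: '2'
  UnhealthyThreshold: '2'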

detiber (Contributor, Author) commented Nov 29, 2016

@stevekuznetsov We actually need fewer permissions than the Kubernetes IAM policies require. The ones we are using now are a bit overly broad and can be restricted further.
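
To make that concrete, a trimmed-down master policy of the kind being discussed might look roughly like the sketch below; the exact action list is an assumption for illustration, not the set this PR settles on:

# Illustrative only -- the action list is an assumption, not the final policy from this PR
MasterIamRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: openshift-master
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                # Volume attach/detach plus read-only describes; much narrower
                # than a broad wildcard policy
                - ec2:Describe*
                - ec2:AttachVolume
                - ec2:DetachVolume
                - elasticloadbalancing:Describe*
              Resource: '*'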

tomassedovic (Contributor) commented

Hey folks! I'd like to do something similar for OpenStack and I have a few questions, if you don't mind.

  1. How is this expected to be used? Running ansible-playbook nested_setup.yml, waiting for it to finish, getting/writing the inventory of the created nodes, and then running the openshift-ansible installer against the environment created by the playbook here? Or is there something that integrates it all into a single command?

  2. What's the relationship between cloudformation_setup.yml and nested_setup.yml? As far as I can see, you'd use one or the other, but not together. Is that correct?

  3. This playbook creates the master nodes, but not the infra or app ones. Are they going to be created elsewhere, or have they just not made it into this patch yet?

  4. What's the root volume for? The instance's root partition?

  5. As far as I can tell, the master01, master02 and master03 nodes are identical. Why not create an autoscaling/resource group (not sure what the CloudFormation name for this is) and set the count to 3? Does it have to do with the subnets associated with each node?

  6. What is the ec2_zones_by_region lookup for? Getting the availability zones so we have each master node in a different AZ to make the deployment more resilient?

Apologies for the stream of questions. Thanks!

stevekuznetsov commented

I don't have all the answers for you, but I have some:

> get/write the inventory of the created nodes and then running the openshift-ansible installer against the environment created by the playbook here?

The appropriate way to interface a provisioner script with other playbooks is through a dynamic inventory. I'm working on some updates to this patch that will use the default Ansible dynamic EC2 inventory and add support for host groups that OpenShift-Ansible needs.
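
As a rough sketch of the kind of mapping meant here (the tag-derived group names are placeholders, not necessarily what the patch will use):

# Placeholder sketch: tag_openshift_role_master / tag_openshift_role_node are hypothetical
# groups produced by ec2.py from instance tags
- name: Map EC2 dynamic inventory groups to openshift-ansible host groups
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add tagged masters to the masters/etcd/nodes groups
      add_host:
        name: "{{ item }}"
        groups: masters,etcd,nodes
      with_items: "{{ groups.get('tag_openshift_role_master', []) }}"

    - name: Add tagged nodes to the nodes group
      add_host:
        name: "{{ item }}"
        groups: nodes
      with_items: "{{ groups.get('tag_openshift_role_node', []) }}"

The resulting groups can then be consumed directly by the openshift-ansible byo playbooks.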

> is there something that integrates it all into a single command?

There may be in the future, but there is not today. I assume the packaged and disseminated installer will have a nice flow for this, but that's a ways off.

> What's the relationship between cloudformation_setup.yml and nested_setup.yml?

The former has some hard-coded values for faster startup, but yes they achieve similar end results. The latter is the one meant for general consumption.

> This playbook creates the master nodes, but not the infra or app ones. Are they going to be created elsewhere, or have they just not made it into this patch yet?

This is still WIP. I have a patch for this patch that adds things like this, but it's not quite ready yet.

> What's the root volume for? The instance's root partition?

Yes, where the OS + friends live.

> Why not create an autoscaling/resource group (not sure what the CloudFormation name for this is) and set the count to 3?

@detiber and I had talked about this ... I don't know what is more common for AWS, but an auto-scaling group that is never intended to scale seems weird. In my patch we generate the config for each of these nodes from a template, so there isn't a chance of drift between the hard-coded master node instances anyway.
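
For illustration, that templated approach looks roughly like the Jinja-rendered CloudFormation fragment below (e.g. a masters.yaml.j2); the variable and parameter names are placeholders:

Resources:
{% for i in range(1, (num_masters | default(3)) + 1) %}
  Master{{ '%02d' | format(i) }}:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref MasterImageId
      InstanceType: !Ref MasterInstanceType
      # Spread the masters round-robin across three subnets/AZs
      SubnetId: !Ref Subnet{{ ((i - 1) % 3) + 1 }}
{% endfor %}

Every master is stamped from the same block, so changing the instance definition in one place changes all of them.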

tomassedovic (Contributor) commented

Thanks for your reply @stevekuznetsov. I understand this is WIP; I just wondered whether I was missing something regarding the app nodes.

pschiffe added the aws label on Jan 5, 2017
cooktheryan pushed a commit that referenced this pull request Jun 16, 2017
* First cut at the rhc-ose-ansible structure

* New OSE3 docker host builder and OpenStack ansible provisioning support

* Support for supplying flavor name and moved around variables

* Refactored OpenStack provisioning to be a generic role. Created OpenShift specific playbook

* Registry Role for ansible playbooks

* Added immediate=yes to have the firewalld port take effect; restructured registry role; changed true to yes in module parameters

* added post_install role

* adding playbook

* Migration of CICD server provisioning to Ansible

* Adding nginx auth layer

* Removing key name from registry

* Refactoring and renaming

* adding openshift-ansible's post install roles

* removing deprecated files

* Shell for role variable info

* removing extra files

* Add OpenStack SSH key parameter check

* Replacing yum commands and normalizing comments

* fixed README

* Renaming template files with .j2 for clarity

* Add OpenStack security group detection and creation resolves #106

* Change to using split to iterate and SSH rule create only once

* Reorder instance names to sort by env_id

* Change default_env_id of "testenv" to local env OS_USERNAME resolves #142

* Prepend 'casl' to default_env_id

* Add connection test to OpenStack before proceeding

* First cut at DNS ansible roles

* Updated defaults and tasks for dns-server

* Add subscription-manager support for Hosted or Satellite

* Refactor role to dynamically determine rhsm_method

* Removes rhsm_method
* Renames rhsm_server to rhsm_satellite
* Add additional pre_task checks (hosted + key)
* Change conditionals from rhsm_method check to rhsm_satellite defined
* Change repos disable/enable from key to if repos are defined
* Update README and examples in inventory file

* Fix bad syntax with extra 'and' in when using rhsm_pool

* Refactor use of rhsm_password to prevent display to CLI

* Cosmetic changes to task names and move yum clean all to prereqs

* Remove vars_prompt, add info to README to re-enable and for ansible-vault

* Add openstack pre_tasks and ansible_sudo when calling role

* Add deprovision playbook using nova list with sanity checks

- Add minimum length check for env_id
- Add max_instances check
- Remove dynamic openstack.py inventory
- Add override to bypass checks

* Refactor debug flag to be dry_run and other small changes

- Removed debug statements and instead display on pause prompt
- Moved to playbooks directory

* Add ansible_sudo: true to subscription-manager task

* This matches PR#133 enabling ansible_sudo: true when calling that role
* Also changes max_instances check from >= to just > to allow 2 full default environments to be removed (6 max_instances)

* Updated to fix broken/missing 'defaults'...

* Add unique image logic and rename playbook to terminate.yml

* Add OSE provision prerequisites

- Install required packages
- Update packages (moved from main.yml)
- Install and disable firewalld
- Install iptables-services and disable iptables
- Verify and set hostname if needed

* Add SELinux check and fail if not enforcing

* Remove getenforce and firewall tasks and use facts

- Uses Ansible collected facts to determine SELinux status
- Adds ansible_sudo: true when calling role
- Adds tag to role when calling it

* Add docker role

- Largely taken from cicd docker.yml
- Changed to using a template for docker-storage-setup
- Using variables for both DEV and VG defined in defaults
- Using pvs command to check for use of DEV and VG before proceeding

* Add org parameter to Satellite with user/pass

* Fix typo in task name

* Updated dns-server role based on feedback

* Changes by JayKayy for a full provision of OpenShift on OpenStack

* Role for disconnected git server

* Added additional yum dependency and corrected spelling

* Added example of disconnected git inventory file

* Changes to allow runs from inside a container. Also allows for running upstream openshift-ansible installer

* Reverting previous commit and making template adjustments

* Subscription manager role should accommodate orgs with spaces

* Fixing unescaped newline

* Changing hard-coded host groups to match openshift-ansible expected host groups. Importing byo playbook now instead of nested ansible run. Need to refactor how we generate hostnames to make it fit this.

* Updated to run as root rather than cloud-user, for now...

* Updated inventory template to include openshift_hostname and openshift_public_hostname

* Wrapping in a script to tie the two playbooks together

* Updating ose-provision with DNS workarounds / fixes

* Removed spaces causing issues...

* DNS fix to support OSEv3.2

* Add floating IP support when using Neutron

* Updated to remove repos from playbook + fix typo

* Cleaned up hostname role to make it more generic

* Image name for DNS server becomes configurable.

* Updated inventory and template file to make cluster config optional

* Removing temporary file

* Loosen up the DNS server a bit to allow for ETL OSP installs

* Re-implements original subscription-manager role invocation that was removed in PR #168.

* Enhanced provisioning script with better error checking, directory awareness, and improved help output

* Should be looking for generated inventory file in SCRIPTS_BASE_DIR

* Add Neutron floating IP support for Issue #195

* Add check for and set_fact if Neutron is in use which is used by several tasks
* This PR was originally longer and contained the now split off PR #197

* first attempt at securing the registry

* Minor updates for ansible 2.1 compatibility

* Updated CICD implementation to support ETL OSP env

* Updated OSE inventory file with some clean-up

* Add enhancements for the terminate playbook

* Fixes Issue #206
* Add check for valid item when attempting to delete objects
* Add debug on all variables when using dry_run
* Changed default ansible_ssh_user to cloud-user in line with standard cloud guest image
* Add count for ips and volumes to display since these may not always be the same as instance count
* Enhance displayed warning/note message to include new counts
* It is possible for an instance to not have a floating IP for whatever reason (such as manually deallocating or releasing the IP), in this case SSH will not work to the instance so it will not be included in the host group to attempt subscription manager unregister, but will still be deleted
* It is possible that an instance will have a volume created but not attached. In this case as a precautionary measure I am excluding these unattached volumes from the deletion in case this was intentionally detached to preserve data. We can further discuss if this should be a parameter to override instead or if we need to change this behavior.
* Excluded instances in ERROR state as they will most likely not delete. We can discuss if this should be parameterized instead.
* Added prompt variable defaulted to true but can be set to false
* Added unregister variable defaulted to true but can be set to false

* Adding NFS support and fixing template labels so we get a router and registry out of the box.

* testing changes

* tested changes

* fixing defaults and removing host from test playbook

* adding cleanup test playbook and fixed typo

* Allow passing of ansible extra-vars in provisioning script

* Change --environment to --extra-vars and add usage.

* added check for already secured registry and uses the actual openshift_common dependency

* fixed readiness probe by adding logic for 3.1 vs 3.2

* Fix malformed file to address Issue #210

* Pulling out file paths into variables to account for containerized installs

* fixed error message logic for already secured registry

* added tasks to disable and re-enable deployment triggers, remove debug task

* Fixes Issue #163 if rhsm_password is not defined

* Adding a post-install playbook with secure-registry and ssh key sync.

* Node storage now uses node specific storage var; search for generated inventory file sorts by timestamp not name

* Initial commit exposing registry service

* move registry_hostname to inventory

* Updated env_id to be a sub-domain + make the logic a bit more flexible

* Enabled default subdomain/'apps'

* Updated inventory template file to include 'openshift_deployment_type'

* Adding LDAP and HTPasswd examples for an auth provider to base inventory file

* Fixing port number in LDAP example

* Refactor OpenStack security group creation

* Adds new openstack-security-groups role
* Addresses Issue #211 and adds all instances to default group
* Defines default security group variable with all groups/rules
* Sets security group variables per type (master,node,nfs,dns)
* Supports specifying no security group for a type (e.g. nfs)
* Uses new Ansible 2.x modules

* Refactor to playbook and split data structure out

* Split single security group variable into one per type
* Moves 'default' security group from role into variable
* Moves default security group variables back to openshift-common role
* Converts openstack-security-group role into playbook
* Playbook called on every openstack-create invocation as before
* Simplifies security group tasks and removes type checking
* Iterate through security groups and build a comma-separated list of groups

* Add detection of non-Neutron env

* Add UDP 8053 to default master security group

* Adjusting docker role, adding support for logging/metrics, and updating client container

* OpenShift Management Role

* Fixing ansible impl to work with OSP9 and ansible 2.2

* Correcting formatting

* Added process / contribution info

* Updated default security group rules (#7)

* Openstack heat (#2)

* Adding a role to invoke openstack heat

* Adding readme

* Pulling parameters out to inventory file

* start of end-to-end playbook

* More enhancements and refactoring to make dynamic inventory the driver for an openshift install

* Switching to variable substituted path to config.yaml playbook

* Changes to allow defining the number of nodes/infranodes.

* Added labels to inventory

* Start of end-to-end functionality

* Enhancements to support openstack heat provisioning

* Updating inventory sample to remove some deprecation warnings

* Working towards making the secure-registry role 'become' aware

* Fixing node labels and removing secure-registry as it's no longer needed

* No longer need insecure registry line, as installer will secure our registry

* Adjusted dynamic inventory to filter by clusterid

* Minor updates to dynamic inventory bug

* Adding a refactored sample inventory directory

* Refactoring playbooks for better directory structure, and to narrow down host groups

* Adding volume mounts to heat template

* Moving dns playbooks back to original location

* Fixing incorrect file path

* Cleaning up inventory samples

* One more hostname to clean up

* Changing var name

* changed openshift-provision to openshift-prep

* Adjusting current provision script to avoid breakage by new openstack-heat code

* Updating PR Template with Team mention (#10)

* Install playbook defaults to the assumption that casl-ansible and openshift-ansible are checked out to the same directory

* Removing unnecessary task

* Fixing two significant bugs in the HEAT deployment (#13)

* Updated values in sample inventory (#17)

* Adding documentation and docker containers so others can begin testin… (#16)

* Adding documentation and docker containers so others can begin testing cluster provisioning

* Making updates per comments by @oybed

* Fixing formatting changes for links

* Renaming openstack images to align with CoP naming (#18)

* Defaulting the DNS instance to a small flavor (#20)

* Nagios (#11)

* First cut at the nagios work

* Added NRPE service enabled

* Updated implementation to be a bit more flexible

* Updated logic to include checks for services

* Added support for DNS and NFS checks

* Updated templates and config files

* Updated check_service script to simplify and avoid false negatives

* Added support for OpenShift checks

* Added README for the playbook

* Updated README

* DNS server should NOT run docker (#25)

* Readme (#26)

* Updated documentation and example inventory

* Update README.md

Added "hint"

* Update README.md

Fix numbering in the markdown

* Update README.md

* Added docker_volume_size to the sample inventory

* Added rhsm_pool to the sample inventory

* Updated README per comments

* Ensure DNS configuration has wildcards set for infra nodes (#24)

* Ensure DNS configuration has wildcards set for infra nodes

* Updated to include all cluster hosts for DNS entries

* Updated DNS server role + example playbook (#27)

* Updated DNS server role + example playbook

* Updated DNS server role + example playbook

* Dns selinux (#28)

* Updated DNS server role + example playbook

* Updated DNS server role + example playbook

* Updated for SELinux boolean

* Openshift mgmt (#30)

Added prune_projects to the openshift-management role along with Ansible tower support

* Created initial CHANGELOG.md

* Updating to development release of ansible 2.3.0 to pull down bug fixes in HEAT module (#21)

* Workaround for Ansible 2.3 breakage (#31)

* Added quotes where needed and fixed some other minor bugs (#33)

* Fixing awk check (#34)

* Updating client image to lock it to ansible 2.3 and install some addi… (#32)

* Updating client image to lock it to ansible 2.3 and install some additional dependencies

* First attempt at a docker-compose based solution

* Renaming image

* Stack refactor (#38)

* Refactored openstack-stack role to:

- Convert static heat template files to ansible templates
- Include native ansible groups via openstack metadata. This removes the need for a playbook to map host groups
- Some code cleanup

* Deleting commented-out code and irrelevant plays

* Refactored openstack-stack role to:

- Convert static heat template files to ansible templates
- Include native ansible groups via openstack metadata. This removes the need for a playbook to map host groups
- Some code cleanup

* Deleting commented-out code and irrelevant plays

* Replacing stack parameters with jinja expressions

* Updating sample inventory to work with latest dynamic inventory changes

* updating inventory with host group mapping. making sync keys optional

* Missing cluster_hosts group

* Updating to add infra_hosts

* Updating inventory per comments from oybed and sabre1041

* First attempt at a simple multi-master support (#39)

* First attempt at a simple multi-master support

* Removing unneeded inventory

* adding default number of masters and lower number of nodes

* Some fixes (#41)

* Fix the sample inventory

The `openstack_nameservers` variable needs to be a list of strings, we
need to set the Openshift labels in OSv3.yml and we show an example of
using the username/password/poll for RHEL subscriptions.

* Update the READMEs

This fixes some of the paths, explains that we need to pass
`openstack_ssh_public_key` to the end-to-end playbook and includes the
full Docker command since there is no `run.sh` script.  Oh and Heat is
not an acronym :).

* Fixes to the readme and inventory

* Use docker-compose

* Correcting the sample inventory for an HA cluster (#40)

* Correcting the sample inventory for an HA cluster

* Adding node label mapping

* Updating to more generic IPs

* Updating to OSP ocata repo, as there are some bugs with newton's channel (#44)

* Use the correct variable name in create_users (#43)

The user creation was failing, because it was looking for the
`demo_users` variable while the samples put the data under
`create_users`.

* Upgrading jinja2 to work correctly with latest templates (#45)

* Fix rpm deps (#46)

* Upgrading jinja2 to work correctly with latest templates

* Updated to solve rpm deps + other version issues

* Clean-up

* Updating control-host settings and env

* Updating control-host settings and env

* Updating README and names to align across all components

* Setting the TERM var for better shell experience

* Conditionally set the openshift_master_default_subdomain to avoid overriding it unnecessarily (#47)

* Update README.md

* Update CASL to use nsupdate for DNS records (#48)

* Updated to use nsupdate for DNS records

* Updated formatting of dict

* Updating descriptive text

* Support for external DNS config

* Upgrading jinja2 to work correctly with latest templates

* Latest update for nsupdate

* Updated to use nsupdate for DNS records

* Updated formatting of dict

* Updating descriptive text

* Support for external DNS config

* Latest update for nsupdate

* Updated to support external public/private DNS server(s)

* Updated DNS server handling

* Updated DNS server handling

* Updated DNS server handling

* Eliminated the  from the sample inventories

* Updated sample inventory to point to 2 separate DNS servers for private/public

* Playbook clean-up

* Adding 'python-dns'

* splitting subscription manager calls to allow for a clean pre-install playbook

* Move the openstack provisioning playbooks

They'll live in playbooks/provisioning/openstack from now on.

* Add a single provisioning playbook

* Symlink roles to provisioning/openstack/roles

* Add a sample inventory for openstack provisioning

* Add license for openstack.py in inventory

It's under the GPLv3+ while the rest of the repo is Apache 2.

* Add readme

* Move pre_tasks from the common role to the openstack provisioner

We should probably not pollute the role namespace with a name as common
as "common". Moving the pre_task.yml to provisioners/openstack instead.

* Add default values to provision-openstack.yml

* Fix privileges in the pre-install playbook

* Always let the openshift nodes access the DNS

When `node_ingress_cidr` is used to limit the IP range for the DNS server, this
can prevent the actual openshift nodes from accessing it as well.

This commit makes the access from the `openstack_subnet_prefix` always
pass through and uses `node_ingress_cidr` for additional
access control.

* Add a flat sec group for openstack provider

Add an openstack_flat_secgroup, defaults to False.
When set, merges sec rules for master, node, etcd, infra nodes into a
single group. Less secure, but might help to mitigate quota limitations.
Update docs. Use timeout 30s to mitigate the error:
Timeout (12s) waiting for privilege escalation prompt.

Signed-off-by: Bogdan Dobrelya <[email protected]>

* Add ansible.cfg for openstack provider

Signed-off-by: Bogdan Dobrelya <[email protected]>

* Drop atomic-openshift-utils, update docs for origin

TODO use with
when: ansible_distribution == 'CentOS'
Also update docs for origin

Signed-off-by: Bogdan Dobrelya <[email protected]>

* Gather facts for provision playbook

Provision tasks use facts like ansible_hostname and a few others.
W/o gathering facts, those expire, and the provision playbook cannot
be reapplied in order to update the existing heat stack.
Refresh the facts cache by specifying gather_facts: true.

Signed-off-by: Bogdan Dobrelya <[email protected]>

* Update sample inventory with the latest changes

* Fix yamllint errors

* Remove the extraneous DNS directory

It's a CASL-specific helper, not necessary for the provisioning
playbooks.

* Fix flake8 errors with the openstack inventory

openshift-bot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Feb 11, 2018
openshift-bot commented

@detiber: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

jaywryan pushed a commit to jaywryan/openshift-ansible-contrib that referenced this pull request Jul 3, 2018
Labels
aws, needs-rebase

7 participants