This document discusses the requirements, current expected behavior, and how to try out what exists so far. It covers the installation with the default CNI (OVNKubernetes).
- OpenStack Platform Support
- Table of Contents
- Reference Documents
- OpenStack Requirements
- OpenStack Credentials
- Standalone Single-Node Development Environment
- Running The Installer
- Post Install Operations
- Reporting Issues
- Observability
- Privileges
- Control plane machine set
- Known Issues and Workarounds
- Troubleshooting your cluster
- Customizing your install
- Installing OpenShift on OpenStack User-Provisioned Infrastructure
- Deploying OpenShift bare-metal workers
- Deploying OpenShift single root I/O virtualization (SRIOV) workers
- Deploying OpenShift with OVS-DPDK
- Deploying OpenShift with an external load balancer
- Provider Networks
- Migrate the Image Registry from Cinder to Swift
- Image Registry With A Custom PVC Backend
- Adding Worker Nodes By Hand
- Connecting worker nodes and pods to an IPv6 network
- Connecting worker nodes to a dedicated Manila network
- Learn about the OpenShift on OpenStack networking infrastructure design
- Deploying OpenShift vGPU workers
The OpenShift installation on OpenStack platform relies on a number of core services being available:
- Keystone
- Neutron
- Nova
- with Metadata service enabled
- Glance
- Storage solution for the image registry, one of:
- Swift
- Cinder
In order to run the latest version of the installer in OpenStack, you need at a bare minimum the following quota to run a default cluster. While it is possible to run the cluster with fewer resources, it is not recommended. Certain cases, such as deploying without FIPs or deploying with an external load balancer, are documented below and are not included in the scope of this recommendation.
A successful installation requires:
- Floating IPs: 2 (plus one that will be created and destroyed by the Installer during the installation process)
- Security Groups: 3
- Security Group Rules: 60
- Routers: 1
- Subnets: 1
- Server Groups: 2, plus one per additional Availability zone in each machine-pool
- RAM: 112 GB
- vCPUs: 28
- Volume Storage: 700 GB
- Instances: 7
- Depending on the type of image registry backend, either 1 Swift container or an additional 100 GB volume.
- OpenStack resource tagging
Note The installer checks OpenStack quota limits to make sure that the requested resources can be created. It does not check for actual resource availability in the cloud, only the quotas.
You may need to increase the security group related quotas from their default values. For example (as an OpenStack administrator):
openstack quota set --secgroups 8 --secgroup-rules 100 <project>
Once you configure the quota for your project, please ensure that the user for the installer has the proper privileges.
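To verify the quotas currently applied to your project, you can run, for example:
openstack quota show <project>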
The default deployment stands up 3 master nodes, which is the minimum number required for a cluster. For each master node you stand up, you will need 1 instance and 1 port available in your quota. They should be assigned a flavor with at least 16 GB RAM, 4 vCPUs, and 100 GB disk (or root volume). It is theoretically possible to run with a smaller flavor, but be aware that if it takes too long to stand up services, or certain essential services crash, the installer could time out, leading to a failed install.
The master nodes are placed in a single Server group with "soft anti-affinity" policy by default; the machines will therefore be created on separate hosts when possible. Note that this is also the case when the master nodes are deployed across multiple availability zones that were specified by their failure domain.
The default deployment stands up 3 worker nodes. Worker nodes host the applications you run on OpenShift. The flavor assigned to the worker nodes should have at least 2 vCPUs, 8 GB RAM and 100 GB Disk (or Root Volume). However, if you are experiencing Out Of Memory
issues, or your installs are timing out, try increasing the size of your flavor to match the master nodes: 4 vCPUs and 16 GB RAM.
The worker nodes are placed in a single Server group with "soft anti-affinity" policy by default; the machines will therefore be created on separate hosts when possible.
See the OpenShift documentation for more information on the worker nodes.
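If no suitable flavor exists in your cloud, a worker flavor can be created in the same way as shown for the bootstrap node below; the flavor name okd-worker here is only an example:
openstack flavor create --ram 8192 --disk 100 --vcpus 2 okd-worker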
The bootstrap node is a temporary node that is responsible for standing up the control plane on the masters. Only one bootstrap node will be stood up and it will be deprovisioned once the production control plane is ready. To do so, you need 1 instance and 1 port. We recommend a flavor with a minimum of 16 GB RAM, 4 vCPUs, and 100 GB disk (or root volume). Such a flavor can be created, for example, with:
openstack flavor create --ram 16384 --disk 128 --vcpus 4 okd-cluster
If Swift is available in the cloud where the installation is being performed, it is used as the default backend for the OpenShift image registry. At the time of installation only an empty container is created without loading any data. Later on, for the system to work properly, you need to have enough free space to store the container images.
In this case the user must have swiftoperator
permissions. As an OpenStack administrator:
openstack role add --user <user> --project <project> swiftoperator
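You can verify the role assignment afterwards with, for example:
openstack role assignment list --user <user> --project <project> --names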
If Swift is not available, the PVC storage is used as the backend. For this purpose, a persistent volume of 100 GB will be created in Cinder and mounted to the image registry pod during the installation.
Note If you are deploying a cluster in an Availability Zone where Swift isn't available but where Cinder is, it is recommended to deploy the Image Registry with Cinder backend. It will try to schedule the volume into the same AZ as the Nova zone where the PVC is located; otherwise it'll pick the default availability zone. If needed, the Image registry can be moved to another availability zone by a day 2 operation.
If you want to force Cinder to be used as a backend for the Image Registry, you need to remove the swiftoperator
permissions. As an OpenStack administrator:
openstack role remove --user <user> --project <project> swiftoperator
Note Since Cinder supports only ReadWriteOnce access mode, it's not possible to have more than one replica of the image registry pod.
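As a quick day-2 check (assuming the default registry operator resources and the openshift-image-registry namespace), you can inspect the registry storage configuration and the bound volume once the cluster is up:
oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
oc get pvc -n openshift-image-registry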
Etcd, which runs on the control plane nodes, has disk requirements that need to be met to ensure the stability of the cluster.
Generally speaking, it is advised to choose a flavor backed by SSD for the control plane nodes in order to reduce latency.
If the ephemeral disk that gets attached to instances of the chosen flavor does not meet etcd requirements, check if the cloud has a more performant volume type and use a custom install-config.yaml
to deploy the control plane with root volumes. However, please note that Ceph RBD (and any other network-attached storage) can result in unpredictable network latencies. Prefer PCI passthrough of an NVMe device instead.
In order to measure the performance of your disk, you can use fio:
sudo podman run \
--volume "/var/lib/etcd:/mount:z" \
docker.io/ljishen/fio \
--directory=/mount \
--name=iotest \
--size=22m \
--bs=2300 \
--fdatasync=1 \
--ioengine=sync \
--rw=write
The command must be run as superuser.
In the command output, look for the 99th percentile of fdatasync durations (fsync/fdatasync/sync_file_range -> sync percentiles). The number must be less than 10ms (or 10000µs: fio fluidly adjusts the scale between ms/µs/ns depending on the numbers).
Also look for spikes. Even if the baseline latency looks good, there may be occasional latency spikes that trigger issues resulting in the API being unavailable.
Prometheus collects etcd-specific metrics.
Once the cluster is up, Prometheus provides useful metrics here:
https://prometheus-k8s-openshift-monitoring.apps.<cluster name>.<domain name>/graph?g0.range_input=2h&g0.stacked=0&g0.expr=histogram_quantile(0.99%2C%20rate(etcd_disk_wal_fsync_duration_seconds_bucket%5B5m%5D))&g0.tab=0&g1.range_input=2h&g1.expr=histogram_quantile(0.99%2C%20rate(etcd_disk_backend_commit_duration_seconds_bucket%5B5m%5D))&g1.tab=0&g2.range_input=2h&g2.expr=etcd_server_health_failures&g2.tab=0
Click "Login with OpenShift", enter kubeadmin
and the password printed out by the installer.
The units are in seconds and should stay under 10ms (0.01s) at all times. The etcd_health
graph should remain at 0.
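If you prefer the command line, the Prometheus route can be retrieved with oc (assuming the default openshift-monitoring namespace):
oc get route prometheus-k8s -n openshift-monitoring -o jsonpath='{.spec.host}'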
In order to collect relevant information interactively, run the conformance tests:
git clone https://github.com/openshift/origin/
cd origin
make WHAT=cmd/openshift-tests
export KUBECONFIG=<path/to/kubeconfig>
_output/local/bin/linux/amd64/openshift-tests run openshift/conformance/parallel
The entire test suite takes over an hour to complete. Run it and check the Prometheus logs afterwards.
The public network should be created by the OpenStack administrator. Verify the name/ID of the 'External' network:
openstack network list --long -c ID -c Name -c "Router Type"
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
Note If the Neutron trunk service plug-in is enabled, trunk ports will be created by default. For more information, please refer to neutron trunk port.
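To check whether the trunk service plug-in is enabled in your cloud, you can list the Neutron extensions, for example:
openstack extension list --network -c Name -c Alias | grep -i trunk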
The Nova metadata service must be enabled and available at http://169.254.169.254. Currently the service is used to deliver Ignition config files to Nova instances and provide information about the machine to the kubelet.
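As a sanity check (not part of the installation itself), you can query the standard metadata endpoint from a test instance attached to the same network:
curl http://169.254.169.254/openstack/latest/meta_data.json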
The user must be able to create images in Glance. Currently, Glance is used for two purposes:
- Right after the installation starts, the installer automatically uploads the actual RHCOS binary image to Glance with the name <clusterID>-rhcos. The image exists throughout the life of the cluster and is removed along with it.
- The installer stores bootstrap Ignition configs in a temporary image called <clusterID>-ignition. This is not a canonical use of the service, but this solution allows us to unify the installation process, since Glance is available on all OpenStack clouds, unlike Swift. The image exists for a limited period of time while the bootstrap process is running (normally 10-30 minutes), and then is automatically deleted.
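You can observe both images while the bootstrap process is running; <clusterID> is the identifier the installer generates for your cluster:
openstack image list -f value -c Name | grep <clusterID>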
You must have a clouds.yaml
file in order to run the installer. The installer will look for a clouds.yaml
file in the following locations in order:
- Value of OS_CLIENT_CONFIG_FILE environment variable
- Current directory
- Unix-specific user config directory (~/.config/openstack/clouds.yaml)
- Unix-specific site config directory (/etc/openstack/clouds.yaml)
In many OpenStack distributions, you can generate a clouds.yaml
file through Horizon. Otherwise, you can make a clouds.yaml
file yourself.
Information on this file can be found here and it looks like:
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-evn:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
The file can contain information about several clouds. For instance, the example above describes two clouds: shiftstack
and dev-evn
.
In order to determine which cloud to use, the user can either specify it in the install-config.yaml
file under platform.openstack.cloud
or with OS_CLOUD
environment variable. If both are omitted, then the cloud name defaults to openstack
.
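For example, to select the shiftstack cloud from the sample clouds.yaml above:
export OS_CLOUD=shiftstack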
To update the OpenStack credentials on a running OpenShift cluster, upload the new clouds.yaml
to the openstack-credentials
secret in the kube-system
namespace.
For example:
oc set data -n kube-system secret/openstack-credentials clouds.yaml="$(<path/to/clouds.yaml)"
Please note that the credentials MUST be in the openstack
stanza of clouds
.
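To verify what the cluster currently holds, you can read the secret back, for example:
oc get secret -n kube-system openstack-credentials -o jsonpath='{.data.clouds\.yaml}' | base64 -d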
If your OpenStack cluster uses self-signed CA certificates for endpoint authentication, add the cacert key to your clouds.yaml. Its value should be a valid path to your CA cert, and the file should be readable by the user who runs the installer. The path can be either absolute, or relative to the current working directory while running the installer.
For example:
clouds:
  shiftstack:
    auth: ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
If you would like to set up an isolated development environment, you may use a bare metal host running CentOS 7. The following repository includes some instructions and scripts to help with creating a single-node OpenStack development environment for running the installer. Please refer to this documentation for further details.
OpenStack support has known issues. We will be documenting workarounds until we are able to resolve these bugs in the upcoming releases. To see the latest status of any bug, read through the Bugzilla or GitHub link provided in that bug's description. If you know of a possible workaround that hasn't been documented yet, please comment in that bug's tracking link so we can address it as soon as possible. Also note that any bug listed in these documents is already a top-priority issue for the dev team and will be resolved as soon as possible. If you find more bugs during your runs, please read the section on issue reporting.
Please head to openshift.com/try to get the latest versions of the installer, and instructions to run it.
Before running the installer, we recommend you create a directory for each cluster you plan to deploy. See the documents on the recommended workflow for more information about why you should do it this way.
mkdir ostest
cp install-config.yaml ostest/install-config.yaml
All the OpenShift nodes get created in an OpenStack tenant network and as such, can't be accessed directly in most OpenStack deployments. We will briefly explain how to set up access to the OpenShift API with and without floating IP addresses.
This method allows you to attach two floating IP (FIP) addresses to endpoints in OpenShift.
A standard deployment uses three floating IP addresses in total:
- External access to the OpenShift API
- External access to the workloads (apps) running on the OpenShift cluster
- Temporary IP address for bootstrap log collection
The first two addresses (API and Ingress) are generally created up-front and have the corresponding DNS records resolve to them.
The third floating IP is created automatically by the installer and will be destroyed along with all the other bootstrap resources. If the bootstrapping process fails, the installer will try to SSH into the bootstrap node and collect the logs.
The deployed OpenShift cluster will need two floating IP addresses: one to attach to the API load balancer (apiFloatingIP
) and one for the OpenShift applications (ingressFloatingIP
). Note that apiFloatingIP
is the IP address you will add to your install-config.yaml
or select in the interactive installer prompt.
You can create them like so:
openstack floating ip create --description "API <cluster name>.<base domain>" <external network>
# => <apiFloatingIP>
openstack floating ip create --description "Ingress <cluster name>.<base domain>" <external network>
# => <ingressFloatingIP>
Note These IP addresses will not show up attached to any particular server (e.g. when running openstack server list). Similarly, the API and Ingress ports will always be in the DOWN state. This is because the ports are not attached to the servers directly. Instead, their fixed IP addresses are managed by keepalived. This has no record in Neutron's database and as such, is not visible to OpenStack.
The network traffic will flow through even though the IPs and ports do not show up in the servers.
For more details, read the OpenShift on OpenStack networking infrastructure design document.
You will also need to add the following records to your DNS:
api.<cluster name>.<base domain>. IN A <apiFloatingIP>
*.apps.<cluster name>.<base domain>. IN A <ingressFloatingIP>
If you're unable to create and publish these DNS records, you can add them to your /etc/hosts
file.
<apiFloatingIP> api.<cluster name>.<base domain>
<ingressFloatingIP> console-openshift-console.apps.<cluster name>.<base domain>
<ingressFloatingIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain>
<ingressFloatingIP> oauth-openshift.apps.<cluster name>.<base domain>
<ingressFloatingIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain>
<ingressFloatingIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain>
<ingressFloatingIP> <app name>.apps.<cluster name>.<base domain>
WARNING: This workaround will make the API accessible only to the computer with these /etc/hosts entries. This is fine for your own testing (and it is enough for the installation to succeed), but it is not enough for a production deployment. In addition, if you create new OpenShift apps or routes, you will have to add their entries too, because /etc/hosts does not support wildcard entries.
If you have specified the API floating IP (either via the installer prompt or by adding the apiFloatingIP
entry in your install-config.yaml
) the installer will attach the Floating IP address to the api-port
automatically.
If you have created the API DNS record, you should be able to access the OpenShift API.
In the same manner, you may have specified an Ingress floating IP by adding the ingressFloatingIP
entry in your install-config.yaml
, in which case the installer attaches the Floating IP address to the ingress-port
automatically.
If ingressFloatingIP
is empty or absent in install-config.yaml
, the Ingress port will be created but not attached to any floating IP. You can manually attach the Ingress floating IP to the ingress-port after the cluster is created.
You can do so in two steps. First, find the Ingress port:
openstack port show <cluster name>-<clusterID>-ingress-port
Then attach the FIP to it:
openstack floating ip set --port <cluster name>-<clusterID>-ingress-port <ingressFloatingIP>
This assumes the floating IP and corresponding *.apps
DNS record exists.
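A quick way to confirm that the record resolves to the Ingress floating IP is, for example:
dig +short console-openshift-console.apps.<cluster name>.<base domain>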
If you cannot or don't want to pre-create a floating IP address, the deployment should still succeed; however, the installer will fail while waiting for the API.
WARNING: The installer will fail if it can't reach the bootstrap OpenShift API in 20 minutes.
Even if the installer times out, the OpenShift cluster should still come up. Once the bootstrapping process is in place, it should all run to completion. So you should be able to deploy OpenShift without any floating IP addresses and DNS records and create everything yourself after the cluster is up.
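Assuming the API port follows the same naming pattern as the Ingress port shown above (an assumption, so verify the port name with openstack port list first), you can attach a pre-created floating IP to the API after the fact in the same way:
openstack floating ip set --port <cluster name>-<clusterID>-api-port <apiFloatingIP>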
To run the installer, you have the option of using the interactive wizard, or providing your own install-config.yaml
file for it. The wizard is the easier way to run the installer, but passing your own install-config.yaml
enables you to use more fine grained customizations. If you are going to create your own install-config.yaml
, read through the available OpenStack customizations.
./openshift-install create cluster --dir ostest
If you want to create an install config without deploying a cluster, you can use the command:
./openshift-install create install-config --dir ostest
Currently, the installer:
- Deploys an isolated tenant network
- Deploys a bootstrap instance to bootstrap the OpenShift cluster
- Deploys 3 master nodes
- Once the masters are deployed, the bootstrap instance is destroyed
- Deploys 3 worker nodes
Look for a message like this to verify that your install succeeded:
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
export KUBECONFIG=/home/stack/ostest/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ostest.shiftstack.com
INFO Login to the console with user: kubeadmin, password: xxx
If you want to see the status of the apps and services in your cluster during, or after a deployment, first export your administrator's kubeconfig:
export KUBECONFIG=ostest/auth/kubeconfig
After a finished deployment, there should be a node for each master and worker server created. You can check this with the command:
oc get nodes
To see the version of your OpenShift cluster, do:
oc get clusterversion
To see the status of your operators, do:
oc get clusteroperator
Finally, to see all the running pods in your cluster, you can do:
oc get pods -A
To destroy the cluster, point the installer at your cluster directory with this command:
./openshift-install --log-level debug destroy cluster --dir ostest
Then, you can delete the folder containing the cluster metadata:
rm -rf ostest/
Groups of Compute nodes are managed using the MachineSet resource. It is possible to create additional MachineSets post-install, for example to assign workloads to specific machines.
When running on OpenStack, the MachineSet has platform-specific fields under spec.template.spec.providerSpec.value
. For more information about the values that you can set in the providerSpec
, see the API definition.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_ID>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
      machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role>
    spec:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <UUID of the pre-created Nova server group (optional)>
          kind: OpenstackProviderSpec
          networks:
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_ID>
          securityGroups:
          - filter: {}
            name: <infrastructure_ID>-<node_role>
          serverMetadata:
            Name: <infrastructure_ID>-<node_role>
            openshiftClusterID: <infrastructure_ID>
          tags:
          - openshiftClusterID=<infrastructure_ID>
          trunk: true
          userDataSecret:
            name: <node_role>-user-data
          availabilityZone: <optional_openstack_availability_zone>
To define a MachineSet with multiple networks, the primarySubnet value in the providerSpec must be set to the OpenStack subnet that you want the Kubernetes endpoints of the nodes to be published on. For most use cases, this is one of the subnets listed in controlPlanePort in the install-config.yaml.
After you set the subnet, add all of the networks that you want to attach to your machines to the networks list in providerSpec. You must also add the network that the primary subnet is part of to this list.
The serverGroupID
property of the MachineSet
resource is used to create machines in that OpenStack server group. The server group must exist in OpenStack before you can apply the new MachineSet
resource.
In order to hint the Nova scheduler to spread the Machines across different hosts, first create a Server Group with the desired policy:
openstack server group create --policy=anti-affinity <server-group-name>
## OR ##
openstack --os-compute-api-version=2.15 server group create --policy=soft-anti-affinity <server-group-name>
If the command is successful, the OpenStack CLI will return the ID of the newly
created Server Group. Paste it in the optional serverGroupID
property of the
MachineSet.
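If you no longer have the ID at hand, it can be looked up, for example:
openstack server group list -f value -c ID -c Name | grep <server-group-name>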
In order to use Availability Zones, create one MachineSet per target
Availability Zone, and set the Availability Zone in the availabilityZone
property of the MachineSet.
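Once applied, you can list the resulting MachineSets and scale them independently; the MachineSet name here is only a placeholder:
oc get machinesets -n openshift-machine-api
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=2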
You can shift ingress/egress traffic from the default OpenShift on OpenStack load balancer to a load balancer that you provide. To do so, the instance that it runs on must be able to reach every machine in your cluster. You can ensure this access by creating the instance on a subnet within your cluster's network and attaching a router interface to that subnet from the cluster's external router. This can also be accomplished by attaching floating IPs to the machines you want to add to your load balancer.
Add the following external facing services to your new load balancer:
- The master nodes serve the OpenShift API on port 6443 using TCP.
- The apps hosted on the worker nodes are served on ports 80, and 443. They are both served using TCP.
Note Make sure the instance that your new load balancer is running on has security group rules that allow TCP traffic over these ports.
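For example, assuming <security-group> is the security group attached to the load balancer instance, rules like the following would open the required ports:
openstack security group rule create --protocol tcp --dst-port 6443 --ingress <security-group>
openstack security group rule create --protocol tcp --dst-port 443 --ingress <security-group>
openstack security group rule create --protocol tcp --dst-port 80 --ingress <security-group>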
The following HAProxy
config file demonstrates a basic configuration for an external load balancer:
listen <cluster-name>-api-6443
bind 0.0.0.0:6443
mode tcp
balance roundrobin
server <cluster-name>-master-0 192.168.0.154:6443 check
server <cluster-name>-master-1 192.168.0.15:6443 check
server <cluster-name>-master-2 192.168.3.128:6443 check
listen <cluster-name>-apps-443
bind 0.0.0.0:443
mode tcp
balance roundrobin
server <cluster-name>-worker-0 192.168.3.18:443 check
server <cluster-name>-worker-1 192.168.2.228:443 check
server <cluster-name>-worker-2 192.168.1.253:443 check
listen <cluster-name>-apps-80
bind 0.0.0.0:80
mode tcp
balance roundrobin
server <cluster-name>-worker-0 192.168.3.18:80 check
server <cluster-name>-worker-1 192.168.2.228:80 check
server <cluster-name>-worker-2 192.168.1.253:80 check
To ensure that your API and apps are accessible through your load balancer, create or update your DNS entries for those endpoints. To use your new load balancing service for external traffic, make sure the IP address for these DNS entries is the IP address your load balancer is reachable at.
<load balancer ip> api.<cluster-name>.<base domain>
<load balancer ip> apps.<cluster-name>.<base domain>
One good way to test whether or not you can reach the API is to run the oc
command. If you can't do that easily, you can use this curl command:
curl https://api.<cluster-name>.<base domain>:6443/version --insecure
Result:
{
"major": "1",
"minor": "19",
"gitVersion": "v1.19.2+4abb4a7",
"gitCommit": "4abb4a77838037b8dbb8e4ca34e63c4a129654c8",
"gitTreeState": "clean",
"buildDate": "2020-11-12T05:46:36Z",
"goVersion": "go1.15.2",
"compiler": "gc",
"platform": "linux/amd64"
}
Note The versions in the sample output may differ from your own. As long as you get a JSON payload response, the API is accessible.
The simplest way to verify that apps are reachable is to open the OpenShift console in a web browser. If you don't have access to a web browser, query the console with the following curl command:
curl http://console-openshift-console.apps.<cluster-name>.<base domain> -I -L --insecure
Result:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
If you need to update the OpenStack cloud provider configuration you can edit the ConfigMap containing it:
oc edit configmap -n openshift-config cloud-provider-config
Note It can take a while to reconfigure the cluster, depending on its size. The reconfiguration is complete once no node has the SchedulingDisabled taint anymore.
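You can watch the rollout progress with, for example:
oc get nodes --watch
oc get machineconfigpool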
There are several things you can change:
If you need to modify the direct cloud provider options, then edit the config
key in the ConfigMap. A brief list of possible options is shown in Cloud Provider configuration section.
If you ran the installer with a custom CA certificate, then this certificate can be changed while the cluster is running. To change your certificate, edit the value of the ca-cert.pem
key in the cloud-provider-config
configmap with a valid PEM certificate.
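For example, mirroring the pattern used for the credentials secret above (the file path here is only a placeholder):
oc set data -n openshift-config configmap/cloud-provider-config ca-cert.pem="$(<path/to/new-ca-cert.pem)"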
This script moves one node from its host to a different host.
Requirements:
- environment variable OS_CLOUD pointing to a clouds entry with admin credentials in clouds.yaml
- environment variable KUBECONFIG pointing to admin OpenShift credentials
#!/usr/bin/env bash
set -Eeuo pipefail
if [ $# -lt 1 ]; then
echo "Usage: '$0 node_name'"
exit 64
fi
# Check for admin OpenStack credentials
openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; }
# Check for admin OpenShift credentials
oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. Exiting"; exit 77; }
set -x
declare -r node_name="$1"
declare server_id
server_id="$(openstack server list --all-projects -f value -c ID -c Name | grep "$node_name" | cut -d' ' -f1)"
readonly server_id
# Drain the node
oc adm cordon "$node_name"
oc adm drain "$node_name" --delete-emptydir-data --ignore-daemonsets --force
# Power off the server
oc debug "node/${node_name}" -- chroot /host shutdown -h 1
# Verify the server is shutoff
until openstack server show "$server_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done
# Migrate the node
openstack server migrate --wait "$server_id"
# Resize VM
openstack server resize confirm "$server_id"
# Wait for the resize confirm to finish
until openstack server show "$server_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done
# Restart VM
openstack server start "$server_id"
# Wait for the node to show up as Ready:
until oc get node "$node_name" | grep -q "^${node_name}[[:space:]]\+Ready"; do sleep 5; done
# Uncordon the node
oc adm uncordon "$node_name"
# Wait for cluster operators to stabilize
until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done
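A hypothetical invocation, assuming the script above is saved as migrate-node.sh:
export OS_CLOUD=<admin cloud entry>
export KUBECONFIG=<path/to/admin/kubeconfig>
./migrate-node.sh <node_name>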
Please see the Issue Tracker for current known issues. Please report a new issue if you do not find an issue related to any trouble you’re having.