Deploying OpenShift 3.9 Cluster
################################
Infrastructure Setup:
*********************
Hostname                       IP Address     CPUs   RAM    HDD                 OS          Role
master.openshift.example.com   192.168.0.50   2      2 GB   /dev/sda (100 GB)   CentOS 7.X  Master Node
node1.openshift.example.com    192.168.0.51   2      2 GB   /dev/sda (100 GB)   CentOS 7.X  Worker Node
node2.openshift.example.com    192.168.0.52   2      2 GB   /dev/sda (100 GB)   CentOS 7.X  Worker Node
infra1.openshift.example.com   192.168.0.53   2      2 GB   /dev/sda (100 GB)   CentOS 7.X  Infra Node
#################################
# Preparing Nodes: #
#################################
Step 1: Set the hostname:
*************************
[root@master ~]# hostnamectl set-hostname master.openshift.example.com # For master node
Use the above command to set the hostname for other nodes accordingly.
[root@master ~]# bash
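For reference, the matching commands on the remaining nodes (hostnames as listed in the table above):
[root@node1 ~]# hostnamectl set-hostname node1.openshift.example.com    # For node1
[root@node2 ~]# hostnamectl set-hostname node2.openshift.example.com    # For node2
[root@infra1 ~]# hostnamectl set-hostname infra1.openshift.example.com  # For infra1 node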
Step 2: Configure Static IP Address:
************************************
[root@master ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="none"
IPADDR=192.168.0.50
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE="yes"
IPV6INIT="no"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
:wq (save and exit)
The above IP settings are for the "master" node; use the same settings on the other nodes accordingly.
Don't forget to change the IP address for each node.
Bring the network connection down, reload the configuration, and bring it up again on all nodes.
~]# nmcli connection down ens33
~]# nmcli connection reload
~]# nmcli connection up ens33
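To confirm the interface picked up the new address, a quick optional check on each node:
~]# nmcli connection show ens33 | grep ipv4.addresses
~]# ip -4 addr show ens33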
Configure /etc/hosts file for name resolution as following on all nodes:
~]# vim /etc/hosts
192.168.0.50 master.openshift.example.com master
192.168.0.51 node1.openshift.example.com node1
192.168.0.52 node2.openshift.example.com node2
192.168.0.53 infra1.openshift.example.com infra1
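To verify that name resolution works on every node, a quick optional loop (hostnames as defined above):
~]# for h in master node1 node2 infra1; do ping -c 1 $h.openshift.example.com; done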
Step 3: Use the below command to update the System on all nodes:
~]# yum update -y
Use the below command to compare the kernel on all nodes:
~]# echo "Latest Installed Kernel : $(rpm -q kernel --last | head -n 1 | awk '{print $1}')" ; echo "Current Running Kernel : kernel-$(uname -r)"
If the latest installed kernel differs from the running kernel in the above output, you need to reboot the systems; otherwise jump to Step 4.
~]# reboot
Step 4: Once the systems come back up/online, install the following packages on all nodes:
~]# yum install -y wget git nano net-tools docker-1.13.1 bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct openssl-devel httpd-tools NetworkManager python-cryptography python-devel python-passlib java-1.8.0-openjdk-headless "@Development Tools"
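A quick optional check that the pinned Docker version was installed:
~]# rpm -q docker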
Step 5: Configure Ansible Repository on master Node only.
*********************************************************
[root@master ~]# vim /etc/yum.repos.d/ansible.repo
[ansible]
name = Ansible Repo
baseurl = https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
enabled = 1
gpgcheck = 0
:wq (save and exit)
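A quick optional check that the repository is active before installing Ansible:
[root@master ~]# yum repolist enabled | grep -i ansible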
Step 6: Start and Enable NetworkManager and Docker Services on all nodes:
*************************************************************************
~]# systemctl start NetworkManager
~]# systemctl enable NetworkManager
~]# systemctl status NetworkManager
~]# systemctl start docker && systemctl enable docker && systemctl status docker
Step 7: Install Ansible Package and Clone Openshift-Ansible Git Repo on Master Node:
****************************************************************************************
[root@master ~]# yum -y install ansible-2.6* pyOpenSSL
[root@master ~]# git clone https://github.com/openshift/openshift-ansible.git
[root@master ~]# cd openshift-ansible && git fetch && git checkout release-3.9
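To double-check that the release-3.9 branch is checked out before running any playbooks (run inside the cloned directory; the current branch is marked with an asterisk):
[root@master openshift-ansible]# git branch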
Step 8: Generate SSH Keys on Master Node and install it on all nodes:
*********************************************************************
[root@master ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master ~]# for host in master.openshift.example.com node1.openshift.example.com node2.openshift.example.com infra1.openshift.example.com ; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done
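To verify passwordless SSH now works from the master to every node, a quick optional loop:
[root@master ~]# for host in master.openshift.example.com node1.openshift.example.com node2.openshift.example.com infra1.openshift.example.com ; do ssh $host hostname; done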
Step 9: Now create your own inventory file for Ansible as follows on the master Node:
[root@master ~]# vim ~/inventory.ini
[OSEv3:children]
masters
nodes
etcd
[masters]
master.openshift.example.com openshift_ip=192.168.0.50
[etcd]
master.openshift.example.com openshift_ip=192.168.0.50
[nodes]
master.openshift.example.com openshift_ip=192.168.0.50 openshift_schedulable=true
node1.openshift.example.com openshift_ip=192.168.0.51 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
node2.openshift.example.com openshift_ip=192.168.0.52 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
infra1.openshift.example.com openshift_ip=192.168.0.53 openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
[OSEv3:vars]
debug_level=4
ansible_ssh_user=root
openshift_enable_service_catalog=true
ansible_service_broker_install=true
containerized=false
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
openshift_node_kubelet_args={'pods-per-core': ['10']}
deployment_type=origin
openshift_deployment_type=origin
openshift_release=v3.9.0
openshift_pkg_version=-3.9.0
openshift_image_tag=v3.9.0
openshift_service_catalog_image_version=v3.9.0
template_service_broker_image_version=v3.9.0
osm_use_cockpit=true
# put the router on dedicated infra1 node
openshift_hosted_router_selector='region=infra'
openshift_master_default_subdomain=apps.openshift.example.com
openshift_public_hostname=master.openshift.example.com
# put the image registry on dedicated infra1 node
openshift_hosted_registry_selector='region=infra'
:wq (save and exit)
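Before running the playbooks, it is worth confirming that Ansible can reach every host in the inventory (a minimal connectivity check using the ad-hoc ping module):
[root@master ~]# ansible -i ~/inventory.ini nodes -m ping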
Step 10: Use the below Ansible playbook command on the master Node to check the prerequisites for deploying the OpenShift Cluster:
[root@master ~]# ansible-playbook -i ~/inventory.ini ~/openshift-ansible/playbooks/prerequisites.yml
Step 11: Once the prerequisites playbook completes without any error, use the below Ansible playbook on the master Node to deploy the OpenShift Cluster:
[root@master ~]# ansible-playbook -i ~/inventory.ini ~/openshift-ansible/playbooks/deploy_cluster.yml
Now you have to wait approximately 20-30 minutes for the installation to complete.
Step 12: Once the installation is completed, create an admin user in OpenShift with the password "Pas$$w0rd" from the master Node:
The htpasswd command is provided by the httpd-tools package that was already installed in Step 4.
[root@master ~]# htpasswd -c /etc/origin/master/htpasswd admin
New password: Pas$$w0rd
Re-type new password: Pas$$w0rd
[root@master ~]# ls -l /etc/origin/master/htpasswd
[root@master ~]# cat /etc/origin/master/htpasswd
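If you later need additional users, run htpasswd again without the -c flag so the existing file is not overwritten (the "developer" username below is just an example):
[root@master ~]# htpasswd /etc/origin/master/htpasswd developer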
Open the /etc/origin/master/master-config.yaml file and make the following changes under identityProviders (note the YAML indentation):
identityProviders:
- challenge: true
  login: true
  mappingMethod: claim
  name: allow_all
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider   ## This line
    file: /etc/origin/master/htpasswd        ## This line
:wq (save and exit)
Restart the below services:
[root@master ~]# systemctl restart origin-master-controllers.service
[root@master ~]# systemctl restart origin-master-api.service
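Both services should report active after the restart (quick optional check):
[root@master ~]# systemctl is-active origin-master-api.service origin-master-controllers.service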
Step 13: Use the below command to assign cluster-admin Role to admin user:
[root@master ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
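To confirm the role was granted, you can list the cluster role bindings and look for the admin user (optional check):
[root@master ~]# oc get clusterrolebindings | grep cluster-admin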
Step 14: Use the below command to login as admin user on CLI:
[root@master ~]# oc login
Authentication required for https://master.example.com:8443 (openshift)
Username: admin
Password: Pas$$w0rd
Login successful.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
kube-public
kube-system
logging
management-infra
openshift
openshift-infra
openshift-node
openshift-web-console
Using project "default".
[root@master ~]# oc whoami
admin
Step 15: Use the below commands to list the projects, pods, nodes, Replication Controllers, Services, and Deployment Configs.
[root@master ~]# oc get projects
NAME                    DISPLAY NAME   STATUS
default                                Active
kube-public                            Active
kube-system                            Active
logging                                Active
management-infra                       Active
openshift                              Active
openshift-infra                        Active
openshift-node                         Active
openshift-web-console                  Active
[root@master ~]# oc get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    29m       v1.9.1+a0ce1bc657
node1.example.com    Ready     compute   22m       v1.9.1+a0ce1bc657
node2.example.com    Ready     compute   22m       v1.9.1+a0ce1bc657
node3.example.com    Ready     <none>    22m       v1.9.1+a0ce1bc657
[root@master ~]# oc get pod --all-namespaces
NAMESPACE               NAME                          READY     STATUS    RESTARTS   AGE
default                 docker-registry-1-krmpx       1/1       Running   0          15m
default                 registry-console-1-z9sbd      1/1       Running   0          14m
default                 router-1-q7g5v                1/1       Running   0          15m
openshift-web-console   webconsole-5f649b49b5-hr9dq   1/1       Running   0          14m
[root@master ~]# oc get rc
NAME                 DESIRED   CURRENT   READY     AGE
docker-registry-1    1         1         1         15m
registry-console-1   1         1         1         14m
router-1             1         1         1         16m
[root@master ~]# oc get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    ClusterIP   172.30.34.127    <none>        5000/TCP                  16m
kubernetes         ClusterIP   172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     35m
registry-console   ClusterIP   172.30.62.197    <none>        9000/TCP                  15m
router             ClusterIP   172.30.141.143   <none>        80/TCP,443/TCP,1936/TCP   16m
[root@master ~]# oc get dc
NAME               REVISION   DESIRED   CURRENT   TRIGGERED BY
docker-registry    1          1         1         config
registry-console   1          1         1         config
router             1          1         1         config
Verifying Multiple etcd Hosts:
##############################
On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in
the following:
[root@master ~]# yum install etcd -y
[root@master ~]# etcdctl -C https://master.openshift.example.com:2379 --ca-file=/etc/origin/master/master.etcd-ca.crt --cert-file=/etc/origin/master/master.etcd-client.crt --key-file=/etc/origin/master/master.etcd-client.key cluster-health
member b211b9f9d8f23828 is healthy: got healthy result from https://192.168.56.10:2379
cluster is healthy
Also verify the member list is correct:
[root@master ~]# etcdctl -C https://master.openshift.example.com:2379 --ca-file=/etc/origin/master/master.etcd-ca.crt --cert-file=/etc/origin/master/master.etcd-client.crt --key-file=/etc/origin/master/master.etcd-client.key member list
b211b9f9d8f23828: name=master.example.com peerURLs=https://192.168.56.10:2380 clientURLs=https://192.168.56.10:2379 isLeader=true
Step 16: Now you can access the OpenShift Cluster using a web browser as follows:
https://master.openshift.example.com:8443
Username: admin
Password: Pas$$w0rd
Step 17: Now Create a Demo Project
Click on "Create Project"
Name: demo
Display Name: demo project
Description: This is Demo Project
Click on Create.
Now Deploy an application in demo project for testing:
Click on the demo project -> Click on Browse Catalog -> Select PHP -> Next
Version : 7.0 - latest
Application Name: demoapp1
Git Repo: https://github.com/sureshchandrarhca15/OpenShift39.git
Next -> Finish.
Now click on Overview in the demo project.
You will see that the demoapp1 build is running, so wait for the build to complete.
Once the build is completed, there is a Pod running for demoapp1 and it will have a URL as below:
http://demoapp1-demo.apps.openshift.example.com
Now click on the Pod icon to get the details.
NOTE: I don't have DNS to resolve the app URL, so I am using the /etc/hosts file to resolve it with the infra1 node IP address,
because the router pod is running on the infra1 node, so all application traffic is routed through the infra1 node in the OpenShift Cluster.
~]# vim /etc/hosts
#Add the below line
192.168.0.53 demoapp1-demo.apps.openshift.example.com
:wq (save and exit)
Now Click on Pod URL Link and application should be accessible.
NOTE: You have to configure the /etc/hosts file on the system from which you are accessing the OpenShift Dashboard.
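From that same system, a command-line check of the route also works once the /etc/hosts entry is in place (optional sketch):
~]# curl -I http://demoapp1-demo.apps.openshift.example.com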