OCNE: Oracle Cloud Native Environment: Scaling Up and Adding Kubernetes Node Using Olcnectl Module Update Command

Below is the procedure to scale up the cluster by adding a second control plane node.

1. List the current OCNE module instances using the olcnectl command on the operator node as follows.

In the command below, change the environment name as needed.

olcnectl module instances --environment-name cne1-ha-env


olcnectl module instances --environment-name cne1-ha-env

INFO[13/10/23 03:07:56] Starting local API server                    

INFO[13/10/23 03:07:57] Starting local API server                    

INSTANCE               MODULE      STATE

cne1-ha-control1:8090  node        installed

cne1-ha-helm           helm        created

cne1-ha-istio          istio       installed

cne1-ha-worker1:8090   node        installed

cne1-ha-worker2:8090   node        installed

grafana                grafana     installed

prometheus             prometheus  installed

cne1-ha-cluster        kubernetes  installed


In the sample output above, there is only one control plane node (cne1-ha-control1).
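
Optionally, you can also cross-check the current cluster membership from the Kubernetes side. The sketch below assumes kubectl is configured on a control plane node (as in step 4 later); before the scale-up, only one control plane node should be listed.

# Run on a control plane node where kubectl is configured.
# Before the scale-up, only cne1-ha-control1 carries the control plane role.
kubectl get nodes -o wide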


2. To add the second control plane node, run the olcnectl module update command as follows. Note that --control-plane-nodes and --worker-nodes must list all nodes you want in the cluster (the existing nodes plus the new one); any node left out of the list would be removed from the cluster. There is no need to pass the load balancer or virtual IP flags to olcnectl when doing a module update.


olcnectl module update \

--environment-name cne1-ha-env \

--name cne1-ha-cluster \

--control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \

--worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090


$ olcnectl module update \

> --environment-name cne1-ha-env \

> --name cne1-ha-cluster \

> --control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \

> --worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090

INFO[13/10/23 03:09:19] Starting local API server                    

? [WARNING] Update will shift your workload and some pods will lose data if they rely on local storage. Do you want to continue? (y/N) Yes

Taking backup of modules before update

Backup of modules succeeded.

Updating modules

Update successful
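
If you need to run the update non-interactively (for example from a script), there may be a --force option to suppress the confirmation prompt; the sketch below assumes such an option exists on your olcnectl release, so check olcnectl module update --help before relying on it.

# Assumption: --force suppresses the [WARNING] confirmation prompt.
# Verify with 'olcnectl module update --help' on your OCNE release.
olcnectl module update \
--environment-name cne1-ha-env \
--name cne1-ha-cluster \
--control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \
--worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090 \
--force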


3. Now run the olcnectl module instances command again to check whether the newly added node is shown.


In the command below, change the environment name as needed.


olcnectl module instances --environment-name cne1-ha-env



[opc@cne1-ha-operator ~]$ olcnectl module instances --environment-name cne1-ha-env

INFO[13/10/23 03:13:37] Starting local API server                    

INFO[13/10/23 03:13:38] Starting local API server                    

INSTANCE               MODULE      STATE

cne1-ha-worker1:8090   node        installed

cne1-ha-worker2:8090   node        installed

cne1-ha-cluster        kubernetes  installed

cne1-ha-control1:8090  node        installed

cne1-ha-istio          istio       installed

cne1-ha-helm           helm        created

grafana                grafana     installed

prometheus             prometheus  installed

cne1-ha-control2:8090  node        installed

[opc@cne1-ha-operator ~]$ 
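
As a further check from the Kubernetes side, you can list just the control plane nodes from a control plane node. The label below is an assumption that depends on your Kubernetes version: newer kubeadm-based releases use node-role.kubernetes.io/control-plane, while older ones use node-role.kubernetes.io/master.

# Run on a control plane node. The newly added cne1-ha-control2 should
# appear here and reach the Ready state once it has joined the cluster.
kubectl get nodes -l node-role.kubernetes.io/control-plane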


4. Now check whether the kube-system pods are running on the newly added node by running the command below on control plane node 1. Note that kubectl is not configured on the operator node, so SSH to the control plane node first, as the session below shows.


kubectl get pods -n kube-system


[opc@cne1-ha-operator ~]$ kubectl get pods -n kube-system

The connection to the server localhost:8080 was refused - did you specify the right host or port?

[opc@cne1-ha-operator ~]$ 

[opc@cne1-ha-operator ~]$ ssh cne1-ha-control1

Activate the web console with: systemctl enable --now cockpit.socket


Last login: Mon Oct  2 18:58:46 2023 from 10.0.1.247

[opc@cne1-ha-control1 ~]$ 

[opc@cne1-ha-control1 ~]$ kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS       AGE

coredns-6fdffc7bfc-tzr9p                   1/1     Running   10             113d

coredns-6fdffc7bfc-wrws4                   1/1     Running   2              10d

etcd-cne1-ha-control1                      1/1     Running   15             195d

etcd-cne1-ha-control2                      1/1     Running   15             2d12h

kube-apiserver-cne1-ha-control1            1/1     Running   20             195d

kube-apiserver-cne1-ha-control2            1/1     Running   21             2d12h

kube-controller-manager-cne1-ha-control1   1/1     Running   21             195d

kube-controller-manager-cne1-ha-control2   1/1     Running   15             2d12h

kube-flannel-ds-8ttpr                      1/1     Running   21             140d

kube-flannel-ds-r7pgc                      1/1     Running   20 (15m ago)   140d

kube-flannel-ds-rvc7l                      1/1     Running   20             140d

kube-flannel-ds-wlqzf                      1/1     Running   3 (15m ago)    2d12h

kube-proxy-9hdzl                           1/1     Running   15             195d

kube-proxy-hqx5n                           1/1     Running   15             195d

kube-proxy-t5ckp                           1/1     Running   15             195d

kube-proxy-xz9t5                           1/1     Running   1              2d12h

kube-scheduler-cne1-ha-control1            1/1     Running   18             195d

kube-scheduler-cne1-ha-control2            1/1     Running   16             2d12h

metrics-server-77dfc8475-qskxw             1/1     Running   13             134d
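
To narrow the check down to the pods scheduled on the newly added node, you can filter by node name. The node name below matches the sample environment in this post; adjust it for your cluster.

# List only the kube-system pods running on the new control plane node.
kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=cne1-ha-control2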


