OCNE 1.9: How To Install an Oracle Cloud Native Environment 1.9 Kubernetes Cluster With the Calico CNI Network

The following are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 with the Calico CNI.

In this example we will install a highly available (HA) Kubernetes cluster with one operator node, two Kubernetes control plane nodes, and two worker nodes. All the nodes run Oracle Linux 8 (OL8) with the UEK7 kernel. For Kubernetes control plane HA we use a load balancer listening on port 6443.
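
All the nodes should be able to resolve each other by hostname. As a rough sketch, using the hostnames referenced later in this guide and hypothetical private IP addresses (replace with your own), the /etc/hosts entries on each node could look like this:

# Hypothetical addresses for illustration only; use your actual node IPs
10.0.0.11  calico19-control1
10.0.0.12  calico19-control2
10.0.0.21  calico19-worker1
10.0.0.22  calico19-worker2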

1) Set up passwordless SSH from the Operator node to itself and to all the control and worker nodes.

(a) Generate an SSH key pair on the Operator node using the command below:

ssh-keygen -t rsa

(b) Set up passwordless SSH from the Operator node to all the other nodes and to itself.

To do this, copy the contents of /home/opc/.ssh/id_rsa.pub on the Operator node and append it to /home/opc/.ssh/authorized_keys on all the other OCNE control and worker nodes. Also append the same entry to /home/opc/.ssh/authorized_keys on the Operator node itself.
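
For example, assuming the opc user and that password authentication is still enabled on the target nodes, this can be done from the Operator node with ssh-copy-id:

ssh-copy-id -i ~/.ssh/id_rsa.pub opc@calico19-control1
ssh-copy-id -i ~/.ssh/id_rsa.pub opc@calico19-control2
ssh-copy-id -i ~/.ssh/id_rsa.pub opc@calico19-worker1
ssh-copy-id -i ~/.ssh/id_rsa.pub opc@calico19-worker2
# Add the key to the Operator node's own authorized_keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys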

2) Install the oracle-olcne-release-el8 release package on the Operator node:

sudo dnf -y install oracle-olcne-release-el8

3) Enable the OCNE 1.9 yum repositories on the Operator node:

sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7

4) Disable the repositories of older OCNE versions on the Operator node:

sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_developer
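
Optionally, verify that ol8_olcne19 is the only OCNE repository left enabled:

sudo dnf repolist --enabled | grep olcne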

5) Install the olcnectl software package on the Operator node:

sudo dnf -y install olcnectl

6) On all the Kubernetes control and worker nodes, disable and stop the firewalld service:

sudo systemctl disable firewalld && sudo systemctl stop firewalld

7) On all the Kubernetes control and worker nodes, update the CRI-O proxy configuration to include the Kubernetes cluster service IP 10.96.0.1 in NO_PROXY.

To do this, follow steps (a) and (b) below on all the Kubernetes control and worker nodes; a combined command example follows the file content.

(a) Create the /etc/systemd/system/crio.service.d directory.

(b) In the /etc/systemd/system/crio.service.d directory, create a file named proxy.conf with the following content:

[Service]
Environment="NO_PROXY=10.96.0.1"
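
Steps (a) and (b) can be performed together as follows. This is a minimal sketch; if your nodes already use an HTTP proxy, keep any existing proxy variables in this drop-in file as well:

sudo mkdir -p /etc/systemd/system/crio.service.d
sudo tee /etc/systemd/system/crio.service.d/proxy.conf > /dev/null <<'EOF'
[Service]
Environment="NO_PROXY=10.96.0.1"
EOF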

(c) Reload the systemd daemon and restart the CRI-O service:

sudo systemctl daemon-reload
sudo systemctl restart crio.service
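
Optionally, confirm that the drop-in was picked up by the crio unit:

sudo systemctl show crio --property=Environment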

8) On the Operator node, run the olcnectl provision command to provision the Kubernetes environment with the pod network set to calico:

olcnectl provision \
    --api-server calico19-control1 \
    --control-plane-nodes calico19-control1,calico19-control2 \
    --worker-nodes calico19-worker1,calico19-worker2 \
    --environment-name calico19-env \
    --name calico19-cluster \
    --pod-network calico \
    --load-balancer 129.XX.XX.45:6443 \
    --yes

Replace the 129.XX.XX.45 address with your load balancer IP.
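
Optionally, once provisioning completes you can check that the Kubernetes API answers through the load balancer. On a default install the /version endpoint is readable anonymously; even an HTTP 403 response still confirms connectivity:

curl -k https://129.XX.XX.45:6443/version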

9) Run olcnectl with the --update-config option so that the certificate and connection details for the newly created environment are saved to the local olcnectl configuration:

olcnectl module instances --environment-name calico19-env --update-config

10) Run the olcnectl module instances command to validate that the Kubernetes nodes are shown in the installed state:

olcnectl module instances --environment-name calico19-env
Below is sample output:

INSTANCE                 MODULE      STATE
calico19-control2:8090   node        installed
calico19-worker1:8090    node        installed
calico19-worker2:8090    node        installed
calico19-cluster         kubernetes  installed
calico19-control1:8090   node        installed

11) Set up kubectl for the newly installed environment.

To do this, run the commands below on the first control node (calico19-control1).

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

12) Validate that all the nodes are in the Ready state and all the pods are in the Running state.

To do this, run the commands below:

kubectl get nodes
kubectl get pods -A

Below is sample output of the kubectl get pods -A command. You should see the Calico pods in the Running state.

NAMESPACE              NAME                                          READY   STATUS    RESTARTS      AGE
calico-apiserver       calico-apiserver-58584cfc6d-755vc             1/1     Running   0             26m
calico-apiserver       calico-apiserver-58584cfc6d-fs6tz             1/1     Running   0             26m
calico-system          calico-kube-controllers-8659dc5cfd-p6jjb      1/1     Running   0             27m
calico-system          calico-node-59m7w                             1/1     Running   0             27m
calico-system          calico-node-c9hmh                             1/1     Running   0             27m
calico-system          calico-node-gwdwk                             1/1     Running   0             27m
calico-system          calico-node-sk6hc                             1/1     Running   0             27m
calico-system          calico-typha-7dd76bf494-44k2p                 1/1     Running   0             27m
calico-system          calico-typha-7dd76bf494-jd5xd                 1/1     Running   0             27m
calico-system          csi-node-driver-8cc4p                         2/2     Running   0             27m
calico-system          csi-node-driver-d6hbg                         2/2     Running   0             27m
calico-system          csi-node-driver-mvzrq                         2/2     Running   0             27m
calico-system          csi-node-driver-xctlb                         2/2     Running   0             27m
kube-system            coredns-5859f68d4-bhtw2                       1/1     Running   0             27m
kube-system            coredns-5859f68d4-fd4hw                       1/1     Running   0             27m
kube-system            etcd-calico19-control1                        1/1     Running   4             27m
kube-system            etcd-calico19-control2                        1/1     Running   2             27m
kube-system            kube-apiserver-calico19-control1              1/1     Running   4             27m
kube-system            kube-apiserver-calico19-control2              1/1     Running   3             27m
kube-system            kube-controller-manager-calico19-control1     1/1     Running   4             27m
kube-system            kube-controller-manager-calico19-control2     1/1     Running   2             27m
kube-system            kube-proxy-l7l4n                              1/1     Running   0             27m
kube-system            kube-proxy-nmpzw                              1/1     Running   0             27m
kube-system            kube-proxy-nxj88                              1/1     Running   0             27m
kube-system            kube-proxy-zgk48                              1/1     Running   0             27m
kube-system            kube-scheduler-calico19-control1              1/1     Running   4             27m
kube-system            kube-scheduler-calico19-control2              1/1     Running   2             27m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-b2jb8         1/1     Running   1 (26m ago)   27m
ocne-modules           verrazzano-module-operator-76b48fb4d6-g9fk9   1/1     Running   0             27m
tigera-operator        tigera-operator-94cb8c689-fqzj6               1/1     Running   0             27m


13) Check the Calico Tigera status.

To do this, run the command below:

kubectl get tigerastatus

You should see output like the following, which shows that apiserver and calico are available (True in the AVAILABLE column).

NAME        AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver   True        False         False      28m
calico      True        False         False      29m

Keywords

calico calicocni cni tigera kubernetes ocne cne devops pod network networking how to
