Showing posts with label ocne.

OCNE 1.X: OCNE Environment Deployment YAML File For Installing/Upgrading/Scaling Up & Down Kubernetes Cluster Using OLCNECTL Utility

Please refer to the Git link below, which provides an example OCNE 1.x release environment deployment YAML file for installing/upgrading/scaling up & down a Kubernetes cluster using the olcnectl utility.

https://github.com/tarbots/ocne-1x-installation-deployment-yaml-file
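For orientation, an olcnectl environment configuration file generally follows the shape sketched below. All host names, ports, and the environment/cluster names are placeholders; see the linked repository for complete working examples.

```yaml
environments:
  - environment-name: myenvironment
    globals:
      api-server: operator.example.com:8091
    modules:
      - module: kubernetes
        name: mycluster
        args:
          control-plane-nodes:
            - control1.example.com:8090
          worker-nodes:
            - worker1.example.com:8090
```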


Keywords:

github gitlink links environment environments ocne olcne upgrade scale update installation installing install new deploy deployment yaml yml

OCNE 2.X: "ocne cluster console" Command To Connect to OCNE OCK Node

The below command can be used to connect to an OCNE 2.x OCK instance from the workstation CLI node.

ocne cluster console --node <node> --direct
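As a usage sketch, the helper below just assembles the command line for a given node. The node name ocne-worker-1 is hypothetical; take real names from kubectl get nodes output.

```shell
# ocne-worker-1 is a hypothetical node name; list real ones with "kubectl get nodes".
ocne_console_cmd() {
  echo "ocne cluster console --node $1 --direct"
}
# Prints the command to run from the workstation CLI node:
ocne_console_cmd ocne-worker-1
# prints: ocne cluster console --node ocne-worker-1 --direct
```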



Keywords

connect connecting ssh sshing nodes console consoles how to howto doc docs


OCNE 2.x: "ocne cluster dump" Command To Collect Diagnostics Data in Oracle Cloud Native Environment

The following ocne cluster dump command can be used to collect diagnostic data in Oracle Cloud Native Environment 2.x for debugging issues.

ocne cluster dump -k <kubeconfig file> -m -d /<directory to save logs>
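A filled-in sketch of the invocation is below. KUBECONFIG_FILE and DUMP_DIR are hypothetical paths; adjust them for your environment, and note the target directory must exist and be writable.

```shell
# Hypothetical paths; adjust KUBECONFIG_FILE and DUMP_DIR for your environment.
KUBECONFIG_FILE="$HOME/kubeconfig.mycluster"
DUMP_DIR="/tmp/ocne-dump.$$"
mkdir -p "$DUMP_DIR"          # the target directory must be writable
cmd="ocne cluster dump -k $KUBECONFIG_FILE -m -d $DUMP_DIR"
echo "$cmd"                   # run "$cmd" against a reachable cluster
```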

OCNE: Podman/Crictl Pull Commands To Pull OCK Image

Below are the crictl and podman pull commands to pull the OCK image for OCNE 2.x. Change the OCK image version in the commands as needed.

crictl pull container-registry.oracle.com/olcne/ock:1.32

podman pull container-registry.oracle.com/olcne/ock:1.32



Keywords:

OCNE Oracle Cloud Native Environment olcne CNE OCK Container Engine Kubernetes command commands ocne pulling 

OCNE: How To Check Exact OCNE Version and Kubernetes Version In Oracle Cloud Native Environment 1.x Release?

Following are the steps to check the exact OCNE version and Kubernetes version in an Oracle Cloud Native Environment 1.x release.

From the OCNE Operator node, list the OCNE and Kubernetes packages that are installed on it.

For this, run the below command:

#rpm -qa --last | egrep -i "olcne|kube"

olcne-api-server-1.9.2-3.el8.x86_64           Wed 26 Feb 2025 03:48:15 PM GMT

olcne-utils-1.9.2-3.el8.x86_64                Wed 26 Feb 2025 03:48:14 PM GMT

kubectl-1.29.9-3.el8.x86_64                   Wed 26 Feb 2025 03:48:10 PM GMT

olcne-selinux-1.0.0-9.el8.x86_64              Wed 26 Feb 2025 03:47:53 PM GMT

olcnectl-1.9.2-3.el8.x86_64                   Wed 26 Feb 2025 03:46:14 PM GMT
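As a convenience, the OCNE release can be cut straight out of the olcnectl package string shown above; a small sed sketch using that sample value:

```shell
# Sample package name taken from the rpm output above
pkg="olcnectl-1.9.2-3.el8.x86_64"
# Drop the "olcnectl-" prefix and the "-<release>.<dist>.<arch>" suffix
ocne_ver=$(echo "$pkg" | sed -E 's/^olcnectl-([0-9.]+)-.*$/\1/')
echo "$ocne_ver"
# prints: 1.9.2
```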


You can also list the Kubernetes and OCNE package versions on the Control node using the above command.

To gather the Kubernetes version on the Kubernetes nodes, you can run the below kubectl command on the Control node where you have set up the kubectl environment:

#kubectl get nodes -owide


NAME         STATUS   ROLES           AGE   VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                   KERNEL-VERSION                   CONTAINER-RUNTIME

ol8-19ctrl   Ready    control-plane   16m   v1.29.9+3.el8   10.0.1.30     <none>        Oracle Linux Server 8.10   5.15.0-304.171.4.el8uek.x86_64   cri-o://1.29.1

ol8-19wrkr   Ready    <none>          15m   v1.29.9+3.el8   10.0.1.73     <none>        Oracle Linux Server 8.10   5.15.0-304.171.4.el8uek.x86_64   cri-o://1.29.1
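If only the node-to-version mapping is needed, awk can project it out of the kubectl output. A sketch over a trimmed sample of the output above (VERSION is the fifth column in both the plain and -owide formats):

```shell
# Trimmed sample of the "kubectl get nodes" output above (5 columns kept)
sample='ol8-19ctrl Ready control-plane 16m v1.29.9+3.el8
ol8-19wrkr Ready <none> 15m v1.29.9+3.el8'
# Print node name and Kubernetes version (columns 1 and 5)
echo "$sample" | awk '{print $1, $5}'
# prints:
# ol8-19ctrl v1.29.9+3.el8
# ol8-19wrkr v1.29.9+3.el8
```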


OCNE: Install Oracle Cloud Native Environment (OCNE) 1.9 Non HA on OCI (Oracle Cloud Infrastructure)

Below are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 Non HA on OCI (Oracle Cloud Infrastructure).

1) Provision 3 OL8 instances from the OCI Cloud portal - 1 for the Operator, 1 for the Control node, and 1 for the Worker node. You can have more worker nodes as well if you would like. The latest OL8 instances come with the UEK7 kernel. The default OCI user is opc.

2) For the opc user, enable passwordless SSH from the Operator node to the Control & Worker nodes and to itself.

For this, run the below steps.

Generate the public key on the Operator node by running the below command.
# ssh-keygen -t rsa 
The above command generates the /home/opc/.ssh/id_rsa.pub file, which is the public key file.

Copy the content of the /home/opc/.ssh/id_rsa.pub key on the Operator node and append it to the end of the /home/opc/.ssh/authorized_keys file on the Operator, Control, and Worker nodes.
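When password authentication is still enabled for opc, ssh-copy-id performs the same append automatically. The sketch below only prints the per-node commands (drop the echo to actually run them); the rhck-* host names are just this post's example nodes.

```shell
# Print the ssh-copy-id command for each node; remove "echo" to execute.
# Host names are placeholders matching this post's example nodes.
for host in rhck-opr rhck-ctrl rhck-wrkr; do
  echo "ssh-copy-id opc@$host"
done
```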

3) Verify that the passwordless SSH works from Operator node to itself and to control and worker nodes using ssh command.

4) On the Operator node, Control node, and all Worker nodes, install the oracle-olcne-release-el8 package:

#sudo dnf -y install oracle-olcne-release-el8

sudo dnf -y install oracle-olcne-release-el8
Last metadata expiration check: 1:27:12 ago on Wed 26 Feb 2025 03:41:24 AM GMT.
Dependencies resolved.
===========================================================================================
 Package                       Architecture Version          Repository               Size
===========================================================================================
Installing:
 oracle-ocne-release-el8       x86_64       1.0-12.el8       ol8_baseos_latest        16 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 20 k
Downloading Packages:
oracle-ocne-release-el8-1.0-12.el8.x86_64.rpm              214 kB/s |  16 kB     00:00    
-------------------------------------------------------------------------------------------
Total                                                      209 kB/s |  16 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Running scriptlet: oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Verifying        : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 

Installed:
  oracle-ocne-release-el8-1.0-12.el8.x86_64                                                

Complete!

5) On the Operator, Control, and Worker nodes, back up the /etc/yum.repos.d/oracle-ocne-ol8.repo file and update it to change the OL8 developer repo name from ol8_developer_olcne to ol8_developer. For this, run the below command.
# sudo sed -i 's/ol8_developer_olcne/ol8_developer/g' /etc/yum.repos.d/oracle-ocne-ol8.repo

6) On the Operator, Control, and Worker nodes, enable the OLCNE 1.9 and other OL8 & kernel yum repositories:

# sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7


7) On Operator, Control, and Worker nodes, Disable old OCNE repos.

sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_UEKR6


8) On the Operator, Control, and Worker nodes, verify that the OCNE 1.9 repo and the other repos enabled in steps (6) and (7) above are enabled:

#sudo dnf repolist enabled

sudo dnf repolist enabled
Repository ol8_developer is listed more than once in the configuration
repo id                        repo name
ol8_MySQL84                    MySQL 8.4 Server Community for Oracle Linux 8 (x86_64)
ol8_MySQL84_tools_community    MySQL 8.4 Tools Community for Oracle Linux 8 (x86_64)
ol8_MySQL_connectors_community MySQL Connectors Community for Oracle Linux 8 (x86_64)
ol8_UEKR7                      Latest Unbreakable Enterprise Kernel Release 7 for Oracle Linux 8 (x86_64)
ol8_addons                     Oracle Linux 8 Addons (x86_64)
ol8_appstream                  Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest              Oracle Linux 8 BaseOS Latest (x86_64)
ol8_ksplice                    Ksplice for Oracle Linux 8 (x86_64)
ol8_kvm_appstream              Oracle Linux 8 KVM Application Stream (x86_64)
ol8_oci_included               Oracle Software for OCI users on Oracle Linux 8 (x86_64)
ol8_olcne19                    Oracle Cloud Native Environment version 1.9 (x86_64)


9) On the Operator node, install the olcnectl software package:

# sudo dnf -y install olcnectl

sudo dnf -y install olcnectl
Repository ol8_developer is listed more than once in the configuration
Oracle Linux 8 BaseOS Latest (x86_64)                      215 kB/s | 4.3 kB     00:00    
Oracle Linux 8 Application Stream (x86_64)                 379 kB/s | 4.5 kB     00:00    
Oracle Linux 8 Addons (x86_64)                             286 kB/s | 3.5 kB     00:00    
Oracle Cloud Native Environment version 1.9 (x86_64)       736 kB/s |  89 kB     00:00    
Latest Unbreakable Enterprise Kernel Release 7 for Oracle  269 kB/s | 3.5 kB     00:00    
Oracle Linux 8 KVM Application Stream (x86_64)             8.4 MB/s | 1.6 MB     00:00    
Dependencies resolved.
===========================================================================================
 Package             Architecture      Version                Repository              Size
===========================================================================================
Installing:
 olcnectl            x86_64            1.9.2-3.el8            ol8_olcne19            4.8 M

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 4.8 M
Installed size: 15 M
Downloading Packages:
olcnectl-1.9.2-3.el8.x86_64.rpm                             21 MB/s | 4.8 MB     00:00    
-------------------------------------------------------------------------------------------
Total                                                       20 MB/s | 4.8 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : olcnectl-1.9.2-3.el8.x86_64                                       1/1 
  Verifying        : olcnectl-1.9.2-3.el8.x86_64                                       1/1 

Installed:
  olcnectl-1.9.2-3.el8.x86_64                                                              

Complete!



10) Run the olcnectl provision command to create the OCNE Kubernetes environment.

In the below command, replace the name for the --api-server flag with the Operator node, the --control-plane-nodes flag with the Control nodes, and --worker-nodes with the Worker nodes. For --environment-name, give a desired OCNE environment name of your choice; for --name, give a Kubernetes cluster name of your choice.

olcnectl provision \
--api-server rhck-opr \
--control-plane-nodes rhck-ctrl \
--worker-nodes rhck-wrkr \
--environment-name cne-rhck-env \
--name cne-rhck-nonha-cluster \
--yes

Below is the console output of a successful run of the provision command, for reference.

#olcnectl provision \
> --api-server rhck-opr \
> --control-plane-nodes rhck-ctrl \
> --worker-nodes rhck-wrkr \
> --environment-name cne-rhck-env \
> --name cne-rhck-nonha-cluster \
> --yes
INFO[26/02/25 05:34:51] Generating certificate authority             
INFO[26/02/25 05:34:51] Generating certificate for rhck-opr          
INFO[26/02/25 05:34:51] Generating certificate for rhck-ctrl         
INFO[26/02/25 05:34:52] Generating certificate for rhck-wrkr         
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.key" to "/etc/olcne/certificates/node.key" on rhck-opr 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.key" to "/etc/olcne/certificates/node.key" on rhck-ctrl 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.key" to "/etc/olcne/certificates/node.key" on rhck-wrkr 
INFO[26/02/25 05:34:53] Apply api-server configuration on rhck-opr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Install API Server
    Add firewall port 8091/tcp
 
INFO[26/02/25 05:34:53] Apply control-plane configuration on rhck-ctrl:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
INFO[26/02/25 05:34:53] Apply worker configuration on rhck-wrkr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
Environment cne-rhck-env created.
Modules created successfully.
Modules installed successfully.
INFO[26/02/25 05:49:19] Kubeconfig for instance "cne-rhck-nonha-cluster" in environment "cne-rhck-env" written to kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster


11) Update the OCNE config to allow running olcnectl commands without the --api-server argument. For this, run the below command on the Operator node.

In the below command, replace the --api-server node name with the Operator node name, and --environment-name with the OCNE environment name that was given in the provision command in step (10) above.

olcnectl module instances \
--api-server rhck-opr:8091 \
--environment-name cne-rhck-env \
--update-config

Now rerun the olcnectl module instances command without the --api-server argument, as follows. This command lists the Control & Worker nodes and the Kubernetes cluster name.
olcnectl module instances --environment-name cne-rhck-env

#olcnectl module instances --environment-name cne-rhck-env
INSTANCE               MODULE     STATE    
rhck-ctrl:8090         node       installed
rhck-wrkr:8090         node       installed
cne-rhck-nonha-cluster kubernetes installed


12) Set up the kubectl environment on the Control node to run kubectl commands for Kubernetes operations. For this, run the below commands on the Control node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

13) Validate that kubectl is working on the Control node and that the Kubernetes nodes are Ready and the pods are in Running state. For this, run the below kubectl commands.
kubectl get nodes
kubectl get pods -A

Below are sample outputs for reference.

# kubectl get nodes

NAME        STATUS   ROLES           AGE   VERSION
rhck-ctrl   Ready    control-plane   11m   v1.29.9+3.el8
rhck-wrkr   Ready    <none>          10m   v1.29.9+3.el8

# kubectl get pods -A

NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-5859f68d4-2z6vq                       1/1     Running   0          11m
kube-system            coredns-5859f68d4-lqxxk                       1/1     Running   0          11m
kube-system            etcd-rhck-ctrl                                1/1     Running   0          11m
kube-system            kube-apiserver-rhck-ctrl                      1/1     Running   0          11m
kube-system            kube-controller-manager-rhck-ctrl             1/1     Running   0          11m
kube-system            kube-flannel-ds-gz548                         1/1     Running   0          8m49s
kube-system            kube-flannel-ds-rmpdt                         1/1     Running   0          8m49s
kube-system            kube-proxy-ffnzs                              1/1     Running   0          10m
kube-system            kube-proxy-n7kxf                              1/1     Running   0          11m
kube-system            kube-scheduler-rhck-ctrl                      1/1     Running   0          11m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-fnjtf         1/1     Running   0          8m48s
ocne-modules           verrazzano-module-operator-6cb74478bf-xv8z2   1/1     Running   0          8m48s

Now you have an installed OCNE Kubernetes environment ready to go.

- - -
Keywords added for search:

OCNE installation

OCNE: Olcnectl Command To Uninstall Istio Module In Oracle Cloud Native Environment 1.x Versions

The below olcnectl command can be used to uninstall an installed Istio module.

olcnectl module uninstall --name <istio module name> --environment-name <ocne environment name>
Below is an example snippet:

#olcnectl module uninstall --name istio-module --environment-name ocne-environment

Modules uninstalled successfully.

OCNE: Where Can I Find Kubernetes Incremental Upgrade / Update Paths Information Which Happened As Part Of OCNE Upgrade?

On the OCNE Operator node, the file below has that info.


/etc/olcne/modules/kubernetes/1.0.0/kubernetes.yaml


Below is a snippet of the Kubernetes update path info from the above file.


    updatepath:
      1.14.8:
        - 1.14.9
        - 1.15.11
      1.14.9:
        - 1.15.11
      1.15.11:
        - 1.16.8
      1.16.8:
        - 1.17.9
      1.17.4:
        - 1.18.10
      1.17.6:
        - 1.18.10
      1.17.9:
        - 1.18.18
      1.18.10:
        - 1.19.8
      1.18.18:
        - 1.19.8
      1.19.8:
        - 1.20.6
      1.20.6:
        - 1.21.14-3
      1.20.11:
        - 1.21.14-3
      1.21.6:
        - 1.22.16
      1.21.14:
        - 1.22.16
      1.21.14-3:
        - 1.22.16
      1.22.8:
        - 1.23.14
      1.22.14:
        - 1.23.14
      1.22.16:
        - 1.23.14
      1.23.7:
        - 1.24.15
      1.23.11:
        - 1.24.15
      1.23.14:
        - 1.24.15
      1.24.5:
        - 1.24.15
      1.24.8:
        - 1.24.15
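The updatepath map above can be walked mechanically to see the full incremental path between two versions. A bash sketch over a subset of the entries (only single-target entries, using an associative array; requires bash 4+):

```shell
#!/usr/bin/env bash
# Subset of the updatepath entries above, keyed current-version -> next-version.
declare -A updatepath=(
  [1.19.8]=1.20.6
  [1.20.6]=1.21.14-3
  [1.21.14-3]=1.22.16
  [1.22.16]=1.23.14
  [1.23.14]=1.24.15
)
# Follow the chain from a starting version until no further step exists.
v=1.19.8
path=$v
while [[ -n "${updatepath[$v]:-}" ]]; do
  v=${updatepath[$v]}
  path+=" -> $v"
done
echo "$path"
# prints: 1.19.8 -> 1.20.6 -> 1.21.14-3 -> 1.22.16 -> 1.23.14 -> 1.24.15
```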