
PODMAN: Curl Command To List Container Images And Their Tags From Podman Private Registry Running On Remote Host

The following curl command can be used to list container images and their tags from a Podman private registry running on a remote host. In the command below, replace private-registry with the hostname of the Podman private registry host and 5000 with the port number of the private registry.

curl https://private-registry:5000/v2/_catalog | jq '.repositories[]'  | sort  | xargs -I _ curl -s -k -X GET https://private-registry:5000/v2/_/tags/list

If you are connecting to an insecure private registry that uses self-signed certificates, the following curl command with the --insecure option can be used.

curl --insecure https://private-registry:5000/v2/_catalog | jq '.repositories[]'  | sort  | xargs -I _ curl -s -k -X GET https://private-registry:5000/v2/_/tags/list

Below is example console output of the above command.

{"name":"olcne","tags":["latest"]}
{"name":"olcne/alertmanager","tags":["v0.24.0"]}
{"name":"olcne/apiserver","tags":["v3.27.0","v3.25.1-2"]}
{"name":"olcne/ceph","tags":["v1.12.3","v1.11.6-2","v17.2.5-2","v1.10.9-1"]}
{"name":"olcne/cephcsi","tags":["v3.7.2-1","v3.9.0","v3.7.2-2"]}
{"name":"olcne/cert-manager-acmesolver","tags":["v1.14.5"]}
{"name":"olcne/cert-manager-cainjector","tags":["v1.16.3","v1.14.5","v1.9.1"]}
{"name":"olcne/cert-manager-controller","tags":["v1.16.3","v1.14.5","v1.9.1"]}
{"name":"olcne/cert-manager-startupapicheck","tags":["v1.16.3","v1.14.5"]}
{"name":"olcne/cert-manager-webhook","tags":["v1.9.1","v1.16.3","v1.14.5"]}
{"name":"olcne/cert-manager-webhook-oci","tags":["v1.1.0"]}


Keywords:

Remote external registries private mirror mirroring containers image connecting

KUBERNETES: Kubectl Command To List All Pods And Nodes On Which They Are Running

The below kubectl command can be used to list all pods and the nodes on which they are running.

kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces

Below is sample output.

NODE       NAME
worker2    grafana-78ccdc896d-rsntv
worker2    istio-egressgateway-688d8db5f6-4k57n
worker3    istio-egressgateway-688d8db5f6-lmk42
worker2    istio-ingressgateway-97bd4fb86-2l4xx
worker1    istio-ingressgateway-97bd4fb86-ljjfr
worker3    istiod-6945f949c8-m9tsx
worker1    istiod-6945f949c8-x9mnm
worker3    prometheus-server-85577f7c59-nxlv9
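
To narrow the listing to the pods running on a single node, a field selector can be used. For example, using worker1 from the sample output above:

kubectl get pods -A --field-selector spec.nodeName=worker1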


Keywords:

commands run running placed pod node how to docs identify identifying

KUBERNETES: Operating System Commands To Validate Kubernetes Related OS Settings For Comparing and Troubleshooting Issues

This document provides useful operating-system-level commands for validating the following OS settings related to Kubernetes:

  • firewalld
  • SELinux
  • rpm packages list
  • iptables & iptables NAT rules
  • system-level settings (sysctl -a output)
  • Operating system release and kernel versions
  • List of all the services on the operating system
  • All the repositories that are enabled and disabled
  • Swap settings
  • Loaded modules (lsmod output)
  • Sudo users

The commands listed in this document come in handy for troubleshooting Kubernetes issues where the environment was working before and recently broke, for example after OS patching, a Kubernetes upgrade, or other changes.

If there are plans for an upgrade or maintenance, it is good to capture these outputs on the Kubernetes nodes before and after the upgrade/maintenance. In case there are any issues afterwards, you can capture the command outputs again and compare what's changed; a simple capture script is sketched after the command list below.

Following is the list of commands to use.

List the loaded modules

lsmod

List all the system level settings (sysctl settings)

sysctl -a

Capture the kernel version

uname -a

Capture Operating system release version.

cat /etc/*release*

Capture and save the current IP tables.

iptables-save

Capture the current IP tables NAT rules.

iptables -L -t nat -vn --line-numbers

List all the system level services which are enabled/disabled/stopped/started

systemctl list-unit-files

Validate SELinux status

sestatus

List all the firewalld rules

sudo firewall-cmd --list-all-zones

Validate swap settings to see if it is disabled

cat /proc/swaps

List all the Yum/DNF repositories which are enabled and disabled.

sudo dnf repolist all

List all the rpm packages that are installed and when they were last updated

rpm -qa --last

Get the list of all the users

getent passwd

Capture the current sshd config settings

cat /etc/ssh/sshd_config

Capture the current sudoers configuration file

cat /etc/sudoers

Capture the current sudoer users info

ls -lrt /etc/sudoers.d/*

Validate the current users which have sudoer permissions

getent passwd | cut -f1 -d: | sudo xargs -L1 sudo -l -U | grep -v 'not allowed'
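
To make the before/after comparison easier, these outputs can be captured into a timestamped directory and diffed after the maintenance. Below is a minimal sketch of such a script; the output location and the subset of commands captured are assumptions you can adjust.

#!/bin/bash
# Capture Kubernetes-related OS settings into a timestamped directory
# so that pre- and post-maintenance states can be compared with diff.
OUTDIR=/tmp/os-settings-$(date +%Y%m%d-%H%M%S)
mkdir -p "$OUTDIR"
lsmod                     > "$OUTDIR/lsmod.txt"
sysctl -a                 > "$OUTDIR/sysctl.txt" 2>/dev/null
uname -a                  > "$OUTDIR/uname.txt"
cat /etc/*release*        > "$OUTDIR/os-release.txt"
sudo iptables-save        > "$OUTDIR/iptables.txt"
systemctl list-unit-files > "$OUTDIR/unit-files.txt"
sestatus                  > "$OUTDIR/sestatus.txt"
cat /proc/swaps           > "$OUTDIR/swaps.txt"
sudo dnf repolist all     > "$OUTDIR/repos.txt"
rpm -qa --last            > "$OUTDIR/rpm-packages.txt"
echo "Captured to $OUTDIR"

After the maintenance, run the script again and compare the two directories, for example with diff -r /tmp/os-settings-<before> /tmp/os-settings-<after>.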


Keywords:

OS operating system systems OS command commands validating checking check compare comparison maintenance outage window changed changes changing troubleshoot debug debugging broke broken issues problem problems

KUBERNETES: Kubectl Command To Do Rollout Restart of Flannel Pods

The below kubectl command performs a rollout restart of the Flannel DaemonSet pods.

kubectl rollout restart ds kube-flannel-ds -n kube-system
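
The progress of the restart can be watched with the rollout status command:

kubectl rollout status ds kube-flannel-ds -n kube-system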


Keywords:


flannel pod pods podnetwork network networks networking


KUBERNETES: How to Install Nginx Ingress Controller on Kubernetes Cluster Using Helm

Following are the steps to install the Nginx ingress controller using Helm.

1) Add the Helm repo for the Nginx ingress controller, in this example named helm-ingress-nginx.

helm repo add helm-ingress-nginx https://kubernetes.github.io/ingress-nginx
  
2) Verify and update the Nginx ingress controller Helm chart.

Update the Helm repo to make sure you have the latest Nginx ingress controller Helm chart. Following is the command.

helm repo update
  
List the Helm repositories to make sure you see the Nginx ingress controller chart. Following is the command.

helm repo list

3) Create a Kubernetes namespace for the Nginx ingress controller. In this example, ingressns.

kubectl create namespace ingressns

4) Install nginx ingress controller using helm

helm upgrade --install ingressns helm-ingress-nginx/ingress-nginx --namespace ingressns

5) Verify that the Nginx Ingress Controller pod is up

kubectl get pods -n ingressns
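
You can also check that the ingress controller service was created and note its type and exposed ports:

kubectl get svc -n ingressns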

KUBERNETES: Kubectl Command to Decode Grafana Admin User Password

The below kubectl command can be used to decode the Grafana admin user password. Replace $NAMESPACE with the namespace where Grafana is deployed.

kubectl get secret --namespace $NAMESPACE grafana \
    -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

KUBERNETES: Grafana Configuration .ini File Which Has Admin User Password Credentials

/etc/grafana/grafana.ini is the file that holds the Grafana admin user password credentials.

You will see the below block if user credentials are specified in the Grafana /etc/grafana/grafana.ini file.

[security]
admin_user = admin
admin_password = admin
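
When Grafana runs as a pod, the same file can be inspected in place. A minimal sketch, assuming a deployment named grafana in namespace $NAMESPACE:

kubectl exec -n $NAMESPACE deploy/grafana -- grep -A2 '\[security\]' /etc/grafana/grafana.ini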


KUBERNETES: How to Start / Run Temporary Pod On Specific Node In Kubernetes

You can start a pod temporarily on a specific node for testing by using the kubectl command as follows:

kubectl run <name> --image=<image> --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": {"kubernetes.io/hostname": "<hostname>"}}}' --restart=Never

With the above command, a temporary pod with the specified name and image is started on the specified node. Because the restart policy is "Never", the pod will not be restarted or recreated once it exits or is deleted.

For example, to start an nginx pod on the worker1 node using the nginx image from the Oracle container registry, the command looks as follows:

kubectl run nginx --image=container-registry.oracle.com/olcne/nginx:1.17.7 --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": {"kubernetes.io/hostname": "worker1"}}}' --restart=Never
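
To confirm that the pod landed on the intended node, check the NODE column in the wide output:

kubectl get pod nginx -o wide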


Keywords:

temporarily starting running pod pods test testing without deployment deployments yaml file files pull pulling particular desired choice

KUBERNETES: How to Start / Run Temporary Pod For Testing In Kubernetes

You can start a pod temporarily for testing by using the kubectl command as follows:

kubectl run <name> --image=<image> --restart=Never

With the above command, a temporary pod with the specified name and image is deployed. Because the restart policy is "Never", the pod will not be restarted or recreated once it exits or is deleted.

For example, to start an nginx pod using the nginx image from the Oracle container registry, the command looks as follows:

kubectl run nginx --image=container-registry.oracle.com/olcne/nginx:1.17.7 --restart=Never
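
Once testing is done, the temporary pod can be removed:

kubectl delete pod nginx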


Keywords:

temporarily starting running pod pods test testing without deployment deployments yaml file files pull pulling

KUBERNETES: How To Install and Configure HAProxy As Load Balancer For Kubernetes HA Cluster?

Following are the steps to install and configure HAProxy as a load balancer for a Kubernetes HA cluster.

1) Install haproxy package using below command.
# dnf -y install haproxy

2) Validate if firewalld is running on the node.
# systemctl status firewalld.service 
If firewalld is running, add the port on which HAProxy will listen to firewalld to allow communication. For example, if HAProxy is going to listen on port 6443, add TCP port 6443 to firewalld.

For this, run the below command.
# firewall-cmd --permanent --add-port=6443/tcp
Reload firewalld after the change.

# firewall-cmd --reload
3) Back up the original /etc/haproxy/haproxy.cfg configuration file.
# cp -rp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.ORIG

4) Make sure SELinux is set to permissive.

To check whether SELinux is set to permissive, run the below command.
# sestatus

If you want SELinux to stay enforcing and strict, set SELinux rules to allow communication to the HAProxy service and port.

5) Remove the existing content of /etc/haproxy/haproxy.cfg file and update with below content.

In the below HAProxy configuration, make the following changes:

The bind port in this example is 6443; change it to the port you want HAProxy to listen on for incoming Kubernetes API traffic.

Change the IP and port for server node1 and server node2 to the backend Kubernetes nodes' IPs and ports. In this case we have a backend set of 2 Kubernetes nodes. If you have more backend nodes, add entries as server node3, server node4, ... and update the entries with the backend Kubernetes nodes' IPs and ports.
#--------------------------------------------------------------------
# Kubernetes Cluster Control Plane Nodes Load Balancing
#--------------------------------------------------------------------
defaults
    maxconn 20000
    mode    tcp
    option  dontlognull
    timeout http-request 10s
    timeout queue        1m
    timeout connect      10s
    timeout client       86400s
    timeout server       86400s
    timeout tunnel       86400s
frontend k8s-api
    bind :::6443 v4v6
    mode tcp
    default_backend k8s-api
backend k8s-api
    option  httpchk GET /readyz HTTP/1.0
    option  log-health-checks
    http-check expect status 200
    mode tcp
    balance roundrobin
    default-server verify none check-ssl inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 5000 maxqueue 5000 weight 100
    server node1 10.10.10.1:6443 check
    server node2 10.10.10.2:6443 check

6) Start and enable the HAProxy service.
# systemctl start haproxy.service
# systemctl enable --now haproxy 
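
To verify that HAProxy is listening and forwarding to the API servers, check the listening socket on the load balancer node and probe the Kubernetes readyz endpoint through the bind port (6443 in this example):

# ss -tlnp | grep 6443
# curl -k https://localhost:6443/readyz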

Keywords:

HAProxy, installing, configuring, high availability, balancing, loadbalancer

KUBERNETES: How to Change ETCD Leader From One Control Node To Other Using ETCDCTL Commands?

1. Identify which Kubernetes control node is the leader. For this, run the below etcdctl command. In the below command, for --endpoints give the https client address endpoint of each control node.

sudo ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.182:2379,https://10.0.0.236:2379,https://10.0.0.4:2379,https://10.0.0.155:2379,https://10.0.0.96:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt   --key=/etc/kubernetes/pki/etcd/server.key endpoint status -w table

Below is the sample output.

+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.0.182:2379 |  d5907367f2bc466 |   3.5.3 |  9.7 MB |     false |      false |         9 |     110761 |             110761 |        |
| https://10.0.0.236:2379 | e1c82ddedc10c490 |   3.5.3 |  9.7 MB |     false |      false |         9 |     110761 |             110761 |        |
|   https://10.0.0.4:2379 | 86d10b822621f9cf |   3.5.3 |  9.7 MB |      true |      false |         9 |     110761 |             110761 |        |
| https://10.0.0.155:2379 | 9869473a238b9297 |   3.5.3 |  9.6 MB |     false |      false |         9 |     110761 |             110761 |        |
|  https://10.0.0.96:2379 | c4438cde3e1e5536 |   3.5.3 |  9.5 MB |     false |      false |         9 |     110761 |             110761 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
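
The member IDs and names of all etcd members can also be listed directly, which helps when picking the target for a leader move. A minimal sketch, reusing one endpoint and the same certificate paths as above:

sudo ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.182:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key member list -w table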

2. Run the below etcdctl command to move the leader to the new control node.

sudo ETCDCTL_API=3 etcdctl --endpoints=<end-points> --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt   --key=/etc/kubernetes/pki/etcd/server.key move-leader <member ID>

For example, the command looks as follows.

In the below command, for --endpoints give the https client address endpoint of each control node, and replace the member ID (in this case d5907367f2bc466) with the member ID of the control node you wish to transfer leadership to.

sudo ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.182:2379,https://10.0.0.236:2379,https://10.0.0.4:2379,https://10.0.0.155:2379,https://10.0.0.96:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt   --cert=/etc/kubernetes/pki/etcd/server.crt   --key=/etc/kubernetes/pki/etcd/server.key move-leader d5907367f2bc466

Below is the sample console output you would see.

Leadership transferred from 86d10b822621f9cf to d5907367f2bc466

3. Run the command from step (1) again to check whether the leader moved to the new control node.

Below is example output. As you can see, the leader has now changed to member ID d5907367f2bc466.


+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.0.182:2379 |  d5907367f2bc466 |   3.5.3 |  9.7 MB |      true |      false |        10 |     111728 |             111728 |        |
| https://10.0.0.236:2379 | e1c82ddedc10c490 |   3.5.3 |  9.7 MB |     false |      false |        10 |     111728 |             111728 |        |
|   https://10.0.0.4:2379 | 86d10b822621f9cf |   3.5.3 |  9.7 MB |     false |      false |        10 |     111728 |             111728 |        |
| https://10.0.0.155:2379 | 9869473a238b9297 |   3.5.3 |  9.6 MB |     false |      false |        10 |     111728 |             111728 |        |
|  https://10.0.0.96:2379 | c4438cde3e1e5536 |   3.5.3 |  9.5 MB |     false |      false |        10 |     111728 |             111728 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+



KUBERNETES: KUBECTL Command To List Pods And Their Associated Container Listening Ports

Below command can be used.

kubectl get pods -A -o custom-columns="POD:.metadata.name,PORTS:.spec.containers[*].ports[*].containerPort"

Below is sample output of above command.

POD                                           PORTS
nginx-6f4cfc8479-22zlh                        80
nginx-6f4cfc8479-2trg9                        80
nginx-6f4cfc8479-7sfpj                        80
nginx-6f4cfc8479-mkfng                        80
nginx-6f4cfc8479-nffq6                        80
nginx-6f4cfc8479-vcmx8                        80
coredns-5859f68d4-7p95k                       53,53,9153
coredns-5859f68d4-f7hnb                       53,53,9153
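
To find which pods expose a specific port, the same output can be piped through grep. For example, using port 9153 from the sample output above:

kubectl get pods -A -o custom-columns="POD:.metadata.name,PORTS:.spec.containers[*].ports[*].containerPort" | grep -w 9153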

KUBERNETES: KUBECTL Command To Check Health Status and Liveness Probe (livez check) Of Kubernetes Components

The below kubectl command can be used on the Kubernetes control plane node.

kubectl get --raw='/livez?verbose'

Alternatively, the below curl command can be used instead.

curl -k https://localhost:6443/livez?verbose

Below is sample output of above command.

[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
livez check passed
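
The readiness checks of the API server can be queried the same way via the readyz endpoint:

kubectl get --raw='/readyz?verbose'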

KUBERNETES: KUBECTL Command To Check UnHealthy Status Pods

The below command can be used. It will list all the pods in all the namespaces which are in an unhealthy status.

kubectl get pods -o wide --all-namespaces | grep -vE 'Running|Completed'

The above command lists all the pods whose status is anything other than Running or Completed.
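
A field-selector-based alternative keys on the pod phase instead of text matching, so results can differ slightly for pods that are Running but not ready:

kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded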

KUBERNETES: KUBECTL Command To Check Kubernetes Components Status

Below command can be used to check the Kubernetes components status.

kubectl get cs

Below is sample output.

#kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok        
etcd-0   

KUBERNETES: KUBECTL Command To List The Pods and Their CPU & Memory Resource Limits

The below command can be used. It will list all the pods in all the namespaces along with their CPU and memory limits.

kubectl get pods -A -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[].resources.limits.cpu}{"\t"}{.spec.containers[].resources.limits.memory}{"\n"}{end}'

For listing the pods and their limits in specific namespace below command can be used.

kubectl get pods -n <namespace> -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[].resources.limits.cpu}{"\t"}{.spec.containers[].resources.limits.memory}{"\n"}{end}'

Below is sample output of the above command. In the output below, the memory limit is 50Mi (50 mebibytes, approximately 50MB) and the CPU limit is 100m (100 millicores), which means 1/10th of a CPU, i.e. 0.1 CPU.

kube-flannel-ds-4w6lq 100m 50Mi
kube-flannel-ds-69xn6 100m 50Mi
kube-flannel-ds-d7drw 100m 50Mi
kube-flannel-ds-f87dw 100m 50Mi
kube-flannel-ds-rwcjl 100m 50Mi
kube-flannel-ds-xlnh7 100m 50Mi
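
The same jsonpath pattern can be used to list resource requests instead of limits by swapping limits for requests:

kubectl get pods -A -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[].resources.requests.cpu}{"\t"}{.spec.containers[].resources.requests.memory}{"\n"}{end}'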

KUBERNETES: How To Identify Orphaned Pod In Kubernetes Cluster?

Orphaned pods can be identified by looking into the /var/log/messages system log on Oracle Linux.

The below command can be used to check for orphaned pods in a Kubernetes cluster on Oracle Linux.

sudo tail /var/log/messages | grep "orphaned pod" | awk '{print $23}' | cut -d\\ -f2 | cut -d\" -f2 | uniq
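
On systems where the kubelet runs under systemd, the kubelet journal can be searched for the same orphaned pod messages as an alternative. A minimal sketch:

sudo journalctl -u kubelet --no-pager | grep "orphaned pod"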

KUBERNETES: KUBECTL Command To List All Pod Disruption Budgets (PDB) Set In Cluster

Below command can be used.

kubectl get pdb -A

KUBERNETES: KUBECTL Command To Force Delete A Pod

Below command can be used.

kubectl delete pod -n <name-space> <pod-name> --force --grace-period=0

KUBERNETES (K8S): ETCDCTL Command To Delete Pod Registry Entry From ETCD Database

NOTE: Deleting the pod registry entry from the etcd database will delete the pod from the Kubernetes cluster. This action should never be performed in normal circumstances and may cause issues with the Kubernetes cluster. It needs to be taken with a lot of precaution, thorough analysis, and guidance.

Below is the command.

sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key del <pod registry entry>

For e.g. command can look as follows:

sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key del /registry/pods/default/nginx-static-pod-control1

To list the pod registry entries in etcd, you can run the below command.

sudo ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key get /registry/pods --prefix --keys-only