kubectl
kubeconfig
The default location of the Kubeconfig file is $HOME/.kube/config
on Linux and macOS, and %USERPROFILE%\.kube\config
on Windows.
To view the current config:
kubectl config view
To switch the current context:
kubectl config use-context <context-name>
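Before switching, you can inspect what contexts exist. These standard `kubectl config` subcommands only read the kubeconfig file; no cluster access is needed:

```shell
# list all contexts defined in the active kubeconfig
kubectl config get-contexts

# print only the name of the currently selected context
kubectl config current-context
```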
We can use multiple kubeconfig files by setting the $KUBECONFIG environment variable to a colon-separated list of file paths.
mkdir ~/.kube/clusters
# move the kubeconfig files to the folder
mv cluster1.config ~/.kube/clusters
export KUBECONFIG=$(find ~/.kube/clusters -type f | sed ':a;N;s/\n/:/;ba')
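The `sed` one-liner above joins the file list with colons; `paste -sd: -` does the same join more readably. A self-contained demo using throwaway files instead of real kubeconfigs:

```shell
# demo: build a colon-separated file list, as expected by $KUBECONFIG
dir=$(mktemp -d)
touch "$dir/cluster1.config" "$dir/cluster2.config"
# join the newline-separated find output with ':'
find "$dir" -type f | sort | paste -sd: -
# prints something like: <dir>/cluster1.config:<dir>/cluster2.config
rm -rf "$dir"
```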
Another way is to merge the multiple kubeconfig files into one and store it in ~/.kube/config, the default location used when the $KUBECONFIG environment variable is not set.
KUBECONFIG="/path/cluster1.config:/path/cluster2.config"
kubectl config view --flatten > ~/.kube/config
One-line merge of configs:
$ cp ~/.kube/config ~/.kube/config.bak && KUBECONFIG=~/.kube/config:/path/to/new/config kubectl config view --flatten > /tmp/config && mv /tmp/config ~/.kube/config
One-line merge of configs in WSL, sourced from the Windows user profile:
# merge configs in wsl, sourced from windows user profile, target config in wsl
cp ~/.kube/config ~/.kube/config.bak \
&& KUBECONFIG=$(find $(wslpath "$(wslvar USERPROFILE)")/.kube/clusters -type f | sed ':a;N;s/\n/:/;ba') \
kubectl config view --flatten > /tmp/config \
&& mv /tmp/config ~/.kube/config && unset KUBECONFIG
# merge configs in wsl, sourced from windows user profile, target config in windows
USERPROFILE=$(wslpath "$(wslvar USERPROFILE)") \
&& echo "WINDOWS USERPROFILE: $USERPROFILE" \
&& cp $USERPROFILE/.kube/config $USERPROFILE/.kube/config.bak \
&& KUBECONFIG=$(find $USERPROFILE/.kube/clusters -type f | sed ':a;N;s/\n/:/;ba') \
kubectl config view --flatten > /tmp/config \
&& mv /tmp/config $USERPROFILE/.kube/config && unset KUBECONFIG && unset USERPROFILE
On Linux, set proper permissions for the kubeconfig file:
# notice warning about insecure permissions
$ helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/romeo/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/romeo/.kube/config
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# change config permissions
$ chmod 0600 ~/.kube/config
# recheck if warning is gone
$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
argo-cd argocd 1 2022-12-21 15:08:04.794180549 +0200 EET deployed argo-cd-1.0.0
cert-manager-csi-driver cert-manager 1 2022-12-18 22:47:38.516132109 +0200 EET deployed cert-manager-csi-driver-v0.5.0 v0.5.0
ingress-nginx ingress-nginx 1 2022-12-18 22:53:37.516625036 +0200 EET deployed ingress-nginx-4.4.0 1.5.1
nfs-client kube-system 1 2022-12-18 22:20:13.652990319 +0200 EET deployed nfs-subdir-external-provisioner-4.0.17 4.0.2
prometheus monitoring 1 2022-12-19 10:50:23.472744036 +0200 EET deployed kube-prometheus-stack-43.1.1 0.61.
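The effect of `chmod 0600` can be sanity-checked locally; a minimal sketch using a throwaway file rather than the real kubeconfig:

```shell
# demo: verify that chmod 0600 leaves only owner read/write permissions
f=$(mktemp)
chmod 0600 "$f"
stat -c '%a' "$f"   # → 600 (no group/world access, so helm stops warning)
rm -f "$f"
```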
krew - kubectl plugin manager
krew is a package manager for kubectl plugins. It allows you to discover, install, and manage kubectl plugins.
install krew
Installing krew on Linux can be done by following the maintainers' guide.
On Windows, you can download the latest release from [krew releases] and run the following command:
C:\Users\romeo>krew install krew
WARNING: To be able to run kubectl plugins, you need to add the
"%USERPROFILE%\.krew\bin" directory to your PATH environment variable
and restart your shell.
Updated the local copy of plugin index.
Installing plugin: krew
W1227 01:43:29.492484 4664 install.go:160] Skipping plugin "krew", it is already installed
search plugins
C:\Users\romeo>kubectl krew search
NAME DESCRIPTION INSTALLED
access-matrix Show an RBAC access matrix for server resources no
accurate Manage Accurate, a multi-tenancy controller no
install ctx plugin
$ kubectl krew install ctx
Updated the local copy of plugin index.
Installing plugin: ctx
Installed plugin: ctx
C:\Users\romeo>kubectl krew install ctx
Updated the local copy of plugin index.
Installing plugin: ctx
W1227 01:51:22.381421 33400 install.go:164] failed to install plugin "ctx": plugin "ctx" does not offer installation for this platform
failed to install some plugins: [ctx]: plugin "ctx" does not offer installation for this platform
Error: exit status 1
install konfig plugin
C:\Users\romeo>kubectl krew install konfig
Updated the local copy of plugin index.
Installing plugin: konfig
W1228 16:53:23.956983 28860 install.go:164] failed to install plugin "konfig": plugin "konfig" does not offer installation for this platform
failed to install some plugins: [konfig]: plugin "konfig" does not offer installation for this platform
Error: exit status 1
install config-registry plugin
After installing krew and the config-registry plugin, you can use the following command to switch between clusters:
kubectl config-registry
kubectx
kubectx
is a utility to manage and switch between kubectl contexts. It does not support a KUBECONFIG environment variable that lists multiple kubeconfig files.
C:\Users\romeo>kubectx
error: kubeconfig error: failed to load: cannot determine kubeconfig path: multiple files in KUBECONFIG are currently not supported
On Linux, multiple kubeconfig files are likewise not supported; no error is thrown, but switching between contexts does not work.
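A workaround, if you keep per-cluster files under ~/.kube/clusters as above, is to flatten them into a single ~/.kube/config before running kubectx. A sketch (back up your existing config first, as in the merge one-liners earlier):

```shell
# flatten multiple kubeconfig files into one so kubectx can read it
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=$(find ~/.kube/clusters -type f | paste -sd: -) \
  kubectl config view --flatten > /tmp/config
mv /tmp/config ~/.kube/config
# kubectx now sees every context from the merged file
kubectx
```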
install fzf
It’s recommended to install fzf on your machine, so you can interactively choose between entries using the arrow keys, or fuzzy-search as you type.
$ sudo apt-get install fzf
C:\Users\romeo>scoop install fzf
Updating Scoop...
Updating 'main' bucket...
Converting 'main' bucket to git repo...
Checking repo... OK
The main bucket was added successfully.
Scoop was updated successfully!
Installing 'fzf' (0.35.1) [64bit] from main bucket
fzf-0.35.1-windows_amd64.zip (1.3 MB) [==================================] 100%
Checking hash of fzf-0.35.1-windows_amd64.zip ... ok.
Extracting fzf-0.35.1-windows_amd64.zip ... done.
Linking ~\scoop\apps\fzf\current => ~\scoop\apps\fzf\0.35.1
Creating shim for 'fzf'.
'fzf' (0.35.1) was installed successfully!
See also: https://github.com/junegunn/fzf#using-linux-package-managers
Commands
Get events
$ kubectl get events -n argocd
LAST SEEN TYPE REASON OBJECT MESSAGE
4m2s Warning FailedGetResourceMetric horizontalpodautoscaler/argo-cd-argocd-repo-server-hpa failed to get memory utilization: missing request for memory
4m2s Warning FailedGetResourceMetric horizontalpodautoscaler/argo-cd-argocd-server-hpa failed to get memory utilization: missing request for memory
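The FailedGetResourceMetric warnings mean the HPA targets memory utilization, but the pods declare no memory request, so utilization cannot be computed. A hedged sketch of a fix using `kubectl set resources` (the deployment name is inferred from the HPA name in the event, and 256Mi is an illustrative value, not a recommendation):

```shell
# add a memory request so the HPA can compute memory utilization
# (deployment name and value are assumptions for illustration)
kubectl -n argocd set resources deployment argo-cd-argocd-repo-server \
  --requests=memory=256Mi
```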
top
C:\>kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k1-cp1 136m 7% 1982Mi 84%
k1-cp2 124m 6% 1786Mi 76%
k1-cp3 153m 8% 1863Mi 79%
k1-w1 266m 14% 4653Mi 81%
k1-w2 174m 9% 4447Mi 78%
k1-w3 134m 7% 4599Mi 81%
k1-w4 111m 5% 2755Mi 48%
C:\Users\romeo>kubectl top pod -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
argocd argo-cd-argocd-application-controller-0 6m 256Mi
argocd argo-cd-argocd-applicationset-controller-7885d49dd7-c4rq5 1m 23Mi
argocd argo-cd-argocd-applicationset-controller-7885d49dd7-mcxvw 2m 36Mi
argocd argo-cd-argocd-notifications-controller-84b9ff599d-947jg 1m 22Mi
argocd argo-cd-argocd-repo-server-766776c6db-q5fz6 1m 96Mi
argocd argo-cd-argocd-repo-server-766776c6db-q626j 1m 108Mi
argocd argo-cd-argocd-server-69b48d444f-2bm57 2m 32Mi
argocd argo-cd-argocd-server-69b48d444f-dk22s 1m 35Mi
argocd argo-cd-redis-ha-haproxy-776d4dc75f-9lsl2 3m 70Mi
argocd argo-cd-redis-ha-haproxy-776d4dc75f-9vxgr 3m 69Mi
argocd argo-cd-redis-ha-haproxy-776d4dc75f-p5p4w 3m 70Mi
argocd argo-cd-redis-ha-server-0 12m 72Mi
argocd argo-cd-redis-ha-server-1 10m 74Mi
argocd argo-cd-redis-ha-server-2 13m 71Mi
cert-manager cert-manager-5d79dfd764-9b5l4 1m 37Mi
cert-manager cert-manager-cainjector-b74585f7b-j5lnw 1m 66Mi
cert-manager cert-manager-csi-driver-58vhx 1m 23Mi
cert-manager cert-manager-csi-driver-5bzh9 1m 20Mi
cert-manager cert-manager-csi-driver-mz6bz 1m 19Mi
cert-manager cert-manager-csi-driver-pz6w2 1m 20Mi
cert-manager cert-manager-webhook-7cd9ffcd55-gqf8h 1m 13Mi
default dnsutils 0m 0Mi
default hello-mtls-5bf7bc6c96-2mk5n 1m 12Mi
default rancher-demo-5bff6f6db5-h5wrr 0m 2Mi
default rancher-demo-5bff6f6db5-pqcc2 1m 2Mi
default rancher-demo-5bff6f6db5-rpzfh 1m 2Mi
elasticsearch elasticsearch-master-0 4m 2489Mi
elasticsearch elasticsearch-master-1 5m 2498Mi
elasticsearch elasticsearch-master-2 5m 2517Mi
guestbook guestbook-ui-76f97c94c-c2x95 1m 14Mi
ingress-nginx ingress-nginx-controller-fbdb57f95-4hw7d 2m 86Mi
kube-system calico-kube-controllers-56fd7b8dc-gbrm7 3m 24Mi
kube-system calico-node-4kx85 17m 78Mi
kube-system calico-node-f6jn9 21m 104Mi
kube-system calico-node-l2gfm 19m 79Mi
kube-system calico-node-qgz5s 16m 80Mi
kube-system calico-node-t8n5t 17m 81Mi
kube-system calico-node-wpsk8 21m 82Mi
kube-system calico-node-wt2gb 19m 78Mi
kube-system calico-typha-56dbf6c8db-x87hq 4m 27Mi
kube-system coredns-74d6c5659f-5l7kv 1m 18Mi
kube-system coredns-74d6c5659f-zkhn6 1m 18Mi
kube-system dns-autoscaler-59b8867c86-mfn5h 1m 7Mi
kube-system kube-apiserver-k1-cp1 43m 999Mi
kube-system kube-apiserver-k1-cp2 40m 543Mi
kube-system kube-apiserver-k1-cp3 55m 882Mi
kube-system kube-controller-manager-k1-cp1 1m 21Mi
kube-system kube-controller-manager-k1-cp2 1m 22Mi
kube-system kube-controller-manager-k1-cp3 12m 109Mi
kube-system kube-multus-ds-amd64-4ps4b 0m 2Mi
kube-system kube-multus-ds-amd64-867nn 0m 2Mi
kube-system kube-multus-ds-amd64-cwwq7 0m 2Mi
kube-system kube-multus-ds-amd64-dnnmb 0m 2Mi
kube-system kube-multus-ds-amd64-mb9nf 0m 3Mi
kube-system kube-multus-ds-amd64-p5lxj 0m 19Mi
kube-system kube-multus-ds-amd64-qdvnf 0m 2Mi
kube-system kube-proxy-6kzd2 5m 20Mi
kube-system kube-proxy-7fncq 4m 20Mi
kube-system kube-proxy-jx8mc 3m 20Mi
kube-system kube-proxy-p8djt 8m 18Mi
kube-system kube-proxy-rg987 3m 24Mi
kube-system kube-proxy-s7sx5 3m 19Mi
kube-system kube-proxy-tmrtr 3m 20Mi
kube-system kube-scheduler-k1-cp1 3m 36Mi
kube-system kube-scheduler-k1-cp2 2m 28Mi
kube-system kube-scheduler-k1-cp3 2m 34Mi
kube-system kubernetes-dashboard-648989c4b4-bg5qp 1m 9Mi
kube-system kubernetes-metrics-scraper-84bbbc8b75-x5z7h 1m 13Mi
kube-system metrics-server-68b8967c9f-9flr4 4m 24Mi
kube-system nfs-client-nfs-subdir-external-provisioner-b89bbfb5b-sg8x6 1m 8Mi
kube-system nodelocaldns-28hxl 3m 15Mi
kube-system nodelocaldns-7n7wb 3m 11Mi
kube-system nodelocaldns-fkjds 2m 11Mi
kube-system nodelocaldns-hc889 2m 11Mi
kube-system nodelocaldns-lvjxw 2m 12Mi
kube-system nodelocaldns-rgvns 3m 12Mi
kube-system nodelocaldns-v64cm 1m 11Mi
kubernetes-dashboard kubernetes-dashboard-5d9456785d-hgvt2 1m 15Mi
kubescape kubescape-6f9fc56656-t4zrd 1m 19Mi
metallb-system controller-678c55bc7b-hv2jl 1m 14Mi
monitoring alertmanager-kube-prometheus-stack-alertmanager-0 1m 21Mi
monitoring kube-prometheus-stack-grafana-fc96597df-khkp8 1m 213Mi
monitoring kube-prometheus-stack-kube-state-metrics-75b97d7857-7bftp 2m 17Mi
monitoring kube-prometheus-stack-operator-7fbd8f97dd-45lt2 1m 25Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-2vqz7 1m 9Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-62d8v 1m 11Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-cbwcn 1m 10Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-nmpn2 1m 9Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-nqkcn 2m 10Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-tl6jc 1m 9Mi
monitoring kube-prometheus-stack-prometheus-node-exporter-vpllt 1m 9Mi
monitoring prometheus-kube-prometheus-stack-prometheus-0 70m 799Mi
pgadmin pgadmin-pgadmin4-5ff4d4d559-fkrtp 1m 146Mi
portal-app portal-app-angular-nginx-5cc94f8b8f-4qfjk 1m 3Mi
portal-app portal-app-authserver-aspnet-core-58b748dbbc-q8pph 2m 322Mi
portal-app portal-app-hostapi-aspnet-core-696b675cfc-kcbgt 3m 304Mi
postgresql-ha postgresql-ha-pgpool-74b6d8bfbb-sgj4k 7m 250Mi
postgresql-ha postgresql-ha-postgresql-0 12m 121Mi
postgresql-ha postgresql-ha-postgresql-1 10m 100Mi
postgresql-ha postgresql-ha-postgresql-2 9m 99Mi
redis-cluster redis-cluster-0 5m 7Mi
redis-cluster redis-cluster-1 4m 10Mi
redis-cluster redis-cluster-2 5m 7Mi
redis-cluster redis-cluster-3 5m 7Mi
redis-cluster redis-cluster-4 4m 7Mi
redis-cluster redis-cluster-5 5m 9Mi
sealed-secrets sealed-secrets-56fc944cd7-ws5ns 1m 11Mi
step autocert2-76f5fcbfc8-wcldh 1m 11Mi
velero velero-8475886c8c-p6mq4 1m 55Mi
#Get CPU and Memory current usage of all Nodes
kubectl top nodes
#Get CPU and Memory Requests and Limits for Nodes
kubectl describe nodes
# or, filtered to the relevant lines:
kubectl describe nodes | grep 'Name:\| cpu\| memory'
#Get CPU and Memory current usage of pods in all Namespaces
kubectl top pods --all-namespaces
#Get CPU and Memory current usage of containers running in pods in all Namespaces
kubectl top pods --all-namespaces --containers=true
#Sort (descending order) current CPU usage of pods in all Namespaces
kubectl top pods --all-namespaces | sort --key 2 -b | awk 'NR<2{print $0;next}{print $0| "sort --key 3 --numeric -b --reverse"}'
#Sort (descending order) current Memory usage of pods in all Namespaces
kubectl top pods --all-namespaces | sort --key 2 -b | awk 'NR<2{print $0;next}{print $0| "sort --key 4 --numeric -b --reverse"}'
#Sort (descending order) current CPU usage of containers in pods in all Namespaces
kubectl top pods --all-namespaces --containers=true | sort --key 4 -b | awk 'NR<2{print $0;next}{print $0| "sort --key 4 --numeric -b --reverse"}'
#Sort (descending order) current Memory usage of containers in pods in all Namespaces
kubectl top pods --all-namespaces --containers=true | sort --key 5 -b | awk 'NR<2{print $0;next}{print $0| "sort --key 5 --numeric -b --reverse"}'
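Newer kubectl versions also support sorting `top` output directly with `--sort-by`, which avoids the external sort/awk pipelines (and the pitfall that a plain numeric sort ignores unit suffixes such as Mi vs Gi):

```shell
# sort pods by current memory usage, descending
kubectl top pods --all-namespaces --sort-by=memory

# sort pods by current CPU usage, descending
kubectl top pods --all-namespaces --sort-by=cpu
```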