Kubectl

  • Generate a Pod manifest (remove --dry-run if you want to apply it)
$ kubectl run example --image=ubuntu --dry-run=client -o yaml --command -- sleep 10
  • Run a debug container in a pod
$ kubectl debug -it my-test-data-connector-747964d7c5-6nrck -n trendminer --image ubuntu
  • Query the api server from within a Pod
# Make sure the serviceaccount used by the Pod has permissions
curl https://kubernetes.default.svc/openapi/v2 --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt --header "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
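The curl call above can be wrapped in a small helper. A minimal sketch: the service-account path is parameterized (defaulting to the standard in-pod mount), and the function only assembles the command string, so it runs anywhere; execute its output inside the pod.

```sh
# Build the curl invocation for querying the API server from inside a pod.
# The default path is the standard service-account mount.
api_query() {
  local endpoint="$1"
  local sa="${2:-/run/secrets/kubernetes.io/serviceaccount}"
  echo "curl https://kubernetes.default.svc${endpoint}" \
       "--cacert ${sa}/ca.crt" \
       "--header \"Authorization: Bearer \$(cat ${sa}/token)\""
}
api_query /openapi/v2
```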
  • Use custom columns to get the names of the pods
$ kubectl get pods -l app=my-config-hub -o custom-columns=":metadata.name" --no-headers
my-config-hub-5b85796fb7-qprs2
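
The bare name list is handy as input for a loop. A sketch, where get_pods stands in for the kubectl command above so it runs without a cluster:

```sh
# get_pods stands in for:
#   kubectl get pods -l app=my-config-hub -o custom-columns=":metadata.name" --no-headers
get_pods() {
  printf '%s\n' my-config-hub-5b85796fb7-qprs2
}

for pod in $(get_pods); do
  echo "inspecting $pod"   # e.g. kubectl describe pod "$pod"
done
```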
 
  • Spin up a debug container
$ kubectl run debug -it --rm --image=ubuntu -- bash
  • See the memory limit enforced in the container
# If you specify a memory limit, you can see it reflected via cgroups in the container.
# In this case the limit was set to '4861Mi', which corresponds to '5097127936' bytes.
$ kubectl exec -n trendminer my-zeppelin-648745f8c-vft6s -ti -- /bin/sh
 
# cgroups v1 (on cgroups v2 the file is /sys/fs/cgroup/memory.max)
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
5097127936
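
The byte value maps back to the Mi limit with plain shell arithmetic (1Mi = 1024 * 1024 bytes):

```sh
# 4861Mi expressed in bytes, matching memory.limit_in_bytes above
echo $((4861 * 1024 * 1024))   # 5097127936
```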
  • Port forward local port 8080 to port 8080 of a pod of the ‘test’ deployment
$ kubectl port-forward deployment/test 8080

Merge Multiple Kubeconfig Files

  • Copy the target kubeconfig files to your machine
  • Install the konfig plugin for kubectl via krew
$ kubectl krew install konfig
  • Import the target kubeconfig files
$ kubectl konfig import --save /tmp/config1
$ kubectl konfig import --save /tmp/config2
  • List the contexts
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO                                   NAMESPACE
*         cluster1   cluster1   clusterUser_rg-aks-internal_aks-internal
          cluster2   cluster2   default
 
  • Switch to a different context
$ kubectl config use-context cluster2
Switched to context "cluster2".
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO                                   NAMESPACE
          cluster1   cluster1   clusterUser_rg-aks-internal_aks-internal
*         cluster2   cluster2   default
 

Directly Query Kubelet Metrics

You can directly query the metrics of a kubelet using the following endpoints. This can come in handy if you, for instance, want to know the current imageFs usage.

$ kubectl get --raw "/api/v1/nodes/my-test-turing31/proxy/stats/summary"
$ kubectl get --raw "/api/v1/nodes/my-test-turing31/proxy/metrics/resource"
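
To pull a single number such as the imageFs usage out of the summary, filter the JSON. A sketch below works on a trimmed sample of the payload (the real response carries the same .node.runtime.imageFs.usedBytes field, but is much larger); with jq installed, `jq .node.runtime.imageFs.usedBytes` is the cleaner option.

```sh
# Trimmed sample of the /stats/summary payload; in practice pipe the output of
#   kubectl get --raw "/api/v1/nodes/<node>/proxy/stats/summary"
# into the same filter.
summary='{"node":{"nodeName":"my-test-turing31","runtime":{"imageFs":{"usedBytes":9663676416}}}}'
echo "$summary" | grep -o '"imageFs":{"usedBytes":[0-9]*' | grep -o '[0-9]*$'
```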

Use Custom Columns via Kubectl

$ kubectl get pods -A -o custom-columns=":metadata.name,:metadata.namespace,:spec.containers[0].image"
 
my-postgres-configuration-qgcgst-2j5xd           trendminer      docker.trendminer.net:5000/my-postgres-configuration:1.0.2203211626
my-assets-6fb5c89b88-8f9qm                       trendminer      docker.trendminer.net:5000/my-assets:2.8.4
my-keycloak-674dd87744-4k5vg                     trendminer      docker.trendminer.net:5000/my-keycloak:1.0.2203181320-3671586
my-compute-6d89c67689-2x6wj                      trendminer      docker.trendminer.net:5000/my-compute:1.2.2203221403-426c403
ingress-nginx-controller-6965799df6-tfj7d        ingress-nginx   k8s.gcr.io/ingress-nginx/controller:v1.1.1
my-config-hub-7b89bb449b-88r69                   trendminer      docker.trendminer.net:5000/my-config-hub:1.0.2203211437-f9c6c68
my-hps-8655f6c9c4-p79w2                          trendminer      docker.trendminer.net:5000/my-hps:2.3.2203211840-10f1366
my-datasource-587974797c-vvt24                   trendminer      docker.trendminer.net:5000/my-datasource:1.0.2203231409-d1b1a9a
coredns-5789895cd-2cnmf                          kube-system     rancher/mirrored-coredns-coredns:1.8.6

Delete All Pods in Failed State

$ kubectl delete pods -A --field-selector status.phase=Failed
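
status.phase takes a small fixed set of values (Pending, Running, Succeeded, Failed, Unknown), so the same field selector works for any of them. A small sketch of the filtering logic, using a simulated pod listing instead of a live cluster:

```sh
# pods simulates NAME/PHASE columns of a pod listing; a real run would use
# `kubectl get pods -A` output instead.
pods='podA Running
podB Failed
podC Succeeded
podD Failed'

filter_phase() {
  echo "$pods" | awk -v p="$1" '$2 == p { print $1 }'
}
filter_phase Failed
```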

Directly Communicate with Kubernetes API via Curl

  • You can send a request directly to the Kubernetes API server. You do, however, need to specify certificates.
    • client-admin.crt: the client certificate proving that you are making the request as the ‘admin’ user
    • client-admin.key: the private key matching that certificate
    • server-ca.crt: the certificate authority used to sign all certificates in the cluster
    • All of these are generated automatically by k3s in /mnt/data/kubernetes/k3s/server/tls/
$ curl -v --cert /tmp/client-admin.crt --key /tmp/client-admin.key --cacert /tmp/server-ca.crt https://10.92.94.77:6443/version
{
  "major": "1",
  "minor": "23",
  "gitVersion": "v1.23.4+k3s1",
  "gitCommit": "43b1cb48200d8f6af85c16ed944d68fcc96b6506",
  "gitTreeState": "clean",
  "buildDate": "2022-02-24T22:38:17Z",
  "goVersion": "go1.17.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Delete All Pods in All Namespaces

$ kubectl delete --all pods --all-namespaces
 

Generate YAML Manifests via CLI

You can generate any manifest file using the kubectl create --dry-run=client command. Use the -h argument if you are not sure what the options are.

$ kubectl create --dry-run=client role -o yaml x --verb=get,list,watch --resource=job
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: x
rules:
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch

Quickly search API

You can easily and interactively search the available fields in the API using kubectl explain and kubectl explore.

$ kubectl explain deployment.spec.template
 
GROUP:      apps
KIND:       Deployment
VERSION:    v1
 
FIELD: template <PodTemplateSpec>
 
DESCRIPTION:
    Template describes the pods that will be created. The only allowed
    template.spec.restartPolicy value is "Always".
    PodTemplateSpec describes the data a pod should have when created from a
    template
 
FIELDS:
  metadata <ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
 
  spec <PodSpec>
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

The explore plugin can be used to interactively search available fields. Note, however, that this is a plugin and needs to be installed via krew.

$ kubectl explore deployment.spec

K9s

Skins

You can find available skins in the k9s repository. These have to be placed in ~/.config/k9s/skins/.

$ ls -l ~/.config/k9s/skins/
total 8
-rw-r--r--. 1 vvanouytsel vvanouytsel 2519 Jan 24  2025 gruvbox.yaml
-rw-r--r--. 1 vvanouytsel vvanouytsel 2870 Feb 11 10:08 nord.yaml

You can reference these skins in your config file via the ui.skin key.

$ cat ~/.config/k9s/config.yaml
k9s:
  liveViewAutoRefresh: false
  screenDumpDir: /home/vvanouytsel/.local/state/k9s/screen-dumps
  refreshRate: 2
  maxConnRetry: 5
  readOnly: false
  noExitOnCtrlC: false
  ui:
    skin: gruvbox
    enableMouse: false
    headless: false
    logoless: false
    crumbsless: false
    reactive: false
    noIcons: false
    defaultsToFullScreen: false
  skipLatestRevCheck: false
  disablePodCounting: false
  shellPod:
    image: busybox:1.35.0
    namespace: default
    limits:
      cpu: 100m
      memory: 100Mi
  imageScans:
    enable: false
    exclusions:
      namespaces: []
      labels: {}
  logger:
    tail: 100
    buffer: 5000
    sinceSeconds: -1
    textWrap: false
    showTime: false
  thresholds:
    cpu:
      critical: 90
      warn: 70
    memory:
      critical: 90
      warn: 70

If you want to use different skins for different clusters (for example, to make production clusters instantly recognizable), you can do so by overwriting the skin in the config file of that specific cluster.

With this setup, all my clusters use gruvbox and my aks-development cluster uses the nord skin.

$ cat ~/.local/share/k9s/clusters/aks-development/aks-development/config.yaml
k9s:
  cluster: aks-development
  skin: nord
  namespace:
    active: all
    lockFavorites: false
    favorites:
    - all
  view:
    active: pods
  featureGates:
    nodeShell: false
  portForwardAddress: localhost

Plugins

You can use plugins to add custom functionality. Copy your plugins to ~/.config/k9s/plugin.yml (recent k9s releases read plugins.yaml instead).

  • Add debug containers via ‘shift-d’
plugins:
  debug:
    shortCut: Shift-D
    confirm: false
    description: Debug
    scopes:
    - containers
    command: kubectl
    background: false
    args:
    - debug
    - -it
    - -n
    - $NAMESPACE
    - $POD
    - --target
    - $NAME
    - --image
    - busybox:1.35.0
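
When the shortcut fires, k9s substitutes the $NAMESPACE, $POD, and $NAME variables of the selected container and runs the command. A sketch of the resulting invocation, with made-up placeholder values standing in for what k9s injects at runtime:

```sh
# Made-up values standing in for what k9s injects for the selected container
NAMESPACE=trendminer
POD=my-assets-6fb5c89b88-8f9qm
NAME=my-assets

# The plugin above expands to roughly this command (echoed here, not executed):
echo kubectl debug -it -n "$NAMESPACE" "$POD" --target "$NAME" --image busybox:1.35.0
```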

Kubectl Plugins

Using tcpdump in a Pod

If you need to troubleshoot network connectivity in a pod, you can use the sniff kubectl plugin.

Install the sniff plugin.

$ kubectl krew install sniff

Run sniff to capture a dump in capture.pcap.

$ kubectl sniff $POD -n $NAMESPACE -o capture.pcap -p

You can now open the capture.pcap file in Wireshark to have a look at the dump.

Dealing with PodSecurity

Your namespace might enforce a Pod Security Standard. In that case the sniff command above can fail, because privileged containers are blocked by the PodSecurity profile.

You can check which PodSecurity profile is used by looking at the labels of the namespace.

$ kubectl get namespace example --show-labels
NAME      STATUS   AGE    LABELS
example   Active   668d   hello=world,pod-security.kubernetes.io/audit=restricted,pod-security.kubernetes.io/enforce=baseline

In this example you can see that the baseline profile is used, which blocks privileged containers.

If you have the cluster permissions, you can temporarily change this by enabling a different profile. In this case the privileged profile is used, which basically allows everything.

$ kubectl label namespace example pod-security.kubernetes.io/enforce=privileged --overwrite

Now you can run privileged containers, and thus the sniff command from above. Make sure to re-enable the previous profile when you are done!

$ kubectl label namespace example pod-security.kubernetes.io/enforce=baseline --overwrite
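
The switch-run-restore dance can be wrapped in a small helper. A sketch: the kubectl function below is a stub that only echoes, so it runs without a cluster (delete it for real use), and the old profile is passed in explicitly rather than read back from the namespace labels.

```sh
# Stub so the sketch runs without a cluster; delete this function for real use.
kubectl() { echo "kubectl $*"; }

# Usage: with_privileged <namespace> <old-profile> <command...>
with_privileged() {
  local ns="$1" old="$2"; shift 2
  kubectl label namespace "$ns" pod-security.kubernetes.io/enforce=privileged --overwrite
  "$@"
  kubectl label namespace "$ns" "pod-security.kubernetes.io/enforce=$old" --overwrite
}

with_privileged example baseline echo "run kubectl sniff here"
```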