Kubectl Commands Cheat Sheet: A Complete Practical Guide for 2025

kubectl is the official command-line tool for interacting with Kubernetes clusters, enabling DevOps engineers to deploy, manage, monitor, and troubleshoot containerized applications across distributed environments.

Whether you’re preparing for the CKA exam, managing production workloads, or automating CI/CD pipelines, mastering kubectl is non-negotiable for any serious DevOps professional. This comprehensive guide covers everything from basic cluster interactions to advanced debugging techniques that will transform your Kubernetes workflow.


Getting Started with Kubectl

What is Kubectl?

Kubectl (commonly pronounced “cube-C-T-L” or “cube-control”) is the official command-line interface for Kubernetes. It communicates with the Kubernetes API server to perform operations on your cluster, from simple resource queries to complex application deployments.

Installation and Setup

Before diving into kubectl commands, ensure you have kubectl installed and configured:

# Install kubectl on Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify installation
kubectl version --client

Expected Output:

Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Basic Cluster Operations

Checking Cluster Status

Understanding your cluster’s health is the first step in any kubectl workflow:

# Check kubectl and cluster version
kubectl version

Expected Output:

Client Version: v1.29.1
Server Version: v1.29.0

# Display cluster information
kubectl cluster-info

Expected Output:

Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# Check cluster nodes
kubectl get nodes

Expected Output:

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5d    v1.29.0

Configuration Management

Managing kubectl configuration is crucial for multi-cluster environments:

# View current configuration
kubectl config view

# List available contexts
kubectl config get-contexts

# Switch context
kubectl config use-context production-cluster

# Set default namespace
kubectl config set-context --current --namespace=development

Working with Pods

Pods are the fundamental execution units in Kubernetes. Mastering pod management with kubectl is essential for day-to-day operations.

Viewing Pods

# List all pods in current namespace
kubectl get pods

# List pods with more details
kubectl get pods -o wide

# List pods in all namespaces
kubectl get pods --all-namespaces

# List pods with specific labels
kubectl get pods -l app=nginx

Expected Output:

NAME                     READY   STATUS    RESTARTS   AGE
nginx-deployment-abc123  1/1     Running   0          2m
web-server-def456        1/1     Running   0          5m

Describing Pods

The kubectl describe command provides detailed information about pod configuration, events, and status:

# Describe a specific pod
kubectl describe pod nginx-deployment-abc123

Expected Output:

Name:             nginx-deployment-abc123
Namespace:        default
Priority:         0
Service Account:  default
Node:             docker-desktop/192.168.65.4
Start Time:       Sat, 27 Sep 2025 10:30:00 +0000
Labels:           app=nginx
                  pod-template-hash=abc123
Status:           Running
IP:               10.1.0.15
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2m    default-scheduler  Successfully assigned default/nginx-deployment-abc123 to docker-desktop
  Normal  Pulled     2m    kubelet            Container image "nginx:latest" already present on machine
  Normal  Created    2m    kubelet            Created container nginx
  Normal  Started    2m    kubelet            Started container nginx

Accessing Pod Logs

Effective log management is crucial for troubleshooting applications:

# View pod logs
kubectl logs nginx-deployment-abc123

# Follow logs in real-time
kubectl logs -f nginx-deployment-abc123

# View logs from previous container instance
kubectl logs nginx-deployment-abc123 --previous

# View logs from specific container in multi-container pod
kubectl logs nginx-deployment-abc123 -c nginx-container

# View logs with timestamps
kubectl logs nginx-deployment-abc123 --timestamps

# Tail last 50 lines
kubectl logs nginx-deployment-abc123 --tail=50
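When logs are noisy, it helps to pipe `kubectl logs --timestamps` output through a small filter. A minimal sketch, assuming the standard RFC 3339 timestamp prefix; the function name and sample log lines are illustrative:

```shell
# filter_errors: keep only timestamped ERROR lines from `kubectl logs --timestamps` output
filter_errors() {
  grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[^ ]+ .*ERROR' "$@"
}

# Sample input standing in for:
#   kubectl logs nginx-deployment-abc123 --timestamps | filter_errors
filter_errors <<'EOF'
2025-09-27T10:30:01Z INFO  server started
2025-09-27T10:30:05Z ERROR upstream timeout
EOF
# prints the ERROR line only
```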

Executing Commands in Pods

The kubectl exec command allows you to run commands inside containers:

# Execute interactive shell
kubectl exec -it nginx-deployment-abc123 -- /bin/bash

# Execute single command
kubectl exec nginx-deployment-abc123 -- ls -la /app

# Execute command in specific container
kubectl exec -it nginx-deployment-abc123 -c nginx-container -- /bin/bash

# Execute command without TTY allocation
kubectl exec nginx-deployment-abc123 -- cat /etc/hostname

Pro Tip: Always use -it flags for interactive sessions (-i for interactive, -t for TTY allocation).

Managing Services and Deployments

Services and Deployments form the backbone of Kubernetes applications. Understanding how to manage them with kubectl commands is essential for production operations.

Working with Services

Services provide stable network endpoints for accessing pods:

# List all services
kubectl get services
kubectl get svc  # shorthand

# Describe a service
kubectl describe svc nginx-service

# Get service endpoints
kubectl get endpoints nginx-service

# Expose a deployment as a service
kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer

Expected Output:

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.96.123.45   localhost     80:30080/TCP   5m

Managing Deployments

Deployments manage replica sets and provide declarative updates for applications:

# List deployments
kubectl get deployments
kubectl get deploy  # shorthand

# Describe deployment
kubectl describe deployment nginx-deployment

# Check deployment status
kubectl rollout status deployment/nginx-deployment

# View deployment history
kubectl rollout history deployment/nginx-deployment

Expected Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           10m

Working with StatefulSets and DaemonSets

StatefulSets and DaemonSets require special handling compared to regular Deployments:

# StatefulSets management
kubectl get sts
kubectl get statefulsets  # full name

# Describe StatefulSet
kubectl describe sts mysql

# Check StatefulSet rollout status
kubectl rollout status statefulset/mysql --timeout=300s

# Scale StatefulSet (scaling down deletes highest-ordinal pods first)
kubectl scale statefulset mysql --replicas=5

# Rolling restart StatefulSet
kubectl rollout restart statefulset/mysql

# Delete StatefulSet (keeps PVCs by default)
kubectl delete sts mysql --cascade=orphan  # leaves pods running
kubectl delete sts mysql --cascade=foreground  # waits for pod deletion

# DaemonSets management  
kubectl get ds -n kube-system
kubectl get daemonsets --all-namespaces

# Restart DaemonSet (useful for config updates)
kubectl rollout restart ds/ingress-nginx-controller -n ingress-nginx

# Update DaemonSet image
kubectl set image ds/log-agent agent=myrepo/agent:v2.1 -n observability

# Check DaemonSet status
kubectl rollout status ds/fluentd -n kube-system

# Get DaemonSet pods on specific node
kubectl get pods --field-selector=spec.nodeName=worker-1 -n kube-system

Expected Output:

NAME    READY   AGE
mysql   3/3     5m

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ingress-nginx-controller   3         3         3       3            3           <none>          10m

Service Types and Network Discovery

Understanding service types and endpoint discovery is crucial for networking:

# List services with types
kubectl get svc -o wide

# Create different service types
kubectl create service clusterip backend --tcp=80:8080        # Internal only
kubectl create service nodeport frontend --tcp=80:8080       # External access via node ports  
kubectl expose deployment api --type=LoadBalancer --port=80  # Cloud load balancer

# Service discovery and endpoints
kubectl get endpoints nginx-service
kubectl describe endpoints nginx-service

# Modern endpoint discovery (Kubernetes 1.19+)
kubectl get endpointslices
kubectl get endpointslice -l kubernetes.io/service-name=nginx-service

# Service DNS testing
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup nginx-service.default.svc.cluster.local

Service Types Comparison:

Type           Use Case                   Access Method       External Access
ClusterIP      Internal services          Service name/IP     No
NodePort       Dev/testing                Node IP:NodePort    Yes (limited)
LoadBalancer   Production external        Cloud LB IP         Yes (recommended)
ExternalName   External service mapping   DNS CNAME           N/A

Advanced Configuration Management

# Server-side apply (modern best practice)
kubectl apply -f deployment.yaml --server-side --field-manager=ci-pipeline

# Legacy last-applied annotation (superseded by server-side apply)
kubectl apply view-last-applied deployment/nginx  # Legacy - use server-side apply instead
kubectl apply set-last-applied -f deployment.yaml # Legacy - use server-side apply instead

# Prune with field manager scope (only affects managed objects)  
kubectl apply -f ./manifests/ --prune -l app=myapp --field-manager=gitops

# Check the effective identity under impersonation
# (kubectl auth whoami; requires the SelfSubjectReview API, GA in Kubernetes 1.28)
kubectl auth whoami --as=alice --as-group=developers

# Command execution with explicit options
kubectl run test --image=busybox:1.36 --restart=Never --command -- echo "hello world"

⚠️ Legacy vs Modern Practices:

  • Legacy: kubectl apply view-last-applied / set-last-applied
  • Modern: Use kubectl apply --server-side --field-manager=<name> for better conflict resolution and field ownership tracking

Creating and Applying Resources

Understanding the difference between kubectl create and kubectl apply is fundamental for resource management.

Using kubectl create

The kubectl create command creates resources from command line or files:

# Create deployment from command line
kubectl create deployment nginx --image=nginx:latest

# Create service from command line
kubectl create service nodeport nginx --tcp=80:80

# Create namespace
kubectl create namespace development

# Create secret
kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password=secret123

# Create configmap
kubectl create configmap myconfig --from-literal=database_url=mongodb://localhost:27017

# Create from file
kubectl create -f deployment.yaml

# Create from multiple files
kubectl create -f ./configs/

Using kubectl apply

The kubectl apply command is the preferred method for declarative resource management:

# Apply single file
kubectl apply -f deployment.yaml

# Apply all files in directory
kubectl apply -f ./manifests/

# Apply with recursive directory traversal
kubectl apply -R -f ./configurations/

# Apply from URL
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/staging/pod

# Dry run to validate
kubectl apply --dry-run=client -f deployment.yaml

# Server-side dry run
kubectl apply --dry-run=server -f deployment.yaml

Key Differences: kubectl create vs kubectl apply

Feature                  kubectl create                              kubectl apply
Use Case                 One-time resource creation                  Declarative configuration management
Idempotency              Not idempotent (fails if resource exists)   Idempotent (updates existing resources)
Best Practice            Development and testing                     Production and GitOps workflows
Configuration Tracking   No tracking                                 Tracks last-applied configuration
Updates                  Cannot update existing resources            Can update existing resources
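The idempotency difference matters in scripts. Below is a hedged sketch that falls back to a declarative apply when an imperative create reports the resource already exists; KUBECTL is overridable so the flow can be exercised without a cluster, and the deployment name, image, and manifest file are assumptions made for illustration:

```shell
KUBECTL="${KUBECTL:-kubectl}"  # overridable so the flow can be tested offline

# ensure_web: try a one-shot create, fall back to declarative apply if it exists
ensure_web() {
  if "$KUBECTL" create deployment web --image=nginx:1.21 2>/dev/null; then
    echo "created"
  else
    echo "exists; applying instead"
    "$KUBECTL" apply -f web-deployment.yaml
  fi
}
```

In GitOps pipelines you would normally skip the create path entirely and rely on `kubectl apply` alone.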

Sample Deployment Manifest

Here’s a comprehensive deployment example you can use with kubectl apply:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Apply this configuration:

kubectl apply -f nginx-deployment.yaml

Editing and Deleting Resources

Editing Resources

Kubectl provides multiple ways to modify existing resources:

# Edit deployment in default editor
kubectl edit deployment nginx-deployment

# Edit with specific editor
KUBE_EDITOR="nano" kubectl edit deployment nginx-deployment

# Edit service
kubectl edit svc nginx-service

# Patch deployment with strategic merge
kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":5}}'

# Patch with JSON patch
kubectl patch pod nginx-deployment-abc123 --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"nginx:1.22"}]'

Deleting Resources

Safe resource deletion is crucial in production environments:

# Delete specific pod
kubectl delete pod nginx-deployment-abc123

# Delete deployment
kubectl delete deployment nginx-deployment

# Delete service
kubectl delete svc nginx-service

# Delete multiple resources
kubectl delete pod,svc nginx-pod nginx-service

# Delete by label
kubectl delete pods -l app=nginx

# Delete all pods in namespace
kubectl delete pods --all

# Delete from file
kubectl delete -f deployment.yaml

# Graceful deletion with custom timeout
kubectl delete pod nginx-deployment-abc123 --grace-period=60

# Force delete (use with caution)
kubectl delete pod nginx-deployment-abc123 --force --grace-period=0

# Delete namespace (deletes all resources within)
kubectl delete namespace development

Warning: Always double-check your delete commands, especially in production environments. Consider running with --dry-run=client first.

Scaling and Rollout Management

Scaling Applications

The kubectl scale command enables horizontal scaling of applications:

# Scale deployment to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5

# Scale replica set
kubectl scale rs nginx-deployment-abc123 --replicas=3

# Scale multiple deployments
kubectl scale deployment nginx-deployment web-deployment --replicas=2

# Conditional scaling (only if current replicas = 3)
kubectl scale deployment nginx-deployment --current-replicas=3 --replicas=5

# Auto-scaling with HPA (Horizontal Pod Autoscaler)
kubectl autoscale deployment nginx-deployment --cpu-percent=80 --min=1 --max=10

Expected Output:

deployment.apps/nginx-deployment scaled

Rollout Management

Managing application updates safely is critical for production stability:

# Update deployment image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# View rollout history
kubectl rollout history deployment/nginx-deployment

# View specific revision details
kubectl rollout history deployment/nginx-deployment --revision=2

# Rollback to previous version
kubectl rollout undo deployment/nginx-deployment

# Rollback to specific revision
kubectl rollout undo deployment/nginx-deployment --to-revision=1

# Pause rollout
kubectl rollout pause deployment/nginx-deployment

# Resume rollout
kubectl rollout resume deployment/nginx-deployment

# Restart deployment (recreate pods)
kubectl rollout restart deployment/nginx-deployment

Expected Output:

deployment.apps/nginx-deployment restarted
Waiting for deployment "nginx-deployment" rollout to finish: 1 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out
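These commands combine naturally into a deploy-and-verify step for automation. A minimal sketch, assuming a rollback-on-failure policy; KUBECTL is overridable so the logic can be exercised without a cluster, and the deployment, container, and image names are illustrative:

```shell
KUBECTL="${KUBECTL:-kubectl}"

# deploy_with_rollback: update the image, wait for the rollout, undo on failure
deploy_with_rollback() {
  local deploy="$1" container="$2" image="$3"
  "$KUBECTL" set image "deployment/$deploy" "$container=$image" || return 1
  if ! "$KUBECTL" rollout status "deployment/$deploy" --timeout=120s; then
    echo "rollout of $deploy failed, rolling back"
    "$KUBECTL" rollout undo "deployment/$deploy"
    return 1
  fi
  echo "rollout of $deploy succeeded"
}
```

Usage: `deploy_with_rollback nginx-deployment nginx nginx:1.22`.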

Rolling Update Strategy

Configure rolling updates in your deployment manifest:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

Namespaces and Context Management

Namespaces provide logical separation of resources within a cluster, essential for multi-tenant environments.

Working with Namespaces

# List all namespaces
kubectl get namespaces
kubectl get ns  # shorthand

# Create namespace
kubectl create namespace production

# Describe namespace
kubectl describe namespace production

# Delete namespace
kubectl delete namespace production

# Get resources in specific namespace
kubectl get pods -n kube-system

# Get resources across all namespaces
kubectl get pods --all-namespaces

# Set default namespace for current context
kubectl config set-context --current --namespace=production

Context Management

Contexts combine cluster, user, and namespace information:

# View current context
kubectl config current-context

# List all contexts
kubectl config get-contexts

# Switch context
kubectl config use-context staging-cluster

# Create new context
kubectl config set-context development --cluster=minikube --user=developer --namespace=dev

# Rename context
kubectl config rename-context old-context new-context

# Delete context
kubectl config delete-context unused-context

Multi-Context Workflow Example

# Production operations
kubectl config use-context production
kubectl get pods -n app-prod

# Switch to staging
kubectl config use-context staging
kubectl apply -f new-feature.yaml

# Switch back to development
kubectl config use-context development
kubectl logs -f test-pod
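When you hop between contexts like this, it is easy to run a destructive command against the wrong cluster. A small guard sketch; the naming convention (production contexts contain "prod") is an assumption:

```shell
# guard_context: block an action when the given context looks like production
guard_context() {
  local ctx="$1"
  case "$ctx" in
    *prod*) echo "blocked: refusing to run against $ctx"; return 1 ;;
    *)      echo "allowed: $ctx"; return 0 ;;
  esac
}

# Normally you would pass "$(kubectl config current-context)" here
guard_context "development"
guard_context "production" || echo "aborting destructive command"
```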

Debugging and Troubleshooting

Effective troubleshooting is crucial for maintaining healthy Kubernetes clusters.

Resource Monitoring

# Check node resource usage
kubectl top nodes

# Check pod resource usage
kubectl top pods

# Check pod resource usage in specific namespace
kubectl top pods -n kube-system

# Sort pods by CPU usage
kubectl top pods --sort-by=cpu

# Sort pods by memory usage
kubectl top pods --sort-by=memory

# Check resource usage with containers
kubectl top pods --containers=true

Expected Output:

NAME             CPU(cores)   MEMORY(bytes)   
nginx-pod        2m           8Mi             
web-server       15m          64Mi            
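Because kubectl top emits plain text, you can post-process it, for example to flag pods above a memory threshold. A sketch assuming the column layout shown above; the sample input stands in for live data:

```shell
# flag_high_memory LIMIT_MI: print pod names whose MEMORY(bytes) exceeds the limit
flag_high_memory() {
  awk -v lim="$1" 'NR > 1 { mem = $3; sub(/Mi$/, "", mem); if (mem + 0 > lim) print $1 }'
}

# Sample stands in for: kubectl top pods | flag_high_memory 32
flag_high_memory 32 <<'EOF'
NAME             CPU(cores)   MEMORY(bytes)
nginx-pod        2m           8Mi
web-server       15m          64Mi
EOF
# prints: web-server
```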

Advanced Debugging

# Debug pod issues
kubectl describe pod problematic-pod

# Check events in current namespace
kubectl get events --sort-by=.metadata.creationTimestamp

# Check events for specific pod
kubectl get events --field-selector involvedObject.name=problematic-pod

# Port forwarding for local access
kubectl port-forward pod/nginx-pod 8080:80

# Port forward service
kubectl port-forward service/nginx-service 8080:80

# Debug with temporary pod
kubectl run debug-pod --image=busybox:1.28 --rm -it --restart=Never -- sh

# Copy files from pod
kubectl cp nginx-pod:/app/config.json ./config.json

# Copy files to pod
kubectl cp ./local-file.txt nginx-pod:/app/

Network Troubleshooting

# Test connectivity with temporary pod
kubectl run netshoot --image=nicolaka/netshoot --rm -it --restart=Never

# DNS troubleshooting
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default

# Check service endpoints
kubectl get endpoints

# Describe service for troubleshooting
kubectl describe svc problematic-service

Debugging Node Issues

Cluster Component Status (Legacy)

Deprecation Note: kubectl get componentstatuses has been deprecated since Kubernetes 1.19 and may not return accurate results in newer clusters. Modern alternatives provide more reliable health information.

# Legacy component status (may not work in newer clusters)
kubectl get componentstatuses
kubectl get cs  # shorthand

# Modern alternatives for cluster health
kubectl get --raw /healthz
kubectl get --raw /livez
kubectl get --raw /readyz

# Check individual components
kubectl get pods -n kube-system
kubectl get endpoints kube-scheduler -n kube-system
kubectl get endpoints kube-controller-manager -n kube-system

# Control plane component logs
kubectl logs -n kube-system -l component=kube-apiserver
kubectl logs -n kube-system -l component=kube-scheduler
kubectl logs -n kube-system -l component=kube-controller-manager

Expected Output (Legacy):

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

Modern Health Check:

# Comprehensive cluster health using modern endpoints (credentials may be required)
curl -k https://kubernetes-api-server:6443/healthz
# Returns: ok

# Detailed health information
kubectl get --raw /livez?verbose
kubectl get --raw /readyz?verbose

Advanced kubectl Operations

API Documentation and Field Discovery

Understanding Kubernetes API structure is crucial for advanced operations:

# Get comprehensive field documentation
kubectl explain deployment.spec.strategy --recursive

# Explore pod specification fields
kubectl explain pod.spec.containers.resources

# Get all available fields for any resource
kubectl explain service --recursive

# Quick field lookup
kubectl explain deployment.spec.replicas

Expected Output:

KIND:     Deployment
VERSION:  apps/v1

FIELD:    replicas <integer>

DESCRIPTION:
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

Configuration Diff and Validation

Always preview changes before applying them to production:

# Show what would change before applying
kubectl diff -f deployment.yaml

# Diff entire directory
kubectl diff -f k8s/

# Diff with kustomization
kubectl diff -k overlays/production/

# Server-side diff (more accurate)
kubectl diff --server-side -f deployment.yaml

Expected Output:

diff -u -N /tmp/LIVE-123456789/apps.v1.Deployment.default.nginx-deployment /tmp/MERGED-987654321/apps.v1.Deployment.default.nginx-deployment
--- /tmp/LIVE-123456789/apps.v1.Deployment.default.nginx-deployment
+++ /tmp/MERGED-987654321/apps.v1.Deployment.default.nginx-deployment
@@ -6,7 +6,7 @@
     deployment.kubernetes.io/revision: "1"
   creationTimestamp: "2025-09-27T10:00:00Z"
   generation: 1
-  replicas: 3
+  replicas: 5
   selector:
     matchLabels:
       app: nginx

Resource Replacement and Updates

# Replace resource from file (destructive)
kubectl replace -f updated-deployment.yaml

# Force replace (deletes and recreates)
kubectl replace -f deployment.yaml --force

# Replace with grace period
kubectl replace -f pod.yaml --grace-period=30

Warning: kubectl replace --force will cause downtime as it deletes and recreates resources.

Advanced Wait Operations

Use kubectl wait for robust automation and CI/CD pipelines:

# Wait for deployment to be available
kubectl wait --for=condition=available deploy/myapp --timeout=300s

# Wait for job completion
kubectl wait --for=condition=complete job/batch-process --timeout=600s

# Wait for pod to be ready
kubectl wait --for=condition=ready pod/database-0 --timeout=120s

# Wait using JSONPath conditions
kubectl wait --for=jsonpath='{.status.phase}=Bound' pvc/data-storage --timeout=60s

# Wait for custom resource conditions
kubectl wait --for=condition=Ready prometheus/monitoring --timeout=180s

# Wait for multiple resources
kubectl wait --for=condition=available deploy/frontend deploy/backend --timeout=300s

Container Attachment and Interaction

# Attach to running container's main process
kubectl attach -it pod/web-server -c nginx

# Attach without TTY
kubectl attach pod/web-server -c nginx

# Attach to specific container in multi-container pod
kubectl attach -it pod/multi-app -c sidecar

# Note: kubectl attach has no detach key sequence; exiting an interactive
# session (e.g. Ctrl+C) sends signals to the container's main process

Key Difference: kubectl attach connects to the main process (PID 1), while kubectl exec starts a new process.

Kustomize Integration

# Render kustomization without applying
kubectl kustomize overlays/production/

# Apply kustomized configuration
kubectl kustomize overlays/production/ | kubectl apply -f -

# Diff kustomized configuration
kubectl diff -k overlays/production/

# Kustomize with remote bases
kubectl kustomize github.com/kubernetes-sigs/kustomize/examples/helloWorld?ref=v1.0.6

Plugin Management and Custom Plugin Usage

# List installed plugins
kubectl plugin list

# Install useful plugins via krew
kubectl krew install whoami ctx ns tree view-secret

# Use installed plugins - Identity and context information
kubectl whoami

Expected Output:

# kubectl plugin list
The following compatible plugins are available:

/usr/local/bin/kubectl-ctx
/usr/local/bin/kubectl-ns  
/usr/local/bin/kubectl-tree
/usr/local/bin/kubectl-whoami
/usr/local/bin/kubectl-view-secret

# kubectl whoami
kubecfg:certauth:admin

Popular Plugin Examples:

# Context switching (kubectl-ctx plugin)
kubectl ctx                          # List contexts
kubectl ctx production              # Switch to production context
kubectl ctx -                       # Switch to previous context

# Namespace switching (kubectl-ns plugin)  
kubectl ns                          # List namespaces
kubectl ns development             # Switch to development namespace
kubectl ns -                       # Switch to previous namespace

# Resource tree visualization (kubectl-tree plugin)
kubectl tree deployment nginx-deployment

# Secret viewing (kubectl-view-secret plugin)
kubectl view-secret app-secret username    # Decode and view secret value
kubectl view-secret app-secret --all       # View all keys in secret

# Multi-pod log tailing (kubectl-tail plugin)
kubectl tail -l app=nginx -n production

# Resource capacity analysis (kubectl-resource-capacity plugin)
kubectl resource-capacity --pods --sort cpu.limit

Expected Output for kubectl tree:

NAMESPACE  NAME                                   READY  REASON  AGE
default    Deployment/nginx-deployment            -              5m
default    ├─ReplicaSet/nginx-deployment-abc123   -              5m  
default    │ ├─Pod/nginx-deployment-abc123-def   True           5m
default    │ ├─Pod/nginx-deployment-abc123-ghi   True           5m
default    │ └─Pod/nginx-deployment-abc123-jkl   True           5m
default    └─Service/nginx-service                -              3m

Creating Custom Plugins:

# Custom plugin example: kubectl-pod-shell
# Save the script below as kubectl-pod-shell somewhere on your PATH

#!/usr/bin/env bash
set -euo pipefail
POD_NAME="$1"
kubectl exec -it "$POD_NAME" -- /bin/bash

# Usage after making it executable:
chmod +x kubectl-pod-shell
kubectl pod-shell nginx-deployment-abc123

Advanced Event Management

Note: The kubectl events command was introduced in Kubernetes 1.23 and became stable in later releases; it may not be available in older clusters. Fall back to kubectl get events if needed.

# Modern event management (Kubernetes 1.23+)
kubectl events

# Get events sorted by timestamp
kubectl events --sort-by='.lastTimestamp'

# Watch events in real-time
kubectl events --watch

# Filter events by type
kubectl events --types=Warning

# Get events for specific object
kubectl events --for pod/nginx-deployment-abc123

# Get events across all namespaces
kubectl events --all-namespaces

# Legacy event management (fallback for older clusters)
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --field-selector=type=Warning
kubectl get events --field-selector=involvedObject.name=nginx-pod --all-namespaces
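Event output also lends itself to quick summaries, such as counting Warning events per reason. A sketch assuming the default `kubectl get events` column layout; the sample input stands in for live data:

```shell
# count_warnings: tally Warning events by REASON from `kubectl get events` output
count_warnings() {
  awk 'NR > 1 && $2 == "Warning" { counts[$3]++ } END { for (r in counts) print counts[r], r }'
}

# Sample stands in for: kubectl get events | count_warnings
count_warnings <<'EOF'
LAST SEEN   TYPE      REASON      OBJECT          MESSAGE
2m          Warning   BackOff     pod/web-1       Back-off restarting container
5m          Normal    Pulled      pod/web-1       Container image pulled
7m          Warning   BackOff     pod/api-2       Back-off restarting container
EOF
# prints: 2 BackOff
```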

Cluster Diagnostics Bundle

# Generate comprehensive cluster diagnostics
kubectl cluster-info dump

# Save diagnostics to directory
kubectl cluster-info dump --output-directory=./cluster-dump

# Dump specific namespace
kubectl cluster-info dump --namespaces=kube-system --output-directory=./system-dump

Working with Custom Resources

# List custom resource definitions
kubectl get crd

# Get custom resources
kubectl get prometheus

# Describe CRD
kubectl describe crd prometheuses.monitoring.coreos.com

# Get CRD with custom columns
kubectl get prometheus -o custom-columns=NAME:.metadata.name,VERSION:.spec.version,REPLICAS:.spec.replicas

Authentication, RBAC, and Security

Permission Testing and Validation

# Check if you can perform specific actions
kubectl auth can-i create pods
kubectl auth can-i delete deployments --namespace=production
kubectl auth can-i '*' '*'  # check all permissions

# Check permissions for other users
kubectl auth can-i create secrets --as=developer@company.com
kubectl auth can-i delete nodes --as=system:serviceaccount:default:my-sa

# List all permissions for current user
kubectl auth can-i --list

# List permissions in specific namespace
kubectl auth can-i --list --namespace=production

# Check permissions with impersonation
kubectl auth can-i create pods --as=alice --as-group=developers

Expected Output:

yes
no
no
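In CI it is handy to verify a batch of permissions up front and fail fast. A sketch that loops `kubectl auth can-i` over verb/resource pairs; KUBECTL is overridable so the logic can be exercised without a cluster, and the pairs are illustrative:

```shell
KUBECTL="${KUBECTL:-kubectl}"

# preflight: read "verb resource" pairs on stdin, report any the caller lacks
preflight() {
  local failed=0 verb resource
  while read -r verb resource; do
    if ! "$KUBECTL" auth can-i "$verb" "$resource" >/dev/null 2>&1; then
      echo "missing: $verb $resource"
      failed=1
    fi
  done
  return "$failed"
}

# Example (requires a cluster):
# preflight <<'EOF'
# create deployments
# get pods
# EOF
```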

RBAC Management

# Apply RBAC configurations safely
kubectl auth reconcile -f rbac.yaml

# Create service account
kubectl create serviceaccount api-service

# Create role
kubectl create role pod-reader --verb=get,list,watch --resource=pods

# Create cluster role
kubectl create clusterrole node-reader --verb=get,list,watch --resource=nodes

# Create role binding
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane@company.com

# Create cluster role binding
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --group=admins

Certificate Management

# List certificate signing requests
kubectl get csr

# Approve certificate request
kubectl certificate approve mycsr

# Deny certificate request
kubectl certificate deny mycsr

# Get certificate details
kubectl describe csr mycsr

# Create CSR from file
kubectl apply -f certificate-request.yaml

Command Impersonation

# Impersonate user
kubectl get pods --as=alice@company.com

# Impersonate user with groups
kubectl get pods --as=alice@company.com --as-group=developers --as-group=frontend-team

# Use service account token
kubectl get pods --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Impersonate system service account
kubectl get pods --as=system:serviceaccount:kube-system:default

Node and Scheduling Controls

Node Tainting and Scheduling

# Add taint to node
kubectl taint nodes worker-1 role=database:NoSchedule

# Add taint with effect
kubectl taint nodes worker-1 maintenance=true:NoExecute

# Remove taint (note the trailing minus)
kubectl taint nodes worker-1 role-

# List node taints
kubectl describe node worker-1 | grep -A5 Taints

# Multiple taints
kubectl taint nodes worker-1 role=db:NoSchedule environment=prod:NoSchedule

Taint Effects:

  • NoSchedule: Pods won’t be scheduled on the node
  • PreferNoSchedule: Kubernetes will try to avoid scheduling pods
  • NoExecute: Pods will be evicted if already running
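For a pod to land on a tainted node, it needs a matching toleration. A minimal sketch applying a pod that tolerates the role=database:NoSchedule taint above; the pod name, image, and command are illustrative (requires a cluster, so there is no offline test for this fragment):

```shell
# Apply a pod with a toleration matching role=database:NoSchedule (illustrative)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  tolerations:
  - key: role
    operator: Equal
    value: database
    effect: NoSchedule
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
EOF
```

Note that a toleration only permits scheduling onto the tainted node; pair it with a nodeSelector or affinity if the pod must run there.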

Pod Disruption and Eviction

# Drain node safely (respects PodDisruptionBudgets)
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Drain with custom grace period
kubectl drain worker-1 --grace-period=300 --ignore-daemonsets

# Force drain (dangerous in production)
kubectl drain worker-1 --force --ignore-daemonsets

# Evict specific pod (API-driven)
kubectl delete pod nginx-pod --grace-period=30

# Check pod disruption budgets
kubectl get pdb
kubectl describe pdb frontend-pdb

Node Labeling and Selection

# Label nodes
kubectl label nodes worker-1 hardware=gpu
kubectl label nodes worker-1 environment=production

# Remove label
kubectl label nodes worker-1 hardware-

# Select nodes by label
kubectl get nodes -l hardware=gpu
kubectl get nodes -l environment=production

# Node selector in pod spec
kubectl run gpu-pod --image=tensorflow/tensorflow:gpu --overrides='{"spec":{"nodeSelector":{"hardware":"gpu"}}}'

Advanced Resource Management

Batch Operations and Jobs

# Create job from command line
kubectl create job backup --image=backup:latest -- /backup.sh

# Create job from cronjob
kubectl create job manual-backup --from=cronjob/scheduled-backup

# Create cronjob
kubectl create cronjob daily-backup --schedule="0 2 * * *" --image=backup:latest -- /backup.sh

# Delete job with cleanup
kubectl delete job backup --cascade=foreground

# Delete completed jobs
kubectl delete job --field-selector=status.successful=1

# Monitor job progress
kubectl get jobs -w
kubectl describe job backup

Storage Management

# List storage classes
kubectl get storageclass
kubectl get sc  # shorthand

# List persistent volumes
kubectl get pv

# List persistent volume claims
kubectl get pvc

# Watch PVC binding
kubectl get pvc --watch

# Describe storage binding
kubectl describe pvc data-storage

# Get PV details with custom columns
kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,STATUS:.status.phase,CLAIM:.spec.claimRef.name

Ingress and Gateway Management

# List ingress resources
kubectl get ingress
kubectl get ing  # shorthand

# Describe ingress configuration
kubectl describe ingress web-ingress

# Get ingress with wide output
kubectl get ingress -o wide

# Check ingress controller logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller

# Create ingress from command line
kubectl create ingress web --rule="example.com/*=web-service:80"
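
The --rule shorthand above expands to roughly the following networking.k8s.io/v1 manifest (a sketch — confirm the exact output yourself by adding --dry-run=client -o yaml to the create ingress command):

```shell
# Approximate manifest generated by the create ingress command above;
# the trailing * in the rule maps to path "/" with pathType Prefix.
cat > /tmp/web-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
EOF
```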

Power User Flags and Advanced Querying

Advanced kubectl get Options

# Watch resources in real-time
kubectl get pods --watch
kubectl get pods --watch-only  # only show changes

# Show resource labels
kubectl get pods --show-labels

# Label selectors
kubectl get pods -l app=nginx
kubectl get pods -l 'environment in (production,staging)'
kubectl get pods -l 'app!=nginx'

# Field selectors
kubectl get pods --field-selector=status.phase=Running
kubectl get pods --field-selector=spec.nodeName=worker-1
kubectl get events --field-selector=type=Warning

# Complex JSONPath queries
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'

# Go template output
kubectl get pods -o go-template='{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}'

# Go template from file
kubectl get pods -o go-template-file=pod-template.gotmpl

# Custom columns from file
kubectl get pods -o custom-columns-file=pod-columns.txt

# Sort by custom fields
kubectl get pods --sort-by=.metadata.creationTimestamp
kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
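
For queries beyond what JSONPath expresses comfortably, piping -o json into jq is a common alternative. A locally runnable sketch of the name/podIP query against a sample of the JSON shape kubectl returns:

```shell
# Abbreviated sample of the structure `kubectl get pods -o json` returns.
cat > /tmp/pods-sample.json <<'EOF'
{"items":[
  {"metadata":{"name":"nginx-1"},"status":{"podIP":"10.0.0.5","phase":"Running"}},
  {"metadata":{"name":"nginx-2"},"status":{"podIP":"10.0.0.6","phase":"Pending"}}
]}
EOF

# jq equivalent of the JSONPath name/podIP query above:
jq -r '.items[] | "\(.metadata.name)\t\(.status.podIP)"' /tmp/pods-sample.json

# Against a live cluster:
# kubectl get pods -o json | jq -r '.items[] | "\(.metadata.name)\t\(.status.podIP)"'
```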

Advanced Logging Options

# Logs since specific time
kubectl logs deploy/nginx --since=2h
kubectl logs deploy/nginx --since-time=2025-09-27T10:00:00Z

# Multi-pod logs with label selector
kubectl logs -l app=nginx --max-log-requests=10

# Logs with prefix showing pod name
kubectl logs -l app=nginx --prefix=true

# Logs from previous container instance
kubectl logs pod/nginx --previous

# Limit log requests for performance
kubectl logs -l app=nginx --max-log-requests=5

# Tail logs from multiple pods
kubectl logs -f -l app=nginx --max-log-requests=20

Advanced Apply Operations

# Server-side apply (preferred for GitOps)
kubectl apply -f deployment.yaml --server-side

# Apply with custom field manager
kubectl apply -f deployment.yaml --server-side --field-manager=ci-pipeline

# Prune resources with label selector (GitOps-style drift control)
kubectl apply -f ./manifests/ -l app=myapp --prune

# Apply with validation
kubectl apply -f deployment.yaml --validate=true

# View last applied configuration
kubectl apply view-last-applied deployment/nginx

# Set last applied configuration manually
kubectl apply set-last-applied -f deployment.yaml

Advanced Delete Operations

# Control cascade behavior
kubectl delete deployment nginx --cascade=foreground  # wait for pods
kubectl delete deployment nginx --cascade=background  # default behavior
kubectl delete deployment nginx --cascade=orphan     # leave pods running

# Delete without waiting
kubectl delete pod nginx --wait=false

# Delete with timeout
kubectl delete pod nginx --timeout=60s

# Force delete immediately (dangerous)
kubectl delete pod nginx --grace-period=0 --force

# Delete by label with confirmation
kubectl delete pods -l app=nginx --dry-run=client
kubectl delete pods -l app=nginx
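
Cascade behavior works by walking ownerReferences on dependent objects, so you can inspect ownership directly. A locally runnable sketch against the abbreviated JSON shape kubectl returns (names are illustrative):

```shell
# Abbreviated metadata of a pod managed by a ReplicaSet.
cat > /tmp/pod-owner.json <<'EOF'
{"metadata":{"name":"nginx-7d9c5","ownerReferences":[
  {"kind":"ReplicaSet","name":"nginx-7d9c","controller":true}]}}
EOF

# Who owns this pod?
jq -r '.metadata.ownerReferences[0] | "\(.kind)/\(.name)"' /tmp/pod-owner.json

# Live equivalent:
# kubectl get pod nginx-7d9c5 -o json | jq '.metadata.ownerReferences'
```

With --cascade=orphan, these ownerReferences are removed instead of the dependents being deleted, which is why the pods keep running.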

Enhanced Debugging and Troubleshooting

Advanced Debugging with Ephemeral Containers

# Debug pod with ephemeral container
kubectl debug pod/broken-app --image=busybox:1.36 -it --target=app

# Debug by copying pod
kubectl debug pod/broken-app --copy-to=debug-copy --image=ubuntu:22.04 -it

# Debug node by creating pod
kubectl debug node/worker-1 --image=ubuntu:22.04 -it

# Debug with specific container image
kubectl debug pod/app --image=nicolaka/netshoot -it --share-processes --copy-to=debug-app

# Debug distroless containers
kubectl debug pod/distroless-app --image=busybox:1.36 -it --target=app --share-processes

Advanced Resource Monitoring

# Resource usage with containers breakdown
kubectl top pods --containers=true

# Resource usage for specific namespace
kubectl top pods -n kube-system

# Sort by resource usage
kubectl top pods --sort-by=cpu
kubectl top pods --sort-by=memory

# Node resource usage
kubectl top nodes

# Resource usage with labels
kubectl top pods -l app=nginx

Note: Requires metrics-server to be installed and running in your cluster.

Advanced Port Forwarding

# Port forward to service
kubectl port-forward service/nginx 8080:80

# Port forward with specific address (security risk)
kubectl port-forward --address 0.0.0.0 pod/nginx 8080:80

# Port forward with multiple ports
kubectl port-forward pod/nginx 8080:80 8443:443

# Port forward to random local port
kubectl port-forward pod/nginx :80

# Background port forwarding
kubectl port-forward pod/nginx 8080:80 &

Security Warning: Using --address 0.0.0.0 exposes the port to external traffic. Use carefully.

Advanced Resource Generation and Configuration

ConfigMap and Secret Generation

# Create configmap from literal values
kubectl create configmap app-config --from-literal=database_url=postgres://localhost --from-literal=debug=true

# Create configmap from file
kubectl create configmap app-config --from-file=config.properties

# Create configmap from directory
kubectl create configmap app-configs --from-file=./configs/

# Create configmap from env file
kubectl create configmap app-env --from-env-file=app.env

# Create TLS secret
kubectl create secret tls tls-secret --cert=server.crt --key=server.key

# Create docker registry secret
kubectl create secret docker-registry regcred --docker-server=registry.example.com --docker-username=user --docker-password=pass

# Create generic secret from files
kubectl create secret generic app-secret --from-file=username.txt --from-file=password.txt

# Create secret from literal values
kubectl create secret generic app-creds --from-literal=username=admin --from-literal=password=secret123
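
Remember that Secret values are base64-encoded, not encrypted; anyone who can read the object can decode them:

```shell
# Base64 is an encoding, not encryption:
echo -n 'admin' | base64         # prints YWRtaW4=
echo 'YWRtaW4=' | base64 -d      # prints admin

# Decode a field from a live Secret (requires read access):
# kubectl get secret app-creds -o jsonpath='{.data.username}' | base64 -d
```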

Advanced Resource Setting

# Set container image
kubectl set image deployment/api api=myregistry/api:v2.1

# Set resource limits and requests
kubectl set resources deployment/api -c api --limits=cpu=500m,memory=512Mi --requests=cpu=100m,memory=128Mi

# Set environment variables
kubectl set env deployment/api DATABASE_URL=postgresql://newdb:5432/app

# Set environment from configmap
kubectl set env deployment/api --from=configmap/app-config

# Set environment from secret
kubectl set env deployment/api --from=secret/app-secret

# Remove environment variable
kubectl set env deployment/api DATABASE_URL-

# Set service account
kubectl set serviceaccount deployment/api api-service-account

# Set subject for role binding
kubectl set subject rolebinding admin-binding --user=alice@company.com

Modern kubectl run Limitations and Best Practices

Important Note: As of Kubernetes 1.18+, kubectl run only creates Pods by default and no longer generates Deployments, ReplicaSets, or other workload resources. This change aligns with kubectl’s focus on providing clear, predictable behavior.

# Modern kubectl run - Creates ONLY pods
kubectl run nginx --image=nginx:latest

# For deployments, use create deployment instead
kubectl create deployment nginx --image=nginx:latest

# Generate deployment manifest without applying
kubectl create deployment nginx --image=nginx:latest --dry-run=client -o yaml > deployment.yaml

# kubectl run with specific overrides (pod-only)
kubectl run nginx --image=nginx:latest --overrides='{"spec":{"containers":[{"name":"nginx","resources":{"limits":{"cpu":"100m","memory":"128Mi"}}}]}}'

# Run with restart policy (for job-like behavior)
kubectl run test-job --image=busybox:1.36 --restart=OnFailure -- /bin/sh -c "echo hello; sleep 30"

# Interactive temporary pod (deleted after exit)
kubectl run debug --image=busybox:1.36 -it --rm --restart=Never -- sh

Migration Guide:

# Old behavior (pre-1.18) - NO LONGER WORKS
kubectl run nginx --image=nginx --replicas=3  # Creates deployment

# New approach (1.18+)
kubectl create deployment nginx --image=nginx --replicas=3  # Explicit deployment creation

Complete kubectl Commands Index

| Area | Command(s) | Purpose |
|---|---|---|
| Discovery | version, cluster-info, cluster-info dump, api-resources, api-versions, explain | Inspect API/server & documentation |
| Get/Describe | get, describe (+ -A, -l, --field-selector, --watch) | List & detail resources |
| Create | create (ns, deploy, job, cronjob, cm, secret, sa, role, rolebinding, pvc) | Imperative resource creation |
| Apply | apply (-f, -R, --server-side, --dry-run, --prune -l) | Declarative configuration management |
| Replace | replace (--force) | Replace object from file |
| Patch | patch (strategic, merge, json) | Partial resource updates |
| Diff | diff -f | Preview configuration changes |
| Delete | delete (--cascade, --wait=false, --timeout) | Remove resources |
| Scale | scale (deploy, rs, sts) | Change replica count |
| Set | set image, resources, env, serviceaccount | Update resource fields |
| Rollout | rollout status, history, undo, restart | Deployment management |
| Observe | logs, top nodes, top pods, events | Monitoring & logs |
| Exec/Attach | exec, attach | Interact with containers |
| Port | port-forward (pod/svc) | Local network tunnel |
| Copy | cp | Move files to/from containers |
| Debug | debug (ephemeral containers) | Troubleshooting shell |
| Network | expose, proxy | Services & API proxy |
| Namespaces | get ns, create ns, delete ns | Multi-tenancy management |
| Contexts | config view, get-contexts, use-context | Cluster connection management |
| Nodes | cordon, uncordon, drain, taint | Node scheduling control |
| Auth/RBAC | auth can-i, auth reconcile, create role | Permission management |
| Certificates | certificate approve, deny, CSR objects | PKI management |
| Plugins | plugin list (krew installed separately) | CLI extensions |
| Kustomize | kustomize <dir> | Configuration rendering |
| Wait | wait --for=condition | Synchronous operations |

Advanced Output Formatting and Queries

API Proxy and Security Considerations

# Start kubectl proxy (with security warnings)
kubectl proxy --port=8080

# Access API through proxy
curl http://localhost:8080/api/v1/namespaces/default/pods

# More secure proxy binding (recommended)
kubectl proxy --port=8080 --address='127.0.0.1'  # Explicit localhost binding

# Proxy with access restrictions
kubectl proxy --port=8080 --accept-hosts='^localhost$,^127\.0\.0\.1$'

Complex JSONPath and Output Formatting

# Extract multiple fields with JSONPath
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.spec.nodeName}{"\n"}{end}'

# Get container images from all pods
kubectl get pods -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' | sort | uniq

# Get pod resource requests and limits
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.requests.cpu}{"\t"}{.spec.containers[0].resources.limits.memory}{"\n"}{end}'

# Complex custom columns
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,IP:.status.podIP,NODE:.spec.nodeName,AGE:.metadata.creationTimestamp

# Save custom columns template to file
echo 'NAME:.metadata.name,READY:.status.containerStatuses[0].ready,STATUS:.status.phase' > pod-columns.txt
kubectl get pods -o custom-columns-file=pod-columns.txt

# Go template with conditionals
kubectl get pods -o go-template='{{range .items}}{{if eq .status.phase "Running"}}{{.metadata.name}} is running{{"\n"}}{{end}}{{end}}'

# Go template with functions
kubectl get pods -o go-template='{{range .items}}{{printf "%s - %s\n" .metadata.name .status.phase}}{{end}}'

Advanced Sorting and Filtering

# Sort by different fields
kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
kubectl get pods --sort-by=.spec.nodeName
kubectl get events --sort-by=.firstTimestamp

# Complex field selectors
kubectl get pods --field-selector=status.phase=Running,spec.nodeName=worker-1
kubectl get events --field-selector=reason=Failed
kubectl get services --field-selector=spec.type=LoadBalancer

# Combine label and field selectors
kubectl get pods -l app=nginx --field-selector=status.phase=Running

# Multiple namespace filtering
kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system

Dry Run and Validation

# Client-side dry run
kubectl apply --dry-run=client -f deployment.yaml

# Server-side dry run
kubectl apply --dry-run=server -f deployment.yaml

# Validate configuration
kubectl apply --validate=true -f deployment.yaml

# Generate YAML without applying
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml

Real-World Use Cases

CI/CD Pipeline Integration

Integrating kubectl commands into CI/CD pipelines is essential for modern DevOps workflows:

#!/bin/bash
# deploy.sh - Production deployment script

set -e

# Set context
kubectl config use-context production

# Apply configurations
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

# Wait for deployment
kubectl rollout status deployment/myapp -n production --timeout=300s

# Verify deployment
kubectl get pods -n production -l app=myapp

# Run smoke tests
kubectl run smoke-test --image=curlimages/curl --rm -it --restart=Never -- \
  curl -f http://myapp-service.production.svc.cluster.local/health

echo "Deployment successful!"

Blue-Green Deployment

# Blue-Green deployment strategy
kubectl apply -f blue-deployment.yaml
kubectl wait --for=condition=available --timeout=300s deployment/myapp-blue

# Switch traffic
kubectl patch service myapp-service -p '{"spec":{"selector":{"version":"blue"}}}'

# Cleanup old green deployment
kubectl delete deployment myapp-green

Canary Deployment

# Canary deployment with traffic splitting
kubectl apply -f canary-deployment.yaml
kubectl scale deployment myapp-canary --replicas=1
kubectl scale deployment myapp-stable --replicas=4

# Monitor metrics and gradually shift traffic
kubectl scale deployment myapp-canary --replicas=2
kubectl scale deployment myapp-stable --replicas=3

Backup and Restore

# Backup all resources in namespace
kubectl get all -n production -o yaml > production-backup.yaml

# Backup specific resources
kubectl get configmap,secret -n production -o yaml > config-backup.yaml

# Restore from backup
kubectl apply -f production-backup.yaml

Cluster Maintenance

# Pre-maintenance checks
kubectl get nodes
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Drain node for maintenance
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# Post-maintenance validation
kubectl uncordon worker-node-1
kubectl get nodes

Kubectl Cheat Sheet

Essential Commands Quick Reference

| Task | Command |
|---|---|
| Cluster Info | kubectl cluster-info |
| Get Nodes | kubectl get nodes |
| Get Pods | kubectl get pods |
| Pod Details | kubectl describe pod <pod-name> |
| Pod Logs | kubectl logs <pod-name> |
| Execute in Pod | kubectl exec -it <pod-name> -- /bin/bash |
| Apply Config | kubectl apply -f <file.yaml> |
| Delete Resource | kubectl delete <resource> <name> |
| Scale Deployment | kubectl scale deployment <name> --replicas=3 |
| Rollout Status | kubectl rollout status deployment/<name> |
| Port Forward | kubectl port-forward pod/<name> 8080:80 |
| Get Services | kubectl get svc |
| Create Namespace | kubectl create namespace <name> |
| Switch Context | kubectl config use-context <context-name> |
| Resource Usage | kubectl top pods |

Common Resource Shortcuts

| Resource | Short Name |
|---|---|
| pods | po |
| services | svc |
| deployments | deploy |
| replicasets | rs |
| namespaces | ns |
| nodes | no |
| persistentvolumes | pv |
| persistentvolumeclaims | pvc |
| configmaps | cm |
| secrets | secret |

Output Formats

| Format | Flag | Description |
|---|---|---|
| Default | (none) | Human-readable table |
| Wide | -o wide | Additional columns |
| YAML | -o yaml | YAML format |
| JSON | -o json | JSON format |
| Name Only | -o name | Resource names only |
| Custom | -o custom-columns=<spec> | Custom column output |
| JSONPath | -o jsonpath=<template> | JSONPath expressions |

Best Practices and Productivity Hacks

Kubectl Aliases

Setting up aliases can significantly speed up your workflow:

# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kdel='kubectl delete'
alias kaf='kubectl apply -f'
alias kdry='kubectl --dry-run=client -o yaml'

# Namespace shortcuts
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'
alias kgd='kubectl get deployment'

# Context switching
alias kctx='kubectl config use-context'
alias kns='kubectl config set-context --current --namespace'

Kubectl Auto-completion

Enable auto-completion for improved productivity:

# Bash
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Zsh
echo 'source <(kubectl completion zsh)' >>~/.zshrc

# With alias
echo 'complete -F __start_kubectl k' >>~/.bashrc

Useful Kubectl Plugins

Extend kubectl functionality with plugins:

# Install krew (kubectl plugin manager)
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"${OS}_${ARCH}" &&
  "$KREW" install krew
)

# Useful plugins
kubectl krew install ctx      # Context switching
kubectl krew install ns       # Namespace switching
kubectl krew install tree     # Resource tree view
kubectl krew install tail     # Multi-pod log tailing
kubectl krew install view-secret  # Secret decoding

Environment Setup

# .kubectl_helpers - Source this file for enhanced productivity

# Function to quickly switch contexts and namespaces
kswitch() {
    kubectl config use-context $1
    if [ ! -z "$2" ]; then
        kubectl config set-context --current --namespace=$2
    fi
}

# Function to get pod by partial name
kgpo() {
    kubectl get pods | grep $1
}

# Function to exec into pod by partial name
kexec() {
    POD=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep $1 | head -1)
    kubectl exec -it $POD -- ${2:-/bin/bash}
}

# Function to get logs with follow
klogs() {
    POD=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep $1 | head -1)
    kubectl logs -f $POD
}

Configuration Best Practices

  1. Use Namespaces: Always organize resources with namespaces
  2. Resource Limits: Define resource requests and limits
  3. Labels and Selectors: Use consistent labeling strategies
  4. Health Checks: Implement readiness and liveness probes
  5. Secrets Management: Never hardcode secrets in manifests
  6. Version Control: Store all Kubernetes manifests in Git
  7. Dry Run: Always test with --dry-run before applying
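
A minimal deployment fragment that combines several of these practices at once — namespacing, labels, resource limits, and probes (all names and values are illustrative):

```shell
# Illustrative manifest touching points 1-4 above; values are examples only.
cat > /tmp/best-practice-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: production
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests: {cpu: 100m, memory: 128Mi}
          limits: {cpu: 500m, memory: 256Mi}
        readinessProbe:
          httpGet: {path: /healthz, port: 80}
        livenessProbe:
          httpGet: {path: /healthz, port: 80}
EOF

# Test before applying (point 7):
# kubectl apply --dry-run=server -f /tmp/best-practice-deploy.yaml
```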

Security Best Practices

# Use service accounts with minimal permissions
kubectl create serviceaccount limited-sa
kubectl create rolebinding limited-binding --clusterrole=view --serviceaccount=default:limited-sa

# Scan for security issues
kubectl auth can-i --list --as=system:serviceaccount:default:limited-sa

# Network policies for pod communication
kubectl apply -f network-policy.yaml

# Pod security standards
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted

# Validate RBAC permissions
kubectl auth reconcile -f rbac-config.yaml --dry-run=client

# Audit resource access
kubectl get rolebindings,clusterrolebindings --all-namespaces -o wide

# Check for overprivileged service accounts
kubectl get clusterrolebindings -o json | jq '.items[] | select(.subjects[]?.kind=="ServiceAccount") | {name: .metadata.name, role: .roleRef.name, subjects: .subjects}'

Performance and Resource Optimization

# Resource quota management
kubectl describe quota --all-namespaces
kubectl get resourcequota -o yaml

# Limit range enforcement
kubectl get limitrange --all-namespaces
kubectl describe limitrange default-limits

# Horizontal Pod Autoscaler status
kubectl get hpa
kubectl describe hpa frontend-hpa

# Vertical Pod Autoscaler (if installed)
kubectl get vpa
kubectl describe vpa recommendation-vpa

# Pod disruption budget validation
kubectl get pdb
kubectl describe pdb critical-app-pdb

# Check for resource contention
kubectl top nodes --sort-by=cpu
kubectl top pods --sort-by=memory --all-namespaces

GitOps and CI/CD Integration

# Declarative cluster state management
kubectl apply -f cluster-state/ --recursive --prune -l managed-by=gitops

# Drift detection and correction
kubectl diff -f cluster-state/ --recursive
kubectl apply -f cluster-state/ --recursive --server-side

# Blue-green deployment automation
kubectl patch service frontend -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl rollout status deployment/frontend-blue --timeout=300s

# Canary deployment traffic shifting
kubectl patch service frontend -p '{"spec":{"selector":{"version":"canary"}}}'
kubectl scale deployment frontend-canary --replicas=2
kubectl scale deployment frontend-stable --replicas=8

# Configuration validation in pipelines
kubectl apply --dry-run=server --validate=true -f manifests/
kubectl conftest verify --policy opa-policies manifests/

Advanced Troubleshooting Workflows

# Comprehensive cluster health check
kubectl get componentstatuses
kubectl get nodes -o wide
kubectl top nodes
kubectl get events --sort-by=.lastTimestamp | tail -20

# Application layer debugging
kubectl get pods --all-namespaces --field-selector=status.phase!=Running
kubectl describe pods -l app=problematic-app
kubectl logs -l app=problematic-app --previous --max-log-requests=10

# Network connectivity testing
kubectl run netshoot --image=nicolaka/netshoot --rm -it --restart=Never
kubectl exec -it netshoot -- dig kubernetes.default.svc.cluster.local
kubectl exec -it netshoot -- nslookup frontend-service

# Storage troubleshooting
kubectl get pv,pvc --all-namespaces
kubectl describe pv problematic-volume
kubectl get events --field-selector=involvedObject.kind=PersistentVolumeClaim

# Resource constraint analysis
kubectl describe node worker-1 | grep -A5 "Allocated resources"
kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,CPU-REQ:.spec.containers[*].resources.requests.cpu,MEM-REQ:.spec.containers[*].resources.requests.memory

Production-Ready Examples and Templates

Here are battle-tested kubectl command combinations for common production scenarios:

Zero-Downtime Deployment Pipeline

#!/bin/bash
# production-deploy.sh - Safe production deployment workflow

set -euo pipefail

NAMESPACE="production"
APP="frontend"
NEW_IMAGE="$1"

echo "🚀 Starting zero-downtime deployment for $APP"

# 1. Validate cluster access and permissions
for verb in create update patch; do
    kubectl auth can-i $verb deployments -n $NAMESPACE || exit 1
done

# 2. Preview changes
echo "📋 Previewing changes..."
kubectl set image deployment/$APP app=$NEW_IMAGE -n $NAMESPACE --dry-run=client -o yaml | kubectl diff -f -

# 3. Apply the update
kubectl set image deployment/$APP app=$NEW_IMAGE -n $NAMESPACE

# 4. Wait for rollout with timeout
echo "⏳ Waiting for rollout to complete..."
kubectl rollout status deployment/$APP -n $NAMESPACE --timeout=600s

# 5. Verify health
echo "🔍 Verifying deployment health..."
kubectl wait --for=condition=available deployment/$APP -n $NAMESPACE --timeout=120s

# 6. Run smoke tests
FRONTEND_POD=$(kubectl get pods -n $NAMESPACE -l app=$APP -o jsonpath='{.items[0].metadata.name}')
kubectl exec $FRONTEND_POD -n $NAMESPACE -- curl -f http://localhost:8080/health

echo "✅ Deployment completed successfully!"

Comprehensive cluster health check

#!/bin/bash
# cluster-health-check.sh - Complete cluster diagnostics

echo "🏥 Kubernetes Cluster Health Report"
echo "=================================="

# Node health
echo "📊 Node Status:"
kubectl get nodes -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].status,VERSION:.status.nodeInfo.kubeletVersion,CREATED:.metadata.creationTimestamp'

# Resource utilization
echo -e "\n💾 Resource Utilization:"
kubectl top nodes 2>/dev/null || echo "⚠️ Metrics server not available"

# Critical system pods
echo -e "\n🔧 System Pods Status:"
NOT_RUNNING=$(kubectl get pods -n kube-system --field-selector=status.phase!=Running --no-headers 2>/dev/null)
[ -n "$NOT_RUNNING" ] && echo "$NOT_RUNNING" || echo "✅ All system pods running"

# Recent warning events (modern approach)
echo -e "\n⚠️ Recent Warning Events:"
kubectl events --types=Warning --sort-by='.lastTimestamp' | tail -10 2>/dev/null || kubectl get events --all-namespaces --field-selector=type=Warning --sort-by='.lastTimestamp' | tail -10

# Storage health
echo -e "\n💽 Storage Health:"
kubectl get pv,pvc --all-namespaces --no-headers 2>/dev/null | grep -v Bound && echo "⚠️ Some volumes not bound properly" || echo "✅ All volumes bound"

# Modern cluster health endpoints
echo -e "\n🩺 Cluster Health Endpoints:"
kubectl get --raw /livez && echo " - Liveness: OK" || echo " - Liveness: FAILED"
kubectl get --raw /readyz && echo " - Readiness: OK" || echo " - Readiness: FAILED"

# Network connectivity test
echo -e "\n🌐 Network Connectivity Test:"
kubectl run connectivity-test --image=busybox:1.36 --rm -it --restart=Never --quiet -- nslookup kubernetes.default 2>/dev/null && echo "✅ DNS resolution working" || echo "❌ DNS issues detected"

Advanced Debugging Toolkit

# debug-toolkit.sh - Comprehensive debugging commands

debug_pod() {
    local pod=$1
    local namespace=${2:-default}
    
    echo "🐛 Debugging pod: $pod in namespace: $namespace"
    
    # Basic pod information
    kubectl get pod $pod -n $namespace -o wide
    
    # Detailed description
    echo -e "\n📝 Pod Description:"
    kubectl describe pod $pod -n $namespace
    
    # Container logs
    echo -e "\n📋 Container Logs (last 50 lines):"
    kubectl logs $pod -n $namespace --tail=50
    
    # Previous container logs if crashed
    echo -e "\n🔄 Previous Container Logs (if any):"
    kubectl logs $pod -n $namespace --previous 2>/dev/null || echo "No previous container logs"
    
    # Events related to the pod
    echo -e "\n📅 Related Events:"
    kubectl get events -n $namespace --field-selector involvedObject.name=$pod
    
    # Resource usage
    echo -e "\n📊 Resource Usage:"
    kubectl top pod $pod -n $namespace --containers 2>/dev/null || echo "Metrics not available"
    
    # Network debugging
    echo -e "\n🌐 Network Debug Shell (if pod is running):"
    if kubectl get pod $pod -n $namespace -o jsonpath='{.status.phase}' | grep -q Running; then
        echo "Access debug shell with: kubectl exec -it $pod -n $namespace -- /bin/sh"
        echo "Or create ephemeral debug container: kubectl debug $pod -n $namespace --image=nicolaka/netshoot -it"
    fi
}

# Usage: debug_pod <pod-name> [namespace]

Automated Resource Cleanup

#!/bin/bash
# cleanup-resources.sh - Safe resource cleanup automation

cleanup_completed_jobs() {
    local namespace=${1:-default}
    echo "🧹 Cleaning up completed jobs in namespace: $namespace"
    
    # Delete successful jobs older than 1 day
    kubectl get jobs -n $namespace --field-selector=status.successful=1 -o json | \
    jq -r '.items[] | select(.status.completionTime != null) | select((.status.completionTime | fromdateiso8601) < (now - 86400)) | .metadata.name' | \
    xargs -I {} kubectl delete job {} -n $namespace --cascade=foreground
    
    # Delete failed jobs older than 3 days
    kubectl get jobs -n $namespace --field-selector=status.failed=1 -o json | \
    jq -r '.items[] | select(.status.conditions[].lastTransitionTime != null) | select((.status.conditions[].lastTransitionTime | fromdateiso8601) < (now - 259200)) | .metadata.name' | \
    xargs -I {} kubectl delete job {} -n $namespace --cascade=foreground
}

cleanup_evicted_pods() {
    local namespace=${1:-default}
    echo "🗑️ Cleaning up evicted pods in namespace: $namespace"
    
    kubectl get pods -n $namespace --field-selector=status.phase=Failed -o json | \
    jq -r '.items[] | select(.status.reason == "Evicted") | .metadata.name' | \
    xargs -I {} kubectl delete pod {} -n $namespace
}

cleanup_unused_configmaps() {
    local namespace=${1:-default}
    echo "📄 Identifying unused ConfigMaps in namespace: $namespace"
    
    # This requires careful validation - only shows potentially unused ones
    kubectl get configmaps -n $namespace -o json | \
    jq -r '.items[] | select(.metadata.name != "kube-root-ca.crt") | .metadata.name' | \
    while read cm; do
        if ! kubectl get pods -n $namespace -o yaml | grep -A1 -E "configMapRef:|configMap:" | grep -q "$cm"; then
            echo "⚠️ Potentially unused ConfigMap: $cm (manual verification needed)"
        fi
    done
}

# Safe usage with confirmation
read -p "Enter namespace to clean up (default: default): " target_namespace
target_namespace=${target_namespace:-default}

echo "⚠️ This will clean up resources in namespace: $target_namespace"
read -p "Continue? (y/N): " confirm

if [[ $confirm == [yY] ]]; then
    cleanup_completed_jobs $target_namespace
    cleanup_evicted_pods $target_namespace
    cleanup_unused_configmaps $target_namespace
    echo "✅ Cleanup completed for namespace: $target_namespace"
else
    echo "❌ Cleanup cancelled"
fi

Kubectl Command Chaining and Automation Patterns

Multi-Environment Management

# Environment switching with validation
switch_env() {
    local env=$1
    case $env in
        dev|development)
            kubectl config use-context dev-cluster
            kubectl config set-context --current --namespace=development
            ;;
        staging)
            kubectl config use-context staging-cluster  
            kubectl config set-context --current --namespace=staging
            ;;
        prod|production)
            kubectl config use-context prod-cluster
            kubectl config set-context --current --namespace=production
            # Additional safety check for production
            echo "⚠️ You are now in PRODUCTION environment"
            kubectl auth can-i update deployments || echo "❌ Insufficient permissions"
            ;;
        *)
            echo "❌ Invalid environment. Use: dev, staging, or prod"
            return 1
            ;;
    esac
    echo "✅ Switched to $env environment"
    kubectl config current-context
}

# Bulk operations across namespaces
operate_across_namespaces() {
    local operation=$1
    local resource=$2
    local selector=$3
    
    for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}' | grep -E '^(app-|service-)'); do
        echo "🔄 Processing namespace: $ns"
        case $operation in
            "scale-down")
                kubectl scale deployment -l $selector --replicas=0 -n $ns
                ;;
            "scale-up") 
                kubectl scale deployment -l $selector --replicas=3 -n $ns
                ;;
            "restart")
                kubectl rollout restart deployment -l $selector -n $ns
                ;;
            "logs")
                echo "📋 Logs from $ns:"
                kubectl logs -l $selector -n $ns --tail=10 --prefix=true
                ;;
        esac
    done
}

# Usage examples:
# switch_env prod
# operate_across_namespaces scale-down deployment app=frontend
# operate_across_namespaces logs pod app=api

Resource Dependency Management

# Wait for dependencies before deploying
deploy_with_dependencies() {
    local app=$1
    local namespace=${2:-default}
    
    echo "🔗 Checking dependencies for $app deployment..."
    
    # Wait for database to be ready
    if kubectl get deployment database -n $namespace &>/dev/null; then
        echo "⏳ Waiting for database..."
        kubectl wait --for=condition=available deployment/database -n $namespace --timeout=300s
    fi
    
    # Wait for Redis to be ready
    if kubectl get deployment redis -n $namespace &>/dev/null; then
        echo "⏳ Waiting for Redis..."
        kubectl wait --for=condition=available deployment/redis -n $namespace --timeout=180s
    fi
    
    # Wait for configmaps and secrets
    echo "⏳ Waiting for configuration..."
    # Note: a value-less jsonpath wait needs a recent kubectl; on older versions just check with 'kubectl get'
    kubectl wait --for=jsonpath='{.data}' configmap/$app-config -n $namespace --timeout=60s
    kubectl get secret $app-secret -n $namespace >/dev/null || { echo "❌ Secret not found"; exit 1; }
    
    # Deploy the application
    echo "🚀 Deploying $app..."
    kubectl apply -f manifests/$app/ -n $namespace
    kubectl rollout status deployment/$app -n $namespace --timeout=600s
    
    # Verify deployment
    kubectl wait --for=condition=available deployment/$app -n $namespace --timeout=120s
    echo "✅ $app deployed successfully with all dependencies ready"
}

# Progressive rollout with health checks
progressive_rollout() {
    local deployment=$1
    local new_image=$2
    local namespace=${3:-default}
    
    echo "🎯 Starting progressive rollout for $deployment"
    
    # Get current replica count
    current_replicas=$(kubectl get deployment $deployment -n $namespace -o jsonpath='{.spec.replicas}')
    
    # Start with 1 replica for the new version (note: this temporarily reduces overall capacity)
    kubectl patch deployment $deployment -n $namespace -p "{\"spec\":{\"replicas\":1}}"
    kubectl set image deployment/$deployment app=$new_image -n $namespace
    
    # Wait for first pod to be ready
    kubectl rollout status deployment/$deployment -n $namespace --timeout=300s
    
    # Health check
    new_pod=$(kubectl get pods -n $namespace -l app=$deployment -o jsonpath='{.items[0].metadata.name}')
    kubectl wait --for=condition=ready pod/$new_pod -n $namespace --timeout=120s
    
    # Run health check
    kubectl exec $new_pod -n $namespace -- curl -f http://localhost:8080/health || {
        echo "❌ Health check failed, rolling back..."
        kubectl rollout undo deployment/$deployment -n $namespace
        exit 1
    }
    
    # Gradually scale up
    for replicas in $(seq 2 $current_replicas); do
        echo "📈 Scaling to $replicas replicas..."
        kubectl scale deployment $deployment --replicas=$replicas -n $namespace
        kubectl rollout status deployment/$deployment -n $namespace --timeout=300s
        sleep 30  # Allow time for load balancing
    done
    
    echo "✅ Progressive rollout completed successfully"
}
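The health check inside progressive_rollout is a single curl attempt, but transient failures are common mid-rollout. Wrapping such checks in a small retry helper is safer; this is a generic bash sketch (the retry function is not a kubectl feature, just plain shell):

```shell
# retry: run a command up to $1 times, sleeping $2 seconds between attempts.
# Usage: retry <attempts> <delay> <command> [args...]
retry() {
    local attempts=$1; shift
    local delay=$1; shift
    local n=1
    until "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            echo "❌ Failed after $attempts attempts: $*" >&2
            return 1
        fi
        n=$((n + 1))
        sleep "$delay"
    done
}

# Example: give the new pod a few tries before deciding to roll back
# retry 5 10 kubectl exec $new_pod -n $namespace -- curl -f http://localhost:8080/health
```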

Frequently Asked Questions

What is kubectl in Kubernetes?

Kubectl is the official command-line interface (CLI) tool for Kubernetes that allows users to interact with Kubernetes clusters. It communicates with the Kubernetes API server to perform operations such as deploying applications, inspecting resources, viewing logs, and managing cluster components.

What are the most useful kubectl commands?

The most essential kubectl commands for daily operations include:

  • kubectl get pods – List running pods
  • kubectl logs <pod-name> – View pod logs
  • kubectl exec -it <pod-name> -- /bin/bash – Execute commands in pods
  • kubectl apply -f <file> – Apply configuration files
  • kubectl describe <resource> <name> – Get detailed resource information
  • kubectl scale deployment <name> --replicas=<number> – Scale applications

How do I check logs using kubectl?

To check logs using kubectl, use the kubectl logs command with various options:

kubectl logs <pod-name>                    # Basic log viewing
kubectl logs -f <pod-name>                 # Follow logs in real-time
kubectl logs <pod-name> --tail=100         # Show last 100 lines
kubectl logs <pod-name> --since=1h         # Logs from last hour
kubectl logs <pod-name> -c <container>     # Specific container logs

How do I scale pods with kubectl?

Use the kubectl scale command to adjust the number of pod replicas:

kubectl scale deployment <deployment-name> --replicas=5
kubectl scale replicaset <rs-name> --replicas=3
kubectl autoscale deployment <name> --min=1 --max=10 --cpu-percent=80
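The kubectl autoscale command above is shorthand for creating a HorizontalPodAutoscaler object. Roughly the equivalent manifest, sketched in the autoscaling/v2 API (the target deployment name is a placeholder):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # placeholder deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```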

What is the difference between kubectl apply and kubectl create?

| Aspect | kubectl create | kubectl apply |
| --- | --- | --- |
| Purpose | Creates new resources | Declarative configuration management |
| Behavior | Fails if resource exists | Updates existing resources |
| Best Use | One-time resource creation | Continuous deployment and GitOps |
| Configuration Tracking | No tracking | Tracks last-applied configuration |
| Idempotency | Not idempotent | Idempotent operations |
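The configuration-tracking row refers to the kubectl.kubernetes.io/last-applied-configuration annotation that client-side kubectl apply stores on each object (server-side apply tracks managed fields instead). Trimmed, it looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"myapp",...},...}
```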

Conclusion

Mastering kubectl commands is fundamental to successful Kubernetes operations. This comprehensive guide provides the foundation you need to effectively manage containerized applications, troubleshoot issues, and implement best practices in production environments.

Remember to practice these commands regularly, set up your development environment with aliases and auto-completion, and always test changes in non-production environments first. As Kubernetes continues to evolve, staying current with kubectl capabilities will keep you at the forefront of container orchestration.

For more advanced Kubernetes topics, explore our related guides on Terraform with Kubernetes automation, Ansible playbooks for Kubernetes management, AWS EKS cluster setup and management, GitHub Actions CI/CD with kubectl, and Linux process management for system administrators.



**⚠️ Security Warning:** `kubectl proxy` exposes the Kubernetes API server locally with your current user's permissions. Always bind to localhost only (`127.0.0.1`) and never expose to external interfaces unless absolutely necessary and properly secured. **Prefer `kubectl get --raw` and normal kubectl commands over `proxy` in CI/CD pipelines.**

**Secure Proxy Practices:**
```bash
# Good: Localhost only (default behavior)
kubectl proxy --port=8080

# Good: Explicit localhost binding
kubectl proxy --port=8080 --address=127.0.0.1

# DANGEROUS: Never do this in production
kubectl proxy --port=8080 --address=0.0.0.0  # Exposes API to external access

# Better alternative: Use kubectl directly instead of proxy when possible
kubectl get pods -o json | jq '.items[].metadata.name'

# Or use raw API calls for specific needs
kubectl get --raw /api/v1/namespaces/default/pods | jq '.items[].metadata.name'
```

Proxy Use Cases and Alternatives:

# Instead of using proxy for API exploration:
kubectl api-resources                    # List available resources
kubectl explain pod.spec.containers     # Get resource documentation
kubectl get --raw /openapi/v2          # OpenAPI specification

# For debugging API calls:
kubectl get pods -v=8                   # Verbose logging shows API calls
kubectl proxy &                        # Background proxy
PROXY_PID=$!
curl -s http://localhost:8080/api/v1/namespaces/default/pods | jq
kill $PROXY_PID                        # Clean shutdown

Expected Proxy Output:

Starting to serve on 127.0.0.1:8080

API Access Example:

# Start proxy in background
kubectl proxy --port=8080 &
PROXY_PID=$!

# Access cluster info via proxy
curl -s http://localhost:8080/api/v1/ | jq '.resources[] | select(.name == "pods")'

# Access specific namespace resources
curl -s http://localhost:8080/api/v1/namespaces/kube-system/pods | jq '.items[].metadata.name'

# Clean up
kill $PROXY_PID

Complex JSONPath and Output Formatting

# Extract multiple fields with JSONPath
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.spec.nodeName}{"\n"}{end}'

# Get container images from all pods
kubectl get pods -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' | sort | uniq

# Get pod resource requests and limits
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.requests.cpu}{"\t"}{.spec.containers[0].resources.limits.memory}{"\n"}{end}'

# Complex custom columns
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,IP:.status.podIP,NODE:.spec.nodeName,AGE:.metadata.creationTimestamp

# Save custom columns template to file
echo 'NAME:.metadata.name,READY:.status.containerStatuses[0].ready,STATUS:.status.phase' > pod-columns.txt
kubectl get pods -o custom-columns-file=pod-columns.txt

# Go template with conditionals
kubectl get pods -o go-template='{{range .items}}{{if eq .status.phase "Running"}}{{.metadata.name}} is running{{"\n"}}{{end}}{{end}}'

# Go template with printf formatting (kubectl's go-template supports the standard text/template functions)
kubectl get pods -o go-template='{{range .items}}{{printf "%s - %s\n" .metadata.name .status.phase}}{{end}}'

Advanced Sorting and Filtering

# Sort by different fields
kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
kubectl get pods --sort-by=.spec.nodeName
kubectl get events --sort-by=.firstTimestamp

# Complex field selectors
kubectl get pods --field-selector=status.phase=Running,spec.nodeName=worker-1
kubectl get events --field-selector=reason=Failed
kubectl get services --field-selector=spec.type=LoadBalancer

# Combine label and field selectors
kubectl get pods -l app=nginx --field-selector=status.phase=Running

# Multiple namespace filtering
kubectl get pods --all-namespaces --field-selector=metadata.namespace!=kube-system

Dry Run and Validation

# Client-side dry run
kubectl apply --dry-run=client -f deployment.yaml

# Server-side dry run
kubectl apply --dry-run=server -f deployment.yaml

# Validate configuration
kubectl apply --validate=true -f deployment.yaml

# Generate YAML without applying
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml
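The last command writes a starter manifest to deployment.yaml. Trimmed of empty and server-populated fields, the generated YAML looks roughly like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
```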

Real-World Use Cases

CI/CD Pipeline Integration

Integrating kubectl commands into CI/CD pipelines is essential for modern DevOps workflows:

#!/bin/bash
# deploy.sh - Production deployment script

set -e

# Set context
kubectl config use-context production

# Apply configurations
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

# Wait for deployment
kubectl rollout status deployment/myapp -n production --timeout=300s

# Verify deployment
kubectl get pods -n production -l app=myapp

# Run smoke tests (use -i without -t: CI runners have no TTY)
kubectl run smoke-test --image=curlimages/curl --rm -i --restart=Never -- \
  curl -f http://myapp-service.production.svc.cluster.local/health

echo "Deployment successful!"
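In CI, a script like deploy.sh is usually invoked from a pipeline step. A hypothetical GitHub Actions sketch (the KUBECONFIG_B64 secret name is an assumption, not a convention this guide defines):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubeconfig
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > ~/.kube/config
      - name: Deploy
        run: ./deploy.sh
```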

Blue-Green Deployment

# Blue-Green deployment strategy
kubectl apply -f blue-deployment.yaml
kubectl wait --for=condition=available --timeout=300s deployment/myapp-blue

# Switch traffic
kubectl patch service myapp-service -p '{"spec":{"selector":{"version":"blue"}}}'

# Cleanup old green deployment
kubectl delete deployment myapp-green
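The selector patch above only routes traffic correctly if the Service selects on a version label in addition to the app label. A minimal sketch of such a Service (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    version: green      # patched to "blue" to switch traffic
  ports:
    - port: 80
      targetPort: 8080
```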

Canary Deployment

# Canary deployment with traffic splitting
kubectl apply -f canary-deployment.yaml
kubectl scale deployment myapp-canary --replicas=1
kubectl scale deployment myapp-stable --replicas=4

# Monitor metrics and gradually shift traffic
kubectl scale deployment myapp-canary --replicas=2
kubectl scale deployment myapp-stable --replicas=3

Backup and Restore

# Backup all resources in namespace (note: "all" omits ConfigMaps, Secrets, PVCs, and custom resources)
kubectl get all -n production -o yaml > production-backup.yaml

# Backup specific resources
kubectl get configmap,secret -n production -o yaml > config-backup.yaml

# Restore from backup
kubectl apply -f production-backup.yaml

Cluster Maintenance

# Pre-maintenance checks
kubectl get nodes
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

# Drain node for maintenance
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# Post-maintenance validation
kubectl uncordon worker-node-1
kubectl get nodes

Kubectl Cheat Sheet

Essential Commands Quick Reference

| Task | Command |
| --- | --- |
| Cluster Info | `kubectl cluster-info` |
| Get Nodes | `kubectl get nodes` |
| Get Pods | `kubectl get pods` |
| Pod Details | `kubectl describe pod <pod-name>` |
| Pod Logs | `kubectl logs <pod-name>` |
| Execute in Pod | `kubectl exec -it <pod-name> -- /bin/bash` |
| Apply Config | `kubectl apply -f <file.yaml>` |
| Delete Resource | `kubectl delete <resource> <name>` |
| Scale Deployment | `kubectl scale deployment <name> --replicas=3` |
| Rollout Status | `kubectl rollout status deployment/<name>` |
| Port Forward | `kubectl port-forward pod/<name> 8080:80` |
| Get Services | `kubectl get svc` |
| Create Namespace | `kubectl create namespace <name>` |
| Switch Context | `kubectl config use-context <context-name>` |
| Resource Usage | `kubectl top pods` |

Common Resource Shortcuts

| Resource | Short Name |
| --- | --- |
| pods | po |
| services | svc |
| deployments | deploy |
| replicasets | rs |
| namespaces | ns |
| nodes | no |
| persistentvolumes | pv |
| persistentvolumeclaims | pvc |
| configmaps | cm |
| secrets | secret |

Output Formats

| Format | Flag | Description |
| --- | --- | --- |
| Default | (none) | Human-readable table |
| Wide | `-o wide` | Additional columns |
| YAML | `-o yaml` | YAML format |
| JSON | `-o json` | JSON format |
| Name Only | `-o name` | Resource names only |
| Custom | `-o custom-columns=<spec>` | Custom column output |
| JSONPath | `-o jsonpath=<template>` | JSONPath expressions |

Best Practices and Productivity Hacks

Kubectl Aliases

Setting up aliases can significantly speed up your workflow:

# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kdel='kubectl delete'
alias kaf='kubectl apply -f'
alias kdry='kubectl --dry-run=client -o yaml'

# Namespace shortcuts
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'
alias kgd='kubectl get deployment'

# Context switching
alias kctx='kubectl config use-context'
alias kns='kubectl config set-context --current --namespace'

Kubectl Auto-completion

Enable auto-completion for improved productivity:

# Bash
echo 'source <(kubectl completion bash)' >>~/.bashrc

# Zsh
echo 'source <(kubectl completion zsh)' >>~/.zshrc

# With alias
echo 'complete -F __start_kubectl k' >>~/.bashrc

Useful Kubectl Plugins

Extend kubectl functionality with plugins:

# Install krew (kubectl plugin manager)
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"${OS}_${ARCH}" &&
  "$KREW" install krew
)

# Useful plugins
kubectl krew install ctx      # Context switching
kubectl krew install ns       # Namespace switching
kubectl krew install tree     # Resource tree view
kubectl krew install tail     # Multi-pod log tailing
kubectl krew install view-secret  # Secret decoding

Environment Setup

# .kubectl_helpers - Source this file for enhanced productivity

# Function to quickly switch contexts and namespaces
kswitch() {
    kubectl config use-context $1
    if [ ! -z "$2" ]; then
        kubectl config set-context --current --namespace=$2
    fi
}

# Function to get pod by partial name
kgpo() {
    kubectl get pods | grep $1
}

# Function to exec into pod by partial name
kexec() {
    POD=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep $1 | head -1)
    kubectl exec -it $POD -- ${2:-/bin/bash}
}

# Function to get logs with follow
klogs() {
    POD=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep $1 | head -1)
    kubectl logs -f $POD
}
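The kexec and klogs helpers repeat the same pattern — grep for a partial name and take the first match. That pattern can be factored into one function; a sketch (pick_first_match is a made-up name, not a standard helper):

```shell
# pick_first_match: read names on stdin, print the first one containing $1.
pick_first_match() {
    grep -- "$1" | head -n 1
}

# kexec could then start with:
# POD=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | pick_first_match "$1")
```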

Configuration Best Practices

  1. Use Namespaces: Always organize resources with namespaces
  2. Resource Limits: Define resource requests and limits
  3. Labels and Selectors: Use consistent labeling strategies
  4. Health Checks: Implement readiness and liveness probes
  5. Secrets Management: Never hardcode secrets in manifests
  6. Version Control: Store all Kubernetes manifests in Git
  7. Dry Run: Always test with --dry-run before applying
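Several of these practices — namespaces, resource requests and limits, consistent labels, and health probes — come together in a single manifest. A minimal illustrative Deployment (image, ports, and probe paths are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: production
  labels:
    app: web
    tier: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
        tier: frontend
    spec:
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
          readinessProbe:
            httpGet: { path: /healthz, port: 80 }
          livenessProbe:
            httpGet: { path: /healthz, port: 80 }
```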

Security Best Practices

# Use service accounts with minimal permissions
kubectl create serviceaccount limited-sa
kubectl create rolebinding limited-binding --clusterrole=view --serviceaccount=default:limited-sa

# Review what a service account is allowed to do
kubectl auth can-i --list --as=system:serviceaccount:default:limited-sa

# Network policies for pod communication
kubectl apply -f network-policy.yaml

# Pod security standards
kubectl label namespace production pod-security.kubernetes.io/enforce=restricted

# Validate RBAC permissions
kubectl auth reconcile -f rbac-config.yaml --dry-run=client

# Audit resource access
kubectl get rolebindings,clusterrolebindings --all-namespaces -o wide

# Check for overprivileged service accounts
kubectl get clusterrolebindings -o json | jq '.items[] | select(.subjects[]?.kind=="ServiceAccount") | {name: .metadata.name, role: .roleRef.name, subjects: .subjects}'
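The network-policy.yaml referenced above is not shown elsewhere in this guide; a common starting point is a default-deny-ingress policy like this sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
```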

Performance and Resource Optimization

# Resource quota management
kubectl describe quota --all-namespaces
kubectl get resourcequota -o yaml

# Limit range enforcement
kubectl get limitrange --all-namespaces
kubectl describe limitrange default-limits

# Horizontal Pod Autoscaler status
kubectl get hpa
kubectl describe hpa frontend-hpa

# Vertical Pod Autoscaler (if installed)
kubectl get vpa
kubectl describe vpa recommendation-vpa

# Pod disruption budget validation
kubectl get pdb
kubectl describe pdb critical-app-pdb

# Check for resource contention
kubectl top nodes --sort-by=cpu
kubectl top pods --sort-by=memory --all-namespaces
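The quota and disruption-budget objects inspected above are plain manifests. Hedged sketches of each (names and numbers are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: critical-app
```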

GitOps and CI/CD Integration

# Declarative cluster state management
kubectl apply -f cluster-state/ --recursive --prune -l managed-by=gitops

# Drift detection and correction
kubectl diff -f cluster-state/ --recursive
kubectl apply -f cluster-state/ --recursive --server-side

# Blue-green deployment automation
kubectl patch service frontend -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl rollout status deployment/frontend-blue --timeout=300s

# Canary deployment traffic shifting
kubectl patch service frontend -p '{"spec":{"selector":{"version":"canary"}}}'
kubectl scale deployment frontend-canary --replicas=2
kubectl scale deployment frontend-stable --replicas=8

# Configuration validation in pipelines
kubectl apply --dry-run=server --validate=true -f manifests/
kubectl conftest test manifests/ --policy opa-policies   # requires the conftest kubectl plugin

Advanced Troubleshooting Workflows

# Comprehensive cluster health check
kubectl get componentstatuses   # deprecated since v1.19; still works on many clusters
kubectl get nodes -o wide
kubectl top nodes
kubectl get events --sort-by=.lastTimestamp | tail -20

# Application layer debugging
kubectl get pods --all-namespaces --field-selector=status.phase!=Running
kubectl describe pods -l app=problematic-app
kubectl logs -l app=problematic-app --previous --max-log-requests=10

# Network connectivity testing (keep the pod alive so exec works; delete it when done)
kubectl run netshoot --image=nicolaka/netshoot --restart=Never -- sleep 3600
kubectl exec -it netshoot -- dig kubernetes.default.svc.cluster.local
kubectl exec -it netshoot -- nslookup frontend-service
kubectl delete pod netshoot

# Storage troubleshooting
kubectl get pv,pvc --all-namespaces
kubectl describe pv problematic-volume
kubectl get events --field-selector=involvedObject.kind=PersistentVolumeClaim

# Resource constraint analysis
kubectl describe node worker-1 | grep -A5 "Allocated resources"
kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,CPU-REQ:.spec.containers[*].resources.requests.cpu,MEM-REQ:.spec.containers[*].resources.requests.memory

Production-Ready Examples and Templates

Here are battle-tested kubectl command combinations for common production scenarios:

Zero-Downtime Deployment Pipeline

#!/bin/bash
# production-deploy.sh - Safe production deployment workflow

set -euo pipefail

NAMESPACE="production"
APP="frontend"
NEW_IMAGE="$1"

echo "🚀 Starting zero-downtime deployment for $APP"

# 1. Validate cluster access and permissions
kubectl auth can-i update deployments -n $NAMESPACE || { echo "❌ Insufficient permissions"; exit 1; }

# 2. Preview changes
echo "📋 Previewing changes..."
kubectl set image deployment/$APP app=$NEW_IMAGE -n $NAMESPACE --dry-run=client -o yaml | kubectl diff -f -

# 3. Apply the update
kubectl set image deployment/$APP app=$NEW_IMAGE -n $NAMESPACE

# 4. Wait for rollout with timeout
echo "⏳ Waiting for rollout to complete..."
kubectl rollout status deployment/$APP -n $NAMESPACE --timeout=600s

# 5. Verify health
echo "🔍 Verifying deployment health..."
kubectl wait --for=condition=available deployment/$APP -n $NAMESPACE --timeout=120s

# 6. Run smoke tests
FRONTEND_POD=$(kubectl get pods -n $NAMESPACE -l app=$APP -o jsonpath='{.items[0].metadata.name}')
kubectl exec $FRONTEND_POD -n $NAMESPACE -- curl -f http://localhost:8080/health

echo "✅ Deployment completed successfully!"

Comprehensive Cluster Health Check

#!/bin/bash
# cluster-health-check.sh - Complete cluster diagnostics

echo "🏥 Kubernetes Cluster Health Report"
echo "=================================="

# Node health
echo "📊 Node Status:"
kubectl get nodes -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].status,VERSION:.status.nodeInfo.kubeletVersion,CREATED:.metadata.creationTimestamp'

# Resource utilization
echo -e "\n💾 Resource Utilization:"
kubectl top nodes 2>/dev/null || echo "⚠️ Metrics server not available"

# Critical system pods
echo -e "\n🔧 System Pods Status:"
kubectl get pods -n kube-system --field-selector=status.phase!=Running 2>/dev/null || echo "✅ All system pods running"

# Recent warning events
echo -e "\n⚠️ Recent Warning Events:"
kubectl get events --all-namespaces --field-selector=type=Warning --sort-by='.lastTimestamp' | tail -10

# Storage health
echo -e "\n💽 Storage Health:"
kubectl get pv,pvc --all-namespaces | grep -v Bound || echo "⚠️ Some volumes not bound properly"

# Network connectivity test
echo -e "\n🌐 Network Connectivity Test:"
kubectl run connectivity-test --image=busybox:1.36 --rm -it --restart=Never --quiet -- nslookup kubernetes.default 2>/dev/null && echo "✅ DNS resolution working" || echo "❌ DNS issues detected"

Advanced Debugging Toolkit

# debug-toolkit.sh - Comprehensive debugging commands

debug_pod() {
    local pod=$1
    local namespace=${2:-default}
    
    echo "🐛 Debugging pod: $pod in namespace: $namespace"
    
    # Basic pod information
    kubectl get pod $pod -n $namespace -o wide
    
    # Detailed description
    echo -e "\n📝 Pod Description:"
    kubectl describe pod $pod -n $namespace
    
    # Container logs
    echo -e "\n📋 Container Logs (last 50 lines):"
    kubectl logs $pod -n $namespace --tail=50
    
    # Previous container logs if crashed
    echo -e "\n🔄 Previous Container Logs (if any):"
    kubectl logs $pod -n $namespace --previous 2>/dev/null || echo "No previous container logs"
    
    # Events related to the pod
    echo -e "\n📅 Related Events:"
    kubectl get events -n $namespace --field-selector involvedObject.name=$pod
    
    # Resource usage
    echo -e "\n📊 Resource Usage:"
    kubectl top pod $pod -n $namespace --containers 2>/dev/null || echo "Metrics not available"
    
    # Network debugging
    echo -e "\n🌐 Network Debug Shell (if pod is running):"
    if kubectl get pod $pod -n $namespace -o jsonpath='{.status.phase}' | grep -q Running; then
        echo "Access debug shell with: kubectl exec -it $pod -n $namespace -- /bin/sh"
        echo "Or create ephemeral debug container: kubectl debug $pod -n $namespace --image=nicolaka/netshoot -it"
    fi
}

# Usage: debug_pod <pod-name> [namespace]

Kubectl Command Chaining and Automation Patterns

Multi-Environment Management

# Environment switching with validation
switch_env() {
    local env=$1
    case $env in
        dev|development)
            kubectl config use-context dev-cluster
            kubectl config set-context --current --namespace=development
            ;;
        staging)
            kubectl config use-context staging-cluster  
            kubectl config set-context --current --namespace=staging
            ;;
        prod|production)
            kubectl config use-context prod-cluster
            kubectl config set-context --current --namespace=production
            # Additional safety check for production
            echo "⚠️ You are now in PRODUCTION environment"
            kubectl auth can-i create,update,delete deployments || echo "❌ Insufficient permissions"
            ;;
        *)
            echo "❌ Invalid environment. Use: dev, staging, or prod"
            return 1
            ;;
    esac
    echo "✅ Switched to $env environment"
    kubectl config current-context
}

# Bulk operations across namespaces
operate_across_namespaces() {
    local operation=$1
    local selector=$2

    for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -E '^(app-|service-)'); do
        echo "🔄 Processing namespace: $ns"
        case $operation in
            "scale-down")
                kubectl scale deployment -l "$selector" --replicas=0 -n "$ns"
                ;;
            "scale-up")
                kubectl scale deployment -l "$selector" --replicas=3 -n "$ns"
                ;;
            "restart")
                kubectl rollout restart deployment -l "$selector" -n "$ns"
                ;;
            "logs")
                echo "📋 Logs from $ns:"
                kubectl logs -l "$selector" -n "$ns" --tail=10 --prefix=true
                ;;
            *)
                echo "❌ Unknown operation: $operation"
                return 1
                ;;
        esac
    done
}

# Usage examples:
# switch_env prod
# operate_across_namespaces scale-down app=frontend
# operate_across_namespaces logs app=api
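The environment switcher above can be paired with a small guard that demands confirmation before any command runs against a production context. This is a minimal sketch: the assumption that production context names contain "prod" matches the naming used in `switch_env`, but adjust the pattern to your own clusters.

```shell
# Hedged sketch: wrap kubectl with a confirmation prompt whenever the
# current context name contains "prod". The naming convention is an
# assumption carried over from switch_env above.
kprod_guard() {
    local ctx
    ctx=$(kubectl config current-context)
    if [[ $ctx == *prod* ]]; then
        read -r -p "⚠️ Context '$ctx' looks like production. Continue? [y/N] " ans
        [[ $ans == [yY]* ]] || { echo "❌ Aborted"; return 1; }
    fi
    # Pass the command through unchanged once confirmed (or non-prod)
    kubectl "$@"
}

# Usage: kprod_guard delete deployment frontend
```

Sourcing this function in your shell profile means destructive commands pass through untouched in dev and staging, but always pause for a human decision in production.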

Resource Dependency Management

# Wait for dependencies before deploying
deploy_with_dependencies() {
    local app=$1
    local namespace=${2:-default}
    
    echo "🔗 Checking dependencies for $app deployment..."
    
    # Wait for database to be ready
    if kubectl get deployment database -n $namespace &>/dev/null; then
        echo "⏳ Waiting for database..."
        kubectl wait --for=condition=available deployment/database -n $namespace --timeout=300s
    fi
    
    # Wait for Redis to be ready
    if kubectl get deployment redis -n $namespace &>/dev/null; then
        echo "⏳ Waiting for Redis..."
        kubectl wait --for=condition=available deployment/redis -n $namespace --timeout=180s
    fi
    
    # Verify configmaps and secrets exist
    echo "⏳ Checking configuration..."
    kubectl get configmap $app-config -n $namespace >/dev/null || { echo "❌ ConfigMap not found"; exit 1; }
    kubectl get secret $app-secret -n $namespace >/dev/null || { echo "❌ Secret not found"; exit 1; }
    
    # Deploy the application
    echo "🚀 Deploying $app..."
    kubectl apply -f manifests/$app/ -n $namespace
    kubectl rollout status deployment/$app -n $namespace --timeout=600s
    
    # Verify deployment
    kubectl wait --for=condition=available deployment/$app -n $namespace --timeout=120s
    echo "✅ $app deployed successfully with all dependencies ready"
}

# Progressive rollout with health checks
progressive_rollout() {
    local deployment=$1
    local new_image=$2
    local namespace=${3:-default}
    
    echo "🎯 Starting progressive rollout for $deployment"
    
    # Get current replica count
    current_replicas=$(kubectl get deployment $deployment -n $namespace -o jsonpath='{.spec.replicas}')
    
    # Start with 1 replica for new version
    kubectl patch deployment $deployment -n $namespace -p "{\"spec\":{\"replicas\":1}}"
    kubectl set image deployment/$deployment app=$new_image -n $namespace
    
    # Wait for first pod to be ready
    kubectl rollout status deployment/$deployment -n $namespace --timeout=300s
    
    # Health check
    new_pod=$(kubectl get pods -n $namespace -l app=$deployment -o jsonpath='{.items[0].metadata.name}')
    kubectl wait --for=condition=ready pod/$new_pod -n $namespace --timeout=120s
    
    # Run health check
    kubectl exec $new_pod -n $namespace -- curl -f http://localhost:8080/health || {
        echo "❌ Health check failed, rolling back..."
        kubectl rollout undo deployment/$deployment -n $namespace
        exit 1
    }
    
    # Gradually scale up
    for replicas in $(seq 2 $current_replicas); do
        echo "📈 Scaling to $replicas replicas..."
        kubectl scale deployment $deployment --replicas=$replicas -n $namespace
        kubectl rollout status deployment/$deployment -n $namespace --timeout=300s
        sleep 30  # Allow time for load balancing
    done
    
    echo "✅ Progressive rollout completed successfully"
}

Frequently Asked Questions

What is kubectl in Kubernetes?

Kubectl is the official command-line interface (CLI) tool for Kubernetes that allows users to interact with Kubernetes clusters. It communicates with the Kubernetes API server to perform operations such as deploying applications, inspecting resources, viewing logs, and managing cluster components.
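Because kubectl is only a client for the API server, every kubectl query corresponds to a REST call you could make yourself. A minimal sketch (namespace name is illustrative; a reachable cluster is assumed):

```shell
# kubectl wraps the Kubernetes REST API; both helpers below return pod
# data for a namespace, one through the normal CLI path and one through
# the raw API endpoint that kubectl ultimately calls.
pods_via_cli()  { kubectl get pods -n "${1:-default}" -o name; }
pods_via_rest() { kubectl get --raw "/api/v1/namespaces/${1:-default}/pods"; }
```

Running `pods_via_rest kube-system` shows the raw JSON the API server returns, which is useful when debugging what kubectl is actually doing under the hood.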

What are the most useful kubectl commands?

The most essential kubectl commands for daily operations include:

  • kubectl get pods – List running pods
  • kubectl logs <pod-name> – View pod logs
  • kubectl exec -it <pod-name> -- /bin/bash – Execute commands in pods
  • kubectl apply -f <file> – Apply configuration files
  • kubectl describe <resource> <name> – Get detailed resource information
  • kubectl scale deployment <name> --replicas=<number> – Scale applications

How do I check logs using kubectl?

To check logs using kubectl, use the kubectl logs command with various options:

kubectl logs <pod-name>                    # Basic log viewing
kubectl logs -f <pod-name>                 # Follow logs in real-time
kubectl logs <pod-name> --tail=100         # Show last 100 lines
kubectl logs <pod-name> --since=1h         # Logs from last hour
kubectl logs <pod-name> -c <container>     # Specific container logs
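These options combine naturally into a small helper that gathers recent logs from every pod matching a label, across all their containers. A minimal sketch; the label and line count in the usage comment are illustrative:

```shell
# Hedged helper: collect recent logs from every pod matching a label
# selector, prefixing each line with its pod/container name so the
# sources stay distinguishable.
logs_for_label() {
    local selector=$1 lines=${2:-50}
    kubectl logs -l "$selector" --all-containers=true --prefix=true --tail="$lines"
}

# Usage: logs_for_label app=web 20
```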

How do I scale pods with kubectl?

Use the kubectl scale command to adjust the number of pod replicas:

kubectl scale deployment <deployment-name> --replicas=5
kubectl scale replicaset <rs-name> --replicas=3
kubectl autoscale deployment <name> --min=1 --max=10 --cpu-percent=80

What is the difference between kubectl apply and kubectl create?

Aspect                 | kubectl create             | kubectl apply
-----------------------|----------------------------|--------------------------------------
Purpose                | Creates new resources      | Declarative configuration management
Behavior               | Fails if resource exists   | Updates existing resources
Best Use               | One-time resource creation | Continuous deployment and GitOps
Configuration Tracking | No tracking                | Tracks last-applied configuration
Idempotency            | Not idempotent             | Idempotent operations
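The difference is easy to see in practice: running `kubectl create` twice against the same manifest fails with an AlreadyExists error, while `kubectl apply` succeeds on every run. A minimal sketch of the fallback pattern this implies (manifest filename is illustrative):

```shell
# Illustrates create vs apply: create fails if the resource exists,
# apply is idempotent and safe to repeat on every run.
create_then_apply() {
    local manifest=$1
    kubectl create -f "$manifest" 2>/dev/null \
        || echo "create failed (resource exists) - falling back to apply"
    kubectl apply -f "$manifest"   # idempotent: creates or updates
}
```

In practice you rarely need the fallback: for GitOps-style workflows, just use `kubectl apply` from the start and let it track the last-applied configuration.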

Conclusion

Mastering kubectl commands is fundamental to successful Kubernetes operations. This comprehensive guide provides the foundation you need to effectively manage containerized applications, troubleshoot issues, and implement best practices in production environments.

Remember to practice these commands regularly, set up your development environment with aliases and auto-completion, and always test changes in non-production environments first. As Kubernetes continues to evolve, staying current with kubectl capabilities will keep you at the forefront of container orchestration.

For more advanced Kubernetes topics, explore our related guides on Terraform with Kubernetes automation, Ansible playbooks for Kubernetes management, AWS EKS cluster setup and management, GitHub Actions CI/CD with kubectl, and Linux process management for system administrators.

