Your First Kubernetes Pod: From YAML to Running Container Made Easy (2025)
Post 7 of 70 in the series “Mastering Kubernetes: A Practical Journey from Beginner to CKA”
🔥 TL;DR
- Pods are the smallest deployable units in Kubernetes – containers always run inside pods, never directly
- Every pod manifest needs apiVersion, kind, metadata, and spec – these four sections define what Kubernetes creates
- Pod lifecycle states (Pending → Running → Succeeded/Failed) reveal exactly what’s happening during container startup
- Common pod failures stem from image pull issues, resource constraints, or configuration errors – systematic debugging saves hours
- Understanding pod networking and storage basics is essential before moving to higher-level controllers like Deployments
Introduction: Kubernetes Pod
Remember the first time you tried to ride a bicycle? You probably didn’t start by attempting wheelies or racing down hills. You began with the basics – learning to balance, pedal, and steer. Pods in Kubernetes are exactly like that first bike ride – they’re the fundamental building block you must master before tackling more complex workloads. Every Deployment, DaemonSet, and StatefulSet you’ll create later is really just a sophisticated way of managing pods.
What we’ll learn today:
- Writing your first pod manifest from scratch and understanding every line
- Navigating the pod lifecycle from creation to termination
- Debugging the most common pod failures that trip up both beginners and experts
- Understanding pod networking, storage, and resource management fundamentals
Why this matters: I’ve seen senior engineers spend hours debugging complex application issues that turned out to be basic pod misconfigurations. Understanding pods deeply isn’t just academic – it’s what separates developers who can deploy containers from those who can operate them reliably. When Netflix or Spotify deploy thousands of containers daily, every single one runs inside a pod that someone designed and debugged using the principles we’ll cover today.
Series context: In our previous post, we explored how the kube-scheduler intelligently places pods on nodes using sophisticated algorithms. Now we’re diving into what those pods actually are – the atomic units of deployment that everything we’ve learned about (API server communication, ETCD storage, and scheduling decisions) ultimately serves to create and maintain.
Prerequisites
What you need to know:
- Basic Docker concepts and container fundamentals
- Kubernetes cluster architecture (covered in Posts #2-6)
- Text editor skills for YAML editing
- Command line comfort with kubectl
📌 Quick Refresher: A pod is a wrapper around one or more containers that share the same network and storage. Think of it as a “logical host” – just like you might run multiple processes on a Linux server, you can run multiple containers in a pod, but most pods contain just one container.
Tools required:
- Access to a Kubernetes cluster (minikube, kind, or cloud cluster)
- kubectl configured and working
- Text editor (VS Code, vim, or nano)
- Container registry access (Docker Hub works fine)
Previous posts to read:
- Post #2: Kubernetes Architecture (essential for understanding where pods fit)
- Post #6: Kube-Scheduler (crucial for understanding how pods get placed)
Estimated time: 35-45 minutes including hands-on pod creation and troubleshooting
Step-by-Step Tutorial
Theory First: Understanding Kubernetes Pod Fundamentals
Before we start writing YAML, let’s understand what we’re actually creating. A pod isn’t just a container with extra steps – it’s a carefully designed abstraction that solves real operational problems.

Why doesn’t Kubernetes run containers directly instead of wrapping them in pods?
Pods solve the “multiple process” problem. In traditional deployments, you might run a web server and log collector on the same machine. Pods let you run closely related containers together while keeping them in separate, manageable units.
Step 1: Writing Your First Kubernetes Pod Manifest
Let’s start with the absolute basics – a pod manifest that actually works:
# my-first-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
  labels:
    app: learning
    environment: development
spec:
  # Basic security hardening (principle of least privilege)
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: web-server
    image: nginx:alpine
    ports:
    - containerPort: 80
      name: http
💡 Note: The stock nginx:alpine image expects to run as root, so this securityContext can cause permission errors on some clusters. If the pod crashes with permission-denied messages, swap in an unprivileged variant such as nginxinc/nginx-unprivileged (which serves on port 8080) or temporarily drop the securityContext while learning.
🛠️ Your Turn: Write this manifest to a file called my-first-pod.yaml, then create the pod and check its logs with:
kubectl apply -f my-first-pod.yaml
kubectl get pods -w
kubectl logs my-first-pod
Breaking down every line:
apiVersion: v1               # Which version of the Kubernetes API to use
kind: Pod                    # What type of object we're creating
metadata:                    # Information about the pod (name, labels, etc.)
  name: my-first-pod         # Unique name within the namespace
  labels:                    # Key-value pairs for organization and selection
    app: learning
    environment: development
spec:                        # The desired state specification
  containers:                # List of containers in this pod
  - name: web-server         # Container name (must be unique within the pod)
    image: nginx:alpine      # Container image to run
    ports:                   # Ports the container exposes
    - containerPort: 80
      name: http             # Optional name for the port
Creating and exploring your first pod:
# Create the pod
kubectl apply -f my-first-pod.yaml
# Watch it start up
kubectl get pods -w
# Check detailed information
kubectl describe pod my-first-pod
# Access the pod's logs
kubectl logs my-first-pod
# Get a shell inside the container
kubectl exec -it my-first-pod -- /bin/sh
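If you'd rather not type the manifest from scratch, kubectl can also generate a starting skeleton for you. A quick sketch; adjust the name and image to whatever you're deploying:
# Generate a pod manifest without actually creating the pod (client-side dry run)
kubectl run my-first-pod --image=nginx:alpine --port=80 --dry-run=client -o yaml > my-first-pod.yaml
# Review and edit the generated file, then apply it as usual
kubectl apply -f my-first-pod.yaml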
Step 2: Kubernetes Pod Lifecycle Explained
Here’s where many people get confused – let’s trace exactly what happens when you create a pod:
Pod lifecycle phases:
# Watch a pod go through its lifecycle
kubectl get pods my-first-pod -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,CONDITIONS:.status.conditions[*].type
# Possible statuses you'll see:
# Pending: Accepted by the cluster; waiting to be scheduled or for images to pull
# ContainerCreating: Shown by `kubectl get pods` while the node pulls the image and starts the container (the phase is still Pending)
# Running: All containers are running
# Succeeded: Pod completed successfully (for batch jobs)
# Failed: Pod failed and won't be restarted
# Unknown: Can't communicate with the node

Deep dive into pod conditions:
# Get detailed status information
kubectl get pod my-first-pod -o jsonpath='{.status.conditions[*]}' | jq
# You'll see conditions like:
# PodScheduled: true/false - Has scheduler assigned a node?
# Initialized: true/false - Have init containers completed?
# ContainersReady: true/false - Are all containers ready?
# Ready: true/false - Is pod ready to receive traffic?
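The phase is only a coarse summary; the container-level state underneath it is often more telling. A couple of optional commands to peek at it, assuming the my-first-pod from Step 1 is still running:
# Show the container state (waiting, running, or terminated) behind the pod phase
kubectl get pod my-first-pod -o jsonpath='{.status.containerStatuses[0].state}'
# Restart count is a quick signal for crash loops
kubectl get pod my-first-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'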
Step 3: Adding Resource Management
Real-world pods need resource limits and requests. Here’s how to do it properly:
# resource-managed-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
  labels:
    app: resource-learning
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: web-app
    image: nginx:alpine
    resources:
      requests:           # Minimum resources needed
        cpu: "100m"       # 100 millicores (0.1 CPU core)
        memory: "128Mi"   # 128 mebibytes
      limits:             # Maximum resources allowed
        cpu: "500m"       # 500 millicores (0.5 CPU core)
        memory: "256Mi"   # 256 mebibytes
    ports:
    - containerPort: 80
🛠️ Your Turn: Create this pod and observe resource allocation with:
kubectl apply -f resource-managed-pod.yaml
kubectl describe pod resource-demo | grep -A 10 "Requests\|Limits"
kubectl top pod resource-demo
Why resource management matters:
# Create the resource-managed pod
kubectl apply -f resource-managed-pod.yaml
# Check how resources are allocated
kubectl describe pod resource-demo | grep -A 10 "Requests\|Limits"
# See resource usage in real-time
kubectl top pod resource-demo
# Compare to the node's total capacity
kubectl describe node | grep -A 5 "Allocatable:"
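Requests and limits also determine the pod's Quality of Service class, which Kubernetes records on the pod itself. Because resource-demo requests less than its limits, you should see Burstable here. A quick check:
# Inspect the QoS class Kubernetes assigned based on requests vs limits
kubectl get pod resource-demo -o jsonpath='{.status.qosClass}'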
Step 4: Environment Variables and Configuration
Most applications need configuration. Here’s how to provide it:
# configured-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configured-app
  labels:
    app: config-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: nginx:alpine
    env:
    - name: ENVIRONMENT
      value: "development"
    - name: LOG_LEVEL
      value: "debug"
    - name: DATABASE_URL
      value: "postgresql://localhost:5432/myapp"
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    ports:
    - containerPort: 80
🛠️ Your Turn: Create the configured pod and explore environment variables:
kubectl apply -f configured-pod.yaml
kubectl exec configured-app -- env | grep -E "ENVIRONMENT|LOG_LEVEL|POD_NAME|NODE_NAME"
kubectl exec configured-app -- printenv POD_NAME NODE_NAME
Testing environment variable injection:
# Create the configured pod
kubectl apply -f configured-pod.yaml
# Check environment variables inside the container
kubectl exec configured-app -- env | grep -E "ENVIRONMENT|LOG_LEVEL|POD_NAME|NODE_NAME"
# See how Kubernetes populates dynamic values
kubectl exec configured-app -- printenv POD_NAME NODE_NAME
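Hard-coding values in the manifest is fine for learning, but in practice configuration usually lives in a ConfigMap and gets pulled in with envFrom. A minimal sketch, assuming a hypothetical ConfigMap named app-config (ConfigMaps get their own treatment later in the series):
# app-config.yaml (hypothetical ConfigMap for illustration)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  ENVIRONMENT: "development"
  LOG_LEVEL: "debug"
# In the pod spec, the individual env entries could then be replaced with:
#   envFrom:
#   - configMapRef:
#       name: app-config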
Step 5: Health Checks and Readiness
Production pods need health checks. Here’s how to implement them correctly:
# healthy-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: healthy-app
  labels:
    app: health-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: web-server
    image: nginx:alpine
    ports:
    - containerPort: 80
    # Liveness probe: Is the container healthy? (Use a separate endpoint in production)
    livenessProbe:
      httpGet:
        path: /                 # In production, use /healthz or a dedicated health endpoint
        port: 80
      initialDelaySeconds: 30   # Wait 30s before first check
      periodSeconds: 10         # Check every 10 seconds
      failureThreshold: 3       # Restart after 3 failures
    # Readiness probe: Is the container ready for traffic? (Separate endpoint recommended)
    readinessProbe:
      httpGet:
        path: /                 # In production, use /ready or a separate readiness endpoint
        port: 80
      initialDelaySeconds: 5    # Check readiness sooner
      periodSeconds: 5          # Check more frequently
      failureThreshold: 3       # Mark unready after 3 failures
    resources:
      requests:
        cpu: "50m"
        memory: "64Mi"
      limits:
        cpu: "200m"
        memory: "128Mi"
💡 Production Best Practice: Use separate endpoints like /healthz for liveness and /ready for readiness probes. This allows different logic for “restart me” vs “don’t send traffic.”
Example exec probe for non-HTTP services:
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - "pg_isready -U myuser -d mydb"
  initialDelaySeconds: 30
  periodSeconds: 10
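If the service speaks a non-HTTP protocol and you don't want to shell out to a CLI tool, a tcpSocket probe that simply checks the port is open is often enough. A sketch, using PostgreSQL's default port as an example:
readinessProbe:
  tcpSocket:
    port: 5432
  initialDelaySeconds: 5
  periodSeconds: 10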
🛠️ Your Turn: Create this pod and watch health check behavior:
kubectl apply -f healthy-pod.yaml
kubectl get pod healthy-app -w
kubectl describe pod healthy-app | grep -A 5 "Liveness\|Readiness"
Understanding health check behavior:
# Create the healthy pod
kubectl apply -f healthy-pod.yaml
# Watch the readiness transition
kubectl get pod healthy-app -w
# Check health check history
kubectl describe pod healthy-app | grep -A 5 "Liveness\|Readiness"
# Simulate a health check failure (advanced)
kubectl exec healthy-app -- rm /usr/share/nginx/html/index.html
# Watch the pod get restarted due to liveness probe failure
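To confirm the restart really came from the liveness probe rather than something else, the pod's events and restart count tell the story. An optional follow-up:
# Look for "Liveness probe failed" and "Killing" events for this pod
kubectl get events --field-selector involvedObject.name=healthy-app
# The restart count increments each time the kubelet restarts the container
kubectl get pod healthy-app -o jsonpath='{.status.containerStatuses[0].restartCount}'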
🧠 Knowledge Check
- Q: What happens if you don’t specify resource requests for a pod? (Answer: The scheduler treats it as requiring zero resources, which can lead to node overcommitment and performance issues)
- Q: Can a pod have multiple containers sharing the same port? (Answer: No, containers in the same pod share a network namespace, so ports must be unique)
- Q: What’s the difference between liveness and readiness probes? (Answer: Liveness determines if the container should be restarted; readiness determines if it should receive traffic)
Step 6: Multi-Container Pods (Advanced Pattern)
While most pods have one container, sometimes you need multiple. Here’s a real-world example:
# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  labels:
    app: multi-container-demo
spec:
  # Security context applies to all containers (principle of least privilege)
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  containers:
  # Main application container
  - name: web-server
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  # Sidecar container for log processing
  - name: log-processor
    image: busybox:latest
    # Create the log file first so tail doesn't exit before nginx has written anything
    command: ['sh', '-c', 'touch /shared/access.log && tail -f /shared/access.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /shared
  # Shared volume for log files
  volumes:
  - name: shared-logs
    emptyDir:
      sizeLimit: 1Gi   # Prevent disk exhaustion
# Note: emptyDir is ephemeral - data is lost when the pod is deleted
# Use persistent volumes for data that should survive pod restarts
🛠️ Your Turn: Create the multi-container pod and explore both containers:
kubectl apply -f multi-container-pod.yaml
kubectl get pod web-with-sidecar -o jsonpath='{.status.containerStatuses[*].name}'
kubectl logs web-with-sidecar -c web-server
kubectl logs web-with-sidecar -c log-processor
kubectl exec web-with-sidecar -c log-processor -- ls -la /shared
Why use multi-container pods?
# Create the multi-container pod
kubectl apply -f multi-container-pod.yaml
# Check both containers are running
kubectl get pod web-with-sidecar -o jsonpath='{.status.containerStatuses[*].name}'
# Access logs from different containers
kubectl logs web-with-sidecar -c web-server
kubectl logs web-with-sidecar -c log-processor
# Execute commands in specific containers
kubectl exec web-with-sidecar -c web-server -- ps aux
kubectl exec web-with-sidecar -c log-processor -- ls -la /shared
Verification Steps:
- ✅ You can write a basic pod manifest from memory
- ✅ You understand the pod lifecycle and status conditions
- ✅ You can configure resource limits and environment variables
- ✅ You know how to implement health checks properly
- ✅ You can debug pod issues systematically
Real-World Scenarios
Scenario 1: Debugging a Startup Company’s First Kubernetes Migration
The Problem: Last month, I helped a startup migrate their first application to Kubernetes. Their pod kept crashing with cryptic error messages, and the team was getting frustrated with “Kubernetes complexity.”
The failing pod configuration:
# Their original problematic pod
apiVersion: v1
kind: Pod
metadata:
  name: broken-app
spec:
  containers:
  - name: api-server
    image: company/api:latest
    ports:
    - containerPort: 3000
    env:
    - name: NODE_ENV
      value: "production"
Issues we discovered through systematic debugging:
# 1. Check basic pod status
kubectl describe pod broken-app
# Output: "ImagePullBackOff" - the image didn't exist in their registry
# 2. After fixing image, new error appeared
kubectl logs broken-app
# Output: "ECONNREFUSED" - app couldn't connect to database
# 3. Further investigation revealed missing environment variables
kubectl exec broken-app -- printenv | grep DATABASE
# Output: (empty) - no database configuration provided
The corrected, production-ready version:
# Fixed pod with proper configuration
apiVersion: v1
kind: Pod
metadata:
  name: fixed-app
  labels:
    app: api-server
    version: v1.0.0
spec:
  containers:
  - name: api-server
    image: company/api:v1.0.0   # Specific tag, not 'latest'
    ports:
    - containerPort: 3000
      name: http
    env:
    - name: NODE_ENV
      value: "production"
    - name: DATABASE_URL
      value: "postgresql://db.company.internal:5432/api"
    - name: LOG_LEVEL
      value: "info"
    # Resource limits prevent one bad pod from taking down the node
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "1000m"
        memory: "512Mi"
    # Health checks ensure reliability
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 5
Lessons learned from this migration:
- Always use specific image tags, never latest, in production
- Environment variables are critical – missing config causes cryptic failures
- Resource limits prevent cascading failures when applications misbehave
- Health checks are essential for reliable service operation
Scenario 2: High-Traffic E-commerce Pod Configuration
Airbnb’s Pod Patterns (based on public engineering talks):
# High-performance pod configuration for search API
apiVersion: v1
kind: Pod
metadata:
  name: search-api
  labels:
    app: search-api
    tier: api
    performance: high
spec:
  containers:
  - name: search-service
    image: registry.k8s.io/search-api:v2.1.0
    ports:
    - containerPort: 8080
      name: http
    # Aggressive resource allocation for high performance
    resources:
      requests:
        cpu: "1000m"    # 1 full CPU core minimum
        memory: "2Gi"   # 2GiB memory minimum
      limits:
        cpu: "4000m"    # Up to 4 CPU cores
        memory: "8Gi"   # Up to 8GiB memory
    # Fast health checks for rapid scaling
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 2
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 2
      timeoutSeconds: 1
    # Performance-critical environment variables
    env:
    - name: JAVA_OPTS
      value: "-Xms2g -Xmx6g -XX:+UseG1GC"
    - name: SEARCH_CACHE_SIZE
      value: "1000000"
# QoS note: because limits exceed requests here, this pod lands in the Burstable class.
# Set requests equal to limits if you want the Guaranteed QoS class and dedicated resources.
Key patterns for high-performance pods:
- Resource requests equal to limits: guarantees dedicated resources (QoS class: Guaranteed) – the example above leaves headroom to burst, so it lands in Burstable
- Fast health checks: Rapid detection and recovery from failures
- JVM tuning: Optimized garbage collection for consistent performance
- Specific resource allocation: Based on actual profiling, not guesswork
Common mistakes in production pod configurations:
- Using latest image tags, causing inconsistent deployments
- No resource limits, leading to noisy neighbor problems
- Missing health checks causing slow failure detection
- Inadequate environment variable validation
- Not considering pod QoS classes and their scheduling implications
Troubleshooting Tips
Common Error 1: “ImagePullBackOff”
Issue: Kubernetes can’t download the container image.
Solution:
# Check the exact error message
kubectl describe pod <pod-name> | grep -A 5 "Events:"
# Common causes and fixes:
# 1. Typo in image name
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}'
# 2. Authentication issues with private registries
kubectl create secret docker-registry my-registry-secret \
--docker-server=myregistry.io \
--docker-username=myuser \
--docker-password=mypassword \
--docker-email=myemail@example.com
# Add imagePullSecrets to pod spec:
# imagePullSecrets:
# - name: my-registry-secret
# 3. Network connectivity issues
kubectl run debug-pod --image=busybox --rm -it -- nslookup docker.io
Common Error 2: “CrashLoopBackOff”
Issue: Container starts but immediately crashes, and Kubernetes keeps restarting it.
Solution:
# Get the exit code and reason
kubectl describe pod <pod-name> | grep -A 5 "Last State"
# Check application logs for errors
kubectl logs <pod-name> --previous # Logs from the crashed container
# Check if the container has a proper startup command
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].command}'
# Common fixes:
# 1. Add proper startup command
# 2. Fix application configuration
# 3. Ensure dependencies are available
# 4. Check resource limits (app might be OOM killed)
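A quick way to pull just the last exit code and reason without scanning the full describe output (sketch):
# Exit code and reason from the last terminated state
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# Exit code 137 with reason OOMKilled usually means the memory limit was hit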
Common Error 3: “Pending” status
Issue: Pod never gets scheduled to a node.
Solution:
# Check scheduling events
kubectl describe pod <pod-name> | grep -A 10 "Events:"
# Common causes:
# 1. Insufficient resources
kubectl describe nodes | grep -A 5 "Allocated resources:"
# 2. Node selector or affinity issues
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'
# 3. Taints and tolerations
kubectl describe nodes | grep -A 3 "Taints:"
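A compact way to see every node's taints at once, which is handy when pods stay Pending on a small cluster (sketch):
# List taints per node in one table
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints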
Common Error 4: Container not responding to health checks
Issue: Readiness or liveness probes failing.
Solution:
# Check probe configuration
kubectl describe pod <pod-name> | grep -A 5 "Liveness:\|Readiness:"
# Test the endpoint manually
kubectl exec <pod-name> -- curl -f http://localhost:8080/health   # requires curl in the image; busybox/alpine images often only have wget
# Common fixes:
# 1. Increase initialDelaySeconds for slow-starting apps
# 2. Adjust probe timeouts
# 3. Fix the health check endpoint in your application
# 4. Use different probe types (tcp vs http vs exec)
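For genuinely slow-starting applications, recent Kubernetes versions offer a startupProbe, which is usually cleaner than inflating initialDelaySeconds: liveness and readiness checks don't begin until it succeeds. A sketch, reusing the same health endpoint:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30   # allow up to 30 * 10s = 5 minutes to start
  periodSeconds: 10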
Debug Commands:
# Essential pod debugging commands
kubectl get pods -o wide # Basic pod information with nodes
kubectl describe pod <pod-name> # Detailed information and events
kubectl logs <pod-name> --previous # Logs from crashed containers
kubectl logs <pod-name> -c <container-name> # Logs from specific container
kubectl exec -it <pod-name> -- /bin/sh # Interactive shell in container
# Resource and status inspection
kubectl top pod <pod-name> # Resource usage
kubectl get pod <pod-name> -o yaml # Complete pod specification
kubectl get events --sort-by=.metadata.creationTimestamp | grep <pod-name> # Recent events
# Advanced debugging
kubectl port-forward <pod-name> 8080:80 # Forward local port to pod
kubectl cp <pod-name>:/path/to/file ./file # Copy files from pod
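When the container image has no shell at all, kubectl debug (available on reasonably recent clusters) can attach an ephemeral debugging container to the running pod. A sketch:
# Attach an ephemeral busybox container that shares the target container's process namespace
kubectl debug -it <pod-name> --image=busybox --target=<container-name>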
Where to get help:
- Kubernetes Pod Documentation
- Pod Lifecycle Guide
- CNCF Slack #kubernetes-users channel
Next Steps
What’s coming next: In Post #8, we’ll explore “ReplicaSets: Ensuring Pod Availability.” You’ll discover how Kubernetes maintains the desired number of pod replicas and handles failures automatically. We’ll build on your pod expertise to understand how higher-level controllers manage pods at scale.
Additional learning:
- Experiment with init containers for setup tasks – see the sketch after this list
- Explore pod security contexts for security hardening
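As a starting point for the first bullet, here is a minimal sketch of an init container that waits for a dependency before the main container starts; the busybox command and the my-database service name are purely illustrative:
# init-demo.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:latest
    # Block until the (hypothetical) database service resolves in cluster DNS
    command: ['sh', '-c', 'until nslookup my-database; do echo waiting for my-database; sleep 2; done']
  containers:
  - name: app
    image: nginx:alpine
    ports:
    - containerPort: 80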
Practice challenges:
- Resource optimization: Create pods with different QoS classes (Guaranteed, Burstable, BestEffort) and observe scheduling behavior
- Health check mastery: Implement custom health check endpoints in a simple web application
- Multi-container scenarios: Build a pod with a main app and logging sidecar that share files via volumes
- Failure simulation: Intentionally break pods in different ways and practice systematic debugging
- Performance testing: Create resource-intensive pods and monitor their impact on node resources
Community engagement: Share your pod debugging victories! What was the most confusing pod issue you’ve solved? Which health check patterns work best for your applications? Your troubleshooting stories help others learn faster and avoid common pitfalls.
FAQ Section
Can I update a running pod’s configuration?
No, most pod specifications are immutable once created. You need to delete and recreate the pod, or better yet, use higher-level controllers like Deployments that handle updates gracefully.
Why do pods sometimes take a long time to start?
Common causes include image pull time (especially large images), slow application startup, insufficient resources, or waiting for dependencies. Use kubectl describe pod to identify bottlenecks.
What happens if a pod uses more resources than its limits?
For memory: the container is killed (OOMKilled) and restarted according to the pod’s restart policy. For CPU: the container is throttled down to its limit rather than killed. This is why proper resource planning is crucial.
Can multiple pods share the same persistent storage?
It depends on the storage type. Some storage classes support ReadWriteMany access, allowing multiple pods to mount the same volume. Others only support ReadWriteOnce (single pod access).
Should I create pods directly or use controllers like Deployments?
Almost always use controllers. Direct pod creation is mainly for learning, debugging, or very specific one-off tasks. Controllers provide benefits like automatic restarts, scaling, and rolling updates.
Complete Pod Manifest Template
# production-pod-template.yaml
apiVersion: v1
kind: Pod
metadata:
  name: production-app
  labels:
    app: my-application
    version: v1.0.0
    tier: api
  annotations:
    description: "Production-ready pod template"
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
spec:
  containers:
  - name: main-app
    image: myregistry.io/my-app:v1.0.0
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    - containerPort: 9090
      name: metrics
      protocol: TCP
    # Resource management
    resources:
      requests:
        cpu: "200m"
        memory: "256Mi"
      limits:
        cpu: "1000m"
        memory: "512Mi"
    # Environment variables
    env:
    - name: ENV
      value: "production"
    - name: LOG_LEVEL
      value: "info"
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    # Health checks
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 3
      failureThreshold: 2
    # Volume mounts
    volumeMounts:
    - name: app-config
      mountPath: /etc/config
      readOnly: true
    - name: temp-storage
      mountPath: /tmp
  # Pod-level configuration
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  # Security context
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
  # Image pull secrets for private registries
  imagePullSecrets:
  - name: registry-secret
  # Volumes
  volumes:
  - name: app-config
    configMap:
      name: app-config
  - name: temp-storage
    emptyDir:
      sizeLimit: 1Gi
🔗 Series Navigation
Previous: Post #6 – Kube-Scheduler in Action: How Pods Find Their Home
Next: Post #8 – ReplicaSets: Ensuring Pod Availability
Progress: You’re now 10% through the Kubernetes Fundamentals series! 🎉
💡 Pro Tip: Always start with the simplest possible pod configuration and add complexity incrementally. Every additional feature (health checks, resource limits, environment variables) should solve a specific problem you’ve identified, not just be copied from examples.
📧 Never miss an update: Subscribe to get notified when new posts in this series are published. Next, we’re exploring ReplicaSets – the controllers that ensure your pods stay running even when things go wrong!
Tags: kubernetes, pods, containers, yaml, pod-lifecycle, debugging, health-checks, resources, multi-container, troubleshooting, cka-prep
