Master Kubernetes Namespaces: Organizing Your Cluster Like a Pro 2025
Post 9 of 70 in the series “Mastering Kubernetes: A Practical Journey from Beginner to CKA”
🔥 TL;DR
- Namespaces provide virtual clusters within a physical cluster – essential for multi-tenant environments
- Resource quotas and limits prevent one team’s workloads from starving another’s resources
- Network policies enable secure isolation between different environments and teams
- RBAC with namespaces creates fine-grained access control for enterprise security
- Proper namespace design scales from development teams to enterprise multi-tenancy
Introduction: Kubernetes Namespaces
Imagine you’re managing a large apartment building where different families live on different floors. Each family needs their own space, utilities shouldn’t interfere with each other, and the building manager needs to control access and resource usage per floor. That’s exactly what namespaces do for Kubernetes clusters – they create logical boundaries that enable multiple teams, environments, and applications to coexist safely in the same physical infrastructure.
What we’ll learn today:
- Creating and managing namespaces for different teams and environments
- Implementing resource quotas to prevent resource starvation and ensure fair sharing
- Setting up network policies for secure inter-namespace communication
- Building enterprise-grade multi-tenant cluster architectures
Why this matters: I’ve seen companies struggle with resource conflicts, security breaches, and operational chaos because they threw everything into the default namespace. I’ve also helped enterprises save millions by properly implementing multi-tenancy instead of spinning up separate clusters for every team. Understanding namespaces isn’t just about organization – it’s about cost efficiency, security, and operational sanity at scale. Companies like Airbnb and Spotify run hundreds of services across dozens of teams using sophisticated namespace strategies.
Series context: In our previous post, we mastered managing multiple pod replicas with Deployments and ReplicaSets. Now we’re scaling up organizationally – how do you manage multiple teams, environments, and applications in the same cluster without chaos? Namespaces provide the logical boundaries that make shared Kubernetes clusters practical and secure.
Prerequisites
What you need to know:
- Pod and Deployment fundamentals (covered in Posts #7-8)
- Basic understanding of Kubernetes resources (Services, ConfigMaps, Secrets)
- RBAC concepts from earlier posts
- kubectl command basics
📌 Quick Refresher: A namespace is like a virtual cluster inside your physical cluster. Most Kubernetes resources are namespaced (pods, services, deployments) while others are cluster-wide (nodes, persistent volumes, namespaces themselves).
Tools required:
- Access to a Kubernetes cluster with admin privileges
- kubectl configured and working
- Ability to create and modify RBAC resources
- Text editor for YAML manifests
Previous posts to read:
- Post #8: ReplicaSets vs Deployments (essential for understanding what gets organized)
- Post #5: Kube-APIServer (helpful for understanding RBAC integration)
Estimated time: 50-60 minutes including hands-on multi-tenant setup and testing
Step-by-Step Tutorial
Theory First: Understanding Namespace Architecture
Namespaces aren’t just folders – they’re active boundaries with security, networking, and resource implications:

Why doesn’t Kubernetes just use folders or labels to organize resources?
Namespaces provide active isolation – they can enforce resource limits, network policies, and RBAC boundaries. Labels are just metadata; namespaces are functional boundaries that the Kubernetes control plane actively enforces.
Step 1: Exploring Default Namespaces
Let’s start by understanding what’s already in your cluster:
# List all namespaces
kubectl get namespaces
# You'll see these default namespaces:
# default - where resources go if no namespace specified
# kube-system - Kubernetes control plane components
# kube-public - publicly readable by all users
# kube-node-lease - node heartbeat objects (Kubernetes 1.13+)
# See what's in each namespace
kubectl get pods -n kube-system
kubectl get pods -n default
kubectl get pods --all-namespaces
Understanding namespace scope:
# Some resources are namespaced
kubectl api-resources --namespaced=true | head -10
# Others are cluster-wide
kubectl api-resources --namespaced=false | head -10
# Check current namespace context
kubectl config view --minify | grep namespace
🛠️ Your Turn: Explore your cluster’s namespaces and understand what’s running where:
kubectl get namespaces --show-labels
kubectl get pods --all-namespaces -o wide
kubectl describe namespace kube-system | head -20
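If you switch namespaces frequently, the community kubectx/kubens tools save a lot of typing. A quick sketch, assuming you have kubens installed (for example via krew or your package manager):
# List namespaces and highlight the current one
kubens
# Point the current context at the development namespace
kubens development
# Jump back to the previous namespace
kubens -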
Step 2: Creating Your First Kubernetes Namespace
Let’s create namespaces for a realistic development scenario:
# development-namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    environment: dev
    team: platform
    cost-center: engineering
  annotations:
    description: "Development environment for testing"
    owner: "platform-team@company.com"
  finalizers:
  - kubernetes  # Blocks deletion until the finalizer is removed (the namespace hangs in Terminating)
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    environment: staging
    team: platform
    cost-center: engineering
  annotations:
    description: "Staging environment for pre-production testing"
    owner: "platform-team@company.com"
  finalizers:
  - kubernetes
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: prod
    team: platform
    cost-center: engineering
  annotations:
    description: "Production environment - handle with care"
    owner: "platform-team@company.com"
  finalizers:
  - kubernetes  # Crude deletion guard; see the admission webhook pattern later in this post
Creating and working with namespaces:
# Create the namespaces
kubectl apply -f development-namespaces.yaml
# Verify creation
kubectl get namespaces --show-labels
# Set your current context to use a specific namespace
kubectl config set-context --current --namespace=development
# Verify context change
kubectl config view --minify | grep namespace
# Create a deployment in the development namespace
kubectl create deployment nginx-dev --image=nginxinc/nginx-unprivileged:alpine
kubectl get deployments # Should see nginx-dev
# Create the same deployment in production
kubectl create deployment nginx-prod --image=nginxinc/nginx-unprivileged:alpine -n production
kubectl get deployments -n production
🛠️ Your Turn: Create the namespaces and deploy applications to different environments:
kubectl apply -f development-namespaces.yaml
kubectl config set-context --current --namespace=development
kubectl create deployment test-app --image=nginxinc/nginx-unprivileged:alpine
kubectl get pods # Should show pods in development namespace
kubectl get pods -n production # Should be empty
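Rather than relying on your kubectl context, you can also pin the target namespace in the manifest itself. A minimal sketch that mirrors the nginx-dev deployment created above:
# deployment-with-namespace.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dev
  namespace: development   # kubectl apply honors this even if your context points elsewhere
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dev
  template:
    metadata:
      labels:
        app: nginx-dev
    spec:
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged:alpine
        ports:
        - containerPort: 8080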
Step 3: Implementing Resource Quotas
Now let’s add resource limits to prevent any namespace from consuming all cluster resources:
# resource-quotas.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: development-quota
  namespace: development
spec:
  hard:
    # Compute resources
    requests.cpu: "4"        # Total CPU requests
    requests.memory: 8Gi     # Total memory requests
    limits.cpu: "8"          # Total CPU limits
    limits.memory: 16Gi      # Total memory limits
    # Object counts
    pods: "20"               # Maximum number of pods
    services: "10"           # Maximum number of services
    secrets: "10"            # Maximum number of secrets
    configmaps: "10"         # Maximum number of configmaps
    # Storage
    persistentvolumeclaims: "4"  # Maximum PVCs
    requests.storage: 100Gi      # Total storage requests
    # Extended resources (illustrative example)
    requests.example.com/custom-metric: "100"
  # Note: quota scopes such as NotTerminating restrict a quota to pod/compute
  # resources only, so they can't be combined with the object counts above.
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    # Production gets more resources (with 15% buffer for updates)
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "100"
    services: "50"
    secrets: "50"
    configmaps: "50"
    persistentvolumeclaims: "20"
    requests.storage: 500Gi
    # GPU resources for ML workloads
    requests.nvidia.com/gpu: "4"
💡 Quota Buffer Warning: Always leave 10-15% headroom in quotas to avoid "quota jail", where rolling updates fail because the surge pods push the namespace past its limits.
Testing resource quota enforcement:
# Apply the quotas
kubectl apply -f resource-quotas.yaml
# Check quota status
kubectl describe quota -n development
kubectl describe quota -n production
# Create a pod that exceeds quota (this should fail)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-hog
  namespace: development
spec:
  containers:
  - name: app
    image: nginxinc/nginx-unprivileged:alpine
    resources:
      requests:
        cpu: "5"       # Exceeds development quota of 4 CPU
        memory: "10Gi" # Exceeds development quota of 8Gi
EOF
# Should see error: "exceeded quota"
# Note: Resource quotas DO affect Deployments - when the Deployment controller
# tries to create pods, those pods count against the namespace quota
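When a Deployment is blocked by a quota, the failure surfaces on its ReplicaSet rather than the Deployment itself. A quick way to spot it (a sketch; exact event wording can vary by version):
# Quota failures show up as FailedCreate events on the ReplicaSet
kubectl describe replicaset -n development | grep -i "exceeded quota"
kubectl get events -n development --field-selector reason=FailedCreate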
Monitoring namespace resource usage:
# Monitor namespace resource consumption (requires metrics-server)
kubectl top pods --all-namespaces
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/development/pods | jq
# Detailed namespace monitoring
kubectl describe quota -n development
kubectl top pods -n development
# Cost allocation tracking (example assumes OpenCost is installed in an "opencost" namespace)
kubectl port-forward -n opencost svc/opencost 9003 &
curl -s 'http://localhost:9003/allocation?window=1d&aggregate=namespace' | jq
Step 4: Implementing Network Policies for Isolation
Let’s add network security between namespaces:
⚠️ Prerequisites: Network policies require a CNI plugin that supports them (Calico, Cilium, Weave Net, etc.). They won’t work with basic bridge or flannel CNI.
# network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: development-isolation
  namespace: development
spec:
  podSelector: {}  # Apply to all pods in namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from same namespace
  - from:
    - namespaceSelector:
        matchLabels:
          environment: dev
  # Allow traffic from staging for testing
  - from:
    - namespaceSelector:
        matchLabels:
          environment: staging
  egress:
  # Allow DNS resolution (critical!)
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Allow all other egress (can be restricted further)
  - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: production-isolation
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Production only accepts traffic from itself
  - from:
    - namespaceSelector:
        matchLabels:
          environment: prod
  # Allow monitoring from kube-system
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
  egress:
  # Allow DNS (essential for service discovery)
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Restricted egress - only same namespace
  - to:
    - namespaceSelector:
        matchLabels:
          environment: prod
  # Allow external HTTPS/HTTP traffic (internet, databases)
  - ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 80
💡 Network Policy Tip: Visualization tools such as Cilium's online network policy editor or kubectl krew plugins for viewing policies can help you map policy relationships and debug connectivity issues.
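A common baseline not shown above is to start every namespace with a default-deny policy and layer allow rules on top. A minimal sketch for the development namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: development
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress/egress rules are defined, so all traffic is denied until other policies allow it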
Testing network isolation:
# Apply network policies (requires CNI that supports NetworkPolicy)
kubectl apply -f network-policies.yaml
# Create a test client in development and a web Service in production
kubectl run test-dev --image=busybox --restart=Never -n development -- sleep 3600
kubectl run web-prod --image=nginxinc/nginx-unprivileged:alpine --port=8080 --expose -n production
# Test connectivity from development to production (should fail)
kubectl exec -n development test-dev -- wget -qO- --timeout=5 web-prod.production.svc.cluster.local:8080 || echo "Connection blocked by network policy"
# Test connectivity within the development namespace (should work)
kubectl run web-dev --image=nginxinc/nginx-unprivileged:alpine --port=8080 --expose -n development
kubectl exec -n development test-dev -- wget -qO- --timeout=5 web-dev.development.svc.cluster.local:8080
🛠️ Your Turn: Test network isolation between namespaces:
kubectl apply -f network-policies.yaml
kubectl run busybox-dev --image=busybox --restart=Never -n development -- sleep 3600
kubectl run busybox-prod --image=busybox --restart=Never -n production -- sleep 3600
# Try cross-namespace communication and observe blocking
Step 5: RBAC with Kubernetes Namespaces
Create role-based access control for different teams:
# namespace-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: developer
  namespace: development
automountServiceAccountToken: false  # Security best practice
---
# Dedicated service account for applications
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: development
automountServiceAccountToken: true  # Apps need tokens for API access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: developer-role
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: development
subjects:
- kind: ServiceAccount
  name: developer
  namespace: development
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
---
# Production access - more restrictive
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: production-readonly
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: production-readonly-binding
  namespace: production
subjects:
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: production-readonly
  apiGroup: rbac.authorization.k8s.io
💡 Service Account Best Practice: Create dedicated service accounts per application, and disable automatic token mounting for service accounts used by humans to reduce security risk.
Testing RBAC permissions:
# Apply RBAC configuration
kubectl apply -f namespace-rbac.yaml
# Check your current identity
kubectl auth whoami # Shows which user/ServiceAccount you are authenticated as (kubectl 1.27+)
# Test permissions as the developer service account
kubectl auth can-i create pods --as=system:serviceaccount:development:developer -n development
# Should return: yes
kubectl auth can-i delete deployments --as=system:serviceaccount:development:developer -n production
# Should return: no
# Test cross-namespace access
kubectl auth can-i get pods --as=system:serviceaccount:development:developer -n production
# Should return: no (no permissions in production namespace)
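Impersonation with --as is the quickest test, but you can also mint a real token for the developer ServiceAccount to hand to CI jobs or teammates. A sketch, assuming kubectl 1.24 or newer:
# Create a short-lived token bound to the developer ServiceAccount
kubectl create token developer -n development --duration=1h
# Put the token into a dedicated kubeconfig user/context to act as that ServiceAccount,
# or keep using --as impersonation (shown above) for ad-hoc permission checks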
🧠 Knowledge Check
What happens if you don’t specify a namespace when creating a resource?
It goes into your current context’s namespace, or ‘default’ if none is set
Can a pod in one namespace access a service in another namespace?
Yes, using the FQDN: service-name.namespace.svc.cluster.local, unless blocked by NetworkPolicy
Do resource quotas affect Deployments when they create pods?
Yes! When Deployment controllers create pods, those pods count against the namespace quota and can cause deployment failures
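To make the cross-namespace answer above concrete, here's a minimal sketch; the postgres Service in shared-services is hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
  namespace: development
spec:
  containers:
  - name: client
    image: busybox
    # Resolves a Service in another namespace via its fully qualified name
    command: ["sh", "-c", "nslookup postgres.shared-services.svc.cluster.local && sleep 3600"]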
Step 6: Multi-Tenant Cluster Architecture
Let’s build a realistic multi-tenant setup for multiple teams with hierarchical organization:
💡 Enterprise Pattern: Consider Hierarchical Namespace Controller (HNC) for large organizations that need parent-child namespace relationships with inherited policies.
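For reference, this is roughly what HNC usage looks like, assuming the HNC controller and its kubectl-hns plugin are installed (commands may differ between HNC releases):
# Create "team-frontend" as a subnamespace of a parent "engineering" namespace;
# RBAC and NetworkPolicy objects in the parent propagate to the child
kubectl hns create team-frontend -n engineering
kubectl hns tree engineering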
# multi-tenant-setup.yaml
# Team A - Frontend team
apiVersion: v1
kind: Namespace
metadata:
  name: team-frontend
  labels:
    team: frontend
    cost-center: engineering
    tier: application
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: frontend-quota
  namespace: team-frontend
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
    services: "20"
---
# Team B - Backend team
apiVersion: v1
kind: Namespace
metadata:
  name: team-backend
  labels:
    team: backend
    cost-center: engineering
    tier: application
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: backend-quota
  namespace: team-backend
spec:
  hard:
    requests.cpu: "15"
    requests.memory: 30Gi
    limits.cpu: "30"
    limits.memory: 60Gi
    pods: "75"
    services: "30"
---
# Shared services namespace
apiVersion: v1
kind: Namespace
metadata:
  name: shared-services
  labels:
    team: platform
    cost-center: infrastructure
    tier: platform
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: shared-services-quota
  namespace: shared-services
spec:
  hard:
    requests.cpu: "5"
    requests.memory: 10Gi
    limits.cpu: "10"
    limits.memory: 20Gi
    pods: "25"
    services: "15"
Implementing team-specific network policies:
# team-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: team-frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from same team
  - from:
    - namespaceSelector:
        matchLabels:
          team: frontend
  # Allow traffic from backend APIs
  - from:
    - namespaceSelector:
        matchLabels:
          team: backend
  egress:
  # Can call backend services
  - to:
    - namespaceSelector:
        matchLabels:
          team: backend
  # Can call shared services
  - to:
    - namespaceSelector:
        matchLabels:
          team: platform
  # External traffic
  - {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: team-backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow from frontend
  - from:
    - namespaceSelector:
        matchLabels:
          team: frontend
  # Allow from same team
  - from:
    - namespaceSelector:
        matchLabels:
          team: backend
  egress:
  # Can call shared services
  - to:
    - namespaceSelector:
        matchLabels:
          team: platform
  # External traffic (databases, APIs)
  - {}
🛠️ Your Turn: Set up a complete multi-tenant environment:
kubectl apply -f multi-tenant-setup.yaml
kubectl apply -f team-network-policies.yaml
# Deploy applications to each team namespace
kubectl create deployment frontend-app --image=nginxinc/nginx-unprivileged:alpine -n team-frontend
kubectl create deployment backend-app --image=nginxinc/nginx-unprivileged:alpine -n team-backend
kubectl create deployment shared-db --image=postgres:alpine -n shared-services
kubectl set env deployment/shared-db POSTGRES_PASSWORD=dev-only-password -n shared-services # postgres won't start without a password
# Verify isolation and resource usage
kubectl describe quota -n team-frontend
kubectl describe quota -n team-backend
Verification Steps:
- ✅ You can create and manage multiple namespaces
- ✅ You understand how resource quotas prevent resource starvation
- ✅ You can implement network policies for namespace isolation
- ✅ You’ve configured RBAC for team-based access control
- ✅ You can design multi-tenant cluster architectures
Real-World Scenarios
Scenario 1: Enterprise Multi-Tenant Platform
The Challenge: Last year, I worked with a Fortune 500 company that needed to consolidate 50+ development teams onto a shared Kubernetes platform while maintaining strict cost control, security isolation, and compliance requirements.
Multi-tenant architecture design:
# Enterprise namespace strategy
# Format: {environment}-{team}-{component}
# Examples: prod-payments-api, dev-analytics-frontend, staging-auth-backend
# Cost center quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: payments-team-quota
  namespace: prod-payments-api
spec:
  hard:
    # Based on team budget allocation
    requests.cpu: "50"       # $2000/month budget
    requests.memory: 100Gi
    limits.cpu: "100"
    limits.memory: 200Gi
    # Compliance limits
    pods: "200"
    secrets: "100"           # Audit requirement
    persistentvolumeclaims: "50"
    # Cost control
    requests.nvidia.com/gpu: "4"  # GPU budget limit
---
# Security policies for PCI compliance
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pci-compliance-policy
  namespace: prod-payments-api
spec:
  podSelector:
    matchLabels:
      compliance: pci-dss
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Only allow traffic from approved namespaces
  - from:
    - namespaceSelector:
        matchLabels:
          security-zone: pci-approved
  egress:
  # Restrict egress to approved destinations
  - to:
    - namespaceSelector:
        matchLabels:
          security-zone: pci-approved
  # Allow specific external services (payment processors) over HTTPS
  - ports:
    - protocol: TCP
      port: 443
Automated namespace provisioning:
# GitOps-driven namespace creation
# Teams request namespaces via pull requests
# Automated pipeline validates and creates resources
# Automated namespace provisioning with safety guards
#!/bin/bash
TEAM=$1
ENVIRONMENT=$2
COST_CENTER=$3
# Safety check - prevent accidental deletion
echo "WARNING: Creating namespace ${ENVIRONMENT}-${TEAM} with cascading deletion protection"
# Create namespace with proper labels and finalizers
kubectl create namespace ${ENVIRONMENT}-${TEAM}
kubectl label namespace ${ENVIRONMENT}-${TEAM} \
team=${TEAM} \
environment=${ENVIRONMENT} \
cost-center=${COST_CENTER} \
managed-by=platform-team
# Add a finalizer that blocks deletion until it is manually removed (the namespace hangs in Terminating otherwise)
kubectl patch namespace ${ENVIRONMENT}-${TEAM} --patch '{"metadata":{"finalizers":["kubernetes"]}}'
# Apply GitOps-compatible annotations
kubectl annotate namespace ${ENVIRONMENT}-${TEAM} \
"provisioned-by=namespace-operator" \
"provisioned-at=$(date -Iseconds)"
Production safety measures:
# namespace-admission-controller.yaml (skeleton - the webhook service below is a placeholder)
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: namespace-deletion-guard
webhooks:
- name: prevent-critical-namespace-deletion.company.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:                    # Placeholder: point at your own webhook deployment
      name: namespace-guard
      namespace: platform-system
      path: /validate
  rules:
  - operations: ["DELETE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["namespaces"]
  # The webhook backend would reject DELETE requests for namespaces carrying critical labels/annotations
Namespace deletion safety:
# Always use dry-run first to see what would be deleted
echo "WARNING: Deleting namespaces destroys ALL contained resources!"
kubectl delete namespace test --dry-run=server
# Verify what's in the namespace before deletion
kubectl get all -n test
kubectl describe namespace test
# Safe deletion process: confirm nothing important remains, then delete and wait for cleanup
kubectl delete namespace test --wait=true
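If a namespace ever hangs in Terminating (usually because of a leftover finalizer or an unreachable aggregated API), this sketch helps find the culprit:
# Show finalizers that are still holding the namespace open
kubectl get namespace test -o jsonpath='{.spec.finalizers}{"\n"}{.metadata.finalizers}{"\n"}'
# List every namespaced resource type that still has objects in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n test --ignore-not-found --no-headers 2>/dev/null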
Results achieved:
- Consolidated from 50+ clusters to 5 shared clusters
- Reduced infrastructure costs by 60% while improving resource utilization
- Automated compliance reporting through namespace labeling
- Zero cross-team security incidents in 18 months
- Self-service namespace provisioning reduced platform team workload
Scenario 2: SaaS Platform Tenant Isolation
Shopify’s Multi-Tenant Strategy (based on public engineering talks):
# Customer isolation pattern
# Each major customer gets their own namespace for isolation
apiVersion: v1
kind: Namespace
metadata:
  name: customer-acme-corp
  labels:
    customer: acme-corp
    tier: enterprise
    region: us-east-1
    compliance: soc2
  annotations:
    customer-id: "12345"
    billing-plan: "enterprise"
    support-tier: "premium"
---
# Customer-specific resource quotas based on billing plan
apiVersion: v1
kind: ResourceQuota
metadata:
  name: enterprise-customer-quota
  namespace: customer-acme-corp
spec:
  hard:
    # Enterprise plan limits
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    # API rate limiting through pod counts
    pods: "500"
    services: "100"
    # Storage based on plan
    requests.storage: 1Ti
    persistentvolumeclaims: "100"
---
# Strict network isolation for customer data
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: customer-isolation
  namespace: customer-acme-corp
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Only allow traffic from customer's namespace
  - from:
    - namespaceSelector:
        matchLabels:
          customer: acme-corp
  # Allow platform services (monitoring, logging)
  - from:
    - namespaceSelector:
        matchLabels:
          tier: platform
  egress:
  # Customer can only access their own services
  - to:
    - namespaceSelector:
        matchLabels:
          customer: acme-corp
  # Allow external APIs (customer integrations)
  - {}
Key patterns for SaaS multi-tenancy:
- Customer-specific namespaces: Each major customer gets isolated environment
- Billing-based quotas: Resource limits tied to customer subscription plans
- Compliance labeling: Namespace labels drive compliance and audit reporting
- Network isolation: Strict policies prevent cross-customer data access
- Automated scaling: Quotas automatically adjust based on billing plan changes
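As a sketch of that last pattern, a billing webhook or scheduled job could resize the quota when a customer's plan changes; the plan-to-resource mapping below is illustrative only:
#!/bin/bash
# Usage: ./resize-quota.sh customer-acme-corp enterprise
NAMESPACE=$1
PLAN=$2
case "$PLAN" in
  starter)    CPU="10";  MEM="20Gi"  ;;
  business)   CPU="50";  MEM="100Gi" ;;
  enterprise) CPU="100"; MEM="200Gi" ;;
  *) echo "unknown plan: $PLAN" >&2; exit 1 ;;
esac
# Patch the existing quota in place; pods that are already running are unaffected
kubectl patch resourcequota enterprise-customer-quota -n "$NAMESPACE" --type merge \
  --patch "{\"spec\":{\"hard\":{\"requests.cpu\":\"$CPU\",\"requests.memory\":\"$MEM\"}}}"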
Common enterprise mistakes I’ve observed:
- Putting all environments in the default namespace (no isolation)
- Setting resource quotas too low, causing unexpected deployment failures
- Not implementing network policies, allowing unrestricted cross-namespace communication
- Missing RBAC configuration, giving teams access to all namespaces
- Not planning for namespace lifecycle management (creation, updates, deletion)
Troubleshooting Tips
Common Error 1: “Forbidden: exceeded quota”
Issue: Resource creation fails due to namespace quota limits.
Solution:
# Check current quota usage
kubectl describe quota -n <namespace>
# See what's consuming resources
kubectl top pods -n <namespace>
kubectl get pods -n <namespace> -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu,MEMORY:.spec.containers[*].resources.requests.memory
# Options to fix:
# 1. Delete unused resources
kubectl get deployments -n <namespace>
kubectl delete deployment <unused-deployment> -n <namespace>
# 2. Increase quota (if justified)
kubectl patch resourcequota <quota-name> -n <namespace> --patch '{"spec":{"hard":{"requests.cpu":"10"}}}'
# 3. Optimize resource requests
kubectl patch deployment <deployment> -n <namespace> --patch '{"spec":{"template":{"spec":{"containers":[{"name":"<container>","resources":{"requests":{"cpu":"100m"}}}]}}}}'
Common Error 2: “Service not found” across namespaces
Issue: Application can't reach services in other namespaces.
Solution:
# Use fully qualified domain names (FQDN)
# Format: service-name.namespace.svc.cluster.local
# Test DNS resolution
kubectl run debug-pod --image=busybox --restart=Never -n <source-namespace> -- nslookup service-name.target-namespace.svc.cluster.local
# Check network policies
kubectl describe networkpolicy -n <source-namespace>
kubectl describe networkpolicy -n <target-namespace>
# Verify service exists
kubectl get services -n <target-namespace>
# Test connectivity
kubectl exec debug-pod -n <source-namespace> -- wget -qO- --timeout=5 service-name.target-namespace.svc.cluster.local:80
Common Error 3: “Cannot create resource in namespace”
Issue: RBAC permissions don't allow resource creation.
Solution:
# Check your permissions
kubectl auth can-i create pods -n <namespace>
kubectl auth can-i create deployments -n <namespace>
# Check which user/service account you're using
kubectl config view --minify
# Verify role bindings
kubectl describe rolebinding -n <namespace>
kubectl describe clusterrolebinding | grep <your-user>
# Check if namespace exists
kubectl get namespace <namespace>
# Create namespace if missing
kubectl create namespace <namespace>
Common Error 4: Network policy blocking expected traffic
Issue: Network policies are too restrictive.
Solution:
# List all network policies affecting the namespace
kubectl get networkpolicy -n <namespace>
# Check policy details
kubectl describe networkpolicy <policy-name> -n <namespace>
# Temporarily remove policies for testing (use carefully)
kubectl delete networkpolicy <policy-name> -n <namespace>
# Test connectivity without policies
kubectl exec <pod> -n <source-namespace> -- wget -qO- --timeout=5 <target-service>.<target-namespace>.svc.cluster.local
# Add specific allow rules
kubectl patch networkpolicy <policy-name> -n <namespace> --patch '{"spec":{"ingress":[{"from":[{"namespaceSelector":{"matchLabels":{"environment":"staging"}}}]}]}}'
Debug Commands:
# Essential namespace debugging commands
kubectl get namespaces --show-labels # List all namespaces with labels
kubectl describe namespace <namespace> # Detailed namespace information
kubectl get resourcequota -n <namespace> # Check quotas
kubectl describe quota <quota-name> -n <namespace> # Quota usage details
# Resource and permission analysis
kubectl get all -n <namespace> # All resources in namespace
kubectl auth can-i --list -n <namespace> # Your permissions in namespace
kubectl get rolebinding,clusterrolebinding --all-namespaces | grep <namespace> # RBAC for namespace
# Network and connectivity debugging
kubectl get networkpolicy -n <namespace> # Network policies
kubectl run debug-pod --image=busybox --restart=Never -n <namespace> -- sleep 3600 # Debug pod
kubectl exec debug-pod -n <namespace> -- nslookup kubernetes.default.svc.cluster.local # DNS test
Where to get help:
- Kubernetes Namespaces Documentation
- Resource Quotas Guide
- CNCF Slack #kubernetes-users channel
Next Steps
What’s coming next: In Post #10, we’ll explore “Services: Exposing Your Applications to the World.” You’ll discover how to provide stable network endpoints for your namespaced applications, building on the organized, multi-tenant clusters we can now create. Services solve the networking puzzle that becomes critical when you have multiple teams and environments.
Additional learning:
- Experiment with LimitRanges for default resource constraints
- Explore Pod Security Standards for namespace-level security
Practice challenges:
- Cost center simulation: Set up namespaces with different quota tiers and test resource allocation
- Security isolation: Implement network policies that allow specific service-to-service communication
- RBAC complexity: Create team-specific roles with read-only access to some namespaces and full access to others
- Migration scenario: Practice moving resources between namespaces safely (see the sketch after this list)
- Monitoring setup: Implement namespace-specific monitoring and alerting
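For the migration challenge, a minimal sketch (assumes mikefarah's yq v4 is installed; my-app is a placeholder name):
# Pods can't be moved; re-create the owning object in the target namespace instead
kubectl get deployment my-app -n development -o yaml \
  | yq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .status)
        | .metadata.namespace = "staging"' \
  | kubectl apply -f -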
Community engagement: Share your namespace organization strategies! How do you structure namespaces in your organization? What quota patterns work best for different team sizes? Have you discovered any creative approaches to multi-tenant security?
FAQ Section
Can I move a running pod from one namespace to another?
No, you cannot move pods between namespaces. You need to delete the pod from the source namespace and recreate it in the target namespace. Use Deployments to make this process smoother.
What happens to resources when I delete a namespace?
All namespaced resources (pods, services, deployments, etc.) in that namespace are automatically deleted. This is permanent and immediate – use with extreme caution in production.
Can services in different namespaces have the same name?
Yes! Services are namespaced resources, so you can have a “database” service in both development and production namespaces. They’re accessed using fully qualified domain names (FQDN).
Do namespaces affect cluster-wide resources like nodes or storage classes?
No, cluster-wide resources exist outside of namespaces and are accessible from all namespaces. This includes nodes, persistent volumes, storage classes, and cluster roles.
How many namespaces can I create in a cluster?
There’s no hard limit, but practical considerations include RBAC complexity, network policy management, and etcd storage. Most clusters comfortably handle hundreds of namespaces.
Can I set default resource requests/limits for a namespace?
Yes, using LimitRanges! They automatically apply default resource constraints to pods that don’t specify their own, and can also set minimum/maximum bounds.
Complete Multi-Tenant Namespace Template
# complete-namespace-template.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-example
  labels:
    team: example-team
    environment: production
    cost-center: engineering
    compliance: soc2
    tier: application
  annotations:
    description: "Example team production namespace"
    owner: "example-team@company.com"
    budget: "5000-usd-monthly"
    created-by: "platform-team"
---
# Resource quota for cost control and fair sharing
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-example-quota
  namespace: team-example
spec:
  hard:
    # Compute resources
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    # Object counts
    pods: "50"
    services: "20"
    secrets: "20"
    configmaps: "20"
    # Storage
    persistentvolumeclaims: "10"
    requests.storage: 100Gi
    # Extended resources
    requests.nvidia.com/gpu: "2"
---
# Limit range for default resource constraints
apiVersion: v1
kind: LimitRange
metadata:
  name: team-example-limits
  namespace: team-example
spec:
  limits:
  - type: Container
    default:          # Default limits if not specified
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:   # Default requests if not specified
      cpu: "100m"
      memory: "128Mi"
    max:              # Maximum allowed
      cpu: "2"
      memory: "4Gi"
    min:              # Minimum required
      cpu: "50m"
      memory: "64Mi"
  - type: Pod
    max:
      cpu: "4"
      memory: "8Gi"
---
# Network policy for security isolation
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: team-example-policy
  namespace: team-example
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from same team
  - from:
    - namespaceSelector:
        matchLabels:
          team: example-team
  # Allow traffic from shared services
  - from:
    - namespaceSelector:
        matchLabels:
          tier: platform
  # Allow traffic from monitoring
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
  egress:
  # Allow to same team
  - to:
    - namespaceSelector:
        matchLabels:
          team: example-team
  # Allow to shared services
  - to:
    - namespaceSelector:
        matchLabels:
          tier: platform
  # Allow external traffic
  - {}
---
# Service account for team workloads
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-example-workload
  namespace: team-example
  labels:
    app.kubernetes.io/managed-by: platform-team
---
# Role for team members
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-example
  name: team-example-developer
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec", "pods/portforward"]
  verbs: ["get", "create"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
# Role binding for team members
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-example-developers
  namespace: team-example
subjects:
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: bob@company.com
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: team-example-workload
  namespace: team-example
roleRef:
  kind: Role
  name: team-example-developer
  apiGroup: rbac.authorization.k8s.io
---
# Pod Security Admission labels (PodSecurityPolicy is deprecated and removed in Kubernetes 1.25+).
# In practice, add these labels to the team-example Namespace defined at the top of this file
# rather than declaring the Namespace a second time.
apiVersion: v1
kind: Namespace
metadata:
  name: team-example
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Legacy PodSecurityPolicy - only for clusters older than Kubernetes 1.25
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: team-example-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'persistentVolumeClaim'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
🔗 Series Navigation
Previous: Post #8 – ReplicaSets vs Deployments: When to Use What
Next: Post #10 – Services: Exposing Your Applications to the World
Progress: You’re now 13% through the Kubernetes Fundamentals series! 🎉
💡 Pro Tip: Start with a simple namespace strategy and evolve it based on your organization’s needs. Over-engineering namespace hierarchies early can create more complexity than value. Begin with environment-based separation (dev/staging/prod) and add team-based separation as you scale.
📧 Never miss an update: Subscribe to get notified when new posts in this series are published. Next, we’re exploring Services – the networking layer that connects your organized, namespaced applications and makes them accessible to users and other services!
Tags: kubernetes, namespaces, multi-tenancy, resource-quotas, network-policies, rbac, cluster-organization, security-isolation, enterprise-kubernetes, cka-prep
