Kubernetes Manual: Zero to Hero

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Installation
  4. Core Concepts
  5. Basic Commands
  6. Working with Pods
  7. Deployments
  8. Services
  9. ConfigMaps and Secrets
  10. Persistent Volumes
  11. Namespaces
  12. Ingress
  13. Best Practices
  14. Troubleshooting
  15. Advanced Topics

Introduction

Kubernetes (k8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. This manual will take you from absolute beginner to advanced user.

What is Kubernetes?

  • Container orchestration platform
  • Manages containerized applications across clusters
  • Provides automatic scaling, rolling updates, and self-healing
  • Originally developed by Google, now maintained by CNCF

Why Kubernetes?

  • Scalability: Auto-scale applications based on demand
  • High Availability: Self-healing and fault tolerance
  • Portability: Run anywhere (cloud, on-premise, hybrid)
  • Declarative Configuration: Define desired state, k8s maintains it

Prerequisites

Knowledge Requirements

  • Basic understanding of containers (Docker)
  • Command line familiarity
  • Basic networking concepts
  • YAML syntax

System Requirements

  • Linux, macOS, or Windows
  • Minimum 2GB RAM
  • Docker installed
  • kubectl command-line tool

Installation

Option 1: Minikube (Local Development)

# Install minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start minikube
minikube start

# Verify installation
kubectl cluster-info

Option 2: Kind (Kubernetes in Docker)

# Install kind
go install sigs.k8s.io/kind@v0.20.0

# Create cluster
kind create cluster --name my-cluster

# Set kubectl context
kubectl cluster-info --context kind-my-cluster

Option 3: kubectl Installation

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# macOS
brew install kubectl

# Windows
choco install kubernetes-cli

Core Concepts

Cluster Architecture

┌──────────────────────────────────────────────────────────────┐
│                      Kubernetes Cluster                      │
├───────────────────────┬──────────────────────────────────────┤
│     Control Plane     │             Worker Nodes             │
│                       │                                      │
│  ┌─────────────────┐  │  ┌───────────────┐ ┌───────────────┐ │
│  │   API Server    │  │  │    kubelet    │ │    kubelet    │ │
│  │      etcd       │  │  │  kube-proxy   │ │  kube-proxy   │ │
│  │    Scheduler    │  │  │   Container   │ │   Container   │ │
│  │   Controller    │  │  │    Runtime    │ │    Runtime    │ │
│  │     Manager     │  │  │     Pods      │ │     Pods      │ │
│  └─────────────────┘  │  └───────────────┘ └───────────────┘ │
└───────────────────────┴──────────────────────────────────────┘

Key Components

Control Plane Components

  • API Server: Frontend for Kubernetes control plane
  • etcd: Distributed key-value store for cluster data
  • Scheduler: Assigns pods to nodes
  • Controller Manager: Runs controller processes

Node Components

  • kubelet: Agent that communicates with control plane
  • kube-proxy: Network proxy on each node
  • Container Runtime: Runs containers (Docker, containerd)

Kubernetes Objects

Pod

  • Smallest deployable unit
  • Contains one or more containers
  • Shares network and storage

Deployment

  • Manages replica sets and pods
  • Provides declarative updates
  • Enables rolling updates and rollbacks

Service

  • Stable network endpoint for pods
  • Load balances traffic
  • Types: ClusterIP, NodePort, LoadBalancer

Namespace

  • Virtual cluster isolation
  • Resource organization
  • Access control boundaries

Basic Commands

Cluster Information

# Get cluster info
kubectl cluster-info

# Get nodes
kubectl get nodes

# Get all resources
kubectl get all

# Get detailed node info
kubectl describe node <node-name>

Context and Configuration

# Get current context
kubectl config current-context

# List all contexts
kubectl config get-contexts

# Switch context
kubectl config use-context <context-name>

# View config
kubectl config view

Resource Management

# Get resources
kubectl get <resource-type>
kubectl get pods
kubectl get deployments
kubectl get services

# Describe resources
kubectl describe <resource-type> <name>
kubectl describe pod my-pod

# Create resources
kubectl create -f <file.yaml>
kubectl apply -f <file.yaml>

# Delete resources
kubectl delete <resource-type> <name>
kubectl delete -f <file.yaml>

Working with Pods

Creating Your First Pod

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80

# Create pod
kubectl apply -f pod.yaml

# Get pods
kubectl get pods

# Check pod details
kubectl describe pod nginx-pod

# Get pod logs
kubectl logs nginx-pod

# Execute commands in pod
kubectl exec -it nginx-pod -- /bin/bash

# Port forward to access pod
kubectl port-forward nginx-pod 8080:80

Multi-Container Pod Example

# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'while true; do echo "Sidecar running"; sleep 30; done']

Pod Lifecycle States

  • Pending: Pod accepted but not scheduled
  • Running: Pod scheduled and at least one container running
  • Succeeded: All containers terminated successfully
  • Failed: All containers terminated, and at least one exited with failure
  • Unknown: Pod state cannot be determined

Deployments

Basic Deployment

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

# Create deployment
kubectl apply -f nginx-deployment.yaml

# Get deployments
kubectl get deployments

# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# Rollback deployment
kubectl rollout undo deployment/nginx-deployment

# Get rollout history
kubectl rollout history deployment/nginx-deployment

Deployment Strategies

Rolling Update (Default)

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%

Recreate Strategy

spec:
  strategy:
    type: Recreate

Advanced Deployment Configuration

# advanced-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5

Services

Service Types

ClusterIP Service (Default)

# clusterip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

NodePort Service

# nodeport-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080 # must fall within the default NodePort range 30000-32767
  type: NodePort

LoadBalancer Service

# loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Headless Service

# headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Service Discovery

# Get service endpoints
kubectl get endpoints

# DNS resolution inside cluster
nslookup <service-name>.<namespace>.svc.cluster.local

# Environment variables (injected automatically for services that already
# existed when the pod started; the prefix is the service name, uppercased,
# with dashes replaced by underscores)
echo $NGINX_CLUSTERIP_SERVICE_HOST
echo $NGINX_CLUSTERIP_SERVICE_PORT
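The variable names above are derived mechanically from the service name. A quick sketch of that transformation for a service named `nginx-clusterip` (the name is an example from this manual):

```shell
# Derive the env-var prefix: uppercase the service name, map '-' to '_'
svc=nginx-clusterip
prefix=$(echo "$svc" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"   # NGINX_CLUSTERIP_SERVICE_HOST
```

Because these variables are only set for services that existed when the pod started, DNS-based discovery is usually the more reliable option.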

ConfigMaps and Secrets

ConfigMaps

Creating ConfigMaps

# From literal values
kubectl create configmap app-config \
--from-literal=database_url=postgresql://localhost/mydb \
--from-literal=debug=true

# From file
kubectl create configmap app-config --from-file=config.properties

# From directory
kubectl create configmap app-config --from-file=configs/

ConfigMap YAML

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://localhost/mydb"
  debug: "true"
  config.properties: |
    property1=value1
    property2=value2

Using ConfigMaps in Pods

# pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database_url
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config

Secrets

Creating Secrets

# Generic secret
kubectl create secret generic db-secret \
--from-literal=username=admin \
--from-literal=password=secretpassword

# TLS secret
kubectl create secret tls tls-secret \
--cert=tls.crt \
--key=tls.key

# Docker registry secret
kubectl create secret docker-registry registry-secret \
--docker-server=registry.example.com \
--docker-username=user \
--docker-password=password

Secret YAML

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4= # base64 encoded 'admin'
  password: c2VjcmV0cGFzc3dvcmQ= # base64 encoded 'secretpassword'
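Note that base64 is an encoding, not encryption: anyone who can read the Secret can decode it. The values for the `data:` section can be generated and verified from the shell (the `-n` flag prevents a trailing newline from being encoded):

```shell
# Encode values for the data: section
echo -n 'admin' | base64              # YWRtaW4=
echo -n 'secretpassword' | base64     # c2VjcmV0cGFzc3dvcmQ=

# Decode to verify
echo 'YWRtaW4=' | base64 -d           # admin
```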

Using Secrets in Pods

# pod-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-secret

Persistent Volumes

PersistentVolume (PV)

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /mnt/data

PersistentVolumeClaim (PVC)

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

Using PVC in Deployment

# deployment-with-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: pvc-storage

Access Modes

  • ReadWriteOnce (RWO): Single node read-write
  • ReadOnlyMany (ROX): Multiple nodes read-only
  • ReadWriteMany (RWX): Multiple nodes read-write

Namespaces

Creating Namespaces

# Imperative
kubectl create namespace development
kubectl create namespace production

# Declarative
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: staging
EOF

Working with Namespaces

# List namespaces
kubectl get namespaces

# Set default namespace
kubectl config set-context --current --namespace=development

# Get resources in specific namespace
kubectl get pods -n development
kubectl get all -n production

# Create resources in namespace
kubectl apply -f deployment.yaml -n development

Namespace YAML

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    environment: dev
    team: backend

Resource Quotas

# resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"
    services: "5"

Ingress

Ingress Controller

First, install an ingress controller (e.g., NGINX):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

Basic Ingress

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

TLS Ingress

# tls-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: tls-secret
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

Multi-Service Ingress

# multi-service-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

Best Practices

Security Best Practices

1. Use Non-Root Containers

spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL

2. Network Policies

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

3. Pod Security Standards

# pod-security.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Resource Management

1. Resource Requests and Limits

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
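CPU is measured in millicores ("250m" = 250/1000 = a quarter of one core; "1000m" or "1" is a full core) and memory in binary units ("64Mi" = 64 × 2^20 bytes). A quick sanity check of the arithmetic:

```shell
# 250m CPU as a percentage of one core
echo $((250 * 100 / 1000))    # 25

# 64Mi in bytes (Mi = 2^20 = 1048576)
echo $((64 * 1024 * 1024))    # 67108864
```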

2. Horizontal Pod Autoscaler

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
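The HPA sizes the target with desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). As a worked example (the 3 replicas and 90% utilization figures are illustrative, not from this manual): against the 70% target above, 3 replicas averaging 90% CPU scale up to 4:

```shell
# Integer ceiling of (current * util) / target via (a + b - 1) / b
current=3; util=90; target=70
echo $(( (current * util + target - 1) / target ))   # 4
```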

Health Checks

Liveness and Readiness Probes

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

Configuration Management

1. Use Labels and Annotations

metadata:
  labels:
    app: web-app
    version: v1.0.0
    environment: production
    tier: frontend
  annotations:
    description: "Web application frontend"
    contact: "team@example.com"

2. Organize with Namespaces

  • Use namespaces to separate environments
  • Apply resource quotas
  • Implement RBAC per namespace

Troubleshooting

Common Commands

Pod Troubleshooting

# Get pod status
kubectl get pods
kubectl describe pod <pod-name>

# Check logs
kubectl logs <pod-name>
kubectl logs <pod-name> -c <container-name> # multi-container
kubectl logs <pod-name> --previous # previous instance

# Execute commands
kubectl exec -it <pod-name> -- /bin/bash
kubectl exec -it <pod-name> -c <container-name> -- /bin/bash

# Port forwarding
kubectl port-forward <pod-name> 8080:80

Service Troubleshooting

# Check service
kubectl get svc
kubectl describe svc <service-name>

# Check endpoints
kubectl get endpoints <service-name>

# Test connectivity
kubectl run test-pod --image=busybox -it --rm -- /bin/sh
# Inside test pod:
wget -qO- http://<service-name>.<namespace>.svc.cluster.local

Node Troubleshooting

# Check node status
kubectl get nodes
kubectl describe node <node-name>

# Check node resources
kubectl top nodes
kubectl top pods

# Drain node for maintenance
kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name> # Re-enable scheduling

Common Issues and Solutions

1. ImagePullBackOff

# Check events
kubectl describe pod <pod-name>

# Common causes:
# - Wrong image name/tag
# - Missing image pull secrets
# - Network issues

2. CrashLoopBackOff

# Check logs
kubectl logs <pod-name> --previous

# Common causes:
# - Application crashes on startup
# - Missing dependencies
# - Resource constraints

3. Pending Pods

# Check events and node resources
kubectl describe pod <pod-name>
kubectl get nodes
kubectl top nodes

# Common causes:
# - Insufficient resources
# - Node selectors/affinity
# - Taints and tolerations

Advanced Topics

Custom Resource Definitions (CRDs)

# webapp-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webapps.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
              image:
                type: string
  scope: Namespaced
  names:
    plural: webapps
    singular: webapp
    kind: WebApp

StatefulSets

# statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database-service
  replicas: 3
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: postgres:13
        env:
        - name: POSTGRES_DB
          value: mydb
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
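Unlike a Deployment, a StatefulSet gives each pod a stable ordinal name and, via the headless service named in `serviceName`, a stable per-pod DNS entry. A sketch of the names this manifest would produce (the full cluster FQDN appends `.<namespace>.svc.cluster.local`):

```shell
# Pod names and per-pod DNS entries for a 3-replica StatefulSet
# named "database" with serviceName "database-service"
for i in 0 1 2; do
  echo "database-$i.database-service"
done
```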

DaemonSets

# daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: fluentd:latest
        volumeMounts:
        - name: log-volume
          mountPath: /var/log
      volumes:
      - name: log-volume
        hostPath:
          path: /var/log

Jobs and CronJobs

Job

# job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration
spec:
  template:
    spec:
      containers:
      - name: migration
        image: migration-tool:latest
        command: ["./migrate.sh"]
      restartPolicy: Never
  backoffLimit: 4

CronJob

# cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: "0 2 * * *" # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: backup-tool:latest
            command: ["./backup.sh"]
          restartPolicy: OnFailure
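The `schedule` field uses standard five-field cron syntax: minute, hour, day-of-month, month, day-of-week. A few illustrative schedules (the extra examples are not from this manual):

```shell
# minute hour day-of-month month day-of-week
schedule="0 2 * * *"       # daily at 02:00 (as in the manifest above)
# "*/15 * * * *"           # every 15 minutes
# "0 9 * * 1-5"            # 09:00 on weekdays

# A valid schedule always has exactly five whitespace-separated fields
echo "$schedule" | wc -w
```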

Helm Package Manager

Installation

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add repositories (the legacy "stable" repo is deprecated; add the
# chart's own repo instead, e.g. ingress-nginx)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Basic Helm Commands

# Search charts
helm search repo nginx

# Install chart
helm install my-nginx ingress-nginx/ingress-nginx

# List releases
helm list

# Upgrade release
helm upgrade my-nginx ingress-nginx/ingress-nginx

# Uninstall release
helm uninstall my-nginx

Monitoring and Observability

Prometheus and Grafana

# Add Prometheus Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Install Prometheus stack
helm install monitoring prometheus-community/kube-prometheus-stack

Metrics Server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

GitOps with ArgoCD

# application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: HEAD
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Conclusion

This manual covers the essential concepts and practical examples needed to master Kubernetes. Practice these concepts in a safe environment, start with simple deployments, and gradually work your way up to more complex scenarios.

Next Steps

  1. Set up a local Kubernetes cluster
  2. Practice with the examples provided
  3. Explore advanced topics like service mesh (Istio)
  4. Learn about cloud-specific Kubernetes services (EKS, GKE, AKS)
  5. Implement CI/CD pipelines with Kubernetes
  6. Study for Kubernetes certifications (CKA, CKAD, CKS)

This manual is a living document. Keep it updated as Kubernetes evolves and new best practices emerge.