Kubernetes Security: Essential Best Practices for Cluster Hardening

Whitespots Team · kubernetes · k8s · containers · devops

Introduction

Kubernetes has become the de facto standard for container orchestration, but its complexity introduces numerous security challenges. From RBAC misconfigurations to insecure pod deployments, a single vulnerability can expose your entire cluster. This guide covers essential Kubernetes security practices with real-world examples.

Common Kubernetes Security Issues

  1. Overly permissive RBAC policies
  2. Running privileged containers
  3. Missing network policies
  4. Exposed secrets in environment variables
  5. No pod security standards enforcement
  6. Unrestricted API server access
  7. Missing resource quotas and limits
  8. Insecure ingress configurations

Pod Security Best Practices

Vulnerable Pod Configuration

yaml
# VULNERABLE Pod - Multiple security issues
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-app
spec:
  containers:
  - name: app
    image: myapp:latest  # Using 'latest' tag
    # Running as root (default)
    securityContext:
      privileged: true  # Dangerous!
    env:
    - name: DB_PASSWORD
      value: "supersecret123"  # Secret in plaintext
    # No resource limits

Secure Pod Configuration

yaml
# SECURE Pod - Best practices applied
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  labels:
    app: secure-app
    tier: backend
spec:
  # Use service account with minimal permissions
  serviceAccountName: app-service-account
  automountServiceAccountToken: false
  # Security context for the pod
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    runAsGroup: 1001
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:1.2.3  # Specific version
    imagePullPolicy: Always
    # Container security context
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1001
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
    # Resource limits
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "200m"
    # Use secrets properly
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    # Health checks
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    # Volume mounts
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /app/cache
  volumes:
  - name: tmp
    emptyDir: {}
  - name: cache
    emptyDir: {}

RBAC Configuration

Overly Permissive RBAC (Vulnerable)

yaml
# VULNERABLE: Cluster admin for regular app
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app-binding
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin  # Too permissive!
  apiGroup: rbac.authorization.k8s.io

Least Privilege RBAC (Secure)

yaml
# SECURE: Minimal permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: production
rules:
# Allow reading ConfigMaps
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
# Allow reading Secrets (specific ones only)
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["db-credentials", "api-keys"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: app-service-account
  namespace: production
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io

Network Policies

Default Deny All Traffic

yaml
# Start with deny-all, then allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Allow Specific Traffic

yaml
# Allow traffic only from specific sources
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow from frontend pods
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  # Allow from same namespace
  - from:
    - namespaceSelector:
        matchLabels:
          name: production
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
  # Allow to database
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  # Allow HTTPS to external services
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 443

Secrets Management

Creating Secrets Securely

bash
# WRONG: Secrets in command line (visible in history)
kubectl create secret generic db-creds --from-literal=password=mysecret

# BETTER: From file (ensure file is in .gitignore)
echo -n 'supersecret' > /tmp/password.txt
kubectl create secret generic db-creds --from-file=password=/tmp/password.txt
rm /tmp/password.txt

# BEST: Use external secret management
# Sealed Secrets, External Secrets Operator, or Vault
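As an illustration of the External Secrets Operator approach mentioned above, the sketch below syncs a secret from an external store into a native Kubernetes Secret. It assumes ESO is already installed and that a SecretStore named vault-backend pointing at your backend exists; the remote key path is hypothetical.

```yaml
# Sketch: requires the External Secrets Operator;
# "vault-backend" and the remote key path are assumed names
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-credentials   # Kubernetes Secret that ESO creates
  data:
  - secretKey: password
    remoteRef:
      key: prod/db         # path in the external store (hypothetical)
      property: password
```

The encrypted values never live in Git; only this manifest, which contains no secret material, is committed.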

Sealed Secrets Example

yaml
# Install Sealed Secrets controller first
# kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.24.0/controller.yaml

# Create sealed secret
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  encryptedData:
    password: AgBpHt7...  # Encrypted value safe to commit
  template:
    metadata:
      name: db-credentials
      namespace: production
bash
# Encrypt a secret
echo -n 'mysecretpassword' | kubectl create secret generic db-credentials \
  --dry-run=client \
  --from-file=password=/dev/stdin \
  -o yaml | \
  kubeseal -o yaml > sealed-secret.yaml

Pod Security Standards

Pod Security Admission

yaml
# Enforce pod security standards at namespace level
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
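Besides the enforce/audit/warn modes, the policy version can be pinned so that a cluster upgrade does not silently change which checks apply. A sketch, where v1.28 stands in for whatever version your cluster runs:

```yaml
# Pin the policy version (v1.28 is an assumed cluster version)
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.28
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.28
```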

Pod Security Policy (removed in Kubernetes 1.25; use Pod Security Standards)

yaml
# For clusters still using PSP
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: true

Resource Quotas and Limits

Namespace Resource Quota

yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    persistentvolumeclaims: "10"
    services.loadbalancers: "2"
    services.nodeports: "0"  # Disable NodePorts

Limit Ranges

yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limit-range
  namespace: production
spec:
  limits:
  # Container limits
  - max:
      memory: 2Gi
      cpu: "2"
    min:
      memory: 128Mi
      cpu: 100m
    default:
      memory: 512Mi
      cpu: 500m
    defaultRequest:
      memory: 256Mi
      cpu: 200m
    type: Container
  # Pod limits
  - max:
      memory: 4Gi
      cpu: "4"
    type: Pod

Secure Ingress Configuration

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  namespace: production
  annotations:
    # Force HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
    # Security headers
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-XSS-Protection: 1; mode=block";
      more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains";
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
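The tls-secret referenced above must hold a valid certificate and key. One common way to provision it is cert-manager; the sketch below assumes cert-manager is installed and that a ClusterIssuer named letsencrypt-prod exists (both are assumptions, not part of this guide's setup):

```yaml
# Sketch: requires cert-manager; "letsencrypt-prod" is an assumed issuer
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-tls
  namespace: production
spec:
  secretName: tls-secret   # matches the Ingress tls.secretName
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - app.example.com
```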

API Server Security

Secure API Server Configuration

bash
# kube-apiserver startup flags
--anonymous-auth=false
# Note: the PodSecurityPolicy plugin only exists on clusters < v1.25
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--tls-min-version=VersionTLS12
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
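The flags above harden access to the API server, but Secrets are still stored only base64-encoded in etcd by default. Encryption at rest can be enabled with an EncryptionConfiguration file passed via --encryption-provider-config; a sketch, with the key material left as a placeholder:

```yaml
# Sketch: pass to kube-apiserver via --encryption-provider-config
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>  # e.g. head -c 32 /dev/urandom | base64
  - identity: {}  # fallback so existing unencrypted data stays readable
```

After enabling it, rewrite existing secrets (for example with kubectl get secrets -A -o json | kubectl replace -f -) so they are re-stored encrypted.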

Audit Policy

yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Rules are evaluated in order and the first match wins,
# so specific rules must precede the catch-all.
# Log pod changes at request level
- level: Request
  resources:
  - group: ""
    resources: ["pods"]
# Log secrets at metadata level (don't log secret data)
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Don't log read-only URLs
- level: None
  nonResourceURLs:
  - /healthz*
  - /version
  - /swagger*
# Log everything else at metadata level
- level: Metadata
  omitStages:
  - RequestReceived

Runtime Security with Falco

yaml
# Install Falco for runtime threat detection
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-rules
  namespace: falco
data:
  custom-rules.yaml: |
    - rule: Unauthorized Process in Container
      desc: Detect processes not in whitelist
      condition: >
        spawned_process and container and
        not proc.name in (node, npm, python, java)
      output: >
        Unauthorized process started in container
        (user=%user.name command=%proc.cmdline container=%container.name)
      priority: WARNING

    - rule: Write to System Directory
      desc: Detect writes to /etc, /usr, /boot
      condition: >
        open_write and container and
        (fd.name startswith /etc or
        fd.name startswith /usr or
        fd.name startswith /boot)
      output: >
        Write to system directory
        (user=%user.name file=%fd.name container=%container.name)
      priority: ERROR

Security Scanning

bash
# Scan cluster with kube-bench
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

# Check results
kubectl logs job/kube-bench

# Scan with kubescape
kubescape scan --format json --output results.json

# Scan specific namespace
kubescape scan namespace production

# Scan against frameworks
kubescape scan framework nsa

# Trivy for Kubernetes
trivy k8s --report summary cluster

Security Monitoring

yaml
# Example Prometheus monitoring for security metrics
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: monitoring
data:
  security-rules.yml: |
    groups:
    - name: security
      interval: 30s
      rules:
      - alert: PodSecurityViolation
        expr: kube_pod_container_status_running{container!~".*"} == 1
        annotations:
          summary: "Pod running without security context"
      - alert: PrivilegedContainer
        expr: kube_pod_container_status_running{container_security_privileged="true"} == 1
        annotations:
          summary: "Privileged container detected"
      - alert: RootContainer
        expr: kube_pod_container_status_running{container_security_run_as_non_root="false"} == 1
        annotations:
          summary: "Container running as root"

Kubernetes Security Checklist

  • ✅ Enable RBAC and use least privilege principle
  • ✅ Run containers as non-root
  • ✅ Use Pod Security Standards (restricted)
  • ✅ Implement network policies
  • ✅ Never store secrets in plain text
  • ✅ Use specific image tags, not latest
  • ✅ Set resource requests and limits
  • ✅ Enable audit logging
  • ✅ Scan images for vulnerabilities
  • ✅ Use read-only root filesystems
  • ✅ Drop all capabilities, add only needed ones
  • ✅ Disable automounting service account tokens
  • ✅ Enable API server authentication and authorization
  • ✅ Use TLS for all communications
  • ✅ Implement runtime security monitoring
  • ✅ Regular security assessments and updates
  • ✅ Use namespaces for isolation
  • ✅ Backup etcd securely

Conclusion

Kubernetes security requires a multi-layered approach covering cluster configuration, workload security, network isolation, and runtime protection. By implementing RBAC, Pod Security Standards, network policies, and proper secrets management, you significantly reduce your attack surface.

Security in Kubernetes is an ongoing process requiring continuous monitoring, regular updates, and security assessments. For comprehensive Kubernetes security reviews and cluster hardening, contact the Whitespots team for expert consultation.
