Kubernetes has become the default platform for running containerised workloads at scale. It has also become a high-value target: misconfigured clusters have led to cryptomining at scale, data breaches, and complete infrastructure compromise. The Kubernetes attack surface is large — the API server, etcd, kubelet, nodes, pods, and the application containers themselves all require hardening. Our cloud security practice includes Kubernetes security assessments against the CIS benchmark and NIST hardening guidelines.
This guide covers the critical security controls for production Kubernetes clusters, organised from cluster infrastructure through to application-level controls.
Understanding the Kubernetes Attack Surface
Before hardening, understand what attackers target:
| Component | What Attackers Target |
|---|---|
| API Server | Unauthenticated access, SSRF to cloud metadata, misconfigured RBAC |
| etcd | Unauthenticated port 2379, all cluster secrets stored in plaintext |
| kubelet | Anonymous authentication enabled, exec API abuse |
| Nodes | Container escape to host, privileged pods, hostPath mounts |
| Pods | Privileged containers, excessive capabilities, root processes |
| Container images | Vulnerable base images, malware in images, supply chain compromise |
| Secrets | Base64-encoded (not encrypted) Kubernetes Secrets |
| Cloud metadata | SSRF from pod → metadata API → IAM credentials |
1. Cluster Infrastructure Security
API Server Configuration
The Kubernetes API server is the brain of the cluster. Harden it:
# kube-apiserver flags (via kubeadm config or direct flags)
--anonymous-auth=false # Disable anonymous auth
--authorization-mode=Node,RBAC # Use RBAC (not AlwaysAllow!)
--enable-admission-plugins=NodeRestriction,PodSecurityAdmission
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--tls-min-version=VersionTLS12
--disable-admission-plugins=AlwaysAdmit # Never allow this in prod
Audit policy — log security-relevant events:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all requests to secrets at metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
# Log pod execution at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["pods/exec", "pods/portforward", "pods/attach"]
# Log kube-system activity with request bodies
- level: Request
namespaces: ["kube-system"]
# Catch-all at Metadata level
- level: Metadata
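Once the policy is active, the audit log can be queried with standard JSON tooling. A minimal sketch using jq, assuming the JSON-lines log format and the log path configured above:

```shell
# Surface pod exec events from the audit log: timestamp, user, and target pod
jq -r 'select(.objectRef.subresource == "exec")
       | "\(.requestReceivedTimestamp) \(.user.username) \(.objectRef.namespace)/\(.objectRef.name)"' \
  /var/log/kubernetes/audit.log
```

The same pattern works for portforward and attach by changing the subresource filter.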
etcd Security
etcd stores all cluster state — including Secrets (base64-encoded). Protect it:
- Encrypt etcd data at rest (configure encryption config in kube-apiserver)
- Restrict etcd access to only the API server — no direct external access to port 2379
- Use TLS for all etcd communication
- Backup etcd regularly — and test restoration
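The backup point deserves a concrete sketch. The commands below are illustrative: the endpoint and PKI paths assume a kubeadm-style control plane and will differ per cluster.

```shell
# Take a snapshot of etcd (paths and endpoint are kubeadm defaults; adjust for your cluster)
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot before trusting it as a backup
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db --write-out=table
```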
Enable encryption at rest for Secrets:
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: <base64-encoded-32-byte-key>
- identity: {} # Fallback for existing unencrypted secrets
Pass to API server: --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
After enabling, force-rewrite all Secrets: kubectl get secrets --all-namespaces -o json | kubectl replace -f -
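It is worth confirming the encryption actually took effect. One hedged check, assuming direct etcdctl access from a control-plane node and a Secret named my-secret in the default namespace (an illustrative name):

```shell
# Read the raw Secret value from etcd; once encryption is enabled it should
# begin with the provider prefix "k8s:enc:aescbc:v1:" instead of plaintext JSON
ETCDCTL_API=3 etcdctl get /registry/secrets/default/my-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key | hexdump -C | head
```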
2. Role-Based Access Control (RBAC)
RBAC is how Kubernetes controls who can do what. Misconfigured RBAC is the most common path to cluster compromise. Our DevSecOps consulting service embeds RBAC and pod security controls directly into your delivery pipeline so these settings are enforced from the first deployment.
Core Principles
- Least privilege — grant only the permissions needed for a specific task
- Namespace isolation — use Roles (namespace-scoped) instead of ClusterRoles where possible
- Service account per application — don’t use the default service account
- Audit regularly — use tools like kubectl-who-can and rbac-audit to find over-permissioned bindings
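Built-in tooling covers much of this audit without plugins. The service-account name below is illustrative:

```shell
# Enumerate everything a workload's service account is allowed to do
kubectl auth can-i --list \
  --as=system:serviceaccount:production:my-app -n production

# Spot-check one dangerous permission explicitly
kubectl auth can-i create clusterrolebindings \
  --as=system:serviceaccount:production:my-app
```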
Dangerous RBAC Permissions to Avoid
These permissions effectively grant cluster admin access if given to a user or service account:
# DANGEROUS — wildcard on core resources
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
# DANGEROUS — can create/modify ClusterRoleBindings = privilege escalation
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterrolebindings", "clusterroles"]
verbs: ["create", "update", "patch"]
# DANGEROUS — can exec into any pod
rules:
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
# DANGEROUS — can read all secrets
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
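Finding these grants in a live cluster can be scripted. A sketch that lists ClusterRoles granting wildcard verbs on wildcard resources, which are the most urgent candidates for review:

```shell
# List ClusterRoles that grant "*" verbs on "*" resources
kubectl get clusterroles -o json | jq -r '
  .items[]
  | select(any(.rules[]?; (.verbs // []) == ["*"] and (.resources // []) == ["*"]))
  | .metadata.name'
```

Expect cluster-admin in the output; anything else deserves scrutiny.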
Application Service Account
Each application should have a dedicated service account with minimum permissions:
# Service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app
namespace: production
automountServiceAccountToken: false # Disable token auto-mounting (explicit control)
---
# Role — only what the app actually needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: my-app-role
namespace: production
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
---
# Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-app-binding
namespace: production
subjects:
- kind: ServiceAccount
name: my-app
namespace: production
roleRef:
kind: Role
name: my-app-role
apiGroup: rbac.authorization.k8s.io
In the Pod spec:
spec:
serviceAccountName: my-app
automountServiceAccountToken: false # If the app doesn't need K8s API access
3. Pod Security
Pod Security Admission (PSA)
Pod Security Admission replaced PodSecurityPolicies, which were removed in Kubernetes 1.25. Apply security profiles at the namespace level:
# Label namespaces to enforce security standards
kubectl label namespace production \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/audit=restricted
The restricted profile enforces:
- No privileged containers
- No privilege escalation
- Non-root user required
- Restricted volume types (no hostPath volumes)
- No host namespaces (hostPID, hostIPC, hostNetwork)
- Seccomp profile required
- Limited capabilities (only NET_BIND_SERVICE allowed)
For legacy workloads that can’t meet restricted, use baseline — it at least prevents the worst misconfigurations.
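Before tightening a namespace, you can preview the impact. A server-side dry run of the label change reports which existing pods would violate the profile without changing anything (the namespace name is illustrative):

```shell
# Warns about every running pod that would fail the restricted profile,
# without actually applying the enforce label
kubectl label --dry-run=server --overwrite ns legacy-apps \
  pod-security.kubernetes.io/enforce=restricted
```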
Secure Pod Spec
apiVersion: v1
kind: Pod
spec:
securityContext:
runAsNonRoot: true
runAsUser: 10001
runAsGroup: 10001
fsGroup: 10001
seccompProfile:
type: RuntimeDefault # Enable seccomp filtering
containers:
- name: app
securityContext:
allowPrivilegeEscalation: false # No setuid, no capabilities escalation
privileged: false # No privileged mode (full access to host devices)
readOnlyRootFilesystem: true # Prevent filesystem modification
capabilities:
drop: ["ALL"] # Drop all capabilities
add: [] # Add only if absolutely required
# Liveness/readiness probes (restart hung or crashed containers)
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
# Resource limits (prevent resource exhaustion / cryptomining)
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
# Volume mounts — only necessary, no hostPath to sensitive directories
volumeMounts:
- name: tmp
mountPath: /tmp # writable tmp since rootfs is read-only
# No hostPath mounts to sensitive host directories
volumes:
- name: tmp
emptyDir: {}
# No hostPID, hostIPC, hostNetwork
automountServiceAccountToken: false
4. Network Policies
By default, all pods can communicate with all other pods in a Kubernetes cluster. Network Policies implement micro-segmentation:
Default Deny All
Apply to every namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: production
spec:
podSelector: {} # Matches all pods
policyTypes:
- Ingress
- Egress
Then explicitly allow only required traffic:
# Allow frontend to talk to backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-to-backend
namespace: production
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
# Allow DNS egress (required for all pods)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-dns-egress
namespace: production
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53
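With default-deny egress in place, each outbound dependency needs its own allowance. A sketch permitting backend pods to reach a database, assuming illustrative labels and the standard PostgreSQL port:

```yaml
# Allow backend pods to reach the database on 5432 and nothing else outbound
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
```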
Important: Network Policies require a CNI plugin that enforces them (Calico, Cilium, Weave Net, etc.). The default kubenet/flannel does not enforce Network Policies.
5. Secrets Management
Kubernetes Secrets are base64-encoded by default — not encrypted, and accessible to anyone with get secrets RBAC permission or direct etcd access. For sensitive secrets, use external secrets management:
External Secrets Operator + AWS Secrets Manager
# External Secret — syncs from AWS Secrets Manager to K8s Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: database-credentials
namespace: production
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets-manager
kind: SecretStore
target:
name: database-credentials # K8s Secret created/updated
creationPolicy: Owner
data:
- secretKey: DB_PASSWORD
remoteRef:
key: production/database
property: password
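The secretStoreRef above points at a SecretStore object that must exist separately. A minimal sketch, assuming IRSA-style authentication via a service account named external-secrets-sa and a region of eu-west-1 (both illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
```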
Options for external secrets:
- External Secrets Operator + AWS Secrets Manager / HashiCorp Vault / Azure Key Vault
- HashiCorp Vault Agent Injector — sidecar injects secrets as files
- AWS Secrets Manager CSI Driver — mount secrets as volumes
- Sealed Secrets — GitOps-friendly, secrets encrypted at rest in git
6. Container Image Security
Signing and Verification
Use Cosign (Sigstore) to sign images and verify signatures before admission:
# Sign an image after pushing
cosign sign --key cosign.key ghcr.io/myorg/myapp:v1.2.3
# Verify before running
cosign verify --key cosign.pub ghcr.io/myorg/myapp:v1.2.3
With Kyverno (policy engine), enforce signature verification at admission:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-image-signatures
spec:
validationFailureAction: Enforce
rules:
- name: verify-signature
match:
any:
- resources:
kinds: ["Pod"]
verifyImages:
- imageReferences: ["ghcr.io/myorg/*"]
attestors:
- entries:
- keyless:
subject: "https://github.com/myorg/*"
issuer: "https://token.actions.githubusercontent.com"
Base Image Hygiene
- Use minimal base images: distroless, Alpine, or scratch
- Pin exact image digests in production (not mutable tags like :latest or :main)
- Scan images in CI/CD with Trivy or Snyk Container
- Rebuild images regularly to incorporate OS patch updates
# BAD
FROM node:latest
# GOOD
FROM node:20-alpine3.19@sha256:c0a3badbd8a0a760de903e00cedbca94588900568777027d41c20e0b5bef2ced
# Better — distroless for production
FROM gcr.io/distroless/nodejs20-debian12
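For the CI scanning step, a common pattern is to fail the build on serious findings. The image name is illustrative:

```shell
# Exit non-zero if any HIGH or CRITICAL vulnerability is found, failing the pipeline
trivy image --severity HIGH,CRITICAL --exit-code 1 ghcr.io/myorg/myapp:v1.2.3
```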
7. Runtime Security
Falco
Falco is an open-source runtime security tool that monitors container and host behaviour and alerts on anomalies:
# Install via Helm
helm install falco falcosecurity/falco \
--namespace falco \
--set falco.grpc.enabled=true \
--set falco.grpcOutput.enabled=true
Default Falco rules detect:
- Shell spawned in a container (unexpected process execution)
- Sensitive file read (e.g., /etc/shadow, /etc/kubernetes/admin.conf)
- Privilege escalation attempts
- Outbound connections to unexpected destinations
- Container drift (new files written to container filesystem)
Custom rule example:
- rule: Unexpected Kubernetes API Access from Pod
desc: A pod is making requests to the Kubernetes API server
condition: >
outbound and
fd.rport in (443, 6443) and
not proc.name in (kubectl, helm)
output: >
Unexpected Kubernetes API access from pod
(user=%user.name pod=%k8s.pod.name ns=%k8s.ns.name
command=%proc.cmdline)
priority: WARNING
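Custom rules are easy to get subtly wrong, so validate them before deployment. Assuming the rule above is saved to a local file (the path is illustrative):

```shell
# Validate the rules file syntax before loading it into the running Falco
falco --validate /etc/falco/rules.d/custom-rules.yaml
```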
8. Compliance and Continuous Scanning
CIS Kubernetes Benchmark
Run kube-bench to assess compliance with CIS Kubernetes Benchmark:
# Run kube-bench as a Job
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs -l app=kube-bench
Cluster Scanning Tools
| Tool | What It Scans | Output |
|---|---|---|
| kube-bench | CIS K8s Benchmark compliance | Pass/Fail per control |
| Trivy | Cluster misconfigs + image vulns | Severity-ranked findings |
| Popeye | Runtime misconfigurations | Cluster health report |
| Kubescape | NSA/CISA hardening guide | Risk score + findings |
| Falco | Runtime anomaly detection | Real-time alerts |
| Checkov | Helm charts + K8s YAML | IaC security findings |
Admission Controllers
Use Kyverno or OPA Gatekeeper to enforce security policies at admission (before pods are created):
# Kyverno policy — require resource limits on all pods
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-resource-limits
spec:
validationFailureAction: Enforce
rules:
- name: check-limits
match:
any:
- resources:
kinds: ["Pod"]
validate:
message: "Resource limits are required for all containers."
pattern:
spec:
containers:
- resources:
limits:
memory: "?*"
cpu: "?*"
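A quick way to confirm the policy bites is a server-side dry run: with the rule in Enforce mode, we would expect this request to be rejected at admission, since the container declares no limits (the pod name is illustrative):

```shell
# Expect an admission error referencing require-resource-limits
cat <<'EOF' | kubectl apply --dry-run=server -f -
apiVersion: v1
kind: Pod
metadata:
  name: nolimits
spec:
  containers:
  - name: app
    image: nginx:1.27
EOF
```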
Security Hardening Checklist
Cluster Level
- RBAC enabled (--authorization-mode=RBAC)
- Anonymous auth disabled
- Audit logging enabled
- etcd encryption at rest for Secrets
- etcd access restricted to API server
- Network policies enforcing default deny per namespace
- Admission controllers: NodeRestriction, PodSecurityAdmission
Node Level
- Nodes not directly accessible from internet
- kubelet anonymous auth disabled
- Node OS hardened (CIS L1 benchmark)
- Node upgrade strategy defined
Workload Level
- Pod Security Admission: restricted profile on production namespaces
- All pods: non-root, no privilege escalation, read-only root filesystem
- Service accounts: per-application, least privilege, no auto-mount where not needed
- Resource limits on all containers
Image and Supply Chain
- Container images scanned in CI/CD (Trivy/Snyk)
- Images signed (Cosign) and signature verified at admission
- Minimal base images (distroless/Alpine)
- No :latest tags in production
Secrets
- External secrets manager used for sensitive secrets (not native K8s Secrets)
- etcd secrets encrypted at rest
- No secrets in environment variables from plaintext manifests
Runtime
- Falco deployed for runtime anomaly detection
- Alerts routing to SIEM
- Regular kube-bench scans with findings tracked
CyberneticsPlus conducts Kubernetes security assessments through our cloud security practice, and helps teams build secure container delivery pipelines through our DevSecOps consulting service. We implement security tooling (Falco, Kyverno, External Secrets) and integrate controls into your CI/CD workflow. Contact us to harden your Kubernetes environments.