Kubernetes
Protection patterns for kubectl operations including namespace deletion, resource management, and rollouts
shellfirm provides two groups of Kubernetes checks: kubernetes (standard) for the most dangerous operations and kubernetes-strict for broader coverage of cluster modifications. Both groups support the kubectl command and the common k alias.
Standard Kubernetes checks (kubernetes)
Delete namespace
| ID | kubernetes:delete_namespace |
| Severity | Critical |
| Filter | NotContains --dry-run |
| Alternative | kubectl get all -n <namespace> -- list all resources first to see what would be deleted |
| Blast radius | Counts resources in the namespace |
Deleting a namespace is one of the most destructive Kubernetes operations -- it removes every resource within it, including deployments, services, config maps, and persistent volume claims.
# Triggers
kubectl delete ns production
kubectl delete namespace staging
k delete ns my-app
# Does NOT trigger (dry-run)
kubectl delete ns staging --dry-run=client
Blast radius example:
The blast-radius prompt runs kubectl get all -n <namespace> --no-headers to count the actual resources in the namespace before asking for confirmation.
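The counting step is easy to reproduce by hand before deleting anything; a minimal sketch, with kubectl output simulated since the real command needs a live cluster:

```shell
# Count the resources a namespace deletion would destroy.
# In a real cluster: kubectl get all -n production --no-headers | wc -l
# The output below is simulated for illustration.
sample_output='pod/web-1    1/1        Running    0        2d
pod/web-2    1/1        Running    0        2d
service/web  ClusterIP  10.0.0.1   <none>   80/TCP  2d'
printf '%s\n' "$sample_output" | wc -l
```

Each line of `--no-headers` output is one resource, so `wc -l` gives the resource count (3 in this simulated example).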
Delete all resources
| ID | kubernetes:delete_all_resources |
| Severity | Critical |
| Filter | NotContains --dry-run |
| Alternative | kubectl get <resource> -n <namespace> -- list resources first |
Deleting with --all removes every resource of that type in the target namespace.
# Triggers
kubectl delete pods --all
kubectl delete deployments --all -n production
k delete services --all
# Does NOT trigger
kubectl delete pods --all --dry-run=client
kubectl delete pod mypod # specific pod, not --all
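A narrower alternative to --all is a label selector, previewed with a dry run first; a sketch assuming a hypothetical app=my-app label:

```shell
# Preview exactly what the selector matches before deleting anything.
kubectl get pods -l app=my-app -n staging

# Dry-run the scoped deletion; this passes through shellfirm unchallenged.
kubectl delete pods -l app=my-app -n staging --dry-run=client
```

The selector limits the blast radius to one application instead of the whole namespace. (These commands require a live cluster, so no runnable example is given here.)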
Apply with force
| ID | kubernetes:apply_force |
| Severity | High |
| Filter | NotContains --dry-run |
| Alternative | kubectl apply -f <file> -- apply without --force for in-place updates |
Force-applying deletes and recreates resources, causing downtime.
# Triggers
kubectl apply -f deployment.yaml --force
k apply --force -f pod.yaml
# Does NOT trigger
kubectl apply -f deployment.yaml
kubectl apply -f deployment.yaml --force --dry-run=client
kubectl apply --force-conflicts -f deployment.yaml
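Before reaching for --force, the pending change can usually be previewed with standard kubectl subcommands; deployment.yaml here is a placeholder for your manifest:

```shell
# Show exactly what apply would change; exits 0 if nothing differs,
# 1 if there are differences.
kubectl diff -f deployment.yaml

# Server-side dry-run validates the manifest against the API server
# (including admission webhooks) without persisting anything.
kubectl apply -f deployment.yaml --dry-run=server
```

If the diff is a simple field update, a plain apply works in place and no forced delete-and-recreate is needed.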
Drain node
| ID | kubernetes:drain_node |
| Severity | High |
Draining a node evicts all pods and marks the node unschedulable.
# Triggers
kubectl drain node01
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data
k drain node01
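A common pre-drain routine, assuming the node01 from the examples above, is to cordon first and inspect what would be evicted:

```shell
# Stop new pods landing on the node, but leave existing ones running.
kubectl cordon node01

# See which pods a drain would evict.
kubectl get pods --all-namespaces --field-selector spec.nodeName=node01

# Only then drain; --ignore-daemonsets is usually required because
# daemonset pods would be recreated on the node immediately anyway.
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data

# Re-enable scheduling once maintenance is done.
kubectl uncordon node01
```
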
Cordon node
| ID | kubernetes:cordon_node |
| Severity | Medium |
Cordoning prevents new pods from being scheduled on the node.
# Triggers
kubectl cordon node01
k cordon node01
# Does NOT trigger
kubectl uncordon node01
Replace with force
| ID | kubernetes:replace_force |
| Severity | High |
| Filter | NotContains --dry-run |
Force-replacing deletes and recreates resources, causing downtime.
# Triggers
kubectl replace -f deployment.yaml --force
# Does NOT trigger
kubectl replace -f deployment.yaml
kubectl replace -f deployment.yaml --force --dry-run=client
Helm checks
| ID | Command | Description | Severity |
|---|---|---|---|
| helm:uninstall | helm uninstall <release> / helm delete <release> | Removes a Helm release and all its managed Kubernetes resources | High |
| helm:rollback | helm rollback <release> <revision> | Rolling back reverts to a previous revision | Medium |
| helm:upgrade_force | helm upgrade --force <release> <chart> | Force-upgrading deletes and recreates resources, causing downtime | High |
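Each of these Helm operations has a read-only counterpart worth running first; a sketch assuming a hypothetical release named my-release:

```shell
# Inspect what the release owns before uninstalling it.
helm get manifest my-release

# Review past revisions before rolling back.
helm history my-release

# Preview an upgrade without touching the cluster.
helm upgrade my-release ./chart --dry-run
```
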
Strict Kubernetes checks (kubernetes-strict)
These checks cover broader modification operations. All support the --dry-run filter.
Delete resource
| ID | kubernetes-strict:delete_resource |
| Severity | High |
| Filter | NotContains --dry-run |
# Triggers
kubectl delete pod my-pod
kubectl delete deployment my-app
k delete svc my-service
# Does NOT trigger
kubectl delete pod my-pod --dry-run=client
Update resource
| ID | kubernetes-strict:update_resource |
| Severity | High |
| Filter | NotContains --dry-run |
# Triggers
kubectl set image deployment/my-app app=my-image:v2
k set env deployment/my-app ENV=production
Scale resource
| ID | kubernetes-strict:change_resource_size |
| Severity | High |
| Filter | NotContains --dry-run |
# Triggers
kubectl scale deployment my-app --replicas=0
k scale statefulset db --replicas=3
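Scaling to zero is only easy to reverse if you know the previous replica count; a minimal sketch that records it first (the value is simulated here, since querying it needs a live cluster):

```shell
# In a real cluster:
#   replicas=$(kubectl get deployment my-app -o jsonpath='{.spec.replicas}')
replicas=3   # simulated value for illustration

# Keep the restore command handy before scaling down.
echo "restore with: kubectl scale deployment my-app --replicas=$replicas"
```
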
Rollout management
| ID | kubernetes-strict:rollout_resource |
| Severity | High |
| Filter | NotContains --dry-run |
# Triggers
kubectl rollout restart deployment/my-app
kubectl rollout undo deployment/my-app
k rollout pause deployment/my-app
k rollout resume deployment/my-app
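Before undoing or restarting, the read-only rollout subcommands show what you are about to change and whether the change converged:

```shell
# Inspect revision history before a rollout undo.
kubectl rollout history deployment/my-app

# Watch a restart converge instead of assuming it worked.
kubectl rollout status deployment/my-app --timeout=120s
```
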
Context-aware escalation
Kubernetes checks are especially powerful when combined with context-aware protection. shellfirm detects the current Kubernetes context via kubectl config current-context and escalates the challenge if it matches a production pattern.
Default production k8s patterns:
- prod
- production
- prd
- live
For example, if your current context is prod-us-east-1:
The challenge is escalated from your default (e.g., Math) to Yes because the k8s context contains "prod".
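The escalation decision is essentially a substring match against the pattern list; a hedged sketch of the idea, with the context value hard-coded where shellfirm would call kubectl config current-context:

```shell
ctx="prod-us-east-1"   # in practice: ctx=$(kubectl config current-context)
challenge="Math"       # your configured default challenge

# Escalate to the strictest challenge if any production pattern matches.
for pat in prod production prd live; do
  case "$ctx" in
    *"$pat"*) challenge="Yes"; break ;;
  esac
done
echo "$challenge"   # prints "Yes" because ctx contains "prod"
```
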
Configuring production patterns
# In ~/.config/shellfirm/settings.yaml
context:
  production_k8s_patterns:
    - prod
    - production
    - prd
    - live
    - my-company-prod
Dry-run as a safe escape
All Kubernetes checks support the --dry-run filter. This means you can preview any operation without triggering shellfirm:
# These all pass through without challenge
kubectl delete ns staging --dry-run=client
kubectl delete deployment my-app --dry-run=server
kubectl scale deployment my-app --replicas=0 --dry-run=client
This encourages the practice of dry-running destructive operations before executing them.
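The NotContains --dry-run filter behaves like a plain substring test on the command line; a minimal sketch of the idea:

```shell
# Sketch of the filter logic: commands containing --dry-run skip the challenge.
check_dry_run() {
  case "$1" in
    *--dry-run*) echo "skipped (dry-run)" ;;
    *)           echo "challenge" ;;
  esac
}

check_dry_run "kubectl delete ns staging --dry-run=client"   # skipped (dry-run)
check_dry_run "kubectl delete ns staging"                    # challenge
```

Note the match is purely textual, so any --dry-run variant (client or server) passes through.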