$ shellfirm

Kubernetes Context

How shellfirm detects production Kubernetes clusters and escalates protection

A kubectl delete namespace on your local minikube is harmless. The same command pointed at your production cluster can cause an outage. shellfirm reads your current Kubernetes context and escalates protection when it detects a production cluster.

How it works

When shellfirm evaluates a command, it runs:

kubectl config current-context

This returns the name of your active Kubernetes context (e.g., prod-us-east-1, staging-eu, minikube). If kubectl is not installed or the command fails, this step is silently skipped.
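This lookup step can be sketched in Python (a hypothetical helper for illustration, not shellfirm's actual Rust internals; the 100ms timeout matches the budget described under "Performance note" below):

```python
import subprocess


def current_k8s_context():
    """Return the active kubectl context name, or None if unavailable."""
    try:
        result = subprocess.run(
            ["kubectl", "config", "current-context"],
            capture_output=True,
            text=True,
            timeout=0.1,  # 100ms budget: never delay the user's command
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # kubectl missing or slow: skip detection silently
    if result.returncode != 0:
        return None  # kubectl present, but no context is configured
    return result.stdout.strip()
```

Any failure mode (missing binary, timeout, non-zero exit) yields None, so the caller simply proceeds without Kubernetes context information.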

The returned context name is checked against a list of production patterns using case-insensitive substring matching. If any pattern matches, the risk level is set to Critical.
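The matching rule is simple enough to express in a few lines (a minimal sketch, assuming the default pattern list below; function and constant names are illustrative, not shellfirm's own):

```python
# Default production patterns from the table below
PRODUCTION_PATTERNS = ["prod", "production", "prd", "live"]


def is_production_context(context, patterns=PRODUCTION_PATTERNS):
    """Case-insensitive substring match of the context name against patterns."""
    name = context.lower()
    return any(p.lower() in name for p in patterns)
```

Because the match is a plain substring test, `prod-us-east-1`, `my-prod-cluster`, and `PROD` all trigger, while `staging-us-west-2` and `minikube` do not.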

Default production patterns

These substrings trigger Critical risk level when found in the context name:

Pattern       Example matches
prod          prod-us-east-1, my-prod-cluster, production-k8s
production    production, arn:aws:eks:us-east-1:123:cluster/production
prd           prd-cluster, app-prd-01
live          live-eu-west, app-live

The matching is case-insensitive, so PROD, Prod, and prod all match.

Context label

When a Kubernetes context is detected, shellfirm adds a label to the challenge banner:

k8s=prod-us-east-1

Escalation behavior

On a production Kubernetes context (Critical risk level), challenges are escalated to at least Yes:

Configured challenge    On production k8s
Math                    Yes
Enter                   Yes
Yes                     Yes
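The table above collapses to a one-line rule, sketched here with hypothetical names (the original is Rust; this is an illustration, not shellfirm's code):

```python
def effective_challenge(configured, production_k8s):
    """On a production Kubernetes context, every challenge escalates to 'Yes'.

    Otherwise the user's configured challenge (Math, Enter, or Yes) is kept.
    """
    return "Yes" if production_k8s else configured
```

In other words, no matter how lenient the configured challenge is, a production context always requires typing `yes` to proceed.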

Non-production contexts

Contexts that do not match any production pattern are treated normally. Common non-production context names:

  • minikube
  • docker-desktop
  • dev-cluster
  • staging-us-west-2
  • kind-local

Configuring production patterns

Edit ~/.shellfirm/settings.yaml to customize which context names are considered production:

context:
  production_k8s_patterns:
    - prod
    - production
    - prd
    - live
    - critical

You can also add patterns for specific naming conventions:

context:
  production_k8s_patterns:
    - prod
    - production
    - prd
    - live
    - "p1-"          # matches p1-us-east, p1-eu-west, etc.
    - "-tier0"       # matches app-tier0, service-tier0

Practical examples

Deleting a namespace on production

# Current k8s context: prod-us-east-1
kubectl delete namespace monitoring

# ============ RISKY COMMAND DETECTED ============
# Severity: CRITICAL
# Context: k8s=prod-us-east-1
# Description: Deletes an entire Kubernetes namespace and all its resources.
# Alternative: kubectl get all -n monitoring
#   (Lists all resources in the namespace first so you can review before deleting.)
# Challenge ESCALATED: Math -> Yes
#
# ? Type `yes` to continue Esc to cancel ›

Same command on local cluster

# Current k8s context: minikube
kubectl delete namespace monitoring

# ============ RISKY COMMAND DETECTED ============
# Severity: CRITICAL
# Description: Deletes an entire Kubernetes namespace and all its resources.
# Alternative: kubectl get all -n monitoring
#   (Lists all resources in the namespace first so you can review before deleting.)
#
# ? Solve the challenge: 12 + 5 = ? Esc to cancel ›

Scaling down on production

# Current k8s context: production
kubectl scale deployment web --replicas=0

# ============ RISKY COMMAND DETECTED ============
# Severity: HIGH
# Context: k8s=production
# Description: Scales a deployment to zero replicas, effectively stopping all pods.
# Alternative: kubectl scale deployment web --replicas=1
#   (Scales down to a single replica instead of zero, keeping the service available.)
# Challenge ESCALATED: Math -> Yes
#
# ? Type `yes` to continue Esc to cancel ›

Performance note

The kubectl config current-context command has a 100ms timeout. If kubectl is slow or not installed, shellfirm skips Kubernetes context detection entirely without adding any delay to your command.