
Cleaning Up a Kubernetes Manifest Directory That Got Away From You

The k8s/ directory had stale ingresses, ambiguously named files, missing service manifests, plaintext credentials in a text file, and image tags months out of date. Here is how it was restructured.

· Gideon Warui

Kubernetes manifest directories accumulate. A file gets added during a debug session, a second version of an ingress is created for testing, service manifests are added to the cluster but never written to disk, image tags drift from what’s actually running. Six months into operating <service> on AKS, the k8s/ directory looked like this:

k8s/
├── configmap.yaml
├── deployment-backend.yaml
├── deployment.yaml              # frontend? backend? unclear
├── ingress-dev.yaml             # live UAT ingress with TLS
├── ingress.yaml                 # old version, no TLS, no hostname
├── network-policy-backend.yaml
├── network-policy-frontend.yaml
├── secrets.txt                  # ← plaintext credentials
├── service.yaml                 # frontend only, backend missing
└── prod/
    ├── deployment-backend.yaml  # image tag from November 2025
    ├── deployment-frontend.yaml # image tag from December 2025
    ├── externalsecret-tls.yaml
    ├── ingress-backend.yaml
    └── ingress-frontend.yaml

The problems:

  • No clear UAT/prod separation — everything for dev/UAT was at the root, prod was in a subdirectory
  • deployment.yaml was the frontend deployment — there was no naming convention
  • ingress.yaml was a stale pre-TLS version that had been superseded by ingress-dev.yaml months earlier
  • secrets.txt contained the database password, Azure client secret, and <system> connection strings in plaintext
  • Backend service manifests didn’t exist in the repo (the services existed in the cluster, just not on disk)
  • Image tags in manifests were 3–5 months behind what was running

Checking the live cluster first

Before touching anything, I checked what was actually in the cluster:

kubectl get configmap -n <service>-prod --context <cluster>
kubectl get configmap -n <service>-dev --context <cluster>

kubectl get all -n <service>-prod --context <cluster>
kubectl get all -n <service>-dev --context <cluster>

This revealed that <service>-config (the ConfigMap) already existed in both namespaces with 7 keys, and two services (<service>-backend, <service>-frontend) were running in both namespaces but had no corresponding manifest files in the repo. The repo was behind the cluster in two directions: missing files, and stale content in the files that did exist.
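The first direction is easy to spot by eye; the second (live resources with no file on disk) can be enumerated by comparing live names against the manifests. A minimal sketch; the kubectl invocation in the comment is an assumption about your namespace and context names, and the demo runs on stand-in data:

```shell
set -e
# missing_manifests LIVE DIR: print each live resource name (one per
# line in file LIVE) that no manifest under DIR declares.
missing_manifests() {
  while read -r name; do
    grep -rq "name: ${name}$" "$2" || echo "$name"
  done < "$1"
}

# Demo with stand-in data. In practice LIVE would come from something like:
#   kubectl get svc -n <service>-prod -o name | cut -d/ -f2 > live.txt
dir=$(mktemp -d)
printf 'metadata:\n  name: backend\n' > "$dir/service-backend.yaml"
live=$(mktemp)
printf 'backend\nfrontend\n' > "$live"
missing_manifests "$live" "$dir"   # frontend has no manifest on disk
```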


The target structure

k8s/
├── uat/
│   ├── configmap.yaml
│   ├── deployment-backend.yaml
│   ├── deployment-frontend.yaml
│   ├── ingress.yaml
│   ├── network-policy-backend.yaml
│   ├── network-policy-frontend.yaml
│   ├── service-backend.yaml
│   └── service-frontend.yaml
└── prod/
    ├── configmap.yaml
    ├── deployment-backend.yaml
    ├── deployment-frontend.yaml
    ├── externalsecret-tls.yaml
    ├── ingress-backend.yaml
    ├── ingress-frontend.yaml
    ├── network-policy-backend.yaml
    ├── network-policy-frontend.yaml
    ├── service-backend.yaml
    └── service-frontend.yaml

Symmetrical. Every resource type appears in both environments. No ambiguous naming. Apply an entire environment with one command:

kubectl apply -f k8s/uat/ --context <cluster>
kubectl apply -f k8s/prod/ --context <cluster>

What changed and why

deployment.yaml → uat/deployment-frontend.yaml

The file was the frontend deployment for UAT. Renaming it to deployment-frontend.yaml makes it consistent with deployment-backend.yaml. Moving it to uat/ makes the environment explicit.

ingress.yaml → deleted

This was the original ingress from October 2025 — no TLS, no hostname, just a path rule. ingress-dev.yaml was the current version with TLS and the <ingress-host> hostname. The old file was dead code. Deleted.

ingress-dev.yaml → uat/ingress.yaml

The “dev” naming conflated the namespace name (<service>-dev) with the environment concept (UAT). The namespace is called <service>-dev for historical reasons; the environment it represents is UAT. ingress.yaml inside the uat/ directory is unambiguous.

secrets.txt → deleted

This file had no reason to exist in the repository. The secrets it contained were already in Azure Key Vault, managed via ExternalSecret. The plaintext file was a local working note from the initial setup that never got cleaned up.

rm k8s/secrets.txt
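One caveat: deleting the file only affects the working tree going forward; every past commit still contains it, so the credentials should be rotated regardless, and history rewritten if the exposure matters. A sketch in a scratch repo (the filter-repo command is shown as a comment only; it rewrites all refs and needs coordination):

```shell
set -e
# Scratch repo demonstrating that deletion does not scrub history.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir k8s
echo 'DB_PASSWORD=hunter2' > k8s/secrets.txt   # stand-in secret
git add k8s/secrets.txt && git commit -qm "initial"
git rm -q k8s/secrets.txt && git commit -qm "remove plaintext credentials"
# The secret is still reachable through history:
git log --oneline -- k8s/secrets.txt
# To scrub history as well (destructive; rewrites all refs):
#   git filter-repo --invert-paths --path k8s/secrets.txt
```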

Image tags synced to live state

The prod backend deployment file had image: <service>-backend:20251109-192230 — a November build. The cluster was running 20260317-143958-prod. Updated:

# Image tag is managed by CI/CD pipeline
image: <acr-registry>.azurecr.io/<service>-backend:20260317-143958-prod
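The manual edit can be scripted so the manifest tracks each rollout. A sketch, assuming the live image is read from the cluster with a jsonpath query; here the kubectl line is a comment and the "live" value is hard-coded, with a stand-in registry and manifest:

```shell
set -e
# In practice the live image would come from the cluster, e.g.:
#   live=$(kubectl get deploy <service>-backend -n <service>-prod \
#     -o jsonpath='{.spec.template.spec.containers[0].image}')
live='example.azurecr.io/backend:20260317-143958-prod'   # stand-in

manifest=$(mktemp)
cat > "$manifest" <<'EOF'
        image: example.azurecr.io/backend:20251109-192230
EOF

# Rewrite whatever image the line currently pins to the live one.
sed -i.bak "s|image: .*|image: $live|" "$manifest"
grep 'image:' "$manifest"
```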

Missing service manifests created

The services existed in the cluster and had been created imperatively at some point; they were just never written to disk. Adding them now means a full cluster rebuild from manifests is possible:

apiVersion: v1
kind: Service
metadata:
  name: <service>-backend
  namespace: <service>-prod
spec:
  type: ClusterIP
  selector:
    app: <service>-backend
  ports:
    - name: http
      port: 3000
      targetPort: 3000
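When backfilling a manifest from a live object, note that kubectl get -o yaml exports cluster-managed fields (uid, resourceVersion, status, the assigned clusterIP) that should not be committed. A rough sed-based strip on a stand-in export; yq would be cleaner where available:

```shell
set -e
# Stand-in for the raw output of `kubectl get svc ... -o yaml`.
export_file=$(mktemp)
cat > "$export_file" <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-backend
  resourceVersion: "123456"
  uid: 0e1f2a3b
spec:
  clusterIP: 10.0.12.34
  type: ClusterIP
status:
  loadBalancer: {}
EOF
# Drop cluster-managed fields; `status:` through end of file goes too
# (kubectl prints status as the final block).
sed -i.bak \
  -e '/resourceVersion:/d' -e '/uid:/d' -e '/clusterIP:/d' \
  -e '/creationTimestamp:/d' -e '/^status:/,$d' \
  "$export_file"
cat "$export_file"
```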

Committing the result

Git detected the renames correctly:

renamed: k8s/deployment-backend.yaml -> k8s/uat/deployment-backend.yaml
renamed: k8s/deployment.yaml -> k8s/uat/deployment-frontend.yaml
renamed: k8s/service.yaml -> k8s/uat/service-frontend.yaml
renamed: k8s/network-policy-backend.yaml -> k8s/uat/network-policy-backend.yaml
deleted: k8s/ingress.yaml
new file: k8s/uat/ingress.yaml
new file: k8s/uat/service-backend.yaml
new file: k8s/prod/configmap.yaml
new file: k8s/prod/service-backend.yaml
new file: k8s/prod/service-frontend.yaml

Git’s rename detection works when content similarity is above a threshold (50% by default, tunable with -M/--find-renames). Moving a file without changing it registers as a 100% rename, which preserves history: git log --follow k8s/uat/deployment-frontend.yaml shows the full history going back to when the file was called deployment.yaml.
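The behavior is easy to verify in a scratch repo:

```shell
set -e
# Moving a file unchanged registers as a rename, and --follow traces
# history across the old name.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'kind: Deployment' > deployment.yaml
git add . && git commit -qm "add deployment.yaml"
mkdir uat
git mv deployment.yaml uat/deployment-frontend.yaml
git commit -qm "restructure into uat/"
# Both commits appear, despite the second path only existing in one:
git log --follow --oneline -- uat/deployment-frontend.yaml
```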


The pattern for ongoing hygiene

Three rules that would have prevented this from accumulating:

  1. Never apply a resource without a manifest file. If kubectl create service is used to stand something up quickly, the YAML gets written to disk in the same session. kubectl get service <service>-backend -n <service>-prod -o yaml > k8s/prod/service-backend.yaml takes ten seconds.

  2. Delete stale files immediately, not eventually. The old ingress.yaml was superseded months before cleanup. Leaving it created ambiguity about which file was authoritative. If a file is no longer the source of truth, delete it when the replacement is applied.

  3. Image tags in manifests should match what’s running. The manifest is documentation as much as it is configuration. A tag from November 2025 in a manifest that’s been in production since March 2026 is misleading. Update it when the rollout completes.
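A mechanical guard helps the rules stick: assert in CI that the uat/ and prod/ trees stay symmetric. A sketch on a stand-in tree, treating externalsecret-tls.yaml as the one known prod-only file:

```shell
set -e
# Build a stand-in tree; in CI this would run against the real k8s/.
root=$(mktemp -d)
mkdir -p "$root/uat" "$root/prod"
touch "$root/uat/service-backend.yaml" "$root/prod/service-backend.yaml"
touch "$root/prod/externalsecret-tls.yaml"   # prod-only, allowed

# Compare file listings, ignoring the allowed prod-only resource.
ls "$root/uat" > "$root/uat.list"
ls "$root/prod" | grep -v '^externalsecret-' > "$root/prod.list"
diff "$root/uat.list" "$root/prod.list" && echo "environments symmetric"
```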

#kubernetes#gitops#aks#manifests