
Dynamic Storage with PVCs

In this tutorial you will use kinder’s built-in local-path-provisioner addon to dynamically provision storage for a stateful workload. You’ll create a PersistentVolumeClaim, deploy a pod that writes to it, delete the pod and verify the data survives, then see how the WaitForFirstConsumer binding mode works in a multi-node cluster. By the end you will understand the full PVC → PV → Pod lifecycle on a local kinder cluster with zero manual provisioning.

Prerequisites

  • kinder installed
  • Docker (or Podman) installed and running
  • kubectl installed and on PATH

Step 1: Create a cluster

kinder create cluster

Confirm the provisioner Deployment is up:

kubectl get pods -n local-path-storage

Expected output:

NAME                         READY   STATUS    RESTARTS   AGE
local-path-provisioner-...   1/1     Running   0          60s

Step 2: Inspect the default StorageClass

Confirm local-path is the only StorageClass and is marked default:

kubectl get storageclass

Expected output:

NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m
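
You can also check the default-class marker directly. The annotation key storageclass.kubernetes.io/is-default-class is standard Kubernetes; note that dots inside a key must be escaped in kubectl's JSONPath syntax:

```shell
# Print the default-class annotation on the local-path StorageClass.
# Dots in the annotation key are escaped with backslashes for JSONPath.
kubectl get storageclass local-path \
  -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
# Should print: true
```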

Step 3: Create a PersistentVolumeClaim

Save the following as pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply it:

kubectl apply -f pvc.yaml

Check the PVC status immediately:

kubectl get pvc data

Expected output:

NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                      local-path     5s

The PVC is Pending — and that’s expected. Because of WaitForFirstConsumer, no PV has been created yet. The provisioner is waiting for a pod to consume the PVC before it picks a node and creates the backing volume.
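
You can see the provisioner waiting in the PVC's events (the exact event wording can vary by Kubernetes version):

```shell
# Describe the claim and look at the Events section at the bottom.
kubectl describe pvc data
# Look for an event like:
#   Normal  WaitForFirstConsumer  ...  waiting for first consumer to be created before binding
```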

Step 4: Deploy a pod that writes to the PVC


Save the following as writer.yaml. The pod writes a timestamped message to /data/log.txt every 5 seconds:

apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
    - name: writer
      image: busybox:1.37.0
      command: ["sh", "-c", "while true; do echo \"$(date): writer alive\" >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data

Apply it:

kubectl apply -f writer.yaml

Watch the PVC transition from Pending to Bound as the pod is scheduled:

kubectl get pvc data --watch

Expected output (press Ctrl+C once it’s Bound):

NAME   STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                                                        local-path     10s
data   Bound     pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c   1Gi        RWO            local-path     20s

The provisioner ran a short-lived helperPod to create the backing directory under /opt/local-path-provisioner/ on the node, then created the PV and bound it to the claim, all automatically.
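
If you want to see the backing directory itself, you can exec into the node container from the host. This assumes the default single-node cluster whose node container is named kind-control-plane; check docker ps for the actual name on your machine:

```shell
# List provisioned volume directories on the node's filesystem.
# "kind-control-plane" is the assumed default node container name.
docker exec kind-control-plane ls /opt/local-path-provisioner/
# Expect one directory per provisioned volume; recent provisioner
# versions name it pvc-<uuid>_<namespace>_<claim-name>
```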

Verify the pod is running:

kubectl get pod writer

Expected output:

NAME     READY   STATUS    RESTARTS   AGE
writer   1/1     Running   0          30s

Step 5: Read the data

Wait 15 seconds for the pod to write a few log entries, then read the file from inside the container:

kubectl exec writer -- cat /data/log.txt

Expected output (your timestamps will differ):

Fri Apr 10 14:23:05 UTC 2026: writer alive
Fri Apr 10 14:23:10 UTC 2026: writer alive
Fri Apr 10 14:23:15 UTC 2026: writer alive

Step 6: Delete the pod and verify data persists


This is the real test — does the data survive pod deletion? Delete the pod:

kubectl delete pod writer

Expected output:

pod "writer" deleted

Check the PVC — it should still be Bound:

kubectl get pvc data

Expected output:

NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Bound    pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c   1Gi        RWO            local-path     3m

The PVC (and its backing PV) survives pod deletion. The data on disk is untouched.
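
You can also confirm the PV itself is untouched:

```shell
# The PV is cluster-scoped; it should still be Bound to the claim.
kubectl get pv
# Expect STATUS Bound, CLAIM default/data, RECLAIM POLICY Delete
```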

Now recreate the pod using the same manifest:

kubectl apply -f writer.yaml

Wait for it to be running:

kubectl wait --for=condition=Ready pod/writer --timeout=30s

Read the log file:

kubectl exec writer -- cat /data/log.txt

Expected output (you should see both the old entries from the first pod and new entries from the new pod):

Fri Apr 10 14:23:05 UTC 2026: writer alive
Fri Apr 10 14:23:10 UTC 2026: writer alive
Fri Apr 10 14:23:15 UTC 2026: writer alive
Fri Apr 10 14:25:42 UTC 2026: writer alive
Fri Apr 10 14:25:47 UTC 2026: writer alive

The file persisted across pod restart. The second pod attached to the same PVC, which is still backed by the same directory on the node’s filesystem.

PVC (data)           PV (pvc-8f2d1a4c-...)       Node filesystem
┌──────────┐         ┌──────────────────┐        ┌───────────────────────────────┐
│ 1Gi RWO  │◄─bound─►│ hostPath:        │◄─ on ─►│ /opt/local-path-provisioner/  │
│ local-   │         │ /opt/local-...   │        │   pvc-8f2d1a4c-.../           │
│ path     │         │ reclaim: Delete  │        │     log.txt                   │
└──────────┘         └──────────────────┘        └───────────────────────────────┘
     ▲                                                        ▲
     │ volume mount                                           │
     │                                                        │
┌──────────┐                                                  │
│ Pod      │── writes /data/log.txt, appended to ────────────┘
│ (writer) │
└──────────┘
  1. PVC created — kubectl apply -f pvc.yaml creates a claim referencing local-path. Because the binding mode is WaitForFirstConsumer, the provisioner does nothing yet.
  2. Pod scheduled — when you apply writer.yaml, the scheduler picks a node for the pod. The provisioner sees a pod referencing an unbound PVC and runs a helperPod on that node.
  3. PV created — the helperPod creates a directory under /opt/local-path-provisioner/pvc-<uuid>/ on the node, then a PersistentVolume with hostPath pointing at that directory is created and bound to the PVC.
  4. Pod mounts — the pod starts and mounts the PV into its filesystem at /data.
  5. Data persists on disk — writes go through the hostPath mount directly to the node’s filesystem. Even after the pod is deleted, the directory stays; a new pod referencing the same PVC reattaches to the same directory.
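
These steps also surface as events on the claim, which you can list with a field selector:

```shell
# Show only events whose involved object is the PVC named "data".
kubectl get events --field-selector involvedObject.name=data
# Typically includes WaitForFirstConsumer, Provisioning, and
# ProvisioningSucceeded events (names may vary by provisioner version)
```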

In a multi-node cluster, each PV is pinned to a single node — the node where the pod that first consumed the PVC was scheduled. This is an inherent property of local storage: the data lives on one node’s filesystem, so pods using that PVC cannot be rescheduled elsewhere.
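
The pinning is recorded on the PV as a nodeAffinity stanza, roughly of this shape (the hostname value here is illustrative):

```yaml
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - kind-worker2
```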

Try it with a multi-node cluster:

multi-node.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
  - role: worker

kinder delete cluster
kinder create cluster --config multi-node.yaml
kubectl apply -f pvc.yaml
kubectl get pvc data

Expected output (note it’s still Pending):

NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                      local-path     5s

Apply the writer pod:

kubectl apply -f writer.yaml
kubectl wait --for=condition=Ready pod/writer --timeout=60s

Inspect which node the pod landed on, and which node the PV is bound to:

kubectl get pod writer -o jsonpath='{.spec.nodeName}' && echo
kubectl get pv -o jsonpath='{.items[0].spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}' && echo

Expected output (the two values must match):

kind-worker2
kind-worker2

The PV has node affinity binding it to the same node as the pod. This is why WaitForFirstConsumer is important: without it, the PV would be provisioned on an arbitrary node and then the scheduler would have to pin the pod to that node — limiting scheduling flexibility.
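
For contrast, a StorageClass using Immediate binding would look like the manifest below. This is purely illustrative and not part of the tutorial; local-path ships with WaitForFirstConsumer by default:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-immediate   # illustrative name, not created by kinder
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: Immediate   # PV is provisioned as soon as the PVC exists
```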

local-path-provisioner and its busybox:1.37.0 helper image are both included in kinder’s required addon image list, so they are automatically handled by:

  • The kinder doctor offline-readiness check
  • The kinder create cluster --air-gapped missing-image error
  • The addon image pre-pull NOTE printed by kinder create cluster

See the Working Offline guide for the full pre-load workflow.

Cleanup

Delete the pod, PVC, and cluster:

kubectl delete pod writer
kubectl delete pvc data
kinder delete cluster

Deleting the PVC triggers the Delete reclaim policy, which removes the backing directory from the node. Deleting the cluster removes everything regardless — no manual cleanup is needed.
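
If you want to watch the reclaim itself, list PVs after kubectl delete pvc data but before kinder delete cluster:

```shell
# The Delete reclaim policy removes the PV along with its backing directory.
kubectl get pv
# Once the provisioner finishes reclaiming, this reports no resources
```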