# Dynamic Storage with PVCs
In this tutorial you will use kinder’s built-in local-path-provisioner addon to dynamically provision storage for a stateful workload. You’ll create a PersistentVolumeClaim, deploy a pod that writes to it, delete the pod and verify the data survives, then see how the WaitForFirstConsumer binding mode works in a multi-node cluster. By the end you will understand the full PVC → PV → Pod lifecycle on a local kinder cluster with zero manual provisioning.
## Prerequisites

- kinder installed
- Docker (or Podman) installed and running
- kubectl installed and on PATH
## Step 1: Create the cluster

```shell
kinder create cluster
```

## Step 2: Verify the provisioner is running

Confirm the provisioner Deployment is up:
```shell
kubectl get pods -n local-path-storage
```

Expected output:

```
NAME                         READY   STATUS    RESTARTS   AGE
local-path-provisioner-...   1/1     Running   0          60s
```

Confirm `local-path` is the only StorageClass and is marked default:

```shell
kubectl get storageclass
```

Expected output:

```
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  2m
```

## Step 3: Create a PersistentVolumeClaim
Save the following as `pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it:

```shell
kubectl apply -f pvc.yaml
```

Check the PVC status immediately:

```shell
kubectl get pvc data
```

Expected output:

```
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                      local-path     5s
```

The PVC is Pending — and that’s expected. Because of WaitForFirstConsumer, no PV has been created yet. The provisioner is waiting for a pod to consume the PVC before it picks a node and creates the backing volume.
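If you want to see the provisioner explicitly waiting, inspect the claim’s events. The exact event wording varies by Kubernetes version; the shape sketched in the comment below is typical for WaitForFirstConsumer:

```shell
kubectl describe pvc data
# The Events section normally includes a line like:
#   Normal  WaitForFirstConsumer  ...  waiting for first consumer to be created before binding
```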
## Step 4: Deploy a pod that writes to the PVC

Save the following as `writer.yaml`. The pod writes a timestamped message to `/data/log.txt` every 5 seconds:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
    - name: writer
      image: busybox:1.37.0
      command: ["sh", "-c", "while true; do echo \"$(date): writer alive\" >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
```

Apply it:

```shell
kubectl apply -f writer.yaml
```

Watch the PVC transition from Pending to Bound as the pod is scheduled:

```shell
kubectl get pvc data --watch
```

Expected output (press Ctrl+C once it’s Bound):

```
NAME   STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                              1Gi        RWO            local-path     10s
data   Bound     pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c   1Gi        RWO            local-path     20s
```

The provisioner ran a short-lived helper pod to create the backing directory under `/opt/local-path-provisioner/` inside the node, then created the PV and bound it to the claim — all automatically.
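For reference, the auto-created PV looks roughly like this. This is a sketch reconstructed from the output above, not an exact dump: fields such as the `claimRef` contents and the backing directory naming scheme vary by provisioner version.

```yaml
# Illustrative only: approximate shape of the dynamically provisioned PV.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   # backing directory removed when the PVC is deleted
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c
    type: DirectoryOrCreate
  claimRef:
    namespace: default
    name: data
```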
Verify the pod is running:

```shell
kubectl get pod writer
```

Expected output:

```
NAME     READY   STATUS    RESTARTS   AGE
writer   1/1     Running   0          30s
```

## Step 5: Verify data is being written
Wait 15 seconds for the pod to write a few log entries, then read the file from inside the container:

```shell
kubectl exec writer -- cat /data/log.txt
```

Expected output (your timestamps will differ):

```
Fri Apr 10 14:23:05 UTC 2026: writer alive
Fri Apr 10 14:23:10 UTC 2026: writer alive
Fri Apr 10 14:23:15 UTC 2026: writer alive
```

## Step 6: Delete the pod and verify data persists
This is the real test — does the data survive pod deletion? Delete the pod:

```shell
kubectl delete pod writer
```

Expected output:

```
pod "writer" deleted
```

Check the PVC — it should still be Bound:

```shell
kubectl get pvc data
```

Expected output:

```
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Bound    pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c   1Gi        RWO            local-path     3m
```

The PVC (and its backing PV) survives pod deletion. The data on disk is untouched.
Now recreate the pod using the same manifest:

```shell
kubectl apply -f writer.yaml
```

Wait for it to be running:

```shell
kubectl wait --for=condition=Ready pod/writer --timeout=30s
```

Read the log file:

```shell
kubectl exec writer -- cat /data/log.txt
```

Expected output (you should see both the old entries from the first pod and new entries from the new pod):

```
Fri Apr 10 14:23:05 UTC 2026: writer alive
Fri Apr 10 14:23:10 UTC 2026: writer alive
Fri Apr 10 14:23:15 UTC 2026: writer alive
Fri Apr 10 14:25:42 UTC 2026: writer alive
Fri Apr 10 14:25:47 UTC 2026: writer alive
```

The file persisted across pod restart. The second pod attached to the same PVC, which is still backed by the same directory on the node’s filesystem.
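The guarantee being exercised here is simply that the backing directory outlives any one writer process. The same append pattern can be sketched locally, with a hypothetical `/tmp/pvc-demo` directory standing in for the PVC-backed mount (nothing below touches kinder or Kubernetes):

```shell
# /tmp/pvc-demo is a hypothetical stand-in for the PVC mount, not a
# path kinder or the provisioner actually uses.
DIR=/tmp/pvc-demo
rm -rf "$DIR" && mkdir -p "$DIR"

# First "pod": append three entries, then stop (simulating pod deletion).
for i in 1 2 3; do echo "$(date): writer alive" >> "$DIR/log.txt"; done

# Second "pod": the file is still there; new entries land after the old ones.
for i in 1 2; do echo "$(date): writer alive" >> "$DIR/log.txt"; done

wc -l < "$DIR/log.txt"   # 5 lines: all entries from both runs survived
```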
## How it works

```
PVC (data)                PV (pvc-8f2d1a4c-...)         Node filesystem
┌──────────┐              ┌──────────────────┐          ┌───────────────────────────────┐
│ 1Gi RWO  │ ◄─bound to─► │ hostPath:        │ ◄─ on ─► │ /opt/local-path-provisioner/  │
│ local-   │              │  /opt/local-...  │          │   pvc-8f2d1a4c-.../           │
│ path     │              │ reclaim: Delete  │          │     log.txt                   │
└──────────┘              └──────────────────┘          └───────────────────────────────┘
      ▲                                                           ▲
      │ volume mount                                              │
┌──────────┐                                                      │
│ Pod      │ ──── writes /data/log.txt, appended to ──────────────┘
│ (writer) │
└──────────┘
```

- PVC created — `kubectl apply -f pvc.yaml` creates a claim referencing `local-path`. Because the binding mode is `WaitForFirstConsumer`, the provisioner does nothing yet.
- Pod scheduled — when you apply `writer.yaml`, the scheduler picks a node for the pod. The provisioner sees a pod referencing an unbound PVC and runs a helper pod on that node.
- PV created — the helper pod creates a directory under `/opt/local-path-provisioner/pvc-<uuid>/` on the node, then a `PersistentVolume` with `hostPath` pointing at that directory is created and bound to the PVC.
- Pod mounts — the pod starts and mounts the PV into its filesystem at `/data`.
- Data persists on disk — writes go through the hostPath mount directly to the node’s filesystem. Even after the pod is deleted, the directory stays; a new pod referencing the same PVC reattaches to the same directory.
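The "PV created" step is simple enough to sketch directly: the helper pod’s work amounts to creating a world-writable directory named after the PV. The commands below imitate that locally under a hypothetical `/tmp/local-path-demo` parent; on a real node the parent is `/opt/local-path-provisioner` and the commands run inside a short-lived busybox container.

```shell
# Local stand-in for the node-side directory setup the helper pod performs.
PARENT=/tmp/local-path-demo                       # stands in for /opt/local-path-provisioner
VOL=pvc-8f2d1a4c-7b6e-4f9d-a1c2-3d4e5f6a7b8c      # PV name from the example output above
mkdir -m 0777 -p "$PARENT/$VOL"                   # world-writable so any pod UID can write
ls -ld "$PARENT/$VOL"
```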
## Multi-node clusters

In a multi-node cluster, each PV is pinned to a single node — the node where the pod that first consumed the PVC was scheduled. This is an inherent property of local storage: the data lives on one node’s filesystem, so pods using that PVC cannot be rescheduled elsewhere.

Try it with a multi-node cluster. Save the following as `multi-node.yaml`:

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

```shell
kinder delete cluster
kinder create cluster --config multi-node.yaml
kubectl apply -f pvc.yaml
kubectl get pvc data
```

Expected output (note it’s still Pending):

```
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data   Pending                                      local-path     5s
```

Apply the writer pod:

```shell
kubectl apply -f writer.yaml
kubectl wait --for=condition=Ready pod/writer --timeout=60s
```

Inspect which node the pod landed on, and which node the PV is bound to:

```shell
kubectl get pod writer -o jsonpath='{.spec.nodeName}' && echo
kubectl get pv -o jsonpath='{.items[0].spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}' && echo
```

Expected output (the two values must match):

```
kind-worker2
kind-worker2
```

The PV has node affinity binding it to the same node as the pod. This is why WaitForFirstConsumer is important: without it, the PV would be provisioned on an arbitrary node and then the scheduler would have to pin the pod to that node — limiting scheduling flexibility.
## Air-gapped clusters

local-path-provisioner and its busybox:1.37.0 helper image are both included in kinder’s required addon image list, so they are automatically handled by:

- the `kinder doctor` offline-readiness check
- the `kinder create cluster --air-gapped` missing-image error
- the addon image pre-pull warning printed by `kinder create cluster`
See the Working Offline guide for the full pre-load workflow.
## Troubleshooting

## Clean up

Delete the pod, PVC, and cluster:

```shell
kubectl delete pod writer
kubectl delete pvc data
kinder delete cluster
```

Deleting the PVC triggers the Delete reclaim policy, which removes the backing directory from the node. Deleting the cluster removes everything regardless — no manual cleanup is needed.
## See also

- Local Path Provisioner addon reference — full addon configuration, disable instructions, and CVE details
- Host Directory Mounting — an alternative approach when you want to share a specific host directory (not dynamically provisioned storage)
- Working Offline — pre-loading local-path-provisioner images for air-gapped clusters