Metrics Server

Metrics Server enables kubectl top commands and provides the resource metrics API required by the Horizontal Pod Autoscaler (HPA). Without it, kubectl top returns an error and HPA cannot function.

kinder installs Metrics Server v0.8.1.

Resource                    Namespace        Purpose
Metrics Server deployment   kube-system      Collects CPU/memory metrics from kubelets
metrics.k8s.io APIService   cluster-scoped   Exposes the resource metrics API

Metrics Server is configured with --kubelet-insecure-tls to work with the self-signed certificates that kind nodes use. This flag is expected and safe in a local development environment.
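If you want to confirm the flag is present on a running cluster, you can inspect the container arguments of the metrics-server deployment (a quick sanity check, not a required step):

```shell
# Print the args of the metrics-server container; the list should
# include --kubelet-insecure-tls.
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```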

  • kubectl top nodes — shows CPU and memory usage per node
  • kubectl top pods — shows CPU and memory usage per pod
  • HPA support — Horizontal Pod Autoscaler can read CPU/memory metrics and scale deployments automatically
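A quick way to confirm the metrics pipeline is healthy is to check that the APIService registration reports as available (a standard kubectl check, shown here as a sanity step):

```shell
# The AVAILABLE column should read True once Metrics Server is serving.
kubectl get apiservice v1beta1.metrics.k8s.io
```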
kubectl top nodes

Expected output:

NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-control-plane   150m         7%     512Mi           13%

Metrics Server is controlled by the addons.metricsServer field:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
addons:
  metricsServer: true # default

See the Configuration Reference for all available addon fields.

To disable Metrics Server, set the field to false:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
addons:
  metricsServer: false

When Metrics Server is disabled, kubectl top returns an error and HPA resources do not function.
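If you disable the addon but later want metrics, one option is to install Metrics Server yourself from the upstream release manifest. On kind-style nodes you would still need to add --kubelet-insecure-tls yourself; the patch below is one way to do that (the manifest URL is the upstream metrics-server release location, not something kinder manages):

```shell
# Install upstream Metrics Server, then allow self-signed kubelet certs.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
```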

Show resource usage across all namespaces:

kubectl top pods -A

Expected output:

NAMESPACE     NAME                          CPU(cores)   MEMORY(bytes)
kube-system   coredns-787d4945fb-abcde      3m           15Mi
kube-system   metrics-server-7d75f5b5-xyz   5m           20Mi
default       my-app-6d4f8b9c-pqrst         12m          64Mi

Show pods in a specific namespace sorted by CPU usage:

kubectl top pods -n default --sort-by=cpu
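The same flag accepts memory as well:

```shell
# Sort pods in the default namespace by memory usage, highest first.
kubectl top pods -n default --sort-by=memory
```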

The following example creates a deployment with resource requests and an HPA that scales it based on CPU utilization.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          resources:
            requests:
              cpu: "100m"
            limits:
              cpu: "200m"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

After applying, verify the HPA is reading metrics:

kubectl get hpa my-app-hpa

Expected output (once metrics are available after ~60 seconds):

NAME         REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-app-hpa   Deployment/my-app   8%/50%    1         5         1          90s
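To see the HPA actually scale up, you can generate load against the deployment. This sketch assumes you first expose the deployment as a Service named my-app; the load-generator pod name and busybox image tag are illustrative choices, not part of the example above:

```shell
# Expose the deployment, then hammer it from a throwaway busybox pod.
kubectl expose deployment my-app --port=80
kubectl run load-generator --image=busybox:1.36 --restart=Never -- \
  /bin/sh -c 'while true; do wget -q -O- http://my-app; done'
```

Delete the load-generator pod when you are done and the HPA will scale the deployment back down after its stabilization window.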

Symptom: kubectl describe hpa my-app-hpa shows <unknown>/50% for the CPU metric target.

Cause: The target deployment’s containers do not have resources.requests.cpu set. The HPA controller computes CPU utilization as current usage (reported by Metrics Server) divided by requested CPU. Without a request, the denominator is undefined and the metric cannot be calculated.

Fix: Add resources.requests.cpu to the container spec in the deployment and redeploy:

containers:
  - name: my-app
    image: nginx
    resources:
      requests:
        cpu: "100m"

After redeploying, wait about 60 seconds for Metrics Server to complete a new scrape cycle, then check the HPA again.
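Rather than polling manually, you can watch the HPA and wait for the TARGETS column to change from <unknown> to a percentage:

```shell
# -w streams updates as metrics become available.
kubectl get hpa my-app-hpa -w
```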