# Metrics Server
Metrics Server enables the `kubectl top` command and provides the resource metrics API required by the Horizontal Pod Autoscaler (HPA). Without it, `kubectl top` returns an error and HPA cannot function.
kinder installs Metrics Server v0.8.1.
## What gets installed

| Resource | Namespace | Purpose |
|---|---|---|
| Metrics Server deployment | `kube-system` | Collects CPU/memory metrics from kubelets |
| `metrics.k8s.io` APIService | cluster-scoped | Exposes the resource metrics API |
Metrics Server is configured with `--kubelet-insecure-tls` to work with the self-signed certificates that kind nodes use. This flag is expected and safe in a local development environment.
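As a rough sketch of where that flag lives, here is an abbreviated fragment of the Metrics Server container spec. Only `--kubelet-insecure-tls` is confirmed by this page; the image path and the `--kubelet-preferred-address-types` flag are illustrative assumptions based on common kind setups, and the manifest kinder actually applies may differ:

```yaml
# Abbreviated, illustrative fragment of the metrics-server Deployment.
# Only the args matter here; the rest of the manifest is omitted.
containers:
  - name: metrics-server
    image: registry.k8s.io/metrics-server/metrics-server:v0.8.1 # assumed image path
    args:
      # Skip kubelet certificate verification (safe for local kind nodes):
      - --kubelet-insecure-tls
      # Assumed companion flag, commonly set in kind-based clusters:
      - --kubelet-preferred-address-types=InternalIP
```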
## What you get

- `kubectl top nodes` — shows CPU and memory usage per node
- `kubectl top pods` — shows CPU and memory usage per pod
- HPA support — the Horizontal Pod Autoscaler can read CPU/memory metrics and scale deployments automatically
## How to verify

```shell
kubectl top nodes
```

Expected output:

```
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-control-plane   150m         7%     512Mi           13%
```

## Configuration
Metrics Server is controlled by the `addons.metricsServer` field:

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
addons:
  metricsServer: true # default
```

See the Configuration Reference for all available addon fields.
## How to disable

```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
addons:
  metricsServer: false
```

When Metrics Server is disabled, `kubectl top` returns an error and HPA resources will not function.
## Practical examples

### View pod resource usage

Show resource usage across all namespaces:
```shell
kubectl top pods -A
```

Expected output:

```
NAMESPACE     NAME                          CPU(cores)   MEMORY(bytes)
kube-system   coredns-787d4945fb-abcde      3m           15Mi
kube-system   metrics-server-7d75f5b5-xyz   5m           20Mi
default       my-app-6d4f8b9c-pqrst         12m          64Mi
```

Show pods in a specific namespace sorted by CPU usage:
```shell
kubectl top pods -n default --sort-by=cpu
```

### Set up a Horizontal Pod Autoscaler

The following example creates a deployment with resource requests and an HPA that scales it based on CPU utilization.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          resources:
            requests:
              cpu: "100m"
            limits:
              cpu: "200m"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

After applying, verify the HPA is reading metrics:
```shell
kubectl get hpa my-app-hpa
```

Expected output (once metrics are available after ~60 seconds):

```
NAME         REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-app-hpa   Deployment/my-app   8%/50%    1         5         1          90s
```

## Troubleshooting
### HPA shows unknown/50%

Symptom: `kubectl describe hpa my-app-hpa` shows `<unknown>/50%` for the CPU metric target.
Cause: The target deployment’s containers do not have `resources.requests.cpu` set. The HPA computes CPU utilization as current CPU usage (reported by Metrics Server) divided by requested CPU. Without a request, the denominator is undefined and the metric cannot be calculated.
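To make the arithmetic concrete, here is a small Python sketch of the utilization calculation and the standard HPA scaling rule, `desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)`. The function names and the sample numbers are illustrative, not taken from a real cluster:

```python
import math

def cpu_utilization_percent(usage_millicores: float, request_millicores: float) -> float:
    """Utilization as the HPA computes it: current usage / requested CPU."""
    if request_millicores <= 0:
        # No CPU request: the denominator is undefined, so the HPA shows <unknown>.
        raise ValueError("no CPU request set: utilization is undefined")
    return 100.0 * usage_millicores / request_millicores

def desired_replicas(current_replicas: int, current_util: float, target_util: float) -> int:
    """Standard HPA rule: ceil(currentReplicas * currentUtilization / targetUtilization)."""
    return math.ceil(current_replicas * current_util / target_util)

# One pod using 8m of CPU against a 100m request -> 8% utilization,
# below the 50% target, so the HPA keeps one replica.
util = cpu_utilization_percent(8, 100)
print(util)                            # 8.0
print(desired_replicas(1, util, 50))   # 1

# Under load, 120m against 100m -> 120% utilization; with a 50% target
# the HPA asks for ceil(1 * 120 / 50) = 3 replicas.
print(desired_replicas(1, 120.0, 50))  # 3
```

This also shows why a missing CPU request produces `<unknown>`: the utilization function has nothing to divide by.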
Fix: Add `resources.requests.cpu` to the container spec in the deployment and redeploy:

```yaml
containers:
  - name: my-app
    image: nginx
    resources:
      requests:
        cpu: "100m"
```

After redeploying, wait about 60 seconds for Metrics Server to complete a new scrape cycle, then check the HPA again.