This alert fires when one or more virt-handler pods are running, but not
all of them have been in a Ready state for the last 10 minutes.
The virt-handler runs as a DaemonSet on every node that can schedule VMIs, so
each such node typically has one virt-handler pod.
A node can have a virt-handler pod that is running but not Ready. VMIs on such
nodes may not be fully managed (for example, domain updates or network and
storage changes may not be applied). If the condition persists, it can lead to
the NoReadyVirtHandler alert for the affected nodes.
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(kubectl get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pods:
$ kubectl -n $NAMESPACE get pods -l kubevirt.io=virt-handler -o wide
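To list each pod together with its Ready condition explicitly, a jsonpath query can help; the query below is an illustrative convenience, not part of the alert tooling:
$ kubectl -n $NAMESPACE get pods -l kubevirt.io=virt-handler \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'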
For pods that are running but not ready, inspect pod conditions and events:
$ kubectl -n $NAMESPACE describe pod -l kubevirt.io=virt-handler
If pods are in CrashLoopBackOff, or to inspect runtime failures, check the logs
of the non-ready virt-handler pods and look for errors:
$ kubectl -n $NAMESPACE logs -l kubevirt.io=virt-handler
Note: with multiple pods (a DaemonSet), -l may not return logs from every pod;
use a pod name from the pod listing above to target a specific non-ready pod.
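If a pod's container has restarted (for example, in CrashLoopBackOff), the logs of the previous container instance often contain the actual error; kubectl exposes these via the --previous flag (replace the placeholder with a real pod name):
$ kubectl -n $NAMESPACE logs <virt-handler-pod-name> --previous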
If needed, check the virt-handler DaemonSet and its events:
$ kubectl -n $NAMESPACE describe daemonset virt-handler
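If the DaemonSet is mid-update, checking its rollout status can show whether the update is progressing or stuck:
$ kubectl -n $NAMESPACE rollout status daemonset virt-handler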
Check for node issues on nodes where virt-handler is not ready:
$ kubectl get nodes
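For a node hosting a non-ready pod, describing the node shows its conditions (such as memory or disk pressure), taints, and recent events (replace the placeholder with the node name from the wide pod listing above):
$ kubectl describe node <node-name>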
Identify why some virt-handler pods are not ready (e.g. failed readiness
probe, resource pressure, node issues) and resolve the underlying cause so
all schedulable nodes have a ready virt-handler.
If you cannot resolve the issue, see the following resources: