This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.
Orphaned VMIs cannot be managed.
Check the status of the virt-handler pods to view the nodes on which they are running:
$ kubectl get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:
$ kubectl get vmis --all-namespaces
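To identify the orphaned VMIs directly, you can compare the node names reported in the VMI status with the nodes that run a virt-handler pod. The following is a minimal sketch, assuming the VMI exposes its node in status.nodeName and using arbitrary temporary file names:
$ kubectl get pods --all-namespaces -l kubevirt.io=virt-handler -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u > /tmp/virt-handler-nodes
$ kubectl get vmis --all-namespaces -o jsonpath='{range .items[*]}{.status.nodeName}{"\n"}{end}' | sort -u > /tmp/vmi-nodes
$ comm -23 /tmp/vmi-nodes /tmp/virt-handler-nodes
Any node printed by the last command hosts VMIs without a running virt-handler pod.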
Check the status of the virt-handler daemon set in the namespace where KubeVirt is installed (typically kubevirt):
$ kubectl get daemonset virt-handler -n kubevirt
Example output:
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
virt-handler   2         2         2       2            2           kubernetes.io/os=linux   4h
The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.
If the virt-handler daemon set is not healthy, check it for pod deployment issues:
$ kubectl get daemonset virt-handler -n kubevirt -o yaml | jq .status
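For a quick pass/fail check, you can compare the relevant status fields directly. The following is a minimal sketch, assuming the daemon set status reports desiredNumberScheduled, numberReady, and numberAvailable:
$ kubectl get daemonset virt-handler -n kubevirt -o json | jq '.status | {desired: .desiredNumberScheduled, ready: .numberReady, available: (.numberAvailable // 0)} | . + {healthy: (.desired == .ready and .ready == .available)}'
The daemon set is healthy only if the three reported values match.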
Check the nodes for issues such as a NotReady status:
$ kubectl get nodes
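To inspect a specific node that hosts an orphaned VMI more closely, you can print its conditions. A brief sketch, with <node-name> as a placeholder for the affected node:
$ kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
On a healthy node, the Ready condition is True and pressure conditions such as MemoryPressure and DiskPressure are False.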
Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:
$ kubectl get kubevirt --all-namespaces -o yaml
If a workloads placement policy is configured, add the node with the VMI to the policy.
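For illustration, if the policy selects workload nodes by label, labeling the node that hosts the orphaned VMI brings it back into scope. The following is a sketch of the spec.workloads stanza, assuming a KubeVirt CR named kubevirt in the kubevirt namespace and a hypothetical kubevirt.io/workloads=enabled selector label:
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  workloads:
    nodePlacement:
      nodeSelector:
        kubevirt.io/workloads: enabled
With such a policy in place, you can add the node with the VMI to the policy by applying the label:
$ kubectl label node <node-name> kubevirt.io/workloads=enabled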
Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.
Try to identify the root cause and resolve the issue.
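To compare the two, you can print the node’s taints next to the tolerations of the virt-handler daemon set. A minimal sketch, again assuming the kubevirt namespace and using <node-name> as a placeholder:
$ kubectl get node <node-name> -o jsonpath='{.spec.taints}'
$ kubectl get daemonset virt-handler -n kubevirt -o jsonpath='{.spec.template.spec.tolerations}'
Any taint on the node that is not matched by a toleration in the daemon set prevents the virt-handler pod from running there.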
See "How Daemon Pods are scheduled" in the Kubernetes documentation for more information.
If you cannot resolve the issue, see the following resources: