This alert triggers when a virtual machine (VM) is in an unhealthy state for more than 10 minutes and does not have an associated VMI (VirtualMachineInstance).
The alert indicates that a VM is experiencing early-stage lifecycle issues before a VMI can be successfully created. This typically occurs during the initial phases of VM startup when KubeVirt is trying to provision resources, pull images, or schedule the workload.
Affected states:
- Provisioning - The VM is preparing resources (DataVolumes, PVCs)
- Starting - VM is attempting to start but no VMI exists yet
- Terminating - VM is being deleted but without an active VMI
- Error - Various scheduling, image, or resource allocation errors

# Get VM details and status
$ kubectl get vm <vm-name> -n <namespace> -o yaml
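If only the top-level state is needed, the VM's reported status can be read directly; the field names below assume a recent KubeVirt API version.
# Show the VM's reported state (e.g. Provisioning, Starting, Terminating)
$ kubectl get vm <vm-name> -n <namespace> -o jsonpath='{.status.printableStatus}{"\n"}'
# Show the VM's status conditions with reasons and messages
$ kubectl get vm <vm-name> -n <namespace> \
-o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}: {.message}{"\n"}{end}'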
# Check VM events for error messages
$ kubectl describe vm <vm-name> -n <namespace>
# Look for related events in the namespace
$ kubectl get events -n <namespace> --sort-by='.lastTimestamp'
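To narrow the event stream to the affected VM, the involvedObject field selector can be used (standard kubectl; the object name usually matches the VM name, but verify this for your cluster).
# Show only events related to the VM object
$ kubectl get events -n <namespace> --sort-by='.lastTimestamp' \
--field-selector involvedObject.name=<vm-name>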
# Check node resources and schedulability
$ kubectl get nodes -o wide
$ kubectl describe nodes
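If scheduling appears to be the blocker, a quick look at cordoned nodes and taints often narrows it down; this is plain kubectl, nothing KubeVirt-specific.
# List each node's taints and whether it is cordoned (unschedulable)
$ kubectl get nodes \
-o custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,TAINTS:.spec.taints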
# Check storage classes and provisioners
$ kubectl get storageclass
$ kubectl get pv,pvc -n <namespace>
# If the VM uses DataVolumes, check their status
$ kubectl get datavolume -n <namespace>
$ kubectl describe datavolume <dv-name> -n <namespace>
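When a DataVolume is stuck importing, the CDI importer pod usually carries the underlying error. The pod name shown below (importer-<dv-name>) is the usual convention but may differ by CDI version.
# Find and inspect the importer pod for the DataVolume
$ kubectl get pods -n <namespace> | grep importer
$ kubectl logs -n <namespace> importer-<dv-name>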
# If using containerDisk, verify image accessibility from the affected node
# Start a debug session on the node hosting the VM (or a representative node)
$ kubectl debug node/<node-name> -it --image=busybox
# The node's root filesystem is mounted at /host in the debug pod;
# chroot into it so the host's tools (ps, crictl, docker) are available
$ chroot /host
# Check which container runtime is in use
$ ps aux | grep -E "(containerd|dockerd|crio)"
# For CRI-O/containerd clusters use crictl to pull the image
$ crictl pull <vm-disk-image>
# For Docker-based clusters (less common)
$ docker pull <vm-disk-image>
# Exit the chroot and then the debug session when done
$ exit
$ exit
# Check image pull secrets if required
$ kubectl get secrets -n <namespace>
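To confirm a pull secret actually holds credentials for the registry serving the disk image (assuming it is a kubernetes.io/dockerconfigjson secret), decode it and check the registry entry.
# Decode the registry credentials stored in the secret
$ kubectl get secret <secret-name> -n <namespace> \
-o jsonpath='{.data.\.dockerconfigjson}' | base64 -d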
# Discover the KubeVirt installation namespace
$ export NAMESPACE="$(kubectl get kubevirt -A -o custom-columns="":.metadata.namespace)"
# Check KubeVirt CR conditions (expect Available=True)
$ kubectl get kubevirt -n "$NAMESPACE" \
-o jsonpath='{range .items[*].status.conditions[*]}{.type}={.status}{"\n"}{end}'
# Or check a single CR named 'kubevirt'
$ kubectl get kubevirt kubevirt -n "$NAMESPACE" \
-o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
# Verify virt-controller is running
$ kubectl get pods -n "$NAMESPACE" \
-l kubevirt.io=virt-controller
# Check virt-controller logs for errors
# Replace <virt-controller-pod> with a pod name from the list above
$ kubectl logs -n "$NAMESPACE" <virt-controller-pod>
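Several virt-controller replicas may be running, so it can be quicker to pull logs by label and filter for the affected VM; the --tail value below is an arbitrary choice.
# Search recent virt-controller logs for the VM name
$ kubectl logs -n "$NAMESPACE" -l kubevirt.io=virt-controller --tail=500 | grep <vm-name>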
# Verify virt-handler is running
$ kubectl get pods -n "$NAMESPACE" \
-l kubevirt.io=virt-handler -o wide
# Check virt-handler logs for errors (virt-handler is a DaemonSet with one pod per node)
# Replace <virt-handler-pod> with a pod name from the list above
$ kubectl logs -n "$NAMESPACE" <virt-handler-pod>
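If a particular node is suspected, the virt-handler pod for that node can be selected directly with a standard field selector.
# Get the virt-handler pod running on a given node
$ kubectl get pods -n "$NAMESPACE" -l kubevirt.io=virt-handler \
--field-selector spec.nodeName=<node-name>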
$ kubectl get pvc -n <namespace>
$ kubectl describe pvc <pvc-name> -n <namespace>
# Validate from the node
$ kubectl debug node/<node-name> -it --image=busybox
# chroot into the host filesystem so the host's tools are available
$ chroot /host
# Inside the debug pod, detect the runtime and pull
$ ps aux | grep -E "(containerd|dockerd|crio)"
# For CRI-O/containerd clusters:
$ crictl pull <image-name>
# For Docker-based clusters (less common):
$ docker pull <image-name>
# Exit the chroot, then the debug session
$ exit
$ exit
$ kubectl create secret docker-registry <secret-name> -n <namespace> \
--docker-server=<registry-url> \
--docker-username=<username> \
--docker-password=<password>
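If the containerDisk image requires authentication, one option is to attach the secret to the namespace's default service account, which virt-launcher pods typically use; treat this as a generic Kubernetes sketch rather than a KubeVirt-specific requirement.
# Attach the pull secret to the default service account in the VM's namespace
$ kubectl patch serviceaccount default -n <namespace> \
-p '{"imagePullSecrets":[{"name":"<secret-name>"}]}'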
$ kubectl uncordon <node-name>
# Edit VM specification directly
$ kubectl edit vm <vm-name> -n <namespace>
# Or patch specific fields
$ kubectl patch vm <vm-name> -n <namespace> --type='merge' \
-p='{"spec":{"template":{"spec":{"domain":{"resources":{"requests":{"memory":"2Gi"}}}}}}}'
$ kubectl create secret generic <secret-name> -n <namespace> \
--from-literal=key=value
# Restart the VM to pick up spec changes
$ virtctl restart <vm-name> -n <namespace>
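Once the VM restarts successfully, a VMI should exist and the alert condition should clear; the VMI name normally matches the VM name.
# Confirm that a VMI was created for the VM
$ kubectl get vmi <vm-name> -n <namespace>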
# Check current storage class status
$ kubectl get storageclass
$ kubectl describe storageclass <current-storage-class>
# Look for PVC provisioning errors
$ kubectl describe pvc <pvc-name> -n <namespace>
# If seeing "no volume provisioner" or similar errors,
# specify a working storage class in VM spec:
# spec.dataVolumeTemplates[].spec.pvc.storageClassName:
# <working-class>
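For example, assuming the first dataVolumeTemplate is the one affected (an already-created, stuck DataVolume may also need to be deleted so it is re-provisioned with the new class):
# Set an explicit storage class on the first DataVolume template
$ kubectl patch vm <vm-name> -n <namespace> --type='json' \
-p='[{"op":"add","path":"/spec/dataVolumeTemplates/0/spec/pvc/storageClassName","value":"<working-class>"}]'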
Escalate to the cluster administrator if any of the following related alerts are also firing:

- VirtControllerDown - May indicate controller issues preventing VM processing
- LowKVMNodesCount - Related to insufficient KVM-capable nodes
- KubeVirtNoAvailableNodesToRunVMs - Indicates no nodes available for VM scheduling

If you cannot resolve the issue, see the following resources: