This alert fires when running virtual machine instances (VMIs) in outdated
virt-launcher pods are detected 24 hours after the KubeVirt control plane has
been updated.
Outdated VMIs might not have access to new KubeVirt features, and they will not
receive the security fixes associated with the virt-launcher pod update.
Identify the outdated VMIs:
$ kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
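The command lists every VMI that carries the kubevirt.io/outdatedLauncherImage
label. The following output is illustrative; the names, addresses, and nodes
will differ in your cluster:
NAMESPACE   NAME         AGE   PHASE     IP            NODENAME   READY
default     vm-example   2d    Running   10.128.0.43   node-1     True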
Check the KubeVirt custom resource (CR) to determine whether
workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:
$ kubectl get kubevirt --all-namespaces -o yaml
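Automatic workload updates take place only if update methods are listed in that
stanza. A CR configured for automatic updates contains a stanza similar to the
following, where LiveMigrate and Evict are the supported methods:
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate
    - Evict
If the stanza is missing or the list is empty, outdated VMIs must be updated
manually as described below.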
Check each outdated VMI to determine whether it is live-migratable:
$ kubectl get vmi <vmi> -o yaml
Example output:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
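To inspect only the relevant condition, you can filter the output with a
jsonpath expression, for example:
$ kubectl get vmi <vmi> -o jsonpath='{.status.conditions[?(@.type=="LiveMigratable")].status}'
A result of "True" means the VMI can be updated by live migration; "False"
means it must be stopped and restarted.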
Update the KubeVirt CR to enable automatic workload updates.
See Updating KubeVirt Workloads for more information.
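As an example, assuming the CR is named kubevirt and resides in the kubevirt
namespace (the name and namespace vary by installation), the following patch
enables automatic updates by live migration and eviction:
$ kubectl patch kubevirt kubevirt -n kubevirt --type merge \
  -p '{"spec":{"workloadUpdateStrategy":{"workloadUpdateMethods":["LiveMigrate","Evict"]}}}'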
If a VMI is not live-migratable and runStrategy: Always is set in the
corresponding VirtualMachine object, you can update the VMI by manually
stopping the virtual machine (VM):
$ virtctl stop --namespace <namespace> <vm>
A new VMI spins up immediately in an updated virt-launcher pod to replace the
stopped VMI. This is the equivalent of a restart action.
Note: Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.
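After the VM restarts, you can verify that it no longer runs in an outdated
virt-launcher pod by listing the outdated VMIs in its namespace, for example:
$ kubectl get vmi -n <namespace> -l kubevirt.io/outdatedLauncherImage
If the VMI no longer appears in the output, it is running in an updated pod.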
If a VMI is live-migratable, you can update it by creating a
VirtualMachineInstanceMigration object that targets a specific running VMI.
The VMI is migrated into an updated virt-launcher pod.
Create a VirtualMachineInstanceMigration manifest and save it as
migration.yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
  namespace: <namespace>
spec:
  vmiName: <vmi_name>
Create the VirtualMachineInstanceMigration object to trigger the migration:
$ kubectl create -f migration.yaml
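You can monitor the migration by reading the object's status; the phase field
reaches Succeeded when the VMI is running in an updated virt-launcher pod. For
example:
$ kubectl get virtualmachineinstancemigration <migration_name> -n <namespace> -o jsonpath='{.status.phase}'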
If you cannot resolve the issue, see the following resources: