This ‘lab’ targets deployment on a single node, as it uses Minikube and its hostpath
storage class, which can only create PersistentVolumes (PVs) on one node at a time. In production use, deploy a StorageClass capable of ReadWriteOnce or better operation to ensure PVs are accessible from any node.
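If you are unsure which StorageClass your cluster will use, you can list the available classes and check which one is marked as default before starting (a quick sanity check, not a required step):

kubectl get storageclass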
Experiment with the Containerized Data Importer (CDI)
- You can experiment with this lab online at Killercoda
In this lab, you will learn how to use the Containerized Data Importer (CDI) to import Virtual Machine images for use with KubeVirt. CDI simplifies the process of importing data from various sources into Kubernetes Persistent Volumes, making it easier to use that data within your virtual machines.
CDI introduces DataVolumes, custom resources meant to be used as abstractions of PVCs. A custom controller watches for DataVolumes and handles the creation of a target PVC with all the spec and annotations required for importing the data. Depending on the source type, another source-specific CDI controller starts the import process and creates a raw image named disk.img with the desired content in the target PVC.
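Schematically, a DataVolume pairs a source (an HTTP URL, a container registry image, an existing PVC, and so on) with a PVC-like storage request. The field values below are placeholders; the full manifest used in this lab appears in the import section further down.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    http:                  # other source types include registry, pvc, upload and blank
      url: "https://example.com/disk-image.qcow2"
  storage:
    resources:
      requests:
        storage: 5Gi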
Install the CDI
In this exercise we deploy the latest release of CDI using its Operator.
export VERSION=$(basename $(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest))
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
Check the status of the cdi CustomResource (CR) created in the previous step. The CR’s Phase will change from Deploying to Deployed as the pods it deploys are created and reach the Running state.
kubectl get cdi cdi -n cdi
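Alternatively, if you prefer to block until the deployment finishes rather than polling, something like the following should work (the Available condition and the timeout value are assumptions; adjust to taste):

kubectl wait cdi cdi -n cdi --for=condition=Available --timeout=300s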
Review the “cdi” pods that were added.
kubectl get pods -n cdi
Use CDI to Import a Disk Image
First, you need to create a DataVolume that points to the source data you want to import. In this example, we’ll use a DataVolume to import a Fedora 40 Cloud image into a PVC and launch a Virtual Machine that makes use of it.
cat <<EOF > dv_fedora.yml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fedora"
spec:
  storage:
    resources:
      requests:
        storage: 5Gi
  source:
    http:
      url: "https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-AmazonEC2.x86_64-40-1.14.raw.xz"
EOF
kubectl create -f dv_fedora.yml
A custom CDI controller will use this DataVolume to create a PVC with the same name and proper spec/annotations so that an import-specific controller detects it and launches an importer pod. This pod will gather the image specified in the source field.
kubectl get pvc fedora -o yaml
kubectl get pod # Make note of the pod name assigned to the import process
kubectl logs -f importer-fedora-pnbqh # Substitute your importer-fedora pod name here.
Notice that the importer downloaded the publicly available Fedora Cloud image (a compressed raw image in this case). Once the importer pod completes, this PVC is ready for use in KubeVirt.
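You can also follow the import at the DataVolume level; its phase should advance through the import stages and end in Succeeded (the exact phase names may vary slightly between CDI versions):

kubectl get dv fedora -w   # watch the DataVolume until its phase reports Succeeded, then press ctrl+c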
If the importer pod fails, you may need to retry it or specify a different URL for the Fedora Cloud image. To retry, first delete the importer pod and the DataVolume, and then recreate the DataVolume.
kubectl delete -f dv_fedora.yml --wait
kubectl create -f dv_fedora.yml
The following error occurs when the storage provider is not recognized by CDI, so no access mode can be inferred for the PVC:
message: no accessMode defined DV nor on StorageProfile for standard StorageClass
Edit the DataVolume YAML to specify the access mode manually, then retry:
spec:
  storage:
+   accessModes:
+     - ReadWriteOnce
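For reference, a complete dv_fedora.yml with the access mode set explicitly would look roughly like this (ReadWriteOnce is assumed here to match Minikube's hostpath provisioner):

cat <<EOF > dv_fedora.yml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fedora"
spec:
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
  source:
    http:
      url: "https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-AmazonEC2.x86_64-40-1.14.raw.xz"
EOF
kubectl create -f dv_fedora.yml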
Let’s create a Virtual Machine that makes use of this PVC. Review the file vm1_pvc.yml.
wget https://kubevirt.io/labs/manifests/vm1_pvc.yml
cat vm1_pvc.yml
We change the YAML definition of this Virtual Machine to inject the user’s default public SSH key into the cloud instance.
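The sed command below rewrites the placeholder key inside the manifest’s cloud-init data. That part of vm1_pvc.yml looks roughly like the following (a sketch only; check the downloaded file for the exact structure):

  volumes:
    - name: cloudinitdisk
      # the ssh-rsa line inside userData is the placeholder that the sed command rewrites
      cloudInitNoCloud:
        userData: |
          #cloud-config
          ssh_authorized_keys:
            - ssh-rsa AAAA...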
# Generate a password-less SSH key using the default location.
ssh-keygen
PUBKEY=`cat ~/.ssh/id_rsa.pub`
sed -i "s%ssh-rsa.*%$PUBKEY%" vm1_pvc.yml
kubectl create -f vm1_pvc.yml
This will create and start a Virtual Machine named vm1. We can use the following command to check that our Virtual Machine is running and to gather its IP address. You are looking for the IP address shown beside the virt-launcher pod.
kubectl get pod -o wide
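Alternatively, the IP address is also reported on the VirtualMachineInstance itself, so a jsonpath query like the following should return it directly (assuming the interface list is populated once the VM is running):

kubectl get vmi vm1 -o jsonpath='{.status.interfaces[0].ipAddress}'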
Since we are running an all-in-one setup, the corresponding Virtual Machine is actually running on the same node, so we can check its qemu process.
ps -ef | grep qemu | grep vm1
Wait for the Virtual Machine to boot and become available for login. You may monitor its progress through the console. The speed at which the VM boots depends on whether bare-metal hardware is used; it is much slower under nested virtualization, which is likely the case if you are completing this lab on a cloud provider instance.
virtctl console vm1
Disconnect from the virtual machine console by typing: ctrl+]
Finally, we will connect to the vm1 Virtual Machine (VM) as a regular user would, i.e. via ssh. This can be done by simply ssh-ing to the IP gathered above, provided we are inside the Kubernetes software-defined network (SDN), that is, connected to a node that belongs to the Kubernetes cluster network. If you followed Easy install using AWS or Easy install using GCP, your cloud instance is most likely already part of the cluster.
ssh fedora@VM_IP
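Depending on your virtctl version, it may also provide an ssh subcommand that tunnels the connection through the API server, saving you from looking up the IP at all. Treat this as an optional convenience rather than part of the lab:

virtctl ssh -i ~/.ssh/id_rsa fedora@vmi/vm1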
On the other hand, if you followed Easy install using minikube, take into account that you will need to ssh into Minikube first, as shown below.
$ kubectl get vmi
NAME   AGE    PHASE     IP            NODENAME
vm1    109s   Running   172.17.0.16   minikube
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ssh fedora@172.17.0.16
The authenticity of host '172.17.0.16 (172.17.0.16)' can't be established.
ECDSA key fingerprint is SHA256:QmJUvc8vbM2oXiEonennW7+lZ8rVRGyhUtcQBVBTnHs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.16' (ECDSA) to the list of known hosts.
fedora@172.17.0.16's password:
Finally, in a typical situation you will probably want to give someone outside the Kubernetes cluster nodes access to your vm1 VM, i.e. someone connecting from their laptop. This can be achieved with the virtctl tool already installed in Easy install using minikube. Note that this is the same case as connecting from our laptop to the vm1 VM running on our local Minikube instance.
First, we are going to expose the SSH port of vm1 as a NodePort service. Then we verify that the Kubernetes Service object was created successfully on a random port of the Minikube or cloud instance.
$ virtctl expose vmi vm1 --name=vm1-ssh --port=20222 --target-port=22 --type=NodePort
Service vm1-ssh successfully exposed for vmi vm1
$ kubectl get svc
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
vm1-ssh   NodePort   10.101.226.150   <none>        20222:32495/TCP   24m
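Since the NodePort is assigned randomly, you can also read it out with a jsonpath query instead of parsing the table (assuming the service is named vm1-ssh as above):

kubectl get svc vm1-ssh -o jsonpath='{.spec.ports[0].nodePort}'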
Once the service is exposed, check the IP of your Minikube VM or cloud instance and verify that you can reach the VM using the public SSH key configured earlier. In the case of cloud instances, verify that the applied security group allows traffic to the randomly assigned port.
$ minikube ip
192.168.39.74
$ ssh -i ~/.ssh/id_rsa fedora@192.168.39.74 -p 32495
Last login: Wed Oct 9 13:59:29 2019 from 172.17.0.1
[fedora@vm1 ~]$
This concludes this section of the lab.
You can watch how the lab is done in the following video: