Experiment with the Containerized Data Importer (CDI)

You can experiment with this lab online at Katacoda.

CDI is a utility designed to import Virtual Machine images for use with KubeVirt.

At a high level, a PersistentVolumeClaim (PVC) is created. A custom controller watches for importer-specific claims and, when one is discovered, starts an import process that writes a raw image named disk.img with the desired content into the associated PVC.

NOTE: This lab targets deployment on a single node because it uses the hostpath storage provisioner, which places the storage on one arbitrarily chosen node. In a cluster with more than one node, only that node gets the storage, and the VM must be scheduled on that same node, otherwise it will fail to start.
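If you do run this on a multi-node cluster, you can at least verify the placement with standard kubectl output; a quick sketch to run once the importer pod and the VM pod exist later in this lab:

# -o wide adds a NODE column, so you can confirm the importer pod and the
# virt-launcher pod end up on the node that holds the hostpath volume.
kubectl get pv -o wide
kubectl get pods -o wide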

Install the CDI

We will first explore each component and then install it. In this exercise we create a hostpath provisioner and a storage class, and we deploy the CDI components using the operator.

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/storage-setup.yml
cat storage-setup.yml
kubectl create -f storage-setup.yml
export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

Review the “cdi” pods that were added.

kubectl get pods -n cdi
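Before proceeding, you can optionally wait until all CDI pods report Ready; a convenience sketch (adjust the timeout to taste):

kubectl wait --for=condition=Ready pod --all -n cdi --timeout=300s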

Use the CDI

As an example, we will import a Fedora 30 cloud image as a PVC and launch a Virtual Machine that makes use of it.

kubectl create -f https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/pvc_fedora.yml

This will create the PVC with a proper annotation so that the CDI controller detects it and launches an importer pod to gather the image specified in the cdi.kubevirt.io/storage.import.endpoint annotation.

kubectl get pvc fedora -o yaml
kubectl get pod # Make note of the pod name assigned to the import process
kubectl logs -f importer-fedora-pnbqh   # Substitute your importer-fedora pod name here.

Notice that the importer downloaded the publicly available Fedora cloud qcow2 image. Once the importer pod completes, this PVC is ready for use in KubeVirt.
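A quick way to confirm the import finished is to look at the PVC itself; a sketch (the exact CDI annotations recorded on the PVC can vary between versions):

kubectl get pvc fedora
kubectl get pvc fedora -o yaml | grep cdi.kubevirt.io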

If the importer pod ends in error, you may need to retry it or specify a different URL for the Fedora cloud image. To retry, first delete the importer pod and the PVC, and then recreate the PVC, as sketched below.
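A retry sketch following the steps above (substitute the importer pod name you noted from kubectl get pod):

kubectl delete pod importer-fedora-pnbqh   # Substitute your importer-fedora pod name here.
kubectl delete pvc fedora
kubectl create -f https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/pvc_fedora.yml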

Let’s create a Virtual Machine making use of it. Review the file vm1_pvc.yml.

wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm1_pvc.yml
cat vm1_pvc.yml

We change the YAML definition of this Virtual Machine to inject the default public key of our user into the cloud instance.

# Generate a password-less SSH key using the default location.
ssh-keygen
PUBKEY=`cat ~/.ssh/id_rsa.pub`
sed -i "s%ssh-rsa.*%$PUBKEY%" vm1_pvc.yml
kubectl create -f vm1_pvc.yml
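If you want to confirm the substitution took effect, you can simply grep the manifest; a quick sanity check (the key is presumably embedded in the VM's cloud-init user data):

grep ssh-rsa vm1_pvc.yml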

This will create and start a Virtual Machine named vm1. We can use the following command to check that our Virtual Machine is running and to gather its IP. You are looking for the IP address beside the virt-launcher pod.

kubectl get pod -o wide
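The VirtualMachineInstance object also reports the IP address and phase directly, which we will use again later in this lab:

kubectl get vmi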

Since we are running an all-in-one setup, the corresponding Virtual Machine is actually running on the same node, so we can check its qemu process.

ps -ef | grep qemu | grep vm1

Wait for the Virtual Machine to boot and become available for login. You may monitor its progress through the console. The speed at which the VM boots depends on whether bare-metal hardware is used; it is much slower when nested virtualization is used, which is likely the case if you are completing this lab on a cloud provider instance.

./virtctl console vm1

Disconnect from the virtual machine console by typing: ctrl+]

Finally, we will connect to the vm1 Virtual Machine (VM) as a regular user would, i.e. via ssh. This can be achieved by simply ssh'ing to the gathered IP, as long as we are inside the Kubernetes software-defined network (SDN), that is, connected to a node that belongs to the Kubernetes cluster network. If you followed Easy install using AWS or Easy install using GCP, your cloud instance is most likely already part of the cluster.

ssh fedora@VM_IP

On the other hand, if you followed Easy install using minikube, take into account that you will need to ssh into Minikube first, as shown below.

$ kubectl get vmi
NAME      AGE       PHASE     IP            NODENAME
vm1       109s      Running   172.17.0.16   minikube

$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ssh fedora@172.17.0.16
The authenticity of host '172.17.0.16 (172.17.0.16)' can't be established.
ECDSA key fingerprint is SHA256:QmJUvc8vbM2oXiEonennW7+lZ8rVRGyhUtcQBVBTnHs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.16' (ECDSA) to the list of known hosts.
fedora@172.17.0.16's password:

Finally, in a typical situation you will probably want to give access to your vm1 VM to someone outside the Kubernetes cluster nodes, someone who is actually connecting from his or her laptop. This can be achieved with the virtctl tool already installed in Easy install using minikube. Note that this is the same case as connecting from our laptop to the vm1 VM running on our local Minikube instance.

First, we are going to expose the ssh port of vm1 as a NodePort service. Then we verify that the Kubernetes Service object was created successfully on a random port of the Minikube or cloud instance.

$ virtctl expose vmi vm1 --name=vm1-ssh --port=20222 --target-port=22 --type=NodePort
  Service vm1-ssh successfully exposed for vmi vm1

$ kubectl get svc
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
vm1-ssh   NodePort   10.101.226.150   <none>        20222:32495/TCP   24m

Once exposed successfully, check the IP of your Minikube VM or cloud instance and verify that you can reach the VM using the public SSH key previously configured. In the case of cloud instances, verify that the applied security group allows traffic to the randomly assigned port.

$ minikube status
  host: Running
  kubelet: Running
  apiserver: Running
  kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.74
$ ssh -i ~/.ssh/id_rsa fedora@192.168.39.74 -p 32495
  Last login: Wed Oct  9 13:59:29 2019 from 172.17.0.1
  [fedora@vm1 ~]$
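If you prefer to script the lookup instead of reading the output above, here is a small sketch, assuming a Minikube setup (for a cloud instance, substitute its public IP for the minikube ip value):

NODE_IP=$(minikube ip)
NODE_PORT=$(kubectl get svc vm1-ssh -o jsonpath='{.spec.ports[0].nodePort}')
ssh -i ~/.ssh/id_rsa fedora@$NODE_IP -p $NODE_PORT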

This concludes this section of the lab.

