Easy install using Kind

Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”.

Kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

In Step 1, we guide you through setting up your environment to launch Kubernetes via Kind.

After it’s ready, dive into the labs below to help you get acquainted with KubeVirt.

Step 1: Prepare Kind environment

This guide will help you deploy KubeVirt on Kubernetes; we’ll be using Kind.

If you have go (1.11+) and docker already installed, the following command is all you need:

GO111MODULE="on" go get sigs.k8s.io/kind@v0.7.0 && kind create cluster

Note

Please use the latest go to do this, ideally go 1.13 or greater.

This will put kind in $(go env GOPATH)/bin. If you encounter the error kind: command not found after installation, then you may need to either add that directory to your $PATH as shown here or do a manual installation by cloning the repo and running make build from the repository.
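For example, assuming a bash-like shell, you could add that directory to your PATH like this (append the line to your ~/.bashrc or ~/.profile to make it permanent):

export PATH="$(go env GOPATH)/bin:$PATH"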

Stable binaries are also available on the releases page. Stable releases are generally recommended for CI usage in particular. To install, download the binary for your platform from “Assets” and place it in your $PATH:

curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64"
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

Our recommendation is to always run the latest (*) version of Kind available for your platform of choice, following their quick start.

To use kind, you will need to install docker.

Finally, you’ll need kubectl installed (*); it can be downloaded from here or installed using the means available for your platform.
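For example, on a Linux amd64 machine, one way to fetch a specific kubectl release is shown below; pick a version matching your cluster and see the official Kubernetes documentation for other platforms:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl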

(*): Ensure that the kubectl version complies with the supported release skew (the kubectl version should be close to the Kubernetes server version).
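A quick way to compare the client and server versions (the exact output format varies between kubectl releases) is:

kubectl version --short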

Start Kind

Before starting with Kind, let’s verify whether nested virtualization is enabled on the host where Kind will be installed. If you run Kind directly on a bare-metal host, you can skip this step:

cat /sys/module/kvm_intel/parameters/nested

If you get an N, follow the instructions described here for enabling it.
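Note that the path above is specific to Intel CPUs. On AMD hosts the parameter is exposed by the kvm_amd module instead, and may be reported as 1 or 0 rather than Y or N:

cat /sys/module/kvm_amd/parameters/nested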

Note

Nested virtualization is not mandatory for testing KubeVirt in a virtual machine, but it makes things smoother. If for any reason it can’t be enabled, don’t forget to enable emulation as shown in the Check for the Virtualization Extensions section.

Let’s begin. Normally, Kind can be started with default values, and those are enough to run this quickstart guide.

For example, to create a basic cluster of 1 node you can use the following command:

$ kind create cluster # Default cluster context name is `kind`.

If you want to have multiple clusters on the same server, you can name them with the --name parameter:

$ kind create cluster --name kind

To retrieve the existing clusters, you can execute the following command:

$ kind get clusters
kind

In order to interact with a specific cluster, you only need to specify the cluster name as a context in kubectl:

$ kubectl cluster-info --context kind-kind

We’re ready to create the cluster with Kind. In this case we are using a cluster with one control-plane node and two workers:

tee cluster.yaml <<EOC
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker
EOC

kind create cluster --config cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
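As a quick sanity check, you can list the nodes and confirm that the control-plane node and the two workers reach the Ready state (node names and versions will vary with the node image you use):

kubectl get nodes --context kind-kind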

Deploy KubeVirt Operator

With the Kind cluster up and running, let’s set the version environment variable that will be used in a few commands:

# On other OSes you may need to define it explicitly, e.g.:
export KUBEVIRT_VERSION="v0.25.0"

# On Linux you can obtain it using 'curl' via:
export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)

echo $KUBEVIRT_VERSION

Now, using the kubectl tool, let’s deploy the KubeVirt Operator:

kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml

Check that it’s running:

kubectl get pods -n kubevirt
NAME                             READY     STATUS              RESTARTS   AGE
virt-operator-6c5db798d4-9qg56   0/1       ContainerCreating   0          12s
...
virt-operator-6c5db798d4-9qg56   1/1       Running   0         28s

We can execute the command above a few times, or add the -w flag to watch the pods, until the operator is in the Running and Ready (1/1) status; then it’s time to head to the next section.
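For example, the following watch can be interrupted with Ctrl-C once the operator is ready:

kubectl get pods -n kubevirt -w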

Check for the Virtualization Extensions

To check if your CPU supports virtualization extensions execute the following command:

egrep 'svm|vmx' /proc/cpuinfo

If the command doesn’t generate any output, create the following ConfigMap so that KubeVirt uses emulation mode, otherwise skip to the next section:

kubectl create configmap kubevirt-config -n kubevirt --from-literal debug.useEmulation=true
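If you want to double-check that the setting was applied, one way is to read the ConfigMap back:

kubectl get configmap kubevirt-config -n kubevirt -o yaml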

Deploy KubeVirt

KubeVirt is then deployed by creating a dedicated custom resource:

kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

Check the deployment:

kubectl get pods -n kubevirt
NAME                               READY     STATUS    RESTARTS   AGE
virt-api-649859444c-fmrb7          1/1       Running   0          2m12s
virt-api-649859444c-qrtb6          1/1       Running   0          2m12s
virt-controller-7f49b8f77c-kpfxw   1/1       Running   0          2m12s
virt-controller-7f49b8f77c-m2h7d   1/1       Running   0          2m12s
virt-handler-t4fgb                 1/1       Running   0          2m12s
virt-operator-6c5db798d4-9qg56     1/1       Running   0          6m41s

Once we applied KubeVirt’s Custom Resource, the operator took care of deploying the actual KubeVirt pods (virt-api, virt-controller and virt-handler). Again, we’ll need to execute the command until everything is up and running (or use the -w flag).
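As an alternative to polling, you can wait on the KubeVirt custom resource itself; this is a sketch assuming the resource created by kubevirt-cr.yaml is named kubevirt and reports an Available condition:

kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=5m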

Install virtctl

An additional binary is provided to get quick access to the serial and graphical ports of a VM, and handle start/stop operations. The tool is called virtctl and can be retrieved from the release page of KubeVirt:

curl -L -o virtctl \
    https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl
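Optionally, move the binary somewhere in your $PATH and confirm it runs, for example:

sudo mv virtctl /usr/local/bin/
virtctl version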

If the krew plugin manager is installed, virtctl can be installed via krew:

$ kubectl krew install virt

Then virtctl can be used as a kubectl plugin. For a list of available commands run:

$ kubectl virt help

Every occurrence throughout this guide of

$ ./virtctl <command>...

should then be read as

$ kubectl virt <command>...

Clean Up (after finishing the labs):

Delete the Kubernetes cluster with kind:

kind delete cluster
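If you created the cluster with a custom name, pass the same name when deleting it, for example:

kind delete cluster --name kind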

Step 2: KubeVirt labs

After your Kind cluster is up and running, you can work through a few labs to help you get acquainted with KubeVirt and how to use it to create and deploy VMs with Kubernetes.

The first lab is “Use KubeVirt”. This lab walks through the creation of a Virtual Machine Instance (VMI) on Kubernetes and then shows how virtctl is used to interact with its console.

The second lab is “Experiment with CDI”. This lab shows how to use the Containerized Data Importer (CDI) to import a VM image into a Persistent Volume Claim (PVC) and then how to define a VM to make use of the PVC.

The third lab is “KubeVirt upgrades”. This lab shows how easy and safe it is to upgrade your KubeVirt installation with zero downtime.

Found a bug?

We are interested in hearing about your experience.

If you experience a problem with the labs, please report it to the kubevirt.io issue tracker.