A month ago I wanted to give kubevirt a quick try, since it’s one of the paths I was considering for refactoring lanecloud. My existing tooling for hacking on kubernetes has been a stack of k3s, kube-vip, and cilium. I wanted to build on top of that.

Disclaimer: the point of this post is just to share a few quick hints.

Using cilium with multus

I’m just putting this here first because this was my biggest pain point.

Cilium has a flag called cni-exclusive that has to be disabled. If you don’t, multus won’t work and you’ll waste a lot of time. I eventually came across the hint in a Discord thread where Techno Tim had been beating on multus for a non-kubevirt use case.

Here’s the magic command in my multus install script that runs after installing cilium:

  cilium config set cni-exclusive false
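
If you install cilium via Helm instead (or pass Helm values through the cilium CLI), I believe the same knob is exposed as the cni.exclusive chart value:

  helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
    --set cni.exclusive=false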

Quick Overview

So the whole thing with kubevirt is being able to manage your VMs like pods, which it does quite well. However, in that spirit, it also wants to manage networking the same way. AKA: exposing services and doing port-forwarding. This is of course fine for many use cases, and cilium plays nicely with that.

Another However: you might want to treat your VM like a normal VM, put it on a VLAN, and have it be fully routable / accessible. That is what having multus is all about: it lets you have additional CNI drivers on your kubernetes cluster so you can give your pods different types of interfaces, such as a bridged interface.

In order to achieve that objective, there are several paths, but I focused on two:

  • macvtap – This one is newer and under heavy development by the kubevirt team. It has some advantages, such as not requiring a bridge interface: it just creates a new MAC and hangs it off an existing interface. I’m sure there are other advantages, but a disadvantage is that it can’t hairpin traffic, which is a problem if you need to reach the control host from the VM. I have some scripts for setting it up, and probably could have made it work if I had found my cilium fix sooner.
  • bridge – This is as tried-and-true as it gets for linux virtual machines. You make a bridge, and new VMs attach to it. The bridge plugin is part of the CNI reference suite of plugins. This is the approach I ended up settling on.

Reminder: this was still just a quick R&D exercise, so I cut some corners and re-used some other tooling to get the job done.

Again the basic formula was:

  • bridged networking
  • k3s
  • cilium – although I’m not really taking advantage of it in this approach
  • multus
  • kubevirt

Speedrun

This is just stepping through parts of my kubevirt install scripts folder. Remember: there’s no substitute for hours of your own suffering, but I hope there are a few hints for you here.

k3s base setup

  1. Create a bridge and put your primary interface on it. Normally I’d do this with netplan, and frankly would put the bridge interface on a separate VLAN from my management interface, but I just quickly did this with NetworkManager and created bridge0. Your IP may change after you do this… hence do it first. (Rough commands for these steps are sketched after this list.)
  2. Install k3s and make sure flannel and servicelb aren’t enabled.
  3. Install the bridge CNI plugin – bridge ships in the larger bundle of reference CNI plugins; I just filtered for bridge on the tar extraction.
  4. Copy kubeconfig from your cluster to your workstation
  5. Install cilium – I followed easy mode for that
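
For reference, here’s roughly what those five steps look like as commands. Treat this as a sketch rather than a copy-paste script: eth0, the CNI plugins version, and the k3s CNI bin path are assumptions you’ll want to adjust for your own setup.

  # 1. create bridge0 with NetworkManager and enslave the primary NIC
  #    (eth0 is a placeholder; use your real interface, and expect your IP to move)
  nmcli con add type bridge ifname bridge0 con-name bridge0
  nmcli con add type bridge-slave ifname eth0 master bridge0
  nmcli con up bridge0

  # 2. install k3s with flannel and servicelb turned off
  curl -sfL https://get.k3s.io | sh -s - \
    --flannel-backend=none \
    --disable-network-policy \
    --disable=servicelb

  # 3. pull just the bridge plugin out of the reference CNI bundle and drop it
  #    where k3s keeps its CNI binaries (this path may differ on your install)
  CNI_BIN=/var/lib/rancher/k3s/data/current/bin
  curl -sL https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz \
    | tar -xz -C "$CNI_BIN" ./bridge

  # 4. copy /etc/rancher/k3s/k3s.yaml to ~/.kube/config on your workstation
  #    and point the server: field at the node's address

  # 5. install cilium the easy way
  cilium install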

installing kubevirt

  1. install the virt plugin via krew – this plugin provides additional CLI comforts for interacting with the VMs. (The commands for this whole section are sketched after the list.)
  2. install the kubevirt operator
  3. install kubevirt crds
  4. install kubevirt cdi (containerized data importer) – Using CDI is a big part of doing business with kubevirt. It imports VM disk images (from an HTTP URL, a container registry, and so on) into DataVolumes backed by PVCs that your VMs boot from. You’ll want to read about it.
  5. enable the network binding plugins feature gate – this is a kubevirt feature gate that was easiest to apply post-install.
  6. install multus – again: take note of the cilium setting cni-exclusive: false.
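
Roughly what that sequence looks like as commands. The version lookups follow the upstream quickstarts; the feature gate name and the multus manifest path are the bits I’d double-check against current docs before trusting me.

  # 1. krew plugin that gives you `kubectl virt ...`
  kubectl krew install virt

  # 2 + 3. kubevirt operator (which ships the CRDs), then the KubeVirt custom resource
  export KV_VERSION=$(curl -s https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
  kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KV_VERSION}/kubevirt-operator.yaml
  kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KV_VERSION}/kubevirt-cr.yaml

  # 4. CDI operator and its custom resource
  export CDI_VERSION=$(basename $(curl -s -w '%{redirect_url}' -o /dev/null https://github.com/kubevirt/containerized-data-importer/releases/latest))
  kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
  kubectl apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml

  # 5. enable the NetworkBindingPlugins feature gate post-install
  #    (note: this merge patch replaces any featureGates you already had set)
  kubectl patch kubevirt kubevirt -n kubevirt --type merge \
    -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["NetworkBindingPlugins"]}}}}'

  # 6. multus daemonset (on k3s you may need to point it at k3s's CNI conf and bin dirs)
  kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml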

deploying VMs

  1. create a bridge network attachment definition for bridge0 – See this manifest – the VM deployment will reference it. (Example manifests are sketched after this list.)
  2. create a data volume – CDI will import this image and create a datavolume that the VM will use
  3. create a virtual machine – my example uses an armbian/debian cloud-init image and attaches it to the bridge network
  4. check your dhcp server and see if your VM is online.
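
To make step 1 concrete, a minimal NetworkAttachmentDefinition for bridge0 looks roughly like this. The name is mine, and the empty ipam block is deliberate: the VM just takes DHCP from whatever network the bridge sits on.

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: bridge0
  spec:
    config: |
      {
        "cniVersion": "0.3.1",
        "type": "bridge",
        "bridge": "bridge0",
        "ipam": {}
      }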
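
And a sketch of the DataVolume plus VirtualMachine pair for steps 2 and 3. Names, sizes, and the image URL are placeholders; I’m showing a generic debian cloud image here rather than the exact armbian image from my repo.

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: demo-vm-root
  spec:
    source:
      http:
        # swap in the cloud image you actually want CDI to import
        url: https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2
    pvc:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
  ---
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: demo-vm
  spec:
    running: true
    template:
      spec:
        domain:
          resources:
            requests:
              memory: 2Gi
          devices:
            disks:
              - name: rootdisk
                disk:
                  bus: virtio
              - name: cloudinit
                disk:
                  bus: virtio
            # bridge binding on the multus-provided network
            interfaces:
              - name: lan
                bridge: {}
        networks:
          - name: lan
            multus:
              networkName: bridge0
        volumes:
          - name: rootdisk
            dataVolume:
              name: demo-vm-root
          - name: cloudinit
            cloudInitNoCloud:
              userData: |
                #cloud-config
                hostname: demo-vm

Apply both; the VM will sit waiting until CDI finishes the import, then boot and go ask your DHCP server for an address.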

useful commands

  kubectl get dv
  kubectl get vmi
  kubectl get vm

It’s nice to watch these resources from within k9s. Remember, if you hit Esc then : in k9s you can run manual queries for things like the above.

Watching the CDI importer pod while it creates the dv is great for troubleshooting.
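
Something like this, reusing the DataVolume name from the example above. The importer-<datavolume name> pod naming is the convention I’ve seen from CDI, so double-check the actual pod name on your cluster.

  # watch the DataVolume move through its import phases
  kubectl get dv -w

  # tail the CDI importer pod while it pulls the image
  kubectl logs -f importer-demo-vm-root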