How to run Kubernetes on HyperV

Disclaimer 

This paper comes with a repository on GitHub, where you can find the newest and greatest configurations. All parts mentioned here may change, so always refer to the latest version in the repository. While this exercise “works on my machine”, you may need to do extensive bug hunting and figuring things out on your own. I hope the document will give you a really good overview of the process of setting up a Kube cluster on HyperV from WSL using HashiCorp tools. 

 

Post written by our Lead Cloud DevOps Architect Peter Marczis (May 2024)

Introduction 

Today's desktop PCs are small powerhouses, and prices for RAM and SSDs are low. I myself own a desktop PC with virtually unlimited resources compared to the one I was working on 10 years ago. I want to put this hardware to good use, and what better task could there be than running a full-blown Kubernetes cluster locally?

I'm a hardcore Linuxer, have been for 20 years, but I have to admit that these days I spend my hours in front of Windows 11. The reason is simply that I do some things that are painful on Linux. With WSL2 and Visual Studio Code, we all have our development environment, so I decided to give HyperV a try.  

Why WSL in particular? I could have tried to install all those tools and get rid of the WSL part, but then I would have really missed my beloved shell, and I wanted to see where Microsoft was with the WSL integrations. Spoiler alert: I'm pleasantly surprised they went in the right direction, even if it is still pretty rough around the edges - but usable. 

In this article, we will use various tools to achieve our goal:
 

  • Windows HyperV: a virtualisation platform provided by Microsoft.  
  • HashiCorp Packer: a tool to automate the creation of machine images
  • Vagrant: an orchestrator that manages the creation of development environments on virtualisation platforms
  • Kubernetes: a high-level container orchestration platform 

Basics 

The goal of this exercise is to run a Kube cluster that is as production ready as possible, although of course this setup will never see production. So I started with HyperV and Packer, from WSL. Now, if you have never heard of these, you should probably stop and spend some hours on Google searches, but let me try to summarize what is happening here.  

HyperV is a hypervisor that runs virtual machines and has been included in the Windows Pro edition for many years now. It is not perfect for what we want to do, but it is what it is, and I’m committed to using the tools provided. I didn’t really want to install VirtualBox or something similar. Don’t forget that you have to enable HyperV, and if you haven’t enabled the virtualization features of your CPU in the BIOS – it is a good time to reboot and do so. 
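If you are not sure whether Hyper-V is already on, you can check and enable it from Windows. Below is a minimal sketch, run from WSL through the Windows interop layer; it assumes your terminal was started as Administrator, since querying and enabling Windows features requires elevation.

# Check whether the Hyper-V feature set is enabled
powershell.exe -Command 'Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All'

# Enable it if needed, then reboot Windows
powershell.exe -Command 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All'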

Packer is an open source product made by HashiCorp that is used to create base images from various sources. In my case I will use Ubuntu 22.04.4 LTS, aka Jammy. The way it works is that it takes a downloaded ISO image, boots up a HyperV machine, hijacks the GRUB prompt, sets up Autoinstall, and kicks off the installation. 

What is Autoinstall? Over the years, different Linux distributions came up with different solutions to automate the OS installation process. We had Kickstart, we had preseeding, and so on. Ubuntu has Autoinstall now, and I wanted to go for the new and shiny, so I went with it. The configuration is quite simple, and I’m happy with it for now. You can get an initial Autoinstall config by installing the system manually once and then simply grabbing the generated config from that machine. This approach is very user friendly and lets you tune the configuration by hand without having to author the YAML or JSON from scratch. 
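If you go this route, the installer leaves the answers it recorded on the installed system. A small sketch, assuming a default Ubuntu Server 22.04 install (the path below is where the Subiquity installer normally writes it):

# On the manually installed machine, inspect the recorded autoinstall answers
sudo less /var/log/installer/autoinstall-user-data

# Copy it out and use it as the starting point for your own config
sudo cp /var/log/installer/autoinstall-user-data ./user-data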

#cloud-config
autoinstall:
  apt:
    disable_components: []
    fallback: abort
    geoip: true
    mirror-selection:
      primary:
      - country-mirror
      - arches: &id001
        - amd64
        - i386
        uri: http://archive.ubuntu.com/ubuntu/
      - arches: &id002
        - s390x
        - arm64
        - armhf
        - powerpc
        - ppc64el
        - riscv64
        uri: http://ports.ubuntu.com/ubuntu-ports
    preserve_sources_list: false
    security:
    - arches: *id001
      uri: http://security.ubuntu.com/ubuntu/
    - arches: *id002
      uri: http://ports.ubuntu.com/ubuntu-ports
  codecs:
    install: false
  drivers:
    install: false
  identity:
    hostname: ubuntu
    password: $y$j9T$sC3tQImc3zsBCjerZViXT0$mrC/i0T5DrPLdAFTE38ART8gAnhUD8OAAwX.9X3.w5.
    realname: vagrant
    username: vagrant
  kernel:
    package: linux-generic
  keyboard:
    layout: us
    toggle: null
    variant: ''
  locale: en_US.UTF-8
  network:
    ethernets:
      eth0:
        dhcp4: true
    version: 2
  oem:
    install: auto
  source:
    id: ubuntu-server-minimal
    search_drivers: false
  ssh:
    allow-pw: false
    authorized-keys: [
      "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key",
      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN1YdxBpNlzxDqfJyw/QKow1F+wvG9hXGoqiysfJOn5Y vagrant insecure public key"
    ]
    install-server: true
  storage:
    layout:
      name: lvm
  updates: security
  packages:
    - linux-azure
    - vim
    - salt-minion
  version: 1
  late-commands: [
    "echo 'vagrant ALL=(ALL) NOPASSWD:ALL' >> /target/etc/sudoers.d/90-vagrant",
    "echo 'UseDNS no' >> /target/etc/ssh/sshd_config"
  ]
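One note on the identity section above: the password field is a crypt-style hash, not plain text. If you want to generate your own hash (for a different password, say), something along these lines works; mkpasswd comes from the whois package on Ubuntu, and the yescrypt method matches the $y$ prefix used above:

# Generate a yescrypt password hash for the autoinstall identity section
sudo apt-get install -y whois
mkpasswd --method=yescrypt vagrant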

Vagrant

Now, a couple of things to remark on about this configuration. Since we will be using Vagrant (which I haven’t mentioned yet), you will need to tailor your image a little bit. Vagrant is yet another HashiCorp tool, used to spin up multiple VMs locally. You can think of it as Terraform for your local development VMs. Yes, we used VMs before everyone became a container junkie, and yes, we still use VMs for this, because we aim to build a Kube cluster.  

 
So, the necessary configuration for Vagrant includes: 

  • A vagrant user, with the password vagrant 
  • As username and password authentication is quite insecure (even though this whole thing will be quite insecure after all), I decided to go with a public/private key pair for SSH. Which is a twist of course, as we need a known private key. Luckily, Vagrant ships with a well-known insecure key pair, so a private key we all know, and it will replace this key during the first boot. Please don’t freak out! The key used here is in the repository, and that is by design (see the sketch after this list for fetching it). 
  • I prefer to set “UseDNS no” locally, so logins are much faster. 
  • We need to grant the vagrant user sudo rights with no password. 
  • As you will see later, we will use SaltStack to configure our nodes, hence the salt-minion package. 
  • And, surprise surprise, we need something called the linux-azure package. This package has the magic kernel modules that report the IP configuration of our box to HyperV. Packer and Vagrant use that information to figure out where to connect. 
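Because the Packer template further down points ssh_private_key_file at ./ssh-key, that well-known Vagrant insecure private key has to exist on disk before the build. A hedged sketch for fetching it, assuming it is still published under keys/ in the hashicorp/vagrant repository:

# Fetch the well-known Vagrant insecure private key and store it where the Packer template expects it
curl -fsSL https://raw.githubusercontent.com/hashicorp/vagrant/main/keys/vagrant -o ./ssh-key
chmod 600 ./ssh-key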

Now that we have the cloud-config for the Autoinstall, we just need to tell Packer what to do. Below you can find our Packer template: 

packer { 
    required_plugins { 
      hyperv = { 
        source  = "github.com/hashicorp/hyperv" 
        version = "~> 1" 
      } 
    } 
}  
source "hyperv-iso" "ubuntu" { 
  # Path to the ISO file 
  iso_url      = "/mnt/c/Users/marcz/Documents/pro/ubuntu-22.04.4-live-server-amd64.iso" 
  iso_checksum = "sha256:45f873de9f8cb637345d6e66a583762730bbea30277ef7b32c9c3bd6700a32b2" 
  # Specify the Hyper-V virtual switch name 
  switch_name  = "public_network" 
  # Hyper-V VM name 
  vm_name      = "PackerBuilder" 
  # Memory size (in MB) 
  memory       = "4096" 
  # CPU cores 
  cpus         = "12" 
  # Size of the virtual hard disk (in MB) 
  disk_size    = "130048" 
  boot_command = [ 
        "<esc><wait>", 
        "c<wait>", 
        "linux /casper/vmlinuz autoinstall 'ds=nocloud;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/'<enter><wait>", 
        "initrd /casper/initrd<enter><wait>", 
        "boot<enter>" 
  ] 
  boot_wait        = "5s" 
  http_directory   = "http" 
  shutdown_command = "echo 'vagrant' | sudo -S shutdown -P now" 
  ssh_username     = "vagrant" 
  #ssh_password     = "vagrant" 
  ssh_private_key_file = "./ssh-key" 
  ssh_wait_timeout = "10000s" 
} 
build { 
  sources = ["source.hyperv-iso.ubuntu"] 
  post-processors { 
    post-processor "shell-local" { 
      # Add metadata and pack the box file, then add it to Vagrant 
      inline = [ 
        "echo '{ \"provider\": \"hyperv\" }' > output-ubuntu/metadata.json", 
        "cd output-ubuntu", 
        "tar -cvzf ../ubuntu.box ./*", 
        "cd -", 
        "vagrant box add ./ubuntu.box --name ubuntu --force" 
      ] 
    } 
  } 
} 

Just as before, let’s discuss some parts of this. As you see, you need to feed in the downloaded ISO (or rather its path and checksum) together with some parameters for the VM we will build the image on. 

One important part here is the boot_command. With this boot command, Packer will send an ESC key press, wait a bit, and type our lines into the GRUB command line. This way we can pass the Autoinstall parameters to the kernel and kick off the boot process.  
The   

{{ .HTTPIP }}:{{ .HTTPPort }} 

part will be substituted by Packer dynamically with the right values. And behold, with this we have arrived at our first misery with WSL, HyperV, and Windows: the networking. By default, WSL2 (today) does something funky. It runs a VM behind a NAT setup in order to reach the rest of the world from WSL.  
Since we need a more advanced setup, this is just not good enough for a Kube cluster. In order to run our workloads, we need to configure WSL to use “mirrored” networking. This way your network cards will be visible to the WSL Linux, and it will (hopefully) get an IP address from your local network’s DHCP server. 

To do so, you need a .wslconfig file in your Windows user profile directory (%UserProfile%\.wslconfig) with this simple content: 

[wsl2] 
networkingMode = "mirrored" 
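The setting only takes effect after WSL is restarted. A minimal one-liner; it can be run from a Windows prompt or even from inside WSL itself, since it calls out to the Windows side:

# Shut down the WSL VM so it picks up the new networking mode on the next start
wsl.exe --shutdown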

You also need to create a new network switch for Packer (and later for Vagrant) in HyperV that is connected to your network card.

(A word of warning: I did all of this on a beefy desktop PC with a real wired network card. I don’t think this will be a smooth experience on a WiFi card.)  
I used the HyperV manager to achieve this:

In the “Virtual Switch Manager…”, click on “New virtual network switch”.

Next, pick “External” as the switch type and bind it to your physical network adapter.

We have simply named this new switch ‘public_network’; if you choose a different name, you will need to change the switch name in the Packer configuration.
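If you prefer the command line over the Hyper-V Manager GUI, the same switch can be created with PowerShell. This is only a sketch: it runs from WSL in a terminal started as Administrator, and the adapter name “Ethernet” is an assumption; check yours with Get-NetAdapter first.

# List the physical adapters to find the right name
powershell.exe -Command 'Get-NetAdapter'

# Create an external switch named public_network bound to that adapter (adapter name is an assumption)
powershell.exe -Command 'New-VMSwitch -Name "public_network" -NetAdapterName "Ethernet" -AllowManagementOS $true'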

Now our WSL2 instance and our Packer machine are all running on our real local network, which must have a DHCP server somewhere. Everything is fine: everyone gets an IP address, and the new VM can reach Packer's tiny web server that serves the Autoinstall files.

If all went well, you can start using Packer right after installing it in your WSL, which is pretty easy.
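For reference, here is a minimal sketch of installing Packer from HashiCorp's apt repository inside WSL (Ubuntu) and running the build; the assumption is that the template and the http directory live in your current working directory, as laid out above.

# Add HashiCorp's apt repository and install Packer inside WSL
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y packer

# Install the required plugins and run the build from the template directory
packer init .
packer build .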


Would you like to learn more about the Packer configuration? Go and check out part 2.