little-vm-helper (lvh) is a VM management tool aimed at testing and development of features that depend on the kernel, such as BPF. It is used in cilium, tetragon, and pwru, and can also be used for kernel development. It is not meant for, and should not be used for, running production VMs. Its main goals are fast booting, fast image building, and storage efficiency.
It uses qemu and the libguestfs tools (see the dependencies table below).
Configurations for specific images used in the Cilium project can be found at https://github.com/cilium/little-vm-helper-images.
For an example script, see scripts/example.sh.
LVH can be used to:
- build root images for VMs
- build kernels
- download kernels
- boot VMs using the above (see the sketch below)
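These capabilities map to the subcommands used throughout this README. A quick sketch (the examples run the tool with go run from the repository root; an installed lvh binary works the same way):
go run ./cmd/lvh images ...    # build root images from a JSON configuration
go run ./cmd/lvh kernels ...   # fetch, build, and download kernels
go run ./cmd/lvh run ...       # boot a VM from an image and, optionally, a kernel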
Build example images:
$ mkdir _data
$ go run ./cmd/lvh images example-config > _data/images.json
$ go run ./cmd/lvh images build --dir _data # this may require sudo as it relies on /dev/kvm
The first command will create a configuration file:
$ jq . < _data/images.json
[
{
"name": "base",
"packages": [
"less",
"vim",
"sudo",
"openssh-server",
"curl"
],
"actions": [
{
"comment": "disable password for root",
"op": {
"Cmd": "passwd -d root"
},
"type": "run-command"
}
]
},
{
"name": "k8s",
"parent": "base",
"image_size": "20G",
"packages": [
"docker.io"
]
}
]
The configuration file includes:
- a set of packages for the image
- an optional parent image
- a set of actions to be performed after the installation of the packages. Multiple actions are supported; see pkg/images/actions.go. A sketch of tweaking the generated configuration follows below.
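For example, the generated configuration can be adjusted before building; a sketch using jq (the tcpdump package is only an illustrative addition):
$ # illustrative: add tcpdump to the "base" image's package list
$ jq '(.[] | select(.name == "base") | .packages) += ["tcpdump"]' _data/images.json > _data/images.json.new
$ mv _data/images.json.new _data/images.json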
Once the images build command completes, the two images described in the configuration file will be present in the images directory. Note that the images are stored as sparse files, so they take up less space:
$ ls -sh1 _data/images/*.img
856M _data/images/base.img
1.7G _data/images/k8s.img
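To see the effect of the sparse allocation, you can compare the apparent size with the actual disk usage (the exact numbers depend on the build):
$ du -h --apparent-size _data/images/base.img   # apparent (sparse) size
$ du -h _data/images/base.img                   # actual blocks used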
$ mkdir -p _data/kernels
$ go run ./cmd/lvh kernels --dir _data init
$ go run ./cmd/lvh kernels --dir _data add bpf-next git:https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git --fetch
$ go run ./cmd/lvh kernels --dir _data build bpf-next
Please note that to cross-build for a different architecture, you can use the --arch=arm64 or --arch=amd64 flag.
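For example, a cross-build of the bpf-next kernel for arm64 could look like this (a sketch; the flag placement is assumed, see --help for the exact syntax):
$ # assumed flag placement on the build subcommand shown above
$ go run ./cmd/lvh kernels --dir _data build bpf-next --arch=arm64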
The configuration file keeps the url for a kernel, together with its configuration options:
$ jq . < _data/kernel.json
{
"kernels": [
{
"name": "bpf-next",
"url": "git:https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git"
}
],
"common_opts": [
[
"--enable",
"CONFIG_LOCALVERSION_AUTO"
],
... more options ...
]
}
There are options that are applied to all kernels (common_opts) as well as kernel-specific options.
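As a sketch, an option can be enabled for all kernels by appending to common_opts, for example with jq (CONFIG_DEBUG_INFO_BTF is only an illustrative kernel option):
$ # illustrative: enable an extra config option for all kernels
$ jq '.common_opts += [["--enable", "CONFIG_DEBUG_INFO_BTF"]]' _data/kernel.json > _data/kernel.json.new
$ mv _data/kernel.json.new _data/kernel.json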
The kernels are kept in worktrees. Specifically, there is a bare git directory (git) that holds all the objects, and one worktree per kernel. This allows efficient fetching while keeping each kernel in its own separate directory.
For example:
$ ls -1 _data/kernels
5.18/
bpf-next/
git/
Currently, kernels are built using the bzImage (x86_64) or Image.gz (arm64) and tar-pkg targets (see pkg/kernels/conf.go).
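After a successful build, the resulting kernel image lives inside the per-kernel worktree; for x86_64 this is the bzImage path also used by the run examples below:
$ ls _data/kernels/bpf-next/arch/x86_64/boot/bzImage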
List the available versions:
$ lvh kernels catalog
bpf-next
rhel8
4.9
4.19
[...]
6.3
6.6
Retrieve the tags for a given version:
$ lvh kernels catalog 6.6
6.6-20240123.120815
6.6-20240123.175813
[...]
6.6-20240404.144247
6.6-20240408.100959
6.6-main
See lvh kernels catalog --help for more details.
Download a kernel and related artifacts (BTF, modules, etc.):
$ lvh kernels pull 6.6-main
$ find 6.6-main/ -maxdepth 3
6.6-main/
6.6-main/boot
6.6-main/boot/vmlinuz-6.6.25
6.6-main/boot/btf-6.6.25
6.6-main/boot/System.map-6.6.25
6.6-main/boot/vmlinux-6.6.25
6.6-main/boot/config-6.6.25
6.6-main/lib
6.6-main/lib/modules
6.6-main/lib/modules/6.6.25
See lvh kernels pull --help for more details.
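The pulled artifacts can then be used with the run subcommand described below, for example (a sketch; the exact vmlinuz file name depends on the kernel version that was pulled):
$ # file name depends on the pulled kernel version
$ lvh run --image _data/images/base.qcow2 --kernel 6.6-main/boot/vmlinuz-6.6.25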
You can use the run subcommand to start images. For example:
go run ./cmd/lvh run --image _data/images/base.qcow2 --kernel _data/kernels/bpf-next/arch/x86_64/boot/bzImage
Or, to boot with the kernel installed in the image:
go run ./cmd/lvh run --image _data/images/base.qcow2
OCI images are also supported:
go run ./cmd/lvh run --image quay.io/lvh-images/root-images:main
Note: Building images and kernels is only supported on Linux. However, images and kernels already built on Linux can be booted on macOS (both x86 and Arm). The only requirement is qemu-system-x86_64. As macOS does not support KVM, the command to boot images is:
go run ./cmd/lvh run --image _data/images/base.qcow2 --qemu-disable-kvm
Existing packer builders (e.g., https://github.com/cilium/packer-ci-build/blob/710ad61e7d5b0b6872770729a30bcdade2ee1acb/cilium-ubuntu.json#L19, https://www.packer.io/plugins/builders/qemu) are meant to manage VMs with longer lifetimes than a single use, and use facilities that introduce unnecessary overhead for our use-case.
Also, packer does not seem to have a way to provision images without booting a machine. There is an outdated chroot package https://github.com/summerwind/packer-builder-qemu-chroot, and cloud chroot builders (e.g., https://www.packer.io/plugins/builders/amazon/chroot that uses https://github.com/hashicorp/packer-plugin-sdk/tree/main/chroot).
That being said, if we need packer functionality we can create a packer plugin (https://www.packer.io/docs/plugins/creation#developing-plugins).
These tools also target production VMs with lifetimes stretching beyond a single use. As a result, they introduce overhead in boot time, provisioning time, and storage.
On Debian-based distributions, here is the list of packages needed for LVH to work (an example install command follows the table).
| Action | Debian packages |
|---|---|
| Building images | qemu-kvm mmdebstrap debian-archive-keyring libguestfs-tools |
| Building the Linux kernel | libncurses-dev gawk flex bison openssl libssl-dev dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf llvm |
| Cross-compile arm64 on x86_64 | gcc-aarch64-linux-gnu |
| Cross-compile x86_64 on arm64 | gcc-x86-64-linux-gnu |
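For example, on a Debian-based system the packages for building images can be installed with apt (package names taken from the table above):
$ sudo apt-get install qemu-kvm mmdebstrap debian-archive-keyring libguestfs-tools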
- development workflow for MacOS X
- images: configuration option for using different deb distros (hardcoded to sid now)
- images: build tetragon images
- unit tests
- e2e tests (kind)
- images: docker image with required binaries (libguestfs, mmdebstrap, etc.) to run the tool
  - [x] is that possible? libguestfs needs to boot a mini-VM
- kernels: add support for building kernels
- runner: qemu runner wrapper
- images bootable VMs: running qemu with --kernel is convenient for development. If we want to store images externally (e.g., AWS), it might make sense to support bootable VMs.
- improve boot time: minimal init, use qemu microvm (https://qemu.readthedocs.io/en/latest/system/i386/microvm.html, https://mergeboard.com/blog/2-qemu-microvm-docker/)
- images: on a failed run, save everything in an image-failed-$(date) directory
- use guestfish --listen (see https://github.com/libbpf/ci/blob/cbb3b92facbad705bbb619b496d0debb4b3d806f/prepare-rootfs/run.sh#L345)
- earlier attempt: https://github.com/kkourt/kvm-dev-scripts