Alexander Graf
Alexander currently works at AWS, where he is responsible for the Nitro Hypervisor. Previously, he worked on QEMU, KVM, openSUSE / SLES on ARM and U-Boot. Whenever something really useful comes to his mind, he implements it. Among other things, he implemented Mac OS X virtualization using KVM, nested SVM, KVM on PowerPC, much of the QEMU work for openSUSE on ARM, and the UEFI compatibility layer in U-Boot.
Session
In the most common use case, all virtual machines boot using system firmware taken from a 'standard' path on the hypervisor host. For BIOS-booted VMs, the firmware is normally SeaBIOS-based; for UEFI-booted VMs, it is edk2-based. Currently, when a cloud VM is launched, the firmware binary is supplied by the cloud provider and the end user has no control over it. For confidential VMs, this poses problems for both the end user and the cloud provider.
- The end user receives firmware measurements for attestation purposes; however, without the ability to provide a self-built (or otherwise trusted) binary, these measurements can only indicate that the firmware hasn't changed. The end user has to implicitly place some trust in the cloud-provider-supplied firmware binary.
- The cloud provider can't update the firmware (e.g. to fix a vulnerability) without disturbing user workloads. Because the firmware is included in the launch measurements, simply swapping the firmware will cause attestation failures. The problem is even worse for embargoed vulnerabilities.
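To make the attestation concern concrete, here is a minimal sketch of why swapping the firmware breaks attestation. This is plain Python, not any real SEV-SNP/TDX implementation; the `launch_measurement` helper is a hypothetical stand-in for the platform's real launch-digest computation, which hashes the initial guest memory (including the firmware image).

```python
import hashlib

def launch_measurement(firmware: bytes, initial_state: bytes = b"") -> str:
    # Hypothetical stand-in for a confidential-VM launch measurement:
    # real schemes hash the initial guest memory contents, which
    # include the firmware image (SEV-SNP, for example, uses SHA-384).
    return hashlib.sha384(firmware + initial_state).hexdigest()

old_fw = b"edk2 binary the user attested against"
new_fw = b"edk2 binary with an embargoed CVE fix"

expected = launch_measurement(old_fw)

# Provider silently swaps in the patched firmware: the measurement no
# longer matches, so the user's attestation policy rejects the
# otherwise-legitimate VM.
assert launch_measurement(new_fw) != expected

# The same binary always reproduces the same digest, which is all the
# user can check without controlling the firmware themselves.
assert launch_measurement(old_fw) == expected
```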
This talk describes a method of supplying system (UEFI) firmware for VMs as part of the VM disk image. The cloud provider does not need to inspect, or even have access to, the VM disk image. The VM uses the proposed mechanism to provide the firmware binary to the hypervisor, and the hypervisor installs that binary into the guest ROM and regenerates the VM. Our initial approach is based solely on QEMU/KVM/edk2/UKI; eventually, we hope it will be adopted widely across the industry (other cloud providers, hypervisors/VMMs, etc.).
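The flow above can be modeled as a toy simulation. Everything here is hypothetical illustration (the `Hypervisor` and `GuestVM` classes and the `install_firmware` call do not correspond to any real QEMU/KVM interface); it only shows the division of responsibilities: the firmware ships inside the guest's disk image as a UKI add-on, the guest initiates the install, and the provider never reads the disk.

```python
import hashlib

class Hypervisor:
    """Hypothetical hypervisor: holds the guest ROM contents."""

    def __init__(self, provider_firmware: bytes):
        # Default provider-supplied firmware, used until the guest
        # requests its own.
        self.guest_rom = provider_firmware

    def install_firmware(self, blob: bytes) -> str:
        # Install the guest-supplied binary into the guest ROM and
        # "regenerate" the VM so the new firmware is what gets measured
        # at launch. Returns the resulting (simplified) measurement.
        self.guest_rom = blob
        return hashlib.sha384(blob).hexdigest()

class GuestVM:
    """Hypothetical guest: its disk image carries a UKI firmware add-on."""

    def __init__(self, disk_image: dict):
        self.disk_image = disk_image

    def request_firmware_install(self, hv: Hypervisor) -> str:
        # The guest extracts the firmware PE binary from its UKI add-on
        # and hands it to the hypervisor; the provider never needs to
        # look inside the disk image.
        blob = self.disk_image["uki_firmware_addon"]
        return hv.install_firmware(blob)

user_fw = b"user-built edk2 PE binary"
hv = Hypervisor(provider_firmware=b"provider default edk2")
guest = GuestVM({"uki_firmware_addon": user_fw})

measurement = guest.request_firmware_install(hv)

# The guest ROM now holds the user's trusted firmware, and the user can
# check the attestation report against a digest computed from a binary
# they built (or audited) themselves.
assert hv.guest_rom == user_fw
assert measurement == hashlib.sha384(user_fw).hexdigest()
```

The key design point this models is that the upgrade decision sits entirely with the guest: the hypervisor only executes an install request, it never chooses firmware on the guest's behalf.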
Our approach has several advantages over using an IGVM container image with embedded firmware that is passed to the hypervisor when starting the guest.
- First, the firmware image is provided along with the guest VM image (as a UKI add-on), so the guest image and the firmware binary can be packaged as a single unit. There is no need to store the firmware blob (inside an IGVM container) separately on the hypervisor host and pass it to the hypervisor when starting the guest.
- Second, the request to install the firmware image is initiated directly by the guest, so the guest controls when to upgrade the firmware and which firmware image to upgrade to. The hypervisor does not need to make any decision here, nor does it need access to the VM image.
- Lastly, it is possible to upgrade the firmware without deploying a new guest VM image (or a new IGVM image containing the new firmware). An already existing VM spawned from the current VM image can upgrade simply by updating the UKI firmware add-on to the new PE binary and using the mechanism to install it in the guest ROM.
We intend to give a demo of our prototype in action.