
Virtio snapshotting with KVM VMI


Description

The following instructions detail how to capture the system state of a KVM virtual machine and transfer that state to Xen to be fuzzed. The original target of this setup was Virtio (virtio-net, virtio-blk and virtio-console), but the same process could technically be applied to any target code running on top of KVM.

The setup relies on observing memory accesses performed by the KVM VM on memory that's designated for DMA. For each memory access the stack is unwound to determine the context of the access. For each unique stack trace we can choose to automatically take a full-system snapshot. Since this can lead to several hundred snapshots, it is also possible to limit the number of stack frames used to calculate the access context. For example, limiting the stack trace to only the top 5 stack frames reduces the number of snapshots taken to around 70.

Capture full-system snapshot when DMA access is observed on KVM

1. Grab KVM-VMI repository

git clone https://github.com/KVM-VMI/kvm-vmi.git --recursive

2. Install dependencies

sudo apt-get install libpixman-1-dev pkg-config zlib1g-dev libglib2.0-dev dh-autoreconf git build-essential cmake bc fakeroot flex bison libelf-dev libssl-dev ncurses-dev libvirt-daemon-system qemu-kvm bridge-utils dnsmasq libjson-c-dev libvirt-dev libcapstone-dev virt-manager golang-go autoconf-archive libunwind-dev

3. Compile host kernel

cd kvm-vmi/kvm

You can re-use your existing kernel configuration and run make olddefconfig or just run make x86_64_defconfig to go with defaults. Then enable the following option:

CONFIG_KVM_INTROSPECTION=y
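If you would rather not go through menuconfig, one way to set this from the command line is the kernel tree's standard scripts/config helper (a minimal sketch; it assumes the KVM-VMI tree ships the usual scripts/config script, as mainline kernels do):

./scripts/config --enable KVM_INTROSPECTION   # edits .config in place
make olddefconfig                             # re-resolve dependencies after the change
grep KVM_INTROSPECTION .config                # expect CONFIG_KVM_INTROSPECTION=y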

Build

make -j8 deb-pkg

Install the new kernel:

cd ..
sudo dpkg -i linux-image*.deb

4. Compile QEMU

cd kvm-vmi/qemu
./configure --target-list=x86_64-softmmu --prefix=/opt/qemu-vmi
make
sudo make install

Add the new QEMU binary to libvirt's AppArmor allow list:

echo "/opt/qemu-vmi/bin/qemu-system-x86_64 PUx," | sudo tee /etc/apparmor.d/local/usr.sbin.libvirtd

5. Compile libkvmi

git clone https://github.com/libvmi/libkvmi
cd libkvmi
./bootstrap
./configure
make
sudo make install
cd ..

6. Compile LibVMI

git clone https://github.com/libvmi/libvmi
cd libvmi
autoreconf -vif
./configure --disable-xen --disable-bareflank --disable-file
make
sudo make install
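Both libkvmi and LibVMI install shared libraries under /usr/local/lib; if the tools built against them later fail to find the libraries at runtime, refreshing the linker cache usually fixes it:

sudo ldconfig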

7. Compile dmamonitor

cd ~
git clone https://github.com/intel/kernel-fuzzer-for-xen-project
cd kernel-fuzzer-for-xen-project
autoreconf -vif
./configure --disable-xen
make

8. Reboot into KVM-VMI kernel
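After rebooting it is worth confirming that the introspection-enabled kernel is actually running (the exact version string depends on your build; the config check assumes the deb-pkg install placed the config under /boot, as Ubuntu normally does):

uname -r                                          # should show the kernel built from kvm-vmi/kvm
grep KVM_INTROSPECTION /boot/config-$(uname -r)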

9. Install Ubuntu 20.04 VM using virt-manager with default settings

10. Compile guest kernel

cd ~
git clone --depth=2 https://github.com/torvalds/linux
cd linux

Apply kfx_dma_log patch:

git am ~/kernel-fuzzer-for-xen-project/patches/0001-kfx_dma_log-virtio-snapshotting.patch

Use your existing config and run make olddefconfig, or just use the defaults via make x86_64_defconfig.

Then enable the following options:

CONFIG_DMA_API_DEBUG=y
CONFIG_FRAME_POINTER=y
CONFIG_UNWINDER_FRAME_POINTER=y
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
CONFIG_KASAN_OUTLINE=y
CONFIG_UBSAN=y
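After re-running make olddefconfig it is worth double-checking that the options actually stuck, since KASAN and UBSAN depend on other config settings:

make olddefconfig
grep -E 'DMA_API_DEBUG|FRAME_POINTER|KASAN|UBSAN' .config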

Compile

make -j8 deb-pkg

11. Create kernel's json profile

cd ~
git clone --branch linux_non_canonical_addr https://github.com/tklengyel/dwarf2json
cd dwarf2json
go build
cd ..
~/dwarf2json/dwarf2json linux --elf ~/linux/vmlinux --system-map ~/linux/System.map > ~/kernel.json

12. Transfer the linux-image deb file to the VM (via scp for example) and install it.
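For example, assuming the VM is reachable over SSH on the default libvirt network (the user name and the 192.168.122.100 address below are placeholders; substitute your guest's actual credentials and IP):

scp ~/linux-image-*.deb user@192.168.122.100:
# then, inside the VM:
sudo dpkg -i linux-image-*.deb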

13. Verify that the kernel boots and works with default settings

14. Edit /etc/default/grub in the VM

GRUB_DEFAULT=0
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=30
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="nopti nokaslr nmi_watchdog=0 quiet dma_debug=kfx swiotlb=force"
GRUB_CMDLINE_LINUX=""
GRUB_TERMINAL="console serial"
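Then, still inside the VM, regenerate the GRUB configuration so the new kernel command line takes effect on the next boot:

sudo update-grub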

15. Shut down the VM and copy its XML to a file

You can copy-paste from virt-manager or use virsh dumpxml <domainname>.
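For example (ubuntu20.04 is the domain name used in the sample XML below; substitute whatever name virt-manager assigned):

virsh dumpxml ubuntu20.04 > ubuntu20.04.xml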

Edit the XML and make sure to assign only a single vCPU to the VM and as little memory as possible (1GB for example).

Also make the following additional changes in the XML:

  • <domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  • <memballoon model="none"/>
  • <emulator>/opt/qemu-vmi/bin/qemu-system-x86_64</emulator>
  • <qemu:commandline> block (see below)

For example:

<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <name>ubuntu20.04</name>
  <uuid>556d0d8c-ae27-4b20-9eed-af03cc092864</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://ubuntu.com/ubuntu/20.04"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit="KiB">1000000</memory>
  <currentMemory unit="KiB">1000000</currentMemory>
  <vcpu placement="static">1</vcpu>
  <os>
    <type arch="x86_64" machine="pc-q35-4.2">hvm</type>
    <boot dev="hd"/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state="off"/>
  </features>
  <cpu mode="host-model" check="partial"/>
  <clock offset="utc">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled="no"/>
    <suspend-to-disk enabled="no"/>
  </pm>
  <devices>
    <emulator>/opt/qemu-vmi/bin/qemu-system-x86_64</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/vms/ubuntu-2004.qcow2"/>
      <target dev="vda" bus="virtio"/>
      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
    </disk>
    <controller type="usb" index="0" model="ich9-ehci1">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x7"/>
    </controller>
    <controller type="usb" index="0" model="ich9-uhci1">
      <master startport="0"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x0" multifunction="on"/>
    </controller>
    <controller type="usb" index="0" model="ich9-uhci2">
      <master startport="2"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x1"/>
    </controller>
    <controller type="usb" index="0" model="ich9-uhci3">
      <master startport="4"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x2"/>
    </controller>
    <controller type="sata" index="0">
      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
    </controller>
    <controller type="pci" index="0" model="pcie-root"/>
    <controller type="pci" index="1" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="1" port="0x10"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
    </controller>
    <controller type="pci" index="2" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="2" port="0x11"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
    </controller>
    <controller type="pci" index="3" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="3" port="0x12"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
    </controller>
    <controller type="pci" index="4" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="4" port="0x13"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
    </controller>
    <controller type="pci" index="5" model="pcie-root-port">
      <model name="pcie-root-port"/>
      <target chassis="5" port="0x14"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
    </controller>
    <controller type="virtio-serial" index="0">
      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
    </controller>
    <interface type="network">
      <mac address="52:54:00:64:83:93"/>
      <source network="default"/>
      <model type="virtio"/>
      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
    </interface>
    <console type="pty">
      <target type="virtio" port="0"/>
    </console>
    <input type="mouse" bus="ps2"/>
    <input type="keyboard" bus="ps2"/>
    <video>
      <model type="none"/>
    </video>
    <memballoon model="none"/>
  </devices>
  <qemu:commandline>
    <qemu:arg value="-chardev"/>
    <qemu:arg value="socket,path=/tmp/introspector,id=chardev0,reconnect=10"/>
    <qemu:arg value="-object"/>
    <qemu:arg value="introspection,id=kvmi,chardev=chardev0"/>
  </qemu:commandline>
</domain>

16. Save XML and start it

virsh create <path to xml>

17. Get memory map of the VM

~/kernel-fuzzer-for-xen-project/scripts/kvm2kfx.sh ubuntu20.04

This will generate a couple of files, one of which will be ubuntu20.04-memmap.

18. Reboot the VM

19. Start the DMA monitor while the VM is at the GRUB screen:

./dmamonitor --domain ubuntu20.04 --json /path/to/kernel.json --memmap ubuntu20.04-memmap --wait-for-cr3 --kvmi /tmp/introspector

The dmamonitor tool monitors the guest's DMA API usage and takes a full-system snapshot for each unique DMA access observed, based on the stack trace at the EPT violation. This can potentially create hundreds of snapshots.

To limit the number of snapshots, use --stack-frames <number> to fingerprint the DMA access context using only the top frames of the stack. Alternatively, boot the system once without --memmap <memmap> but with --stacktrace --stack-save-unique stacktrace. This creates a file named stacktrace listing every observed DMA access context and its associated key. You can then capture a snapshot for a specific key by booting the VM again with --memmap <memmap> --stack-save-key <key>.
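For example, a sketch of that two-pass workflow, re-using the options from the invocation above (the key 40 is a placeholder, matching the snapshot-40 naming used in the Xen section below):

# Pass 1: no --memmap, just record the unique DMA access contexts and their keys
./dmamonitor --domain ubuntu20.04 --json /path/to/kernel.json --wait-for-cr3 --kvmi /tmp/introspector --stacktrace --stack-save-unique stacktrace

# Pass 2: reboot the VM and snapshot only the context with the chosen key
./dmamonitor --domain ubuntu20.04 --json /path/to/kernel.json --memmap ubuntu20.04-memmap --wait-for-cr3 --kvmi /tmp/introspector --stack-save-key 40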

Fuzzing setup on Xen

1. Install Xen & KF/x

Follow the instructions from https://github.com/intel/kernel-fuzzer-for-xen-project

2. Transfer snapshot files from KVM host
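For example (kvm-host and the path are placeholders for wherever dmamonitor wrote the snapshot archives on the KVM machine):

scp kvm-host:/path/to/snapshot-40.tar.gz .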

3. Create VM transplant shell to load KVM state into

Make sure the memory is at least 1.5x-2x the memory size that was set on KVM (the example below uses 1548 MB for the 1 GB KVM VM).

cat >transplant.cfg <<EOL
name="transplant"
builder="hvm"
vcpus=1
memory=1548
maxmem=1548
hap=1
vga="none"
vnc=0
nomigrate=1
vmtrace_buf_kb=65536
EOL
sudo su
xl create -p -e transplant.cfg
xl rename transplant snapshot-40

4. Load KVM snapshot into VM

mkdir snapshot-40
cd snapshot-40
tar xvf ../snapshot-40.tar.gz
~/kernel-fuzzer-for-xen-project/xen-transplant $(xl domid snapshot-40) regs-40.csv memmap-40 vmcore-40

5. Find fuzzing end-point

~/kernel-fuzzer-for-xen-project/scripts/stack_pop_check.sh $(xl domid snapshot-40) 2500000 stacktrace-40

6. Insert breakpoint at RIP found

echo -n -e '\xcc' > cc
rwmem --domid <transplant_domid> --write <RIP_FOUND_ABOVE> --file cc --limit 1

7. Get seed

mkdir input
rwmem --domid <transplant_domid> --read $(cat memaccess-*) --limit 100 --file input/seed

8. Start fuzzer

mkdir output
~/kernel-fuzzer-for-xen-project/AFLplusplus/afl-fuzz -i input -o output -- kfx --domid $(xl domid snapshot-40) --json kernel.json --address $(cat memaccess-40) --input @@ --input-limit 100 --ptcov --harness breakpoint --start-byte 0x90