Commit 0f8e26b

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
 "Loongarch:

   - Clear LLBCTL if secondary mmu mapping changes

   - Add hypercall service support for usermode VMM

  x86:

   - Add a comment to kvm_mmu_do_page_fault() to explain why KVM performs
     a direct call to kvm_tdp_page_fault() when RETPOLINE is enabled

   - Ensure that all SEV code is compiled out when disabled in Kconfig,
     even if building with less brilliant compilers

   - Remove a redundant TLB flush on AMD processors when guest CR4.PGE
     changes

   - Use str_enabled_disabled() to replace open coded strings

   - Drop kvm_x86_ops.hwapic_irr_update() as KVM updates hardware's APICv
     cache prior to every VM-Enter

   - Overhaul KVM's CPUID feature infrastructure to track all vCPU
     capabilities instead of just those where KVM needs to manage state
     and/or explicitly enable the feature in hardware. Along the way,
     refactor the code to make it easier to add features, and to make it
     more self-documenting how KVM is handling each feature

   - Rework KVM's handling of VM-Exits during event vectoring; this plugs
     holes where KVM unintentionally puts the vCPU into infinite loops in
     some scenarios (e.g. if emulation is triggered by the exit), and
     brings parity between VMX and SVM

   - Add pending request and interrupt injection information to the
     kvm_exit and kvm_entry tracepoints respectively

   - Fix a relatively benign flaw where KVM would end up redoing RDPKRU
     when loading guest/host PKRU, due to a refactoring of the kernel
     helpers that didn't account for KVM's pre-checking of the need to do
     WRPKRU

   - Make the completion of hypercalls go through the complete_hypercall
     function pointer argument, no matter if the hypercall exits to
     userspace or not. Previously, the code assumed that
     KVM_HC_MAP_GPA_RANGE specifically went to userspace, and all the
     others did not; the new code need not special case
     KVM_HC_MAP_GPA_RANGE and in fact does not care at all whether there
     was an exit to userspace or not

   - As part of enabling TDX virtual machines, support separation of
     private/shared EPT into separate roots. When TDX is enabled,
     operations on private pages will need to go through the privileged
     TDX Module via SEAMCALLs; as a result, they are limited and
     relatively slow compared to reading a PTE. The patches included in
     6.14 allow KVM to keep a mirror of the private EPT in host memory,
     and define entries in kvm_x86_ops to operate on external page tables
     such as the TDX private EPT

   - The recently introduced conversion of the NX-page reclamation
     kthread to vhost_task moved the task under the main process. The
     task was created as soon as KVM_CREATE_VM was invoked and this, of
     course, broke userspace that didn't expect to see any child task of
     the VM process until it started creating its own userspace threads.
     In particular crosvm refuses to fork() if procfs shows any child
     task, so unbreak it by creating the task lazily. This is arguably a
     userspace bug, as there can be other kinds of legitimate worker
     tasks and they wouldn't impede fork(); but it's not like userspace
     has a way to distinguish kernel worker tasks right now. Should they
     show as "Kthread: 1" in proc/.../status?

  x86 - Intel:

   - Fix a bug where KVM updates hardware's APICv cache of the highest
     ISR bit while L2 is active, which ultimately results in a
     hardware-accelerated L1 EOI effectively being lost

   - Honor event priority when emulating Posted Interrupt delivery during
     nested VM-Enter by queueing KVM_REQ_EVENT instead of immediately
     handling the interrupt

   - Rework KVM's processing of the Page-Modification Logging buffer to
     reap entries in the same order they were created, i.e. to mark gfns
     dirty in the same order that hardware marked the page/PTE dirty

   - Misc cleanups

  Generic:

   - Cleanup and harden kvm_set_memory_region(); add proper lockdep
     assertions when setting memory regions and add a dedicated API for
     setting KVM-internal memory regions. The API can then explicitly
     disallow all flags for KVM-internal memory regions

   - Explicitly verify the target vCPU is online in kvm_get_vcpu() to fix
     a bug where KVM would return a pointer to a vCPU prior to it being
     fully online, and give kvm_for_each_vcpu() similar treatment to fix
     a similar flaw

   - Wait for a vCPU to come online prior to executing a vCPU ioctl, to
     fix a bug where userspace could coerce KVM into handling the ioctl
     on a vCPU that isn't yet onlined

   - Gracefully handle xarray insertion failures; even though such
     failures are impossible in practice after xa_reserve(), reserving an
     entry is always followed by xa_store() which does not know (or
     differentiate) whether there was an xa_reserve() before or not

  RISC-V:

   - Zabha, Svvptc, and Ziccrse extension support for guests. None of
     them require anything in KVM except for detecting them and marking
     them as supported; Zabha adds byte and halfword atomic operations,
     while the others are markers for specific operation of the TLB and
     of LL/SC instructions respectively

   - Virtualize SBI system suspend extension for Guest/VM

   - Support firmware counters which can be used by the guests to collect
     statistics about traps that occur in the host

  Selftests:

   - Rework vcpu_get_reg() to return a value instead of using an
     out-param, and update all affected arch code accordingly

   - Convert the max_guest_memory_test into a more generic
     mmu_stress_test. The basic gist of the "conversion" is to have the
     test do mprotect() on guest memory while vCPUs are accessing said
     memory, e.g. to verify KVM and mmu_notifiers are working as intended

   - Play nice with treewide builds of unsupported architectures, e.g.
     arm (32-bit), as KVM selftests' Makefile doesn't do anything to
     ensure the target architecture is actually one KVM selftests
     supports

   - Use the kernel's $(ARCH) definition instead of the target triple for
     arch specific directories, e.g. arm64 instead of aarch64, mainly so
     as not to be different from the rest of the kernel

   - Ensure that format strings for logging statements are checked by the
     compiler even when the logging statement itself is disabled

   - Attempt to whack the last LLC references/misses mole in the Intel
     PMU counters test by adding a data load and doing CLFLUSH{OPT} on
     the data instead of the code being executed. It seems that modern
     Intel CPUs have learned new code prefetching tricks that bypass the
     PMU counters

   - Fix a flaw in the Intel PMU counters test where it asserts that
     events are counting correctly without actually knowing what the
     events count given the underlying hardware; this can happen if
     Intel reuses a formerly microarchitecture-specific event encoding
     as an architectural event, as was the case for Top-Down Slots"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (151 commits)
  kvm: defer huge page recovery vhost task to later
  KVM: x86/mmu: Return RET_PF* instead of 1 in kvm_mmu_page_fault()
  KVM: Disallow all flags for KVM-internal memslots
  KVM: x86: Drop double-underscores from __kvm_set_memory_region()
  KVM: Add a dedicated API for setting KVM-internal memslots
  KVM: Assert slots_lock is held when setting memory regions
  KVM: Open code kvm_set_memory_region() into its sole caller (ioctl() API)
  LoongArch: KVM: Add hypercall service support for usermode VMM
  LoongArch: KVM: Clear LLBCTL if secondary mmu mapping is changed
  KVM: SVM: Use str_enabled_disabled() helper in svm_hardware_setup()
  KVM: VMX: read the PML log in the same order as it was written
  KVM: VMX: refactor PML terminology
  KVM: VMX: Fix comment of handle_vmx_instruction()
  KVM: VMX: Reinstate __exit attribute for vmx_exit()
  KVM: SVM: Use str_enabled_disabled() helper in sev_hardware_setup()
  KVM: x86: Avoid double RDPKRU when loading host/guest PKRU
  KVM: x86: Use LVT_TIMER instead of an open coded literal
  RISC-V: KVM: Add new exit statstics for redirected traps
  RISC-V: KVM: Update firmware counters for various events
  RISC-V: KVM: Redirect instruction access fault trap to guest
  ...
2 parents 382e391 + 931656b commit 0f8e26b

222 files changed (+2893, -1530 lines)


Documentation/virt/kvm/api.rst

Lines changed: 7 additions & 3 deletions
@@ -1825,15 +1825,18 @@ emulate them efficiently. The fields in each entry are defined as follows:
   the values returned by the cpuid instruction for
   this function/index combination
 
-The TSC deadline timer feature (CPUID leaf 1, ecx[24]) is always returned
-as false, since the feature depends on KVM_CREATE_IRQCHIP for local APIC
-support. Instead it is reported via::
+x2APIC (CPUID leaf 1, ecx[21]) and TSC deadline timer (CPUID leaf 1, ecx[24])
+may be returned as true, but they depend on KVM_CREATE_IRQCHIP for in-kernel
+emulation of the local APIC. TSC deadline timer support is also reported via::
 
   ioctl(KVM_CHECK_EXTENSION, KVM_CAP_TSC_DEADLINE_TIMER)
 
 if that returns true and you use KVM_CREATE_IRQCHIP, or if you emulate the
 feature in userspace, then you can enable the feature for KVM_SET_CPUID2.
 
+Enabling x2APIC in KVM_SET_CPUID2 requires KVM_CREATE_IRQCHIP as KVM doesn't
+support forwarding x2APIC MSR accesses to userspace, i.e. KVM does not support
+emulating x2APIC in userspace.
 
 4.47 KVM_PPC_GET_PVINFO
 -----------------------
@@ -7673,6 +7676,7 @@ branch to guests' 0x200 interrupt vector.
 :Architectures: x86
 :Parameters: args[0] defines which exits are disabled
 :Returns: 0 on success, -EINVAL when args[0] contains invalid exits
+          or if any vCPUs have already been created
 
 Valid bits in args[0] are::
 
MAINTAINERS

Lines changed: 6 additions & 6 deletions
@@ -12686,8 +12686,8 @@ F: arch/arm64/include/asm/kvm*
 F: arch/arm64/include/uapi/asm/kvm*
 F: arch/arm64/kvm/
 F: include/kvm/arm_*
-F: tools/testing/selftests/kvm/*/aarch64/
-F: tools/testing/selftests/kvm/aarch64/
+F: tools/testing/selftests/kvm/*/arm64/
+F: tools/testing/selftests/kvm/arm64/
 
 KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
 M: Tianrui Zhao <[email protected]>
@@ -12758,8 +12758,8 @@ F: arch/s390/kvm/
 F: arch/s390/mm/gmap.c
 F: drivers/s390/char/uvdevice.c
 F: tools/testing/selftests/drivers/s390x/uvdevice/
-F: tools/testing/selftests/kvm/*/s390x/
-F: tools/testing/selftests/kvm/s390x/
+F: tools/testing/selftests/kvm/*/s390/
+F: tools/testing/selftests/kvm/s390/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
 M: Sean Christopherson <[email protected]>
@@ -12776,8 +12776,8 @@ F: arch/x86/include/uapi/asm/svm.h
 F: arch/x86/include/uapi/asm/vmx.h
 F: arch/x86/kvm/
 F: arch/x86/kvm/*/
-F: tools/testing/selftests/kvm/*/x86_64/
-F: tools/testing/selftests/kvm/x86_64/
+F: tools/testing/selftests/kvm/*/x86/
+F: tools/testing/selftests/kvm/x86/
 
 KERNFS
 M: Greg Kroah-Hartman <[email protected]>

arch/arm64/include/uapi/asm/kvm.h

Lines changed: 0 additions & 3 deletions
@@ -43,9 +43,6 @@
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
 #define KVM_DIRTY_LOG_PAGE_OFFSET 64
 
-#define KVM_REG_SIZE(id) \
-    (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
 struct kvm_regs {
     struct user_pt_regs regs; /* sp = sp_el0 */

arch/loongarch/include/asm/kvm_host.h

Lines changed: 1 addition & 0 deletions
@@ -162,6 +162,7 @@ enum emulation_result {
 #define LOONGARCH_PV_FEAT_UPDATED  BIT_ULL(63)
 #define LOONGARCH_PV_FEAT_MASK     (BIT(KVM_FEATURE_IPI) | \
                                     BIT(KVM_FEATURE_STEAL_TIME) | \
+                                    BIT(KVM_FEATURE_USER_HCALL) | \
                                     BIT(KVM_FEATURE_VIRT_EXTIOI))
 
 struct kvm_vcpu_arch {

arch/loongarch/include/asm/kvm_para.h

Lines changed: 3 additions & 0 deletions
@@ -13,13 +13,16 @@
 
 #define KVM_HCALL_CODE_SERVICE 0
 #define KVM_HCALL_CODE_SWDBG 1
+#define KVM_HCALL_CODE_USER_SERVICE 2
 
 #define KVM_HCALL_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SERVICE)
 #define KVM_HCALL_FUNC_IPI 1
 #define KVM_HCALL_FUNC_NOTIFY 2
 
 #define KVM_HCALL_SWDBG HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SWDBG)
 
+#define KVM_HCALL_USER_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_USER_SERVICE)
+
 /*
  * LoongArch hypercall return code
  */

arch/loongarch/include/asm/kvm_vcpu.h

Lines changed: 1 addition & 0 deletions
@@ -43,6 +43,7 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
 int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
 int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_emu_idle(struct kvm_vcpu *vcpu);
 int kvm_pending_timer(struct kvm_vcpu *vcpu);
 int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);

arch/loongarch/include/uapi/asm/kvm_para.h

Lines changed: 1 addition & 0 deletions
@@ -17,5 +17,6 @@
 #define KVM_FEATURE_STEAL_TIME 2
 /* BIT 24 - 31 are features configurable by user space vmm */
 #define KVM_FEATURE_VIRT_EXTIOI 24
+#define KVM_FEATURE_USER_HCALL 25
 
 #endif /* _UAPI_ASM_KVM_PARA_H */

arch/loongarch/kvm/exit.c

Lines changed: 30 additions & 0 deletions
@@ -709,6 +709,14 @@ static int kvm_handle_write_fault(struct kvm_vcpu *vcpu)
     return kvm_handle_rdwr_fault(vcpu, true);
 }
 
+int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+    update_pc(&vcpu->arch);
+    kvm_write_reg(vcpu, LOONGARCH_GPR_A0, run->hypercall.ret);
+
+    return 0;
+}
+
 /**
  * kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host
  * @vcpu: Virtual CPU context.
@@ -873,6 +881,28 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu)
         vcpu->stat.hypercall_exits++;
         kvm_handle_service(vcpu);
         break;
+    case KVM_HCALL_USER_SERVICE:
+        if (!kvm_guest_has_pv_feature(vcpu, KVM_FEATURE_USER_HCALL)) {
+            kvm_write_reg(vcpu, LOONGARCH_GPR_A0, KVM_HCALL_INVALID_CODE);
+            break;
+        }
+
+        vcpu->stat.hypercall_exits++;
+        vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
+        vcpu->run->hypercall.nr = KVM_HCALL_USER_SERVICE;
+        vcpu->run->hypercall.args[0] = kvm_read_reg(vcpu, LOONGARCH_GPR_A0);
+        vcpu->run->hypercall.args[1] = kvm_read_reg(vcpu, LOONGARCH_GPR_A1);
+        vcpu->run->hypercall.args[2] = kvm_read_reg(vcpu, LOONGARCH_GPR_A2);
+        vcpu->run->hypercall.args[3] = kvm_read_reg(vcpu, LOONGARCH_GPR_A3);
+        vcpu->run->hypercall.args[4] = kvm_read_reg(vcpu, LOONGARCH_GPR_A4);
+        vcpu->run->hypercall.args[5] = kvm_read_reg(vcpu, LOONGARCH_GPR_A5);
+        vcpu->run->hypercall.flags = 0;
+        /*
+         * Set invalid return value by default, let user-mode VMM modify it.
+         */
+        vcpu->run->hypercall.ret = KVM_HCALL_INVALID_CODE;
+        ret = RESUME_HOST;
+        break;
     case KVM_HCALL_SWDBG:
         /* KVM_HCALL_SWDBG only in effective when SW_BP is enabled */
         if (vcpu->guest_debug & KVM_GUESTDBG_SW_BP_MASK) {

arch/loongarch/kvm/main.c

Lines changed: 18 additions & 0 deletions
@@ -245,6 +245,24 @@ void kvm_check_vpid(struct kvm_vcpu *vcpu)
         trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
         vcpu->cpu = cpu;
         kvm_clear_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);
+
+        /*
+         * LLBCTL is a separated guest CSR register from host, a general
+         * exception ERET instruction clears the host LLBCTL register in
+         * host mode, and clears the guest LLBCTL register in guest mode.
+         * ERET in tlb refill exception does not clear LLBCTL register.
+         *
+         * When secondary mmu mapping is changed, guest OS does not know
+         * even if the content is changed after mapping is changed.
+         *
+         * Here clear WCLLB of the guest LLBCTL register when mapping is
+         * changed. Otherwise, if mmu mapping is changed while guest is
+         * executing LL/SC pair, LL loads with the old address and set
+         * the LLBCTL flag, SC checks the LLBCTL flag and will store the
+         * new address successfully since LLBCTL_WCLLB is on, even if
+         * memory with new address is changed on other VCPUs.
+         */
+        set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
     }
 
     /* Restore GSTAT(0x50).vpid */

arch/loongarch/kvm/vcpu.c

Lines changed: 6 additions & 1 deletion
@@ -1732,9 +1732,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
         vcpu->mmio_needed = 0;
     }
 
-    if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
+    switch (run->exit_reason) {
+    case KVM_EXIT_HYPERCALL:
+        kvm_complete_user_service(vcpu, run);
+        break;
+    case KVM_EXIT_LOONGARCH_IOCSR:
         if (!run->iocsr_io.is_write)
             kvm_complete_iocsr_read(vcpu, run);
+        break;
     }
 
     if (!vcpu->wants_to_run)

arch/riscv/include/asm/kvm_host.h

Lines changed: 5 additions & 0 deletions
@@ -87,6 +87,11 @@ struct kvm_vcpu_stat {
     u64 csr_exit_kernel;
     u64 signal_exits;
     u64 exits;
+    u64 instr_illegal_exits;
+    u64 load_misaligned_exits;
+    u64 store_misaligned_exits;
+    u64 load_access_exits;
+    u64 store_access_exits;
 };
 
 struct kvm_arch_memory_slot {

arch/riscv/include/asm/kvm_vcpu_sbi.h

Lines changed: 1 addition & 0 deletions
@@ -85,6 +85,7 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn;
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_susp;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor;

arch/riscv/include/uapi/asm/kvm.h

Lines changed: 4 additions & 3 deletions
@@ -179,6 +179,9 @@ enum KVM_RISCV_ISA_EXT_ID {
     KVM_RISCV_ISA_EXT_SSNPM,
     KVM_RISCV_ISA_EXT_SVADE,
     KVM_RISCV_ISA_EXT_SVADU,
+    KVM_RISCV_ISA_EXT_SVVPTC,
+    KVM_RISCV_ISA_EXT_ZABHA,
+    KVM_RISCV_ISA_EXT_ZICCRSE,
     KVM_RISCV_ISA_EXT_MAX,
 };
 
@@ -198,6 +201,7 @@ enum KVM_RISCV_SBI_EXT_ID {
     KVM_RISCV_SBI_EXT_VENDOR,
     KVM_RISCV_SBI_EXT_DBCN,
     KVM_RISCV_SBI_EXT_STA,
+    KVM_RISCV_SBI_EXT_SUSP,
     KVM_RISCV_SBI_EXT_MAX,
 };
 
@@ -211,9 +215,6 @@ struct kvm_riscv_sbi_sta {
 #define KVM_RISCV_TIMER_STATE_OFF 0
 #define KVM_RISCV_TIMER_STATE_ON 1
 
-#define KVM_REG_SIZE(id) \
-    (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
 /* If you need to interpret the index values, here is the key: */
 #define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000
 #define KVM_REG_RISCV_TYPE_SHIFT 24

arch/riscv/kvm/Makefile

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ kvm-y += vcpu_sbi_hsm.o
 kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_sbi_pmu.o
 kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_sta.o
+kvm-y += vcpu_sbi_system.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
 kvm-y += vcpu_switch.o
 kvm-y += vcpu_timer.o

arch/riscv/kvm/vcpu.c

Lines changed: 6 additions & 1 deletion
@@ -34,7 +34,12 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
     STATS_DESC_COUNTER(VCPU, csr_exit_user),
     STATS_DESC_COUNTER(VCPU, csr_exit_kernel),
     STATS_DESC_COUNTER(VCPU, signal_exits),
-    STATS_DESC_COUNTER(VCPU, exits)
+    STATS_DESC_COUNTER(VCPU, exits),
+    STATS_DESC_COUNTER(VCPU, instr_illegal_exits),
+    STATS_DESC_COUNTER(VCPU, load_misaligned_exits),
+    STATS_DESC_COUNTER(VCPU, store_misaligned_exits),
+    STATS_DESC_COUNTER(VCPU, load_access_exits),
+    STATS_DESC_COUNTER(VCPU, store_access_exits),
 };
 
 const struct kvm_stats_header kvm_vcpu_stats_header = {

arch/riscv/kvm/vcpu_exit.c

Lines changed: 33 additions & 4 deletions
@@ -165,6 +165,17 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
         vcpu->arch.guest_context.sstatus |= SR_SPP;
 }
 
+static inline int vcpu_redirect(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
+{
+    int ret = -EFAULT;
+
+    if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
+        kvm_riscv_vcpu_trap_redirect(vcpu, trap);
+        ret = 1;
+    }
+    return ret;
+}
+
 /*
  * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
  * proper exit to userspace.
@@ -183,14 +194,32 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
     run->exit_reason = KVM_EXIT_UNKNOWN;
     switch (trap->scause) {
     case EXC_INST_ILLEGAL:
+        kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ILLEGAL_INSN);
+        vcpu->stat.instr_illegal_exits++;
+        ret = vcpu_redirect(vcpu, trap);
+        break;
     case EXC_LOAD_MISALIGNED:
+        kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_LOAD);
+        vcpu->stat.load_misaligned_exits++;
+        ret = vcpu_redirect(vcpu, trap);
+        break;
     case EXC_STORE_MISALIGNED:
+        kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_STORE);
+        vcpu->stat.store_misaligned_exits++;
+        ret = vcpu_redirect(vcpu, trap);
+        break;
     case EXC_LOAD_ACCESS:
+        kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_LOAD);
+        vcpu->stat.load_access_exits++;
+        ret = vcpu_redirect(vcpu, trap);
+        break;
     case EXC_STORE_ACCESS:
-        if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
-            kvm_riscv_vcpu_trap_redirect(vcpu, trap);
-            ret = 1;
-        }
+        kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_STORE);
+        vcpu->stat.store_access_exits++;
+        ret = vcpu_redirect(vcpu, trap);
+        break;
+    case EXC_INST_ACCESS:
+        ret = vcpu_redirect(vcpu, trap);
         break;
     case EXC_VIRTUAL_INST_FAULT:
         if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)

arch/riscv/kvm/vcpu_onereg.c

Lines changed: 6 additions & 0 deletions
@@ -46,6 +46,8 @@ static const unsigned long kvm_isa_ext_arr[] = {
     KVM_ISA_EXT_ARR(SVINVAL),
     KVM_ISA_EXT_ARR(SVNAPOT),
     KVM_ISA_EXT_ARR(SVPBMT),
+    KVM_ISA_EXT_ARR(SVVPTC),
+    KVM_ISA_EXT_ARR(ZABHA),
     KVM_ISA_EXT_ARR(ZACAS),
     KVM_ISA_EXT_ARR(ZAWRS),
     KVM_ISA_EXT_ARR(ZBA),
@@ -65,6 +67,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
     KVM_ISA_EXT_ARR(ZFHMIN),
     KVM_ISA_EXT_ARR(ZICBOM),
     KVM_ISA_EXT_ARR(ZICBOZ),
+    KVM_ISA_EXT_ARR(ZICCRSE),
     KVM_ISA_EXT_ARR(ZICNTR),
     KVM_ISA_EXT_ARR(ZICOND),
     KVM_ISA_EXT_ARR(ZICSR),
@@ -145,6 +148,8 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
     case KVM_RISCV_ISA_EXT_SSTC:
     case KVM_RISCV_ISA_EXT_SVINVAL:
     case KVM_RISCV_ISA_EXT_SVNAPOT:
+    case KVM_RISCV_ISA_EXT_SVVPTC:
+    case KVM_RISCV_ISA_EXT_ZABHA:
     case KVM_RISCV_ISA_EXT_ZACAS:
     case KVM_RISCV_ISA_EXT_ZAWRS:
     case KVM_RISCV_ISA_EXT_ZBA:
@@ -162,6 +167,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
     case KVM_RISCV_ISA_EXT_ZFA:
     case KVM_RISCV_ISA_EXT_ZFH:
     case KVM_RISCV_ISA_EXT_ZFHMIN:
+    case KVM_RISCV_ISA_EXT_ZICCRSE:
     case KVM_RISCV_ISA_EXT_ZICNTR:
     case KVM_RISCV_ISA_EXT_ZICOND:
     case KVM_RISCV_ISA_EXT_ZICSR:

arch/riscv/kvm/vcpu_sbi.c

Lines changed: 4 additions & 0 deletions
@@ -70,6 +70,10 @@ static const struct kvm_riscv_sbi_extension_entry sbi_ext[] = {
         .ext_idx = KVM_RISCV_SBI_EXT_DBCN,
         .ext_ptr = &vcpu_sbi_ext_dbcn,
     },
+    {
+        .ext_idx = KVM_RISCV_SBI_EXT_SUSP,
+        .ext_ptr = &vcpu_sbi_ext_susp,
+    },
     {
         .ext_idx = KVM_RISCV_SBI_EXT_STA,
         .ext_ptr = &vcpu_sbi_ext_sta,
