
Commit 52918ed

tlendacky authored and bonzini committed
KVM: SVM: Override default MMIO mask if memory encryption is enabled
The KVM MMIO support uses bit 51 as the reserved bit to cause nested page faults when a guest performs MMIO. The AMD memory encryption support uses a CPUID function to define the encryption bit position. Given this, it is possible that these bits can conflict.

Use svm_hardware_setup() to override the MMIO mask if memory encryption support is enabled. Various checks are performed to ensure that the mask is properly defined and rsvd_bits() is used to generate the new mask (as was done prior to the change that necessitated this patch).

Fixes: 28a1f3a ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
Suggested-by: Sean Christopherson <[email protected]>
Reviewed-by: Sean Christopherson <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
1 parent d8010a7 commit 52918ed
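
Aside (not part of the commit): to make the mask arithmetic concrete, below is a minimal standalone C sketch that mirrors the computation. rsvd_bits() is re-implemented locally for illustration, PT_PRESENT_MASK is defined to its x86 value (bit 0), and the CPU values (a C-bit at position 47 colliding with a 47-bit physical address width) are hypothetical.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Local stand-in for KVM's rsvd_bits() helper: a mask with bits
 * s..e (inclusive) set. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

#define PT_PRESENT_MASK (1ULL << 0)	/* x86 PTE present bit */

int main(void)
{
	/* Hypothetical SEV machine: encryption bit (C-bit) at position
	 * 47 and a 47-bit physical address width, so the two collide. */
	unsigned int enc_bit = 47;
	unsigned int mask_bit = 47;	/* boot_cpu_data.x86_phys_bits */
	uint64_t mask;

	if (enc_bit == mask_bit)
		mask_bit++;		/* step over the encryption bit */

	/* Bits 48..51 lie above the addressing limit but below bit 52,
	 * so they are guaranteed-reserved in an SPTE. */
	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;

	/* Prints 0x000f000000000001: bits 48-51 plus the present bit */
	printf("MMIO SPTE mask: 0x%016" PRIx64 "\n", mask);
	return 0;
}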

File tree

1 file changed: +43 -0 lines

arch/x86/kvm/svm.c

Lines changed: 43 additions & 0 deletions
@@ -1307,6 +1307,47 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 	}
 }
 
+/*
+ * The default MMIO mask is a single bit (excluding the present bit),
+ * which could conflict with the memory encryption bit. Check for
+ * memory encryption support and override the default MMIO mask if
+ * memory encryption is enabled.
+ */
+static __init void svm_adjust_mmio_mask(void)
+{
+	unsigned int enc_bit, mask_bit;
+	u64 msr, mask;
+
+	/* If there is no memory encryption support, use existing mask */
+	if (cpuid_eax(0x80000000) < 0x8000001f)
+		return;
+
+	/* If memory encryption is not enabled, use existing mask */
+	rdmsrl(MSR_K8_SYSCFG, msr);
+	if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+		return;
+
+	enc_bit = cpuid_ebx(0x8000001f) & 0x3f;
+	mask_bit = boot_cpu_data.x86_phys_bits;
+
+	/* Increment the mask bit if it is the same as the encryption bit */
+	if (enc_bit == mask_bit)
+		mask_bit++;
+
+	/*
+	 * If the mask bit location is below 52, then some bits above the
+	 * physical addressing limit will always be reserved, so use the
+	 * rsvd_bits() function to generate the mask. This mask, along with
+	 * the present bit, will be used to generate a page fault with
+	 * PFER.RSV = 1.
+	 *
+	 * If the mask bit location is 52 (or above), then clear the mask.
+	 */
+	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
+
+	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -1361,6 +1402,8 @@ static __init int svm_hardware_setup(void)
 		}
 	}
 
+	svm_adjust_mmio_mask();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)
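
Aside (not part of the commit): the detection half of the patch can be exercised from userspace on Linux/x86 with GCC's <cpuid.h>. This is a hedged illustration of the same CPUID checks the patch performs; the privileged MSR_K8_SYSCFG read has no unprivileged equivalent and is omitted here.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Mirror the patch's first check: the maximum extended CPUID
	 * leaf must reach 0x8000001f for memory encryption info. */
	if (!__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) ||
	    eax < 0x8000001f) {
		puts("no memory encryption CPUID leaf; default mask kept");
		return 0;
	}

	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
		return 1;

	/* EBX[5:0] carries the encryption-bit (C-bit) position, the
	 * same field the patch reads with cpuid_ebx(0x8000001f) & 0x3f. */
	printf("C-bit position: %u\n", ebx & 0x3f);
	return 0;
}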
