
Commit 53da198

KarimAllah Ahmed authored and Brian Maly committed
KVM/SVM: Allow direct access to MSR_IA32_SPEC_CTRL
[ Based on a patch from Paolo Bonzini <[email protected]> ]

... basically doing exactly what we do for VMX:

- Passthrough SPEC_CTRL to guests (if enabled in guest CPUID)
- Save and restore SPEC_CTRL around VMExit and VMEntry only if the guest
  actually used it.

Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Darren Kenny <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jun Nakajima <[email protected]>
Cc: [email protected]
Cc: Dave Hansen <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Asit Mallick <[email protected]>
Cc: Arjan Van De Ven <[email protected]>
Cc: Greg KH <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Ashok Raj <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit b2ac58f)

Orabug: 28069548

Signed-off-by: Mihai Carabas <[email protected]>
Reviewed-by: Darren Kenny <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Brian Maly <[email protected]>

Conflicts:
        arch/x86/kvm/svm.c

Contextual, and we also dropped msr_write_intercepted because we do not
use it (we have other logic for IBRS usage). No changes to
svm_vcpu_run() because we support IBRS and we have other code in place.

Signed-off-by: Brian Maly <[email protected]>
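The two bullets above describe a lazy scheme: SPEC_CTRL stays intercepted until the guest writes a non-zero value, at which point interception is dropped and the guest's value has to be saved on VMExit and restored on VMEntry. Below is a minimal, self-contained user-space model of that decision logic, for illustration only; it is not kernel code, and every name in it (struct vcpu_state, handle_wrmsr_spec_ctrl, on_vmexit, has_ibrs_cpuid) is hypothetical. Note also that this backport explicitly keeps its existing IBRS entry/exit code instead of adding the save path to svm_vcpu_run().

/*
 * Stand-alone model of the lazy SPEC_CTRL handling described above.
 * All names here are made up for illustration; the real logic lives in
 * arch/x86/kvm/svm.c.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPEC_CTRL_IBRS  (1ULL << 0)
#define SPEC_CTRL_STIBP (1ULL << 1)

struct vcpu_state {
        bool     has_ibrs_cpuid;    /* guest CPUID advertises IBRS/SSBD */
        bool     write_intercepted; /* WRMSR still trapped by the permission map */
        uint64_t spec_ctrl;         /* last value the guest wrote */
};

/* WRMSR(SPEC_CTRL) intercept: returns 0 on success, 1 to inject #GP. */
static int handle_wrmsr_spec_ctrl(struct vcpu_state *v, uint64_t data,
                                  bool host_initiated)
{
        if (!host_initiated && !v->has_ibrs_cpuid)
                return 1;               /* MSR not advertised to the guest */
        if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
                return 1;               /* reserved bits set */

        v->spec_ctrl = data;
        if (!data)
                return 0;               /* stay intercepted while the value is zero */

        /* First non-zero write: stop intercepting, the guest owns the MSR now. */
        v->write_intercepted = false;
        return 0;
}

/* VMExit path: save the guest value only if it could have changed. */
static void on_vmexit(struct vcpu_state *v, uint64_t hw_msr_value)
{
        if (!v->write_intercepted)
                v->spec_ctrl = hw_msr_value;   /* read back from hardware */
        /* the host value would be restored here */
}

int main(void)
{
        struct vcpu_state v = { .has_ibrs_cpuid = true,
                                .write_intercepted = true };

        printf("wrmsr(IBRS) -> %d\n",
               handle_wrmsr_spec_ctrl(&v, SPEC_CTRL_IBRS, false));
        printf("write still intercepted: %d\n", v.write_intercepted);
        on_vmexit(&v, SPEC_CTRL_IBRS | SPEC_CTRL_STIBP);
        printf("saved spec_ctrl: 0x%llx\n", (unsigned long long)v.spec_ctrl);
        return 0;
}

Built with any C compiler, this shows the write is no longer intercepted after the first non-zero WRMSR, which is exactly why a VMExit path then has to read the hardware value back before the host reuses the MSR.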
1 parent eaaa119

1 file changed (+31, -1)

arch/x86/kvm/svm.c

Lines changed: 31 additions & 1 deletion

@@ -193,7 +193,7 @@ static const struct svm_direct_access_msrs {
 	{ .index = MSR_IA32_LASTBRANCHTOIP,	.always = false },
 	{ .index = MSR_IA32_LASTINTFROMIP,	.always = false },
 	{ .index = MSR_IA32_LASTINTTOIP,	.always = false },
-	{ .index = MSR_IA32_SPEC_CTRL,		.always = true },
+	{ .index = MSR_IA32_SPEC_CTRL,		.always = false },
 	{ .index = MSR_IA32_PRED_CMD,		.always = false },
 	{ .index = MSR_INVALID,			.always = false },
 };
@@ -3176,6 +3176,11 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = svm->nested.vm_cr_msr;
 		break;
 	case MSR_IA32_SPEC_CTRL:
+		if (!msr_info->host_initiated &&
+		    !guest_cpuid_has_ibrs(vcpu) &&
+		    !guest_cpuid_has_ssbd(vcpu))
+			return 1;
+
 		msr_info->data = svm->spec_ctrl;
 		break;
 	case MSR_AMD64_VIRT_SPEC_CTRL:
@@ -3301,7 +3306,32 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
 		break;
 	case MSR_IA32_SPEC_CTRL:
+		if (!msr->host_initiated &&
+		    !guest_cpuid_has_ibrs(vcpu) &&
+		    !guest_cpuid_has_ssbd(vcpu))
+			return 1;
+
+		/* The STIBP bit doesn't fault even if it's not advertised */
+		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
+			return 1;
+
 		svm->spec_ctrl = data;
+
+		if (!data)
+			break;
+
+		/*
+		 * For non-nested:
+		 * When it's written (to non-zero) for the first time, pass
+		 * it through.
+		 *
+		 * For nested:
+		 * The handling of the MSR bitmap for L2 guests is done in
+		 * nested_svm_vmrun_msrpm.
+		 * We update the L1 MSR bit as well since it will end up
+		 * touching the MSR anyway now.
+		 */
+		set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
 		break;
 	case MSR_IA32_PRED_CMD:
 		if (!msr->host_initiated &&
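
What actually grants the "direct access" in the subject line is the combination of the .always = false entry in svm_direct_access_msrs and the set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1) call above: SVM consults a per-VM MSR permission map with a read-intercept bit and a write-intercept bit per MSR, and passing 1/1 here clears both bits so the guest's RDMSR/WRMSR of SPEC_CTRL no longer cause a VMEXIT (the L2 bitmap is merged separately in nested_svm_vmrun_msrpm, as the added comment notes). The toy below only models the two-bits-per-MSR idea; the map size, bit layout, and helper signature are simplified assumptions and do not match the real svm.c implementation.

/*
 * Toy model of a two-bits-per-MSR permission map (read bit + write bit),
 * in the spirit of SVM's MSRPM. Layout and helpers are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_MSRS 64                        /* the model tracks a tiny range only */
static uint8_t msrpm[(NR_MSRS * 2 + 7) / 8];

/* allow_read/allow_write == true clears the intercept bit (direct access). */
static void set_msr_interception(uint32_t msr, bool allow_read, bool allow_write)
{
        uint32_t bit_read  = msr * 2;
        uint32_t bit_write = msr * 2 + 1;

        if (allow_read)
                msrpm[bit_read / 8] &= ~(1u << (bit_read % 8));
        else
                msrpm[bit_read / 8] |= 1u << (bit_read % 8);

        if (allow_write)
                msrpm[bit_write / 8] &= ~(1u << (bit_write % 8));
        else
                msrpm[bit_write / 8] |= 1u << (bit_write % 8);
}

static bool write_intercepted(uint32_t msr)
{
        uint32_t bit = msr * 2 + 1;
        return msrpm[bit / 8] & (1u << (bit % 8));
}

int main(void)
{
        const uint32_t SPEC_CTRL = 7;     /* arbitrary slot in this toy map */

        /* .always = false: keep trapping until the guest actually uses the MSR. */
        set_msr_interception(SPEC_CTRL, false, false);
        printf("write intercepted: %d\n", write_intercepted(SPEC_CTRL));

        /* first non-zero guest write: drop interception, like the hunk above */
        set_msr_interception(SPEC_CTRL, true, true);
        printf("write intercepted: %d\n", write_intercepted(SPEC_CTRL));
        return 0;
}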
