Commit 878e70d

tlendacky authored and bp3tk0v committed
x86/sev: Check for the presence of an SVSM in the SNP secrets page
During early boot phases, check for the presence of an SVSM when running
as an SEV-SNP guest.

An SVSM is present if not running at VMPL0 and the 64-bit value at offset
0x148 into the secrets page is non-zero. If an SVSM is present, save the
SVSM Calling Area address (CAA), located at offset 0x150 into the secrets
page, and set the VMPL level of the guest, which should be non-zero, to
indicate the presence of an SVSM.

  [ bp: Touchups. ]

Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Borislav Petkov (AMD) <[email protected]>
Link: https://lore.kernel.org/r/9d3fe161be93d4ea60f43c2a3f2c311fe708b63b.1717600736.git.thomas.lendacky@amd.com
1 parent b547fc2 commit 878e70d
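In outline, the new check works as follows. This is a condensed, illustrative sketch of the svsm_setup_ca() routine added by this commit; the function name svsm_detect, the omission of the RIP_REL_REF() handling and most of the error paths are simplifications, not the actual patch:

/* Condensed sketch of the flow -- see svsm_setup_ca() in sev-shared.c below */
static void svsm_detect(const struct cc_blob_sev_info *cc_info)
{
	struct snp_secrets_page *secrets = (void *)cc_info->secrets_phys;

	/* RMPADJUST succeeds only at VMPL0; success means no SVSM is involved */
	if (!rmpadjust((unsigned long)&boot_ghcb_page, RMP_PG_SIZE_4K, 1))
		return;

	/* svsm_size (offset 0x148): non-zero when an SVSM is advertised */
	if (!secrets->svsm_size)
		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NO_SVSM);

	/* Record the guest's non-zero VMPL and the Calling Area GPA (offset 0x150) */
	snp_vmpl = secrets->svsm_guest_vmpl;
	boot_svsm_caa_pa = secrets->svsm_caa;
	boot_svsm_caa = (struct svsm_ca *)secrets->svsm_caa;
}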

File tree

6 files changed: +160 −10 lines

Documentation/arch/x86/amd-memory-encryption.rst

Lines changed: 28 additions & 1 deletion
@@ -130,4 +130,31 @@ SNP feature support.
 
 More details in AMD64 APM[1] Vol 2: 15.34.10 SEV_STATUS MSR
 
-[1] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/24593.pdf
+Secure VM Service Module (SVSM)
+===============================
+SNP provides a feature called Virtual Machine Privilege Levels (VMPL) which
+defines four privilege levels at which guest software can run. The most
+privileged level is 0 and numerically higher numbers have lesser privileges.
+More details in the AMD64 APM Vol 2, section "15.35.7 Virtual Machine
+Privilege Levels", docID: 24593.
+
+When using that feature, different services can run at different protection
+levels, apart from the guest OS but still within the secure SNP environment.
+They can provide services to the guest, like a vTPM, for example.
+
+When a guest is not running at VMPL0, it needs to communicate with the software
+running at VMPL0 to perform privileged operations or to interact with secure
+services. An example for such a privileged operation is PVALIDATE which is
+*required* to be executed at VMPL0.
+
+In this scenario, the software running at VMPL0 is usually called a Secure VM
+Service Module (SVSM). Discovery of an SVSM and the API used to communicate
+with it is documented in "Secure VM Service Module for SEV-SNP Guests", docID:
+58019.
+
+(Latest versions of the above-mentioned documents can be found by using
+a search engine like duckduckgo.com and typing in:
+
+  site:amd.com "Secure VM Service Module for SEV-SNP Guests", docID: 58019
+
+for example.)
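To make the VMPL0 requirement above concrete: a guest can probe its own privilege level with RMPADJUST, which is what the kernel changes below do against the boot GHCB page. A minimal sketch, assuming the kernel's rmpadjust() wrapper; the helper name here is hypothetical:

/*
 * Hypothetical helper: adjusting the VMPL1 permissions of a guest-owned
 * page succeeds only when executed at VMPL0. A failure therefore implies
 * the guest runs at a lesser privilege and must rely on an SVSM for
 * VMPL0-only operations such as PVALIDATE.
 */
static bool snp_running_at_vmpl0(void *page)
{
	return rmpadjust((unsigned long)page, RMP_PG_SIZE_4K, 1) == 0;
}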

arch/x86/boot/compressed/sev.c

Lines changed: 13 additions & 8 deletions
@@ -462,6 +462,13 @@ static bool early_snp_init(struct boot_params *bp)
 	 */
 	setup_cpuid_table(cc_info);
 
+	/*
+	 * Record the SVSM Calling Area (CA) address if the guest is not
+	 * running at VMPL0. The CA will be used to communicate with the
+	 * SVSM and request its services.
+	 */
+	svsm_setup_ca(cc_info);
+
 	/*
 	 * Pass run-time kernel a pointer to CC info via boot_params so EFI
 	 * config table doesn't need to be searched again during early startup
@@ -571,14 +578,12 @@ void sev_enable(struct boot_params *bp)
 	/*
 	 * Enforce running at VMPL0.
 	 *
-	 * RMPADJUST modifies RMP permissions of a lesser-privileged (numerically
-	 * higher) privilege level. Here, clear the VMPL1 permission mask of the
-	 * GHCB page. If the guest is not running at VMPL0, this will fail.
-	 *
-	 * If the guest is running at VMPL0, it will succeed. Even if that operation
-	 * modifies permission bits, it is still ok to do so currently because Linux
-	 * SNP guests running at VMPL0 only run at VMPL0, so VMPL1 or higher
-	 * permission mask changes are a don't-care.
+	 * Use RMPADJUST (see the rmpadjust() function for a description of
+	 * what the instruction does) to update the VMPL1 permissions of a
+	 * page. If the guest is running at VMPL0, this will succeed. If the
+	 * guest is running at any other VMPL, this will fail. Linux SNP guests
+	 * only ever run at a single VMPL level so permission mask changes of a
+	 * lesser-privileged VMPL are a don't-care.
 	 */
 	if (rmpadjust((unsigned long)&boot_ghcb_page, RMP_PG_SIZE_4K, 1))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NOT_VMPL0);

arch/x86/include/asm/sev-common.h

Lines changed: 4 additions & 0 deletions
@@ -163,6 +163,10 @@ struct snp_psc_desc {
 #define GHCB_TERM_NOT_VMPL0		3	/* SNP guest is not running at VMPL-0 */
 #define GHCB_TERM_CPUID		4	/* CPUID-validation failure */
 #define GHCB_TERM_CPUID_HV		5	/* CPUID failure during hypervisor fallback */
+#define GHCB_TERM_SECRETS_PAGE		6	/* Secrets page failure */
+#define GHCB_TERM_NO_SVSM		7	/* SVSM is not advertised in the secrets page */
+#define GHCB_TERM_SVSM_VMPL0		8	/* SVSM is present but has set VMPL to 0 */
+#define GHCB_TERM_SVSM_CAA		9	/* SVSM is present but CAA is not page aligned */
 
 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
 
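For reference, these reason codes are reported to the hypervisor through the GHCB MSR termination protocol. The snippet below is a standalone illustration of that encoding only; the constants are assumed to mirror the GHCB_MSR_TERM_* definitions in this header and should be checked against the actual tree:

#include <stdint.h>
#include <stdio.h>

/* Values mirrored from sev-common.h for illustration (verify against your tree) */
#define GHCB_MSR_TERM_REQ		0x100	/* GHCBInfo: termination request */
#define GHCB_MSR_TERM_REASON_SET_POS	12	/* reason code set, bits 15:12 */
#define GHCB_MSR_TERM_REASON_POS	16	/* reason code, bits 23:16 */
#define SEV_TERM_SET_LINUX		1
#define GHCB_TERM_NO_SVSM		7	/* new in this commit */

int main(void)
{
	uint64_t val = GHCB_MSR_TERM_REQ;

	/* Encode the Linux reason-code set and the SVSM-not-advertised reason */
	val |= (uint64_t)SEV_TERM_SET_LINUX << GHCB_MSR_TERM_REASON_SET_POS;
	val |= (uint64_t)GHCB_TERM_NO_SVSM << GHCB_MSR_TERM_REASON_POS;

	/* A guest would write this value to the GHCB MSR and issue VMGEXIT */
	printf("termination request MSR value: 0x%llx\n", (unsigned long long)val);
	return 0;
}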
arch/x86/include/asm/sev.h

Lines changed: 32 additions & 1 deletion
@@ -152,9 +152,32 @@ struct snp_secrets_page {
 	u8 vmpck2[VMPCK_KEY_LEN];
 	u8 vmpck3[VMPCK_KEY_LEN];
 	struct secrets_os_area os_area;
-	u8 rsvd3[3840];
+
+	u8 vmsa_tweak_bitmap[64];
+
+	/* SVSM fields */
+	u64 svsm_base;
+	u64 svsm_size;
+	u64 svsm_caa;
+	u32 svsm_max_version;
+	u8 svsm_guest_vmpl;
+	u8 rsvd3[3];
+
+	/* Remainder of page */
+	u8 rsvd4[3744];
 } __packed;
 
+/*
+ * The SVSM Calling Area (CA) related structures.
+ */
+struct svsm_ca {
+	u8 call_pending;
+	u8 mem_available;
+	u8 rsvd1[6];
+
+	u8 svsm_buffer[PAGE_SIZE - 8];
+};
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 extern void __sev_es_ist_enter(struct pt_regs *regs);
 extern void __sev_es_ist_exit(void);
@@ -181,6 +204,14 @@ static __always_inline void sev_es_nmi_complete(void)
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
 extern void sev_enable(struct boot_params *bp);
 
+/*
+ * RMPADJUST modifies the RMP permissions of a page of a lesser-
+ * privileged (numerically higher) VMPL.
+ *
+ * If the guest is running at a higher privilege than the privilege
+ * level the instruction is targeting, the instruction will succeed,
+ * otherwise, it will fail.
+ */
 static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs)
 {
 	int rc;
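The new fields line up with the offsets quoted in the commit message: svsm_size at 0x148 and svsm_caa at 0x150 into the secrets page, with the reworked reserved fields keeping the structure exactly one page. A kernel-style sketch of compile-time checks that would express this (hypothetical helper, not part of this patch):

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <asm/sev.h>

static void __maybe_unused snp_secrets_layout_checks(void)
{
	/* 64-bit SVSM presence indicator at offset 0x148 of the secrets page */
	BUILD_BUG_ON(offsetof(struct snp_secrets_page, svsm_size) != 0x148);
	/* SVSM Calling Area GPA at offset 0x150 */
	BUILD_BUG_ON(offsetof(struct snp_secrets_page, svsm_caa) != 0x150);
	/* rsvd4 shrinks so the secrets page stays exactly 4K */
	BUILD_BUG_ON(sizeof(struct snp_secrets_page) != PAGE_SIZE);
}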

arch/x86/kernel/sev-shared.c

Lines changed: 76 additions & 0 deletions
@@ -23,6 +23,21 @@
 #define sev_printk_rtl(fmt, ...)
 #endif
 
+/*
+ * SVSM related information:
+ * When running under an SVSM, the VMPL that Linux is executing at must be
+ * non-zero. The VMPL is therefore used to indicate the presence of an SVSM.
+ *
+ * During boot, the page tables are set up as identity mapped and later
+ * changed to use kernel virtual addresses. Maintain separate virtual and
+ * physical addresses for the CAA to allow SVSM functions to be used during
+ * early boot, both with identity mapped virtual addresses and proper kernel
+ * virtual addresses.
+ */
+static u8 snp_vmpl __ro_after_init;
+static struct svsm_ca *boot_svsm_caa __ro_after_init;
+static u64 boot_svsm_caa_pa __ro_after_init;
+
 /* I/O parameters for CPUID-related helpers */
 struct cpuid_leaf {
 	u32 fn;
@@ -1269,3 +1284,64 @@ static enum es_result vc_check_opcode_bytes(struct es_em_ctxt *ctxt,
 
 	return ES_UNSUPPORTED;
 }
+
+/*
+ * Maintain the GPA of the SVSM Calling Area (CA) in order to utilize the SVSM
+ * services needed when not running in VMPL0.
+ */
+static void __head svsm_setup_ca(const struct cc_blob_sev_info *cc_info)
+{
+	struct snp_secrets_page *secrets_page;
+	u64 caa;
+
+	BUILD_BUG_ON(sizeof(*secrets_page) != PAGE_SIZE);
+
+	/*
+	 * Check if running at VMPL0.
+	 *
+	 * Use RMPADJUST (see the rmpadjust() function for a description of what
+	 * the instruction does) to update the VMPL1 permissions of a page. If
+	 * the guest is running at VMPL0, this will succeed and implies there is
+	 * no SVSM. If the guest is running at any other VMPL, this will fail.
+	 * Linux SNP guests only ever run at a single VMPL level so permission mask
+	 * changes of a lesser-privileged VMPL are a don't-care.
+	 *
+	 * Use a rip-relative reference to obtain the proper address, since this
+	 * routine is running identity mapped when called, both by the decompressor
+	 * code and the early kernel code.
+	 */
+	if (!rmpadjust((unsigned long)&RIP_REL_REF(boot_ghcb_page), RMP_PG_SIZE_4K, 1))
+		return;
+
+	/*
+	 * Not running at VMPL0, ensure everything has been properly supplied
+	 * for running under an SVSM.
+	 */
+	if (!cc_info || !cc_info->secrets_phys || cc_info->secrets_len != PAGE_SIZE)
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SECRETS_PAGE);
+
+	secrets_page = (struct snp_secrets_page *)cc_info->secrets_phys;
+	if (!secrets_page->svsm_size)
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NO_SVSM);
+
+	if (!secrets_page->svsm_guest_vmpl)
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_VMPL0);
+
+	RIP_REL_REF(snp_vmpl) = secrets_page->svsm_guest_vmpl;
+
+	caa = secrets_page->svsm_caa;
+
+	/*
+	 * An open-coded PAGE_ALIGNED() in order to avoid including
+	 * kernel-proper headers into the decompressor.
+	 */
+	if (caa & (PAGE_SIZE - 1))
+		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_SVSM_CAA);
+
+	/*
+	 * The CA is identity mapped when this routine is called, both by the
+	 * decompressor code and the early kernel code.
+	 */
+	RIP_REL_REF(boot_svsm_caa) = (struct svsm_ca *)caa;
+	RIP_REL_REF(boot_svsm_caa_pa) = caa;
+}

arch/x86/kernel/sev.c

Lines changed: 7 additions & 0 deletions
@@ -2108,6 +2108,13 @@ bool __head snp_init(struct boot_params *bp)
 
 	setup_cpuid_table(cc_info);
 
+	/*
+	 * Record the SVSM Calling Area address (CAA) if the guest is not
+	 * running at VMPL0. The CA will be used to communicate with the
+	 * SVSM to perform the SVSM services.
+	 */
+	svsm_setup_ca(cc_info);
+
 	/*
 	 * The CC blob will be used later to access the secrets page. Cache
 	 * it here like the boot kernel does.
