Commit 0f52b3a

maheshsalmpe authored and committed
powerpc/mce: Fix SLB rebolting during MCE recovery path.
The commit e7e8184 ("powerpc/64s: move machine check SLB flushing to mm/slb.c") introduced a bug in reloading bolted SLB entries. Unused bolted entries are stored with .esid=0 in the slb_shadow area, and that value is now used directly as the RB input to slbmte, which means the RB[52:63] index field is set to 0, which causes SLB entry 0 to be cleared.

Fix this by storing the index bits in the unused bolted entries, which directs the slbmte to the right place.

The SLB shadow area is also used by the hypervisor, but PAPR is okay with that, from LoPAPR v1.1, 14.11.1.3 SLB Shadow Buffer:

  Note: SLB is filled sequentially starting at index 0 from the
  shadow buffer ignoring the contents of RB field bits 52-63

Fixes: e7e8184 ("powerpc/64s: move machine check SLB flushing to mm/slb.c")
Signed-off-by: Mahesh Salgaonkar <[email protected]>
Signed-off-by: Nicholas Piggin <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
1 parent 8cfbdbd commit 0f52b3a

1 file changed: 1 addition, 1 deletion


arch/powerpc/mm/slb.c (1 addition, 1 deletion)

@@ -70,7 +70,7 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
 
 static inline void slb_shadow_clear(enum slb_index index)
 {
-	WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+	WRITE_ONCE(get_slb_shadow()->save_area[index].esid, cpu_to_be64(index));
 }
 
 static inline void create_shadowed_slbe(unsigned long ea, int ssize,
