
Commit 8e64806

Authored by Will Deacon (wildea01); committed by Russell King
ARM: 8299/1: mm: ensure local active ASID is marked as allocated on rollover
Commit e1a5848 ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE") removed the use of the reserved TTBR0 value for LPAE systems, since the ASID is held in the TTBR and can be updated atomically with the pgd of the next mm.

Unfortunately, this patch forgot to update flush_context, which deliberately avoids marking the local active ASID as allocated, since we used to switch via ASID zero and didn't need to allocate the ASID of the previous mm.

The side-effect of this is that we can allocate the same ASID to the next mm and, between flushing the local TLB and updating TTBR0, we can perform speculative TLB fills for userspace nG mappings using the page table of the previous mm.

The consequence of this is that the next mm can erroneously hit some mappings of the previous mm. Note that this was made significantly harder to hit by a391263 ("ARM: 8203/1: mm: try to re-use old ASID assignments following a rollover") but is still theoretically possible.

This patch fixes the problem by removing the code from flush_context that forces the allocated ASID to zero for the local CPU. Many thanks to the Broadcom guys for tracking this one down.

Fixes: e1a5848 ("ARM: 7924/1: mm: don't bother with reserved ttbr0 when running with LPAE")
Cc: <[email protected]> # v3.14+
Reported-by: Raymond Ngun <[email protected]>
Tested-by: Raymond Ngun <[email protected]>
Reviewed-by: Gregory Fong <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
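For orientation, the per-CPU state the message refers to looks roughly like the sketch below. The names are taken from the diff further down; the exact declarations are an assumption about arch/arm/mm/context.c rather than a quote from it.

/* Assumed sketch of the ASID allocator's per-CPU bookkeeping. */
static DEFINE_PER_CPU(atomic64_t, active_asids);  /* ASID of the mm currently running on this CPU */
static DEFINE_PER_CPU(u64, reserved_asids);       /* ASID remembered for that mm across a rollover */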
1 parent fba2890 commit 8e64806

File tree

1 file changed: +11 -15 lines changed


arch/arm/mm/context.c

Lines changed: 11 additions & 15 deletions
@@ -144,21 +144,17 @@ static void flush_context(unsigned int cpu)
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
 	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 	for_each_possible_cpu(i) {
-		if (i == cpu) {
-			asid = 0;
-		} else {
-			asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
-			/*
-			 * If this CPU has already been through a
-			 * rollover, but hasn't run another task in
-			 * the meantime, we must preserve its reserved
-			 * ASID, as this is the only trace we have of
-			 * the process it is still running.
-			 */
-			if (asid == 0)
-				asid = per_cpu(reserved_asids, i);
-			__set_bit(asid & ~ASID_MASK, asid_map);
-		}
+		asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = per_cpu(reserved_asids, i);
+		__set_bit(asid & ~ASID_MASK, asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
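Read without the diff markers, the resulting loop in flush_context() looks like the sketch below. It is assembled from the hunk above; the local variable declarations and the elided tail of the function are assumptions, and the extra comment on the xchg is editorial.

static void flush_context(unsigned int cpu)
{
	int i;		/* assumed declaration */
	u64 asid;	/* assumed declaration */

	/* Update the list of reserved ASIDs and the ASID bitmap. */
	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
	for_each_possible_cpu(i) {
		/*
		 * Grab (and clear) the ASID that was live on CPU i. After this
		 * patch the local CPU is treated like every other CPU, so its
		 * active ASID is marked as allocated and cannot be handed out
		 * again to the next mm.
		 */
		asid = atomic64_xchg(&per_cpu(active_asids, i), 0);
		/*
		 * If this CPU has already been through a rollover, but hasn't
		 * run another task in the meantime, we must preserve its
		 * reserved ASID, as this is the only trace we have of the
		 * process it is still running.
		 */
		if (asid == 0)
			asid = per_cpu(reserved_asids, i);
		__set_bit(asid & ~ASID_MASK, asid_map);
		per_cpu(reserved_asids, i) = asid;
	}

	/* Remainder of the function (TLB flush bookkeeping) elided. */
}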
