
Commit 288be97

Ard Biesheuvel authored and wildea01 committed
arm64/lib: copy_page: use consistent prefetch stride
The optional prefetch instructions in the copy_page() routine are inconsistent: at the start of the function, two cachelines are prefetched beyond the one being loaded in the first iteration, but in the loop, the prefetch is one more line ahead. This appears to be unintentional, so let's fix it.

While at it, fix the comment style and white space.

Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
1 parent ece4b20 commit 288be97

1 file changed, +5 −4 lines changed


arch/arm64/lib/copy_page.S

Lines changed: 5 additions & 4 deletions
@@ -30,9 +30,10 @@
  */
 ENTRY(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	# Prefetch two cache lines ahead.
-	prfm    pldl1strm, [x1, #128]
-	prfm    pldl1strm, [x1, #256]
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif
 
 	ldp	x2, x3, [x1]
@@ -50,7 +51,7 @@ alternative_else_nop_endif
 	subs	x18, x18, #128
 
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	prfm    pldl1strm, [x1, #384]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif
 
 	stnp	x2, x3, [x0]
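
For context, a minimal sketch of the prefetch pattern that results from this change. It assumes, based on the rest of arch/arm64/lib/copy_page.S rather than the hunks shown above, that the copy loop moves 128 bytes per iteration and has already advanced x1 by 128 when the loop prefetch executes, so the distances below are a reconstruction, not part of the patch itself:

	// Sketch only: prologue, x1 = source page, one "block" = 128 bytes.
	prfm	pldl1strm, [x1, #128]	// one block ahead of the first load
	prfm	pldl1strm, [x1, #256]	// two blocks ahead
	prfm	pldl1strm, [x1, #384]	// three blocks ahead (the line added here)

	// Sketch only: loop body. x1 has advanced by 128 for the current
	// iteration, so this single prefetch stays a constant three blocks
	// ahead of the data being loaded.
	prfm	pldl1strm, [x1, #384]

With the extra prologue prefetch at #384, the first loop iteration already sees the same three-blocks-ahead distance that every later iteration maintains, which is the consistent stride the commit message describes.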
