[X86] SimplifyDemandedVectorEltsForTargetNode - reduce the size of VPERMV/VPERMV3 nodes if the upper elements are not demanded #133923
@@ -1113,8 +1113,8 @@ define <16 x i8> @evenelts_v32i16_trunc_v16i16_to_v16i8(<32 x i16> %n2) nounwind
 ;
 ; AVX512VBMI-FAST-LABEL: evenelts_v32i16_trunc_v16i16_to_v16i8:
 ; AVX512VBMI-FAST: # %bb.0:
-; AVX512VBMI-FAST-NEXT: vmovdqa {{.*#+}} xmm1 = [0,4,8,12,16,20,24,28,32,36,40,44,48,52,56,79]
-; AVX512VBMI-FAST-NEXT: vpxor %xmm2, %xmm2, %xmm2
+; AVX512VBMI-FAST-NEXT: vmovdqa {{.*#+}} xmm1 = [64,65,66,67,68,69,24,28,32,36,40,44,48,52,56,79]
+; AVX512VBMI-FAST-NEXT: vpmovdb %ymm0, %xmm2
 ; AVX512VBMI-FAST-NEXT: vpermi2b %zmm2, %zmm0, %zmm1
 ; AVX512VBMI-FAST-NEXT: vextracti32x4 $3, %zmm0, %xmm0
 ; AVX512VBMI-FAST-NEXT: vpextrw $6, %xmm0, %eax

Review thread on the new vpmovdb line:

Reviewer: This is not expected either, right?

Author: This change allows the VPMOVDB node to be created from another VPERMV3 node, so we no longer have 2 VPERMV3 nodes that we can easily fold together. We're still struggling to combine shuffles across different vector widths - it will be handled eventually after #133947, but that is a much larger WIP patch that is highly dependent on getting this one in first.
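To make the fold in the reply above concrete, here is a small self-contained sketch (an editor's illustration, not code from this patch or from X86ISelLowering.cpp) of the kind of demanded-elements check that allows a VPERMV3-style permute to be rebuilt at a narrower width: no upper result lane is demanded, and every demanded lane only reads low-half source elements. The function name and the exact condition are assumptions for illustration; the patch's real logic may differ.

#include <cstdint>
#include <vector>

// Result element I of a VPERMV3-style permute reads element Mask[I] of the
// concatenation (Src0 : Src1), so Mask[I] is in [0, 2*NumElts).
// Returns true if only lower-half result lanes are demanded and each of those
// lanes reads from the lower half of whichever source it uses, i.e. the
// permute could in principle be redone on half-width vectors.
// Assumes NumElts <= 64 so the demanded mask fits in a uint64_t.
bool canNarrowPermuteToHalfWidth(const std::vector<unsigned> &Mask,
                                 uint64_t DemandedElts) {
  unsigned NumElts = Mask.size();
  unsigned HalfElts = NumElts / 2;
  for (unsigned I = 0; I != NumElts; ++I) {
    if (!(DemandedElts & (uint64_t{1} << I)))
      continue;                           // lane not demanded: its index is irrelevant
    if (I >= HalfElts)
      return false;                       // an upper result lane is still demanded
    unsigned SrcElt = Mask[I] % NumElts;  // element index within Src0 or Src1
    if (SrcElt >= HalfElts)
      return false;                       // reads the upper half of a source
  }
  return true;
}

A check of roughly this shape would run before rebuilding the node with half-width operands and a remapped index vector.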
@@ -1124,14 +1124,14 @@ define <16 x i8> @evenelts_v32i16_trunc_v16i16_to_v16i8(<32 x i16> %n2) nounwind
 ;
 ; AVX512VBMI-SLOW-LABEL: evenelts_v32i16_trunc_v16i16_to_v16i8:
 ; AVX512VBMI-SLOW: # %bb.0:
-; AVX512VBMI-SLOW-NEXT: vmovdqa {{.*#+}} xmm1 = [0,4,8,12,16,20,24,28,32,36,40,44,48,77,78,79]
-; AVX512VBMI-SLOW-NEXT: vpxor %xmm2, %xmm2, %xmm2
-; AVX512VBMI-SLOW-NEXT: vpermi2b %zmm2, %zmm0, %zmm1
+; AVX512VBMI-SLOW-NEXT: vmovdqa {{.*#+}} xmm1 = [0,1,2,3,4,5,6,92,96,100,104,108,112,13,14,15]
+; AVX512VBMI-SLOW-NEXT: vpmovdb %ymm0, %xmm2
+; AVX512VBMI-SLOW-NEXT: vpermt2b %zmm0, %zmm1, %zmm2
 ; AVX512VBMI-SLOW-NEXT: vextracti32x4 $3, %zmm0, %xmm0
 ; AVX512VBMI-SLOW-NEXT: vpextrw $6, %xmm0, %eax
 ; AVX512VBMI-SLOW-NEXT: vpextrw $4, %xmm0, %ecx
 ; AVX512VBMI-SLOW-NEXT: vpextrw $2, %xmm0, %edx
-; AVX512VBMI-SLOW-NEXT: vpinsrb $13, %edx, %xmm1, %xmm0
+; AVX512VBMI-SLOW-NEXT: vpinsrb $13, %edx, %xmm2, %xmm0
 ; AVX512VBMI-SLOW-NEXT: vpinsrb $14, %ecx, %xmm0, %xmm0
 ; AVX512VBMI-SLOW-NEXT: vpinsrb $15, %eax, %xmm0, %xmm0
 ; AVX512VBMI-SLOW-NEXT: vzeroupper
Review thread:

Reviewer: Why `VT.getScalarSizeInBits() <= 16`? I don't see a test for it.

Author: OK - I'll try to add test coverage.

Author: I've added test coverage and handling for 128-bit shuffles with 32/64-bit elements so they correctly use VPERMILPS/D instructions (which can then further simplify to VPSHUFD etc.) - see the immediate-encoding sketch after this thread.

Reviewer: Do you mean 2426ac6? I don't see the test in this patch.

Author: Yes, the new VPERMILP code path is now being hit by the 2426ac6 test - but it still eventually folds to the same VSHUFPS as it did before, just via a different set of combines.

Reviewer: I see the point. My initial question was about the `VT.getScalarSizeInBits() <= 16` check, where I expected to see some changes for vpermi2b/vpermi2w at less than 512-bit width, but I didn't find any in the tests.
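As background for the VPERMILPS/D and VPSHUFD remark above: a 128-bit shuffle of 32-bit elements with a constant mask can be encoded entirely in an 8-bit immediate, two bits per destination lane, which is what makes narrowing a variable permute to VPERMILPS worthwhile. The helper below is an editor's illustration of that encoding, not code from LLVM or from this patch.

#include <cassert>
#include <cstdint>

// Build the 8-bit immediate used by VPERMILPS/VPSHUFD for a constant
// 4 x 32-bit shuffle mask {m0, m1, m2, m3}; each index selects the source
// element for the corresponding destination lane and must be in [0, 3].
uint8_t shuffleImm4x32(unsigned m0, unsigned m1, unsigned m2, unsigned m3) {
  assert(m0 < 4 && m1 < 4 && m2 < 4 && m3 < 4 && "indices must be in-lane");
  return uint8_t(m0 | (m1 << 2) | (m2 << 4) | (m3 << 6));
}

// For example, the identity mask {0,1,2,3} encodes as 0xE4 and the
// half swap {2,3,0,1} as 0x4E.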