[RISCV] Default to MicroOpBufferSize = 1 for scheduling purposes #126608
Conversation
This change introduces a default schedule model for the RISCV target which leaves everything unchanged except the MicroOpBufferSize. The default value of this flag in NoSchedModel is 0. Both configurations represent in-order cores (i.e. no reorder window); the difference between them comes down to whether heuristics other than latency are allowed to apply. (Implementation details below.)

I left the processor models which explicitly set MicroOpBufferSize=0 unchanged in this patch, but strongly suspect we should change those too. Honestly, I think the LLVM-wide default for this flag should be changed, but I don't have the energy to manage the updates for all targets.

Implementation-wise, the effect of this change is that schedule units which are ready to run *except that* one of their predecessors may not have completed yet are added to the Available list, not the Pending one. The result is that it becomes possible to choose to schedule a node before its ready cycle if the heuristics prefer. This is essentially choosing to insert a resource stall instead of e.g. increasing register pressure.

Note that I was initially concerned there might be a correctness aspect (as in some kind of exposed pipeline design), but the generic scheduler doesn't seem to know how to insert noop instructions. Without that, a program wouldn't be guaranteed to schedule on an exposed pipeline anyway, depending on the program and schedule model in question.

The visible effect is that codegen results sometimes favor reducing register pressure at the cost of a possible stall. This is mostly churn (or small wins) on scalar because we have many more registers, but is of major importance on vector - particularly high LMUL - because we effectively have many fewer registers and the relative cost of spilling is much higher. This is a significant improvement in high LMUL code quality for default rva23u configurations - or any non -mcpu vector configuration for that matter.
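To make the Available-vs-Pending mechanics above concrete, here is a minimal, self-contained sketch of the gating decision being described. It is an illustration only, not the actual SchedBoundary::releaseNode implementation; the type names, fields, and example values are simplified stand-ins.

#include <cstdio>
#include <vector>

// Toy stand-in for a scheduling unit: just a name and the cycle at which all
// of its predecessors will have completed.
struct ToySUnit {
  const char *Name;
  unsigned ReadyCycle;
};

// Toy stand-in for one scheduling boundary (zone).
struct ToyZone {
  unsigned MicroOpBufferSize = 0;
  unsigned CurrCycle = 0;
  std::vector<ToySUnit> Available; // visible to the picking heuristics
  std::vector<ToySUnit> Pending;   // hidden until the ready cycle is reached

  // With MicroOpBufferSize == 0, a node whose ready cycle is still in the
  // future counts as a hazard and is parked on Pending, so only latency can
  // drive the pick. With a buffer size >= 1 it goes straight to Available,
  // so other heuristics (e.g. register pressure) may choose it early and
  // simply eat the stall.
  void releaseNode(const ToySUnit &SU) {
    bool Hazard = MicroOpBufferSize == 0 && SU.ReadyCycle > CurrCycle;
    (Hazard ? Pending : Available).push_back(SU);
  }
};

int main() {
  for (unsigned BufSize : {0u, 1u}) {
    ToyZone Zone;
    Zone.MicroOpBufferSize = BufSize;
    Zone.CurrCycle = 2;
    Zone.releaseNode({"already-ready op", 1});
    Zone.releaseNode({"use of an in-flight load", 5});
    std::printf("MicroOpBufferSize=%u: %zu available, %zu pending\n", BufSize,
                Zone.Available.size(), Zone.Pending.size());
  }
  return 0;
}

With a buffer size of 0 the second node stays on Pending until its ready cycle; with a buffer size of 1 both nodes are candidates immediately, which is exactly the difference this patch keys off.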
@llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Patch is 4.54 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/126608.diff

402 Files Affected:
diff --git a/llvm/lib/Target/RISCV/RISCVProcessors.td b/llvm/lib/Target/RISCV/RISCVProcessors.td
index b5eea138732a557..c54afa1e6e72e0c 100644
--- a/llvm/lib/Target/RISCV/RISCVProcessors.td
+++ b/llvm/lib/Target/RISCV/RISCVProcessors.td
@@ -88,21 +88,30 @@ class RISCVTuneProcessorModel<string n,
defvar GenericTuneFeatures = [TuneOptimizedNF2SegmentLoadStore];
+// Adjust the default cost model to enable all heuristics, not just latency.
+// In particular, this enables register pressure heuristics which are very
+// important for high LMUL vector code, and have little negative impact
+// on other configurations.
+def GenericModel : SchedMachineModel {
+ let MicroOpBufferSize = 1;
+ let CompleteModel = 0;
+}
+
def GENERIC_RV32 : RISCVProcessorModel<"generic-rv32",
- NoSchedModel,
+ GenericModel,
[Feature32Bit,
FeatureStdExtI],
GenericTuneFeatures>,
GenericTuneInfo;
def GENERIC_RV64 : RISCVProcessorModel<"generic-rv64",
- NoSchedModel,
+ GenericModel,
[Feature64Bit,
FeatureStdExtI],
GenericTuneFeatures>,
GenericTuneInfo;
// Support generic for compatibility with other targets. The triple will be used
// to change to the appropriate rv32/rv64 version.
-def GENERIC : RISCVTuneProcessorModel<"generic", NoSchedModel>, GenericTuneInfo;
+def GENERIC : RISCVTuneProcessorModel<"generic", GenericModel>, GenericTuneInfo;
def MIPS_P8700 : RISCVProcessorModel<"mips-p8700",
MIPSP8700Model,
@@ -496,7 +505,7 @@ def TENSTORRENT_ASCALON_D8 : RISCVProcessorModel<"tt-ascalon-d8",
TunePostRAScheduler]>;
def VENTANA_VEYRON_V1 : RISCVProcessorModel<"veyron-v1",
- NoSchedModel,
+ GenericModel,
[Feature64Bit,
FeatureStdExtI,
FeatureStdExtZifencei,
@@ -556,7 +565,7 @@ def XIANGSHAN_NANHU : RISCVProcessorModel<"xiangshan-nanhu",
TuneShiftedZExtWFusion]>;
def SPACEMIT_X60 : RISCVProcessorModel<"spacemit-x60",
- NoSchedModel,
+ GenericModel,
!listconcat(RVA22S64Features,
[FeatureStdExtV,
FeatureStdExtSscofpmf,
@@ -581,7 +590,7 @@ def SPACEMIT_X60 : RISCVProcessorModel<"spacemit-x60",
}
def RP2350_HAZARD3 : RISCVProcessorModel<"rp2350-hazard3",
- NoSchedModel,
+ GenericModel,
[Feature32Bit,
FeatureStdExtI,
FeatureStdExtM,
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/add-imm.ll b/llvm/test/CodeGen/RISCV/GlobalISel/add-imm.ll
index 0fd23a7d346dfd3..1b96189aaea5c7c 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/add-imm.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/add-imm.ll
@@ -212,30 +212,30 @@ define i64 @add64_accept(i64 %a) nounwind {
define void @add32_reject() nounwind {
; RV32I-LABEL: add32_reject:
; RV32I: # %bb.0:
-; RV32I-NEXT: lui a0, %hi(ga)
-; RV32I-NEXT: lui a1, %hi(gb)
-; RV32I-NEXT: lw a2, %lo(ga)(a0)
-; RV32I-NEXT: lw a3, %lo(gb)(a1)
-; RV32I-NEXT: lui a4, 1
-; RV32I-NEXT: addi a4, a4, -1096
-; RV32I-NEXT: add a2, a2, a4
-; RV32I-NEXT: add a3, a3, a4
-; RV32I-NEXT: sw a2, %lo(ga)(a0)
-; RV32I-NEXT: sw a3, %lo(gb)(a1)
+; RV32I-NEXT: lui a0, 1
+; RV32I-NEXT: lui a1, %hi(ga)
+; RV32I-NEXT: lui a2, %hi(gb)
+; RV32I-NEXT: lw a3, %lo(ga)(a1)
+; RV32I-NEXT: lw a4, %lo(gb)(a2)
+; RV32I-NEXT: addi a0, a0, -1096
+; RV32I-NEXT: add a3, a3, a0
+; RV32I-NEXT: add a0, a4, a0
+; RV32I-NEXT: sw a3, %lo(ga)(a1)
+; RV32I-NEXT: sw a0, %lo(gb)(a2)
; RV32I-NEXT: ret
;
; RV64I-LABEL: add32_reject:
; RV64I: # %bb.0:
-; RV64I-NEXT: lui a0, %hi(ga)
-; RV64I-NEXT: lui a1, %hi(gb)
-; RV64I-NEXT: lw a2, %lo(ga)(a0)
-; RV64I-NEXT: lw a3, %lo(gb)(a1)
-; RV64I-NEXT: lui a4, 1
-; RV64I-NEXT: addi a4, a4, -1096
-; RV64I-NEXT: add a2, a2, a4
-; RV64I-NEXT: add a3, a3, a4
-; RV64I-NEXT: sw a2, %lo(ga)(a0)
-; RV64I-NEXT: sw a3, %lo(gb)(a1)
+; RV64I-NEXT: lui a0, 1
+; RV64I-NEXT: lui a1, %hi(ga)
+; RV64I-NEXT: lui a2, %hi(gb)
+; RV64I-NEXT: lw a3, %lo(ga)(a1)
+; RV64I-NEXT: lw a4, %lo(gb)(a2)
+; RV64I-NEXT: addi a0, a0, -1096
+; RV64I-NEXT: add a3, a3, a0
+; RV64I-NEXT: add a0, a4, a0
+; RV64I-NEXT: sw a3, %lo(ga)(a1)
+; RV64I-NEXT: sw a0, %lo(gb)(a2)
; RV64I-NEXT: ret
%1 = load i32, ptr @ga, align 4
%2 = load i32, ptr @gb, align 4
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/combine-neg-abs.ll b/llvm/test/CodeGen/RISCV/GlobalISel/combine-neg-abs.ll
index 3a55189076deeea..5b9f0e60e7d808e 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/combine-neg-abs.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/combine-neg-abs.ll
@@ -93,49 +93,49 @@ define i32 @expanded_neg_abs32_unsigned(i32 %x) {
define i64 @expanded_neg_abs64(i64 %x) {
; RV32I-LABEL: expanded_neg_abs64:
; RV32I: # %bb.0:
-; RV32I-NEXT: snez a2, a0
-; RV32I-NEXT: neg a3, a1
-; RV32I-NEXT: sub a2, a3, a2
-; RV32I-NEXT: neg a3, a0
-; RV32I-NEXT: beq a2, a1, .LBB2_2
+; RV32I-NEXT: neg a2, a0
+; RV32I-NEXT: snez a3, a0
+; RV32I-NEXT: neg a4, a1
+; RV32I-NEXT: sub a3, a4, a3
+; RV32I-NEXT: beq a3, a1, .LBB2_2
; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: slt a4, a1, a2
+; RV32I-NEXT: slt a4, a1, a3
; RV32I-NEXT: beqz a4, .LBB2_3
; RV32I-NEXT: j .LBB2_4
; RV32I-NEXT: .LBB2_2:
-; RV32I-NEXT: sltu a4, a0, a3
+; RV32I-NEXT: sltu a4, a0, a2
; RV32I-NEXT: bnez a4, .LBB2_4
; RV32I-NEXT: .LBB2_3:
-; RV32I-NEXT: mv a3, a0
-; RV32I-NEXT: mv a2, a1
+; RV32I-NEXT: mv a2, a0
+; RV32I-NEXT: mv a3, a1
; RV32I-NEXT: .LBB2_4:
-; RV32I-NEXT: neg a0, a3
-; RV32I-NEXT: snez a1, a3
-; RV32I-NEXT: neg a2, a2
+; RV32I-NEXT: neg a0, a2
+; RV32I-NEXT: snez a1, a2
+; RV32I-NEXT: neg a2, a3
; RV32I-NEXT: sub a1, a2, a1
; RV32I-NEXT: ret
;
; RV32ZBB-LABEL: expanded_neg_abs64:
; RV32ZBB: # %bb.0:
-; RV32ZBB-NEXT: snez a2, a0
-; RV32ZBB-NEXT: neg a3, a1
-; RV32ZBB-NEXT: sub a2, a3, a2
-; RV32ZBB-NEXT: neg a3, a0
-; RV32ZBB-NEXT: beq a2, a1, .LBB2_2
+; RV32ZBB-NEXT: neg a2, a0
+; RV32ZBB-NEXT: snez a3, a0
+; RV32ZBB-NEXT: neg a4, a1
+; RV32ZBB-NEXT: sub a3, a4, a3
+; RV32ZBB-NEXT: beq a3, a1, .LBB2_2
; RV32ZBB-NEXT: # %bb.1:
-; RV32ZBB-NEXT: slt a4, a1, a2
+; RV32ZBB-NEXT: slt a4, a1, a3
; RV32ZBB-NEXT: beqz a4, .LBB2_3
; RV32ZBB-NEXT: j .LBB2_4
; RV32ZBB-NEXT: .LBB2_2:
-; RV32ZBB-NEXT: sltu a4, a0, a3
+; RV32ZBB-NEXT: sltu a4, a0, a2
; RV32ZBB-NEXT: bnez a4, .LBB2_4
; RV32ZBB-NEXT: .LBB2_3:
-; RV32ZBB-NEXT: mv a3, a0
-; RV32ZBB-NEXT: mv a2, a1
+; RV32ZBB-NEXT: mv a2, a0
+; RV32ZBB-NEXT: mv a3, a1
; RV32ZBB-NEXT: .LBB2_4:
-; RV32ZBB-NEXT: neg a0, a3
-; RV32ZBB-NEXT: snez a1, a3
-; RV32ZBB-NEXT: neg a2, a2
+; RV32ZBB-NEXT: neg a0, a2
+; RV32ZBB-NEXT: snez a1, a2
+; RV32ZBB-NEXT: neg a2, a3
; RV32ZBB-NEXT: sub a1, a2, a1
; RV32ZBB-NEXT: ret
;
@@ -163,49 +163,49 @@ define i64 @expanded_neg_abs64(i64 %x) {
define i64 @expanded_neg_abs64_unsigned(i64 %x) {
; RV32I-LABEL: expanded_neg_abs64_unsigned:
; RV32I: # %bb.0:
-; RV32I-NEXT: snez a2, a0
-; RV32I-NEXT: neg a3, a1
-; RV32I-NEXT: sub a2, a3, a2
-; RV32I-NEXT: neg a3, a0
-; RV32I-NEXT: beq a2, a1, .LBB3_2
+; RV32I-NEXT: neg a2, a0
+; RV32I-NEXT: snez a3, a0
+; RV32I-NEXT: neg a4, a1
+; RV32I-NEXT: sub a3, a4, a3
+; RV32I-NEXT: beq a3, a1, .LBB3_2
; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: sltu a4, a1, a2
+; RV32I-NEXT: sltu a4, a1, a3
; RV32I-NEXT: beqz a4, .LBB3_3
; RV32I-NEXT: j .LBB3_4
; RV32I-NEXT: .LBB3_2:
-; RV32I-NEXT: sltu a4, a0, a3
+; RV32I-NEXT: sltu a4, a0, a2
; RV32I-NEXT: bnez a4, .LBB3_4
; RV32I-NEXT: .LBB3_3:
-; RV32I-NEXT: mv a3, a0
-; RV32I-NEXT: mv a2, a1
+; RV32I-NEXT: mv a2, a0
+; RV32I-NEXT: mv a3, a1
; RV32I-NEXT: .LBB3_4:
-; RV32I-NEXT: neg a0, a3
-; RV32I-NEXT: snez a1, a3
-; RV32I-NEXT: neg a2, a2
+; RV32I-NEXT: neg a0, a2
+; RV32I-NEXT: snez a1, a2
+; RV32I-NEXT: neg a2, a3
; RV32I-NEXT: sub a1, a2, a1
; RV32I-NEXT: ret
;
; RV32ZBB-LABEL: expanded_neg_abs64_unsigned:
; RV32ZBB: # %bb.0:
-; RV32ZBB-NEXT: snez a2, a0
-; RV32ZBB-NEXT: neg a3, a1
-; RV32ZBB-NEXT: sub a2, a3, a2
-; RV32ZBB-NEXT: neg a3, a0
-; RV32ZBB-NEXT: beq a2, a1, .LBB3_2
+; RV32ZBB-NEXT: neg a2, a0
+; RV32ZBB-NEXT: snez a3, a0
+; RV32ZBB-NEXT: neg a4, a1
+; RV32ZBB-NEXT: sub a3, a4, a3
+; RV32ZBB-NEXT: beq a3, a1, .LBB3_2
; RV32ZBB-NEXT: # %bb.1:
-; RV32ZBB-NEXT: sltu a4, a1, a2
+; RV32ZBB-NEXT: sltu a4, a1, a3
; RV32ZBB-NEXT: beqz a4, .LBB3_3
; RV32ZBB-NEXT: j .LBB3_4
; RV32ZBB-NEXT: .LBB3_2:
-; RV32ZBB-NEXT: sltu a4, a0, a3
+; RV32ZBB-NEXT: sltu a4, a0, a2
; RV32ZBB-NEXT: bnez a4, .LBB3_4
; RV32ZBB-NEXT: .LBB3_3:
-; RV32ZBB-NEXT: mv a3, a0
-; RV32ZBB-NEXT: mv a2, a1
+; RV32ZBB-NEXT: mv a2, a0
+; RV32ZBB-NEXT: mv a3, a1
; RV32ZBB-NEXT: .LBB3_4:
-; RV32ZBB-NEXT: neg a0, a3
-; RV32ZBB-NEXT: snez a1, a3
-; RV32ZBB-NEXT: neg a2, a2
+; RV32ZBB-NEXT: neg a0, a2
+; RV32ZBB-NEXT: snez a1, a2
+; RV32ZBB-NEXT: neg a2, a3
; RV32ZBB-NEXT: sub a1, a2, a1
; RV32ZBB-NEXT: ret
;
@@ -315,49 +315,49 @@ define i32 @expanded_neg_inv_abs32_unsigned(i32 %x) {
define i64 @expanded_neg_inv_abs64(i64 %x) {
; RV32I-LABEL: expanded_neg_inv_abs64:
; RV32I: # %bb.0:
-; RV32I-NEXT: snez a2, a0
-; RV32I-NEXT: neg a3, a1
-; RV32I-NEXT: sub a2, a3, a2
-; RV32I-NEXT: neg a3, a0
-; RV32I-NEXT: beq a2, a1, .LBB6_2
+; RV32I-NEXT: neg a2, a0
+; RV32I-NEXT: snez a3, a0
+; RV32I-NEXT: neg a4, a1
+; RV32I-NEXT: sub a3, a4, a3
+; RV32I-NEXT: beq a3, a1, .LBB6_2
; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: slt a4, a2, a1
+; RV32I-NEXT: slt a4, a3, a1
; RV32I-NEXT: beqz a4, .LBB6_3
; RV32I-NEXT: j .LBB6_4
; RV32I-NEXT: .LBB6_2:
-; RV32I-NEXT: sltu a4, a3, a0
+; RV32I-NEXT: sltu a4, a2, a0
; RV32I-NEXT: bnez a4, .LBB6_4
; RV32I-NEXT: .LBB6_3:
-; RV32I-NEXT: mv a3, a0
-; RV32I-NEXT: mv a2, a1
+; RV32I-NEXT: mv a2, a0
+; RV32I-NEXT: mv a3, a1
; RV32I-NEXT: .LBB6_4:
-; RV32I-NEXT: neg a0, a3
-; RV32I-NEXT: snez a1, a3
-; RV32I-NEXT: neg a2, a2
+; RV32I-NEXT: neg a0, a2
+; RV32I-NEXT: snez a1, a2
+; RV32I-NEXT: neg a2, a3
; RV32I-NEXT: sub a1, a2, a1
; RV32I-NEXT: ret
;
; RV32ZBB-LABEL: expanded_neg_inv_abs64:
; RV32ZBB: # %bb.0:
-; RV32ZBB-NEXT: snez a2, a0
-; RV32ZBB-NEXT: neg a3, a1
-; RV32ZBB-NEXT: sub a2, a3, a2
-; RV32ZBB-NEXT: neg a3, a0
-; RV32ZBB-NEXT: beq a2, a1, .LBB6_2
+; RV32ZBB-NEXT: neg a2, a0
+; RV32ZBB-NEXT: snez a3, a0
+; RV32ZBB-NEXT: neg a4, a1
+; RV32ZBB-NEXT: sub a3, a4, a3
+; RV32ZBB-NEXT: beq a3, a1, .LBB6_2
; RV32ZBB-NEXT: # %bb.1:
-; RV32ZBB-NEXT: slt a4, a2, a1
+; RV32ZBB-NEXT: slt a4, a3, a1
; RV32ZBB-NEXT: beqz a4, .LBB6_3
; RV32ZBB-NEXT: j .LBB6_4
; RV32ZBB-NEXT: .LBB6_2:
-; RV32ZBB-NEXT: sltu a4, a3, a0
+; RV32ZBB-NEXT: sltu a4, a2, a0
; RV32ZBB-NEXT: bnez a4, .LBB6_4
; RV32ZBB-NEXT: .LBB6_3:
-; RV32ZBB-NEXT: mv a3, a0
-; RV32ZBB-NEXT: mv a2, a1
+; RV32ZBB-NEXT: mv a2, a0
+; RV32ZBB-NEXT: mv a3, a1
; RV32ZBB-NEXT: .LBB6_4:
-; RV32ZBB-NEXT: neg a0, a3
-; RV32ZBB-NEXT: snez a1, a3
-; RV32ZBB-NEXT: neg a2, a2
+; RV32ZBB-NEXT: neg a0, a2
+; RV32ZBB-NEXT: snez a1, a2
+; RV32ZBB-NEXT: neg a2, a3
; RV32ZBB-NEXT: sub a1, a2, a1
; RV32ZBB-NEXT: ret
;
@@ -385,49 +385,49 @@ define i64 @expanded_neg_inv_abs64(i64 %x) {
define i64 @expanded_neg_inv_abs64_unsigned(i64 %x) {
; RV32I-LABEL: expanded_neg_inv_abs64_unsigned:
; RV32I: # %bb.0:
-; RV32I-NEXT: snez a2, a0
-; RV32I-NEXT: neg a3, a1
-; RV32I-NEXT: sub a2, a3, a2
-; RV32I-NEXT: neg a3, a0
-; RV32I-NEXT: beq a2, a1, .LBB7_2
+; RV32I-NEXT: neg a2, a0
+; RV32I-NEXT: snez a3, a0
+; RV32I-NEXT: neg a4, a1
+; RV32I-NEXT: sub a3, a4, a3
+; RV32I-NEXT: beq a3, a1, .LBB7_2
; RV32I-NEXT: # %bb.1:
-; RV32I-NEXT: sltu a4, a2, a1
+; RV32I-NEXT: sltu a4, a3, a1
; RV32I-NEXT: beqz a4, .LBB7_3
; RV32I-NEXT: j .LBB7_4
; RV32I-NEXT: .LBB7_2:
-; RV32I-NEXT: sltu a4, a3, a0
+; RV32I-NEXT: sltu a4, a2, a0
; RV32I-NEXT: bnez a4, .LBB7_4
; RV32I-NEXT: .LBB7_3:
-; RV32I-NEXT: mv a3, a0
-; RV32I-NEXT: mv a2, a1
+; RV32I-NEXT: mv a2, a0
+; RV32I-NEXT: mv a3, a1
; RV32I-NEXT: .LBB7_4:
-; RV32I-NEXT: neg a0, a3
-; RV32I-NEXT: snez a1, a3
-; RV32I-NEXT: neg a2, a2
+; RV32I-NEXT: neg a0, a2
+; RV32I-NEXT: snez a1, a2
+; RV32I-NEXT: neg a2, a3
; RV32I-NEXT: sub a1, a2, a1
; RV32I-NEXT: ret
;
; RV32ZBB-LABEL: expanded_neg_inv_abs64_unsigned:
; RV32ZBB: # %bb.0:
-; RV32ZBB-NEXT: snez a2, a0
-; RV32ZBB-NEXT: neg a3, a1
-; RV32ZBB-NEXT: sub a2, a3, a2
-; RV32ZBB-NEXT: neg a3, a0
-; RV32ZBB-NEXT: beq a2, a1, .LBB7_2
+; RV32ZBB-NEXT: neg a2, a0
+; RV32ZBB-NEXT: snez a3, a0
+; RV32ZBB-NEXT: neg a4, a1
+; RV32ZBB-NEXT: sub a3, a4, a3
+; RV32ZBB-NEXT: beq a3, a1, .LBB7_2
; RV32ZBB-NEXT: # %bb.1:
-; RV32ZBB-NEXT: sltu a4, a2, a1
+; RV32ZBB-NEXT: sltu a4, a3, a1
; RV32ZBB-NEXT: beqz a4, .LBB7_3
; RV32ZBB-NEXT: j .LBB7_4
; RV32ZBB-NEXT: .LBB7_2:
-; RV32ZBB-NEXT: sltu a4, a3, a0
+; RV32ZBB-NEXT: sltu a4, a2, a0
; RV32ZBB-NEXT: bnez a4, .LBB7_4
; RV32ZBB-NEXT: .LBB7_3:
-; RV32ZBB-NEXT: mv a3, a0
-; RV32ZBB-NEXT: mv a2, a1
+; RV32ZBB-NEXT: mv a2, a0
+; RV32ZBB-NEXT: mv a3, a1
; RV32ZBB-NEXT: .LBB7_4:
-; RV32ZBB-NEXT: neg a0, a3
-; RV32ZBB-NEXT: snez a1, a3
-; RV32ZBB-NEXT: neg a2, a2
+; RV32ZBB-NEXT: neg a0, a2
+; RV32ZBB-NEXT: snez a1, a2
+; RV32ZBB-NEXT: neg a2, a3
; RV32ZBB-NEXT: sub a1, a2, a1
; RV32ZBB-NEXT: ret
;
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/double-arith.ll b/llvm/test/CodeGen/RISCV/GlobalISel/double-arith.ll
index cb2037f5fb0271e..28dde9a3472c253 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/double-arith.ll
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/double-arith.ll
@@ -424,11 +424,11 @@ define double @fmsub_d(double %a, double %b, double %c) nounwind {
; RV32I-NEXT: mv s2, a2
; RV32I-NEXT: mv s3, a3
; RV32I-NEXT: mv a0, a4
-; RV32I-NEXT: lui a1, %hi(.LCPI12_0)
-; RV32I-NEXT: addi a1, a1, %lo(.LCPI12_0)
-; RV32I-NEXT: lw a2, 0(a1)
-; RV32I-NEXT: lw a3, 4(a1)
; RV32I-NEXT: mv a1, a5
+; RV32I-NEXT: lui a2, %hi(.LCPI12_0)
+; RV32I-NEXT: addi a3, a2, %lo(.LCPI12_0)
+; RV32I-NEXT: lw a2, 0(a3)
+; RV32I-NEXT: lw a3, 4(a3)
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: mv a4, a0
; RV32I-NEXT: lui a5, 524288
@@ -454,9 +454,9 @@ define double @fmsub_d(double %a, double %b, double %c) nounwind {
; RV64I-NEXT: sd s1, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: mv s1, a1
-; RV64I-NEXT: lui a0, %hi(.LCPI12_0)
-; RV64I-NEXT: ld a1, %lo(.LCPI12_0)(a0)
; RV64I-NEXT: mv a0, a2
+; RV64I-NEXT: lui a1, %hi(.LCPI12_0)
+; RV64I-NEXT: ld a1, %lo(.LCPI12_0)(a1)
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: li a1, -1
; RV64I-NEXT: slli a1, a1, 63
@@ -511,20 +511,20 @@ define double @fnmadd_d(double %a, double %b, double %c) nounwind {
; RV32I-NEXT: mv s0, a2
; RV32I-NEXT: mv s1, a3
; RV32I-NEXT: mv s2, a4
+; RV32I-NEXT: mv s3, a5
; RV32I-NEXT: lui a2, %hi(.LCPI13_0)
; RV32I-NEXT: addi a2, a2, %lo(.LCPI13_0)
-; RV32I-NEXT: lw s3, 0(a2)
-; RV32I-NEXT: lw s4, 4(a2)
-; RV32I-NEXT: mv s5, a5
-; RV32I-NEXT: mv a2, s3
-; RV32I-NEXT: mv a3, s4
+; RV32I-NEXT: lw s4, 0(a2)
+; RV32I-NEXT: lw s5, 4(a2)
+; RV32I-NEXT: mv a2, s4
+; RV32I-NEXT: mv a3, s5
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: mv s6, a0
; RV32I-NEXT: mv s7, a1
; RV32I-NEXT: mv a0, s2
-; RV32I-NEXT: mv a1, s5
-; RV32I-NEXT: mv a2, s3
-; RV32I-NEXT: mv a3, s4
+; RV32I-NEXT: mv a1, s3
+; RV32I-NEXT: mv a2, s4
+; RV32I-NEXT: mv a3, s5
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: mv a4, a0
; RV32I-NEXT: lui a5, 524288
@@ -556,14 +556,14 @@ define double @fnmadd_d(double %a, double %b, double %c) nounwind {
; RV64I-NEXT: sd s2, 16(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s3, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
+; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: lui a1, %hi(.LCPI13_0)
-; RV64I-NEXT: ld s1, %lo(.LCPI13_0)(a1)
-; RV64I-NEXT: mv s2, a2
-; RV64I-NEXT: mv a1, s1
+; RV64I-NEXT: ld s2, %lo(.LCPI13_0)(a1)
+; RV64I-NEXT: mv a1, s2
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: mv s3, a0
-; RV64I-NEXT: mv a0, s2
-; RV64I-NEXT: mv a1, s1
+; RV64I-NEXT: mv a0, s1
+; RV64I-NEXT: mv a1, s2
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: li a1, -1
; RV64I-NEXT: slli a2, a1, 63
@@ -625,20 +625,20 @@ define double @fnmadd_d_2(double %a, double %b, double %c) nounwind {
; RV32I-NEXT: mv a0, a2
; RV32I-NEXT: mv a1, a3
; RV32I-NEXT: mv s2, a4
+; RV32I-NEXT: mv s3, a5
; RV32I-NEXT: lui a2, %hi(.LCPI14_0)
; RV32I-NEXT: addi a2, a2, %lo(.LCPI14_0)
-; RV32I-NEXT: lw s3, 0(a2)
-; RV32I-NEXT: lw s4, 4(a2)
-; RV32I-NEXT: mv s5, a5
-; RV32I-NEXT: mv a2, s3
-; RV32I-NEXT: mv a3, s4
+; RV32I-NEXT: lw s4, 0(a2)
+; RV32I-NEXT: lw s5, 4(a2)
+; RV32I-NEXT: mv a2, s4
+; RV32I-NEXT: mv a3, s5
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: mv s6, a0
; RV32I-NEXT: mv s7, a1
; RV32I-NEXT: mv a0, s2
-; RV32I-NEXT: mv a1, s5
-; RV32I-NEXT: mv a2, s3
-; RV32I-NEXT: mv a3, s4
+; RV32I-NEXT: mv a1, s3
+; RV32I-NEXT: mv a2, s4
+; RV32I-NEXT: mv a3, s5
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: mv a4, a0
; RV32I-NEXT: lui a5, 524288
@@ -670,14 +670,14 @@ define double @fnmadd_d_2(double %a, double %b, double %c) nounwind {
; RV64I-NEXT: sd s3, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a0
; RV64I-NEXT: mv a0, a1
+; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: lui a1, %hi(.LCPI14_0)
-; RV64I-NEXT: ld s1, %lo(.LCPI14_0)(a1)
-; RV64I-NEXT: mv s2, a2
-; RV64I-NEXT: mv a1, s1
+; RV64I-NEXT: ld s2, %lo(.LCPI14_0)(a1)
+; RV64I-NEXT: mv a1, s2
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: mv s3, a0
-; RV64I-NEXT: mv a0, s2
-; RV64I-NEXT: mv a1, s1
+; RV64I-NEXT: mv a0, s1
+; RV64I-NEXT: mv a1, s2
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: li a1, -1
; RV64I-NEXT: slli a2, a1, 63
@@ -799,11 +799,11 @@ define double @fnmsub_d(double %a, double %b, double %c) nounwind {
; RV32I-NEXT: mv s0, a2
; RV32I-NEXT: mv s1, a3
; RV32I-NEXT: mv s2, a4
+; RV32I-NEXT: mv s3, a5
; RV32I-NEXT: lui a2, %hi(.LCPI17_0)
; RV32I-NEXT: addi a3, a2, %lo(.LCPI17_0)
; RV32I-NEXT: lw a2, 0(a3)
; RV32I-NEXT: lw a3, 4(a3)
-; RV32I-NEXT: mv s3, a5
; RV32I-NEXT: call __adddf3
; RV32I-NEXT: lui a2, 524288
; RV32I-NEXT: xor a1, a1, a2
@@ -827,9 +827,9 @@ define double @fnmsub_d(double %a, double %b, double %c) nounwind {
; RV64I-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
; RV64I-NEXT: sd s1, 8(sp) # 8-byte Folded Spill
; RV64I-NEXT: mv s0, a1
+; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: lui a1, %hi(.LCPI17_0)
; RV64I-NEXT: ld a1, %lo(.LCPI17_0)(a1)
-; RV64I-NEXT: mv s1, a2
; RV64I-NEXT: call __adddf3
; RV64I-NEXT: li a1, -1
; RV64I-NEXT: slli a1, a1, 63
@@ -880,1...
[truncated]
Additional context because the review description was getting rather long...

I believe this fixes the same issue as @BeMg's #125468. The root cause discussion on that was useful for confirming my own understanding, thanks! I also benefited from offline conversation with both @lukel97 and @topperc when thinking about how to approach this.

This isn't a strict improvement. There are a couple of cases in the tests where we actually see increased spills, but a) these are outweighed by the improvements and b) they represent cases where -mcpu=something-with-a-model would already see regressions today. I think these are easily ignorable, but I have only skimmed the tests, so please point out anything you find particularly concerning.

This parallels what is done in X86, though the code structure differs a bit. There, both the default (which uses the Sandy Bridge model) and GenericX86Model use values for MicroOpBufferSize which imply out-of-order designs. I deliberately excluded that semantic change from this diff.

(Edit: deleted an incorrect comment about the math function expansion)
Linking issue #107532
Thanks! As before, I support such a change (and I was actually going to do the same thing).
The default model for mature targets:
- AArch64 -> Cortex A510
- ARM -> Cortex A8
- X86 -> Sandy Bridge
- PPC -> G3
None of them are using LLVM's default configuration.
@@ -88,21 +88,30 @@ class RISCVTuneProcessorModel<string n,

defvar GenericTuneFeatures = [TuneOptimizedNF2SegmentLoadStore];

// Adjust the default cost model to enable all heuristics, not just latency
I think it is worth a standalone file, just like in #120712. As discussed, we may add a generic in-order scheduling model.
LGTM. For most cases, MicroOpBufferSize = 1 will cover the release-pending-queue approach.
LGTM
I just finished running SPEC CPU 2017 on rva22u64_v before and after this change: https://lnt.lukelau.me/db_default/v4/nts/191?show_delta=yes&show_previous=yes&show_stddev=yes&show_mad=yes&show_all=yes&show_all_samples=yes&show_sample_counts=yes&show_small_diff=yes&num_comparison_runs=0&test_filter=&test_min_value_filter=&aggregation_fn=min&MW_confidence_lv=0.05&compare_to=196&submit=Update

The TL;DR is that while 510.parest_r and gcc_r saw a 3.5% improvement, there are regressions on 8 other benchmarks ranging from 1-7%. Note: the 20% perlbench result in that first run is misleading due to noise, see here for a more recent run: https://lnt.lukelau.me/db_default/v4/nts/197?show_delta=yes&show_previous=yes&show_stddev=yes&show_mad=yes&show_all=yes&show_all_samples=yes&show_sample_counts=yes&num_comparison_runs=0&test_filter=&test_min_value_filter=&aggregation_fn=min&MW_confidence_lv=0.05&compare_to=198&submit=Update

The diff in the number of static spills + reloads doesn't look like much, so I presume the performance impact isn't coming from register pressure but from scheduling changes. Did the swap from NoSchedModel to SchedMachineModel change some default latencies?
…m#126608) This change introduces a default schedule model for the RISCV target which leaves everything unchanged except the MicroOpBufferSize. The default value of this flag in NoSched is 0. Both configurations represent in order cores (i.e. no reorder window), the difference between them comes down to whether heuristics other than latency are allowed to apply. (Implementation details below) I left the processor models which explicitly set MicroOpBufferSize=0 unchanged in this patch, but strongly suspect we should change those too. Honestly, I think the LLVM wide default for this flag should be changed, but don't have the energy to manage the updates for all targets. Implementation wise, the effect of this change is that schedule units which are ready to run *except that* one of their predecessors may not have completed yet are added to the Available list, not the Pending one. The result of this is that it becomes possible to chose to schedule a node before it's ready cycle if the heuristics prefer. This is essentially chosing to insert a resource stall instead of e.g. increasing register pressure. Note that I was initially concerned there might be a correctness aspect (as in some kind of exposed pipeline design), but the generic scheduler doesn't seem to know how to insert noop instructions. Without that, a program wouldn't be guaranteed to schedule on an exposed pipeline depending on the program and schedule model in question. The effect of this is that we sometimes prefer register pressure in codegen results. This is mostly churn (or small wins) on scalar because we have many more registers, but is of major importance on vector - particularly high LMUL - because we effectively have many fewer registers and the relative cost of spilling is much higher. This is a significant improvement on high LMUL code quality for default rva23u configurations - or any non -mcpu vector configuration for that matter. Fixes llvm#107532
No, I don't think so. But we may need to define sched classes for … Besides, you may try it with llvm-project/llvm/lib/CodeGen/MachineScheduler.cpp, lines 3544 to 3547 (in 6fb1d40).
I think this is the part that is mainly affected, apart from cycles.
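For readers following along, here is a self-contained paraphrase of the kind of stall-cycle comparison being pointed at. It only illustrates the concept; the names and values are invented for the example, and it is not the code at the cited MachineScheduler.cpp lines.

#include <cstdio>

// Toy candidate: a name plus the cycle at which its operands are ready.
struct ToyCand {
  const char *Name;
  unsigned ReadyCycle;
};

// How many cycles we would stall if this candidate issued at CurrCycle.
// (Analogous in spirit to a "latency stall cycles" query.)
unsigned stallCycles(const ToyCand &C, unsigned CurrCycle) {
  return C.ReadyCycle > CurrCycle ? C.ReadyCycle - CurrCycle : 0;
}

// A tryLess-style tie-breaker: the candidate with fewer stall cycles wins
// this particular comparison; other heuristics earlier or later in the chain
// may still decide the pick instead.
const ToyCand *preferFewerStalls(const ToyCand &A, const ToyCand &B,
                                 unsigned CurrCycle) {
  unsigned SA = stallCycles(A, CurrCycle);
  unsigned SB = stallCycles(B, CurrCycle);
  if (SA == SB)
    return nullptr; // this heuristic expresses no preference
  return SA < SB ? &A : &B;
}

int main() {
  ToyCand NotReady{"use of an in-flight vector load", 6};
  ToyCand Ready{"independent scalar add", 3};
  if (const ToyCand *C = preferFewerStalls(NotReady, Ready, /*CurrCycle=*/3))
    std::printf("stall comparison prefers: %s\n", C->Name);
  return 0;
}

In the real candidate comparison this is only one step in a longer chain of heuristics, so a register-pressure preference can still end up winning, which is the trade-off discussed in this review.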
The nightly run for rva22u64 (scalar) just came in and is unfortunately also seeing significant regressions: https://lnt.lukelau.me/db_default/v4/nts/195
I'm going to revert while investigating. The regression was unexpected.
…ses (llvm#126608)" and follow up commit. This reverts commit 9cc8442. This reverts commit 859c871. A performance regression was reported on the original review. There appears to have been an unexpected interaction here. Reverting during investigation.