
[RISCV][TII] Add and use new hook to optimize/canonicalize instructions after MachineCopyPropagation #137973


Merged: 25 commits, May 8, 2025

Conversation

@asb (Contributor) commented Apr 30, 2025

PR #136875 was posted as a draft PR that handled a subset of these cases using the CompressPat mechanism. The consensus from that discussion (and a conclusion I agree with) is that it would be beneficial to do this optimisation earlier, and in a way that isn't limited to cases that can be handled by instruction compression.

The most common source of instructions that can be optimized/canonicalized in this way is tail duplication in MachineBlockPlacement followed by machine copy propagation. For RISC-V, choosing a more canonical instruction allows it to be compressed when it couldn't be before, and it may also make other MI-level optimisations easier.

This modifies ~910 instructions across an llvm-test-suite build including SPEC2017, targeting rva22u64. Looking at the diff, there appears to be room for eliminating instructions or propagating further after this change.

Coverage of instructions is based on observations from a script written to find redundant or improperly canonicalized instructions (though I aim to support all instructions in a 'group' at once, e.g. MUL* even if I only saw some variants of MUL in practice).
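The script itself isn't included in the PR; a minimal sketch of the idea in Python might look like the following, where the rule list, regexes, and function names are illustrative assumptions covering only a few of the patterns, not the PR's actual tooling:

```python
import re

# Illustrative subset of "improperly canonicalized" patterns: instructions
# that read the zero register and therefore have a simpler canonical form.
# (Hypothetical rules for a sketch, not the PR's actual pattern list.)
RULES = [
    # [x]or/sub rd, rs, zero  =>  addi rd, rs, 0
    (re.compile(r"^(?:or|xor|sub)\s+(\w+),\s*(\w+),\s*zero$"),
     lambda m: f"addi {m.group(1)}, {m.group(2)}, 0"),
    # shift-immediate of zero =>  li rd, 0
    (re.compile(r"^(?:slli|srli|srai)w?\s+(\w+),\s*zero,\s*\d+$"),
     lambda m: f"li {m.group(1)}, 0"),
    # mul* with a zero input  =>  li rd, 0
    (re.compile(r"^mul\w*\s+(\w+),\s*(?:zero,\s*\w+|\w+,\s*zero)$"),
     lambda m: f"li {m.group(1)}, 0"),
]

def find_canonicalizable(disassembly):
    """Yield (instruction, suggested canonical form) pairs."""
    for line in disassembly.splitlines():
        text = line.strip()
        for pattern, suggest in RULES:
            m = pattern.match(text)
            if m:
                yield text, suggest(m)
                break

asm = """\
srliw a5, zero, 31
and a4, a3, a1
or a0, a1, zero
mul a2, zero, a7
"""
hits = list(find_canonicalizable(asm))
for original, better in hits:
    print(f"{original:22} -> {better}")
```

Running a script along these lines over an objdump of the test-suite binaries is one way to arrive at the kind of instruction counts quoted above.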


The obvious thing to bikeshed is the name of the hook. I do worry that, based on the name, it might be assumed to be a more generic form of other hooks like optimizeCompareInstr or optimizeSelect. Alternate names are welcome. optimizeMutatedInstruction might better capture the idea that this is intended to be run after you mutate the operands of an instruction, in order to optimize/canonicalize it to a "better" one if possible.

This ended up covering rather a lot more instructions than I originally thought, so I'd appreciate extra eyes checking there are no silly mistakes in the matched patterns.

@llvmbot (Member) commented Apr 30, 2025

@llvm/pr-subscribers-backend-risc-v

@llvm/pr-subscribers-llvm-regalloc

Author: Alex Bradbury (asb)

Changes



Patch is 26.46 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/137973.diff

5 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/TargetInstrInfo.h (+10)
  • (modified) llvm/lib/CodeGen/MachineCopyPropagation.cpp (+6)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfo.cpp (+218)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfo.h (+2)
  • (added) llvm/test/CodeGen/RISCV/machine-copyprop-optimizeinstr.mir (+685)
diff --git a/llvm/include/llvm/CodeGen/TargetInstrInfo.h b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
index 0aac02d3dc786..3b33989d28c77 100644
--- a/llvm/include/llvm/CodeGen/TargetInstrInfo.h
+++ b/llvm/include/llvm/CodeGen/TargetInstrInfo.h
@@ -510,6 +510,16 @@ class TargetInstrInfo : public MCInstrInfo {
     return false;
   }
 
+  /// If possible, converts the instruction to a more 'optimized'/canonical
+  /// form. Returns true if the instruction was modified.
+  ///
+  /// This function is only called after register allocation. The MI will be
+  /// modified in place. This is called by passes such as
+  /// MachineCopyPropagation, whose mutation of the MI operands may
+  /// expose opportunities to convert the instruction to a simpler form (e.g.
+  /// a load of 0).
+  virtual bool optimizeInstruction(MachineInstr &MI) const { return false; }
+
   /// A pair composed of a register and a sub-register index.
   /// Used to give some type checking when modeling Reg:SubReg.
   struct RegSubRegPair {
diff --git a/llvm/lib/CodeGen/MachineCopyPropagation.cpp b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
index ff75b87b23128..6f1e6e46eef8b 100644
--- a/llvm/lib/CodeGen/MachineCopyPropagation.cpp
+++ b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
@@ -867,6 +867,12 @@ void MachineCopyPropagation::forwardUses(MachineInstr &MI) {
          make_range(Copy->getIterator(), std::next(MI.getIterator())))
       KMI.clearRegisterKills(CopySrcReg, TRI);
 
+    // Attempt to canonicalize/optimize the instruction now that its
+    // arguments have been mutated.
+    if (TII->optimizeInstruction(MI)) {
+      LLVM_DEBUG(dbgs() << "MCP: After optimizeInstruction: " << MI << "\n");
+    }
+
     ++NumCopyForwards;
     Changed = true;
   }
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
index c4a2784263af0..3e2549d4b17bd 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp
@@ -2344,6 +2344,21 @@ static unsigned getSHXADDShiftAmount(unsigned Opc) {
   }
 }
 
+// Returns the shift amount from a SHXADD.UW instruction. Returns 0 if the
+// instruction is not a SHXADD.UW.
+static unsigned getSHXADDUWShiftAmount(unsigned Opc) {
+  switch (Opc) {
+  default:
+    return 0;
+  case RISCV::SH1ADD_UW:
+    return 1;
+  case RISCV::SH2ADD_UW:
+    return 2;
+  case RISCV::SH3ADD_UW:
+    return 3;
+  }
+}
+
 // Look for opportunities to combine (sh3add Z, (add X, (slli Y, 5))) into
 // (sh3add (sh2add Y, Z), X).
 static bool getSHXADDPatterns(const MachineInstr &Root,
@@ -3850,6 +3865,209 @@ MachineInstr *RISCVInstrInfo::commuteInstructionImpl(MachineInstr &MI,
   return TargetInstrInfo::commuteInstructionImpl(MI, NewMI, OpIdx1, OpIdx2);
 }
 
+bool RISCVInstrInfo::optimizeInstruction(MachineInstr &MI) const {
+  switch (MI.getOpcode()) {
+  default:
+    break;
+  case RISCV::OR:
+  case RISCV::XOR:
+      // Normalize:
+      // [x]or rd, zero, rs => [x]or rd, rs, zero
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MachineOperand MO1 = MI.getOperand(1);
+        MI.removeOperand(1);
+        MI.addOperand(MO1);
+      }
+      // [x]or rd, rs, zero => addi rd, rs, 0
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.getOperand(2).ChangeToImmediate(0);
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      // xor rd, rs, rs => li rd, 0
+      if (MI.getOpcode() == RISCV::XOR &&
+          MI.getOperand(1).getReg() == MI.getOperand(2).getReg()) {
+        MI.getOperand(1).setReg(RISCV::X0);
+        MI.getOperand(2).ChangeToImmediate(0);
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::ADDW:
+      // Normalize:
+      // addw rd, zero, rs => addw rd, rs, zero
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MachineOperand MO1 = MI.getOperand(1);
+        MI.removeOperand(1);
+        MI.addOperand(MO1);
+      }
+      // addw rd, rs, zero => addiw rd, rs, 0
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.getOperand(2).ChangeToImmediate(0);
+        MI.setDesc(get(RISCV::ADDIW));
+        return true;
+      }
+      break;
+  case RISCV::SUB:
+  case RISCV::PACK:
+  case RISCV::PACKW:
+      // sub rd, rs, zero => addi rd, rs, 0
+      // pack[w] rd, rs, zero => addi rd, rs, zero
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.getOperand(2).ChangeToImmediate(0);
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::SUBW:
+      // subw rd, rs, zero => addiw rd, rs, 0
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.getOperand(2).ChangeToImmediate(0);
+        MI.setDesc(get(RISCV::ADDIW));
+        return true;
+      }
+      break;
+  case RISCV::SH1ADD:
+  case RISCV::SH1ADD_UW:
+  case RISCV::SH2ADD:
+  case RISCV::SH2ADD_UW:
+  case RISCV::SH3ADD:
+  case RISCV::SH3ADD_UW:
+      // shNadd[.uw] rd, zero, rs => addi rd, rs, 0
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MI.removeOperand(1);
+        MI.addOperand(MachineOperand::CreateImm(0));
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      // shNadd[.uw] rd, rs, zero => slli[.uw] rd, rs, N
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.removeOperand(2);
+        unsigned Opc = MI.getOpcode();
+        if (Opc == RISCV::SH1ADD_UW || Opc == RISCV::SH2ADD_UW ||
+            Opc == RISCV::SH3ADD_UW) {
+          MI.addOperand(MachineOperand::CreateImm(getSHXADDUWShiftAmount(Opc)));
+          MI.setDesc(get(RISCV::SLLI_UW));
+          return true;
+        }
+        MI.addOperand(MachineOperand::CreateImm(getSHXADDShiftAmount(Opc)));
+        MI.setDesc(get(RISCV::SLLI));
+        return true;
+      }
+      break;
+  case RISCV::ANDI:
+      // andi rd, zero, C => li rd, 0
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MI.getOperand(2).setImm(0);
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::AND:
+  case RISCV::MUL:
+  case RISCV::MULH:
+  case RISCV::MULHSU:
+  case RISCV::MULHU:
+  case RISCV::MULW:
+      // and rd, rs, zero => li rd, 0
+      // and rd, zero, rs => li rd, 0
+      // mul* rd, rs, zero => li rd, 0
+      // mul* rd, zero, rs => li rd, 0
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MI.removeOperand(2);
+        MI.addOperand(MachineOperand::CreateImm(0));
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      if (MI.getOperand(2).getReg() == RISCV::X0) {
+        MI.removeOperand(1);
+        MI.addOperand(MachineOperand::CreateImm(0));
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::SLLI:
+  case RISCV::SRLI:
+  case RISCV::SRAI:
+  case RISCV::SLLIW:
+  case RISCV::SRLIW:
+  case RISCV::SRAIW:
+  case RISCV::SLLI_UW:
+      // shiftimm rd, zero, N => li rd, 0
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MI.getOperand(2).setImm(0);
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::ORI:
+  case RISCV::XORI:
+      // [x]ori rd, zero, N => li rd, N
+      if (MI.getOperand(1).getReg() == RISCV::X0) {
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::SLTIU:
+      // seqz rd, zero => li rd, 1
+      if (MI.getOperand(1).getReg() == RISCV::X0 &&
+          MI.getOperand(2).getImm() == 1) {
+        MI.setDesc(get(RISCV::ADDI));
+        return true;
+      }
+      break;
+  case RISCV::SLTU:
+  case RISCV::ADD_UW:
+    // snez rd, zero => li rd, 0
+    // zext.w rd, zero => li rd, 0
+    if (MI.getOperand(1).getReg() == RISCV::X0 &&
+        MI.getOperand(2).getReg() == RISCV::X0) {
+      MI.getOperand(2).ChangeToImmediate(0);
+      MI.setDesc(get(RISCV::ADDI));
+      return true;
+    }
+    // add.uw rd, zero, rs => addi rd, rs, 0
+    // (zero-extending zero yields zero, so the result is simply rs)
+    if (MI.getOpcode() == RISCV::ADD_UW &&
+        MI.getOperand(1).getReg() == RISCV::X0) {
+      MI.removeOperand(1);
+      MI.addOperand(MachineOperand::CreateImm(0));
+      MI.setDesc(get(RISCV::ADDI));
+      return true;
+    }
+    break;
+  case RISCV::SEXT_H:
+  case RISCV::SEXT_B:
+  case RISCV::ZEXT_H_RV32:
+  case RISCV::ZEXT_H_RV64:
+    // sext.[hb] rd, zero => li rd, 0
+    // zext.h rd, zero => li rd, 0
+    if (MI.getOperand(1).getReg() == RISCV::X0) {
+      MI.addOperand(MachineOperand::CreateImm(0));
+      MI.setDesc(get(RISCV::ADDI));
+      return true;
+    }
+    break;
+  case RISCV::SLL:
+  case RISCV::SRL:
+  case RISCV::SRA:
+  case RISCV::SLLW:
+  case RISCV::SRLW:
+  case RISCV::SRAW:
+    // shift rd, zero, rs => li rd, 0
+    if (MI.getOperand(1).getReg() == RISCV::X0) {
+      MI.getOperand(2).ChangeToImmediate(0);
+      MI.setDesc(get(RISCV::ADDI));
+      return true;
+    }
+    break;
+  case RISCV::MIN:
+  case RISCV::MINU:
+  case RISCV::MAX:
+  case RISCV::MAXU:
+    // min|max rd, rs, rs => addi rd, rs, 0
+    if (MI.getOperand(1).getReg() == MI.getOperand(2).getReg()) {
+      MI.getOperand(2).ChangeToImmediate(0);
+      MI.setDesc(get(RISCV::ADDI));
+      return true;
+    }
+    break;
+  }
+  return false;
+}
+
 #undef CASE_RVV_OPCODE_UNMASK_LMUL
 #undef CASE_RVV_OPCODE_MASK_LMUL
 #undef CASE_RVV_OPCODE_LMUL
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfo.h b/llvm/lib/Target/RISCV/RISCVInstrInfo.h
index 67e457d64f6e3..ccca9b7120e02 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfo.h
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfo.h
@@ -242,6 +242,8 @@ class RISCVInstrInfo : public RISCVGenInstrInfo {
                                        unsigned OpIdx1,
                                        unsigned OpIdx2) const override;
 
+  bool optimizeInstruction(MachineInstr &MI) const override;
+
   MachineInstr *convertToThreeAddress(MachineInstr &MI, LiveVariables *LV,
                                       LiveIntervals *LIS) const override;
 
diff --git a/llvm/test/CodeGen/RISCV/machine-copyprop-optimizeinstr.mir b/llvm/test/CodeGen/RISCV/machine-copyprop-optimizeinstr.mir
new file mode 100644
index 0000000000000..ed4e7cb8df6b5
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/machine-copyprop-optimizeinstr.mir
@@ -0,0 +1,685 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -o - %s -mtriple=riscv64 -run-pass=machine-cp -mcp-use-is-copy-instr | FileCheck %s
+
+---
+name: or1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: or1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = OR renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: or2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: or2
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = OR $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: xor1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: xor1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = XOR renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: xor2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: xor2
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = XOR $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: addw1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: addw1
+    ; CHECK: renamable $x10 = ADDIW $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x0
+    renamable $x10 = ADDW renamable $x11, $x12
+    PseudoRET implicit $x10
+...
+---
+name: addw2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: addw2
+    ; CHECK: renamable $x10 = ADDIW $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x0
+    renamable $x10 = ADDW $x12, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sub
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sub
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SUB renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: pack
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: pack
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = PACK renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: packw
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: packw
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = PACKW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: subw
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: subw
+    ; CHECK: renamable $x10 = ADDIW $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SUBW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh1add1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh1add1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH1ADD $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh1add2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh1add2
+    ; CHECK: renamable $x10 = SLLI $x12, 1
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH1ADD renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh1add.uw1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh1add.uw1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH1ADD_UW $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh1add.uw2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh1add.uw2
+    ; CHECK: renamable $x10 = SLLI_UW $x12, 1
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH1ADD_UW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh2add1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh2add1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH2ADD $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh2add2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh2add2
+    ; CHECK: renamable $x10 = SLLI $x12, 2
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH2ADD renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh2add.uw1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh2add.uw1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH2ADD_UW $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh2add.uw2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh2add.uw2
+    ; CHECK: renamable $x10 = SLLI_UW $x12, 2
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH2ADD_UW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh3add1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh3add1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH3ADD $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh3add2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh3add2
+    ; CHECK: renamable $x10 = SLLI $x12, 3
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH3ADD renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: sh3add.uw1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh3add.uw1
+    ; CHECK: renamable $x10 = ADDI $x12, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH3ADD_UW $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: sh3add.uw2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: sh3add.uw2
+    ; CHECK: renamable $x10 = SLLI_UW $x12, 3
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = SH3ADD_UW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: andi
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: andi
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x0
+    renamable $x10 = ANDI renamable $x11, 13
+    PseudoRET implicit $x10
+...
+---
+name: and1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: and1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = AND renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: and2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: and2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = AND $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: mul1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mul1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MUL renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: mul2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mul2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MUL $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: mulh1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulh1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULH renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: mulh2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulh2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULH $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: mulhsu1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulhsu1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULHSU renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: mulhsu2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulhsu2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULHSU $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: mulhu1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulhu1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULHU renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: mulhu2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulhu2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULHU $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: mulw1
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulw1
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULW renamable $x11, $x0
+    PseudoRET implicit $x10
+...
+---
+name: mulw2
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: mulw2
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x12
+    renamable $x10 = MULW $x0, renamable $x11
+    PseudoRET implicit $x10
+...
+---
+name: slli
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: slli
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x0
+    renamable $x10 = SLLI renamable $x11, 13
+    PseudoRET implicit $x10
+...
+---
+name: srli
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: srli
+    ; CHECK: renamable $x10 = ADDI $x0, 0
+    ; CHECK-NEXT: PseudoRET implicit $x10
+    renamable $x11 = COPY $x0
+    renamable $x10 = SRLI renamable $x11, 13
+    PseudoRET implicit $x10
+...
+---
+name: srai
+body: |
+  bb.0:
+    ; CHECK-LABEL: name: srai
+    ; CHECK: renamable $x10 = A...
[truncated]

@asb asb changed the title [RISCV][TII] Add and use new hook fo optimize/canonicalize instructions after MachineCopyPropagation [RISCV][TII] Add and use new hook to optimize/canonicalize instructions after MachineCopyPropagation Apr 30, 2025
/// MachineCopyPropagation, where their mutation of the MI operands may
/// expose opportunities to convert the instruction to a simpler form (e.g.
/// a load of 0).
virtual bool optimizeInstruction(MachineInstr &MI) const { return false; }
A Contributor commented:

Name is way too general for a very narrowly applied optimization. Can't this just go in a separate post-RA pass?

A Collaborator commented:

@asb Can you say a bit about why this needs to be in MCP? As opposed to just after MCP? Does doing this in the process of copy propagation expose additional copies? (Not implausible, but do you actually see this?)

A Collaborator commented:

The problem this patch is solving is probably not unique to RISC-V. Having the ability for other targets to do this type of canonicalization seems like a good idea.

Are we suggesting a separate target independent pass using this hook? Or a target specific pass?

@asb (Author) commented May 1, 2025:

Craig's comment captures well why I looked to do this in MCP. It seems like a transformation that's generally useful to other targets, and by adding and using a TII hook it's easy to reuse in any other target-specific or generic passes that might benefit (though, as I say in the summary, all of the instances of these instructions I can find come from MCP). It could be done in another pass, but having MCP "check its own work" seemed cleaner than adding yet another pass.

This specific change doesn't induce additional copy propagation, but looking at the generated diffs there are cases where you would expect copy propagation to be able to run again. I nod to this in the PR description above, but as this change alone is an improvement over the status quo, I leave that to a follow-up.

e.g. this snippet from imagick:

@@ -2753,11 +2753,11 @@
        bnez    s11, .LBB0_353
 .LBB0_357:                              #   in Loop: Header=BB0_310 Depth=1
        li      a2, 0
-       srliw   a5, zero, 31
-       slli    a3, zero, 33
+       li      a5, 0
+       li      a3, 0
        slli    a5, a5, 15
        srli    a3, a3, 56
-       and     a4, zero, a1
+       li      a4, 0
        bgeu    s8, a3, .LBB0_354

The slli/srli should be removable (canonicalised to loadimm of 0, and then redundant with the previous li). I haven't yet stepped through to see if this is a matter of doing another iteration in MCP or if there's some other barrier that prevents current MCP from handling the case.
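As a quick sanity check (mine, not part of the PR) that the whole chain in that snippet folds to zero once the source register holds zero, modelling the RV64 shifts on 64-bit values:

```python
# Model the bb.2 chain from the snippet with x15 = 0 (RV64: 64-bit
# registers; srliw shifts the low 32 bits). Every result is zero, so the
# slli/srli/and should all be foldable to loads of immediate 0.
MASK64 = (1 << 64) - 1

x15 = 0                          # $x15 = ADDI $x0, 0 after copy propagation
a5 = (x15 & 0xFFFFFFFF) >> 31    # srliw a5, zero, 31
a3 = (x15 << 33) & MASK64        # slli  a3, zero, 33
a5 = (a5 << 15) & MASK64         # slli  a5, a5, 15
a3 = a3 >> 56                    # srli  a3, a3, 56
a4 = x15 & 0xDEADBEEF            # and   a4, zero, a1 (a1 value arbitrary)
print(a5, a3, a4)
```

Every intermediate is zero regardless of the other register values, which is why a further canonicalization/propagation round should be able to delete the whole chain.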

A Contributor commented:

This is something that can constant fold, but didn't earlier? At this point I would hope that would have been cleaned up long before this.

A Collaborator commented:

This specific example does look weird. From offline conversation, I'd thought this was mostly catching cleanup after tail duplication, but this looks like some kind of loop structure?

@asb (Author) commented:

Oh, I see the cause of confusion: this is indeed cleaning up after tail duplication, specifically the use of the TailDuplicator utility class (llvm/lib/CodeGen/TailDuplicator.cpp) in MachineBlockPlacement.

Here is a roughly reduced example representing the above snippet:

; ModuleID = '<stdin>'
source_filename = "<stdin>"
target datalayout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128"
target triple = "riscv64-unknown-linux-gnu"

define i64 @ham(i1 %arg, i32 %arg1) {
bb:
  br label %bb2

bb2:                                              ; preds = %bb15, %bb
  %and = and i32 0, 8388607
  br i1 %arg, label %bb4, label %bb3

bb3:                                              ; preds = %bb2
  br label %bb4

bb4:                                              ; preds = %bb3, %bb2
  %phi = phi i32 [ %arg1, %bb3 ], [ 0, %bb2 ]
  %lshr = lshr i32 %phi, 16
  %and5 = and i32 %lshr, 32768
  %lshr6 = lshr i32 %phi, 23
  %and7 = and i32 %lshr6, 255
  %and8 = and i32 %phi, 8388607
  %icmp = icmp ult i32 %and7, 113
  br i1 %icmp, label %bb9, label %bb10

bb9:                                              ; preds = %bb4
  %or = or i32 %and8, %and5
  %trunc = trunc i32 %or to i16
  br label %bb15

bb10:                                             ; preds = %bb4
  br i1 %arg, label %bb11, label %bb13

bb11:                                             ; preds = %bb10
  %trunc12 = trunc i32 %and5 to i16
  br label %bb15

bb13:                                             ; preds = %bb10
  %trunc14 = trunc i32 %and8 to i16
  br label %bb15

bb15:                                             ; preds = %bb13, %bb11, %bb9
  %phi16 = phi i16 [ %trunc, %bb9 ], [ %trunc12, %bb11 ], [ %trunc14, %bb13 ]
  %trunc17 = trunc i16 %phi16 to i8
  store i8 %trunc17, ptr null, align 1
  br label %bb2
}

If you run that through llc -O3 you'll see the tail duplication happening as part of MachineBlockPlacement and then MCP runs.

So you have a block:

bb.4.bb4:
; predecessors: %bb.3, %bb.2
  successors: %bb.5(0x40000000), %bb.6(0x40000000); %bb.5(50.00%), %bb.6(50.00%)
  liveins: $x10, $x11, $x12, $x13, $x15
  renamable $x14 = SRLIW renamable $x15, 31
  renamable $x16 = SLLI renamable $x15, 33
  renamable $x14 = SLLI killed renamable $x14, 15
  renamable $x16 = SRLI killed renamable $x16, 56
  renamable $x15 = AND killed renamable $x15, renamable $x12
  BLTU renamable $x13, killed renamable $x16, %bb.6

After tail duplication is applied in MBP you get:

bb.2:
; predecessors: %bb.1
  successors: %bb.5(0x40000000), %bb.6(0x40000000); %bb.5(50.00%), %bb.6(50.00%)
  liveins: $x10, $x11, $x12, $x13
  $x15 = ADDI $x0, 0
  renamable $x14 = SRLIW renamable $x15, 31
  renamable $x16 = SLLI renamable $x15, 33
  renamable $x14 = SLLI killed renamable $x14, 15
  renamable $x16 = SRLI killed renamable $x16, 56
  renamable $x15 = AND killed renamable $x15, renamable $x12
  BGEU renamable $x13, killed renamable $x16, %bb.5
  PseudoBR %bb.6

bb.3.bb3:
; predecessors: %bb.1
  successors: %bb.5(0x40000000), %bb.6(0x40000000); %bb.5(50.00%), %bb.6(50.00%)
  liveins: $x10, $x11, $x12, $x13
  $x15 = ADDI renamable $x11, 0
  renamable $x14 = SRLIW renamable $x15, 31
  renamable $x16 = SLLI renamable $x15, 33
  renamable $x14 = SLLI killed renamable $x14, 15
  renamable $x16 = SRLI killed renamable $x16, 56
  renamable $x15 = AND killed renamable $x15, renamable $x12
  BGEU renamable $x13, killed renamable $x16, %bb.5

bb.2 can obviously be cleaned up, which MCP does to a certain extent.
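For illustration, here's roughly what bb.2 could look like once MCP has propagated the known-zero $x15 into its uses and the new hook has rewritten the now-degenerate instructions (a hypothetical sketch - the exact output depends on which opcodes the hook covers and what MCP can subsequently erase):

```
bb.2:
; predecessors: %bb.1
  liveins: $x10, $x11, $x12, $x13
  renamable $x14 = ADDI $x0, 0   ; SRLIW/SLLI of a known-zero value folds to li 0
  renamable $x16 = ADDI $x0, 0   ; SLLI+SRLI chain of a known-zero value folds to li 0
  renamable $x15 = ADDI $x0, 0   ; AND with a known-zero operand folds to li 0
  BGEU renamable $x13, killed renamable $x16, %bb.5
  PseudoBR %bb.6
```

In practice later cleanup could go further still (e.g. the comparison against a known-zero register becomes unconditional), which matches the observation in the PR description that there seems to be room for eliminating instructions or further propagating after this.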

asb added a commit to asb/llvm-project that referenced this pull request May 1, 2025
clang-format was turned off for the defines, but there was no matching
`// clang-format on` comment at the end. Ran into this in llvm#137973
Collaborator

@preames preames left a comment

Another round of minor comments. I generally think this is a good idea, subject to getting the interface clarified.

/// MachineCopyPropagation, where their mutation of the MI operands may
/// expose opportunities to convert the instruction to a simpler form (e.g.
/// a load of 0).
virtual bool optimizeInstruction(MachineInstr &MI) const { return false; }

case RISCV::SLTIU:
  // sltiu rd, zero, 1 => addi rd, zero, 1
  if (MI.getOperand(1).getReg() == RISCV::X0 &&
      MI.getOperand(2).getImm() == 1) {
Collaborator

This should work for any non-zero immediate, shouldn't it?

Contributor Author

Thanks - I'd been checking for snez in my script and just transcribed that into a pattern, but it makes more sense to be general. I now handle both sltiu rd, zero, NZC and, for completeness, sltiu rd, zero, 0 - though there are zero instances of the latter in current codegen.
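The generalized rule can be modeled in isolation like this (a hypothetical standalone sketch with an invented helper name, not the actual RISCVInstrInfo change): for sltiu rd, zero, C the result is the unsigned comparison 0 < C, i.e. 1 for any non-zero C and 0 for C == 0, so the instruction can be replaced by addi rd, zero, 1 or addi rd, zero, 0 respectively.

```cpp
#include <cstdint>

// Given the immediate C of `sltiu rd, zero, C`, return the immediate the
// canonical replacement `addi rd, zero, imm` would use. The result of the
// original instruction is (0 <u C): 1 for any non-zero C, 0 for C == 0.
int64_t simplifiedSltiuZeroImm(uint64_t C) {
  return C != 0 ? 1 : 0;
}
```

(The real sltiu immediate is a sign-extended 12-bit field; the uint64_t parameter here just models the value after extension.)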


github-actions bot commented May 1, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@asb
Contributor Author

asb commented May 6, 2025

Thank you everyone. I believe all of the comments on the details of the implementation have been addressed and are reflected in the pushed version of the code. I have tweaked the commit message to clarify that the tail duplication that produces the opportunity for these transformations is performed in MachineBlockPlacement.

I believe the open issues are:

  • Hook naming. Some ideas
    • simplifyInstruction (on the basis that it applies basic simplifications - e.g. to li or mv)
    • optimizeMutatedInstruction (on the basis that it is an optimisation intended to be run after an instruction's operands have been mutated, which may present new opportunities for simplification)
    • yourIdeaHere - honestly, I don't have strong views!
  • Whether this makes sense as a tweak to MCP as proposed, or a separate pass
    • It would be possible to add a target-specific pass or add this to a new one, but it does seem like this is relevant to targets beyond just RISC-V, and it requires only a minimal tweak to MCP, with an interface that could also be used from other target-specific passes that might need it. So if the preference is not to go this route, I'd like to better understand the argument against.

@asb asb requested a review from preames May 6, 2025 14:10
IanWood1 pushed a commit to IanWood1/llvm-project that referenced this pull request May 6, 2025
clang-format was turned off for the defines, but there was no matching
`// clang-format on` comment at the end. Ran into this in llvm#137973
Collaborator

@preames preames left a comment

LGTM - I'm not really a fan of the generic name here, but don't have a better suggestion, and don't want this blocked indefinitely.

Once this lands, I will be posting a follow-up patch which reorganizes MCP such that, after this simplification, we can continue copy propagation. I feel that follow-up is key to motivating this direction.

// Attempt to canonicalize/optimize the instruction now its arguments have
// been mutated.
if (TII->optimizeInstruction(MI)) {
  LLVM_DEBUG(dbgs() << "MCP: After optimizeInstruction: " << MI);
Collaborator

Do we need to set Changed = true here or is guaranteed it was already set earlier?

Contributor Author

Good question. It's not clear to me that Changed = true will always have been set, so I've added an explicit assignment so it's obvious.

  }
  break;
case RISCV::SUB:
  // sub rd, rs, zero => addi rd, rs, 0
Collaborator

Does add rd, rs, zero never show up, or are we just not converting it to ADDI?

Contributor Author

@asb asb May 7, 2025

It does show up, but we do have a CompressPat that produces c.mv for either case. So there's no real difference if C is enabled, and I've been running my script on build directories that have C enabled.

But there's no reason not to handle ADD here, and for a target without the C extension it at least means you'll get a canonical mv in such cases, which is nicer to read in disassembly if nothing else. I've gone ahead and added it and a test.
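The combined ADD/SUB handling can be sketched standalone like this (a hypothetical model with invented names, not the in-tree implementation): sub rd, rs, zero, add rd, rs, zero and add rd, zero, rs all just copy rs, so each can be rewritten as the canonical move addi rd, rs, 0, which the C extension can compress to c.mv.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Toy instruction record; register number 0 models the RISC-V zero register.
struct Instr {
  std::string Opcode;
  unsigned Rd;
  unsigned Rs1;
  unsigned Rs2OrImm; // rs2 for add/sub, immediate for addi
};

constexpr unsigned ZeroReg = 0;

// If MI is a disguised register move, return the canonical `addi rd, rs, 0`
// form; otherwise return nullopt to leave the instruction unchanged.
std::optional<Instr> canonicalizeToMove(const Instr &MI) {
  if (MI.Opcode == "sub" && MI.Rs2OrImm == ZeroReg)
    return Instr{"addi", MI.Rd, MI.Rs1, 0};
  if (MI.Opcode == "add" && MI.Rs2OrImm == ZeroReg)
    return Instr{"addi", MI.Rd, MI.Rs1, 0};
  if (MI.Opcode == "add" && MI.Rs1 == ZeroReg)
    return Instr{"addi", MI.Rd, MI.Rs2OrImm, 0};
  return std::nullopt;
}
```

This mirrors the shape of the switch in the real hook, but over a toy Instr struct rather than MachineInstr.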

@asb
Contributor Author

asb commented May 7, 2025

To my mind simplifyInstruction is at least slightly better, so I've changed to that in the absence of strong views on alternate names.

GeorgeARM pushed a commit to GeorgeARM/llvm-project that referenced this pull request May 7, 2025
clang-format was turned off for the defines, but there was no matching
`// clang-format on` comment at the end. Ran into this in llvm#137973
@asb asb merged commit 52b345d into llvm:main May 8, 2025
6 of 9 checks passed
Ankur-0429 pushed a commit to Ankur-0429/llvm-project that referenced this pull request May 9, 2025
clang-format was turned off for the defines, but there was no matching
`// clang-format on` comment at the end. Ran into this in llvm#137973