
[reland][flang] Initial debug info support for local variables #92304


Merged
merged 7 commits into llvm:main from abidh:local_var2
May 16, 2024

Conversation

abidh
Contributor

@abidh abidh commented May 15, 2024

This is the same as #90905 with an added fix. The issue was that we generated variable info even when the user asked for line-tables-only. This caused the LLVM DWARF generation code to fail an assertion, as it expected an empty variable list.

Fixed by not generating debug info for variables when the user wants only the line table. I also updated a test check for this case.

abidh added 7 commits May 15, 2024 19:47
Currently, cg-rewrite removes the DeclareOp. As AddDebugInfo runs after
that, it cannot process the DeclareOp. My initial plan was to make the
AddDebugInfo pass run before cg-rewrite, but that has a few issues.

1. Initially I was thinking of using the memref op to carry the variable
attr. But as @tblah suggested in llvm#86939, it makes more sense to
carry that information on the DeclareOp. It also makes it easy to handle
in codegen, and no special handling is needed for arguments. For that,
we need to preserve the DeclareOp until codegen.

2. Running earlier, we would miss the changes made by passes that run
between cg-rewrite and codegen.

But not removing the DeclareOp in cg-rewrite has the issue that the ShapeOp
remains and causes errors during codegen. To solve this problem, I
convert the DeclareOp to an XDeclareOp in cg-rewrite instead of removing
it. This was mentioned as a possible solution by @jeanPerier in
https://reviews.llvm.org/D136254

The conversion follows logic similar to that used for other operations in that
file. The FortranAttr and CudaAttr are currently not converted but left
as a TODO until the need arises. A later commit will use the XDeclareOp to
extract the variable information.
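For illustration, a rough sketch of the conversion at the IR level; the SSA names, the unique name, and the exact printed form are made up and not taken from the actual tests:

```mlir
// Before cg-rewrite: the FIR-level declare still carries the source name.
%1 = fir.declare %arg0 {uniq_name = "_QFfooEx"} : (!fir.ref<i32>) -> !fir.ref<i32>

// After cg-rewrite with preserve-declare=true: the op survives in fircg form,
// so a later pass (AddDebugInfo) can still recover the variable name.
%2 = fircg.ext_declare %arg0 {uniq_name = "_QFfooEx"} : (!fir.ref<i32>) -> !fir.ref<i32>
```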
This commit extracts information about local variables from the XDeclareOp
and creates a DILocalVariableAttr for each. These are attached to the
XDeclareOp using the FusedLoc approach. Codegen can use them to create
DbgDeclareOp.

I have added tests that check the debug information both in the MLIR form
and in the LLVM IR.
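Roughly, the FusedLoc approach looks like the sketch below; the attribute aliases and printed syntax are approximations, not copied from the tests:

```mlir
// AddDebugInfo wraps the op's location in a FusedLoc whose metadata is the
// DILocalVariableAttr describing the Fortran variable.
#di_x = #llvm.di_local_variable<scope = #di_sp, name = "x", file = #di_file,
                                line = 2, arg = 0, type = #di_int>
%2 = fircg.ext_declare %arg0 {uniq_name = "_QFfooEx"}
       : (!fir.ref<i32>) -> !fir.ref<i32> loc(fused<#di_x>[#loc2])

// During codegen, DeclareOpConversion reads the fused metadata and emits
// a dbg.declare for the memref, roughly:
llvm.intr.dbg.declare #di_x = %arg0 : !llvm.ptr
```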
The previous placeholder type was a basic type with DW_ATE_address
encoding. When variables were added, it started causing assertions in
the LLVM debug info generation logic for some types. It has been changed
to an integer type.
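As a sketch of the effect (attribute syntax approximated, aliases made up):

```mlir
// Old placeholder: a basic type with DW_ATE_address encoding.
#ph_old = #llvm.di_basic_type<tag = DW_TAG_base_type, name = "void",
                              sizeInBits = 32, encoding = DW_ATE_address>
// New placeholder: a plain signed integer type.
#ph_new = #llvm.di_basic_type<tag = DW_TAG_base_type, name = "integer",
                              sizeInBits = 32, encoding = DW_ATE_signed>
```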
The following changes were made.

1. Discard variables if they are not of NameKind::VARIABLE kind.

2. Cast to BlockArgument to check whether it is a dummy argument, and use getArgNumber() to get its number.
When this option is true, the DeclareOp is converted to an XDeclareOp, and this op is used later for debug info generation. When it is false, the DeclareOp is removed.
A DeclareOp is also generated for module variables used in a function.
We reject those to avoid creating local variables for them.
We were generating variable info even when the user asked for only the line table. This caused an assertion to fail in the LLVM DWARF generation code. Fixed by respecting the flag and not generating variable info in that case.
@llvmbot llvmbot added the flang, flang:fir-hlfir, and flang:codegen labels May 15, 2024
@llvmbot
Member

llvmbot commented May 15, 2024

@llvm/pr-subscribers-flang-fir-hlfir

@llvm/pr-subscribers-flang-codegen

Author: Abid Qadeer (abidh)

Changes

This is the same as #90905 with an added fix. The issue was that we generated variable info even when the user asked for line-tables-only. This caused the LLVM DWARF generation code to fail an assertion, as it expected an empty variable list.

Fixed by not generating debug info for variables when the user wants only the line table. I also updated a test check for this case.


Patch is 31.81 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/92304.diff

14 Files Affected:

  • (renamed) flang/include/flang/Optimizer/CodeGen/CGOps.h (+1)
  • (modified) flang/include/flang/Optimizer/CodeGen/CGOps.td (+34)
  • (modified) flang/include/flang/Optimizer/CodeGen/CGPasses.td (+4)
  • (modified) flang/include/flang/Optimizer/CodeGen/CodeGen.h (+4-2)
  • (modified) flang/include/flang/Tools/CLOptions.inc (+7-4)
  • (modified) flang/lib/Optimizer/CodeGen/CGOps.cpp (+1-1)
  • (modified) flang/lib/Optimizer/CodeGen/CodeGen.cpp (+36-14)
  • (modified) flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp (+41-8)
  • (modified) flang/lib/Optimizer/Transforms/AddDebugInfo.cpp (+56-4)
  • (modified) flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp (+5-5)
  • (modified) flang/test/Fir/declare-codegen.fir (+16-6)
  • (modified) flang/test/Fir/dummy-scope-codegen.fir (+8-3)
  • (added) flang/test/Transforms/debug-local-var-2.f90 (+94)
  • (added) flang/test/Transforms/debug-local-var.f90 (+54)
diff --git a/flang/lib/Optimizer/CodeGen/CGOps.h b/flang/include/flang/Optimizer/CodeGen/CGOps.h
similarity index 94%
rename from flang/lib/Optimizer/CodeGen/CGOps.h
rename to flang/include/flang/Optimizer/CodeGen/CGOps.h
index b5a6d5bb9a9e6..df909d9ee81cb 100644
--- a/flang/lib/Optimizer/CodeGen/CGOps.h
+++ b/flang/include/flang/Optimizer/CodeGen/CGOps.h
@@ -13,6 +13,7 @@
 #ifndef OPTIMIZER_CODEGEN_CGOPS_H
 #define OPTIMIZER_CODEGEN_CGOPS_H
 
+#include "flang/Optimizer/Dialect/FIRAttr.h"
 #include "flang/Optimizer/Dialect/FIRType.h"
 #include "mlir/Dialect/Func/IR/FuncOps.h"
 
diff --git a/flang/include/flang/Optimizer/CodeGen/CGOps.td b/flang/include/flang/Optimizer/CodeGen/CGOps.td
index 35e70fa2ffa3f..c375edee1fa77 100644
--- a/flang/include/flang/Optimizer/CodeGen/CGOps.td
+++ b/flang/include/flang/Optimizer/CodeGen/CGOps.td
@@ -16,6 +16,8 @@
 
 include "mlir/IR/SymbolInterfaces.td"
 include "flang/Optimizer/Dialect/FIRTypes.td"
+include "flang/Optimizer/Dialect/FIRAttr.td"
+include "mlir/IR/BuiltinAttributes.td"
 
 def fircg_Dialect : Dialect {
   let name = "fircg";
@@ -202,4 +204,36 @@ def fircg_XArrayCoorOp : fircg_Op<"ext_array_coor", [AttrSizedOperandSegments]>
   }];
 }
 
+// Extended Declare operation.
+def fircg_XDeclareOp : fircg_Op<"ext_declare", [AttrSizedOperandSegments]> {
+  let summary = "for internal conversion only";
+
+  let description = [{
+    Prior to lowering to LLVM IR dialect, a DeclareOp will
+    be converted to an extended DeclareOp.
+  }];
+
+  let arguments = (ins
+    AnyRefOrBox:$memref,
+    Variadic<AnyIntegerType>:$shape,
+    Variadic<AnyIntegerType>:$shift,
+    Variadic<AnyIntegerType>:$typeparams,
+    Optional<fir_DummyScopeType>:$dummy_scope,
+    Builtin_StringAttr:$uniq_name
+  );
+  let results = (outs AnyRefOrBox);
+
+  let assemblyFormat = [{
+    $memref (`(` $shape^ `)`)? (`origin` $shift^)? (`typeparams` $typeparams^)?
+    (`dummy_scope` $dummy_scope^)?
+    attr-dict `:` functional-type(operands, results)
+  }];
+
+  let extraClassDeclaration = [{
+    // Shape is optional, but if it exists, it will be at offset 1.
+    unsigned shapeOffset() { return 1; }
+    unsigned shiftOffset() { return shapeOffset() + getShape().size(); }
+  }];
+}
+
 #endif
diff --git a/flang/include/flang/Optimizer/CodeGen/CGPasses.td b/flang/include/flang/Optimizer/CodeGen/CGPasses.td
index f524fb4237344..565920e55e6a8 100644
--- a/flang/include/flang/Optimizer/CodeGen/CGPasses.td
+++ b/flang/include/flang/Optimizer/CodeGen/CGPasses.td
@@ -47,6 +47,10 @@ def CodeGenRewrite : Pass<"cg-rewrite", "mlir::ModuleOp"> {
   let dependentDialects = [
     "fir::FIROpsDialect", "fir::FIRCodeGenDialect"
   ];
+  let options = [
+    Option<"preserveDeclare", "preserve-declare", "bool", /*default=*/"false",
+           "Preserve DeclareOp during pre codegen re-write.">
+  ];
   let statistics = [
     Statistic<"numDCE", "num-dce'd", "Number of operations eliminated">
   ];
diff --git a/flang/include/flang/Optimizer/CodeGen/CodeGen.h b/flang/include/flang/Optimizer/CodeGen/CodeGen.h
index 26097dabf56c4..4d2b191b46d08 100644
--- a/flang/include/flang/Optimizer/CodeGen/CodeGen.h
+++ b/flang/include/flang/Optimizer/CodeGen/CodeGen.h
@@ -30,7 +30,8 @@ struct NameUniquer;
 
 /// Prerequiste pass for code gen. Perform intermediate rewrites to perform
 /// the code gen (to LLVM-IR dialect) conversion.
-std::unique_ptr<mlir::Pass> createFirCodeGenRewritePass();
+std::unique_ptr<mlir::Pass> createFirCodeGenRewritePass(
+    CodeGenRewriteOptions Options = CodeGenRewriteOptions{});
 
 /// FirTargetRewritePass options.
 struct TargetRewriteOptions {
@@ -88,7 +89,8 @@ void populateFIRToLLVMConversionPatterns(fir::LLVMTypeConverter &converter,
                                          fir::FIRToLLVMPassOptions &options);
 
 /// Populate the pattern set with the PreCGRewrite patterns.
-void populatePreCGRewritePatterns(mlir::RewritePatternSet &patterns);
+void populatePreCGRewritePatterns(mlir::RewritePatternSet &patterns,
+                                  bool preserveDeclare);
 
 // declarative passes
 #define GEN_PASS_REGISTRATION
diff --git a/flang/include/flang/Tools/CLOptions.inc b/flang/include/flang/Tools/CLOptions.inc
index cc3431d5b71d2..761315e0abc81 100644
--- a/flang/include/flang/Tools/CLOptions.inc
+++ b/flang/include/flang/Tools/CLOptions.inc
@@ -169,9 +169,11 @@ inline void addMemoryAllocationOpt(mlir::PassManager &pm) {
 }
 
 #if !defined(FLANG_EXCLUDE_CODEGEN)
-inline void addCodeGenRewritePass(mlir::PassManager &pm) {
-  addPassConditionally(
-      pm, disableCodeGenRewrite, fir::createFirCodeGenRewritePass);
+inline void addCodeGenRewritePass(mlir::PassManager &pm, bool preserveDeclare) {
+  fir::CodeGenRewriteOptions options;
+  options.preserveDeclare = preserveDeclare;
+  addPassConditionally(pm, disableCodeGenRewrite,
+      [&]() { return fir::createFirCodeGenRewritePass(options); });
 }
 
 inline void addTargetRewritePass(mlir::PassManager &pm) {
@@ -353,7 +355,8 @@ inline void createDefaultFIRCodeGenPassPipeline(mlir::PassManager &pm,
     MLIRToLLVMPassPipelineConfig config, llvm::StringRef inputFilename = {}) {
   fir::addBoxedProcedurePass(pm);
   addNestedPassToAllTopLevelOperations(pm, fir::createAbstractResultOpt);
-  fir::addCodeGenRewritePass(pm);
+  fir::addCodeGenRewritePass(
+      pm, (config.DebugInfo != llvm::codegenoptions::NoDebugInfo));
   fir::addTargetRewritePass(pm);
   fir::addExternalNameConversionPass(pm, config.Underscoring);
   fir::createDebugPasses(pm, config.DebugInfo, config.OptLevel, inputFilename);
diff --git a/flang/lib/Optimizer/CodeGen/CGOps.cpp b/flang/lib/Optimizer/CodeGen/CGOps.cpp
index 44d07d26dd2b6..6b8ba74525556 100644
--- a/flang/lib/Optimizer/CodeGen/CGOps.cpp
+++ b/flang/lib/Optimizer/CodeGen/CGOps.cpp
@@ -10,7 +10,7 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "CGOps.h"
+#include "flang/Optimizer/CodeGen/CGOps.h"
 #include "flang/Optimizer/Dialect/FIRDialect.h"
 #include "flang/Optimizer/Dialect/FIROps.h"
 #include "flang/Optimizer/Dialect/FIRType.h"
diff --git a/flang/lib/Optimizer/CodeGen/CodeGen.cpp b/flang/lib/Optimizer/CodeGen/CodeGen.cpp
index 21154902d23f8..72172f63888e1 100644
--- a/flang/lib/Optimizer/CodeGen/CodeGen.cpp
+++ b/flang/lib/Optimizer/CodeGen/CodeGen.cpp
@@ -12,7 +12,7 @@
 
 #include "flang/Optimizer/CodeGen/CodeGen.h"
 
-#include "CGOps.h"
+#include "flang/Optimizer/CodeGen/CGOps.h"
 #include "flang/Optimizer/CodeGen/CodeGenOpenMP.h"
 #include "flang/Optimizer/CodeGen/FIROpPatterns.h"
 #include "flang/Optimizer/CodeGen/TypeConverter.h"
@@ -170,6 +170,28 @@ genAllocationScaleSize(OP op, mlir::Type ity,
   return nullptr;
 }
 
+namespace {
+struct DeclareOpConversion : public fir::FIROpConversion<fir::cg::XDeclareOp> {
+public:
+  using FIROpConversion::FIROpConversion;
+  mlir::LogicalResult
+  matchAndRewrite(fir::cg::XDeclareOp declareOp, OpAdaptor adaptor,
+                  mlir::ConversionPatternRewriter &rewriter) const override {
+    auto memRef = adaptor.getOperands()[0];
+    if (auto fusedLoc = mlir::dyn_cast<mlir::FusedLoc>(declareOp.getLoc())) {
+      if (auto varAttr =
+              mlir::dyn_cast_or_null<mlir::LLVM::DILocalVariableAttr>(
+                  fusedLoc.getMetadata())) {
+        rewriter.create<mlir::LLVM::DbgDeclareOp>(memRef.getLoc(), memRef,
+                                                  varAttr, nullptr);
+      }
+    }
+    rewriter.replaceOp(declareOp, memRef);
+    return mlir::success();
+  }
+};
+} // namespace
+
 namespace {
 /// convert to LLVM IR dialect `alloca`
 struct AllocaOpConversion : public fir::FIROpConversion<fir::AllocaOp> {
@@ -3714,19 +3736,19 @@ void fir::populateFIRToLLVMConversionPatterns(
       BoxOffsetOpConversion, BoxProcHostOpConversion, BoxRankOpConversion,
       BoxTypeCodeOpConversion, BoxTypeDescOpConversion, CallOpConversion,
       CmpcOpConversion, ConstcOpConversion, ConvertOpConversion,
-      CoordinateOpConversion, DTEntryOpConversion, DivcOpConversion,
-      EmboxOpConversion, EmboxCharOpConversion, EmboxProcOpConversion,
-      ExtractValueOpConversion, FieldIndexOpConversion, FirEndOpConversion,
-      FreeMemOpConversion, GlobalLenOpConversion, GlobalOpConversion,
-      HasValueOpConversion, InsertOnRangeOpConversion, InsertValueOpConversion,
-      IsPresentOpConversion, LenParamIndexOpConversion, LoadOpConversion,
-      MulcOpConversion, NegcOpConversion, NoReassocOpConversion,
-      SelectCaseOpConversion, SelectOpConversion, SelectRankOpConversion,
-      SelectTypeOpConversion, ShapeOpConversion, ShapeShiftOpConversion,
-      ShiftOpConversion, SliceOpConversion, StoreOpConversion,
-      StringLitOpConversion, SubcOpConversion, TypeDescOpConversion,
-      TypeInfoOpConversion, UnboxCharOpConversion, UnboxProcOpConversion,
-      UndefOpConversion, UnreachableOpConversion,
+      CoordinateOpConversion, DTEntryOpConversion, DeclareOpConversion,
+      DivcOpConversion, EmboxOpConversion, EmboxCharOpConversion,
+      EmboxProcOpConversion, ExtractValueOpConversion, FieldIndexOpConversion,
+      FirEndOpConversion, FreeMemOpConversion, GlobalLenOpConversion,
+      GlobalOpConversion, HasValueOpConversion, InsertOnRangeOpConversion,
+      InsertValueOpConversion, IsPresentOpConversion, LenParamIndexOpConversion,
+      LoadOpConversion, MulcOpConversion, NegcOpConversion,
+      NoReassocOpConversion, SelectCaseOpConversion, SelectOpConversion,
+      SelectRankOpConversion, SelectTypeOpConversion, ShapeOpConversion,
+      ShapeShiftOpConversion, ShiftOpConversion, SliceOpConversion,
+      StoreOpConversion, StringLitOpConversion, SubcOpConversion,
+      TypeDescOpConversion, TypeInfoOpConversion, UnboxCharOpConversion,
+      UnboxProcOpConversion, UndefOpConversion, UnreachableOpConversion,
       UnrealizedConversionCastOpConversion, XArrayCoorOpConversion,
       XEmboxOpConversion, XReboxOpConversion, ZeroOpConversion>(converter,
                                                                 options);
diff --git a/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp b/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
index 5bd3ec8d18450..c54a7457db761 100644
--- a/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
+++ b/flang/lib/Optimizer/CodeGen/PreCGRewrite.cpp
@@ -12,8 +12,8 @@
 
 #include "flang/Optimizer/CodeGen/CodeGen.h"
 
-#include "CGOps.h"
 #include "flang/Optimizer/Builder/Todo.h" // remove when TODO's are done
+#include "flang/Optimizer/CodeGen/CGOps.h"
 #include "flang/Optimizer/Dialect/FIRDialect.h"
 #include "flang/Optimizer/Dialect/FIROps.h"
 #include "flang/Optimizer/Dialect/FIRType.h"
@@ -270,13 +270,43 @@ class ArrayCoorConversion : public mlir::OpRewritePattern<fir::ArrayCoorOp> {
 };
 
 class DeclareOpConversion : public mlir::OpRewritePattern<fir::DeclareOp> {
+  bool preserveDeclare;
+
 public:
   using OpRewritePattern::OpRewritePattern;
+  DeclareOpConversion(mlir::MLIRContext *ctx, bool preserveDecl)
+      : OpRewritePattern(ctx), preserveDeclare(preserveDecl) {}
 
   mlir::LogicalResult
   matchAndRewrite(fir::DeclareOp declareOp,
                   mlir::PatternRewriter &rewriter) const override {
-    rewriter.replaceOp(declareOp, declareOp.getMemref());
+    if (!preserveDeclare) {
+      rewriter.replaceOp(declareOp, declareOp.getMemref());
+      return mlir::success();
+    }
+    auto loc = declareOp.getLoc();
+    llvm::SmallVector<mlir::Value> shapeOpers;
+    llvm::SmallVector<mlir::Value> shiftOpers;
+    if (auto shapeVal = declareOp.getShape()) {
+      if (auto shapeOp = mlir::dyn_cast<fir::ShapeOp>(shapeVal.getDefiningOp()))
+        populateShape(shapeOpers, shapeOp);
+      else if (auto shiftOp =
+                   mlir::dyn_cast<fir::ShapeShiftOp>(shapeVal.getDefiningOp()))
+        populateShapeAndShift(shapeOpers, shiftOpers, shiftOp);
+      else if (auto shiftOp =
+                   mlir::dyn_cast<fir::ShiftOp>(shapeVal.getDefiningOp()))
+        populateShift(shiftOpers, shiftOp);
+      else
+        return mlir::failure();
+    }
+    // FIXME: Add FortranAttrs and CudaAttrs
+    auto xDeclOp = rewriter.create<fir::cg::XDeclareOp>(
+        loc, declareOp.getType(), declareOp.getMemref(), shapeOpers, shiftOpers,
+        declareOp.getTypeparams(), declareOp.getDummyScope(),
+        declareOp.getUniqName());
+    LLVM_DEBUG(llvm::dbgs()
+               << "rewriting " << declareOp << " to " << xDeclOp << '\n');
+    rewriter.replaceOp(declareOp, xDeclOp.getOperation()->getResults());
     return mlir::success();
   }
 };
@@ -297,6 +327,7 @@ class DummyScopeOpConversion
 
 class CodeGenRewrite : public fir::impl::CodeGenRewriteBase<CodeGenRewrite> {
 public:
+  CodeGenRewrite(fir::CodeGenRewriteOptions opts) : Base(opts) {}
   void runOnOperation() override final {
     mlir::ModuleOp mod = getOperation();
 
@@ -314,7 +345,7 @@ class CodeGenRewrite : public fir::impl::CodeGenRewriteBase<CodeGenRewrite> {
                    mlir::cast<fir::BaseBoxType>(embox.getType()).getEleTy()));
     });
     mlir::RewritePatternSet patterns(&context);
-    fir::populatePreCGRewritePatterns(patterns);
+    fir::populatePreCGRewritePatterns(patterns, preserveDeclare);
     if (mlir::failed(
             mlir::applyPartialConversion(mod, target, std::move(patterns)))) {
       mlir::emitError(mlir::UnknownLoc::get(&context),
@@ -330,12 +361,14 @@ class CodeGenRewrite : public fir::impl::CodeGenRewriteBase<CodeGenRewrite> {
 
 } // namespace
 
-std::unique_ptr<mlir::Pass> fir::createFirCodeGenRewritePass() {
-  return std::make_unique<CodeGenRewrite>();
+std::unique_ptr<mlir::Pass>
+fir::createFirCodeGenRewritePass(fir::CodeGenRewriteOptions Options) {
+  return std::make_unique<CodeGenRewrite>(Options);
 }
 
-void fir::populatePreCGRewritePatterns(mlir::RewritePatternSet &patterns) {
+void fir::populatePreCGRewritePatterns(mlir::RewritePatternSet &patterns,
+                                       bool preserveDeclare) {
   patterns.insert<EmboxConversion, ArrayCoorConversion, ReboxConversion,
-                  DeclareOpConversion, DummyScopeOpConversion>(
-      patterns.getContext());
+                  DummyScopeOpConversion>(patterns.getContext());
+  patterns.add<DeclareOpConversion>(patterns.getContext(), preserveDeclare);
 }
diff --git a/flang/lib/Optimizer/Transforms/AddDebugInfo.cpp b/flang/lib/Optimizer/Transforms/AddDebugInfo.cpp
index 908c8fc96f633..07e8aed4cd07b 100644
--- a/flang/lib/Optimizer/Transforms/AddDebugInfo.cpp
+++ b/flang/lib/Optimizer/Transforms/AddDebugInfo.cpp
@@ -15,6 +15,7 @@
 #include "flang/Common/Version.h"
 #include "flang/Optimizer/Builder/FIRBuilder.h"
 #include "flang/Optimizer/Builder/Todo.h"
+#include "flang/Optimizer/CodeGen/CGOps.h"
 #include "flang/Optimizer/Dialect/FIRDialect.h"
 #include "flang/Optimizer/Dialect/FIROps.h"
 #include "flang/Optimizer/Dialect/FIRType.h"
@@ -45,13 +46,59 @@ namespace fir {
 namespace {
 
 class AddDebugInfoPass : public fir::impl::AddDebugInfoBase<AddDebugInfoPass> {
+  void handleDeclareOp(fir::cg::XDeclareOp declOp,
+                       mlir::LLVM::DIFileAttr fileAttr,
+                       mlir::LLVM::DIScopeAttr scopeAttr,
+                       fir::DebugTypeGenerator &typeGen);
+
 public:
   AddDebugInfoPass(fir::AddDebugInfoOptions options) : Base(options) {}
   void runOnOperation() override;
 };
 
+static uint32_t getLineFromLoc(mlir::Location loc) {
+  uint32_t line = 1;
+  if (auto fileLoc = mlir::dyn_cast<mlir::FileLineColLoc>(loc))
+    line = fileLoc.getLine();
+  return line;
+}
+
 } // namespace
 
+void AddDebugInfoPass::handleDeclareOp(fir::cg::XDeclareOp declOp,
+                                       mlir::LLVM::DIFileAttr fileAttr,
+                                       mlir::LLVM::DIScopeAttr scopeAttr,
+                                       fir::DebugTypeGenerator &typeGen) {
+  mlir::MLIRContext *context = &getContext();
+  mlir::OpBuilder builder(context);
+  auto result = fir::NameUniquer::deconstruct(declOp.getUniqName());
+
+  if (result.first != fir::NameUniquer::NameKind::VARIABLE)
+    return;
+
+  // Only accept local variables.
+  if (result.second.procs.empty())
+    return;
+
+  // FIXME: There may be cases where an argument is processed a bit before
+  // DeclareOp is generated. In that case, DeclareOp may point to an
+  // intermediate op and not to BlockArgument. We need to find those cases and
+  // walk the chain to get to the actual argument.
+
+  unsigned argNo = 0;
+  if (auto Arg = llvm::dyn_cast<mlir::BlockArgument>(declOp.getMemref()))
+    argNo = Arg.getArgNumber() + 1;
+
+  auto tyAttr = typeGen.convertType(fir::unwrapRefType(declOp.getType()),
+                                    fileAttr, scopeAttr, declOp.getLoc());
+
+  auto localVarAttr = mlir::LLVM::DILocalVariableAttr::get(
+      context, scopeAttr, mlir::StringAttr::get(context, result.second.name),
+      fileAttr, getLineFromLoc(declOp.getLoc()), argNo, /* alignInBits*/ 0,
+      tyAttr);
+  declOp->setLoc(builder.getFusedLoc({declOp->getLoc()}, localVarAttr));
+}
+
 void AddDebugInfoPass::runOnOperation() {
   mlir::ModuleOp module = getOperation();
   mlir::MLIRContext *context = &getContext();
@@ -144,14 +191,19 @@ void AddDebugInfoPass::runOnOperation() {
       subprogramFlags =
           subprogramFlags | mlir::LLVM::DISubprogramFlags::Definition;
     }
-    unsigned line = 1;
-    if (auto funcLoc = mlir::dyn_cast<mlir::FileLineColLoc>(l))
-      line = funcLoc.getLine();
-
+    unsigned line = getLineFromLoc(l);
     auto spAttr = mlir::LLVM::DISubprogramAttr::get(
         context, id, compilationUnit, fileAttr, funcName, fullName,
         funcFileAttr, line, line, subprogramFlags, subTypeAttr);
     funcOp->setLoc(builder.getFusedLoc({funcOp->getLoc()}, spAttr));
+
+    // Don't process variables if user asked for line tables only.
+    if (debugLevel == mlir::LLVM::DIEmissionKind::LineTablesOnly)
+      return;
+
+    funcOp.walk([&](fir::cg::XDeclareOp declOp) {
+      handleDeclareOp(declOp, fileAttr, spAttr, typeGen);
+    });
   });
 }
 
diff --git a/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp b/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
index e5b4050dfb242..64c6547e06e0f 100644
--- a/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
+++ b/flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp
@@ -24,11 +24,6 @@ DebugTypeGenerator::DebugTypeGenerator(mlir::ModuleOp m)
   LLVM_DEBUG(llvm::dbgs() << "DITypeAttr generator\n");
 }
 
-static mlir::LLVM::DITypeAttr genPlaceholderType(mlir::MLIRContext *context) {
-  return mlir::LLVM::DIBasicTypeAttr::get(
-      context, llvm::dwarf::DW_TAG_base_type, "void", 32, 1);
-}
-
 static mlir::LLVM::DITypeAttr genBasicType(mlir::MLIRContext *context,
                                            mlir::StringAttr name,
                                            unsigned bitSize,
@@ -37,6 +32,11 @@ static mlir::LLVM::DITypeAttr genBasicType(mlir::MLIRContext *context,
       context, llvm::dwarf::DW_TAG_base_type, name, bitSize, decoding);
 }
 
+static mlir::LLVM::DITypeAttr genPlaceholderType(mlir::MLIRContext *context) {
+  return genBasicType(context, mlir::StringAttr::get(context, "integer"), 32,
+                      llvm::dwarf::DW_ATE_signed);
+}
+
 mlir::LLVM::DITypeAttr
 DebugTypeGenerator::convertType(mlir::Type Ty, mlir::LLVM::DIFileAttr fileAttr,
                                 mlir::LLVM::DIScopeAttr scope,
diff --git a/flang/test/Fir/declare-codegen.fir b/flang/test/Fir/declare-codegen.fir
index 9d68d3b2f9d4d..c5879facb1572 100644
--- a/flang/test/Fir/declare-codegen.fir
+++ b/flang/test/Fir/declare-codegen.fir
@@ -1,5 +1,7 @@
 // Test rewrite of fir.declare. The result is replaced by the memref operand.
-// RUN: fir-opt --cg-rewrite %s -o - | FileCheck %s
+// RUN: fir-opt --cg-rewrite="preserve-declare=true" %s -o - | FileCheck %s --check-prefixes DECL
+// RUN: fir-opt --cg-rewrite="preserve-declare=false" %s -o - | FileCheck %s --check-prefixes NODECL
+// RUN: fir-opt --cg-rewrite %s -o - | FileCheck %s --check-prefixes NODECL
 
 
 func.func @test(%arg0: !fir.ref<!fir.array<12x23xi32>>) {
@@ -15,9 +17,14 @@ func.func @test(%arg0: !fir.ref<!fir.array<12x23xi32>>) {
 func.func private @bar(%arg0: !fir.ref<!fir.array<12x23xi32>>)
 
 
-// CHECK-LABEL: func.func @test(
-// CHECK-SAME: %[[arg0:.*]]: !fir.ref<!fir.array<12x23xi32>>) {
-// CHECK-NEXT: fir.call @bar(%[[arg0]]) : (!fir.ref<!fir.array<12x23xi32>>) -> ()
+// NODECL...
[truncated]

Contributor

@psteinfeld psteinfeld left a comment

Thanks for the quick action. Two of the original problems I saw have been fixed. I'll kick off more testing. But, so far, so good.

@abidh abidh merged commit cd5ee27 into llvm:main May 16, 2024
8 checks passed
@abidh abidh deleted the local_var2 branch June 4, 2024 09:43
qiaojbao pushed a commit to GPUOpen-Drivers/llvm-project that referenced this pull request Jun 5, 2024
…20e417d38

Local branch amd-gfx d3020e4 Merged main:772b1b0cb26c66804d0a7e416dc7a5742b7f8db2 into amd-gfx:43125be5ad0d
Remote branch main cd5ee27 [reland][flang] Initial debug info support for local variables (llvm#92304)