[flang][OpenMP] Add TODOs for target [teams|parallel] private #143706


Merged
merged 1 commit into from
Jun 12, 2025

Conversation

mrkajetanp
Contributor

Using the `private` clause on `target teams` or `target parallel` is not currently implemented and causes crashes during lowering. Add appropriate TODOs.

Resolves #116428.

Signed-off-by: Kajetan Puchalski <[email protected]>
@llvmbot llvmbot added flang Flang issues not falling into any other category flang:fir-hlfir flang:openmp labels Jun 11, 2025
@llvmbot
Member

llvmbot commented Jun 11, 2025

@llvm/pr-subscribers-flang-fir-hlfir

@llvm/pr-subscribers-flang-openmp

Author: Kajetan Puchalski (mrkajetanp)


Full diff: https://github.com/llvm/llvm-project/pull/143706.diff

3 Files Affected:

  • (modified) flang/lib/Lower/OpenMP/OpenMP.cpp (+7)
  • (added) flang/test/Lower/OpenMP/Todo/target-parallel-private.f90 (+13)
  • (added) flang/test/Lower/OpenMP/Todo/target-teams-private.f90 (+13)
diff --git a/flang/lib/Lower/OpenMP/OpenMP.cpp b/flang/lib/Lower/OpenMP/OpenMP.cpp
index 6892e571e62a3..7fde4f842c5a5 100644
--- a/flang/lib/Lower/OpenMP/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP/OpenMP.cpp
@@ -4317,6 +4317,13 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
           parser::ToUpperCaseLetters(llvm::omp::getOpenMPClauseName(clause.id));
       TODO(clauseLocation, name + " clause is not implemented yet");
     }
+
+    if (std::holds_alternative<clause::Private>(clause.u) &&
+        origDirective == llvm::omp::Directive::OMPD_target_teams)
+      TODO(clauseLocation, "TARGET TEAMS PRIVATE is not implemented yet");
+    if (std::holds_alternative<clause::Private>(clause.u) &&
+        origDirective == llvm::omp::Directive::OMPD_target_parallel)
+      TODO(clauseLocation, "TARGET PARALLEL PRIVATE is not implemented yet");
   }
 
   llvm::omp::Directive directive =
diff --git a/flang/test/Lower/OpenMP/Todo/target-parallel-private.f90 b/flang/test/Lower/OpenMP/Todo/target-parallel-private.f90
new file mode 100644
index 0000000000000..e820143021f9a
--- /dev/null
+++ b/flang/test/Lower/OpenMP/Todo/target-parallel-private.f90
@@ -0,0 +1,13 @@
+! RUN: %not_todo_cmd bbc -emit-fir -fopenmp -fopenmp-version=50 -o - %s 2>&1 | FileCheck %s
+! RUN: %not_todo_cmd %flang_fc1 -emit-fir -fopenmp -fopenmp-version=50 -o - %s 2>&1 | FileCheck %s
+
+!===============================================================================
+! `private` clause on `target parallel`
+!===============================================================================
+
+! CHECK: not yet implemented: TARGET PARALLEL PRIVATE is not implemented yet
+subroutine target_parallel_private()
+integer, dimension(3) :: i
+!$omp target parallel private(i)
+!$omp end target parallel
+end subroutine
diff --git a/flang/test/Lower/OpenMP/Todo/target-teams-private.f90 b/flang/test/Lower/OpenMP/Todo/target-teams-private.f90
new file mode 100644
index 0000000000000..c8d998a5cbf94
--- /dev/null
+++ b/flang/test/Lower/OpenMP/Todo/target-teams-private.f90
@@ -0,0 +1,13 @@
+! RUN: %not_todo_cmd bbc -emit-fir -fopenmp -fopenmp-version=50 -o - %s 2>&1 | FileCheck %s
+! RUN: %not_todo_cmd %flang_fc1 -emit-fir -fopenmp -fopenmp-version=50 -o - %s 2>&1 | FileCheck %s
+
+!===============================================================================
+! `private` clause on `target teams`
+!===============================================================================
+
+! CHECK: not yet implemented: TARGET TEAMS PRIVATE is not implemented yet
+subroutine target_teams_private()
+integer, dimension(3) :: i
+!$omp target teams private(i)
+!$omp end target teams
+end subroutine

Contributor

@tblah left a comment

Just to clarify: both of these are being disabled because they seem to crash the compiler in all cases.

We are aiming to get compiler crashes resolved so that the experimental warning can be removed. Implementing features can come later.

Comment on lines +4321 to +4326
if (std::holds_alternative<clause::Private>(clause.u) &&
origDirective == llvm::omp::Directive::OMPD_target_teams)
TODO(clauseLocation, "TARGET TEAMS PRIVATE is not implemented yet");
if (std::holds_alternative<clause::Private>(clause.u) &&
origDirective == llvm::omp::Directive::OMPD_target_parallel)
TODO(clauseLocation, "TARGET PARALLEL PRIVATE is not implemented yet");
Contributor


Could you please do this inside of `genTeamsClauses`? That would be more in line with how other TODOs in this file are placed.

Contributor Author


My rationale for putting this here was that by the time we're inside genTeamsClauses we're no longer processing the compound "target teams", rather just the "teams" construct on its own. Do we just want to show the TODO whenever teams is used with private in any scenario?

Contributor


Ahh, good point. I'm not sure about the state of `teams` + `private`, but let's not disable it, just in case.

Contributor Author


From what I can tell, `teams` + `private` is just not implemented outright, so it should be okay in that case. But `parallel` + `private` without `target` seems to be implemented and tested quite extensively.

Contributor

@tblah left a comment


LGTM but wait for a review from somebody who works on target stuff.

@kparzysz
Contributor

@ergawy: Kareem, these look like bugs, are those cases supposed to work with the current code?

I checked a simple testcase

subroutine t
  integer :: i
  !$omp target parallel private(i)
  !$omp end target parallel
end

The problem in the generated MLIR was that the argument to omp.parallel is defined outside of the omp.target region, which causes verification failure: "'omp.parallel' op using value defined outside the region".

  "func.func"() <{function_type = () -> (), sym_name = "_QPt"}> ({
    %0 = "fir.dummy_scope"() : () -> !fir.dscope
    %1 = "fir.alloca"() <{bindc_name = "i", in_type = i32, operandSegmentSizes = array<i32: 0, 0>, uniq_name = "_QFtEi"}> : () -> !fir.ref<i32>
    %2:2 = "hlfir.declare"(%1) <{operandSegmentSizes = array<i32: 1, 0, 0, 0>, uniq_name = "_QFtEi"}> : (!fir.ref<i32>) -> (!fir.ref<i32>, !fir.ref<i32>)
    %3 = "omp.map.info"(%2#1) <{map_capture_type = #omp<variable_capture_kind(ByCopy)>, map_type = 512 : ui64, name = "i", operandSegmentSizes = array<i32: 1, 0, 0, 0>, partial_map = false, var_type = i32}> : (!fir.ref<i32>) -> !fir.ref<i32>
    "omp.target"(%3) <{operandSegmentSizes = array<i32: 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0>}> ({
    ^bb0(%arg0: !fir.ref<i32>):
      %4:2 = "hlfir.declare"(%arg0) <{operandSegmentSizes = array<i32: 1, 0, 0, 0>, uniq_name = "_QFtEi"}> : (!fir.ref<i32>) -> (!fir.ref<i32>, !fir.ref<i32>)
      %5 = "fir.undefined"() : () -> index
      "omp.parallel"(%2#0) <{operandSegmentSizes = array<i32: 0, 0, 0, 0, 1, 0>, private_syms = [@_QFtEi_private_i32]}> ({
      ^bb0(%arg1: !fir.ref<i32>):
        %6:2 = "hlfir.declare"(%arg1) <{operandSegmentSizes = array<i32: 1, 0, 0, 0>, uniq_name = "_QFtEi"}> : (!fir.ref<i32>) -> (!fir.ref<i32>, !fir.ref<i32>)
        "omp.terminator"() : () -> ()
      }) : (!fir.ref<i32>) -> ()
      "omp.terminator"() : () -> ()
    }) : (!fir.ref<i32>) -> ()
  }) : () -> ()

@ergawy
Member

ergawy commented Jun 11, 2025

> @ergawy: Kareem, these look like bugs, are those cases supposed to work with the current code? […]
Looks like incorrect symbol binding indeed. Thanks for bringing that to my attention. I will take a look tomorrow.

@ergawy
Member

ergawy commented Jun 12, 2025

The privatization crashes for both `target parallel` and `target teams` can be resolved by pushing a symbol map scope for both constructs, `parallel` and `teams`:

diff --git a/flang/lib/Lower/OpenMP/OpenMP.cpp b/flang/lib/Lower/OpenMP/OpenMP.cpp
index c13fa471978d..fa11c237282d 100644
--- a/flang/lib/Lower/OpenMP/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP/OpenMP.cpp
@@ -2668,6 +2668,7 @@ genTeamsOp(lower::AbstractConverter &converter, lower::SymMap &symTable,
            semantics::SemanticsContext &semaCtx, lower::pft::Evaluation &eval,
            mlir::Location loc, const ConstructQueue &queue,
            ConstructQueue::const_iterator item) {
+  lower::SymMapScope scope(symTable);
   mlir::omp::TeamsOperands clauseOps;
   llvm::SmallVector<const semantics::Symbol *> reductionSyms;
   genTeamsClauses(converter, semaCtx, stmtCtx, item->clauses, loc, clauseOps,
@@ -2975,6 +2976,7 @@ static mlir::omp::ParallelOp genStandaloneParallel(
     lower::StatementContext &stmtCtx, semantics::SemanticsContext &semaCtx,
     lower::pft::Evaluation &eval, mlir::Location loc,
     const ConstructQueue &queue, ConstructQueue::const_iterator item) {
+  lower::SymMapScope scope(symTable);
   mlir::omp::ParallelOperands parallelClauseOps;
   llvm::SmallVector<const semantics::Symbol *> parallelReductionSyms;
   genParallelClauses(converter, semaCtx, stmtCtx, item->clauses, loc,

I think adding scopes makes sense for these constructs.
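For readers unfamiliar with the mechanism: `lower::SymMapScope` is an RAII helper whose constructor pushes a fresh scope onto the symbol table and whose destructor pops it, so bindings created while lowering a nested construct don't leak into the enclosing one. A minimal standalone sketch of the idea — the `ScopedSymMap`/`Scope` names here are illustrative, not flang's actual API:

```cpp
#include <map>
#include <string>
#include <vector>

// Toy symbol table: a stack of scopes, looked up innermost-first.
class ScopedSymMap {
  std::vector<std::map<std::string, int>> scopes;

public:
  ScopedSymMap() { scopes.emplace_back(); } // start with one outer scope
  void bind(const std::string &name, int value) { scopes.back()[name] = value; }
  const int *lookup(const std::string &name) const {
    for (auto it = scopes.rbegin(); it != scopes.rend(); ++it) {
      auto found = it->find(name);
      if (found != it->end())
        return &found->second;
    }
    return nullptr;
  }
  void push() { scopes.emplace_back(); }
  void pop() { scopes.pop_back(); }
};

// RAII guard mirroring what lower::SymMapScope does: push on entry, pop on exit.
class Scope {
  ScopedSymMap &table;

public:
  explicit Scope(ScopedSymMap &t) : table(t) { table.push(); }
  ~Scope() { table.pop(); }
};

// Binding the private copy inside a scope means it stops shadowing the
// outer (e.g. mapped) binding once lowering of the nested construct ends.
bool demo() {
  ScopedSymMap table;
  table.bind("i", 1); // outer binding, e.g. the mapped copy inside omp.target
  {
    Scope scope(table); // entering the nested parallel/teams construct
    table.bind("i", 2); // private copy shadows the outer binding
    if (*table.lookup("i") != 2)
      return false;
  } // scope popped: the private binding is gone
  return *table.lookup("i") == 1;
}
```

Without such a scope, the binding for the privatized `i` installed while lowering `omp.parallel` keeps pointing at the host `hlfir.declare` outside `omp.target`, which is exactly the "value defined outside the region" failure shown above.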

This produces correct IR for the sample that @kparzysz shared (and also if we modify it to use `teams` instead of `parallel`). Running `check-flang` with the above changes, all tests pass except for one: Lower/OpenMP/copyin.f90.

The reason this test fails is that `sections` is handled differently from other constructs. In particular, if we have a `parallel sections` construct, the genOMP([...], OpenMPSectionsConstruct) function is not called from the parent genOMP(....) call (e.g. the call to generate the parallel op). So, the scope added (by the above diff) to `parallel` is popped before we generate the sections op. This special handling of sections was added in #97858; in particular, see line 2086 in OpenMP.cpp in #97858 for the change I am referring to. @tblah I am not sure how difficult it would be to go back to making sections generation nested within parent constructs!

@tblah
Contributor

tblah commented Jun 12, 2025

Thanks for taking a look at these bugs Kareem.

> @tblah I am not sure how difficult it would be to go back to making sections generation nested within parent constructs!

I definitely tried at the time and wasn't able to find a less hacky solution. The problem is the parser::OmpSectionBlocks argument for genSectionsOp. I tried something like this just now

diff --git a/flang/lib/Lower/OpenMP/OpenMP.cpp b/flang/lib/Lower/OpenMP/OpenMP.cpp
index 784749bba5a0..bfee1ddad565 100644
--- a/flang/lib/Lower/OpenMP/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP/OpenMP.cpp
@@ -49,12 +49,16 @@ using namespace Fortran::common::openmp;
 // Code generation helper functions
 //===----------------------------------------------------------------------===//
 
-static void genOMPDispatch(lower::AbstractConverter &converter,
-                           lower::SymMap &symTable,
-                           semantics::SemanticsContext &semaCtx,
-                           lower::pft::Evaluation &eval, mlir::Location loc,
-                           const ConstructQueue &queue,
-                           ConstructQueue::const_iterator item);
+/// Pointer must not be null. Reference types aren't allowed inside of
+/// std::variant.
+using ExtraCodegenInfo = std::variant<const parser::OmpSectionBlocks *>;
+
+static void
+genOMPDispatch(lower::AbstractConverter &converter, lower::SymMap &symTable,
+               semantics::SemanticsContext &semaCtx,
+               lower::pft::Evaluation &eval, mlir::Location loc,
+               const ConstructQueue &queue, ConstructQueue::const_iterator item,
+               const std::optional<ExtraCodegenInfo> &extraInfo = std::nullopt);
 
 static void processHostEvalClauses(lower::AbstractConverter &converter,
                                    semantics::SemanticsContext &semaCtx,
@@ -3808,7 +3812,8 @@ static void genOMPDispatch(lower::AbstractConverter &converter,
                            semantics::SemanticsContext &semaCtx,
                            lower::pft::Evaluation &eval, mlir::Location loc,
                            const ConstructQueue &queue,
-                           ConstructQueue::const_iterator item) {
+                           ConstructQueue::const_iterator item,
+                           const std::optional<ExtraCodegenInfo> &extraInfo) {
   assert(item != queue.end());
 
   lower::StatementContext stmtCtx;
@@ -3875,10 +3880,16 @@ static void genOMPDispatch(lower::AbstractConverter &converter,
     // Lowered in the enclosing genSectionsOp.
     break;
   case llvm::omp::Directive::OMPD_sections:
-    // Called directly from genOMP([...], OpenMPSectionsConstruct) because it
-    // has a different prototype.
-    // This code path is still taken when iterating through the construct queue
-    // in genBodyOfOp
+    if (extraInfo) {
+      if (const auto *sectionBlocks =
+              std::get_if<const parser::OmpSectionBlocks *>(&*extraInfo)) {
+        assert(*sectionBlocks && "Must not store nullptr in ExtraLoweringInfo");
+        newOp = genSectionsOp(converter, symTable, semaCtx, eval, loc, queue,
+                              item, **sectionBlocks);
+        break;
+      }
+    }
+    llvm_unreachable("OMPD_sections without parser::OmpSectionBlocks");
     break;
   case llvm::omp::Directive::OMPD_simd:
     newOp =
@@ -4438,20 +4449,8 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
                           eval, source, directive, clauses)};
   ConstructQueue::iterator next = queue.begin();
   // Generate constructs that come first e.g. Parallel
-  while (next != queue.end() &&
-         next->id != llvm::omp::Directive::OMPD_sections) {
-    genOMPDispatch(converter, symTable, semaCtx, eval, currentLocation, queue,
-                   next);
-    next = std::next(next);
-  }
-
-  // call genSectionsOp directly (not via genOMPDispatch) so that we can add the
-  // sectionBlocks argument
-  assert(next != queue.end());
-  assert(next->id == llvm::omp::Directive::OMPD_sections);
-  genSectionsOp(converter, symTable, semaCtx, eval, currentLocation, queue,
-                next, sectionBlocks);
-  assert(std::next(next) == queue.end());
+  genOMPDispatch(converter, symTable, semaCtx, eval, currentLocation, queue,
+                 next, ExtraCodegenInfo{&sectionBlocks});
 }
 
 static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,

This doesn't work because the extra argument isn't added when the call comes from a `parallel sections` construct. I don't think there's any good way to avoid constructing each section inside of the overall sections construct, but I would be very happy if you had a better idea.

@ergawy
Member

ergawy commented Jun 12, 2025

Not the best solution but there is a precedent for using a global to handle cross-construct communication:

diff --git a/flang/lib/Lower/OpenMP/OpenMP.cpp b/flang/lib/Lower/OpenMP/OpenMP.cpp
index c13fa471978d..d14464f0e854 100644
--- a/flang/lib/Lower/OpenMP/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP/OpenMP.cpp
@@ -201,6 +201,8 @@ private:
 /// structures, but it will probably still require some further work to support
 /// reverse offloading.
 static llvm::SmallVector<HostEvalInfo, 0> hostEvalInfo;
+static llvm::SmallVector<const parser::OpenMPSectionsConstruct *, 0>
+    sectionsStack;
 
 /// Bind symbols to their corresponding entry block arguments.
 ///
@@ -2220,8 +2222,12 @@ static mlir::omp::SectionsOp
 genSectionsOp(lower::AbstractConverter &converter, lower::SymMap &symTable,
               semantics::SemanticsContext &semaCtx,
               lower::pft::Evaluation &eval, mlir::Location loc,
-              const ConstructQueue &queue, ConstructQueue::const_iterator item,
-              const parser::OmpSectionBlocks &sectionBlocks) {
+              const ConstructQueue &queue,
+              ConstructQueue::const_iterator item) {
+  assert(!sectionsStack.empty());
+  const auto &sectionBlocks =
+      std::get<parser::OmpSectionBlocks>(sectionsStack.back()->t);
+  sectionsStack.pop_back();
   mlir::omp::SectionsOperands clauseOps;
   llvm::SmallVector<const semantics::Symbol *> reductionSyms;
   genSectionsClauses(converter, semaCtx, item->clauses, loc, clauseOps,
@@ -2668,6 +2674,7 @@ genTeamsOp(lower::AbstractConverter &converter, lower::SymMap &symTable,
            semantics::SemanticsContext &semaCtx, lower::pft::Evaluation &eval,
            mlir::Location loc, const ConstructQueue &queue,
            ConstructQueue::const_iterator item) {
+  lower::SymMapScope scope(symTable);
   mlir::omp::TeamsOperands clauseOps;
   llvm::SmallVector<const semantics::Symbol *> reductionSyms;
   genTeamsClauses(converter, semaCtx, stmtCtx, item->clauses, loc, clauseOps,
@@ -2975,6 +2982,7 @@ static mlir::omp::ParallelOp genStandaloneParallel(
     lower::StatementContext &stmtCtx, semantics::SemanticsContext &semaCtx,
     lower::pft::Evaluation &eval, mlir::Location loc,
     const ConstructQueue &queue, ConstructQueue::const_iterator item) {
+  lower::SymMapScope scope(symTable);
   mlir::omp::ParallelOperands parallelClauseOps;
   llvm::SmallVector<const semantics::Symbol *> parallelReductionSyms;
   genParallelClauses(converter, semaCtx, stmtCtx, item->clauses, loc,
@@ -3462,6 +3470,7 @@ static void genOMPDispatch(lower::AbstractConverter &converter,
     // has a different prototype.
     // This code path is still taken when iterating through the construct queue
     // in genBodyOfOp
+    genSectionsOp(converter, symTable, semaCtx, eval, loc, queue, item);
     break;
   case llvm::omp::Directive::OMPD_simd:
     newOp =
@@ -4143,22 +4152,9 @@ static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,
   ConstructQueue queue{
       buildConstructQueue(converter.getFirOpBuilder().getModule(), semaCtx,
                           eval, source, directive, clauses)};
-  ConstructQueue::iterator next = queue.begin();
-  // Generate constructs that come first e.g. Parallel
-  while (next != queue.end() &&
-         next->id != llvm::omp::Directive::OMPD_sections) {
-    genOMPDispatch(converter, symTable, semaCtx, eval, currentLocation, queue,
-                   next);
-    next = std::next(next);
-  }
-
-  // call genSectionsOp directly (not via genOMPDispatch) so that we can add the
-  // sectionBlocks argument
-  assert(next != queue.end());
-  assert(next->id == llvm::omp::Directive::OMPD_sections);
-  genSectionsOp(converter, symTable, semaCtx, eval, currentLocation, queue,
-                next, sectionBlocks);
-  assert(std::next(next) == queue.end());
+  sectionsStack.push_back(&sectionsConstruct);
+  genOMPDispatch(converter, symTable, semaCtx, eval, currentLocation, queue,
+                 queue.begin());
 }
 
 static void genOMP(lower::AbstractConverter &converter, lower::SymMap &symTable,

This passes all lit tests and also handles the target [parallel|teams] private(...) cases.
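Stripped to its essentials, the pattern in the diff is: the caller pushes the construct-specific data onto a file-scope stack before entering the generic dispatcher, and the one handler that needs that data pops it. A toy version, with illustrative names rather than the actual flang functions:

```cpp
#include <cassert>
#include <string>
#include <vector>

// File-scope stack used to smuggle extra data past a fixed dispatch
// signature, mirroring the sectionsStack approach in the diff above.
static std::vector<const std::string *> pendingSectionBlocks;

// Handler with the "narrow" prototype shared by all gen*Op functions:
// it recovers its extra payload from the stack instead of a parameter.
std::string genSections() {
  assert(!pendingSectionBlocks.empty() && "caller must push before dispatch");
  const std::string *blocks = pendingSectionBlocks.back();
  pendingSectionBlocks.pop_back();
  return "lowered sections with " + *blocks;
}

// Generic dispatcher whose signature cannot carry construct-specific data.
std::string dispatch(const std::string &directive) {
  if (directive == "sections")
    return genSections();
  return "lowered " + directive;
}

// Entry point that owns the payload: push, then go through the dispatcher,
// so nested constructs (e.g. parallel sections) also reach genSections.
std::string lowerSectionsConstruct(const std::string &blocks) {
  pendingSectionBlocks.push_back(&blocks);
  return dispatch("sections");
}
```

The key property is that the push happens once at the outermost entry point, so the handler finds its payload regardless of how many enclosing constructs the dispatcher lowered in between.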

@tblah
Contributor

tblah commented Jun 12, 2025

Nice solution. It is a shame to have to use global variables but I haven't got any better ideas and I guess it is okay if there is already a precedent.

@ergawy
Member

ergawy commented Jun 12, 2025

> Nice solution. It is a shame to have to use global variables but I haven't got any better ideas and I guess it is okay if there is already a precedent.

I will open two PRs then, one for the sections changes and one for the teams|parallel scope changes (will do that tomorrow). @mrkajetanp I think it is still useful to merge this PR, since the PRs I will open might need some discussion before merging.

@mrkajetanp mrkajetanp merged commit f12b1ed into llvm:main Jun 12, 2025
11 checks passed
ergawy added a commit that referenced this pull request Jun 13, 2025
… `genXXXOp` functions

Unifies the prototype of `genSectionsOp` to match other op generators.
Doing so, we are able to call `genSectionsOp` directly from
`genOMPDispatch` instead of the special handling needed now to pass the
section blocks. This is useful because now we can handle symbol mapping
scopes more easily for nested OpenMP directives. See
#143706 (comment)
and the following discussion for more info.
tomtor pushed a commit to tomtor/llvm-project that referenced this pull request Jun 14, 2025
@skatrak
Member

skatrak commented Jun 16, 2025

> Nice solution. It is a shame to have to use global variables but I haven't got any better ideas and I guess it is okay if there is already a precedent.

Yeah, I didn't like having to add the hostEvalInfo global initially either. In mlir::LLVM::ModuleTranslation we have StackFrame and associated methods for these situations. I think we might want to replicate that feature in Flang lowering and remove the need for globals.
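For context, the `StackFrame` mechanism referenced here keeps per-translation state on a stack owned by the translation object rather than in file-scope globals, with RAII frames pushed for the duration of a nested region. A rough sketch of the shape — not the actual MLIR API, just an illustration of the design:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Polymorphic base so heterogeneous frame types can share one stack.
struct FrameBase {
  virtual ~FrameBase() = default;
};

struct Lowering {
  std::vector<std::unique_ptr<FrameBase>> frames;

  // RAII guard: pushes a frame for the lifetime of a nested lowering step,
  // the moral equivalent of ModuleTranslation's stack frames.
  template <typename T> class FrameGuard {
    Lowering &owner;

  public:
    FrameGuard(Lowering &l, T frame) : owner(l) {
      owner.frames.push_back(std::make_unique<T>(std::move(frame)));
    }
    ~FrameGuard() { owner.frames.pop_back(); }
  };

  // Walk inside-out to find the innermost frame of the requested type.
  template <typename T> T *findFrame() {
    for (auto it = frames.rbegin(); it != frames.rend(); ++it)
      if (auto *f = dynamic_cast<T *>(it->get()))
        return f;
    return nullptr;
  }
};

// Example frame carrying the data the sections handler would need.
struct SectionsFrame : FrameBase {
  int numBlocks;
  explicit SectionsFrame(int n) : numBlocks(n) {}
};

int demo() {
  Lowering lowering;
  Lowering::FrameGuard<SectionsFrame> guard(lowering, SectionsFrame{3});
  // Deep inside dispatch, a handler can recover the innermost frame:
  SectionsFrame *f = lowering.findFrame<SectionsFrame>();
  return f ? f->numBlocks : -1;
}
```

Because the stack lives on the lowering object and frames are popped by destructors, the state cannot outlive the construct that pushed it, and two concurrent lowerings cannot interfere the way shared globals can.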

ergawy added a commit that referenced this pull request Jun 17, 2025
… `genXXXOp` functions (#144013)

Unifies the prototype of `genSectionsOp` to match other op generators.
Doing so, we are able to call `genSectionsOp` directly from
`genOMPDispatch` instead of the special handling needed now to pass the
section blocks. This is useful because now we can handle symbol mapping
scopes more easily for nested OpenMP directives. See

#143706 (comment)
and the following discussion for more info.
llvm-sync bot pushed a commit to arm/arm-toolchain that referenced this pull request Jun 17, 2025
ajaden-codes pushed a commit to Jaddyen/llvm-project that referenced this pull request Jun 17, 2025
fschlimb pushed a commit to fschlimb/llvm-project that referenced this pull request Jun 18, 2025
Labels
flang:fir-hlfir flang:openmp flang Flang issues not falling into any other category
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Flang][OpenMP] private clause on target teams directive violates assertion
6 participants