[CIR] Upstream support for range-based for loops #138176


Merged: 2 commits merged into llvm:main on May 1, 2025

Conversation

andykaylor
Contributor

This upstreams the code needed to handle CXXForRangeStmt.

This upstreams the code needed to handle CXXForRangeStmt.
@llvmbot llvmbot added clang Clang issues not falling into any other category ClangIR Anything related to the ClangIR project labels May 1, 2025
@llvmbot
Member

llvmbot commented May 1, 2025

@llvm/pr-subscribers-clangir

Author: Andy Kaylor (andykaylor)

Changes

This upstreams the code needed to handle CXXForRangeStmt.


Full diff: https://github.com/llvm/llvm-project/pull/138176.diff

5 Files Affected:

  • (modified) clang/lib/CIR/CodeGen/CIRGenExpr.cpp (+35)
  • (modified) clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp (+3)
  • (modified) clang/lib/CIR/CodeGen/CIRGenFunction.h (+6)
  • (modified) clang/lib/CIR/CodeGen/CIRGenStmt.cpp (+79-1)
  • (modified) clang/test/CIR/CodeGen/loop.cpp (+111)
diff --git a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
index da5a0b97a395e..471c8b3975d96 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExpr.cpp
@@ -948,6 +948,41 @@ void CIRGenFunction::emitIgnoredExpr(const Expr *e) {
   emitLValue(e);
 }
 
+Address CIRGenFunction::emitArrayToPointerDecay(const Expr *e) {
+  assert(e->getType()->isArrayType() &&
+         "Array to pointer decay must have array source type!");
+
+  // Expressions of array type can't be bitfields or vector elements.
+  LValue lv = emitLValue(e);
+  Address addr = lv.getAddress();
+
+  // If the array type was an incomplete type, we need to make sure
+  // the decay ends up being the right type.
+  auto lvalueAddrTy = mlir::cast<cir::PointerType>(addr.getPointer().getType());
+
+  if (e->getType()->isVariableArrayType())
+    return addr;
+
+  auto pointeeTy = mlir::cast<cir::ArrayType>(lvalueAddrTy.getPointee());
+
+  mlir::Type arrayTy = convertType(e->getType());
+  assert(mlir::isa<cir::ArrayType>(arrayTy) && "expected array");
+  assert(pointeeTy == arrayTy);
+
+  // The result of this decay conversion points to an array element within the
+  // base lvalue. However, since TBAA currently does not support representing
+  // accesses to elements of member arrays, we conservatively represent accesses
+  // to the pointee object as if it had no base lvalue specified.
+  // TODO: Support TBAA for member arrays.
+  QualType eltType = e->getType()->castAsArrayTypeUnsafe()->getElementType();
+  assert(!cir::MissingFeatures::opTBAA());
+
+  mlir::Value ptr = builder.maybeBuildArrayDecay(
+      cgm.getLoc(e->getSourceRange()), addr.getPointer(),
+      convertTypeForMem(eltType));
+  return Address(ptr, addr.getAlignment());
+}
+
 /// Emit an `if` on a boolean condition, filling `then` and `else` into
 /// appropriated regions.
 mlir::LogicalResult CIRGenFunction::emitIfOnBoolExpr(const Expr *cond,
diff --git a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
index 78eb3cbd430bc..423cddd374e8f 100644
--- a/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenExprScalar.cpp
@@ -1567,6 +1567,9 @@ mlir::Value ScalarExprEmitter::VisitCastExpr(CastExpr *ce) {
     return v;
   }
 
+  case CK_ArrayToPointerDecay:
+    return cgf.emitArrayToPointerDecay(subExpr).getPointer();
+
   case CK_NullToPointer: {
     if (mustVisitNullValue(subExpr))
       cgf.emitIgnoredExpr(subExpr);
diff --git a/clang/lib/CIR/CodeGen/CIRGenFunction.h b/clang/lib/CIR/CodeGen/CIRGenFunction.h
index d50abfcfbc867..ac5d39fc61795 100644
--- a/clang/lib/CIR/CodeGen/CIRGenFunction.h
+++ b/clang/lib/CIR/CodeGen/CIRGenFunction.h
@@ -449,6 +449,8 @@ class CIRGenFunction : public CIRGenTypeCache {
 
   LValue emitArraySubscriptExpr(const clang::ArraySubscriptExpr *e);
 
+  Address emitArrayToPointerDecay(const Expr *array);
+
   AutoVarEmission emitAutoVarAlloca(const clang::VarDecl &d);
 
   /// Emit code and set up symbol table for a variable declaration with auto,
@@ -485,6 +487,10 @@ class CIRGenFunction : public CIRGenTypeCache {
   LValue emitCompoundAssignmentLValue(const clang::CompoundAssignOperator *e);
 
   mlir::LogicalResult emitContinueStmt(const clang::ContinueStmt &s);
+
+  mlir::LogicalResult emitCXXForRangeStmt(const CXXForRangeStmt &s,
+                                          llvm::ArrayRef<const Attr *> attrs);
+
   mlir::LogicalResult emitDoStmt(const clang::DoStmt &s);
 
   /// Emit an expression as an initializer for an object (variable, field, etc.)
diff --git a/clang/lib/CIR/CodeGen/CIRGenStmt.cpp b/clang/lib/CIR/CodeGen/CIRGenStmt.cpp
index dffa71046df1d..ee4dcc861a1f2 100644
--- a/clang/lib/CIR/CodeGen/CIRGenStmt.cpp
+++ b/clang/lib/CIR/CodeGen/CIRGenStmt.cpp
@@ -97,6 +97,8 @@ mlir::LogicalResult CIRGenFunction::emitStmt(const Stmt *s,
     return emitWhileStmt(cast<WhileStmt>(*s));
   case Stmt::DoStmtClass:
     return emitDoStmt(cast<DoStmt>(*s));
+  case Stmt::CXXForRangeStmtClass:
+    return emitCXXForRangeStmt(cast<CXXForRangeStmt>(*s), attr);
   case Stmt::OpenACCComputeConstructClass:
     return emitOpenACCComputeConstruct(cast<OpenACCComputeConstruct>(*s));
   case Stmt::OpenACCLoopConstructClass:
@@ -137,7 +139,6 @@ mlir::LogicalResult CIRGenFunction::emitStmt(const Stmt *s,
   case Stmt::CoroutineBodyStmtClass:
   case Stmt::CoreturnStmtClass:
   case Stmt::CXXTryStmtClass:
-  case Stmt::CXXForRangeStmtClass:
   case Stmt::IndirectGotoStmtClass:
   case Stmt::GCCAsmStmtClass:
   case Stmt::MSAsmStmtClass:
@@ -547,6 +548,83 @@ mlir::LogicalResult CIRGenFunction::emitSwitchCase(const SwitchCase &s,
   llvm_unreachable("expect case or default stmt");
 }
 
+mlir::LogicalResult
+CIRGenFunction::emitCXXForRangeStmt(const CXXForRangeStmt &s,
+                                    ArrayRef<const Attr *> forAttrs) {
+  cir::ForOp forOp;
+
+  // TODO(cir): pass in array of attributes.
+  auto forStmtBuilder = [&]() -> mlir::LogicalResult {
+    mlir::LogicalResult loopRes = mlir::success();
+    // Evaluate the first pieces before the loop.
+    if (s.getInit())
+      if (emitStmt(s.getInit(), /*useCurrentScope=*/true).failed())
+        return mlir::failure();
+    if (emitStmt(s.getRangeStmt(), /*useCurrentScope=*/true).failed())
+      return mlir::failure();
+    if (emitStmt(s.getBeginStmt(), /*useCurrentScope=*/true).failed())
+      return mlir::failure();
+    if (emitStmt(s.getEndStmt(), /*useCurrentScope=*/true).failed())
+      return mlir::failure();
+
+    assert(!cir::MissingFeatures::loopInfoStack());
+    // From LLVM: if there are any cleanups between here and the loop-exit
+    // scope, create a block to stage a loop exit along.
+    // We probably already do the right thing because of ScopeOp, but make
+    // sure we handle all cases.
+    assert(!cir::MissingFeatures::requiresCleanups());
+
+    forOp = builder.createFor(
+        getLoc(s.getSourceRange()),
+        /*condBuilder=*/
+        [&](mlir::OpBuilder &b, mlir::Location loc) {
+          assert(!cir::MissingFeatures::createProfileWeightsForLoop());
+          assert(!cir::MissingFeatures::emitCondLikelihoodViaExpectIntrinsic());
+          mlir::Value condVal = evaluateExprAsBool(s.getCond());
+          builder.createCondition(condVal);
+        },
+        /*bodyBuilder=*/
+        [&](mlir::OpBuilder &b, mlir::Location loc) {
+          // https://en.cppreference.com/w/cpp/language/for
+          // In C++ the scope of the init-statement and the scope of
+          // statement are one and the same.
+          bool useCurrentScope = true;
+          if (emitStmt(s.getLoopVarStmt(), useCurrentScope).failed())
+            loopRes = mlir::failure();
+          if (emitStmt(s.getBody(), useCurrentScope).failed())
+            loopRes = mlir::failure();
+          emitStopPoint(&s);
+        },
+        /*stepBuilder=*/
+        [&](mlir::OpBuilder &b, mlir::Location loc) {
+          if (s.getInc())
+            if (emitStmt(s.getInc(), /*useCurrentScope=*/true).failed())
+              loopRes = mlir::failure();
+          builder.createYield(loc);
+        });
+    return loopRes;
+  };
+
+  mlir::LogicalResult res = mlir::success();
+  mlir::Location scopeLoc = getLoc(s.getSourceRange());
+  builder.create<cir::ScopeOp>(scopeLoc, /*scopeBuilder=*/
+                               [&](mlir::OpBuilder &b, mlir::Location loc) {
+                                 // Create a cleanup scope for the condition
+                                 // variable cleanups. Logical equivalent from
+                                 // LLVM codegen for LexicalScope
+                                 // ConditionScope(*this, S.getSourceRange())...
+                                 LexicalScope lexScope{
+                                     *this, loc, builder.getInsertionBlock()};
+                                 res = forStmtBuilder();
+                               });
+
+  if (res.failed())
+    return res;
+
+  terminateBody(builder, forOp.getBody(), getLoc(s.getEndLoc()));
+  return mlir::success();
+}
+
 mlir::LogicalResult CIRGenFunction::emitForStmt(const ForStmt &s) {
   cir::ForOp forOp;
 
diff --git a/clang/test/CIR/CodeGen/loop.cpp b/clang/test/CIR/CodeGen/loop.cpp
index c69d5097bbdf7..82fa508d4f869 100644
--- a/clang/test/CIR/CodeGen/loop.cpp
+++ b/clang/test/CIR/CodeGen/loop.cpp
@@ -190,6 +190,117 @@ void l3() {
 // OGCG:   store i32 0, ptr %[[I]], align 4
 // OGCG:   br label %[[FOR_COND]]
 
+void l4() {
+  int a[10];
+  for (int n : a)
+    ;
+}
+
+// CIR: cir.func @_Z2l4v
+// CIR:   %[[A_ADDR:.*]] = cir.alloca !cir.array<!s32i x 10>, !cir.ptr<!cir.array<!s32i x 10>>, ["a"] {alignment = 16 : i64}
+// CIR:   cir.scope {
+// CIR:     %[[RANGE_ADDR:.*]] = cir.alloca !cir.ptr<!cir.array<!s32i x 10>>, !cir.ptr<!cir.ptr<!cir.array<!s32i x 10>>>, ["__range1", init, const] {alignment = 8 : i64}
+// CIR:     %[[BEGIN_ADDR:.*]] = cir.alloca !cir.ptr<!s32i>, !cir.ptr<!cir.ptr<!s32i>>, ["__begin1", init] {alignment = 8 : i64}
+// CIR:     %[[END_ADDR:.*]] = cir.alloca !cir.ptr<!s32i>, !cir.ptr<!cir.ptr<!s32i>>, ["__end1", init] {alignment = 8 : i64}
+// CIR:     %[[N_ADDR:.*]] = cir.alloca !s32i, !cir.ptr<!s32i>, ["n", init] {alignment = 4 : i64}
+// CIR:     cir.store %[[A_ADDR]], %[[RANGE_ADDR]] : !cir.ptr<!cir.array<!s32i x 10>>, !cir.ptr<!cir.ptr<!cir.array<!s32i x 10>>>
+// CIR:     %[[RANGE_LOAD:.*]] = cir.load %[[RANGE_ADDR]] : !cir.ptr<!cir.ptr<!cir.array<!s32i x 10>>>, !cir.ptr<!cir.array<!s32i x 10>>
+// CIR:     %[[RANGE_CAST:.*]] = cir.cast(array_to_ptrdecay, %[[RANGE_LOAD]] : !cir.ptr<!cir.array<!s32i x 10>>), !cir.ptr<!s32i>
+// CIR:     cir.store %[[RANGE_CAST]], %[[BEGIN_ADDR]] : !cir.ptr<!s32i>, !cir.ptr<!cir.ptr<!s32i>>
+// CIR:     %[[BEGIN:.*]] = cir.load %[[RANGE_ADDR]] : !cir.ptr<!cir.ptr<!cir.array<!s32i x 10>>>, !cir.ptr<!cir.array<!s32i x 10>>
+// CIR:     %[[BEGIN_CAST:.*]] = cir.cast(array_to_ptrdecay, %[[BEGIN]] : !cir.ptr<!cir.array<!s32i x 10>>), !cir.ptr<!s32i>
+// CIR:     %[[TEN:.*]] = cir.const #cir.int<10> : !s64i
+// CIR:     %[[END_PTR:.*]] = cir.ptr_stride(%[[BEGIN_CAST]] : !cir.ptr<!s32i>, %[[TEN]] : !s64i), !cir.ptr<!s32i>
+// CIR:     cir.store %[[END_PTR]], %[[END_ADDR]] : !cir.ptr<!s32i>, !cir.ptr<!cir.ptr<!s32i>>
+// CIR:     cir.for : cond {
+// CIR:       %[[CUR:.*]] = cir.load %[[BEGIN_ADDR]] : !cir.ptr<!cir.ptr<!s32i>>, !cir.ptr<!s32i>
+// CIR:       %[[END:.*]] = cir.load %[[END_ADDR]] : !cir.ptr<!cir.ptr<!s32i>>, !cir.ptr<!s32i>
+// CIR:       %[[CMP:.*]] = cir.cmp(ne, %[[CUR]], %[[END]]) : !cir.ptr<!s32i>, !cir.bool
+// CIR:       cir.condition(%[[CMP]])
+// CIR:     } body {
+// CIR:       %[[CUR:.*]] = cir.load deref %[[BEGIN_ADDR]] : !cir.ptr<!cir.ptr<!s32i>>, !cir.ptr<!s32i>
+// CIR:       %[[N:.*]] = cir.load %[[CUR]] : !cir.ptr<!s32i>, !s32i
+// CIR:       cir.store %[[N]], %[[N_ADDR]] : !s32i, !cir.ptr<!s32i>
+// CIR:       cir.yield
+// CIR:     } step {
+// CIR:       %[[CUR:.*]] = cir.load %[[BEGIN_ADDR]] : !cir.ptr<!cir.ptr<!s32i>>, !cir.ptr<!s32i>
+// CIR:       %[[ONE:.*]] = cir.const #cir.int<1> : !s32i
+// CIR:       %[[NEXT:.*]] = cir.ptr_stride(%[[CUR]] : !cir.ptr<!s32i>, %[[ONE]] : !s32i), !cir.ptr<!s32i>
+// CIR:       cir.store %[[NEXT]], %[[BEGIN_ADDR]] : !cir.ptr<!s32i>, !cir.ptr<!cir.ptr<!s32i>>
+// CIR:       cir.yield
+// CIR:     }
+// CIR:   }
+
+// LLVM: define void @_Z2l4v() {
+// LLVM:   %[[RANGE_ADDR:.*]] = alloca ptr, i64 1, align 8
+// LLVM:   %[[BEGIN_ADDR:.*]] = alloca ptr, i64 1, align 8
+// LLVM:   %[[END_ADDR:.*]] = alloca ptr, i64 1, align 8
+// LLVM:   %[[N_ADDR:.*]] = alloca i32, i64 1, align 4
+// LLVM:   %[[A_ADDR:.*]] = alloca [10 x i32], i64 1, align 16
+// LLVM:   br label %[[SETUP:.*]]
+// LLVM: [[SETUP]]:
+// LLVM:   store ptr %[[A_ADDR]], ptr %[[RANGE_ADDR]], align 8
+// LLVM:   %[[BEGIN:.*]] = load ptr, ptr %[[RANGE_ADDR]], align 8
+// LLVM:   %[[BEGIN_CAST:.*]] = getelementptr i32, ptr %[[BEGIN]], i32 0
+// LLVM:   store ptr %[[BEGIN_CAST]], ptr %[[BEGIN_ADDR]], align 8
+// LLVM:   %[[RANGE:.*]] = load ptr, ptr %[[RANGE_ADDR]], align 8
+// LLVM:   %[[RANGE_CAST:.*]] = getelementptr i32, ptr %[[RANGE]], i32 0
+// LLVM:   %[[END_PTR:.*]] = getelementptr i32, ptr %[[RANGE_CAST]], i64 10
+// LLVM:   store ptr %[[END_PTR]], ptr %[[END_ADDR]], align 8
+// LLVM:   br label %[[COND:.*]]
+// LLVM: [[COND]]:
+// LLVM:   %[[BEGIN:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// LLVM:   %[[END:.*]] = load ptr, ptr %[[END_ADDR]], align 8
+// LLVM:   %[[CMP:.*]] = icmp ne ptr %[[BEGIN]], %[[END]]
+// LLVM:   br i1 %[[CMP]], label %[[BODY:.*]], label %[[END:.*]]
+// LLVM: [[BODY]]:
+// LLVM:   %[[CUR:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// LLVM:   %[[A_CUR:.*]] = load i32, ptr %[[CUR]], align 4
+// LLVM:   store i32 %[[A_CUR]], ptr %[[N_ADDR]], align 4
+// LLVM:   br label %[[STEP:.*]]
+// LLVM: [[STEP]]:
+// LLVM:   %[[BEGIN:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// LLVM:   %[[NEXT:.*]] = getelementptr i32, ptr %[[BEGIN]], i64 1
+// LLVM:   store ptr %[[NEXT]], ptr %[[BEGIN_ADDR]], align 8
+// LLVM:   br label %[[COND]]
+// LLVM: [[END]]:
+// LLVM:   br label %[[EXIT:.*]]
+// LLVM: [[EXIT]]:
+// LLVM:   ret void
+
+// OGCG: define{{.*}} void @_Z2l4v()
+// OGCG:   %[[A_ADDR:.*]] = alloca [10 x i32], align 16
+// OGCG:   %[[RANGE_ADDR:.*]] = alloca ptr, align 8
+// OGCG:   %[[BEGIN_ADDR:.*]] = alloca ptr, align 8
+// OGCG:   %[[END_ADDR:.*]] = alloca ptr, align 8
+// OGCG:   %[[N_ADDR:.*]] = alloca i32, align 4
+// OGCG:   store ptr %[[A_ADDR]], ptr %[[RANGE_ADDR]], align 8
+// OGCG:   %[[BEGIN:.*]] = load ptr, ptr %[[RANGE_ADDR]], align 8
+// OGCG:   %[[BEGIN_CAST:.*]] = getelementptr inbounds [10 x i32], ptr %[[BEGIN]], i64 0, i64 0
+// OGCG:   store ptr %[[BEGIN_CAST]], ptr %[[BEGIN_ADDR]], align 8
+// OGCG:   %[[RANGE:.*]] = load ptr, ptr %[[RANGE_ADDR]], align 8
+// OGCG:   %[[RANGE_CAST:.*]] = getelementptr inbounds [10 x i32], ptr %[[RANGE]], i64 0, i64 0
+// OGCG:   %[[END_PTR:.*]] = getelementptr inbounds i32, ptr %[[RANGE_CAST]], i64 10
+// OGCG:   store ptr %[[END_PTR]], ptr %[[END_ADDR]], align 8
+// OGCG:   br label %[[COND:.*]]
+// OGCG: [[COND]]:
+// OGCG:   %[[BEGIN:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// OGCG:   %[[END:.*]] = load ptr, ptr %[[END_ADDR]], align 8
+// OGCG:   %[[CMP:.*]] = icmp ne ptr %[[BEGIN]], %[[END]]
+// OGCG:   br i1 %[[CMP]], label %[[BODY:.*]], label %[[END:.*]]
+// OGCG: [[BODY]]:
+// OGCG:   %[[CUR:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// OGCG:   %[[A_CUR:.*]] = load i32, ptr %[[CUR]], align 4
+// OGCG:   store i32 %[[A_CUR]], ptr %[[N_ADDR]], align 4
+// OGCG:   br label %[[STEP:.*]]
+// OGCG: [[STEP]]:
+// OGCG:   %[[BEGIN:.*]] = load ptr, ptr %[[BEGIN_ADDR]], align 8
+// OGCG:   %[[NEXT:.*]] = getelementptr inbounds nuw i32, ptr %[[BEGIN]], i32 1
+// OGCG:   store ptr %[[NEXT]], ptr %[[BEGIN_ADDR]], align 8
+// OGCG:   br label %[[COND]]
+// OGCG: [[END]]:
+// OGCG:   ret void
+
 void test_do_while_false() {
   do {
   } while (0);

@llvmbot
Member

llvmbot commented May 1, 2025

@llvm/pr-subscribers-clang

Author: Andy Kaylor (andykaylor)

Changes

This upstreams the code needed to handle CXXForRangeStmt.


Full diff: https://github.com/llvm/llvm-project/pull/138176.diff

@andykaylor andykaylor merged commit a76936f into llvm:main May 1, 2025
11 checks passed
IanWood1 pushed a commit to IanWood1/llvm-project that referenced this pull request May 6, 2025
This upstreams the code needed to handle CXXForRangeStmt.
IanWood1 pushed a commit to IanWood1/llvm-project that referenced this pull request May 6, 2025
This upstreams the code needed to handle CXXForRangeStmt.
IanWood1 pushed a commit to IanWood1/llvm-project that referenced this pull request May 6, 2025
This upstreams the code needed to handle CXXForRangeStmt.
GeorgeARM pushed a commit to GeorgeARM/llvm-project that referenced this pull request May 7, 2025
This upstreams the code needed to handle CXXForRangeStmt.
Ankur-0429 pushed a commit to Ankur-0429/llvm-project that referenced this pull request May 9, 2025
This upstreams the code needed to handle CXXForRangeStmt.