[mlir][mesh, MPI] Mesh2mpi #104566


Merged
merged 15 commits into from Nov 28, 2024

Conversation

fschlimb
Contributor

Pass for lowering Mesh to MPI.
Initial commit lowers UpdateHaloOp only.
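
For illustration, a minimal sketch of the input IR the pass consumes, reconstructed from the `convert-mesh-to-mpi.mlir` test added in this patch (treat spellings such as `halo_sizes` as specific to this PR, not as stable syntax):

```mlir
mesh.mesh @mesh0(shape = 2x2x4)

func.func @update_halo_1d_first(%arg0 : memref<12x12xi8>) {
  // Exchange a 2-row halo on the low side and a 3-row halo on the high side
  // of dim 0 with the neighbors along mesh axis 0; the pass lowers this to
  // memref.subview/memref.copy plus guarded mpi.send/mpi.recv pairs.
  mesh.update_halo %arg0 on @mesh0 split_axes = [[0]] halo_sizes = [2, 3]
      : memref<12x12xi8>
  return
}
```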

@fschlimb fschlimb marked this pull request as draft August 16, 2024 09:09
@llvmbot llvmbot added the mlir label Aug 16, 2024
@llvmbot
Member

llvmbot commented Aug 16, 2024

@llvm/pr-subscribers-mlir

Author: Frank Schlimbach (fschlimb)

Changes

Pass for lowering Mesh to MPI.
Initial commit lowers UpdateHaloOp only.


Patch is 28.98 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/104566.diff

9 Files Affected:

  • (added) mlir/include/mlir/Conversion/MeshToMPI/MeshToMPI.h (+27)
  • (modified) mlir/include/mlir/Conversion/Passes.h (+1)
  • (modified) mlir/include/mlir/Conversion/Passes.td (+17)
  • (modified) mlir/include/mlir/Dialect/Mesh/IR/MeshOps.td (+33)
  • (modified) mlir/lib/Conversion/CMakeLists.txt (+1)
  • (added) mlir/lib/Conversion/MeshToMPI/CMakeLists.txt (+22)
  • (added) mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp (+225)
  • (modified) mlir/lib/Dialect/Mesh/IR/MeshOps.cpp (+19)
  • (added) mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir (+173)
diff --git a/mlir/include/mlir/Conversion/MeshToMPI/MeshToMPI.h b/mlir/include/mlir/Conversion/MeshToMPI/MeshToMPI.h
new file mode 100644
index 00000000000000..6a2c196da45577
--- /dev/null
+++ b/mlir/include/mlir/Conversion/MeshToMPI/MeshToMPI.h
@@ -0,0 +1,27 @@
+//===- MeshToMPI.h - Convert Mesh to MPI dialect --*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef MLIR_CONVERSION_MESHTOMPI_MESHTOMPI_H
+#define MLIR_CONVERSION_MESHTOMPI_MESHTOMPI_H
+
+#include "mlir/Pass/Pass.h"
+#include "mlir/Support/LLVM.h"
+
+namespace mlir {
+class Pass;
+
+#define GEN_PASS_DECL_CONVERTMESHTOMPIPASS
+#include "mlir/Conversion/Passes.h.inc"
+
+/// Lowers Mesh communication operations (updateHalo, allGather, ...)
+/// to MPI primitives.
+std::unique_ptr<Pass> createConvertMeshToMPIPass();
+
+} // namespace mlir
+
+#endif // MLIR_CONVERSION_MESHTOMPI_MESHTOMPI_H
\ No newline at end of file
diff --git a/mlir/include/mlir/Conversion/Passes.h b/mlir/include/mlir/Conversion/Passes.h
index 208f26489d6c39..ad8e98442ab8bc 100644
--- a/mlir/include/mlir/Conversion/Passes.h
+++ b/mlir/include/mlir/Conversion/Passes.h
@@ -51,6 +51,7 @@
 #include "mlir/Conversion/MemRefToEmitC/MemRefToEmitCPass.h"
 #include "mlir/Conversion/MemRefToLLVM/MemRefToLLVM.h"
 #include "mlir/Conversion/MemRefToSPIRV/MemRefToSPIRVPass.h"
+#include "mlir/Conversion/MeshToMPI/MeshToMPI.h"
 #include "mlir/Conversion/NVGPUToNVVM/NVGPUToNVVM.h"
 #include "mlir/Conversion/NVVMToLLVM/NVVMToLLVM.h"
 #include "mlir/Conversion/OpenACCToSCF/ConvertOpenACCToSCF.h"
diff --git a/mlir/include/mlir/Conversion/Passes.td b/mlir/include/mlir/Conversion/Passes.td
index 7bde9e490e4f4e..f9a6f52a22c6ed 100644
--- a/mlir/include/mlir/Conversion/Passes.td
+++ b/mlir/include/mlir/Conversion/Passes.td
@@ -869,6 +869,23 @@ def ConvertMemRefToSPIRV : Pass<"convert-memref-to-spirv"> {
   ];
 }
 
+//===----------------------------------------------------------------------===//
+// MeshToMPI
+//===----------------------------------------------------------------------===//
+
+def ConvertMeshToMPIPass : Pass<"convert-mesh-to-mpi"> {
+  let summary = "Convert Mesh dialect to MPI dialect.";
+  let description = [{
+    This pass converts communication operations
+    from the Mesh dialect to operations from the MPI dialect.
+  }];
+  let dependentDialects = [
+    "memref::MemRefDialect",
+    "mpi::MPIDialect",
+    "scf::SCFDialect"
+  ];
+}
+
 //===----------------------------------------------------------------------===//
 // NVVMToLLVM
 //===----------------------------------------------------------------------===//
diff --git a/mlir/include/mlir/Dialect/Mesh/IR/MeshOps.td b/mlir/include/mlir/Dialect/Mesh/IR/MeshOps.td
index 8f696bbc1a0f6e..9d1684b78f34f2 100644
--- a/mlir/include/mlir/Dialect/Mesh/IR/MeshOps.td
+++ b/mlir/include/mlir/Dialect/Mesh/IR/MeshOps.td
@@ -155,6 +155,39 @@ def Mesh_ProcessLinearIndexOp : Mesh_Op<"process_linear_index", [
   ];
 }
 
+def Mesh_NeighborsLinearIndicesOp : Mesh_Op<"neighbors_linear_indices", [
+  Pure,
+  DeclareOpInterfaceMethods<SymbolUserOpInterface>,
+  DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>
+]> {
+  let summary =
+      "For given split axes get the linear indices of the direct neighbor processes.";
+  let description = [{
+    Example:
+    ```
+    %idx_down, %idx_up = mesh.neighbors_linear_indices on @mesh[$device]
+               split_axes = $split_axes : index, index
+    ```
+    Given `@mesh` with shape `(10, 20, 30)`,
+          `device` = `(1, 2, 3)`
+          and `$split_axes` = `[1]`,
+    it returns the linear indices of the processes at positions `(1, 1, 3)`: `633`
+    and `(1, 3, 3)`: `693`.
+
+    A negative value is returned if `$device` has no neighbor in the given
+    direction along the given `split_axes`.
+  }];
+  let arguments = (ins FlatSymbolRefAttr:$mesh,
+                       Variadic<Index>:$device,
+                       Mesh_MeshAxesAttr:$split_axes);
+  let results = (outs Index:$neighbor_down, Index:$neighbor_up);
+  let assemblyFormat =  [{
+      `on` $mesh `[` $device `]`
+      `split_axes` `=` $split_axes
+      attr-dict `:` type(results)
+  }];
+}
+
 //===----------------------------------------------------------------------===//
 // Sharding operations.
 //===----------------------------------------------------------------------===//
diff --git a/mlir/lib/Conversion/CMakeLists.txt b/mlir/lib/Conversion/CMakeLists.txt
index 813f700c5556e1..3ee237f4e62acd 100644
--- a/mlir/lib/Conversion/CMakeLists.txt
+++ b/mlir/lib/Conversion/CMakeLists.txt
@@ -41,6 +41,7 @@ add_subdirectory(MathToSPIRV)
 add_subdirectory(MemRefToEmitC)
 add_subdirectory(MemRefToLLVM)
 add_subdirectory(MemRefToSPIRV)
+add_subdirectory(MeshToMPI)
 add_subdirectory(NVGPUToNVVM)
 add_subdirectory(NVVMToLLVM)
 add_subdirectory(OpenACCToSCF)
diff --git a/mlir/lib/Conversion/MeshToMPI/CMakeLists.txt b/mlir/lib/Conversion/MeshToMPI/CMakeLists.txt
new file mode 100644
index 00000000000000..95815a683f6d6a
--- /dev/null
+++ b/mlir/lib/Conversion/MeshToMPI/CMakeLists.txt
@@ -0,0 +1,22 @@
+add_mlir_conversion_library(MLIRMeshToMPI
+  MeshToMPI.cpp
+
+  ADDITIONAL_HEADER_DIRS
+  ${MLIR_MAIN_INCLUDE_DIR}/mlir/Conversion/MeshToMPI
+
+  DEPENDS
+  MLIRConversionPassIncGen
+
+  LINK_COMPONENTS
+  Core
+
+  LINK_LIBS PUBLIC
+  MLIRFuncDialect
+  MLIRIR
+  MLIRLinalgTransforms
+  MLIRMemRefDialect
+  MLIRPass
+  MLIRMeshDialect
+  MLIRMPIDialect
+  MLIRTransforms
+  )
diff --git a/mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp b/mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp
new file mode 100644
index 00000000000000..42d885a109ee79
--- /dev/null
+++ b/mlir/lib/Conversion/MeshToMPI/MeshToMPI.cpp
@@ -0,0 +1,225 @@
+//===- MeshToMPI.cpp - Mesh to MPI  dialect conversion -----------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements a translation of Mesh communication ops to MPI ops.
+//
+//===----------------------------------------------------------------------===//
+
+#include "mlir/Conversion/MeshToMPI/MeshToMPI.h"
+
+#include "mlir/Dialect/Arith/IR/Arith.h"
+#include "mlir/Dialect/MPI/IR/MPI.h"
+#include "mlir/Dialect/MemRef/IR/MemRef.h"
+#include "mlir/Dialect/Mesh/IR/MeshOps.h"
+#include "mlir/Dialect/SCF/IR/SCF.h"
+#include "mlir/Dialect/Utils/StaticValueUtils.h"
+#include "mlir/IR/Builders.h"
+#include "mlir/IR/BuiltinAttributes.h"
+#include "mlir/IR/BuiltinTypes.h"
+#include "mlir/IR/PatternMatch.h"
+#include "mlir/Transforms/GreedyPatternRewriteDriver.h"
+
+#define DEBUG_TYPE "mesh-to-mpi"
+#define DBGS() (llvm::dbgs() << "[" DEBUG_TYPE "]: ")
+
+namespace mlir {
+#define GEN_PASS_DEF_CONVERTMESHTOMPIPASS
+#include "mlir/Conversion/Passes.h.inc"
+} // namespace mlir
+
+using namespace mlir;
+using namespace mlir::mesh;
+
+namespace {
+
+// This pattern converts the mesh.update_halo operation to MPI calls
+struct ConvertUpdateHaloOp
+    : public mlir::OpRewritePattern<mlir::mesh::UpdateHaloOp> {
+  using OpRewritePattern::OpRewritePattern;
+
+  mlir::LogicalResult
+  matchAndRewrite(mlir::mesh::UpdateHaloOp op,
+                  mlir::PatternRewriter &rewriter) const override {
+    // Halos are exchanged as 2 blocks per dimension (one for each side: down
+    // and up). It is assumed that the last dim in a default memref is
+    // contiguous, hence iteration starts with the complete halo on the first
+    // dim, which should be contiguous (unless the source is not). The size of
+    // the exchanged data will decrease when iterating over dimensions. That's
+    // good because the halos of the last dim will be the most fragmented.
+    // memref.subview is used to read and write the halo data from and to the
+    // local data. Subviews and halos have dynamic and static values, so
+    // OpFoldResults are used whenever possible.
+
+    SymbolTableCollection symbolTableCollection;
+    auto loc = op.getLoc();
+
+    // convert an OpFoldResult into a Value
+    auto toValue = [&rewriter, &loc](OpFoldResult &v) {
+      return v.is<Value>()
+                 ? v.get<Value>()
+                 : rewriter.create<::mlir::arith::ConstantOp>(
+                       loc,
+                       rewriter.getIndexAttr(
+                           cast<IntegerAttr>(v.get<Attribute>()).getInt()));
+    };
+
+    auto array = op.getInput();
+    auto rank = array.getType().getRank();
+    auto mesh = op.getMesh();
+    auto meshOp = getMesh(op, symbolTableCollection);
+    auto haloSizes = getMixedValues(op.getStaticHaloSizes(),
+                                    op.getDynamicHaloSizes(), rewriter);
+    // subviews need Index values
+    for (auto &sz : haloSizes) {
+      if (sz.is<Value>()) {
+        sz = rewriter
+                 .create<arith::IndexCastOp>(loc, rewriter.getIndexType(),
+                                             sz.get<Value>())
+                 .getResult();
+      }
+    }
+
+    // most of the offset/size/stride data is the same for all dims
+    SmallVector<OpFoldResult> offsets(rank, rewriter.getIndexAttr(0));
+    SmallVector<OpFoldResult> strides(rank, rewriter.getIndexAttr(1));
+    SmallVector<OpFoldResult> shape(rank);
+    // we need the actual shape to compute offsets and sizes
+    for (auto [i, s] : llvm::enumerate(array.getType().getShape())) {
+      if (ShapedType::isDynamic(s)) {
+        // dynamic size: query dimension i at runtime
+        shape[i] = rewriter.create<memref::DimOp>(loc, array, i).getResult();
+      } else {
+        shape[i] = rewriter.getIndexAttr(s);
+      }
+    }
+
+    auto tagAttr = rewriter.getI32IntegerAttr(91); // we just pick something
+    auto tag = rewriter.create<::mlir::arith::ConstantOp>(loc, tagAttr);
+    auto zeroAttr = rewriter.getI32IntegerAttr(0); // for detecting v<0
+    auto zero = rewriter.create<::mlir::arith::ConstantOp>(loc, zeroAttr);
+    SmallVector<Type> indexResultTypes(meshOp.getShape().size(),
+                                       rewriter.getIndexType());
+    auto myMultiIndex =
+        rewriter.create<ProcessMultiIndexOp>(loc, indexResultTypes, mesh)
+            .getResult();
+    // halo sizes are provided for split dimensions only
+    auto currHaloDim = 0;
+
+    for (auto [dim, splitAxes] : llvm::enumerate(op.getSplitAxes())) {
+      if (splitAxes.empty()) {
+        continue;
+      }
+      // Get the linearized ids of the neighbors (down and up) for the
+      // given split
+      auto tmp = rewriter
+                     .create<NeighborsLinearIndicesOp>(loc, mesh, myMultiIndex,
+                                                       splitAxes)
+                     .getResults();
+      // MPI operates on i32...
+      Value neighbourIDs[2] = {rewriter.create<arith::IndexCastOp>(
+                                   loc, rewriter.getI32Type(), tmp[0]),
+                               rewriter.create<arith::IndexCastOp>(
+                                   loc, rewriter.getI32Type(), tmp[1])};
+      // store for later
+      auto orgDimSize = shape[dim];
+      // this dim's offset to the start of the upper halo
+      auto upperOffset = rewriter.create<arith::SubIOp>(
+          loc, toValue(shape[dim]), toValue(haloSizes[currHaloDim * 2 + 1]));
+
+      // Make sure we send/recv in a way that does not lead to a deadlock.
+      // The current approach is far from optimal; it should at least use a
+      // red-black pattern or MPI_Sendrecv.
+      // Also, buffers should be re-used.
+      // Still using temporary contiguous buffers for MPI communication...
+      // Still yielding a "serialized" communication pattern...
+      auto genSendRecv = [&](auto dim, bool upperHalo) {
+        auto orgOffset = offsets[dim];
+        shape[dim] = upperHalo ? haloSizes[currHaloDim * 2 + 1]
+                               : haloSizes[currHaloDim * 2];
+        // Check if we need to send and/or receive
+        // Processes on the mesh borders have only one neighbor
+        auto to = upperHalo ? neighbourIDs[1] : neighbourIDs[0];
+        auto from = upperHalo ? neighbourIDs[0] : neighbourIDs[1];
+        auto hasFrom = rewriter.create<arith::CmpIOp>(
+            loc, arith::CmpIPredicate::sge, from, zero);
+        auto hasTo = rewriter.create<arith::CmpIOp>(
+            loc, arith::CmpIPredicate::sge, to, zero);
+        auto buffer = rewriter.create<memref::AllocOp>(
+            loc, shape, array.getType().getElementType());
+        // if has neighbor: copy halo data from array to buffer and send
+        rewriter.create<scf::IfOp>(
+            loc, hasTo, [&](OpBuilder &builder, Location loc) {
+              offsets[dim] = upperHalo ? OpFoldResult(builder.getIndexAttr(0))
+                                       : OpFoldResult(upperOffset);
+              auto subview = builder.create<memref::SubViewOp>(
+                  loc, array, offsets, shape, strides);
+              builder.create<memref::CopyOp>(loc, subview, buffer);
+              builder.create<mpi::SendOp>(loc, TypeRange{}, buffer, tag, to);
+              builder.create<scf::YieldOp>(loc);
+            });
+        // if has neighbor: receive halo data into buffer and copy to array
+        rewriter.create<scf::IfOp>(
+            loc, hasFrom, [&](OpBuilder &builder, Location loc) {
+              offsets[dim] = upperHalo ? OpFoldResult(upperOffset)
+                                       : OpFoldResult(builder.getIndexAttr(0));
+              builder.create<mpi::RecvOp>(loc, TypeRange{}, buffer, tag, from);
+              auto subview = builder.create<memref::SubViewOp>(
+                  loc, array, offsets, shape, strides);
+              builder.create<memref::CopyOp>(loc, buffer, subview);
+              builder.create<scf::YieldOp>(loc);
+            });
+        rewriter.create<memref::DeallocOp>(loc, buffer);
+        offsets[dim] = orgOffset;
+      };
+
+      genSendRecv(dim, false);
+      genSendRecv(dim, true);
+
+      // prepare shape and offsets for next split dim
+      auto _haloSz =
+          rewriter
+              .create<arith::AddIOp>(loc, toValue(haloSizes[currHaloDim * 2]),
+                                     toValue(haloSizes[currHaloDim * 2 + 1]))
+              .getResult();
+      // the shape for the next halo excludes the halos on both ends of the
+      // current dim
+      shape[dim] =
+          rewriter.create<arith::SubIOp>(loc, toValue(orgDimSize), _haloSz)
+              .getResult();
+      // the offset for the next halo starts after the down halo of the
+      // current dim
+      offsets[dim] = haloSizes[currHaloDim * 2];
+      // on to next halo
+      ++currHaloDim;
+    }
+    rewriter.eraseOp(op);
+    return mlir::success();
+  }
+};
+
+struct ConvertMeshToMPIPass
+    : public impl::ConvertMeshToMPIPassBase<ConvertMeshToMPIPass> {
+  using Base::Base;
+
+  /// Run the dialect converter on the module.
+  void runOnOperation() override {
+    auto *ctx = &getContext();
+    mlir::RewritePatternSet patterns(ctx);
+
+    patterns.insert<ConvertUpdateHaloOp>(ctx);
+
+    (void)mlir::applyPatternsAndFoldGreedily(getOperation(),
+                                             std::move(patterns));
+  }
+};
+
+} // namespace
+
+// Create a pass that converts Mesh to MPI
+std::unique_ptr<::mlir::OperationPass<void>> createConvertMeshToMPIPass() {
+  return std::make_unique<ConvertMeshToMPIPass>();
+}
diff --git a/mlir/lib/Dialect/Mesh/IR/MeshOps.cpp b/mlir/lib/Dialect/Mesh/IR/MeshOps.cpp
index c35020b4c20ccc..f25bbbf8e274b6 100644
--- a/mlir/lib/Dialect/Mesh/IR/MeshOps.cpp
+++ b/mlir/lib/Dialect/Mesh/IR/MeshOps.cpp
@@ -730,6 +730,25 @@ void ProcessLinearIndexOp::getAsmResultNames(
   setNameFn(getResult(), "proc_linear_idx");
 }
 
+//===----------------------------------------------------------------------===//
+// mesh.neighbors_linear_indices op
+//===----------------------------------------------------------------------===//
+
+LogicalResult
+NeighborsLinearIndicesOp::verifySymbolUses(SymbolTableCollection &symbolTable) {
+  auto mesh = ::getMeshAndVerify(getOperation(), getMeshAttr(), symbolTable);
+  if (failed(mesh)) {
+    return failure();
+  }
+  return success();
+}
+
+void NeighborsLinearIndicesOp::getAsmResultNames(
+    function_ref<void(Value, StringRef)> setNameFn) {
+  setNameFn(getNeighborDown(), "down_linear_idx");
+  setNameFn(getNeighborUp(), "up_linear_idx");
+}
+
 //===----------------------------------------------------------------------===//
 // collective communication ops
 //===----------------------------------------------------------------------===//
diff --git a/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir b/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
new file mode 100644
index 00000000000000..5f563364272d96
--- /dev/null
+++ b/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
@@ -0,0 +1,173 @@
+// RUN: mlir-opt %s -convert-mesh-to-mpi | FileCheck %s
+
+// CHECK: mesh.mesh @mesh0
+mesh.mesh @mesh0(shape = 2x2x4)
+
+// CHECK-LABEL: func @update_halo_1d_first
+func.func @update_halo_1d_first(
+  // CHECK-SAME: [[varg0:%.*]]: memref<12x12xi8>
+    %arg0 : memref<12x12xi8>) {
+  // CHECK-NEXT: [[vc9:%.*]] = arith.constant 9 : index
+  // CHECK-NEXT: [[vc91_i32:%.*]] = arith.constant 91 : i32
+  // CHECK-NEXT: [[vc0_i32:%.*]] = arith.constant 0 : i32
+  // CHECK-NEXT: [[vproc_linear_idx:%.*]]:3 = mesh.process_multi_index on @mesh0 : index, index, index
+  // CHECK-NEXT: [[vdown_linear_idx:%.*]], [[vup_linear_idx:%.*]] = mesh.neighbors_linear_indices on @mesh0[[[vproc_linear_idx]]#0, [[vproc_linear_idx]]#1, [[vproc_linear_idx]]#2] split_axes = [0] : index, index
+  // CHECK-NEXT: [[v0:%.*]] = arith.index_cast [[vdown_linear_idx]] : index to i32
+  // CHECK-NEXT: [[v1:%.*]] = arith.index_cast [[vup_linear_idx]] : index to i32
+  // CHECK-NEXT: [[v2:%.*]] = arith.cmpi sge, [[v1]], [[vc0_i32]] : i32
+  // CHECK-NEXT: [[v3:%.*]] = arith.cmpi sge, [[v0]], [[vc0_i32]] : i32
+  // CHECK-NEXT: [[valloc:%.*]] = memref.alloc() : memref<2x12xi8>
+  // CHECK-NEXT: scf.if [[v3]] {
+  // CHECK-NEXT:   [[vsubview:%.*]] = memref.subview [[varg0]][[[vc9]], 0] [2, 12] [1, 1] : memref<12x12xi8> to memref<2x12xi8, strided<[12, 1], offset: ?>>
+  // CHECK-NEXT:   memref.copy [[vsubview]], [[valloc]] : memref<2x12xi8, strided<[12, 1], offset: ?>> to memref<2x12xi8>
+  // CHECK-NEXT:   mpi.send([[valloc]], [[vc91_i32]], [[v0]]) : memref<2x12xi8>, i32, i32
+  // CHECK-NEXT: }
+  // CHECK-NEXT: scf.if [[v2]] {
+  // CHECK-NEXT:   mpi.recv([[valloc]], [[vc91_i32]], [[v1]]) : memref<2x12xi8>, i32, i32
+  // CHECK-NEXT:   [[vsubview:%.*]] = memref.subview [[varg0]][0, 0] [2, 12] [1, 1] : memref<12x12xi8> to memref<2x12xi8, strided<[12, 1]>>
+  // CHECK-NEXT:   memref.copy [[valloc]], [[vsubview]] : memref<2x12xi8> to memref<2x12xi8, strided<[12, 1]>>
+  // CHECK-NEXT: }
+  // CHECK-NEXT: memref.dealloc [[valloc]] : memref<2x12xi8>
+  // CHECK-NEXT: [[v4:%.*]] = arith.cmpi sge, [[v0]], [[vc0_i32]] : i32
+  // CHECK-NEXT: [[v5:%.*]] = arith.cmpi sge, [[v1]], [[vc0_i32]] : i32
+  // CHECK-NEXT: [[valloc_0:%.*]] = memref.alloc() : memref<3x12xi8>
+  // CHECK-NEXT: scf.if [[v5]] {
+  // CHECK-NEXT:   [[vsubview:%.*]] = memref.subview [[varg0]][0, 0] [3, 12] [1, 1] : memref<12x12xi8> to memref<3x12xi8, strided<[12, 1]>>
+  // CHECK-NEXT:   memref.copy [[vsubview]], [[valloc_0]] : memref<3x12xi8, strided<[12, 1]>> to memref<3x12xi8>
+  // CHECK-NEXT:   mpi.send([[valloc_0]], [[vc91_i32]], [[v1]]) : memref<3x12xi8>, i32, i32
+  // CHECK-NEXT: }
+  // CHECK-NEXT: scf.if [[v4]] {
+  // CHECK-NEXT:   mpi.recv([[valloc_0]], [[vc91_i32]], [[v0]]) : memref<3x12xi8>, i32, i32
+  // CHECK-NEXT:   [[vsubview:%.*]] = memref.subview [[varg0]][[[vc9]], 0] [3, 12] [1, 1] : memref<12x12xi8> to memref<3x12xi8, strided<[12, 1], offset: ?>>
+  // CHECK-NEXT:   memref.copy [[valloc_0]], [[vsubview]] : memref<3x12xi8> to memref<3x12xi8, strided<[12, 1], offset: ?>>
+  // CHECK-NEXT: }
+  // CHECK-NEXT: memref.dealloc [[valloc_0]] : memref<3x12xi8>
+  // CHECK-NEXT: return
+  mesh.update_halo %arg0 on @mesh0 split_axes = [[0]]
...
[truncated]

@fschlimb
Contributor Author

@AntonLydike @tkarna Please have a look

@fschlimb fschlimb marked this pull request as ready for review August 20, 2024 09:08
@fschlimb
Contributor Author

fschlimb commented Aug 20, 2024

@sogartar @yaochengji , could you take a look at this PR?

Contributor

@tkarna tkarna left a comment


Looks good, mostly minor remarks about docstrings

using OpRewritePattern::OpRewritePattern;

mlir::LogicalResult
matchAndRewrite(mlir::mesh::UpdateHaloOp op,
Member


Thanks for your contribution, @fschlimb.

I'm curious: currently all the operations in the mesh dialect work on tensor types, except the update_halo op. So should we perform bufferization before converting from mesh to MPI?

Member


Will you also make the other operations in the Mesh dialect support the memref type?

Contributor Author


I made this work on memref because its semantics are memref-like: the halos are updated in-place, the input memref gets mutated, and it does not return a new tensor or memref. In general we can of course think about adding an updateHalo variant which operates on tensors and simply returns a new tensor.

Generally we could do the same for other ops, but I thought we should do that once we see a need for it. updateHalo is a very special operation which mostly applies to array computations and is probably less relevant in the tensor/AI world. Currently I do not see that any of the other operations have memref semantics.

Notice: within spmdization, and for updateHalo specifically, an spmdize for relevant ops (like an in-place array.insert_slice) would insert bufferization.to_memref and bufferization.to_tensor ops appropriately around updateHalo. In our (@tkarna) experience this approach works fine (even with one-shot-bufferize) when using restrict=true in bufferization.to_tensor.
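
A minimal sketch of that wrapping, assuming the `bufferization` assembly of this PR's timeframe (the printed form of these ops changed shortly after; see the buildbot logs further down) and the `update_halo` spelling from this patch:

```mlir
// %t is the tensor produced by spmdization.
%m = bufferization.to_memref %t : memref<12x12xi8>
mesh.update_halo %m on @mesh0 split_axes = [[0]] halo_sizes = [2, 2]
    : memref<12x12xi8>
// restrict asserts that no aliasing view of the buffer exists, which is
// what makes one-shot-bufferize work here, per the comment above.
%t2 = bufferization.to_tensor %m restrict : memref<12x12xi8>
```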

Wrt when to apply this pass: generally it is probably a good idea to do the MPI conversion after bufferization. This will require changes to the op spec so that the ops are allowed to accept memrefs. I don't know enough about bufferization and tensor optimization to tell if converting to MPI right after spmdization (using to_memref/to_tensor) would disallow any optimization possible otherwise. Any insights are welcome.

I am planning to add send and recv in a follow-up PR.

Member


I don't know enough about bufferization and tensor optimization to tell if converting to MPI right after spmdization (using to_memref/tensor) would disallow any optimization possible otherwise. Any insights are welcome.

In my understanding, we usually prefer to perform optimizations on tensor types rather than memref types, because RAW dependencies are difficult to detect on memrefs.

I made this work on memref because its semantics are memref-like, e.g. the halos are updated in-place

Even if in most cases it is updated in place, we could still let it support the tensor type and make it "in-place" after bufferization.

Combining the two points, I would suggest that making updateHalo support the tensor type will make it easier to optimize IR that contains both updateHalo and other mesh-dialect ops, because we then only need to handle pure tensor types.

Contributor Author

@fschlimb fschlimb Aug 22, 2024


Agree, let's add a tensor-based updateHalo once needed.

We need the memref-based version no matter what. Array/numpy semantics are partially reference-based. For subview and insert_slice they have memref semantics, copies are disallowed. This cannot really be expressed on the tensor-level.

My question was not so much about optimizations in general (which of course are simpler on the tensor level). I was wondering if early, implicit bufferization - when converting to MPI - would do any harm, compared to converting right before regular bufferization.

Contributor Author


In the latest version the buffer is now given in destination-passing style and accepts memrefs and tensors. Currently the lowering simply applies bufferization.to_memref/bufferization.to_tensor if a tensor is given. Should this crude approach ever get in the way of some optimization pattern, we can adjust accordingly.
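
For the tensor case, the lowered form can be seen in the buildbot logs quoted below; roughly (the to_memref line is verbatim from those logs, the rest is a sketch with the halo exchange elided):

```mlir
func.func @update_halo_3d_tensor(%arg0: tensor<120x120x120xi8>) -> tensor<120x120x120xi8> {
  // Destination-passing style: materialize a memref view of the operand ...
  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8>
  // ... exchange halos in place on %0 via memref.subview, memref.copy and
  // mpi.send/mpi.recv (elided here) ...
  // ... then hand the buffer back as the tensor result.
  %1 = bufferization.to_tensor %0 restrict : memref<120x120x120xi8> to tensor<120x120x120xi8>
  return %1 : tensor<120x120x120xi8>
}
```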

Contributor

@AntonLydike AntonLydike left a comment


Hi Frank, thanks for this exciting contribution! This looks to be a very solid first step, although I can only comment from the MPI perspective, not so much on the mesh dialect. I'm excited to see where we can go from here!

(Sorry for the delay in response, I was on vacation)

}
}

auto tagAttr = rewriter.getI32IntegerAttr(91); // we just pick something
Contributor


Super nit: I'm curious, why not choose a more "canonical" value like 0?

Contributor Author


Since we have only COMM_WORLD this might reduce the risk of tag conflicts (like in multi-threaded cases).

@fschlimb
Contributor Author

fschlimb commented Sep 3, 2024

@yaochengji @tkarna @AntonLydike @sogartar Could you approve if you are ok with this?

Contributor

@AntonLydike AntonLydike left a comment


Thank you @fschlimb for this contribution! Looks to be good from the MPI side!


github-actions bot commented Oct 31, 2024

✅ With the latest revision this PR passed the C/C++ code formatter.

@fschlimb
Contributor Author

I realized that I had to fix a few things in mesh before this can be useful. These fixes have been merged now (#114238).
The branch got rebased to be on top of these changes.
I also added a few more things here:

  • lowering of LinearIndex, MultiIndex and NeighborsIndex to MPI
  • if a memref.global named static_mpi_rank exists, the provided (static) value will be used as the MPI rank. This allows (static) shape propagation, which is a requirement for all kinds of optimizations (see the sketch below). This could also become a canonicalization pattern for CommRankOp if that's preferred.
  • canonicalization of send/recv, again to allow static shape propagation
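
A sketch of the static_mpi_rank convention from the second bullet (the global's element type and the example initializer are assumptions for illustration):

```mlir
// If a global named static_mpi_rank exists, the lowering uses its (static)
// value as the MPI rank instead of emitting mpi.comm_rank, which enables
// static shape propagation.
memref.global constant @static_mpi_rank : memref<index> = dense<3>
```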

@yaochengji @tkarna @AntonLydike @sogartar @mfrancio could you have a look (again) please?

namespace {
// Create operations converting a linear index to a multi-dimensional index
static SmallVector<Value> linearToMultiIndex(Location loc, OpBuilder b,
Value linearIndex,
Contributor


assert that linearIndex is IndexType?

Contributor Author


Values in dimensions must be of the same integer type as linearIndex.
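
For reference, a sketch of the delinearization such a helper would emit for a 2x2x4 mesh - plain row-major index math, with every value sharing linearIndex's integer type as noted above (the exact ops emitted by the patch are an assumption here):

```mlir
%retval, %rank = mpi.comm_rank : !mpi.retval, i32
%c2 = arith.constant 2 : i32
%c4 = arith.constant 4 : i32
%i2 = arith.remsi %rank, %c4 : i32  // fastest-varying mesh axis
%t0 = arith.divsi %rank, %c4 : i32
%i1 = arith.remsi %t0, %c2 : i32
%i0 = arith.divsi %t0, %c2 : i32
```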

return linearIndex;
}

// This pattern converts the mesh.update_halo operation to MPI calls
Contributor


Better docstring: this pattern just converts the process index. Similar issue with the following patterns.

Contributor Author


Thanks.
Removed meaningless comments and added more useful ones below.

@fschlimb
Contributor Author

@yaochengji @AntonLydike @sogartar @mfrancio could you have a look please?

@yaochengji
Member

LGTM, thanks

@rengolin rengolin merged commit 79eb406 into llvm:main Nov 28, 2024
8 checks passed
@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder ppc64le-mlir-rhel-clang running on ppc64le-mlir-rhel-test while building mlir at step 6 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/129/builds/10431

Here is the relevant piece of the build log for the reference
Step 6 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/build/bin/mlir-opt /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file | /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/build/bin/FileCheck /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# executed command: /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/build/bin/mlir-opt /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file
# executed command: /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/build/bin/FileCheck /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# .---command stderr------------
# | /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir:167:17: error: CHECK-NEXT: expected string not found in input
# |  // CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : memref<120x120x120xi8>
# |                 ^
# | <stdin>:188:36: note: scanning from here
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:188:36: note: with "varg0" equal to "%arg0"
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:189:2: note: possible intended match here
# |  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8>
# |  ^
# | 
# | Input file: <stdin>
# | Check file: /home/buildbots/llvm-external-buildbots/workers/ppc64le-mlir-rhel-test/ppc64le-mlir-rhel-clang-build/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# | 
# | -dump-input=help explains the following input dump.
# | 
# | Input was:
# | <<<<<<
# |             .
# |             .
# |             .
# |           183:  func.func @update_halo_3d_tensor(%arg0: tensor<120x120x120xi8>) -> tensor<120x120x120xi8> { 
# |           184:  %c23_i32 = arith.constant 23 : i32 
# |           185:  %c29_i32 = arith.constant 29 : i32 
# |           186:  %c44_i32 = arith.constant 44 : i32 
# |           187:  %c4_i32 = arith.constant 4 : i32 
# |           188:  %c91_i32 = arith.constant 91 : i32 
# | next:167'0                                        X error: no match found
# | next:167'1                                          with "varg0" equal to "%arg0"
# |           189:  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# | next:167'2      ?                                                                                      possible intended match
# |           190:  %alloc = memref.alloc() : memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           191:  %subview = memref.subview %0[1, 3, 109] [117, 113, 5] [1, 1, 1] : memref<120x120x120xi8> to memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           192:  memref.copy %subview, %alloc : memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> to memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder mlir-rocm-mi200 running on mi200-buildbot while building mlir at step 6 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/177/builds/9141

Here is the relevant piece of the build log for the reference
Step 6 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/mlir-opt /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file | /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/FileCheck /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# executed command: /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/mlir-opt /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file
# executed command: /vol/worker/mi200-buildbot/mlir-rocm-mi200/build/bin/FileCheck /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# .---command stderr------------
# | /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir:167:17: error: CHECK-NEXT: expected string not found in input
# |  // CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : memref<120x120x120xi8>
# |                 ^
# | <stdin>:188:36: note: scanning from here
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:188:36: note: with "varg0" equal to "%arg0"
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:189:2: note: possible intended match here
# |  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8>
# |  ^
# | 
# | Input file: <stdin>
# | Check file: /vol/worker/mi200-buildbot/mlir-rocm-mi200/llvm-project/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# | 
# | -dump-input=help explains the following input dump.
# | 
# | Input was:
# | <<<<<<
# |             .
# |             .
# |             .
# |           183:  func.func @update_halo_3d_tensor(%arg0: tensor<120x120x120xi8>) -> tensor<120x120x120xi8> { 
# |           184:  %c23_i32 = arith.constant 23 : i32 
# |           185:  %c29_i32 = arith.constant 29 : i32 
# |           186:  %c44_i32 = arith.constant 44 : i32 
# |           187:  %c4_i32 = arith.constant 4 : i32 
# |           188:  %c91_i32 = arith.constant 91 : i32 
# | next:167'0                                        X error: no match found
# | next:167'1                                          with "varg0" equal to "%arg0"
# |           189:  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# | next:167'2      ?                                                                                      possible intended match
# |           190:  %alloc = memref.alloc() : memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           191:  %subview = memref.subview %0[1, 3, 109] [117, 113, 5] [1, 1, 1] : memref<120x120x120xi8> to memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           192:  memref.copy %subview, %alloc : memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> to memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder flang-aarch64-libcxx running on linaro-flang-aarch64-libcxx while building mlir at step 5 "build-unified-tree".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/89/builds/11542

Here is the relevant piece of the build log for the reference
Step 5 (build-unified-tree) failure: build (failure)
...
77.434 [2355/124/4809] Creating library symlink lib/libLLVMPasses.so
77.436 [2355/123/4810] Building CXX object tools/mlir/lib/Dialect/SPIRV/IR/CMakeFiles/obj.MLIRSPIRVDialect.dir/SPIRVOps.cpp.o
77.437 [2355/122/4811] Building CXX object tools/mlir/lib/Dialect/SPIRV/IR/CMakeFiles/obj.MLIRSPIRVDialect.dir/TargetAndABI.cpp.o
77.439 [2355/121/4812] Building CXX object tools/mlir/lib/Dialect/SPIRV/Transforms/CMakeFiles/obj.MLIRSPIRVConversion.dir/SPIRVConversion.cpp.o
77.441 [2355/120/4813] Building CXX object tools/mlir/lib/Dialect/SPIRV/Transforms/CMakeFiles/obj.MLIRSPIRVTransforms.dir/LowerABIAttributesPass.cpp.o
77.443 [2355/119/4814] Building CXX object tools/mlir/lib/Dialect/SPIRV/Transforms/CMakeFiles/obj.MLIRSPIRVTransforms.dir/RewriteInsertsPass.cpp.o
77.444 [2355/118/4815] Building CXX object tools/mlir/lib/Dialect/SPIRV/Transforms/CMakeFiles/obj.MLIRSPIRVTransforms.dir/UnifyAliasedResourcePass.cpp.o
77.446 [2355/117/4816] Building CXX object tools/mlir/lib/Dialect/SPIRV/Transforms/CMakeFiles/obj.MLIRSPIRVTransforms.dir/UpdateVCEPass.cpp.o
77.447 [2355/116/4817] Building CXX object tools/mlir/lib/Dialect/Tensor/Extensions/CMakeFiles/obj.MLIRTensorAllExtensions.dir/AllExtensions.cpp.o
77.509 [2355/115/4818] Linking CXX shared library lib/libMLIRMPIDialect.so.20.0git
FAILED: lib/libMLIRMPIDialect.so.20.0git 
: && /usr/local/bin/c++ -fPIC -stdlib=libc++ -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wctad-maybe-unsupported -fdiagnostics-color -ffunction-sections -fdata-sections -Wundef -Werror=mismatched-tags -Werror=global-constructors -O3 -DNDEBUG  -stdlib=libc++ -Wl,-z,defs -Wl,-z,nodelete   -Wl,-rpath-link,/home/tcwg-buildbot/worker/flang-aarch64-libcxx/build/./lib  -Wl,--gc-sections -shared -Wl,-soname,libMLIRMPIDialect.so.20.0git -o lib/libMLIRMPIDialect.so.20.0git tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPI.cpp.o  -Wl,-rpath,"\$ORIGIN/../lib:/home/tcwg-buildbot/worker/flang-aarch64-libcxx/build/lib:"  lib/libMLIRDialect.so.20.0git  lib/libMLIRInferTypeOpInterface.so.20.0git  lib/libMLIRSideEffectInterfaces.so.20.0git  lib/libMLIRIR.so.20.0git  lib/libMLIRSupport.so.20.0git  lib/libLLVMSupport.so.20.0git  -Wl,-rpath-link,/home/tcwg-buildbot/worker/flang-aarch64-libcxx/build/lib && :
/usr/bin/ld: tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o: in function `(anonymous namespace)::FoldCast<mlir::mpi::SendOp>::matchAndRewrite(mlir::mpi::SendOp, mlir::PatternRewriter&) const':
MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6SendOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0xb0): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6SendOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0xb4): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o: in function `(anonymous namespace)::FoldCast<mlir::mpi::RecvOp>::matchAndRewrite(mlir::mpi::RecvOp, mlir::PatternRewriter&) const':
MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6RecvOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0xb0): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6RecvOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0xb4): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
77.517 [2355/114/4819] Building CXX object tools/mlir/lib/Conversion/NVVMToLLVM/CMakeFiles/obj.MLIRNVVMToLLVM.dir/NVVMToLLVM.cpp.o
77.520 [2355/113/4820] Building CXX object tools/mlir/lib/Dialect/SPIRV/Utils/CMakeFiles/obj.MLIRSPIRVUtils.dir/LayoutUtils.cpp.o
77.532 [2355/112/4821] Building CXX object tools/mlir/lib/Dialect/Tensor/Transforms/CMakeFiles/obj.MLIRTensorTransforms.dir/ConcatOpPatterns.cpp.o
77.534 [2355/111/4822] Building CXX object tools/mlir/lib/Dialect/Tensor/IR/CMakeFiles/obj.MLIRTensorDialect.dir/TensorOps.cpp.o
77.536 [2355/110/4823] Building CXX object tools/mlir/test/lib/Analysis/CMakeFiles/MLIRTestAnalysis.dir/TestLiveness.cpp.o
77.542 [2355/109/4824] Building CXX object tools/mlir/lib/Dialect/Tensor/IR/CMakeFiles/obj.MLIRTensorDialect.dir/ValueBoundsOpInterfaceImpl.cpp.o
77.548 [2355/108/4825] Building CXX object tools/mlir/lib/Dialect/Tensor/IR/CMakeFiles/obj.MLIRTensorInferTypeOpInterfaceImpl.dir/TensorInferTypeOpInterfaceImpl.cpp.o
77.550 [2355/107/4826] Building CXX object tools/mlir/lib/Dialect/Tensor/IR/CMakeFiles/obj.MLIRTensorDialect.dir/TensorDialect.cpp.o
77.553 [2355/106/4827] Building CXX object tools/mlir/lib/Dialect/Tensor/IR/CMakeFiles/obj.MLIRTensorTilingInterfaceImpl.dir/TensorTilingInterfaceImpl.cpp.o
77.657 [2355/105/4828] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/DecomposeLinalgOps.cpp.o
78.300 [2355/104/4829] Building CXX object tools/mlir/lib/Conversion/GPUToVulkan/CMakeFiles/obj.MLIRGPUToVulkanTransforms.dir/ConvertLaunchFuncToVulkanCalls.cpp.o
78.499 [2355/103/4830] Building CXX object tools/mlir/lib/Conversion/AffineToStandard/CMakeFiles/obj.MLIRAffineToStandard.dir/AffineToStandard.cpp.o
78.622 [2355/102/4831] Building CXX object tools/mlir/lib/Conversion/VectorToSPIRV/CMakeFiles/obj.MLIRVectorToSPIRV.dir/VectorToSPIRVPass.cpp.o
78.927 [2355/101/4832] Building CXX object tools/mlir/lib/Conversion/ComplexToLLVM/CMakeFiles/obj.MLIRComplexToLLVM.dir/ComplexToLLVM.cpp.o
79.189 [2355/100/4833] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/SwapExtractSliceWithFillPatterns.cpp.o
79.350 [2355/99/4834] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/TransposeMatmul.cpp.o
79.380 [2355/98/4835] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/ConvertToDestinationStyle.cpp.o
79.419 [2355/97/4836] Building CXX object tools/mlir/lib/Dialect/Mesh/Transforms/CMakeFiles/obj.MLIRMeshTransforms.dir/Spmdization.cpp.o
79.445 [2355/96/4837] Building CXX object tools/mlir/lib/Conversion/TosaToLinalg/CMakeFiles/obj.MLIRTosaToLinalg.dir/TosaToLinalgNamed.cpp.o
79.763 [2355/95/4838] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/Hoisting.cpp.o
79.802 [2355/94/4839] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/ElementwiseToLinalg.cpp.o
79.853 [2355/93/4840] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/Interchange.cpp.o
80.181 [2355/92/4841] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/DecomposeGenericByUnfoldingPermutation.cpp.o
80.428 [2355/91/4842] Building CXX object tools/mlir/lib/Dialect/Mesh/Transforms/CMakeFiles/obj.MLIRMeshTransforms.dir/Transforms.cpp.o
80.512 [2355/90/4843] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/ConvertConv2DToImg2Col.cpp.o
80.557 [2355/89/4844] Building CXX object tools/mlir/lib/Conversion/BufferizationToMemRef/CMakeFiles/obj.MLIRBufferizationToMemRef.dir/BufferizationToMemRef.cpp.o
80.600 [2355/88/4845] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/FusePadOpWithLinalgProducer.cpp.o
80.681 [2355/87/4846] Building CXX object tools/mlir/lib/Conversion/ControlFlowToLLVM/CMakeFiles/obj.MLIRControlFlowToLLVM.dir/ControlFlowToLLVM.cpp.o
80.732 [2355/86/4847] Building CXX object tools/mlir/lib/Conversion/GPUToVulkan/CMakeFiles/obj.MLIRGPUToVulkanTransforms.dir/ConvertGPULaunchFuncToVulkanLaunchFunc.cpp.o
80.970 [2355/85/4848] Building CXX object tools/mlir/lib/Conversion/GPUToNVVM/CMakeFiles/obj.MLIRGPUToNVVMTransforms.dir/WmmaOpsToNvvm.cpp.o

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder mlir-nvidia-gcc7 running on mlir-nvidia while building mlir at step 6 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/116/builds/6991

Here is the relevant piece of the build log for the reference
Step 6 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# .---command stderr------------
# | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir:167:17: error: CHECK-NEXT: expected string not found in input
# |  // CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : memref<120x120x120xi8>
# |                 ^
# | <stdin>:188:36: note: scanning from here
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:188:36: note: with "varg0" equal to "%arg0"
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:189:2: note: possible intended match here
# |  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8>
# |  ^
# | 
# | Input file: <stdin>
# | Check file: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# | 
# | -dump-input=help explains the following input dump.
# | 
# | Input was:
# | <<<<<<
# |             .
# |             .
# |             .
# |           183:  func.func @update_halo_3d_tensor(%arg0: tensor<120x120x120xi8>) -> tensor<120x120x120xi8> { 
# |           184:  %c23_i32 = arith.constant 23 : i32 
# |           185:  %c29_i32 = arith.constant 29 : i32 
# |           186:  %c44_i32 = arith.constant 44 : i32 
# |           187:  %c4_i32 = arith.constant 4 : i32 
# |           188:  %c91_i32 = arith.constant 91 : i32 
# | next:167'0                                        X error: no match found
# | next:167'1                                          with "varg0" equal to "%arg0"
# |           189:  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# | next:167'2      ?                                                                                      possible intended match
# |           190:  %alloc = memref.alloc() : memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           191:  %subview = memref.subview %0[1, 3, 109] [117, 113, 5] [1, 1, 1] : memref<120x120x120xi8> to memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           192:  memref.copy %subview, %alloc : memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> to memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder openmp-offload-sles-build-only running on rocm-worker-hw-04-sles while building mlir at step 10 "Add check check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/140/builds/11913

Here is the relevant piece of the build log for the reference
Step 10 (Add check check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/home/botworker/bbot/builds/openmp-offload-sles-build/llvm.build/bin/mlir-opt /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file | /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.build/bin/FileCheck /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# executed command: /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.build/bin/mlir-opt /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir -convert-mesh-to-mpi -canonicalize -split-input-file
# executed command: /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.build/bin/FileCheck /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# .---command stderr------------
# | /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir:167:17: error: CHECK-NEXT: expected string not found in input
# |  // CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : memref<120x120x120xi8>
# |                 ^
# | <stdin>:188:36: note: scanning from here
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:188:36: note: with "varg0" equal to "%arg0"
# |  %c91_i32 = arith.constant 91 : i32
# |                                    ^
# | <stdin>:189:2: note: possible intended match here
# |  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8>
# |  ^
# | 
# | Input file: <stdin>
# | Check file: /home/botworker/bbot/builds/openmp-offload-sles-build/llvm.src/mlir/test/Conversion/MeshToMPI/convert-mesh-to-mpi.mlir
# | 
# | -dump-input=help explains the following input dump.
# | 
# | Input was:
# | <<<<<<
# |             .
# |             .
# |             .
# |           183:  func.func @update_halo_3d_tensor(%arg0: tensor<120x120x120xi8>) -> tensor<120x120x120xi8> { 
# |           184:  %c23_i32 = arith.constant 23 : i32 
# |           185:  %c29_i32 = arith.constant 29 : i32 
# |           186:  %c44_i32 = arith.constant 44 : i32 
# |           187:  %c4_i32 = arith.constant 4 : i32 
# |           188:  %c91_i32 = arith.constant 91 : i32 
# | next:167'0                                        X error: no match found
# | next:167'1                                          with "varg0" equal to "%arg0"
# |           189:  %0 = bufferization.to_memref %arg0 : tensor<120x120x120xi8> to memref<120x120x120xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# | next:167'2      ?                                                                                      possible intended match
# |           190:  %alloc = memref.alloc() : memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           191:  %subview = memref.subview %0[1, 3, 109] [117, 113, 5] [1, 1, 1] : memref<120x120x120xi8> to memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# |           192:  memref.copy %subview, %alloc : memref<117x113x5xi8, strided<[14400, 120, 1], offset: 14869>> to memref<117x113x5xi8> 
# | next:167'0     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
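
The failure above is a FileCheck mismatch, not a miscompile: line 167 of the test still expects bufferization.to_memref to print only the result type, while the op now prints both the source and the result type (see the "possible intended match" in the dump). A minimal sketch of the updated check, derived only from the output captured above; the actual fix landed separately (see the author's comment below):

// convert-mesh-to-mpi.mlir:167 -- old check, no longer matches:
// CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : memref<120x120x120xi8>
// Updated check matching the form printed in the log:
// CHECK-NEXT: [[v0:%.*]] = bufferization.to_memref [[varg0]] : tensor<120x120x120xi8> to memref<120x120x120xi8>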

@fschlimb
Contributor Author

fschlimb commented Nov 28, 2024

Working on the post-commit failures. See #117986.

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder mlir-nvidia running on mlir-nvidia while building mlir at step 5 "build-check-mlir-build-only".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/138/builds/7072

Here is the relevant piece of the build log for reference:
Step 5 (build-check-mlir-build-only) failure: build (failure)
...
218.281 [1133/16/3880] Linking CXX shared library lib/libMLIROpenACCDialect.so.20.0git
218.291 [1132/16/3881] Creating library symlink lib/libMLIROpenACCDialect.so
218.322 [1131/16/3882] Building CXX object tools/mlir/lib/Query/CMakeFiles/obj.MLIRQuery.dir/QueryParser.cpp.o
218.332 [1130/16/3883] Building CXX object tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o
218.341 [1129/16/3884] Linking CXX shared library lib/libMLIRTensorDialect.so.20.0git
218.350 [1128/16/3885] Building CXX object tools/mlir/lib/Dialect/Mesh/Transforms/CMakeFiles/obj.MLIRMeshTransforms.dir/Spmdization.cpp.o
218.351 [1127/16/3886] Creating library symlink lib/libMLIRTensorDialect.so
218.385 [1126/16/3887] Building CXX object tools/mlir/lib/Reducer/CMakeFiles/obj.MLIRReduce.dir/ReductionTreePass.cpp.o
218.430 [1125/16/3888] Linking CXX shared library lib/libMLIRQuery.so.20.0git
218.446 [1124/16/3889] Linking CXX shared library lib/libMLIRMPIDialect.so.20.0git
FAILED: lib/libMLIRMPIDialect.so.20.0git 
: && /usr/bin/clang++ -fPIC -fPIC -fno-semantic-interposition -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wctad-maybe-unsupported -fdiagnostics-color -ffunction-sections -fdata-sections -Wundef -Werror=mismatched-tags -Werror=global-constructors -O3 -DNDEBUG  -Wl,-z,defs -Wl,-z,nodelete -fuse-ld=lld -Wl,--color-diagnostics   -Wl,--gc-sections -shared -Wl,-soname,libMLIRMPIDialect.so.20.0git -o lib/libMLIRMPIDialect.so.20.0git tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPI.cpp.o  -Wl,-rpath,"\$ORIGIN/../lib:/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib:"  lib/libMLIRDialect.so.20.0git  lib/libMLIRInferTypeOpInterface.so.20.0git  lib/libMLIRSideEffectInterfaces.so.20.0git  lib/libMLIRIR.so.20.0git  lib/libMLIRSupport.so.20.0git  lib/libLLVMSupport.so.20.0git  -Wl,-rpath-link,/vol/worker/mlir-nvidia/mlir-nvidia/llvm.obj/lib && :
ld.lld: error: undefined symbol: mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id
>>> referenced by MPIOps.cpp
>>>               tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o:((anonymous namespace)::FoldCast<mlir::mpi::SendOp>::matchAndRewrite(mlir::mpi::SendOp, mlir::PatternRewriter&) const)
>>> referenced by MPIOps.cpp
>>>               tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o:((anonymous namespace)::FoldCast<mlir::mpi::RecvOp>::matchAndRewrite(mlir::mpi::RecvOp, mlir::PatternRewriter&) const)
clang: error: linker command failed with exit code 1 (use -v to see invocation)
218.499 [1124/15/3890] Linking CXX shared library lib/libMLIRBufferizationDialect.so.20.0git
218.538 [1124/14/3891] Linking CXX shared library lib/libMLIRSCFDialect.so.20.0git
218.602 [1124/13/3892] Linking CXX shared library lib/libMLIRShapeDialect.so.20.0git
220.098 [1124/12/3893] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/Transforms.cpp.o
222.196 [1124/11/3894] Building CXX object tools/mlir/lib/Dialect/Tensor/Extensions/CMakeFiles/obj.MLIRTensorMeshShardingExtensions.dir/MeshShardingExtensions.cpp.o
225.469 [1124/10/3895] Building CXX object tools/mlir/lib/Dialect/Linalg/Transforms/CMakeFiles/obj.MLIRLinalgTransforms.dir/Vectorization.cpp.o
225.823 [1124/9/3896] Building CXX object tools/mlir/lib/Dialect/Tosa/CMakeFiles/obj.MLIRTosaShardingInterfaceImpl.dir/IR/ShardingInterfaceImpl.cpp.o
233.246 [1124/8/3897] Building CXX object tools/mlir/tools/mlir-vulkan-runner/CMakeFiles/mlir-vulkan-runner.dir/mlir-vulkan-runner.cpp.o
236.609 [1124/7/3898] Building CXX object tools/mlir/lib/Dialect/Mesh/IR/CMakeFiles/obj.MLIRMeshDialect.dir/MeshOps.cpp.o
237.108 [1124/6/3899] Building CXX object tools/mlir/lib/Dialect/SparseTensor/Transforms/CMakeFiles/obj.MLIRSparseTensorTransforms.dir/SparseTensorPasses.cpp.o
237.834 [1124/5/3900] Building CXX object tools/mlir/lib/Dialect/Linalg/IR/CMakeFiles/obj.MLIRLinalgDialect.dir/LinalgDialect.cpp.o
240.097 [1124/4/3901] Building CXX object tools/mlir/lib/Dialect/Vector/TransformOps/CMakeFiles/obj.MLIRVectorTransformOps.dir/VectorTransformOps.cpp.o
240.374 [1124/3/3902] Building CXX object tools/mlir/lib/Dialect/NVGPU/TransformOps/CMakeFiles/obj.MLIRNVGPUTransformOps.dir/NVGPUTransformOps.cpp.o
241.237 [1124/2/3903] Building CXX object tools/mlir/lib/Dialect/SparseTensor/Pipelines/CMakeFiles/obj.MLIRSparseTensorPipelines.dir/SparseTensorPipelines.cpp.o
263.870 [1124/1/3904] Building CXX object tools/mlir/lib/Dialect/Tosa/CMakeFiles/obj.MLIRTosaDialect.dir/IR/TosaOps.cpp.o
ninja: build stopped: subcommand failed.
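
This failure is a link error rather than a test mismatch: the new FoldCast canonicalization pattern in MPIOps.cpp references mlir::memref::CastOp, but the link line above does not include the MemRef dialect library, so the op's TypeID anchor is never pulled in. A minimal sketch of the likely CMake fix, assuming the standard add_mlir_dialect_library layout; the file path and the existing LINK_LIBS entries are inferred from the failing link command, not copied from the actual patch:

# mlir/lib/Dialect/MPI/IR/CMakeLists.txt (sketch under the assumptions above)
add_mlir_dialect_library(MLIRMPIDialect
  MPI.cpp
  MPIOps.cpp

  LINK_LIBS PUBLIC
  MLIRDialect
  MLIRIR
  MLIRInferTypeOpInterface
  MLIRMemRefDialect   # added: FoldCast uses memref::CastOp and needs its TypeID definition
  MLIRSideEffectInterfaces
  )

In shared-library builds (BUILD_SHARED_LIBS=ON) a missing LINK_LIBS entry surfaces exactly like this: the explicit TypeID for an op of another dialect is defined in that dialect's library, and nothing else forces it onto the link line.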

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder mlir-s390x-linux running on systemz-1 while building mlir at step 6 "test-build-unified-tree-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/117/builds/4203

Here is the relevant piece of the build log for reference:
Step 6 (test-build-unified-tree-check-mlir) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder premerge-monolithic-linux running on premerge-linux-1 while building mlir at step 7 "test-build-unified-tree-check-all".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/153/builds/16057

Here is the relevant piece of the build log for reference:
Step 7 (test-build-unified-tree-check-all) failure: test (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder clang-aarch64-sve-vla running on linaro-g3-03 while building mlir at step 7 "ninja check 1".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/17/builds/4214

Here is the relevant piece of the build log for reference:
Step 7 (ninja check 1) failure: stage 1 checked (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder clang-aarch64-sve-vls running on linaro-g3-01 while building mlir at step 7 "ninja check 1".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/143/builds/3755

Here is the relevant piece of the build log for reference:
Step 7 (ninja check 1) failure: stage 1 checked (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************

rengolin pushed a commit that referenced this pull request Nov 28, 2024
@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder flang-aarch64-latest-gcc running on linaro-flang-aarch64-latest-gcc while building mlir at step 5 "build-unified-tree".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/130/builds/6814

Here is the relevant piece of the build log for reference:
Step 5 (build-unified-tree) failure: build (failure)
...
FAILED: lib/libMLIRMPIDialect.so.20.0git 
/usr/bin/ld: tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o: in function `(anonymous namespace)::FoldCast<mlir::mpi::SendOp>::matchAndRewrite(mlir::mpi::SendOp, mlir::PatternRewriter&) const':
MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6SendOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0x98): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6SendOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0x9c): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: tools/mlir/lib/Dialect/MPI/IR/CMakeFiles/obj.MLIRMPIDialect.dir/MPIOps.cpp.o: in function `(anonymous namespace)::FoldCast<mlir::mpi::RecvOp>::matchAndRewrite(mlir::mpi::RecvOp, mlir::PatternRewriter&) const':
MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6RecvOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0x98): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
/usr/bin/ld: MPIOps.cpp:(.text._ZNK12_GLOBAL__N_18FoldCastIN4mlir3mpi6RecvOpEE15matchAndRewriteES3_RNS1_15PatternRewriterE+0x9c): undefined reference to `mlir::detail::TypeIDResolver<mlir::memref::CastOp, void>::id'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder sanitizer-x86_64-linux-fast running on sanitizer-buildbot4 while building mlir at step 2 "annotate".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/169/builds/5827

Here is the relevant piece of the build log for reference:
Step 2 (annotate) failure: 'python ../sanitizer_buildbot/sanitizers/zorg/buildbot/builders/sanitizers/buildbot_selector.py' (failure)
...
FAIL: MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir (87442 of 87469)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder flang-aarch64-sharedlibs running on linaro-flang-aarch64-sharedlibs while building mlir at step 5 "build-unified-tree".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/80/builds/6949

Here is the relevant piece of the build log for reference:
Step 5 (build-unified-tree) failure: build (failure)

@llvm-ci
Collaborator

llvm-ci commented Nov 28, 2024

LLVM Buildbot has detected a new failure on builder clang-arm64-windows-msvc running on linaro-armv8-windows-msvc-04 while building mlir at step 5 "ninja check 1".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/161/builds/3413

Here is the relevant piece of the build log for reference:
Step 5 (ninja check 1) failure: stage 1 checked (failure)
******************** TEST 'MLIR :: Conversion/MeshToMPI/convert-mesh-to-mpi.mlir' FAILED ********************
