
test: update the test for aten::to after fixing #974


Merged: 1 commit into master on Apr 14, 2022

Conversation

bowang007
Collaborator

@bowang007 bowang007 commented Apr 13, 2022

Signed-off-by: Bo Wang [email protected]

Description

Updates the test code for aten::to after fixing bug #958.

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@bowang007 bowang007 requested review from narendasan and peri044 April 13, 2022 20:43
@github-actions github-actions bot added the component: tests Issues re: Tests label Apr 13, 2022

@github-actions github-actions bot left a comment


Code conforms to Python style guidelines



@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
index d171ae1..c69c32e 100644
--- a/workspace/core/partitioning/partitioning.cpp
+++ b/tmp/changes.txt
@@ -231,8 +231,8 @@ std::unordered_map<torch::jit::Value*, usage_info> getInputUsageCounts(
  return usage_counts;
}

-std::unordered_map<size_t, std::list<SegmentedBlock>::iterator>
-getIdxtoIterMap(std::list<SegmentedBlock> &segmented_blocks_list) {
+std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> getIdxtoIterMap(
+    std::list<SegmentedBlock>& segmented_blocks_list) {
  std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> idx_to_iter;
  auto iter = segmented_blocks_list.begin();
  for (int i = 0; i < segmented_blocks_list.size(); ++i, ++iter) {
@@ -283,9 +283,10 @@ void resolveNonTensorInputBlocks(PartitionedGraph& segmented_blocks) {
}

void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {
-  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which produces/contains it.
-  auto usage_counts = getInputUsageCounts(
-      segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });
+  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which
+  // produces/contains it.
+  auto usage_counts =
+      getInputUsageCounts(segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });

  // Get idx of the segblock to its iterator mapping
  std::list<SegmentedBlock> segmented_blocks_list(segmented_blocks.cbegin(), segmented_blocks.cend());
@@ -293,12 +294,13 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

  std::unordered_set<int> updated_segments;
  // we need to re-segment TensorRT segments whose inputs are TensorLists
-  for (auto &use : usage_counts) {
+  for (auto& use : usage_counts) {
    auto use_info = use.second;
    // For a particular tensorlist input, traverse through all ids of segmented blocks whose target is TensorRT
    for (auto i : use_info.tensorrt_use_id) {
      if (!updated_segments.count(i)) {
-        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this tensorlist input}
+        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this
+        // tensorlist input}
        std::unordered_map<torch::jit::Value*, SegmentedBlock> tensorlistinput_to_segblock;
        for (auto input : segmented_blocks[i].raw_inputs()) {
          if (isTensorList(input)) {
@@ -308,18 +310,20 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

        // For each tensorlist input in tensorlistinput_to_segblock, get the node which actually uses this input.
        // Once we retrieve the node, we remove it from the current TensorRT segmented_blocks[i]. This node should be
-        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first place.
+        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first
+        // place.
        auto seg_blocks = segmentBlocksWithTensorListInputs(segmented_blocks[i], tensorlistinput_to_segblock);
        auto append_blocks = seg_blocks.first;
        auto trt_block = seg_blocks.second;
-        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that uses tensorlist input removed.
+        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that
+        // uses tensorlist input removed.
        auto next_iter = segmented_blocks_list.erase(idx_to_iter[i]);
        if (trt_block.raw_nodes().size() > 0) {
          segmented_blocks_list.insert(next_iter, trt_block);
        }

        // append blocks' nodes to the producer seg_block
-        for (auto append_block: append_blocks) {
+        for (auto append_block : append_blocks) {
          auto input = append_block.first; // corresponds to the tensorlist input
          auto block = append_block.second;
          // append nodes to segmented_blocks_list
diff --git a/workspace/core/partitioning/shape_analysis.cpp b/tmp/changes.txt
index 96b1312..e773b8f 100644
--- a/workspace/core/partitioning/shape_analysis.cpp
+++ b/tmp/changes.txt
@@ -1,5 +1,5 @@
-#include <ATen/ATen.h>
#include "core/partitioning/shape_analysis.h"
+#include <ATen/ATen.h>
#include "core/util/prelude.h"
#include "torch/csrc/jit/api/module.h"
#include "torch/csrc/jit/passes/constant_pooling.h"
diff --git a/workspace/core/conversion/evaluators/eval_util.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/conversion/evaluators/prim.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/lowering/passes/reduce_gelu.cpp b/tmp/changes.txt
index 15315ba..946df75 100644
--- a/workspace/core/lowering/passes/reduce_gelu.cpp
+++ b/tmp/changes.txt
@@ -12,8 +12,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
            %out : Tensor = aten::gelu(%x)
            return (%out))IR";

-  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions use
-  // an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
+  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions
+  // use an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
  std::string gelu_approximate_pattern = R"IR(
        graph(%x : Tensor, %approx):
            %out : Tensor = aten::gelu(%x, %approx)
@@ -64,7 +64,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
  map_gelu_to_pointwise_ops.runOnGraph(graph);

  torch::jit::SubgraphRewriter map_gelu_approximate_to_pointwise_ops;
-  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
+  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(
+      gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
  map_gelu_approximate_to_pointwise_ops.runOnGraph(graph);

  LOG_GRAPH("Post lowering of [aten::gelu] -> " << *graph);
diff --git a/workspace/core/lowering/passes/linear_to_addmm.cpp b/tmp/changes.txt
index c3160a8..e0e9ca3 100644
--- a/workspace/core/lowering/passes/linear_to_addmm.cpp
+++ b/tmp/changes.txt
@@ -1,15 +1,15 @@

#include <torch/csrc/jit/runtime/operator.h>
+#include "core/util/prelude.h"
+#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/ir/alias_analysis.h"
#include "torch/csrc/jit/jit_log.h"
#include "torch/csrc/jit/passes/constant_propagation.h"
#include "torch/csrc/jit/passes/dead_code_elimination.h"
#include "torch/csrc/jit/passes/guard_elimination.h"
#include "torch/csrc/jit/passes/peephole.h"
-#include "torch/csrc/jit/runtime/graph_executor.h"
-#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/passes/subgraph_rewrite.h"
-#include "core/util/prelude.h"
+#include "torch/csrc/jit/runtime/graph_executor.h"

namespace torch_tensorrt {
namespace core {
@@ -34,7 +34,8 @@ void replaceLinearWithBiasNonePattern(std::shared_ptr<torch::jit::Graph> graph)
        continue;
      } else {
        torch::jit::WithInsertPoint guard(*it);
-        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();;
+        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();
+        ;
        torch::jit::Value* new_output = insertGraph(*it->owningGraph(), *d_graph, it->inputs()).at(0);
        new_output->setType(it->output()->type());
        it->output()->replaceAllUsesWith(new_output);
diff --git a/workspace/core/conversion/evaluators/eval_util.h b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/tests/core/lowering/test_reduce_to_pass.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/tests/util/util.h b/tmp/changes.txt
index 38ba81e..b795667 100644
--- a/workspace/tests/util/util.h
+++ b/tmp/changes.txt
@@ -1,8 +1,8 @@
#pragma once

+#include <ATen/ATen.h>
#include <string>
#include <vector>
-#include <ATen/ATen.h>
#include "ATen/Tensor.h"
#include "core/ir/ir.h"
#include "core/util/prelude.h"
ERROR: Some files do not conform to style guidelines

Collaborator

@narendasan narendasan left a comment


LGTM

@narendasan narendasan merged commit e9fb6ff into master Apr 14, 2022
Labels
component: tests Issues re: Tests