fix: fix the bug that introduces kLong Tensor in prim::NumToTensor #972


Merged: 2 commits into master, Apr 12, 2022

Conversation

bowang007
Collaborator

@bowang007 bowang007 commented Apr 11, 2022

Signed-off-by: Bo Wang [email protected]

Description

In our evaluation pass, prim::NumToTensor uses the function here https://github.com/pytorch/pytorch/blob/8a7c9a5e01a24d465126210234aa9d3775b25032/aten/src/ATen/ScalarOps.h#L29 to convert a number to a Tensor. However, this converts an int into a kLong Tensor, which breaks the conversion phase since we don't support kLong types in TRT.
To fix this, we can either add a truncate_long_and_double step in the conversion phase right after evaluation, or replace the native PyTorch function with our own.

Fixes #956

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

@bowang007 bowang007 requested a review from narendasan April 11, 2022 19:05
@github-actions github-actions bot added component: conversion Issues re: Conversion stage component: core Issues re: The core compiler component: evaluators Issues re: Specific op evaluators labels Apr 11, 2022

@github-actions github-actions bot left a comment


Code conforms to Python style guidelines


@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
index d171ae1..c69c32e 100644
--- a/workspace/core/partitioning/partitioning.cpp
+++ b/tmp/changes.txt
@@ -231,8 +231,8 @@ std::unordered_map<torch::jit::Value*, usage_info> getInputUsageCounts(
  return usage_counts;
}

-std::unordered_map<size_t, std::list<SegmentedBlock>::iterator>
-getIdxtoIterMap(std::list<SegmentedBlock> &segmented_blocks_list) {
+std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> getIdxtoIterMap(
+    std::list<SegmentedBlock>& segmented_blocks_list) {
  std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> idx_to_iter;
  auto iter = segmented_blocks_list.begin();
  for (int i = 0; i < segmented_blocks_list.size(); ++i, ++iter) {
@@ -283,9 +283,10 @@ void resolveNonTensorInputBlocks(PartitionedGraph& segmented_blocks) {
}

void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {
-  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which produces/contains it.
-  auto usage_counts = getInputUsageCounts(
-      segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });
+  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which
+  // produces/contains it.
+  auto usage_counts =
+      getInputUsageCounts(segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });

  // Get idx of the segblock to its iterator mapping
  std::list<SegmentedBlock> segmented_blocks_list(segmented_blocks.cbegin(), segmented_blocks.cend());
@@ -293,12 +294,13 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

  std::unordered_set<int> updated_segments;
  // we need to re-segment TensorRT segments whose inputs are TensorLists
-  for (auto &use : usage_counts) {
+  for (auto& use : usage_counts) {
    auto use_info = use.second;
    // For a particular tensorlist input, traverse through all ids of segmented blocks whose target is TensorRT
    for (auto i : use_info.tensorrt_use_id) {
      if (!updated_segments.count(i)) {
-        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this tensorlist input}
+        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this
+        // tensorlist input}
        std::unordered_map<torch::jit::Value*, SegmentedBlock> tensorlistinput_to_segblock;
        for (auto input : segmented_blocks[i].raw_inputs()) {
          if (isTensorList(input)) {
@@ -308,18 +310,20 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

        // For each tensorlist input in tensorlistinput_to_segblock, get the node which actually uses this input.
        // Once we retrieve the node, we remove it from the current TensorRT segmented_blocks[i]. This node should be
-        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first place.
+        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first
+        // place.
        auto seg_blocks = segmentBlocksWithTensorListInputs(segmented_blocks[i], tensorlistinput_to_segblock);
        auto append_blocks = seg_blocks.first;
        auto trt_block = seg_blocks.second;
-        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that uses tensorlist input removed.
+        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that
+        // uses tensorlist input removed.
        auto next_iter = segmented_blocks_list.erase(idx_to_iter[i]);
        if (trt_block.raw_nodes().size() > 0) {
          segmented_blocks_list.insert(next_iter, trt_block);
        }

        // append blocks' nodes to the producer seg_block
-        for (auto append_block: append_blocks) {
+        for (auto append_block : append_blocks) {
          auto input = append_block.first; // corresponds to the tensorlist input
          auto block = append_block.second;
          // append nodes to segmented_blocks_list
diff --git a/workspace/core/partitioning/shape_analysis.cpp b/tmp/changes.txt
index 96b1312..e773b8f 100644
--- a/workspace/core/partitioning/shape_analysis.cpp
+++ b/tmp/changes.txt
@@ -1,5 +1,5 @@
-#include <ATen/ATen.h>
#include "core/partitioning/shape_analysis.h"
+#include <ATen/ATen.h>
#include "core/util/prelude.h"
#include "torch/csrc/jit/api/module.h"
#include "torch/csrc/jit/passes/constant_pooling.h"
diff --git a/workspace/core/lowering/passes/reduce_gelu.cpp b/tmp/changes.txt
index 15315ba..946df75 100644
--- a/workspace/core/lowering/passes/reduce_gelu.cpp
+++ b/tmp/changes.txt
@@ -12,8 +12,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
            %out : Tensor = aten::gelu(%x)
            return (%out))IR";

-  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions use
-  // an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
+  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions
+  // use an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
  std::string gelu_approximate_pattern = R"IR(
        graph(%x : Tensor, %approx):
            %out : Tensor = aten::gelu(%x, %approx)
@@ -64,7 +64,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
  map_gelu_to_pointwise_ops.runOnGraph(graph);

  torch::jit::SubgraphRewriter map_gelu_approximate_to_pointwise_ops;
-  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
+  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(
+      gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
  map_gelu_approximate_to_pointwise_ops.runOnGraph(graph);

  LOG_GRAPH("Post lowering of [aten::gelu] -> " << *graph);
diff --git a/workspace/core/lowering/passes/linear_to_addmm.cpp b/tmp/changes.txt
index c3160a8..e0e9ca3 100644
--- a/workspace/core/lowering/passes/linear_to_addmm.cpp
+++ b/tmp/changes.txt
@@ -1,15 +1,15 @@

#include <torch/csrc/jit/runtime/operator.h>
+#include "core/util/prelude.h"
+#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/ir/alias_analysis.h"
#include "torch/csrc/jit/jit_log.h"
#include "torch/csrc/jit/passes/constant_propagation.h"
#include "torch/csrc/jit/passes/dead_code_elimination.h"
#include "torch/csrc/jit/passes/guard_elimination.h"
#include "torch/csrc/jit/passes/peephole.h"
-#include "torch/csrc/jit/runtime/graph_executor.h"
-#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/passes/subgraph_rewrite.h"
-#include "core/util/prelude.h"
+#include "torch/csrc/jit/runtime/graph_executor.h"

namespace torch_tensorrt {
namespace core {
@@ -34,7 +34,8 @@ void replaceLinearWithBiasNonePattern(std::shared_ptr<torch::jit::Graph> graph)
        continue;
      } else {
        torch::jit::WithInsertPoint guard(*it);
-        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();;
+        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();
+        ;
        torch::jit::Value* new_output = insertGraph(*it->owningGraph(), *d_graph, it->inputs()).at(0);
        new_output->setType(it->output()->type());
        it->output()->replaceAllUsesWith(new_output);
diff --git a/workspace/tests/util/util.h b/tmp/changes.txt
index 38ba81e..b795667 100644
--- a/workspace/tests/util/util.h
+++ b/tmp/changes.txt
@@ -1,8 +1,8 @@
#pragma once

+#include <ATen/ATen.h>
#include <string>
#include <vector>
-#include <ATen/ATen.h>
#include "ATen/Tensor.h"
#include "core/ir/ir.h"
#include "core/util/prelude.h"
ERROR: Some files do not conform to style guidelines

@bowang007
Collaborator Author

Hi @narendasan, I didn't change anything other than the 3 files in the commit, so why am I still getting these linting issues?
If I apply linting to these files, it seems the files on the master branch would also be changed.

// won't be upgraded to kDouble or kLong since we don't support these 2 types in conversion
if (device == at::kCPU) {
if (s.isFloatingPoint()) {
LOG_WARNING("Unable to process input type of at::kDouble, truncate type to at::kInt in scalar_to_tensor_util ");
Collaborator


Can you add a comment here that is something like
// TODO: Conditionally enable truncation based on user setting?

// won't be upgraded to kDouble or kLong since we don't support these 2 types in conversion
if (device == at::kCPU) {
if (s.isFloatingPoint()) {
LOG_WARNING("Unable to process input type of at::kDouble, truncate type to at::kInt in scalar_to_tensor_util ");
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The warning message should say "truncate type to at::kFloat" here, not "at::kInt".

}
}
if (s.isFloatingPoint()) {
LOG_WARNING("Unable to process input type of at::kDouble, truncate type to at::kInt in scalar_to_tensor_util ");
Collaborator


The warning message should say "truncate type to at::kFloat" here, not "at::kInt".

@@ -119,6 +119,38 @@ void checkSequenceSize(int64_t n, int64_t dim, int64_t seq_size) {
}
}

at::Tensor scalar_to_tensor_util(const at::Scalar& s, const at::Device device = at::kCPU) {
Collaborator


We can just call this scalar_to_tensor since it's in our namespace

@@ -31,7 +31,7 @@ auto prim_registrations =
}})
.evaluator({torch::jit::prim::NumToTensor,
[](const torch::jit::Node* n, kwargs& args) -> c10::optional<torch::jit::IValue> {
- return at::scalar_to_tensor(args.at(n->input(0)).IValue()->toScalar());
+ return scalar_to_tensor_util(args.at(n->input(0)).IValue()->toScalar());
Collaborator


just change this to evaluators::scalar_to_tensor

Collaborator

@narendasan narendasan left a comment


LGTM

@narendasan narendasan merged commit c395c21 into master Apr 12, 2022

@github-actions github-actions bot left a comment


There are some changes that do not conform to C++ style guidelines:

diff --git a/workspace/core/partitioning/partitioning.cpp b/tmp/changes.txt
index d171ae1..c69c32e 100644
--- a/workspace/core/partitioning/partitioning.cpp
+++ b/tmp/changes.txt
@@ -231,8 +231,8 @@ std::unordered_map<torch::jit::Value*, usage_info> getInputUsageCounts(
  return usage_counts;
}

-std::unordered_map<size_t, std::list<SegmentedBlock>::iterator>
-getIdxtoIterMap(std::list<SegmentedBlock> &segmented_blocks_list) {
+std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> getIdxtoIterMap(
+    std::list<SegmentedBlock>& segmented_blocks_list) {
  std::unordered_map<size_t, std::list<SegmentedBlock>::iterator> idx_to_iter;
  auto iter = segmented_blocks_list.begin();
  for (int i = 0; i < segmented_blocks_list.size(); ++i, ++iter) {
@@ -283,9 +283,10 @@ void resolveNonTensorInputBlocks(PartitionedGraph& segmented_blocks) {
}

void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {
-  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which produces/contains it.
-  auto usage_counts = getInputUsageCounts(
-      segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });
+  // usage_counts is a map with key as non-tensor/tensorlist inputs and value as the idx of segmented block which
+  // produces/contains it.
+  auto usage_counts =
+      getInputUsageCounts(segmented_blocks, [](torch::jit::Value* input) -> bool { return isTensorList(input); });

  // Get idx of the segblock to its iterator mapping
  std::list<SegmentedBlock> segmented_blocks_list(segmented_blocks.cbegin(), segmented_blocks.cend());
@@ -293,12 +294,13 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

  std::unordered_set<int> updated_segments;
  // we need to re-segment TensorRT segments whose inputs are TensorLists
-  for (auto &use : usage_counts) {
+  for (auto& use : usage_counts) {
    auto use_info = use.second;
    // For a particular tensorlist input, traverse through all ids of segmented blocks whose target is TensorRT
    for (auto i : use_info.tensorrt_use_id) {
      if (!updated_segments.count(i)) {
-        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this tensorlist input}
+        // tensorlistinput_to_segblock is a mapping from {tensorlist input : segmented block which produced this
+        // tensorlist input}
        std::unordered_map<torch::jit::Value*, SegmentedBlock> tensorlistinput_to_segblock;
        for (auto input : segmented_blocks[i].raw_inputs()) {
          if (isTensorList(input)) {
@@ -308,18 +310,20 @@ void resolveTensorListInputBlocks(PartitionedGraph& segmented_blocks) {

        // For each tensorlist input in tensorlistinput_to_segblock, get the node which actually uses this input.
        // Once we retrieve the node, we remove it from the current TensorRT segmented_blocks[i]. This node should be
-        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first place.
+        // added to block that generated/produced (can be obtained via produce_id) this tensorlist input in the first
+        // place.
        auto seg_blocks = segmentBlocksWithTensorListInputs(segmented_blocks[i], tensorlistinput_to_segblock);
        auto append_blocks = seg_blocks.first;
        auto trt_block = seg_blocks.second;
-        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that uses tensorlist input removed.
+        // Remove the current TensorRT seg_block and replace it with new TRT block (non empty) which has the node that
+        // uses tensorlist input removed.
        auto next_iter = segmented_blocks_list.erase(idx_to_iter[i]);
        if (trt_block.raw_nodes().size() > 0) {
          segmented_blocks_list.insert(next_iter, trt_block);
        }

        // append blocks' nodes to the producer seg_block
-        for (auto append_block: append_blocks) {
+        for (auto append_block : append_blocks) {
          auto input = append_block.first; // corresponds to the tensorlist input
          auto block = append_block.second;
          // append nodes to segmented_blocks_list
diff --git a/workspace/core/partitioning/shape_analysis.cpp b/tmp/changes.txt
index 96b1312..e773b8f 100644
--- a/workspace/core/partitioning/shape_analysis.cpp
+++ b/tmp/changes.txt
@@ -1,5 +1,5 @@
-#include <ATen/ATen.h>
#include "core/partitioning/shape_analysis.h"
+#include <ATen/ATen.h>
#include "core/util/prelude.h"
#include "torch/csrc/jit/api/module.h"
#include "torch/csrc/jit/passes/constant_pooling.h"
diff --git a/workspace/core/conversion/evaluators/eval_util.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/conversion/evaluators/prim.cpp b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/core/lowering/passes/reduce_gelu.cpp b/tmp/changes.txt
index 15315ba..946df75 100644
--- a/workspace/core/lowering/passes/reduce_gelu.cpp
+++ b/tmp/changes.txt
@@ -12,8 +12,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
            %out : Tensor = aten::gelu(%x)
            return (%out))IR";

-  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions use
-  // an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
+  // This gelu_approximate_pattern schema exists in 21.11, 21.12, 22.01 containers of pytorch. These container versions
+  // use an unmerged PR in pytorch : https://github.com/pytorch/pytorch/pull/61439. We reduce this to regular Gelu.
  std::string gelu_approximate_pattern = R"IR(
        graph(%x : Tensor, %approx):
            %out : Tensor = aten::gelu(%x, %approx)
@@ -64,7 +64,8 @@ void ReduceGelu(std::shared_ptr<torch::jit::Graph>& graph) {
  map_gelu_to_pointwise_ops.runOnGraph(graph);

  torch::jit::SubgraphRewriter map_gelu_approximate_to_pointwise_ops;
-  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
+  map_gelu_approximate_to_pointwise_ops.RegisterRewritePattern(
+      gelu_approximate_pattern, gelu_reduce_multi_input_pattern);
  map_gelu_approximate_to_pointwise_ops.runOnGraph(graph);

  LOG_GRAPH("Post lowering of [aten::gelu] -> " << *graph);
diff --git a/workspace/core/lowering/passes/linear_to_addmm.cpp b/tmp/changes.txt
index c3160a8..e0e9ca3 100644
--- a/workspace/core/lowering/passes/linear_to_addmm.cpp
+++ b/tmp/changes.txt
@@ -1,15 +1,15 @@

#include <torch/csrc/jit/runtime/operator.h>
+#include "core/util/prelude.h"
+#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/ir/alias_analysis.h"
#include "torch/csrc/jit/jit_log.h"
#include "torch/csrc/jit/passes/constant_propagation.h"
#include "torch/csrc/jit/passes/dead_code_elimination.h"
#include "torch/csrc/jit/passes/guard_elimination.h"
#include "torch/csrc/jit/passes/peephole.h"
-#include "torch/csrc/jit/runtime/graph_executor.h"
-#include "torch/csrc/jit/api/function_impl.h"
#include "torch/csrc/jit/passes/subgraph_rewrite.h"
-#include "core/util/prelude.h"
+#include "torch/csrc/jit/runtime/graph_executor.h"

namespace torch_tensorrt {
namespace core {
@@ -34,7 +34,8 @@ void replaceLinearWithBiasNonePattern(std::shared_ptr<torch::jit::Graph> graph)
        continue;
      } else {
        torch::jit::WithInsertPoint guard(*it);
-        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();;
+        std::shared_ptr<torch::jit::Graph> d_graph = toGraphFunction(decompose_funcs.get_function("linear")).graph();
+        ;
        torch::jit::Value* new_output = insertGraph(*it->owningGraph(), *d_graph, it->inputs()).at(0);
        new_output->setType(it->output()->type());
        it->output()->replaceAllUsesWith(new_output);
diff --git a/workspace/core/conversion/evaluators/eval_util.h b/tmp/changes.txt
old mode 100755
new mode 100644
diff --git a/workspace/tests/util/util.h b/tmp/changes.txt
index 38ba81e..b795667 100644
--- a/workspace/tests/util/util.h
+++ b/tmp/changes.txt
@@ -1,8 +1,8 @@
#pragma once

+#include <ATen/ATen.h>
#include <string>
#include <vector>
-#include <ATen/ATen.h>
#include "ATen/Tensor.h"
#include "core/ir/ir.h"
#include "core/util/prelude.h"
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


Code conforms to Python style guidelines

Labels
component: conversion Issues re: Conversion stage component: core Issues re: The core compiler component: evaluators Issues re: Specific op evaluators
Successfully merging this pull request may close these issues.

🐛 [Bug] LongType introduced in Conversion process by prim::NumToTensor
2 participants