Commit 28da0e8

chore: rebase to master
Signed-off-by: Dheeraj Peri <[email protected]>
2 parents f205377 + 03a6ca4 commit 28da0e8

File tree: 349 files changed, +83571 additions, −184 deletions


BUILD

Lines changed: 4 additions & 1 deletion

```diff
@@ -18,9 +18,12 @@ pkg_tar(
         "//core/conversion/var:include",
         "//core/conversion/tensorcontainer:include",
         "//core/conversion/evaluators:include",
-        "//core/plugins:include",
+        "//core/ir:include",
         "//core/lowering:include",
         "//core/lowering/passes:include",
+        "//core/partitioning:include",
+        "//core/plugins:impl_include",
+        "//core/plugins:include",
         "//core/runtime:include",
         "//core/util:include",
         "//core/util/logging:include",
```

CHANGELOG.md

Lines changed: 76 additions & 0 deletions

Appended to the end of the changelog:

```markdown
# 0.3.0 (2021-05-13)


### Bug Fixes

* **//plugins:** Readding cuBLAS BUILD to allow linking of libnvinfer_plugin on Jetson ([a8008f4](https://github.com/NVIDIA/TRTorch/commit/a8008f4))
* **//tests/../concat:** Concat test fix ([2432fb8](https://github.com/NVIDIA/TRTorch/commit/2432fb8))
* **//tests/core/partitioning:** Fixing some issues with the partition ([ff89059](https://github.com/NVIDIA/TRTorch/commit/ff89059))
* erase the repetitive nodes in dependency analysis ([80b1038](https://github.com/NVIDIA/TRTorch/commit/80b1038))
* fix a typo for debug ([c823ebd](https://github.com/NVIDIA/TRTorch/commit/c823ebd))
* fix typo bug ([e491bb5](https://github.com/NVIDIA/TRTorch/commit/e491bb5))
* **aten::linear:** Fixes new issues in 1.8 that cause script based ([c5057f8](https://github.com/NVIDIA/TRTorch/commit/c5057f8))
* register the torch_fallback attribute in Python API ([8b7919f](https://github.com/NVIDIA/TRTorch/commit/8b7919f))
* support expand/repeat with IValue type input ([a4882c6](https://github.com/NVIDIA/TRTorch/commit/a4882c6))
* support shape inference for add_, support non-tensor arguments for segmented graphs ([46950bb](https://github.com/NVIDIA/TRTorch/commit/46950bb))


* feat!: Updating versions of CUDA, cuDNN, TensorRT and PyTorch ([71c4dcb](https://github.com/NVIDIA/TRTorch/commit/71c4dcb))
* feat(WORKSPACE)!: Updating PyTorch version to 1.8.1 ([c9aa99a](https://github.com/NVIDIA/TRTorch/commit/c9aa99a))


### Features

* **//.github:** Linter throws 1 when there needs to be style changes to ([a39dea7](https://github.com/NVIDIA/TRTorch/commit/a39dea7))
* **//core:** New API to register arbitrary TRT engines in TorchScript ([3ec836e](https://github.com/NVIDIA/TRTorch/commit/3ec836e))
* **//core/conversion/conversionctx:** Adding logging for truncated ([96245ee](https://github.com/NVIDIA/TRTorch/commit/96245ee))
* **//core/partitioing:** Adding ostream for Partition Info ([b3589c5](https://github.com/NVIDIA/TRTorch/commit/b3589c5))
* **//core/partitioning:** Add an ostream implementation for ([ee536b6](https://github.com/NVIDIA/TRTorch/commit/ee536b6))
* **//core/partitioning:** Refactor top level partitioning API, fix a bug with ([abc63f6](https://github.com/NVIDIA/TRTorch/commit/abc63f6))
* **//core/plugins:** Gating plugin logging based on global config ([1d5a088](https://github.com/NVIDIA/TRTorch/commit/1d5a088))
* added user level API for fallback ([f4c29b4](https://github.com/NVIDIA/TRTorch/commit/f4c29b4))
* allow users to set fallback block size and ops ([6d3064a](https://github.com/NVIDIA/TRTorch/commit/6d3064a))
* insert nodes by dependencies for nonTensor inputs/outputs ([4e32eff](https://github.com/NVIDIA/TRTorch/commit/4e32eff))
* support aten::arange converter ([014e381](https://github.com/NVIDIA/TRTorch/commit/014e381))
* support aten::transpose with negative dim ([4a1d2f3](https://github.com/NVIDIA/TRTorch/commit/4a1d2f3))
* support Int/Bool and other constants' inputs/outputs for TensorRT segments ([54e407e](https://github.com/NVIDIA/TRTorch/commit/54e407e))
* support prim::Param for fallback inputs ([ec2bbf2](https://github.com/NVIDIA/TRTorch/commit/ec2bbf2))
* support prim::Param for input type after refactor ([3cebe97](https://github.com/NVIDIA/TRTorch/commit/3cebe97))
* support Python APIs for Automatic Fallback ([100b090](https://github.com/NVIDIA/TRTorch/commit/100b090))
* support the case when the injected node is not supported in dependency analysis ([c67d8f6](https://github.com/NVIDIA/TRTorch/commit/c67d8f6))
* support truncate long/double to int/float with option ([740eb54](https://github.com/NVIDIA/TRTorch/commit/740eb54))
* Try to submit review before exit ([9a9d7f0](https://github.com/NVIDIA/TRTorch/commit/9a9d7f0))
* update truncate long/double python api ([69e49e8](https://github.com/NVIDIA/TRTorch/commit/69e49e8))
* **//docker:** Adding Docker 21.03 ([9b326e8](https://github.com/NVIDIA/TRTorch/commit/9b326e8))
* update truncate long/double warning message ([60dba12](https://github.com/NVIDIA/TRTorch/commit/60dba12))
* **//docker:** Update CI container ([df63467](https://github.com/NVIDIA/TRTorch/commit/df63467))
* **//py:** Allowing people using the PyTorch backend to use TRTorch/TRT ([6c3e0ad](https://github.com/NVIDIA/TRTorch/commit/6c3e0ad))
* **//py:** Catch when bazel is not in path and error out when running ([1da999d](https://github.com/NVIDIA/TRTorch/commit/1da999d))
* **//py:** Gate partial compilation from to_backend API ([bf1b2d8](https://github.com/NVIDIA/TRTorch/commit/bf1b2d8))
* **//py:** New API to embed engine in new module ([88d07a9](https://github.com/NVIDIA/TRTorch/commit/88d07a9))
* **aten::floor:** Adds floor.int evaluator ([a6a46e5](https://github.com/NVIDIA/TRTorch/commit/a6a46e5))


### BREAKING CHANGES

* PyTorch version has been bumped to 1.8.0
Default CUDA version is CUDA 11.1
TensorRT version is TensorRT 7.2.3.4
cuDNN version is now cuDNN 8.1

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

* Due to issues with compatibility between PyTorch 1.8.0
and 1.8.1 in the Torch Python API, TRTorch 0.3.0 compiled for 1.8.0 does not
work with PyTorch 1.8.1 and will show an error about use_input_stats.
If you see this error make sure the version of libtorch you are
compiling with is PyTorch 1.8.1

TRTorch 0.3.0 will target PyTorch 1.8.1. There is no backwards
compatibility with 1.8.0. If you need this specific version compile from
source with the dependencies in WORKSPACE changed

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
```

WORKSPACE

Lines changed: 5 additions & 0 deletions

```diff
@@ -38,6 +38,11 @@ new_local_repository(
     path = "/usr/local/cuda-11.1/",
 )
 
+new_local_repository(
+    name = "cublas",
+    build_file = "@//third_party/cublas:BUILD",
+    path = "/usr",
+)
 #############################################################################################################
 # Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
 #############################################################################################################
```

core/conversion/converters/impl/activation.cpp

Lines changed: 2 additions & 2 deletions

```diff
@@ -161,7 +161,7 @@ auto acthardtanh TRTORCH_UNUSED =
       TRTORCH_CHECK(new_layer, "Unable to create layer for aten::elu");
       new_layer->setAlpha(alpha);
 
-      new_layer->setName(trtorch::core::util::node_info(n).c_str());
+      new_layer->setName(util::node_info(n).c_str());
 
       auto out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], new_layer->getOutput(0));
       LOG_DEBUG("Output shape: " << out_tensor->getDimensions());
@@ -190,7 +190,7 @@ auto acthardtanh TRTORCH_UNUSED =
       TRTORCH_CHECK(gelu_plugin, "Unable to create gelu plugin from TensorRT plugin registry" << *n);
       auto new_layer =
           ctx->net->addPluginV2(reinterpret_cast<nvinfer1::ITensor* const*>(&in), 1, *gelu_plugin);
-      new_layer->setName("gelu");
+      new_layer->setName(util::node_info(n).c_str());
       auto out_tensor = new_layer->getOutput(0);
       out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], out_tensor);
       LOG_DEBUG("Output shape: " << out_tensor->getDimensions());
```
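The second hunk replaces the hardcoded `"gelu"` layer name with `util::node_info(n)`, so every TensorRT layer is named after the TorchScript node it came from. A minimal Python sketch of why per-node names beat a fixed string (the `node_info` helper below is hypothetical, not TRTorch's implementation): a fixed name collides as soon as two layers of the same kind exist.

```python
def node_info(kind: str, counter: dict) -> str:
    """Build a unique, debuggable layer name from a node's kind,
    loosely mimicking what a node-info helper does with real graph nodes."""
    counter[kind] = counter.get(kind, 0) + 1
    return f"%{counter[kind]}: {kind}"

layers = {}
counter = {}
for op in ["aten::gelu", "aten::elu", "aten::gelu"]:
    name = node_info(op, counter)
    # With a fixed name like "gelu", the second gelu layer would collide here.
    assert name not in layers
    layers[name] = op

print(sorted(layers))
```

Unique names make engine inspection and debug logs unambiguous when a graph contains several activations of the same type.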

core/partitioning/README.md

Lines changed: 67 additions & 0 deletions

# TRTorch Partitioning

The TRTorch partitioning phase implements the `automatic fallback` feature. It does not run by
default; it runs only when automatic fallback is enabled.

At a high level, the partitioning phase does the following:
- `Segmentation`. Walk the operators in order and check whether a converter exists for each one. Then
roughly separate the graph into parts that TRTorch can support and parts it cannot.
- `Dependency Analysis`. Every operator to be compiled needs a "complete dependency graph", meaning
every input can be traced back to a Tensor or TensorList input. Go through all segments produced by segmentation and
run dependency analysis to ensure that TensorRT segments have only Tensor/TensorList inputs and outputs.
- `Shape Analysis`. For each segment, determine the input and output shapes, starting from the input shapes
provided by the user. Shapes are computed by running the graphs with JIT.
- `Conversion`. Every TensorRT segment is converted to a TensorRT engine. This is done in compiler.cpp, but
it is still conceptually a phase of the partitioning process.
- `Stitching`. Stitch all TensorRT engines and PyTorch nodes back together.
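The segmentation step above can be sketched in a few lines. This is an illustrative simplification, not TRTorch's implementation: it greedily groups consecutive ops by whether a (hypothetical) converter registry supports them, which is the shape of the real pass before dependency analysis and `min_block_size` filtering are applied.

```python
# Minimal sketch of segmentation: split an op sequence into alternating
# "convert to TensorRT" and "fall back to PyTorch" blocks.
SUPPORTED = {"aten::conv2d", "aten::relu", "aten::add"}  # hypothetical converter registry

def segment(ops):
    blocks = []
    for op in ops:
        target = "tensorrt" if op in SUPPORTED else "torch"
        if blocks and blocks[-1][0] == target:
            blocks[-1][1].append(op)       # extend the current block
        else:
            blocks.append((target, [op]))  # start a new block
    return blocks

graph = ["aten::conv2d", "aten::relu", "aten::unique", "aten::add"]
print(segment(graph))
# → [('tensorrt', ['aten::conv2d', 'aten::relu']), ('torch', ['aten::unique']), ('tensorrt', ['aten::add'])]
```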
Test cases for each of these components can be found [here](https://github.com/NVIDIA/TRTorch/tree/master/tests/core/partitioning).

A brief description of what each file provides:
- `PartitionInfo.h/cpp`: The automatic fallback APIs used for partitioning.
- `SegmentedBlock.h/cpp`: The main data structures used to maintain information for each segment after segmentation.
- `shape_analysis.h/cpp`: The implementation that obtains the shapes for each segment by running them in JIT.
- `partitioning.h/cpp`: APIs and the main implementation of the partitioning phase.
25+
26+
### Automatic Fallback
27+
To enable automatic fallback feature, you can set following attributes in Python:
28+
```python
29+
import torch
30+
import trtorch
31+
32+
...
33+
model = MyModel()
34+
ts_model = torch.jit.script(model)
35+
trt_model = trtorch.compile(model, {
36+
...
37+
"torch_fallback" : {
38+
"enabled" : True,
39+
"min_block_size" : 3,
40+
"forced_fallback_ops": ["aten::add"],
41+
}
42+
})
43+
```
44+
- `enabled`: By default automatic fallback will be off. It is enabled by setting it to True.
45+
- `min_block_size`: The minimum number of consecutive operations that must satisfy to be converted to TensorRT. For
46+
example, if it's set to 3, then there must be 3 consecutive supported operators then this segments will be converted.
47+
- `forced_fallback_ops`: A list of strings that will be the names of operations that the user explicitly want to be in
48+
PyTorch nodes.
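The `min_block_size` rule can be illustrated with a small stand-alone sketch (illustrative only, not TRTorch code): TensorRT segments with fewer ops than the threshold are demoted back to PyTorch, since tiny engines rarely pay for their launch overhead.

```python
def apply_min_block_size(blocks, min_block_size):
    """Demote TensorRT segments shorter than the threshold back to PyTorch."""
    return [
        ("torch" if target == "tensorrt" and len(ops) < min_block_size else target, ops)
        for target, ops in blocks
    ]

blocks = [
    ("tensorrt", ["aten::conv2d", "aten::relu"]),
    ("torch", ["aten::unique"]),
    ("tensorrt", ["aten::add"]),
]
print(apply_min_block_size(blocks, 3))
# → [('torch', ['aten::conv2d', 'aten::relu']), ('torch', ['aten::unique']), ('torch', ['aten::add'])]
```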
To enable the automatic fallback feature in C++, the following APIs can be used:

```c++
#include "torch/script.h"
#include "trtorch/trtorch.h"

...
auto in = torch::randn({1, 3, 224, 224}, {torch::kCUDA});

auto mod = torch::jit::load("trt_ts_module.ts");
auto input_sizes = std::vector<trtorch::CompileSpec::InputRange>{{in.sizes()}};
trtorch::CompileSpec cfg(input_sizes);
cfg.torch_fallback = trtorch::CompileSpec::TorchFallback(true);
cfg.torch_fallback.min_block_size = 2;
cfg.torch_fallback.forced_fallback_ops.push_back("aten::relu");
auto trt_mod = trtorch::CompileGraph(mod, cfg);
auto out = trt_mod.forward({in});
```

core/plugins/BUILD

Lines changed: 8 additions & 3 deletions

```diff
@@ -42,7 +42,12 @@ load("@rules_pkg//:pkg.bzl", "pkg_tar")
 pkg_tar(
     name = "include",
     package_dir = "core/plugins/",
-    srcs = ["plugins.h",
-            "impl/interpolate_plugin.h",
-            "impl/normalize_plugin.h"],
+    srcs = ["plugins.h"],
 )
+
+pkg_tar(
+    name = "impl_include",
+    package_dir = "core/plugins/impl",
+    srcs = ["impl/interpolate_plugin.h",
+            "impl/normalize_plugin.h"],
+)
```

core/plugins/register_plugins.cpp

Lines changed: 10 additions & 9 deletions

```diff
@@ -14,36 +14,37 @@ namespace impl {
 class TRTorchPluginRegistry {
  public:
   TRTorchPluginRegistry() {
-    trtorch_logger.log(util::logging::LogLevel::kINFO, "Instatiated the TRTorch plugin registry class");
     // register libNvInferPlugins and TRTorch plugins
     // trtorch_logger logging level is set to kERROR and reset back to kDEBUG.
     // This is because initLibNvInferPlugins initializes only a subset of plugins and logs them.
     // Plugins outside this subset in TensorRT are not being logged in this. So temporarily we disable this to prevent
     // multiple logging of same plugins. To provide a clear list of all plugins, we iterate through getPluginRegistry()
     // where it prints the list of all the plugins registered in TensorRT with their namespaces.
-    trtorch_logger.set_reportable_log_level(util::logging::LogLevel::kERROR);
-    initLibNvInferPlugins(&trtorch_logger, "");
-    trtorch_logger.set_reportable_log_level(util::logging::LogLevel::kDEBUG);
+    plugin_logger.set_reportable_log_level(util::logging::LogLevel::kERROR);
+    initLibNvInferPlugins(&plugin_logger, "");
+    plugin_logger.set_reportable_log_level(util::logging::get_logger().get_reportable_log_level());
 
     int numCreators = 0;
     auto pluginsList = getPluginRegistry()->getPluginCreatorList(&numCreators);
     for (int k = 0; k < numCreators; ++k) {
       if (!pluginsList[k]) {
-        trtorch_logger.log(util::logging::LogLevel::kDEBUG, "Plugin creator for plugin " + str(k) + " is a nullptr");
+        plugin_logger.log(util::logging::LogLevel::kDEBUG, "Plugin creator for plugin " + str(k) + " is a nullptr");
         continue;
       }
       std::string pluginNamespace = pluginsList[k]->getPluginNamespace();
-      trtorch_logger.log(
+      plugin_logger.log(
           util::logging::LogLevel::kDEBUG,
           "Registered plugin creator - " + std::string(pluginsList[k]->getPluginName()) +
               ", Namespace: " + pluginNamespace);
     }
-    trtorch_logger.log(util::logging::LogLevel::kDEBUG, "Total number of plugins registered: " + str(numCreators));
+    plugin_logger.log(util::logging::LogLevel::kDEBUG, "Total number of plugins registered: " + str(numCreators));
   }
 
  public:
-  util::logging::TRTorchLogger trtorch_logger =
-      util::logging::TRTorchLogger("[TRTorch Plugins Context] - ", util::logging::LogLevel::kDEBUG, true);
+  util::logging::TRTorchLogger plugin_logger = util::logging::TRTorchLogger(
+      "[TRTorch Plugins Context] - ",
+      util::logging::get_logger().get_reportable_log_level(),
+      util::logging::get_logger().get_is_colored_output_on());
 };
 
 namespace {
```
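The hunk above stops hard-coding the plugin logger's level (`kDEBUG`) and instead inherits it from the global logger, which is what "Gating plugin logging based on global config" means. Python's stdlib logging demonstrates the same pattern as a conceptual analogy (the logger names below are stand-ins, not TRTorch code): a child logger with no level of its own falls back to its parent's effective level, so one global change propagates.

```python
import logging

root = logging.getLogger("trtorch")             # stands in for the global logger
plugins = logging.getLogger("trtorch.plugins")  # stands in for plugin_logger

root.setLevel(logging.ERROR)
# The child has no level set, so it inherits rather than hard-coding DEBUG.
assert plugins.getEffectiveLevel() == logging.ERROR

root.setLevel(logging.DEBUG)                    # one global change...
assert plugins.getEffectiveLevel() == logging.DEBUG  # ...propagates to plugins
```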

cpp/api/include/trtorch/macros.h

Lines changed: 1 addition & 1 deletion

```diff
@@ -20,7 +20,7 @@
 #define STR(x) XSTR(x)
 
 #define TRTORCH_MAJOR_VERSION 0
-#define TRTORCH_MINOR_VERSION 3
+#define TRTORCH_MINOR_VERSION 4
 #define TRTORCH_PATCH_VERSION 0
 #define TRTORCH_VERSION \
     STR(TRTORCH_MAJOR_VERSION) \
```

docker/Dockerfile.21.03

Lines changed: 41 additions & 0 deletions

```dockerfile
FROM nvcr.io/nvidia/pytorch:21.03-py3

RUN apt-get update && apt-get install -y curl gnupg && rm -rf /var/lib/apt/lists/*

RUN curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg
RUN mv bazel.gpg /etc/apt/trusted.gpg.d/
RUN echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list

RUN apt-get update && apt-get install -y bazel-4.0.0 && rm -rf /var/lib/apt/lists/*
RUN ln -s /usr/bin/bazel-4.0.0 /usr/bin/bazel

RUN pip install notebook

COPY . /opt/trtorch
RUN rm /opt/trtorch/WORKSPACE
COPY ./docker/WORKSPACE.cu.docker /opt/trtorch/WORKSPACE

# Workaround for bazel expecting both static and shared versions, we only use shared libraries inside container
RUN cp /usr/lib/x86_64-linux-gnu/libnvinfer.so /usr/lib/x86_64-linux-gnu/libnvinfer_static.a

WORKDIR /opt/trtorch
RUN bazel build //:libtrtorch --compilation_mode opt

WORKDIR /opt/trtorch/py

RUN pip install ipywidgets --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org
RUN jupyter nbextension enable --py widgetsnbextension

# Locale is not set by default
RUN apt-get update && apt-get install -y locales ninja-build && rm -rf /var/lib/apt/lists/* && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN python3 setup.py install --use-cxx11-abi

RUN conda init bash

ENV LD_LIBRARY_PATH /opt/conda/lib/python3.8/site-packages/torch/lib:$LD_LIBRARY_PATH

WORKDIR /opt/trtorch/
CMD /bin/bash
```

docs/_cpp_api/class_view_hierarchy.html

Lines changed: 3 additions & 0 deletions

```diff
@@ -152,6 +152,9 @@
 <a href="https://nvidia.github.io/TRTorch/" title="master">
  master
 </a>
+<a href="https://nvidia.github.io/TRTorch/v0.3.0/" title="v0.3.0">
+ v0.3.0
+</a>
 <a href="https://nvidia.github.io/TRTorch/v0.2.0/" title="v0.2.0">
  v0.2.0
 </a>
```

Each of the following generated docs pages receives the same 3 additions & 0 deletions as docs/_cpp_api/class_view_hierarchy.html (the v0.3.0 entry inserted into the version switcher):

- docs/_cpp_api/classtrtorch_1_1CompileSpec_1_1DataType.html
- docs/_cpp_api/classtrtorch_1_1CompileSpec_1_1Device_1_1DeviceType.html
- docs/_cpp_api/classtrtorch_1_1ptq_1_1Int8CacheCalibrator.html
- docs/_cpp_api/classtrtorch_1_1ptq_1_1Int8Calibrator.html
- docs/_cpp_api/define_macros_8h_1a18d295a837ac71add5578860b55e5502.html
- docs/_cpp_api/define_macros_8h_1a20c1fbeb21757871c52299dc52351b5f.html
- docs/_cpp_api/define_macros_8h_1a25ee153c325dfc7466a33cbd5c1ff055.html
- docs/_cpp_api/define_macros_8h_1a48d6029a45583a06848891cb0e86f7ba.html
