
Commit b00e9d6

fix: Update new repo link under pytorch org
Signed-off-by: lamhoangtung <[email protected]>
1 parent 6b86ca8 commit b00e9d6
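The commit itself is a mechanical, repository-wide URL substitution: 97 one-line replacements across 48 files, each rewriting the old `NVIDIA/Torch-TensorRT` repository URL to `pytorch/TensorRT`. A change like this is typically generated with a bulk find-and-replace. The command the author actually used is not recorded in the commit, but a minimal sketch with `grep` and GNU `sed` could look like this (both `NVIDIA` and `nvidia` capitalizations of the old org appear in the diff, so the pattern matches either):

```shell
# Rewrite every occurrence of the old repository URL to the new pytorch org.
# Matches both capitalizations (NVIDIA/Torch-TensorRT and nvidia/Torch-TensorRT)
# that appear in the diff. Assumes GNU sed (-i with no suffix argument).
grep -rlE 'i\.8713187\.xyz/(NVIDIA|nvidia)/Torch-TensorRT' . \
  | xargs sed -i -E \
      's#i\.8713187\.xyz/(NVIDIA|nvidia)/Torch-TensorRT#github.com/pytorch/TensorRT#g'
```

Note that the pattern anchors on `Torch-TensorRT`, so unrelated links such as `NVIDIA/TensorRT` (the TensorRT repository itself, referenced in the vgg-qat notebook below) are left untouched.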

Large commits have some content hidden by default; only a subset of the 48 changed files is shown below.

48 files changed (+97 −97 lines)

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 ### Developing Torch-TensorRT
 
-Do try to fill an issue with your feature or bug before filling a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/Torch-TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on a issue. We are happy to provide guidance and mentorship for new contributors. Though note, there is no claiming of issues, we prefer getting working code quickly vs. addressing concerns about "wasted work".
+Do try to fill an issue with your feature or bug before filling a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/pytorch/TensorRT/issues) of issues which are tagged with the area of focus, a coarse priority level and whether the issue may be accessible to new contributors. Let us know if you are interested in working on a issue. We are happy to provide guidance and mentorship for new contributors. Though note, there is no claiming of issues, we prefer getting working code quickly vs. addressing concerns about "wasted work".
 
 #### Communication
 

README.md

Lines changed: 2 additions & 2 deletions
@@ -118,7 +118,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR
 
 ## Prebuilt Binaries and Wheel files
 
-Releases: https://github.com/NVIDIA/Torch-TensorRT/releases
+Releases: https://github.com/pytorch/TensorRT/releases
 
 ## Compiling Torch-TensorRT
 
@@ -291,7 +291,7 @@ Supported Python versions:
 
 ### In Torch-TensorRT?
 
-Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/NVIDIA/Torch-TensorRT/issues) for information on the support status of various operators.
+Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
 
 ### In my application?
 

core/partitioning/README.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ from the user. Shapes can be calculated by running the graphs with JIT.
 it's still a phase in our partitioning process.
 - `Stitching`. Stitch all TensorRT engines with PyTorch nodes altogether.
 
-Test cases for each of these components could be found [here](https://github.com/NVIDIA/Torch-TensorRT/tree/master/tests/core/partitioning).
+Test cases for each of these components could be found [here](https://github.com/pytorch/TensorRT/tree/master/tests/core/partitioning).
 
 Here is the brief description of functionalities of each file:
 - `PartitionInfo.h/cpp`: The automatic fallback APIs that is used for partitioning.

core/plugins/README.md

Lines changed: 1 addition & 1 deletion
@@ -37,4 +37,4 @@ If you'd like to compile your plugin with Torch-TensorRT,
 
 Once you've completed the above steps, upon successful compilation of Torch-TensorRT library, your plugin should be available in `libtorchtrt_plugins.so`.
 
-A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/torchtrt_runtime_example" >here</a>
+A sample runtime application on how to run a network with plugins can be found <a href="https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example" >here</a>

docs/_cpp_api/class_view_hierarchy.html

Lines changed: 2 additions & 2 deletions
@@ -130,7 +130,7 @@
 </div>
 <div class="md-flex__cell md-flex__cell--shrink">
 <div class="md-header-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">
@@ -215,7 +215,7 @@
 </a>
 </label>
 <div class="md-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">

docs/_cpp_api/file_view_hierarchy.html

Lines changed: 2 additions & 2 deletions
@@ -130,7 +130,7 @@
 </div>
 <div class="md-flex__cell md-flex__cell--shrink">
 <div class="md-header-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">
@@ -215,7 +215,7 @@
 </a>
 </label>
 <div class="md-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">

docs/_cpp_api/unabridged_api.html

Lines changed: 2 additions & 2 deletions
@@ -130,7 +130,7 @@
 </div>
 <div class="md-flex__cell md-flex__cell--shrink">
 <div class="md-header-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">
@@ -215,7 +215,7 @@
 </a>
 </label>
 <div class="md-nav__source">
-<a class="md-source" data-md-source="github" href="https://github.com/nvidia/Torch-TensorRT/" title="Go to repository">
+<a class="md-source" data-md-source="github" href="https://github.com/pytorch/TensorRT/" title="Go to repository">
 <div class="md-source__icon">
 <svg height="28" viewbox="0 0 24 24" width="28" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
 <use height="24" width="24" xlink:href="#__github">

docs/_notebooks/CitriNet-example.html

Lines changed: 1 addition & 1 deletion
@@ -1421,7 +1421,7 @@ <h3>FP16 (half precision)<a class="headerlink" href="#FP16-(half-precision)" tit
 </section>
 <section id="What’s-next">
 <h3>What’s next<a class="headerlink" href="#What’s-next" title="Permalink to this headline"></a></h3>
-<p>Now it’s time to try Torch-TensorRT on your own model. Fill out issues at <a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT">https://github.com/NVIDIA/Torch-TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
+<p>Now it’s time to try Torch-TensorRT on your own model. Fill out issues at <a class="reference external" href="https://github.com/pytorch/TensorRT">https://github.com/pytorch/TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
 <div class="nbinput nblast docutils container">
 <div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
 </pre></div>

docs/_notebooks/CitriNet-example.ipynb

Lines changed: 1 addition & 1 deletion
@@ -929,7 +929,7 @@
 "In this notebook, we have walked through the complete process of optimizing the Citrinet model with Torch-TensorRT. On an A100 GPU, with Torch-TensorRT, we observe a speedup of ~**2.4X** with FP32, and ~**2.9X** with FP16 at batchsize of 128.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 },
 {

docs/_notebooks/EfficientNet-example.html

Lines changed: 1 addition & 1 deletion
@@ -1131,7 +1131,7 @@ <h3>FP16 (half precision)<a class="headerlink" href="#FP16-(half-precision)" tit
 </section>
 <section id="What’s-next">
 <h3>What’s next<a class="headerlink" href="#What’s-next" title="Permalink to this headline"></a></h3>
-<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT">https://github.com/NVIDIA/Torch-TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
+<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/pytorch/TensorRT">https://github.com/pytorch/TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
 <div class="nbinput nblast docutils container">
 <div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
 </pre></div>

docs/_notebooks/EfficientNet-example.ipynb

Lines changed: 1 addition & 1 deletion
@@ -658,7 +658,7 @@
 "In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.35x** with FP32, and **3.13x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 },
 {

docs/_notebooks/Hugging-Face-BERT.html

Lines changed: 1 addition & 1 deletion
@@ -1054,7 +1054,7 @@ <h2>Contents<a class="headerlink" href="#Contents" title="Permalink to this head
 <p>Scripted (GPU): 1.0x Traced (GPU): 1.62x Torch-TensorRT (FP32): 2.14x Torch-TensorRT (FP16): 3.15x</p>
 <section id="What’s-next">
 <h3>What’s next<a class="headerlink" href="#What’s-next" title="Permalink to this headline"></a></h3>
-<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT">https://github.com/NVIDIA/Torch-TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
+<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/pytorch/TensorRT">https://github.com/pytorch/TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
 <div class="nbinput nblast docutils container">
 <div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
 </pre></div>

docs/_notebooks/Hugging-Face-BERT.ipynb

Lines changed: 1 addition & 1 deletion
@@ -678,7 +678,7 @@
 "Torch-TensorRT (FP16): 3.15x\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT."
+"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT."
 ]
 },
 {

docs/_notebooks/Resnet50-example.html

Lines changed: 1 addition & 1 deletion
@@ -1350,7 +1350,7 @@ <h3>FP16 (half precision)<a class="headerlink" href="#FP16-(half-precision)" tit
 </section>
 <section id="What’s-next">
 <h3>What’s next<a class="headerlink" href="#What’s-next" title="Permalink to this headline"></a></h3>
-<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT">https://github.com/NVIDIA/Torch-TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
+<p>Now it’s time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at <a class="reference external" href="https://github.com/pytorch/TensorRT">https://github.com/pytorch/TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
 </section>
 </section>
 </section>

docs/_notebooks/Resnet50-example.ipynb

Lines changed: 1 addition & 1 deletion
@@ -897,7 +897,7 @@
 "In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT for EfficientNet-B0 model and test the performance impact of the optimization. With Torch-TensorRT, we observe a speedup of **1.84x** with FP32, and **5.2x** with FP16 on an NVIDIA 3090 GPU. These acceleration numbers will vary from GPU to GPU(as well as implementation to implementation based on the ops used) and we encorage you to try out latest generation of Data center compute cards for maximum acceleration.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. If you run into any issues, you can fill them at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 }
 ],

docs/_notebooks/lenet-getting-started.html

Lines changed: 1 addition & 1 deletion
@@ -1189,7 +1189,7 @@ <h2>Conclusion<a class="headerlink" href="#Conclusion" title="Permalink to this
 <p>In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT and test the performance impact of the optimization.</p>
 <section id="What’s-next">
 <h3>What’s next<a class="headerlink" href="#What’s-next" title="Permalink to this headline"></a></h3>
-<p>Now it’s time to try Torch-TensorRT on your own model. Fill out issues at <a class="reference external" href="https://github.com/NVIDIA/Torch-TensorRT">https://github.com/NVIDIA/Torch-TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
+<p>Now it’s time to try Torch-TensorRT on your own model. Fill out issues at <a class="reference external" href="https://github.com/pytorch/TensorRT">https://github.com/pytorch/TensorRT</a>. Your involvement will help future development of Torch-TensorRT.</p>
 </section>
 </section>
 </section>

docs/_notebooks/lenet-getting-started.ipynb

Lines changed: 1 addition & 1 deletion
@@ -690,7 +690,7 @@
 "In this notebook, we have walked through the complete process of compiling TorchScript models with Torch-TensorRT and test the performance impact of the optimization.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 }
 ],

docs/_notebooks/vgg-qat.ipynb

Lines changed: 4 additions & 4 deletions
@@ -35,7 +35,7 @@
 "source": [
 "<a id=\"1\"></a>\n",
 "## 1. Requirements\n",
-"Please install the <a href=\"https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/int8/training/vgg16#prequisites\">required dependencies</a> and import these libraries accordingly"
+"Please install the <a href=\"https://github.com/pytorch/TensorRT/tree/master/examples/int8/training/vgg16#prequisites\">required dependencies</a> and import these libraries accordingly"
 ]
 },
 {
@@ -1003,7 +1003,7 @@
 "%quant_weight : Tensor = aten::fake_quantize_per_channel_affine(%394, %640, %641, %637, %638, %639)\n",
 "%input.2 : Tensor = aten::_convolution(%quant_input, %quant_weight, %395, %687, %688, %689, %643, %690, %642, %643, %643, %644, %644)\n",
 "```\n",
-"`aten::fake_quantize_per_*_affine` is converted into `QuantizeLayer` + `DequantizeLayer` in Torch-TensorRT internally. Please refer to <a href=\"https://github.com/NVIDIA/Torch-TensorRT/blob/master/core/conversion/converters/impl/quantization.cpp\">quantization op converters</a> in Torch-TensorRT."
+"`aten::fake_quantize_per_*_affine` is converted into `QuantizeLayer` + `DequantizeLayer` in Torch-TensorRT internally. Please refer to <a href=\"https://github.com/pytorch/TensorRT/blob/master/core/conversion/converters/impl/quantization.cpp\">quantization op converters</a> in Torch-TensorRT."
 ]
 },
 {
@@ -1168,8 +1168,8 @@
 "## 9. References\n",
 "* <a href=\"https://arxiv.org/pdf/1409.1556.pdf\">Very Deep Convolution Networks for large scale Image Recognition</a>\n",
 "* <a href=\"https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt/\">Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT</a>\n",
-"* <a href=\"https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/int8/training/vgg16#quantization-aware-fine-tuning-for-trying-out-qat-workflows\">QAT workflow for VGG16</a>\n",
-"* <a href=\"https://github.com/NVIDIA/Torch-TensorRT/tree/master/examples/int8/qat\">Deploying VGG QAT model in C++ using Torch-TensorRT</a>\n",
+"* <a href=\"https://github.com/pytorch/TensorRT/tree/master/examples/int8/training/vgg16#quantization-aware-fine-tuning-for-trying-out-qat-workflows\">QAT workflow for VGG16</a>\n",
+"* <a href=\"https://github.com/pytorch/TensorRT/tree/master/examples/int8/qat\">Deploying VGG QAT model in C++ using Torch-TensorRT</a>\n",
 "* <a href=\"https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization\">Pytorch-quantization toolkit from NVIDIA</a>\n",
 "* <a href=\"https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html\">Pytorch quantization toolkit userguide</a>\n",
 "* <a href=\"https://arxiv.org/pdf/2004.09602.pdf\">Quantization basics</a>"

docs/_sources/_notebooks/CitriNet-example.ipynb.txt

Lines changed: 1 addition & 1 deletion
@@ -929,7 +929,7 @@
 "In this notebook, we have walked through the complete process of optimizing the Citrinet model with Torch-TensorRT. On an A100 GPU, with Torch-TensorRT, we observe a speedup of ~**2.4X** with FP32, and ~**2.9X** with FP16 at batchsize of 128.\n",
 "\n",
 "### What's next\n",
-"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/NVIDIA/Torch-TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
+"Now it's time to try Torch-TensorRT on your own model. Fill out issues at https://github.com/pytorch/TensorRT. Your involvement will help future development of Torch-TensorRT.\n"
 ]
 },
 {
