auto trt_mod = torch_tensorrt::ts::compile(ts_mod, compile_settings);
// Run like normal
auto results = trt_mod.forward({in_tensor});
// Save module for later
trt_mod.save("trt_torchscript_module.ts");
...
```

## Further resources
Deprecation is used to inform developers that some APIs and tools are no longer recommended for use. Beginning with version 2.3, Torch-TensorRT has the following deprecation policy:
Deprecation notices are communicated in the Release Notes. Deprecated API functions will have a statement in the source documenting when they were deprecated. Deprecated methods and classes will issue deprecation warnings at runtime if they are used. Torch-TensorRT provides a 6-month migration period after the deprecation. APIs and tools continue to work during the migration period. After the migration period ends, APIs and tools are removed in a manner consistent with semantic versioning.
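The runtime-warning behavior described above can be sketched in Python. This is an illustrative pattern only, not Torch-TensorRT source; the decorator name `deprecated` and the version strings are hypothetical:

```python
import warnings

def deprecated(since, remove_in):
    """Decorator marking a function as deprecated; it warns at runtime but keeps working."""
    def wrap(fn):
        def inner(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} is deprecated since {since} and will be removed in {remove_in}",
                DeprecationWarning,
                stacklevel=2,
            )
            return fn(*args, **kwargs)
        return inner
    return wrap

# Hypothetical deprecated API: still functional during the migration period.
@deprecated(since="2.3", remove_in="2.5")
def old_compile_api(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_compile_api(21)  # works, but emits a DeprecationWarning

print(result)  # 42
print(caught[0].category is DeprecationWarning)  # True
```

This mirrors the policy: callers see a warning during the migration window, and the function is only removed after it ends.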
```
pip install tensorrt torch-tensorrt
```

## Compiling Torch-TensorRT
### Installing Dependencies
#### 0. Install Bazel
If you don't have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk

Otherwise, you can use the following instructions to install the binaries: https://docs.bazel.build/versions/master/install.html

Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for that architecture), you can use these instructions
```
bazel run //cpp/bin/torchtrtc -- $(realpath <PATH TO GRAPH>) out.ts <input-size>
```
## Compiling the Python Package
To compile the Python package for your local machine, run `python3 setup.py install` in the `//py` directory.
To build wheel files for different Python versions, first build the Dockerfile in `//py`, then run the following command:

```
docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh
```

Python compilation expects you to use the tarball-based compilation strategy described above.

## Testing using Python backend
Torch-TensorRT supports testing in Python using [nox](https://nox.thea.codes/en/stable).

To install nox using pip:

```
python3 -m pip install --upgrade nox
```
To list supported nox sessions:
```
nox -l
```
Environment variables supported by nox:

```
PYT_PATH      - To use a different PYTHONPATH than the system-installed Python packages
TOP_DIR       - To set the root directory of the noxfile
USE_CXX11     - To use cxx11_abi (Defaults to 0)
USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0)
```
Usage example:

```
nox --session l0_api_tests
```
Supported Python versions:
```
["3.7", "3.8", "3.9", "3.10"]
```
## How do I add support for a new op...
### In Torch-TensorRT?
Thanks for wanting to contribute! There are two main ways to add support for a new op: you can either write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass that replaces your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred because it avoids maintaining a large library of op converters. Also check the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for the support status of various operators.
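To make the registry idea concrete, here is a minimal sketch of the pattern in Python. The names (`CONVERTERS`, `register_converter`, `convert_node`) are hypothetical and only illustrate how a NodeConverterRegistry-style lookup works; the real registry is a C++ API in Torch-TensorRT's internal headers:

```python
# Hypothetical sketch of a node-converter registry; NOT the Torch-TensorRT API.
CONVERTERS = {}

def register_converter(op_name):
    """Decorator that maps a graph-op name to its converter function."""
    def wrap(fn):
        CONVERTERS[op_name] = fn
        return fn
    return wrap

@register_converter("aten::relu")
def convert_relu(inputs):
    # A real converter would emit TensorRT layers; here we just tag the op.
    return ("relu_layer", inputs)

def convert_node(op_name, inputs):
    """Look up the converter registered for an op and apply it."""
    if op_name not in CONVERTERS:
        raise KeyError(f"No converter registered for {op_name}")
    return CONVERTERS[op_name](inputs)

print(convert_node("aten::relu", ["x"]))  # ('relu_layer', ['x'])
```

A graph rewrite pass sidesteps this machinery entirely: it rewrites the unsupported op into ops that already have registered converters, so no new registry entry is needed.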
### In my application?
> The Node Converter Registry is not exposed in the top-level API but is available in the internal headers shipped with the tarball.
You can register a converter for your op using the `NodeConverterRegistry` inside your application.