Commit 21ba129

docs(//README): Update the README with the talk and add a contributor guide

Signed-off-by: Naren Dasan <[email protected]>

1 parent 4b58d3b

2 files changed: +118 -16 lines
CONTRIBUTING.md

Lines changed: 76 additions & 0 deletions (new file)
# Contribution Guidelines

### Developing TRTorch

Do try to file an issue with your feature or bug before filing a PR (op support is generally an exception as long as you provide tests to prove functionality). There is also a backlog (https://github.com/NVIDIA/TRTorch/issues) of issues which are tagged with the area of focus, a coarse priority level, and whether the issue may be accessible to new contributors. Let us know if you are interested in working on an issue. We are happy to provide guidance and mentorship for new contributors. Note, though, that there is no claiming of issues: we prefer getting working code quickly over addressing concerns about "wasted work".
#### Communication

The primary location for discussion is GitHub issues. This is the best place for questions about the project and discussion about specific issues.

We use the PyTorch Slack for communication about core development, integration with PyTorch, and other communication that doesn't make sense in GitHub issues. If you need an invite, take a look at the [PyTorch README](https://github.com/pytorch/pytorch/blob/master/README.md) for instructions on requesting one.
### Coding Guidelines

- We generally follow the coding guidelines used in PyTorch. Right now this is not strictly enforced, but match the style used in the existing code.

- Avoid introducing unnecessary complexity into existing code so that maintainability and readability are preserved.

- Try to avoid committing commented-out code.

- Minimize compiler warnings (and produce no errors).

- Make sure all converter tests and the core module test suite pass.

- New features should have corresponding tests, or, if the feature is difficult to test in a testing framework, a description of your testing methodology.

- Comment subtleties and design decisions.

- Document hacks; we can only discuss them if we can find them.
### Commits and PRs

- Try to keep pull requests focused (multiple pull requests are okay). Typically PRs should focus on a single issue or a small collection of closely related issues.

- Typically we try to follow the guidelines set by https://www.conventionalcommits.org/en/v1.0.0/ for commit messages, for clarity. Again, this is not strictly enforced.
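For example, the commit documented on this page follows that convention: `docs(//README): Update the README with the talk and add a contributor guide`.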
- #### Sign Your Work

We require that all contributors "sign off" on their commits. This certifies that the contribution is your original work, or that you have the rights to submit it under the same license or a compatible license.

Any contribution which contains commits that are not signed off will not be accepted.

To sign off on a commit you simply use the `--signoff` (or `-s`) option when committing your changes:

    $ git commit -s -m "Add cool feature."
This will append the following to your commit message:

    Signed-off-by: Your Name <[email protected]>
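If you forgot to sign off on your most recent commit, you can amend it in place with `git commit --amend --signoff` before pushing.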
By doing this you certify the below:

    Developer Certificate of Origin
    Version 1.1

    Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
    1 Letterman Drive
    Suite D4700
    San Francisco, CA, 94129

    Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

    Developer's Certificate of Origin 1.1

    By making a contribution to this project, I certify that:

    (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or

    (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or

    (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.

    (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.

Thanks in advance for your patience as we review your contributions; we do appreciate them!

README.md

Lines changed: 42 additions & 16 deletions

@@ -1,33 +1,53 @@
# TRTorch

> Ahead of Time (AOT) compiling for PyTorch JIT

-## Compiling TRTorch
+TRTorch is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, TRTorch is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. TRTorch operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16) and other settings for your module.
+
+More Information / System Architecture:
+
+- [GTC 2020 Talk](https://developer.nvidia.com/gtc/2020/video/s21671)
+
+## Platform Support
+
+| Platform | Support |
+| -------- | ------- |
+| Linux AMD64 / GPU | **Supported** |
+| Linux aarch64 / GPU | **Planned/Possible with Native Compilation and small modifications to the build system** |
+| Linux aarch64 / DLA | **Planned/Possible with Native Compilation but untested** |
+| Windows / GPU | - |
+| Linux ppc64le / GPU | - |
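As a rough illustration of the explicit compile step described above, here is a minimal C++ sketch that loads a TorchScript module and hands it to TRTorch; the header path and the `trtorch::CompileGraph`/`trtorch::ExtraInfo` names are assumptions based on the C++ API in [**cpp/api**](cpp/api) and may differ from the shipped headers.

``` c++
// Minimal sketch, assuming trtorch::CompileGraph and trtorch::ExtraInfo
// exist as in cpp/api; not a verified example of the shipped API.
#include <torch/script.h>
#include "trtorch/trtorch.h" // assumed public header

int main() {
  // Load a standard TorchScript program (produced by torch.jit.trace/script)
  torch::jit::script::Module mod = torch::jit::load("model.ts");

  // Describe the input shape the TensorRT engine should be built for
  trtorch::ExtraInfo info({{1, 3, 224, 224}});
  info.op_precision = torch::kFloat; // operating precision: FP32 here, FP16 possible

  // The explicit AOT compile step: TorchScript -> TensorRT-backed module
  auto trt_mod = trtorch::CompileGraph(mod, info);

  // The compiled module is used like any other TorchScript module
  auto in = torch::randn({1, 3, 224, 224}, torch::kCUDA);
  auto out = trt_mod.forward({in});
  return 0;
}
```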

### Dependencies

- Libtorch 1.4.0
- CUDA 10.1
- cuDNN 7.6
-- TensorRT 6.0.1.5
+- TensorRT 6.0.1

-Install TensorRT, CUDA and cuDNN on the system before starting to compile.
+## Prebuilt Binaries
+
+Releases: https://github.com/NVIDIA/TRTorch/releases

+## Compiling TRTorch
+
+Install TensorRT, CUDA and cuDNN on the system before starting to compile.

``` shell
-bazel build //:libtrtorch --cxxopt="-DNDEBUG"
+bazel build //:libtrtorch --compilation_mode=opt
```

### Debug build
``` shell
bazel build //:libtrtorch --compilation_mode=dbg
```

A tarball with the include files and library can then be found in bazel-bin

### Running TRTorch on a JIT Graph

-> Make sure to add LibTorch's version of CUDA 10.1 to your LD_LIBRARY_PATH `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib`
+> Make sure to add LibTorch to your LD_LIBRARY_PATH <br>
+> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib`

``` shell
@@ -38,22 +58,28 @@ bazel run //cpp/trtorchexec -- $(realpath <PATH TO GRAPH>) <input-size>

### In TRTorch?

Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters.
+Also do look at the various op support trackers in the [issues](https://github.com/NVIDIA/TRTorch/issues) for information on the support status of various operators.
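To make the graph-rewrite route concrete, here is a minimal sketch using PyTorch's `torch::jit::SubgraphRewriter` (from `torch/csrc/jit/passes/subgraph_rewrite.h`); the `my::relu6` op is hypothetical, and the pattern strings are illustrative rather than taken from TRTorch's lowering passes.

``` c++
#include <torch/csrc/jit/passes/subgraph_rewrite.h>

// Rewrite a hypothetical my::relu6 node into already-supported aten ops
// (relu6(x) == hardtanh(x, 0, 6)), so no new converter is needed.
void LowerRelu6(std::shared_ptr<torch::jit::Graph>& graph) {
  std::string pattern = R"IR(
    graph(%x):
      %y = my::relu6(%x)
      return (%y))IR";

  std::string replacement = R"IR(
    graph(%x):
      %min : float = prim::Constant[value=0.0]()
      %max : float = prim::Constant[value=6.0]()
      %y = aten::hardtanh(%x, %min, %max)
      return (%y))IR";

  torch::jit::SubgraphRewriter rewriter;
  rewriter.RegisterRewritePattern(pattern, replacement);
  rewriter.runOnGraph(graph);
}
```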

### In my application?

-> The Node Converter Registry is not exposed currently in the public API but you can try using internal headers.
+> The Node Converter Registry is not exposed in the top level API but you can try using the internal headers shipped with the tarball.

You can register a converter for your op using the NodeConverterRegistry inside your application.
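For illustration, a sketch of what such a registration might look like, modeled on the converter implementations in [**core**](core); the header path, `RegisterNodeConverters`, and the `ConversionCtx`/`args` types are assumptions drawn from the internal headers, not a documented public API.

``` c++
// Illustrative sketch only: these names mirror TRTorch's internal
// converter registry and are assumptions, not a stable public API.
#include "core/conversion/converters/converters.h"

namespace conversion = trtorch::core::conversion;

// Map aten::relu onto a TensorRT activation layer at conversion time
static auto relu_reg = conversion::converters::RegisterNodeConverters().pattern({
    "aten::relu(Tensor input) -> (Tensor)",
    [](conversion::ConversionCtx* ctx, const torch::jit::Node* n,
       conversion::converters::args& args) -> bool {
      nvinfer1::ITensor* in = args[0].ITensor();
      auto* layer = ctx->net->addActivation(*in, nvinfer1::ActivationType::kRELU);
      ctx->AssociateValueAndTensor(n->outputs()[0], layer->getOutput(0));
      return true; // node successfully converted
    }});
```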

## Structure of the repo

| Component | Description |
| ------------- | ------------------------------------------------------------ |
-| [**core**]() | Main JIT ingest, lowering, conversion and execution implementations |
-| [**cpp**]() | C++ API for TRTorch |
-| [**tests**]() | Unit test for TRTorch |
+| [**core**](core) | Main JIT ingest, lowering, conversion and execution implementations |
+| [**cpp**](cpp) | C++ specific components including API and example applications |
+| [**cpp/api**](cpp/api) | C++ API for TRTorch |
+| [**tests**](tests) | Unit tests for TRTorch |

+## Contributing
+
+Take a look at [CONTRIBUTING.md](CONTRIBUTING.md)

## License

-The TRTorch license can be found in the LICENSE file.
+The TRTorch license can be found in the LICENSE file. It is a BSD-style license.
