(//bazel): Native compilation support for Jetson AGX platform #113
Conversation
We need to find a way to integrate these changes elegantly so that they don't wildly increase complexity or introduce confusion. We might be able to do some work with select() in the BUILD files of the dependencies, which could get us around having four cases in every BUILD file. Also, can you run all of this through buildifier (https://github.com/bazelbuild/buildtools/blob/master/buildifier/README.md)? There are quite a few formatting issues.
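To illustrate the select()-based approach suggested above, here is a minimal sketch of how a single dependency rule could switch sources by CPU instead of duplicating four rule variants. The config_setting name, target name, and library paths are all hypothetical, not taken from this repository:

```starlark
# Hypothetical BUILD sketch: one rule per dependency, with select()
# choosing the platform-specific artifacts. Names and paths are illustrative.
config_setting(
    name = "aarch64",
    values = {"cpu": "aarch64"},
)

cc_library(
    name = "nvinfer",
    srcs = select({
        # Jetson / aarch64 path (illustrative)
        ":aarch64": ["lib/aarch64-linux-gnu/libnvinfer.so"],
        # Default x86_64 path (illustrative)
        "//conditions:default": ["lib/x86_64-linux-gnu/libnvinfer.so"],
    }),
    hdrs = glob(["include/**/*.h"]),
    visibility = ["//visibility:public"],
)
```

With this shape, downstream BUILD files depend on a single label (here `:nvinfer`) and the platform branching stays contained in one place.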
Support matrix change in this diff:

| Platform | Before | After |
|---|---|---|
| Linux aarch64 / GPU | Planned/Possible with Native Compilation but untested | **Native Compilation Supported on JetPack-4.4** |
| Linux aarch64 / DLA | Planned/Possible with Native Compilation but untested | **Native Compilation Supported on JetPack-4.4 but untested** |
Can you try ResNet-50 on DLA with trtorchc? Also, we need to update the Sphinx documentation to cover the new usage for aarch64 (//docsrc/tutorials/installation.rst).
Closes #36
…nd bilinear2d ops Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…d op Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…ilinear ops. removed redundant tests Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…late plugin Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…ompiles now. time to test it. Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Added support for interpolate plugin, used when align_corners=False and mode is linear Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…plugin, works for mode='linear' Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
… sized tensors Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…ut tensors Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…r adaptive_pool2d Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
… support for aten::select.int Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
…ndex Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: Abhiram Iyer <[email protected]> Signed-off-by: Abhiram Iyer <[email protected]>
Signed-off-by: anuragd <[email protected]>
Signed-off-by: Anurag Dixit <[email protected]>
Signed-off-by: Anurag Dixit <[email protected]>
cuBLAS into its own dependency should follow the same workflow as CUDA (i.e. local only) Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
The issue was in //third_party/libtorch/BUILD: numpy libraries were being picked up as headers. This change also significantly simplifies the build system to use only the cpu flag for configuring precompiled third-party dependencies, and removes the pre_cxx11_abi_aarch64 flag. It streamlines the WORKSPACE, deduplicating repositories. The user workflow for aarch64 compilation is now to configure the WORKSPACE to use all local sources; these should be usable the same way on x86_64. Also discovered that the NVIDIA PyTorch distribution for aarch64 uses the CXX11 ABI, so default paths for both will point to the default local torch install location.

TODO: Move from cpu to platforms (@andi4191)
TODO: Test on both x86 and aarch64 + DLA (@andi4191)
TODO: Before merge, reset the WORKSPACE file to the default settings

Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
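The "all local sources" workflow described above would typically be expressed with Bazel's new_local_repository rule, which maps a directory on disk to an external repository with a checked-in BUILD file. A minimal WORKSPACE sketch follows; the repository names, paths, and build_file labels are assumptions for illustration, not the exact values from this project:

```starlark
# Hypothetical WORKSPACE sketch: point third-party dependencies at local
# installs so the same configuration works on x86_64 and aarch64.
# All paths and build_file labels below are illustrative.
new_local_repository(
    name = "libtorch",
    # e.g. the torch package shipped in NVIDIA's l4t-pytorch distribution
    path = "/usr/local/lib/python3.6/dist-packages/torch",
    build_file = "@//third_party/libtorch:BUILD",
)

new_local_repository(
    name = "cuda",
    path = "/usr/local/cuda",
    build_file = "@//third_party/cuda:BUILD",
)
```

Because each dependency resolves through its local path, switching architectures only requires that the same paths exist on the target machine, with no per-architecture repository duplication in the WORKSPACE.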
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: andi4191 <[email protected]>
@andi4191 can you rebase this on master? The PR is trying to recommit a bunch of old changes.
There were some issues with the PR, so I filed one against your branch to resolve them. I think we need to get this merged ASAP since it is currently blocking CUDA 11 work. If you rebase to remove those extra commits, we should be good to merge this and then do the documentation and DLA work in separate PRs.
Closing this PR. Updated PR tracked here #124 |
Description
This PR adds native compilation support for the NVIDIA Jetson AGX platform.
Dependencies:
Cross-compiled libraries from PyTorch on Jetson. Refer to https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist:
- `make html` (in docsrc)