717 renaming to main #719

Merged: 3 commits, May 19, 2022
2 changes: 1 addition & 1 deletion .github/workflows/pep8.yml
@@ -4,7 +4,7 @@ on:
# quick tests for every pull request
push:
branches:
-      - master
+      - main
pull_request:

jobs:
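For reference, the trigger section of `.github/workflows/pep8.yml` after this change would read roughly as follows (a sketch assembled from the hunk above; surrounding keys and indentation are assumed):

```yaml
# Assumed final state after the rename; only the branch name changed in this PR.
on:
  # quick tests for every pull request
  push:
    branches:
      - main
  pull_request:
```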
2 changes: 1 addition & 1 deletion acceleration/threadbuffer_performance.ipynb
@@ -19,7 +19,7 @@
"source": [
"## Setup Environment\n",
"\n",
-    "The current MONAI master branch must be installed for this feature (as of release 0.3.0), skip this step if already installed:"
+    "The current MONAI main branch must be installed for this feature (as of release 0.3.0), skip this step if already installed:"
]
},
{
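The install step this notebook cell refers to is typically a setup cell like the following (an assumption based on common MONAI tutorial setup cells; the exact pinned source may differ):

```
!pip install -q "git+https://github.com/Project-MONAI/MONAI#egg=monai"
```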
6 changes: 3 additions & 3 deletions full_gpu_inference_pipeline/README.md
@@ -28,7 +28,7 @@ Before starting, I highly recommend you to read the following two links to g
## Prepare the model repository
The full pipeline is as below:

-<img src="https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture3.png">
+<img src="https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture3.png">

### Prepare the model repository file directories
The Triton model repository for the experiment can be set up quickly by: 
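The exact setup command is truncated in this view. For orientation, a Triton model repository generally follows a fixed directory convention (the model and file names below are hypothetical, not taken from this repo):

```
model_repository/
└── segmentation_3d/          # hypothetical model name
    ├── config.pbtxt          # model configuration (backend, inputs, outputs)
    └── 1/                    # numeric version directory
        └── model.pt          # serialized model, e.g. a TorchScript file
```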
@@ -176,9 +176,9 @@ Since 3D medical images are generally big, the overhead brought by protocols can
Note that all the processes (pre/post and AI inference) are on GPU.
From the result, we can conclude that using shared memory greatly reduces latency when the data transfer is large.

-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture2.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture2.png)
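The mechanism behind that result can be sketched with Python's standard library. This illustrates shared-memory handoff in general, not the Triton client API: the payload is written once into a named region and read back by name, with no serialization or socket transfer of the bytes.

```python
from multiprocessing import shared_memory

# Stand-in for a large 3D image buffer that would otherwise travel over HTTP/gRPC.
payload = b"voxel data " * 1024

# Producer: write the buffer once into a named shared-memory region.
producer = shared_memory.SharedMemory(create=True, size=len(payload))
producer.buf[: len(payload)] = payload

# Consumer: attach to the same region by name and read without a protocol copy.
consumer = shared_memory.SharedMemory(name=producer.name)
received = bytes(consumer.buf[: len(payload)])
print(received == payload)  # -> True

consumer.close()
producer.close()
producer.unlink()
```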

### Pre/Post-processing on GPU vs. CPU 
After moving pre- and post-processing to the GPU, we get a 12x speedup for the full pipeline.

-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture1.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture1.png)
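As a back-of-the-envelope illustration of that speedup: the latency figures below are hypothetical, and only the 12x ratio comes from the measurement above.

```python
# Hypothetical end-to-end latencies; only the 12x ratio is from the README.
cpu_pipeline_ms = 1200.0              # assumed latency with CPU pre/post
speedup = 12.0                        # reported end-to-end speedup
gpu_pipeline_ms = cpu_pipeline_ms / speedup
print(f"{gpu_pipeline_ms:.0f} ms")    # -> 100 ms
```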