
Commit 187484d

717 renaming to main (#719)

* 717 renaming
  Signed-off-by: Wenqi Li <[email protected]>
* update docs
  Signed-off-by: Wenqi Li <[email protected]>

1 parent 6ba18ab, commit 187484d
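The commit description above tracks a repository-wide rename of references from `master` to `main`. The rename of the branch itself is normally a separate git operation; the following is a minimal sketch in a throwaway repository (the temp directory, user config, and commands are illustrative assumptions, not part of this commit):

```shell
# Sketch only: rename a repository's default branch from master to main.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git checkout -q -b master                    # start from an explicit 'master' branch
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
git branch -m master main                    # rename the current branch in place
git rev-parse --abbrev-ref HEAD              # prints: main
```

On a hosted remote, this is typically followed by pushing the renamed branch (`git push -u origin main`) and switching the remote's default-branch setting.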

File tree

3 files changed: +5 −5 lines changed


.github/workflows/pep8.yml

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ on:
   # quick tests for every pull request
   push:
     branches:
-      - master
+      - main
   pull_request:
 
 jobs:

acceleration/threadbuffer_performance.ipynb

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@
    "source": [
     "## Setup Environment\n",
     "\n",
-    "The current MONAI master branch must be installed for this feature (as of release 0.3.0), skip this step if already installed:"
+    "The current MONAI main branch must be installed for this feature (as of release 0.3.0), skip this step if already installed:"
    ]
   },
   {

full_gpu_inference_pipeline/README.md

Lines changed: 3 additions & 3 deletions

@@ -28,7 +28,7 @@ Before starting, I highly recommand you to read the the following two links to g
 ## Prepare the model repository
 The full pipeline is as below:
 
-<img src="https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture3.png">
+<img src="https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture3.png">
 
 ### Prepare the model repository file directories
 The Triton model repository of the experiment can be fast set up by:
@@ -176,9 +176,9 @@ Since 3D medical images are generally big, the overhead brought by protocols can
 Note that all the processes (pre/post and AI inference) are on GPU.
 From the result, we can come to a conclusion that using shared memory will greatly reduce the latency when data transfer is huge.
 
-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture2.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture2.png)
 
 ### Pre/Post-processing on GPU vs. CPU
 After doing pre and post-processing on GPU, we can get a 12x speedup for the full pipeline.
 
-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture1.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture1.png)
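The README hunks above are a mechanical `raw/master` → `raw/main` URL rewrite. The same edit can be sketched with `sed`; the file and its contents below are stand-ins, not the real README, and GNU `sed -i` syntax is assumed:

```shell
set -e
tmp=$(mktemp -d)
# Stand-in file containing an old-style URL (illustrative content only).
printf '%s\n' '![](https://github.com/Project-MONAI/tutorials/raw/master/pics/Picture2.png)' \
  > "$tmp/README.md"
sed -i 's#raw/master#raw/main#g' "$tmp/README.md"   # GNU sed; BSD sed needs -i ''
grep -c 'raw/main' "$tmp/README.md"                 # prints: 1
```

Using `#` as the `sed` delimiter avoids escaping the slashes inside the URL path.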
