Commit 54c57ae

717 renaming
Signed-off-by: Wenqi Li <[email protected]>
1 parent: 418f9db

File tree

1 file changed: +3 −3 lines


full_gpu_inference_pipeline/README.md

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ Before starting, I highly recommand you to read the the following two links to g
 ## Prepare the model repository
 The full pipeline is as below:
 
-<img src="https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture3.png">
+<img src="https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture3.png">
 
 ### Prepare the model repository file directories
 The Triton model repository of the experiment can be fast set up by: 
@@ -176,9 +176,9 @@ Since 3D medical images are generally big, the overhead brought by protocols can
 Note that all the processes (pre/post and AI inference) are on GPU.
 From the result, we can come to a conclusion that using shared memory will greatly reduce the latency when data transfer is huge.
 
-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture2.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture2.png)
 
 ### Pre/Post-processing on GPU vs. CPU 
 After doing pre and post-processing on GPU, we can get a 12x speedup for the full pipeline.
 
-![](https://github.com/Project-MONAI/tutorials/raw/master/full_gpu_inference_pipeline/pics/Picture1.png)
+![](https://github.com/Project-MONAI/tutorials/raw/main/full_gpu_inference_pipeline/pics/Picture1.png)
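The three changes above all rewrite hard-coded `raw/master` URLs to `raw/main`. A minimal sketch of doing the same renaming in bulk, assuming GNU `sed` (bare `-i`) and that the command runs at the repository root (the file glob is an assumption, not part of this commit):

```shell
# Find Markdown files that still link to the old default branch,
# then rewrite "tutorials/raw/master" to "tutorials/raw/main" in place.
# xargs -r skips the sed invocation entirely when grep matches nothing.
grep -rl 'tutorials/raw/master' . --include='*.md' \
  | xargs -r sed -i 's#tutorials/raw/master#tutorials/raw/main#g'
```

Running only the `grep -rl` stage first previews which files would be touched before any in-place edit is made.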
