SlidingWindowInferer: option to adaptively stitch in cpu memory for large images (#5297)
This adds an option to specify a maximum input image volume (number of
elements) beyond which stitching is dynamically moved to CPU memory, to
avoid GPU out-of-memory crashes. For example, with
`cpu_thresh=400*400*400`, all input images whose volume exceeds that
threshold are stitched on CPU.
Currently, a user must decide beforehand whether to stitch ALL images on
CPU or GPU (by specifying the `device` parameter). But in many datasets,
only a few large images require `device=cpu`, and running inference on
CPU for ALL images is unnecessarily slow.
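The device-selection rule described above can be sketched as follows. This is a minimal illustration in plain Python, not the actual MONAI implementation; the helper name `select_stitch_device` and its signature are hypothetical, but the logic mirrors the behavior the option enables: compare the input volume against `cpu_thresh` and fall back to CPU only for oversized images.

```python
import math


def select_stitch_device(spatial_shape, requested_device="cuda",
                         cpu_thresh=400 * 400 * 400):
    """Pick the device for stitching the sliding-window output.

    Hypothetical helper illustrating the cpu_thresh logic: if the
    number of elements in the input exceeds cpu_thresh, stitch on CPU
    to avoid GPU out-of-memory; otherwise keep the requested device.
    """
    num_elements = math.prod(spatial_shape)
    if cpu_thresh is not None and num_elements > cpu_thresh:
        return "cpu"  # large volume: stitch in CPU memory
    return requested_device


# A 512^3 volume exceeds the 400^3 threshold, so it is stitched on CPU;
# a 256^3 volume stays on the requested (GPU) device.
print(select_stitch_device((512, 512, 512)))  # -> cpu
print(select_stitch_device((256, 256, 256)))  # -> cuda
```

With this rule, only the handful of oversized images in a dataset pay the CPU-stitching cost, while the rest keep the full GPU speed.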
Related to #4625, #4495, #3497, #4726, #4588.
### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.
Signed-off-by: myron <[email protected]>
Co-authored-by: Wenqi Li <[email protected]>
Signed-off-by: KumoLiu <[email protected]>