
Various fixes to PIN integration example #368


Merged — 4 commits merged into main on Oct 19, 2022
Conversation

CPBridge (Collaborator)

Hi @aihsani

Thanks again for this nice integration example and sorry for taking so long to get around to looking over it properly. I just played around with it for a while and it seems to be working quite nicely. During this process, I made a bunch of small changes, which I've collected in this pull request:

  • A few fixes to the documentation (small typos and things)
  • Despite the docs saying that a GPU is not required, the device for the inference part was actually hard-coded as "cuda", meaning it won't work on CPU and will just crash. I fixed this. Unfortunately this does affect me, as I cannot figure out how to get GPUs active within a Docker container with the version of Docker Compose that I have (if you have a quick pointer to a fix I'd appreciate it, but I don't necessarily need a solution).
  • I made a fairly substantial change to the way the GSPS output works. Even though this is not the point of the integration, I think it's important that it follows best practices for GSPS creation. Generally speaking, a single GSPS should relate to an entire series, rather than one GSPS object per image or detection (as currently implemented). If there are multiple GSPS objects for a series, a viewer (including eUnity) will only apply a single, manually selected GSPS at a time, so the current behaviour would give rise to a pretty bad UI in the viewer. I therefore tweaked this to create a single GSPS for the entire series, which also has the effect of applying the windowing to the whole series.
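The CPU fallback described in the second point can be sketched as follows. This is a minimal illustration, not code from this repository: `select_device` is a hypothetical helper, and in the real application the availability flag would come from `torch.cuda.is_available()`.

```python
def select_device(cuda_available: bool) -> str:
    """Return the torch device string to use for inference.

    Falls back to "cpu" when CUDA is unavailable, instead of
    hard-coding "cuda" and crashing on CPU-only hosts.
    """
    return "cuda" if cuda_available else "cpu"


# In the application this would typically be driven by PyTorch, e.g.:
#   import torch
#   device = select_device(torch.cuda.is_available())
```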

@CPBridge CPBridge added labels: bug (Something isn't working), documentation (Improvements or additions to documentation) Oct 13, 2022
@CPBridge CPBridge requested a review from aihsani October 13, 2022 14:56
@CPBridge CPBridge self-assigned this Oct 13, 2022

MMelQin commented Oct 14, 2022

  • Despite the docs saying that a GPU is not required, the device for the inference part was actually hard-coded as "cuda", meaning it won't work on CPU and will just crash. I fixed this. Unfortunately this does affect me, as I cannot figure out how to get GPUs active within a Docker container with the version of Docker Compose that I have (if you have a quick pointer to a fix I'd appreciate it, but I don't necessarily need a solution).

This may not be exactly what you are looking for, but to achieve this we use the NVIDIA container base image, run docker with GPU options (--gpus or --runtime=nvidia), and for Docker Compose define the device driver in the service configuration.
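For reference, the Docker Compose approach mentioned above can take roughly this shape (a sketch, not the project's actual compose file: the service and image names are placeholders, and it assumes the NVIDIA Container Toolkit is installed and a Compose version that supports the `deploy.resources` GPU device syntax):

```yaml
services:
  inference:                  # placeholder service name
    image: example/inference  # placeholder image name
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```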

Signed-off-by: Christopher Bridge <[email protected]>
@CPBridge CPBridge force-pushed the pin_integration_tweaks branch from 42e5c61 to 16bb2d1 October 14, 2022 19:20
Christopher Bridge added 2 commits October 14, 2022 15:37
Signed-off-by: Christopher Bridge <[email protected]>
Signed-off-by: Christopher Bridge <[email protected]>
@CPBridge (Collaborator, Author)

This may not be exactly what you are looking for, but to achieve this we use the NVIDIA container base image, run docker with GPU options (--gpus or --runtime=nvidia), and for Docker Compose define the device driver in the service configuration.

Thanks Min, but this hasn't been working for me. It's OK, no need to worry about this here.

Please rebase, as inference.py was updated in a previous PR of mine to make the App SDK compatible with MONAI Core 0.9.1 and newer; for this one there is a small change in the pre-processing to remove the use of ToTensord.

Rebased, and fixed some style issues (that bizarrely were not caught before)


aihsani commented Oct 18, 2022

Ready to merge?

@MMelQin MMelQin (Collaborator) left a comment
LGTM

Signed-off-by: Christopher Bridge <[email protected]>

CPBridge commented Oct 18, 2022

@aihsani I made a couple of small tweaks to the README, as we discussed on Slack. I think we are ready to merge.

@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

  • Bugs: 0 (A)
  • Vulnerabilities: 0 (A)
  • Security Hotspots: 0 (A)
  • Code Smells: 0 (A)
  • Coverage: no coverage information
  • Duplication: 0.0%

@MMelQin MMelQin (Collaborator) left a comment
LGTM

@CPBridge CPBridge merged commit f7176c0 into main Oct 19, 2022