Add documentation for instance group kind of type 'KIND_MODEL' #110


Merged — 3 commits merged into main from krish-doc on Jun 13, 2023

Conversation

krishung5 (Contributor) commented on Jun 5, 2023

The documentation for #107.

README.md Outdated
[`KIND_MODEL`](https://github.com/triton-inference-server/common/blob/r23.05/protobuf/model_config.proto#L174-L181).
In this case, the inputs reside on the CPU. The backend does not choose the GPU
device for the model; instead, it respects the device(s) specified in the model
and uses them as they are when the instance group kind is set to `KIND_MODEL`.
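
For reference, an instance group requesting this mode in a model's `config.pbtxt` might look like the following minimal sketch (the count value is arbitrary):

```
instance_group [
  {
    count: 1
    kind: KIND_MODEL
  }
]
```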
A reviewer (Contributor) commented:

Nit: I think this section could be shortened, which could add clarity. It feels like a bit of a run on. I think something like the previous sections could be good, like "Inputs are located on the devices specified by the model. This feature is available starting in the 23.06 release."

Also, a couple of questions:

  • Is there a default case when the model does not specify a device? Does this fail? If not, wondering if the default value is worth adding here.
  • Is there a test model that we could link to? It's okay if not. If we do already have it, linking to that as an example could be useful. It would show users how to specify devices in a PyTorch model.

krishung5 (Contributor, Author) replied on Jun 8, 2023:

Modified the section, let me know if there is anything unclear!

Is there a default case when the model does not specify a device? Does this fail? If not, wondering if the default value is worth adding here.

By default, the first available GPU device will be used. Added this to the documentation.

Is there a test model that we could link to? It's okay if not. If we do already have it, linking to that as an example could be useful. It would show users how to specify devices in a PyTorch model.

Added the link to the testing model. (The link will be valid once the server PR is merged.)
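
For illustration only (a minimal sketch, not the linked test model; the layer names, shapes, and two-GPU setup are assumptions), a traced PyTorch model that pins its layers to specific devices could be built along these lines:

```python
import torch
import torch.nn as nn


class TwoDeviceModel(nn.Module):
    """Toy model whose layers are pinned to explicit GPUs."""

    def __init__(self):
        super().__init__()
        # Devices are chosen inside the model itself; under KIND_MODEL the
        # backend honors this placement instead of picking a device per instance.
        self.layer0 = nn.Linear(16, 16).to("cuda:0")
        self.layer1 = nn.Linear(16, 8).to("cuda:1")

    def forward(self, x):
        # Inputs arrive on the CPU; move them to each layer's device explicitly.
        x = self.layer0(x.to("cuda:0"))
        x = self.layer1(x.to("cuda:1"))
        return x.cpu()


# Tracing records the concrete device placement into the saved TorchScript file.
model = TwoDeviceModel().eval()
traced = torch.jit.trace(model, torch.randn(1, 16))
traced.save("model.pt")
```

Because tracing bakes the concrete devices into `model.pt`, the placement travels with the model, which is what the backend respects when the instance group kind is `KIND_MODEL`.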

@krishung5 krishung5 merged commit 19028b7 into main Jun 13, 2023
@krishung5 krishung5 deleted the krish-doc branch June 13, 2023 05:43
krishung5 added a commit that referenced this pull request Jun 13, 2023
* Add documentation for instance group kind of type 'KIND_MODEL'

* Address comment

* Address comment
@krishung5 krishung5 restored the krish-doc branch June 13, 2023 05:54
mc-nv pushed a commit that referenced this pull request Jun 13, 2023
* Add documentation for instance group kind of type 'KIND_MODEL'

* Address comment

* Address comment