1 parent 67d8faa commit 8505140
doc/frameworks/pytorch/using_pytorch.rst
@@ -772,7 +772,7 @@ The following example is for use cases with multiple GPUs and shows an overridde
 import torch
 import numpy as np

-def predict_fn(input_data, model):
+def predict_fn(input_data, model, context):
     device = torch.device("cuda:" + str(context.system_properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")
     model.to(device)
     model.eval()
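To illustrate what the added ``context`` argument buys in a multi-GPU setting, here is a minimal, torch-free sketch of the device-selection logic from the patched ``predict_fn``. The ``SimpleNamespace`` stand-in for the SageMaker inference context and the ``build_device_string`` helper are illustrative assumptions, not part of the commit; the real context object exposes a ``system_properties`` mapping that carries the GPU id assigned to the current model-server worker.

```python
from types import SimpleNamespace

def build_device_string(context, cuda_available):
    """Mirror the device-selection expression from the patched predict_fn.

    When CUDA is available, pin the model to the GPU assigned to this
    worker via context.system_properties; otherwise fall back to CPU.
    """
    if cuda_available:
        return "cuda:" + str(context.system_properties.get("gpu_id"))
    return "cpu"

# Stand-in for the inference context passed by the model server.
ctx = SimpleNamespace(system_properties={"gpu_id": 1})

print(build_device_string(ctx, cuda_available=True))   # cuda:1
print(build_device_string(ctx, cuda_available=False))  # cpu
```

Because each worker receives its own ``gpu_id`` through the context, this keeps one model replica per GPU instead of having every worker contend for ``cuda:0``.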