1 file changed: +4 -3 lines changed

@@ -126,8 +126,9 @@ For example:
import os
import torch

- model_path = os.path.join(model_dir, "model.pt")
- model = torch.jit.load(model_path)
+ # ... train `model`, then save it to `model_dir`
+ model_dir = os.path.join(model_dir, "model.pt")
+ torch.jit.save(model, model_dir)
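
The updated example saves a trained model rather than loading one. A minimal sketch of
how this might sit at the end of a SageMaker training script, assuming ``model_dir``
comes from the ``SM_MODEL_DIR`` environment variable set by the training toolkit and
using a stand-in ``nn.Linear`` in place of a real trained model::

    import os

    import torch
    import torch.nn as nn

    # SageMaker passes the model directory through SM_MODEL_DIR
    # ("/opt/ml/model" inside the training container).
    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")

    # Stand-in for a trained model; any nn.Module works here.
    model = nn.Linear(10, 2)
    # ... training loop omitted ...

    # Compile to TorchScript and save it as model.pt, the file name
    # the serving container's default model_fn expects.
    torch.jit.save(torch.jit.script(model), os.path.join(model_dir, "model.pt"))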

Using third-party libraries
---------------------------
@@ -315,7 +316,7 @@ It loads the model parameters from a ``model.pth`` file in the SageMaker model directory.
However, if you are using PyTorch Elastic Inference, you do not have to provide a ``model_fn`` since the PyTorch serving
container has a default one for you. Note, however, that if you use the default ``model_fn``, you must save
your ScriptModule as ``model.pt``. If you implement your own ``model_fn``, use TorchScript and ``torch.jit.save``
- to save your ScriptModule. For more information on inference script, please refer to:
+ to save your ScriptModule, then load it in your ``model_fn`` with ``torch.jit.load``. For more information on inference scripts, please refer to:

`SageMaker PyTorch Default Inference Handler <https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_inference_handler.py>`_.
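
For instance, a custom ``model_fn`` following the SageMaker inference handler contract
might look like this sketch, assuming the ScriptModule was saved as ``model.pt`` as
described above::

    import os

    import torch

    def model_fn(model_dir):
        """Load a TorchScript model that was saved with torch.jit.save.

        SageMaker calls this function with the directory into which the
        model artifacts were extracted.
        """
        model = torch.jit.load(os.path.join(model_dir, "model.pt"))
        model.eval()
        return model
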
Serve a PyTorch Model