1 file changed: +2 −2 lines changed

@@ -184,7 +184,7 @@ inputs. Each inference thread invokes a JIT interpreter that executes the ops
 of a model inline, one by one. This parameter sets the size of this thread
 pool. The default value of this setting is the number of cpu cores. Please refer
 to [this](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html)
-document for learning how to set this parameter properly.
+document on how to set this parameter properly.
 
 The section of model config file specifying this parameter will look like:
 
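The config section this sentence introduces lies outside the context lines of the hunk above. A minimal sketch of what it might contain, assuming Triton's `config.pbtxt` text-protobuf format and a hypothetical `INTER_OP_THREAD_COUNT` parameter key (neither is confirmed by this diff):

```proto
# Hypothetical model-config fragment: caps the inter-op thread pool
# (one inference thread per concurrently executed op graph) at 4.
parameters: {
  key: "INTER_OP_THREAD_COUNT"
  value: {
    string_value: "4"
  }
}
```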
@@ -204,7 +204,7 @@ within the ops (intra-op parallelism). This can be useful in many cases, includi
 element-wise ops on large tensors, convolutions, GEMMs, embedding lookups and
 others. The default value for this setting is the number of CPU cores. Please refer
 to [this](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html)
-document for learning how to set this parameter properly.
+document on how to set this parameter properly.
 
 The section of model config file specifying this parameter will look like:
 
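As with the inter-op setting, the config section referenced here is outside this hunk's context. A minimal sketch under the same assumptions (Triton `config.pbtxt` format, hypothetical `INTRA_OP_THREAD_COUNT` key — not confirmed by this diff):

```proto
# Hypothetical model-config fragment: caps the threads available for
# parallelism *within* a single op (element-wise ops, GEMMs, etc.) at 2.
parameters: {
  key: "INTRA_OP_THREAD_COUNT"
  value: {
    string_value: "2"
  }
}
```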