Commit bb18868

tanmayv25 and kthui authored
Apply suggestions from code review
Co-authored-by: Jacky <[email protected]>
1 parent 603d688 · commit bb18868

File tree

1 file changed: 2 additions and 2 deletions


README.md

Lines changed: 2 additions & 2 deletions
@@ -184,7 +184,7 @@ inputs. Each inference thread invokes a JIT interpreter that executes the ops
 of a model inline, one by one. This parameter sets the size of this thread
 pool. The default value of this setting is the number of cpu cores. Please refer
 to [this](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html)
-document for learning how to set this parameter properly.
+document on how to set this parameter properly.
 
 The section of model config file specifying this parameter will look like:
 
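The config snippet that this last sentence introduces falls outside the hunk's context window, so it does not appear above. For orientation, a minimal sketch of such a section in a Triton model's config.pbtxt, assuming the backend exposes this inter-op thread pool setting under a parameter key named INTER_OP_THREAD_POOL_SIZE (the key name and value here are illustrative, not taken from this diff):

    parameters: {
      # Assumed key name; the actual name is not visible in this diff's context.
      key: "INTER_OP_THREAD_POOL_SIZE"
      value: {
        # Backend parameters in config.pbtxt are passed as strings.
        string_value: "4"
      }
    }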
@@ -204,7 +204,7 @@ within the ops (intra-op parallelism). This can be useful in many cases, includi
 element-wise ops on large tensors, convolutions, GEMMs, embedding lookups and
 others. The default value for this setting is the number of CPU cores. Please refer
 to [this](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html)
-document for learning how to set this parameter properly.
+document on how to set this parameter properly.
 
 The section of model config file specifying this parameter will look like:
 
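As with the first hunk, the config example referenced here sits outside the diff context. A comparable sketch, again assuming a parameter key named INTRA_OP_THREAD_POOL_SIZE (illustrative; not shown in this diff):

    parameters: {
      # Assumed key name for the intra-op setting; not visible in this diff's context.
      key: "INTRA_OP_THREAD_POOL_SIZE"
      value: {
        # Backend parameters in config.pbtxt are passed as strings.
        string_value: "2"
      }
    }

When tuning either value, the PyTorch note linked in the diff suggests balancing inter-op and intra-op thread counts against the available cores so the two pools do not oversubscribe the CPU.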