Commit f821163

Typo fixes in native-execution.md (#1394)
1 parent 6895a18 commit f821163

File tree

1 file changed: +3 -3 lines changed

docs/native-execution.md

@@ -16,14 +16,14 @@ The 'llama runner' is a native standalone application capable of
 running a model exported and compiled ahead-of-time with either
 Executorch (ET) or AOT Inductor (AOTI). Which model format to use
 depends on your requirements and preferences. Executorch models are
-optimized for portability across a range of decices, including mobile
+optimized for portability across a range of devices, including mobile
 and edge devices. AOT Inductor models are optimized for a particular
 target architecture, which may result in better performance and
 efficiency.

 Building the runners is straightforward with the included cmake build
 files and is covered in the next sections. We will showcase the
-runners using stories15M llama2 7B and llama3.
+runners using llama2 7B and llama3.

 ## What can you do with torchchat's llama runner for native execution?

@@ -160,7 +160,7 @@ and native execution environments, respectively.

 After exporting a model, you will want to verify that the model
 delivers output of high quality, and works as expected. Both can be
-achieved with the Python environment. All torchchat Python comands
+achieved with the Python environment. All torchchat Python commands
 can work with exported models. Instead of loading the model from a
 checkpoint or GGUF file, use the `--dso-path model.so` and
 `--pte-path model.pte` for loading both types of exported models. This
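
For context on the flags touched by the second hunk: `--dso-path` and `--pte-path` point torchchat's Python commands at an exported model instead of a checkpoint. A minimal sketch of the verification step the doc describes, assuming a llama3 model alias and the standard torchchat.py entry point (neither appears in this diff):

# Verify an AOTI-exported model (model.so) rather than loading the checkpoint
python3 torchchat.py generate llama3 --dso-path model.so --prompt "Once upon a time"

# The equivalent check for an ExecuTorch-exported model (model.pte)
python3 torchchat.py generate llama3 --pte-path model.pte --prompt "Once upon a time"

Per the doc being edited, any torchchat Python command accepts these flags, so the same pattern applies beyond generation.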
