Commit 7a2c168

Update examples/README.md with Llama 3 and names
- Added Llama 3 8B
- Added llm_manual to the list
- Changed the name from Xtensa to Cadence
1 parent b7b40ac commit 7a2c168

File tree

1 file changed: +4 −3 lines changed


examples/README.md

Lines changed: 4 additions & 3 deletions
````diff
@@ -9,6 +9,7 @@ ExecuTorch's extensive support spans from simple modules like "Add" to comprehen
 ## Directory structure
 ```
 examples
+├── llm_manual # A storage place for the files that [LLM Manual](https://pytorch.org/executorch/main/llm/getting-started.html) needs
 ├── models # Contains a set of popular and representative PyTorch models
 ├── portable # Contains end-to-end demos for ExecuTorch in portable mode
 ├── selective_build # Contains demos of selective build for optimizing the binary size of the ExecuTorch runtime
@@ -20,7 +21,7 @@ examples
 │   └── mps # Contains end-to-end demos of MPS backend
 ├── arm # Contains demos of the Arm TOSA and Ethos-U NPU flows
 ├── qualcomm # Contains demos of Qualcomm QNN backend
-├── xtensa # Contains demos of exporting and running a simple model on Xtensa Hifi4 DSP
+├── cadence # Contains demos of exporting and running a simple model on Xtensa DSPs
 ├── third-party # Third-party libraries required for working on the demos
 └── README.md # This file
 ```
@@ -30,9 +31,9 @@ examples
 
 A user's journey may commence by exploring the demos located in the [`portable/`](./portable) directory. Here, you will gain insights into the fundamental end-to-end workflow to generate a binary file from a ML model in [portable mode](../docs/source/concepts.md##portable-mode-lean-mode) and run it on the ExecuTorch runtime.
 
-## Demo of Llama2
+## Demo of Llama 2 and Llama 3
 
-[This page](./models/llama2/README.md) demonstrates how to run a Llama 2 7B model on mobile via ExecuTorch. We use XNNPACK to accelerate the performance and 4-bit groupwise PTQ quantization to fit the model on Android and iOS mobile phones.
+[This page](./models/llama2/README.md) demonstrates how to run Llama 2 7B and Llama 3 8B models on mobile via ExecuTorch. We use XNNPACK to accelerate the performance and 4-bit groupwise PTQ quantization to fit the model on Android and iOS mobile phones.
 
 ## Demo of Selective Build
 
````