
Commit f965b99

mergennachin authored and facebook-github-bot committed
Prepare Llama2 README.md for consumption (#2831)
Summary: Cleaning up old contents from Llama2. This is purely a skeleton; follow-up diffs will fix the individual steps. Differential Revision: D55703398
1 parent 51d3389 commit f965b99

File tree: 1 file changed, +32 / -20 lines


examples/models/llama2/README.md

Lines changed: 32 additions & 20 deletions
@@ -1,5 +1,7 @@
 # Summary
-This example demonstrates how to Export a [Llama 2](https://ai.meta.com/llama/) model in ExecuTorch such that it can be used in a mobile environment.
+This example demonstrates how to run a [Llama 2](https://ai.meta.com/llama/) model on mobile via ExecuTorch. We use XNNPACK to accelerate the performance and 4-bit groupwise PTQ quantization to fit the model on a phone.
+
+
 For Llama2, please refer to [the llama's github page](https://github.com/facebookresearch/llama) for details.
 Pretrained parameters are not included in this repo. Users are suggested to download them through [the llama's download page](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
 
@@ -12,31 +14,28 @@ Overall, Llama models are powerful and versatile language models that can be use
 
 Please note that the models are subject to the [acceptable use policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md) and the provided [responsible use guide](https://ai.meta.com/static-resource/responsible-use-guide/).
 
-# Notes
-1. This example is to show the feasibility of exporting a Llama2 model in ExecuTorch. There is no guarantee for performance.
-2. The provided checkpoint, demo_rand_params.pth is a dummy checkpoint with random parameters. It does not provide meaningful results. It's only for the purpose of demonstration and fast iterations. Use the options `--checkpoint <checkpoint>` and `--params <params>` for custom checkpoints.
-
 
-# Limitations
-This example tries to reuse the Python code, with modifications to make it compatible with current ExecuTorch:
-1. Since ExecuTorch does not support complex Tensor data type, use the customized functions to have rotary embedding with real numbers. Please see [GitHub issue: Support complex data type in ExecuTorch](https://github.com/pytorch/executorch/issues/886).
-2. No KV cache. The current cache implementation in the original Llama2 repo is not supported by ExecuTorch, because ExecuTorch runtime assumes model data attributes being static. Please see [GitHub issue: Add support of mutable buffers in ExecuTorch](https://github.com/pytorch/executorch/issues/897).
-3. No CUDA. ExecuTorch is focused on Edge use cases where CUDA is not available on most of the edge devices.
-4. No dependencies on fairscale. The ColumnParallelLinear, ParallelEmbedding and training are not needed and supported in ExecuTorch.
+# Results
 
+TODO - Will fill in table of results.
 
 # Instructions:
-### Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/stable/getting-started-setup) to set up ExecuTorch
-2. `cd examples/third-party/llama`
-3. `pip install -e .`
-4. Go back to `executorch` root, run `bash examples/models/llama2/install_requirements.sh`.
+### Step 1: Setup
+1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch
+2. Run `examples/models/llama2/install_requirements.sh`.
+
+### Step 2: Prepare model
+
+#### Option A: Download and export llama2 model
+
+You can export and run the original Llama2 model.
 
-### Export llama2 models
-2. From `executorch` root, run `python3 -m examples.models.llama2.export_llama`. The exported program, llama2.pte would be saved in current directory using the dummy checkpoint.
-3. Llama2 pretrained parameters can be downloaded [here](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and run with `python3 -m examples.models.llama2.export_llama --checkpoint <checkpoint.pth> --params <params.json>`.
+1. From `executorch` root, run `python3 -m examples.models.llama2.export_llama`. The exported program, llama2.pte would be saved in current directory using the dummy checkpoint.
+2. Llama2 pretrained parameters can be downloaded [here](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and run with `python3 -m examples.models.llama2.export_llama --checkpoint <checkpoint.pth> --params <params.json>`.
 
-### Export and run stories110M model
+#### Option B: Export stories110M model
+
+If you want to deploy and run a smaller model for education purposes
 
 1. Download `stories110M.pt` and `tokenizer.model` from Github.
 ```
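The two export commands in the hunk above compose into one parameterized invocation. The sketch below is hedged: `CHECKPOINT` and `PARAMS` are hypothetical placeholder paths standing in for files downloaded from Meta's download page, and the command is only echoed rather than executed, so the sketch is inert outside an `executorch` checkout.

```shell
# Hedged sketch of Step 2, Option A. CHECKPOINT/PARAMS are hypothetical
# placeholders; point them at your downloaded Llama2 files and run the
# echoed command from the executorch repo root. We echo instead of
# executing so nothing happens without a real checkout.
CHECKPOINT="llama-2-7b/consolidated.00.pth"
PARAMS="llama-2-7b/params.json"
CMD="python3 -m examples.models.llama2.export_llama --checkpoint $CHECKPOINT --params $PARAMS"
echo "$CMD"   # per the README, this writes llama2.pte to the current directory
```

Omitting `--checkpoint`/`--params` falls back to the dummy random-weight checkpoint, per the step list above.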
@@ -59,6 +58,8 @@ This example tries to reuse the Python code, with modifications to make it compa
 ```
 Build with cmake: todo
 
+### Step 3: Run on your computer to validate
+
 5. Run model. Run options available [here](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/main.cpp#L13).
 Build with buck2:
 ```
@@ -67,3 +68,14 @@ This example tries to reuse the Python code, with modifications to make it compa
 Build with cmake: todo
 
 See test script [here](https://github.com/pytorch/executorch/blob/main/.ci/scripts/test_llama.sh).
+
+### Step 4: Run benchmark on a phone via adb shell.
+
+### Step 5: Build iOS and Android apps
+
+
+# Notes
+This example tries to reuse the Python code, with minimal modifications to make it compatible with current ExecuTorch:
+1. Since ExecuTorch does not support complex Tensor data type, use the customized functions to have rotary embedding with real numbers. Please see [GitHub issue: Support complex data type in ExecuTorch](https://github.com/pytorch/executorch/issues/886).
+2. No CUDA. ExecuTorch is focused on Edge use cases where CUDA is not available on most of the edge devices.
+3. No dependencies on fairscale. The ColumnParallelLinear, ParallelEmbedding and training are not needed and supported in ExecuTorch.
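Note 1 of the new "# Notes" section says rotary embedding is rewritten with real numbers because ExecuTorch lacks a complex dtype. A minimal sketch of the idea (not the repo's actual code; the function name and per-vector formulation are assumptions): rotating each (even, odd) feature pair with sin/cos is exactly the complex multiplication the original Llama code performs.

```python
import math

# Illustrative sketch of real-valued rotary position embedding (RoPE).
# NOT the ExecuTorch code; `rope_real` and its signature are assumptions.
# The original Llama code multiplies feature pairs, viewed as complex
# numbers, by exp(i * angle); the same rotation written with real sin/cos:

def rope_real(x, pos, theta=10000.0):
    """Apply RoPE to one token vector x (even length) at position `pos`."""
    dim = len(x)
    out = []
    for i in range(0, dim, 2):
        # Position-dependent angle for this frequency band.
        angle = pos / (theta ** (i / dim))
        c, s = math.cos(angle), math.sin(angle)
        x0, x1 = x[i], x[i + 1]
        # 2-D rotation of the pair (x0, x1) by `angle` -- equivalent to
        # (x0 + 1j*x1) * (c + 1j*s) without any complex dtype.
        out.extend([x0 * c - x1 * s, x0 * s + x1 * c])
    return out
```

Since only `cos`/`sin` and real multiply-adds are involved, this form exports cleanly to a runtime with no complex tensor support.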
