
Commit ef376d6

Update readme. (#3331)
* Update readme.
  Summary: .
  Reviewed By: cccclai
  Differential Revision: D56532283
  fbshipit-source-id: 62d7c9e8583fdb5c9a1b2e781e80799c06682aae
  (cherry picked from commit ce1e9c1)

* Update readme.
  Summary: .
  Reviewed By: cccclai
  Differential Revision: D56535633
  fbshipit-source-id: 070a3b0af9dea234f8ae4be01c37c03b4e0a56e6
  (cherry picked from commit 035aee4)
1 parent fdd266c commit ef376d6


examples/demo-apps/apple_ios/LLaMA/README.md

Lines changed: 22 additions & 7 deletions
@@ -2,12 +2,20 @@
 
 
 This app demonstrates local inference with a LLaMA chat model using ExecuTorch.
 
-<img src="../_static/img/llama_ios_app.png" alt="iOS LLaMA App" /><br>
-
 ## Prerequisites
-* [Xcode 15](https://developer.apple.com/xcode).
-* [iOS 17 SDK](https://developer.apple.com/ios).
-* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) to set up the repo and dev environment.
+* [Xcode 15](https://developer.apple.com/xcode)
+* [iOS 17 SDK](https://developer.apple.com/ios)
+* Set up your ExecuTorch repo and environment if you haven’t done so by following the [Setting up ExecuTorch](https://pytorch.org/executorch/stable/getting-started-setup) guide to set up the repo and dev environment:
+
+```bash
+git clone -b release/0.2 https://github.com/pytorch/executorch.git
+cd executorch
+git submodule update --init
+
+python3 -m venv .venv && source .venv/bin/activate
+
+./install_requirements.sh
+```
 
 ## Exporting models
 Please refer to the [ExecuTorch Llama2 docs](https://github.com/pytorch/executorch/blob/main/examples/models/llama2/README.md) to export the model.
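
For context, exporting a Llama 2 model for this app around the `release/0.2` timeframe looked roughly like the sketch below. This is not part of the commit; the checkpoint and params paths are placeholders and the exact flags may differ by version, so treat the linked Llama2 docs as authoritative.

```bash
# Illustrative only: export a quantized Llama 2 model to a .pte file targeting
# the XNNPACK backend. Paths are placeholders; check the ExecuTorch Llama2
# README for the exact, up-to-date flags.
python -m examples.models.llama2.export_llama \
  --checkpoint consolidated.00.pth \
  -p params.json \
  -kv -X \
  -qmode 8da4w --group_size 128 \
  -d fp32
```

The resulting `.pte` file, together with a matching tokenizer file, is what the app later asks you to pick in its UI.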
@@ -16,10 +24,11 @@ Please refer to the [ExecuTorch Llama2 docs](https://github.com/pytorch/executor
 
 1. Open the [project](https://github.com/pytorch/executorch/blob/main/examples/demo-apps/apple_ios/LLaMA/LLaMA.xcodeproj) in Xcode.
 2. Run the app (cmd+R).
-3. In app UI pick a model and tokenizer to use, type a prompt and tap the arrow buton as on the [video](../_static/img/llama_ios_app.mp4).
+3. In the app UI, pick a model and tokenizer to use, type a prompt and tap the arrow button.
 
 ```{note}
-ExecuTorch runtime is distributed as a Swift package providing some .xcframework as prebuilt binary targets. Xcode will dowload and cache the package on the first run, which will take some time.
+The ExecuTorch runtime is distributed as a Swift package providing some .xcframework as prebuilt binary targets.
+Xcode will download and cache the package on the first run, which will take some time.
 ```
 
 ## Copy the model to Simulator
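
The note above about the Swift package download can also be handled ahead of time from the command line. A minimal sketch, assuming the scheme is named LLaMA (check the project for the actual scheme name):

```bash
# Optional: pre-fetch the ExecuTorch Swift package so the first in-Xcode build
# doesn't stall on the download. The scheme name "LLaMA" is an assumption here.
cd examples/demo-apps/apple_ios/LLaMA
xcodebuild -resolvePackageDependencies -project LLaMA.xcodeproj -scheme LLaMA
```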
@@ -33,5 +42,11 @@ ExecuTorch runtime is distributed as a Swift package providing some .xcframework
 2. Navigate to the Files tab and drag & drop the model and tokenizer files onto the iLLaMA folder.
 3. Wait until the files are copied.
 
+Click the image below to see it in action!
+
+<a href="https://pytorch.org/executorch/main/_static/img/llama_ios_app.mp4">
+<img src="https://pytorch.org/executorch/main/_static/img/llama_ios_app.png" width="600" alt="iOS app running a LLaMA model">
+</a>
+
 ## Reporting Issues
 If you encounter any bugs or issues while following this tutorial, please file a bug/issue here on [Github](https://github.com/pytorch/executorch/issues/new).
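
As an aside on the "Copy the model to Simulator" steps above, the files can also be pushed into the booted simulator from the command line instead of drag & drop. A minimal sketch, assuming a hypothetical bundle identifier and file names (use the ones configured in the LLaMA Xcode target and produced by your export):

```bash
# Illustrative only: copy the exported model and tokenizer into the app's data
# container on the currently booted simulator. "org.example.illama" and the
# file names are placeholders, not values taken from this repo.
APP_DATA=$(xcrun simctl get_app_container booted org.example.illama data)
cp llama2.pte tokenizer.bin "$APP_DATA/Documents/"
```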
