Commit dca1fdf

Merge branch 'release/0.6' into cherry-pick-10184-by-pytorch_bot_bot_
2 parents 9f1c753 + 9176b12 commit dca1fdf

3 files changed, +3 -3 lines changed


docs/source/using-executorch-building-from-source.md

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ Or alternatively, [install conda on your machine](https://conda.io/projects/cond
 > # From the root of the executorch repo:
 > ./install_executorch.sh --clean
 > git submodule sync
-> git submodule update --init
+> git submodule update --init --recursive
 > ```
 
 ## Build ExecuTorch C++ runtime from source
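The added `--recursive` flag also updates nested submodules (submodules of submodules), which the plain `--init` form leaves empty. As a quick sanity check after running the updated commands, not part of the commit itself, just a sketch using standard git commands:

```sh
# From the root of the executorch repo, after the clean/sync/update above.
# `git submodule status --recursive` lists every submodule, including nested ones;
# a leading '-' marks any submodule that is still uninitialized.
git submodule status --recursive | grep '^-' \
  && echo "some submodules are still uninitialized" \
  || echo "all submodules initialized"
```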

examples/demo-apps/apple_ios/LLaMA/docs/delegates/mps_README.md

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Link your binary with the ExecuTorch runtime and any backends or kernels used by
 
 Note: To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the executorch_debug framework. For optimal performance, always link against the Release version of the deliverables (those without the _debug suffix), which have all logging overhead removed.
 
-For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/apple-runtime.html).
+For more details integrating and Running ExecuTorch on Apple Platforms, checkout this [link](https://pytorch.org/executorch/main/using-executorch-ios.html).
 
 <p align="center">
 <img src="https://raw.githubusercontent.com/pytorch/executorch/refs/heads/main/docs/source/_static/img/ios_demo_app_swift_pm.png" alt="iOS LLaMA App Swift PM" style="width:600px">

examples/models/efficient_sam/README.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ python -m examples.xnnpack.aot_compiler -m efficient_sam
 
 # Performance
 
-Tests were conducted on an Apple M1 Pro chip using the instructions for building and running Executorch with [Core ML](https://pytorch.org/executorch/main/https://pytorch.org/executorch/main/backends-coreml#runtime-integration) and [XNNPACK](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering#running-the-xnnpack-model-with-cmake) backends.
+Tests were conducted on an Apple M1 Pro chip using the instructions for building and running Executorch with [Core ML](https://pytorch.org/executorch/main/backends-coreml#runtime-integration) and [XNNPACK](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering#running-the-xnnpack-model-with-cmake) backends.
 
 | Backend Configuration | Average Inference Time (seconds) |
 | ---------------------- | -------------------------------- |
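For context, the hunk header above comes from the README's XNNPACK export step. A minimal sketch of how that export pairs with the executor runner from the linked XNNPACK tutorial; the output filename and runner path below are assumptions to verify against the tutorial and your build directory:

```sh
# Export EfficientSAM lowered to the XNNPACK delegate (command shown in the hunk header).
python -m examples.xnnpack.aot_compiler -m efficient_sam

# Run the exported .pte with the XNNPACK executor runner built per the linked tutorial.
# Both the program filename and the runner path are assumptions; check the export log
# and your cmake-out layout for the exact names.
./cmake-out/backends/xnnpack/xnn_executor_runner --model_path=./efficient_sam_xnnpack_fp32.pte
```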
