
Commit 9f1c753

shoumikhin authored and pytorchbot committed

Fix links in docs (#10184)

Some links didn't work. (cherry picked from commit d37785d)

1 parent b2f5468 · commit 9f1c753

File tree: 20 files changed, +32 -31 lines changed

backends/cadence/README.md
Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@
 ## Tutorial
-Please follow the [tutorial](https://pytorch.org/executorch/main/build-run-xtensa.html) for more information on how to run models on Cadence/Xtensa DSPs.
+Please follow the [tutorial](https://pytorch.org/executorch/main/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.
 ## Directory Structure
backends/qualcomm/README.md
Lines changed: 2 additions & 2 deletions

@@ -6,9 +6,9 @@ we reserve the right to modify interfaces and implementations.
 This backend is implemented on the top of
 [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
-Please follow [tutorial](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).
+Please follow [tutorial](../../docs/source/backends-qualcomm.md) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).
-A website version of the tutorial is [here](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html).
+A website version of the tutorial is [here](https://pytorch.org/executorch/main/backends-qualcomm).
 ## Delegate Options
backends/qualcomm/setup.md
Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Setting up QNN Backend
-Please refer to [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please refer to [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](../../docs/source/backends-qualcomm.md).
 That is a tutorial for building and running Qualcomm AI Engine Direct backend,
 including compiling a model on a x64 host and running the inference

backends/xnnpack/README.md
Lines changed: 3 additions & 3 deletions

@@ -131,6 +131,6 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues).
 ## See Also
-For more information about the XNNPACK Delegate, please check out the following resources:
-- [ExecuTorch XNNPACK Delegate](https://pytorch.org/executorch/0.2/native-delegates-executorch-xnnpack-delegate.html)
-- [Building and Running ExecuTorch with XNNPACK Backend](https://pytorch.org/executorch/0.2/native-delegates-executorch-xnnpack-delegate.html)
+For more information about the XNNPACK Backend, please check out the following resources:
+- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)

docs/README.md
Lines changed: 1 addition & 1 deletion

@@ -130,7 +130,7 @@ Use the
 to contribute to the documentation.
 In addition to that, see
-[Markdown in Sphinx Tips and Tricks](https://pytorch.org/executorch/markdown-sphinx-tips-tricks.html)
+[Markdown in Sphinx Tips and Tricks](source/markdown-sphinx-tips-tricks.md)
 for tips on how to author high-quality markdown pages with Myst Parser.
 ## Adding Tutorials

docs/source/llm/getting-started.md
Lines changed: 1 addition & 1 deletion

@@ -588,7 +588,7 @@ The delegated model should be noticeably faster compared to the non-delegated mo
 For more information regarding backend delegation, see the ExecuTorch guides
 for the [XNNPACK Backend](../backends-xnnpack.md), [Core ML
-Backend](../backends-coreml.md) and [Qualcomm AI Engine Direct Backend](build-run-llama3-qualcomm-ai-engine-direct-backend.md).
+Backend](../backends-coreml.md) and [Qualcomm AI Engine Direct Backend](../backends-qualcomm.md).
 ## Quantization
docs/source/tutorials_source/README.txt
Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@ Tutorials
 1. tutorials/*
    Getting Started Tutorials
-   https://pytorch.org/executorch/tutorials/template_tutorial.html
+   https://github.com/pytorch/executorch/blob/main/docs/source/tutorials_source/template_tutorial.py

examples/README.md
Lines changed: 1 addition & 1 deletion

@@ -71,7 +71,7 @@ You will find demos of [ExecuTorch QNN Backend](./qualcomm) in the [`qualcomm/`]
 ### Cadence HiFi4 DSP
-The [`Cadence/`](./cadence) directory hosts a demo that showcases the process of exporting and executing a model on Xtensa Hifi4 DSP. You can utilize [this tutorial](../docs/source/build-run-xtensa.md) to guide you in configuring the demo and running it.
+The [`Cadence/`](./cadence) directory hosts a demo that showcases the process of exporting and executing a model on Xtensa Hifi4 DSP. You can utilize [this tutorial](../docs/source/backends-cadence.md) to guide you in configuring the demo and running it.
 ## Dependencies

examples/arm/README.md
Lines changed: 1 addition & 1 deletion

@@ -34,6 +34,6 @@ $ executorch/examples/arm/run.sh --model_name=mv2 --target=ethos-u85-128 [--scra
 ### Online Tutorial
-We also have a [tutorial](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html) explaining the steps performed in these
+We also have a [tutorial](https://pytorch.org/executorch/main/backends-arm-ethos-u) explaining the steps performed in these
 scripts, expected results, possible problems and more. It is a step-by-step guide
 you can follow to better understand this delegate.

examples/models/efficient_sam/README.md
Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup#
 ### Exporting to Core ML
-Make sure to install the [required dependencies](https://pytorch.org/executorch/main/build-run-coreml.html#setting-up-your-developer-environment) for Core ML export.
+Make sure to install the [required dependencies](https://pytorch.org/executorch/main/backends-coreml#development-requirements) for Core ML export.
 To export the model to Core ML, run the following command:

@@ -32,7 +32,7 @@ python -m examples.xnnpack.aot_compiler -m efficient_sam
 # Performance
-Tests were conducted on an Apple M1 Pro chip using the instructions for building and running Executorch with [Core ML](https://pytorch.org/executorch/main/build-run-coreml.html#runtime) and [XNNPACK](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering.html#running-the-xnnpack-model-with-cmake) backends.
+Tests were conducted on an Apple M1 Pro chip using the instructions for building and running Executorch with [Core ML](https://pytorch.org/executorch/main/https://pytorch.org/executorch/main/backends-coreml#runtime-integration) and [XNNPACK](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering#running-the-xnnpack-model-with-cmake) backends.
 | Backend Configuration | Average Inference Time (seconds) |
 | ---------------------- | -------------------------------- |

examples/models/llama/UTILS.md
Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ From `executorch` root:
 ## Smaller model delegated to other backends
 Currently we supported lowering the stories model to other backends, including, CoreML, MPS and QNN. Please refer to the instruction
-for each backend ([CoreML](https://pytorch.org/executorch/main/build-run-coreml.html), [MPS](https://pytorch.org/executorch/main/build-run-mps.html), [QNN](https://pytorch.org/executorch/main/build-run-qualcomm-ai-engine-direct-backend.html)) before trying to lower them. After the backend library is installed, the script to export a lowered model is
+for each backend ([CoreML](https://pytorch.org/executorch/main/backends-coreml), [MPS](https://pytorch.org/executorch/main/backends-mps), [QNN](https://pytorch.org/executorch/main/backends-qualcomm)) before trying to lower them. After the backend library is installed, the script to export a lowered model is
 - Lower to CoreML: `python -m examples.models.llama.export_llama -kv --disable_dynamic_shape --coreml -c stories110M.pt -p params.json `
 - MPS: `python -m examples.models.llama.export_llama -kv --disable_dynamic_shape --mps -c stories110M.pt -p params.json `

examples/models/phi-3-mini-lora/README.md
Lines changed: 2 additions & 1 deletion

@@ -16,8 +16,9 @@ To see how you can use the model exported for training in a fully involved finet
 python export_model.py
 ```
-2. Run the inference model using an example runtime. For more detailed steps on this, check out [Build & Run](https://pytorch.org/executorch/stable/getting-started-setup.html#build-run).
+2. Run the inference model using an example runtime. For more detailed steps on this, check out [Building from Source](https://pytorch.org/executorch/main/using-executorch-building-from-source).
 ```
+
 # Clean and configure the CMake build system. Compiled programs will appear in the executorch/cmake-out directory we create here.
 ./install_executorch.sh --clean
 (mkdir cmake-out && cd cmake-out && cmake ..)

examples/qualcomm/README.md
Lines changed: 2 additions & 2 deletions

@@ -24,13 +24,13 @@ Here are some general information and limitations.
 Please finish tutorial [Setting up executorch](https://pytorch.org/executorch/stable/getting-started-setup).
-Please finish [setup QNN backend](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please finish [setup QNN backend](../../docs/source/backends-qualcomm.md).
 ## Environment
 Please set up `QNN_SDK_ROOT` environment variable.
 Note that this version should be exactly same as building QNN backend.
-Please check [setup](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please check [setup](../../docs/source/backends-qualcomm.md).
 Please set up `LD_LIBRARY_PATH` to `$QNN_SDK_ROOT/lib/x86_64-linux-clang`.
 Or, you could put QNN libraries to default search path of the dynamic linker.

examples/qualcomm/oss_scripts/llama/README.md
Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ Hybrid Mode: Hybrid mode leverages the strengths of both AR-N model and KV cache
 ### Step 1: Setup
 1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.
 ### Step 2: Prepare Model

examples/qualcomm/qaihub_scripts/llama/README.md
Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ Note that the pre-compiled context binaries could not be futher fine-tuned for o
 ### Instructions
 #### Step 1: Setup
 1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.
 #### Step2: Prepare Model
 1. Create account for https://aihub.qualcomm.com/

@@ -40,7 +40,7 @@ Note that the pre-compiled context binaries could not be futher fine-tuned for o
 ### Instructions
 #### Step 1: Setup
 1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.
 #### Step2: Prepare Model
 1. Create account for https://aihub.qualcomm.com/

examples/qualcomm/qaihub_scripts/stable_diffusion/README.md
Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ The model architecture, scheduler, and time embedding are from the [stabilityai/
 ### Instructions
 #### Step 1: Setup
 1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.
 #### Step2: Prepare Model
 1. Download the context binaries for TextEncoder, UNet, and VAEDecoder under https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/tree/main

examples/qualcomm/qaihub_scripts/utils/README.md
Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # CLI Tool for Compile / Deploy Pre-Built QNN Artifacts
-An easy-to-use tool for generating / executing .pte program from pre-built model libraries / context binaries from Qualcomm AI Engine Direct. Tool is verified with [host environement](../../../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md#host-os).
+An easy-to-use tool for generating / executing .pte program from pre-built model libraries / context binaries from Qualcomm AI Engine Direct. Tool is verified with [host environement](../../../../docs/source/backends-qualcomm.md#host-os).
 ## Description

@@ -20,7 +20,7 @@ If users are interested in well-known applications, [Qualcomm AI HUB](https://ai
 ### Dependencies
 * Register for Qualcomm AI HUB.
-* Download the corresponding QNN SDK via [link](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) which your favorite model is compiled with. Ths link will automatically download the latest version at this moment (users should be able to specify version soon, please refer to [this](../../../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md#software) for earlier releases).
+* Download the corresponding QNN SDK via [link](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) which your favorite model is compiled with. Ths link will automatically download the latest version at this moment (users should be able to specify version soon, please refer to [this](../../../../docs/source/backends-qualcomm.md#software) for earlier releases).
 ### Target Model
examples/xnnpack/README.md
Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 # XNNPACK Backend
 [XNNPACK](https://github.com/google/XNNPACK) is a library of optimized neural network operators for ARM and x86 CPU platforms. Our delegate lowers models to run using these highly optimized CPU operators. You can try out lowering and running some example models in the demo. Please refer to the following docs for information on the XNNPACK Delegate
-- [XNNPACK Backend Delegate Overview](https://pytorch.org/executorch/stable/native-delegates-executorch-xnnpack-delegate.html)
-- [XNNPACK Delegate Export Tutorial](https://pytorch.org/executorch/stable/tutorial-xnnpack-delegate-lowering.html)
+- [XNNPACK Backend Delegate Overview](https://pytorch.org/executorch/main/backends-xnnpack)
+- [XNNPACK Delegate Export Tutorial](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering)
 ## Directory structure
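
The export flow these links describe is compact enough to sketch. Below is a minimal example of lowering a toy module to XNNPACK, assuming the `to_edge_transform_and_lower` API documented on `main`; the module, input shape, and output file name are illustrative, not part of this commit.

```python
import torch

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

# A toy eager-mode module standing in for a real model.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# Capture the graph, then hand XNNPACK-supported subgraphs to the delegate.
exported = torch.export.export(model, example_inputs)
executorch_program = to_edge_transform_and_lower(
    exported,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

# Serialize a .pte program that the ExecuTorch runtime can load.
with open("model_xnnpack.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The resulting `model_xnnpack.pte` is what the linked lowering tutorial then runs with the CMake-built executor.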

extension/llm/export/partitioner_lib.py
Lines changed: 3 additions & 3 deletions

@@ -57,7 +57,7 @@ def get_mps_partitioner(use_kv_cache: bool = False):
         )
     except ImportError:
         raise ImportError(
-            "Please install the MPS backend follwing https://pytorch.org/executorch/main/build-run-mps.html"
+            "Please install the MPS backend follwing https://pytorch.org/executorch/main/backends-mps"
         )

     compile_specs = [CompileSpec("use_fp16", bytes([True]))]

@@ -81,7 +81,7 @@ def get_coreml_partitioner(
         )
     except ImportError:
         raise ImportError(
-            "Please install the CoreML backend follwing https://pytorch.org/executorch/main/build-run-coreml.html"
+            "Please install the CoreML backend follwing https://pytorch.org/executorch/main/backends-coreml"
             + "; for buck users, please add example dependancies: //executorch/backends/apple/coreml:backend, and etc"
         )

@@ -195,7 +195,7 @@ def get_qnn_partitioner(
         )
     except ImportError:
         raise ImportError(
-            "Please install the Qualcomm backend following https://pytorch.org/executorch/main/build-run-qualcomm-ai-engine-direct-backend.html"
+            "Please install the Qualcomm backend following https://pytorch.org/executorch/main/backends-qualcomm"
         )

     use_fp16 = True
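
Every helper touched in this file, and the quantizer helpers in quantizer_lib.py below, uses the same lazy-import guard, which is why a single doc rename fans out into so many string edits. Here is a condensed sketch of the pattern, reusing the real XnnpackPartitioner import; the helper name `get_example_partitioner` is hypothetical:

```python
def get_example_partitioner():
    # Backend packages are optional, so import lazily and point users at the
    # install docs only when the import actually fails.
    try:
        from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
            XnnpackPartitioner,
        )
    except ImportError:
        raise ImportError(
            "Please install the XNNPACK backend following "
            "https://pytorch.org/executorch/main/backends-xnnpack"
        )
    return XnnpackPartitioner()
```

Hoisting the docs base URL into a module-level constant would shrink future renames like this one to a single edit.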

extension/llm/export/quantizer_lib.py
Lines changed: 2 additions & 2 deletions

@@ -158,7 +158,7 @@ def get_qnn_quantizer(
     except ImportError:
         raise ImportError(
-            "Please install the Qualcomm backend follwing https://pytorch.org/executorch/main/build-run-qualcomm.html"
+            "Please install the Qualcomm backend follwing https://pytorch.org/executorch/main/backends-qualcomm"
        )

     backend, quant_config = pt2e_quantize.split("_")

@@ -217,7 +217,7 @@ def get_coreml_quantizer(pt2e_quantize: str):
         from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer
     except ImportError:
         raise ImportError(
-            "Please install the CoreML backend follwing https://pytorch.org/executorch/main/build-run-coreml.html"
+            "Please install the CoreML backend follwing https://pytorch.org/executorch/main/backends-coreml"
         )

     if pt2e_quantize == "coreml_8a_c8w":
