Fix links in docs #10185


Status: Merged (2 commits, Apr 15, 2025)
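The pattern across these commits is mechanical: old `build-run-*.html` tutorial pages are replaced with the consolidated `backends-*` pages, and several `/stable/` links move to `/main/`. A rewrite of this kind can be scripted; the sketch below is illustrative only. The mapping table is assumed from the diffs shown below and is not a complete list, and this is not necessarily how the PR was produced.

```python
# Illustrative sketch: bulk-rewrite stale ExecuTorch doc links in Markdown.
# OLD_TO_NEW is assumed from the diffs in this PR; it is not exhaustive,
# and anchors (#fragments) may still need fixing by hand.
import pathlib

OLD_TO_NEW = {
    "build-run-xtensa.html": "backends-cadence",
    "build-run-qualcomm-ai-engine-direct-backend.html": "backends-qualcomm",
    "build-run-coreml.html": "backends-coreml",
    "build-run-mps.html": "backends-mps",
}

def rewrite_links(root: str) -> None:
    for path in pathlib.Path(root).rglob("*.md"):
        text = path.read_text()
        # Point stable docs links at main, then apply the page renames.
        new_text = text.replace(
            "pytorch.org/executorch/stable/", "pytorch.org/executorch/main/"
        )
        for old, new in OLD_TO_NEW.items():
            new_text = new_text.replace(old, new)
        if new_text != text:
            path.write_text(new_text)
            print(f"updated {path}")

if __name__ == "__main__":
    rewrite_links(".")
```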
Changes from all commits:
backends/cadence/README.md (1 addition, 1 deletion)

@@ -6,7 +6,7 @@

## Tutorial

-Please follow the [tutorial](https://pytorch.org/executorch/main/build-run-xtensa.html) for more information on how to run models on Cadence/Xtensa DSPs.
+Please follow the [tutorial](https://pytorch.org/executorch/main/backends-cadence) for more information on how to run models on Cadence/Xtensa DSPs.

## Directory Structure

backends/qualcomm/README.md (2 additions, 2 deletions)

@@ -6,9 +6,9 @@ we reserve the right to modify interfaces and implementations.

This backend is implemented on the top of
[Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
-Please follow the [tutorial](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md) to set up the environment, build, and run ExecuTorch models with this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).
+Please follow the [tutorial](../../docs/source/backends-qualcomm.md) to set up the environment, build, and run ExecuTorch models with this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).

-A website version of the tutorial is [here](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html).
+A website version of the tutorial is [here](https://pytorch.org/executorch/main/backends-qualcomm).

## Delegate Options

backends/qualcomm/setup.md (1 addition, 1 deletion)
@@ -1,6 +1,6 @@
# Setting up QNN Backend

-Please refer to [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please refer to [Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend](../../docs/source/backends-qualcomm.md).

That is a tutorial for building and running the Qualcomm AI Engine Direct backend,
including compiling a model on an x64 host and running the inference
backends/xnnpack/README.md (2 additions, 2 deletions)

@@ -132,5 +132,5 @@ create an issue on [github](https://github.com/pytorch/executorch/issues).

## See Also
For more information about the XNNPACK Backend, please check out the following resources:
-- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack.html)
-- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference.html)
+- [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack)
+- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference)
docs/README.md (1 addition, 1 deletion)

@@ -130,7 +130,7 @@ Use the
to contribute to the documentation.

In addition to that, see
-[Markdown in Sphinx Tips and Tricks](https://pytorch.org/executorch/markdown-sphinx-tips-tricks.html)
+[Markdown in Sphinx Tips and Tricks](source/markdown-sphinx-tips-tricks.md)
for tips on how to author high-quality markdown pages with Myst Parser.

## Adding Tutorials
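Since this hunk touches the docs-contribution guide, it is worth noting that stale URLs like the ones fixed in this PR are exactly what Sphinx's built-in `linkcheck` builder catches. A minimal way to run it over these docs, assuming the standard Sphinx layout under `docs/source` that this README describes, might look like:

```python
# Illustrative: run Sphinx's linkcheck builder to flag broken/redirected URLs.
# Assumes docs/source is a valid Sphinx source directory, as in this repo.
from sphinx.cmd.build import build_main

# Equivalent to: sphinx-build -b linkcheck docs/source build/linkcheck
exit_code = build_main(["-b", "linkcheck", "docs/source", "build/linkcheck"])
print("linkcheck", "passed" if exit_code == 0 else "found problems")
```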
docs/source/llm/getting-started.md (1 addition, 1 deletion)

@@ -588,7 +588,7 @@ The delegated model should be noticeably faster compared to the non-delegated mo

For more information regarding backend delegation, see the ExecuTorch guides
for the [XNNPACK Backend](../backends-xnnpack.md), [Core ML
-Backend](../backends-coreml.md) and [Qualcomm AI Engine Direct Backend](build-run-llama3-qualcomm-ai-engine-direct-backend.md).
+Backend](../backends-coreml.md) and [Qualcomm AI Engine Direct Backend](../backends-qualcomm.md).

## Quantization

docs/source/tutorials_source/README.txt (1 addition, 1 deletion)

@@ -3,4 +3,4 @@ Tutorials

1. tutorials/*
Getting Started Tutorials
-https://pytorch.org/executorch/tutorials/template_tutorial.html
+https://github.com/pytorch/executorch/blob/main/docs/source/tutorials_source/template_tutorial.py
examples/README.md (1 addition, 1 deletion)

@@ -71,7 +71,7 @@ You will find demos of [ExecuTorch QNN Backend](./qualcomm) in the [`qualcomm/`]

### Cadence HiFi4 DSP

-The [`Cadence/`](./cadence) directory hosts a demo that showcases the process of exporting and executing a model on Xtensa Hifi4 DSP. You can utilize [this tutorial](../docs/source/build-run-xtensa.md) to guide you in configuring the demo and running it.
+The [`Cadence/`](./cadence) directory hosts a demo that showcases the process of exporting and executing a model on Xtensa Hifi4 DSP. You can utilize [this tutorial](../docs/source/backends-cadence.md) to guide you in configuring the demo and running it.

## Dependencies

examples/arm/README.md (1 addition, 1 deletion)

@@ -34,6 +34,6 @@ $ executorch/examples/arm/run.sh --model_name=mv2 --target=ethos-u85-128 [--scra

### Online Tutorial

-We also have a [tutorial](https://pytorch.org/executorch/stable/executorch-arm-delegate-tutorial.html) explaining the steps performed in these
+We also have a [tutorial](https://pytorch.org/executorch/main/backends-arm-ethos-u) explaining the steps performed in these
scripts, expected results, possible problems and more. It is a step-by-step guide
you can follow to better understand this delegate.
examples/models/efficient_sam/README.md (1 addition, 1 deletion)

@@ -12,7 +12,7 @@ Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup#

### Exporting to Core ML

-Make sure to install the [required dependencies](https://pytorch.org/executorch/main/build-run-coreml.html#setting-up-your-developer-environment) for Core ML export.
+Make sure to install the [required dependencies](https://pytorch.org/executorch/main/backends-coreml#development-requirements) for Core ML export.

To export the model to Core ML, run the following command:

examples/models/llama/UTILS.md (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ From `executorch` root:
## Smaller model delegated to other backends

Currently we support lowering the stories model to other backends, including CoreML, MPS, and QNN. Please refer to the instructions
-for each backend ([CoreML](https://pytorch.org/executorch/main/build-run-coreml.html), [MPS](https://pytorch.org/executorch/main/build-run-mps.html), [QNN](https://pytorch.org/executorch/main/build-run-qualcomm-ai-engine-direct-backend.html)) before trying to lower them. After the backend library is installed, the script to export a lowered model is
+for each backend ([CoreML](https://pytorch.org/executorch/main/backends-coreml), [MPS](https://pytorch.org/executorch/main/backends-mps), [QNN](https://pytorch.org/executorch/main/backends-qualcomm)) before trying to lower them. After the backend library is installed, the script to export a lowered model is

- Lower to CoreML: `python -m examples.models.llama.export_llama -kv --disable_dynamic_shape --coreml -c stories110M.pt -p params.json `
- MPS: `python -m examples.models.llama.export_llama -kv --disable_dynamic_shape --mps -c stories110M.pt -p params.json `
examples/models/phi-3-mini-lora/README.md (2 additions, 1 deletion)

@@ -16,8 +16,9 @@ To see how you can use the model exported for training in a fully involved finet
python export_model.py
```

-2. Run the inference model using an example runtime. For more detailed steps on this, check out [Build & Run](https://pytorch.org/executorch/stable/getting-started-setup.html#build-run).
+2. Run the inference model using an example runtime. For more detailed steps on this, check out [Building from Source](https://pytorch.org/executorch/main/using-executorch-building-from-source).
```

# Clean and configure the CMake build system. Compiled programs will appear in the executorch/cmake-out directory we create here.
./install_executorch.sh --clean
(mkdir cmake-out && cd cmake-out && cmake ..)
examples/qualcomm/README.md (2 additions, 2 deletions)

@@ -24,13 +24,13 @@ Here are some general information and limitations.

Please finish the tutorial [Setting up executorch](https://pytorch.org/executorch/stable/getting-started-setup).

-Please finish [setup QNN backend](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please finish [setup QNN backend](../../docs/source/backends-qualcomm.md).

## Environment

Please set the `QNN_SDK_ROOT` environment variable.
Note that this version should be exactly the same as the one used to build the QNN backend.
-Please check [setup](../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md).
+Please check [setup](../../docs/source/backends-qualcomm.md).

Please set up `LD_LIBRARY_PATH` to `$QNN_SDK_ROOT/lib/x86_64-linux-clang`.
Or, you could put the QNN libraries in the default search path of the dynamic linker.
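As a practical aside on the environment setup this hunk references: `QNN_SDK_ROOT` and `LD_LIBRARY_PATH` are the two variables the examples need. A hedged sketch of a launcher that sets them before spawning an example script follows; the SDK path and the script name are placeholders, and a Linux x86-64 host is assumed as in the tutorial.

```python
# Hypothetical launcher: export the QNN environment for child processes.
# The SDK path is a placeholder; substitute your actual install directory.
import os
import subprocess

qnn_sdk_root = "/opt/qcom/qnn-sdk"  # placeholder path
os.environ["QNN_SDK_ROOT"] = qnn_sdk_root
# LD_LIBRARY_PATH set here takes effect in the processes launched below,
# not in the already-running interpreter itself.
os.environ["LD_LIBRARY_PATH"] = (
    f"{qnn_sdk_root}/lib/x86_64-linux-clang:"
    + os.environ.get("LD_LIBRARY_PATH", "")
)
subprocess.run(["python", "your_example_script.py"], check=True)  # hypothetical
```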
examples/qualcomm/oss_scripts/llama/README.md (1 addition, 1 deletion)

@@ -28,7 +28,7 @@ Hybrid Mode: Hybrid mode leverages the strengths of both AR-N model and KV cache

### Step 1: Setup
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.

### Step 2: Prepare Model

examples/qualcomm/qaihub_scripts/llama/README.md (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ Note that the pre-compiled context binaries could not be further fine-tuned for o
### Instructions
#### Step 1: Setup
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.

#### Step 2: Prepare Model
1. Create account for https://aihub.qualcomm.com/
@@ -40,7 +40,7 @@ Note that the pre-compiled context binaries could not be further fine-tuned for o
### Instructions
#### Step 1: Setup
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.

#### Step 2: Prepare Model
1. Create account for https://aihub.qualcomm.com/
@@ -11,7 +11,7 @@ The model architecture, scheduler, and time embedding are from the [stabilityai/
### Instructions
#### Step 1: Setup
1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch.
-2. Follow the [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to build Qualcomm AI Engine Direct Backend.
+2. Follow the [tutorial](https://pytorch.org/executorch/main/backends-qualcomm) to build Qualcomm AI Engine Direct Backend.

#### Step 2: Prepare Model
1. Download the context binaries for TextEncoder, UNet, and VAEDecoder under https://huggingface.co/qualcomm/Stable-Diffusion-v2.1/tree/main
examples/qualcomm/qaihub_scripts/utils/README.md (2 additions, 2 deletions)
@@ -1,6 +1,6 @@
# CLI Tool for Compile / Deploy Pre-Built QNN Artifacts

-An easy-to-use tool for generating / executing .pte program from pre-built model libraries / context binaries from Qualcomm AI Engine Direct. Tool is verified with [host environment](../../../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md#host-os).
+An easy-to-use tool for generating / executing .pte program from pre-built model libraries / context binaries from Qualcomm AI Engine Direct. Tool is verified with [host environment](../../../../docs/source/backends-qualcomm.md#host-os).

## Description

@@ -20,7 +20,7 @@ If users are interested in well-known applications, [Qualcomm AI HUB](https://ai
### Dependencies

* Register for Qualcomm AI HUB.
-* Download the corresponding QNN SDK via [link](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) which your favorite model is compiled with. This link will automatically download the latest version at this moment (users should be able to specify version soon, please refer to [this](../../../../docs/source/build-run-qualcomm-ai-engine-direct-backend.md#software) for earlier releases).
+* Download the corresponding QNN SDK via [link](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) which your favorite model is compiled with. This link will automatically download the latest version at this moment (users should be able to specify version soon, please refer to [this](../../../../docs/source/backends-qualcomm.md#software) for earlier releases).

### Target Model

examples/xnnpack/README.md (2 additions, 2 deletions)
@@ -1,8 +1,8 @@
# XNNPACK Backend

[XNNPACK](https://github.com/google/XNNPACK) is a library of optimized neural network operators for ARM and x86 CPU platforms. Our delegate lowers models to run using these highly optimized CPU operators. You can try out lowering and running some example models in the demo. Please refer to the following docs for information on the XNNPACK Delegate:
-- [XNNPACK Backend Delegate Overview](https://pytorch.org/executorch/stable/native-delegates-executorch-xnnpack-delegate.html)
-- [XNNPACK Delegate Export Tutorial](https://pytorch.org/executorch/stable/tutorial-xnnpack-delegate-lowering.html)
+- [XNNPACK Backend Delegate Overview](https://pytorch.org/executorch/main/backends-xnnpack)
+- [XNNPACK Delegate Export Tutorial](https://pytorch.org/executorch/main/tutorial-xnnpack-delegate-lowering)


## Directory structure
extension/llm/export/partitioner_lib.py (3 additions, 3 deletions)

@@ -57,7 +57,7 @@ def get_mps_partitioner(use_kv_cache: bool = False):
)
except ImportError:
raise ImportError(
"Please install the MPS backend follwing https://pytorch.org/executorch/main/build-run-mps.html"
"Please install the MPS backend follwing https://pytorch.org/executorch/main/backends-mps"
)

compile_specs = [CompileSpec("use_fp16", bytes([True]))]
Expand All @@ -81,7 +81,7 @@ def get_coreml_partitioner(
)
except ImportError:
raise ImportError(
"Please install the CoreML backend follwing https://pytorch.org/executorch/main/build-run-coreml.html"
"Please install the CoreML backend follwing https://pytorch.org/executorch/main/backends-coreml"
+ "; for buck users, please add example dependancies: //executorch/backends/apple/coreml:backend, and etc"
)

@@ -195,7 +195,7 @@ def get_qnn_partitioner(
)
except ImportError:
raise ImportError(
"Please install the Qualcomm backend following https://pytorch.org/executorch/main/build-run-qualcomm-ai-engine-direct-backend.html"
"Please install the Qualcomm backend following https://pytorch.org/executorch/main/backends-qualcomm"
)

use_fp16 = True
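For readers unfamiliar with this module: these getters return partitioner instances that the export pipeline hands to the lowering step. A rough sketch of how such a partitioner is typically consumed, shown with the XNNPACK partitioner since it needs no vendor SDK; the API names follow the public ExecuTorch docs and are assumptions here, not code from this PR.

```python
# Hedged sketch: lowering an exported module with a backend partitioner.
# Assumes executorch with the XNNPACK backend is installed.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
from executorch.exir import to_edge_transform_and_lower

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

exported = torch.export.export(TinyModel(), (torch.randn(1, 8),))
# The partitioner decides which subgraphs the backend delegate will execute.
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()
with open("tiny_xnnpack.pte", "wb") as f:
    f.write(program.buffer)
```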
extension/llm/export/quantizer_lib.py (2 additions, 2 deletions)

@@ -158,7 +158,7 @@ def get_qnn_quantizer(

except ImportError:
raise ImportError(
"Please install the Qualcomm backend follwing https://pytorch.org/executorch/main/build-run-qualcomm.html"
"Please install the Qualcomm backend follwing https://pytorch.org/executorch/main/backends-qualcomm"
)

backend, quant_config = pt2e_quantize.split("_")
@@ -217,7 +217,7 @@ def get_coreml_quantizer(pt2e_quantize: str):
from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer
except ImportError:
raise ImportError(
"Please install the CoreML backend follwing https://pytorch.org/executorch/main/build-run-coreml.html"
"Please install the CoreML backend follwing https://pytorch.org/executorch/main/backends-coreml"
)

if pt2e_quantize == "coreml_8a_c8w":
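Context for these hunks: the helpers in this file return PT2E quantizers, which downstream code feeds into the standard `prepare_pt2e`/`convert_pt2e` flow. A generic sketch of that flow, again using the XNNPACK quantizer to stay SDK-free; the import paths follow public ExecuTorch/PyTorch docs and should be treated as assumptions.

```python
# Hedged sketch of the PT2E quantization flow that consumes quantizers like
# the ones returned by get_qnn_quantizer / get_coreml_quantizer above.
import torch
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# Export, insert observers, run calibration data, then convert.
exported = torch.export.export(model, example_inputs).module()
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # calibration pass with representative inputs
quantized = convert_pt2e(prepared)
```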