Add MTK NeuroPilot Portal link for SDK in mediatek_README.md #5872

@@ -12,7 +12,11 @@ Phone verified: MediaTek Dimensity 9300 (D9300) chip.
* Download and link the Buck2 build, Android NDK, and MediaTek ExecuTorch Libraries from the MediaTek Backend Readme ([link](https://github.com/pytorch/executorch/tree/main/backends/mediatek/scripts#prerequisites)).
* MediaTek Dimensity 9300 (D9300) chip device
* Desired Llama 3 model weights. You can download them from Hugging Face ([example](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)).
* `libneuronusdk_adapter.mtk.so`, `libneuron_buffer_allocator.so`, and `.whl` files (will be available soon by MediaTek)
* Download NeuroPilot Express SDK from the [MediaTek NeuroPilot Portal](https://neuropilot.mediatek.com/resources/public/npexpress/en/docs/npexpress) (coming soon):
- `libneuronusdk_adapter.mtk.so`: This universal SDK contains the implementation required for executing target-dependent code on the MediaTek chip.
- `libneuron_buffer_allocator.so`: This utility library is designed for allocating the DMA buffers necessary for model inference; see the sketch after this list for copying both shared libraries to the device.
- `mtk_converter-8.8.0.dev20240723+public.d1467db9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`: This library preprocesses the model into a MediaTek representation.
- `mtk_neuron-8.2.2-py3-none-linux_x86_64.whl`: This library converts the model to binaries.
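For on-device runs, the two shared libraries are typically pushed to the phone alongside the model and runner. A minimal sketch using `adb`, assuming the device directory `/data/local/tmp/llama` (an illustrative path, not one mandated by the SDK):

```
# Create a working directory on the device and copy the MediaTek runtime libraries into it
adb shell mkdir -p /data/local/tmp/llama
adb push libneuronusdk_adapter.mtk.so /data/local/tmp/llama/
adb push libneuron_buffer_allocator.so /data/local/tmp/llama/
```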

## Setup ExecuTorch
In this section, we set up the ExecuTorch repo using Conda for environment management. Make sure Conda is available on your system (or follow the instructions [here](https://anaconda.org/anaconda/conda) to install it). The commands below were run on Linux (CentOS).
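A minimal sketch of that setup, assuming an environment named `et_mtk` with Python 3.10 (both arbitrary choices) and a fresh clone of the main branch:

```
# Create and activate a dedicated Conda environment, then fetch ExecuTorch
conda create -y -n et_mtk python=3.10
conda activate et_mtk
git clone https://github.com/pytorch/executorch.git
cd executorch
```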
@@ -42,11 +46,6 @@ Install dependencies
zstd -cdq "<downloaded_buck2_file>.zst" > "<path_to_store_buck2>/buck2" && chmod +x "<path_to_store_buck2>/buck2"
```
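To confirm the decompressed binary runs, you can print its version, reusing the same placeholder path:

```
# Should print the buck2 build identifier if the executable was created correctly
"<path_to_store_buck2>/buck2" --version
```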

### MediaTek ExecuTorch Libraries
The following libraries will be available soon by MediaTek:
libneuronusdk_adapter.mtk.so: This universal SDK contains the implementation required for executing target-dependent code on the MediaTek chip.
libneuron_buffer_allocator.so: This utility library is designed for allocating DMA buffers necessary for model inference.

### Set Environment Variables
```
export BUCK2=path_to_buck/buck2 # Download BUCK2 and create BUCK2 executable
@@ -75,7 +74,6 @@ MTK currently supports Llama 3 exporting.
// Ensure that you are inside executorch/examples/mediatek directory
pip3 install -r requirements.txt

// The following .whl file will be available soon
pip3 install mtk_neuron-8.2.2-py3-none-linux_x86_64.whl
pip3 install mtk_converter-8.8.0.dev20240723+public.d1467db9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
```
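A quick check that both wheels landed in the active environment (package names are taken from the wheel filenames above; adjust if they differ):

```
# Lists version and install location for each MediaTek package
pip3 show mtk_neuron mtk_converter
```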