
add mps stories end to end in ci #4137


Closed · wants to merge 9 commits
12 changes: 12 additions & 0 deletions .ci/scripts/test_llama.sh
@@ -55,6 +55,14 @@ else
QE=OFF
fi

if [[ "${MODE}" =~ .*mps.* ]]; then
MPS=ON
else
MPS=OFF
fi

echo "MPS option ${MPS}"

if [[ -z "${BUCK:-}" ]]; then
BUCK=buck2
fi
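
Worth noting: the regex match enables MPS for any mode string that contains "mps", not only the literal mps mode. A minimal sketch of the behavior (the composite mps+custom string is hypothetical, not a mode the CI matrix currently uses):

```bash
#!/bin/bash
# How the MODE -> MPS mapping above behaves for a few sample inputs.
for MODE in portable xnnpack+kv+custom mps mps+custom; do
  if [[ "${MODE}" =~ .*mps.* ]]; then MPS=ON; else MPS=OFF; fi
  echo "MODE=${MODE} -> MPS=${MPS}"
done
# MODE=portable -> MPS=OFF
# MODE=xnnpack+kv+custom -> MPS=OFF
# MODE=mps -> MPS=ON
# MODE=mps+custom -> MPS=ON
```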
@@ -77,6 +85,7 @@ cmake_install_executorch_libraries() {
-DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
-DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
-DEXECUTORCH_BUILD_XNNPACK="$XNNPACK" \
-DEXECUTORCH_BUILD_MPS="$MPS" \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
-Bcmake-out .
cmake --build cmake-out -j9 --target install --config Debug
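
For a local repro of this configure step with the MPS backend enabled, the invocation reduces to roughly the following; a sketch that elides the unchanged flags and assumes PYTHON_EXECUTABLE is set as in the script:

```bash
# Configure and build ExecuTorch with the MPS backend on, mirroring
# the CI script when MODE matches mps (other -D flags elided for brevity).
cmake -DEXECUTORCH_BUILD_MPS=ON \
      -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
      -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
      -DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
      -Bcmake-out .
cmake --build cmake-out -j9 --target install --config Debug
```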
@@ -142,6 +151,9 @@ fi
if [[ "${QE}" == "ON" ]]; then
EXPORT_ARGS="${EXPORT_ARGS} --embedding-quantize 8,1024"
fi
if [[ "${MPS}" == "ON" ]]; then
EXPORT_ARGS="${EXPORT_ARGS} -kv -v --mps --disable_dynamic_shape"
Contributor

I am thinking of flipping this to be explicitly opt-in for dynamic_shape, due to the increase in memory footprint.

Contributor Author
😅

fi
# Add dynamically linked library location
$PYTHON_EXECUTABLE -m examples.models.llama2.export_llama ${EXPORT_ARGS}
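
For readers skimming the diff, the MPS branch above amounts to the following before the export runs; the per-flag glosses are my reading of the export_llama CLI, so treat them as a sketch rather than authoritative documentation:

```bash
# -kv                      export with the KV cache enabled
# -v                       verbose logging during export
# --mps                    delegate supported ops to the Apple MPS backend
# --disable_dynamic_shape  export with static shapes only
#                          (see the review thread above on memory footprint)
EXPORT_ARGS="${EXPORT_ARGS} -kv -v --mps --disable_dynamic_shape"
python -m examples.models.llama2.export_llama ${EXPORT_ARGS}
```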

20 changes: 18 additions & 2 deletions .github/workflows/trunk.yml
@@ -225,7 +225,7 @@ jobs:
matrix:
dtype: [fp32]
build-tool: [buck2, cmake]
mode: [portable, xnnpack+kv+custom]
mode: [portable, xnnpack+kv+custom, mps]
fail-fast: false
with:
runner: macos-m1-stable
@@ -234,15 +234,31 @@
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
timeout: 900
script: |
bash .ci/scripts/setup-conda.sh

DTYPE=${{ matrix.dtype }}
BUILD_TOOL=${{ matrix.build-tool }}
MODE=${{ matrix.mode }}

if [[ "${BUILD_TOOL}" == "buck2" ]]; then
# TODO: Will add more modes that don't support buck2
if [[ "${MODE}" == "mps" ]]; then
echo "mps doesn't support buck2."
exit 0
fi
fi

bash .ci/scripts/setup-conda.sh

# Setup executorch
PYTHON_EXECUTABLE=python ${CONDA_RUN} bash .ci/scripts/setup-macos.sh "${BUILD_TOOL}"

if [[ "${MODE}" == "mps" ]]; then
PYTHON_EXECUTABLE=python ${CONDA_RUN} bash backends/apple/mps/install_requirements.sh
echo "Finishing installing mps."
else
echo "Not mps mode, skip installing mps."
fi

# Install requirements for export_llama
PYTHON_EXECUTABLE=python ${CONDA_RUN} bash examples/models/llama2/install_requirements.sh
# Test llama2
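
Taken together, the matrix entry and the buck2 guard give the coverage enumerated below. This is inferred from the diff (the TODO suggests only mps is buck2-incompatible today), not something the workflow states explicitly:

```bash
# Enumerate the effective (build-tool, mode) combinations; dtype=fp32 throughout.
for BUILD_TOOL in buck2 cmake; do
  for MODE in portable xnnpack+kv+custom mps; do
    if [[ "${BUILD_TOOL}" == "buck2" && "${MODE}" == "mps" ]]; then
      echo "${BUILD_TOOL} + ${MODE}: skipped (mps doesn't support buck2)"
    else
      echo "${BUILD_TOOL} + ${MODE}: runs"
    fi
  done
done
```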