bump torchao pin #11340

Merged · 4 commits · Jun 4, 2025
1 change: 0 additions & 1 deletion — `.ci/scripts/test_llama_torchao_lowbit.sh`

```diff
@@ -40,7 +40,6 @@ cmake --build cmake-out -j16 --target install --config Release

 # Install llama runner with torchao
 cmake -DPYTHON_EXECUTABLE=python \
-    -DCMAKE_PREFIX_PATH=$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())') \
     -DCMAKE_BUILD_TYPE=Release \
     -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
     -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
```
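The deleted `-DCMAKE_PREFIX_PATH=...` flag pointed CMake at the active environment's site-packages directory via the deprecated `distutils` API. As a side note, a minimal sketch of the modern stdlib equivalent, assuming a standard venv or system layout (the `site_packages` variable name is illustrative):

```python
# Sketch: locate the active interpreter's site-packages without distutils.
# distutils.sysconfig.get_python_lib() relies on distutils, which was
# removed from the stdlib in Python 3.12 (PEP 632); the sysconfig module
# reports the equivalent directory.
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```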
2 changes: 1 addition & 1 deletion — `backends/vulkan/_passes/int4_weight_only_quantizer.py`

```diff
@@ -7,7 +7,7 @@
 import torch
 import torch.nn.functional as F

-from torchao.quantization.GPTQ import _check_linear_int4_k
+from torchao.quantization.GPTQ.GPTQ import _check_linear_int4_k
 from torchao.quantization.unified import Quantizer
 from torchao.quantization.utils import groupwise_affine_quantize_tensor
```
1 change: 1 addition & 0 deletions — `examples/models/llama/CMakeLists.txt`

```diff
@@ -116,6 +116,7 @@ endif()
 if(EXECUTORCH_BUILD_TORCHAO)
   # Currently only enable this on Arm-based Macs
   if(CMAKE_SYSTEM_NAME STREQUAL "Darwin" AND CMAKE_SYSTEM_PROCESSOR STREQUAL "arm64")
+    set(TORCHAO_BUILD_ATEN_OPS OFF)
     set(TORCHAO_BUILD_EXECUTORCH_OPS ON)
     set(TORCHAO_BUILD_CPU_AARCH64 ON)
     set(TORCHAO_ENABLE_ARM_NEON_DOT ON)
```
1 change: 0 additions & 1 deletion — `examples/models/llama/README.md`

````diff
@@ -447,7 +447,6 @@ Next install the llama runner with torchao kernels enabled (similar to step 3.2

 ```
 cmake -DPYTHON_EXECUTABLE=python \
-    -DCMAKE_PREFIX_PATH=$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())') \
     -DCMAKE_BUILD_TYPE=Release \
     -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
     -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
````
2 changes: 1 addition & 1 deletion — `third-party/ao`

Submodule ao updated 100 files