
Commit 88df6cd

cccclai authored and facebook-github-bot committed
skip two ops for coreml partitioner (#2826)
Summary: There are some issues with these two ops on the Core ML side. While waiting for the fix, skip them for now:

```
"aten.index.tensor",
"aten.index_put.default"
```

Tested with:

```
python3 -m examples.models.llama2.export_llama --coreml --use_kv_cache
```

Differential Revision: D55680297
1 parent 540d9df commit 88df6cd

File tree

1 file changed: +5 −1 lines


examples/models/llama2/export_llama_lib.py

Lines changed: 5 additions & 1 deletion
@@ -714,7 +714,11 @@ def _export_llama(modelname, args) -> str:  # noqa: C901
         partitioners.append(
             # pyre-ignore: Undefined attribute [16]: Module `executorch.backends` has no attribute `apple`
             CoreMLPartitioner(
-                skip_ops_for_coreml_delegation=None, compile_specs=compile_specs
+                skip_ops_for_coreml_delegation=[
+                    "aten.index.tensor",
+                    "aten.index_put.default",
+                ],
+                compile_specs=compile_specs,
             )
         )
         modelname = f"coreml_{modelname}"
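
For context, a minimal sketch of how this skip list plugs into partitioner construction. The import path and the `build_partitioners` helper are assumptions for illustration; only the `CoreMLPartitioner(...)` arguments mirror the diff above.

```python
# Minimal sketch, not the commit itself: the import path is an assumption,
# and compile_specs is treated as an opaque, pre-built value.
from typing import List

from executorch.backends.apple.coreml.partition.coreml_partitioner import (
    CoreMLPartitioner,  # assumed import location
)


def build_partitioners(compile_specs) -> List[CoreMLPartitioner]:
    # Ops with known issues on the Core ML side are excluded from delegation,
    # so they fall back to the default ExecuTorch path instead of Core ML.
    return [
        CoreMLPartitioner(
            skip_ops_for_coreml_delegation=[
                "aten.index.tensor",
                "aten.index_put.default",
            ],
            compile_specs=compile_specs,
        )
    ]
```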
