Commit 549001c

cccclai authored and facebook-github-bot committed
skip one op for coreml partitioner (#2826)
Summary: There are some issues with these two ops on the CoreML side. While waiting for the fix, skip them for now:
```
"aten.index.tensor",
"aten.index_put.default"
```
Test with:
```
python3 -m examples.models.llama2.export_llama --coreml --use_kv_cache
```
Reviewed By: mergennachin

Differential Revision: D55680297
1 parent 88b6cd2 commit 549001c


examples/models/llama2/export_llama_lib.py

Lines changed: 4 additions & 1 deletion
@@ -593,7 +593,10 @@ def _export_llama(modelname, args) -> str:  # noqa: C901
         partitioners.append(
             # pyre-ignore: Undefined attribute [16]: Module `executorch.backends` has no attribute `apple`
             CoreMLPartitioner(
-                skip_ops_for_coreml_delegation=None, compile_specs=compile_specs
+                skip_ops_for_coreml_delegation=[
+                    "aten.index_put.default",
+                ],
+                compile_specs=compile_specs,
             )
         )
         modelname = f"coreml_{modelname}"
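
For context, a minimal sketch of how a CoreMLPartitioner configured this way might be used when lowering a model with ExecuTorch. The import paths and the to_edge/to_backend/to_executorch calls are assumptions based on typical ExecuTorch usage and are not part of this commit; only the skip_ops_for_coreml_delegation argument and the skipped op name come from the diff above.

```
# A minimal sketch, assuming typical ExecuTorch export APIs; import paths and
# the to_edge/to_backend/to_executorch calls are assumptions, not part of this
# commit. Only skip_ops_for_coreml_delegation and "aten.index_put.default"
# come from the diff above.
import torch
from torch.export import export

from executorch.exir import to_edge  # assumed import path
from executorch.backends.apple.coreml.partition import CoreMLPartitioner  # assumed import path


class TinyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.relu(x)


model = TinyModel().eval()
example_inputs = (torch.randn(1, 8),)

# Capture the model and convert it to the edge dialect.
edge = to_edge(export(model, example_inputs))

# Keep aten.index_put.default out of the CoreML delegate, mirroring the
# workaround in this commit; everything else stays eligible for delegation.
partitioner = CoreMLPartitioner(
    skip_ops_for_coreml_delegation=[
        "aten.index_put.default",
    ],
)

# Delegate the supported subgraphs to CoreML and emit an ExecuTorch program.
edge = edge.to_backend(partitioner)
executorch_program = edge.to_executorch()
```

In export_llama_lib.py itself the partitioner is appended to a partitioners list and also receives compile_specs, as the diff shows; the sketch omits compile_specs to stay self-contained.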
