Commit 9a15088

Update on "Use llm_config instead of args in export_llama functions"

Differential Revision: [D75484927](https://our.internmc.facebook.com/intern/diff/D75484927) [ghstack-poisoned]

2 parents: 52455bc + 6235d95

File tree

1 file changed: +3 −1 lines


examples/models/llama/tests/test_export_llama_lib.py

Lines changed: 3 additions & 1 deletion
```diff
@@ -7,6 +7,7 @@
 import unittest

 from executorch.devtools.backend_debug import get_delegation_info
+from executorch.examples.models.llama.config.llm_config import LlmConfig
 from executorch.examples.models.llama.export_llama_lib import (
     _export_llama,
     build_args_parser,
@@ -40,7 +41,8 @@ def test_has_expected_ops_and_op_counts(self):
         args.use_kv_cache = True
         args.verbose = True

-        builder = _export_llama(args)
+        llm_config = LlmConfig.from_args(args)
+        builder = _export_llama(llm_config)
         graph_module = builder.edge_manager.exported_program().graph_module
         delegation_info = get_delegation_info(graph_module)
```
