Commit 54477dc

Update on "Use llm_config instead of args in export_llama functions"

Differential Revision: [D75484927](https://our.internmc.facebook.com/intern/diff/D75484927) [ghstack-poisoned]

Parents: 792022d, 382cea9

File tree: 9 files changed, +471 −218 lines


backends/arm/test/models/test_llama.py (2 additions, 4 deletions)

```diff
@@ -22,9 +22,7 @@
     TosaPipelineMI,
 )
 
-from executorch.examples.models.llama.config.llm_config_utils import (
-    convert_args_to_llm_config,
-)
+from executorch.examples.models.llama.config.llm_config import LlmConfig
 from executorch.examples.models.llama.export_llama_lib import (
     build_args_parser,
     get_llama_model,
@@ -92,7 +90,7 @@ def prepare_model(self):
         ]
         parser = build_args_parser()
         args = parser.parse_args(args)
-        llm_config = convert_args_to_llm_config(args)
+        llm_config = LlmConfig.from_args(args)
 
         llama_model, llama_inputs, llama_meta = get_llama_model(llm_config)
 
```
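The diff above replaces the standalone `convert_args_to_llm_config(args)` helper with a classmethod, `LlmConfig.from_args(args)`, so the conversion lives on the config class itself. As a rough, self-contained sketch of that pattern (a hypothetical simplified `LlmConfig`, not the actual executorch class, which carries many more export options):

```python
import argparse
from dataclasses import dataclass


@dataclass
class LlmConfig:
    # Hypothetical stand-in fields; the real executorch LlmConfig
    # has a much larger set of export options.
    model_name: str = "llama3"
    quantize: bool = False

    @classmethod
    def from_args(cls, args: argparse.Namespace) -> "LlmConfig":
        # Lift only the fields the config knows about off the parsed args,
        # falling back to the dataclass defaults when an arg is absent.
        return cls(
            model_name=getattr(args, "model_name", cls.model_name),
            quantize=getattr(args, "quantize", False),
        )


parser = argparse.ArgumentParser()
parser.add_argument("--model-name", dest="model_name", default="llama3")
parser.add_argument("--quantize", action="store_true")
args = parser.parse_args(["--model-name", "stories110M", "--quantize"])

# Same call shape as the patched test: parse args, then build the config.
llm_config = LlmConfig.from_args(args)
```

Moving the conversion onto the class keeps argparse knowledge in one place, so callers such as `get_llama_model(llm_config)` only ever see the typed config object.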

examples/models/llama/TARGETS (0 additions, 1 deletion)

```diff
@@ -152,7 +152,6 @@ runtime.python_library(
         "//ai_codesign/gen_ai/fast_hadamard_transform:fast_hadamard_transform",
         "//caffe2:torch",
         "//executorch/examples/models/llama/config:llm_config",
-        "//executorch/examples/models/llama/config:llm_config_utils",
         "//executorch/backends/vulkan/_passes:vulkan_passes",
         "//executorch/exir/passes:init_mutable_pass",
         "//executorch/examples/models:model_base",
```
