Commit cc82eaf

Update on "Reduce memory requirement on export_llama tests with no params"
[ghstack-poisoned]

2 parents: 371bf8a + f9ad497

File tree

1 file changed (+1, -1 lines changed)


examples/models/llama/model.py

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ def __init__(self, llm_config: Optional[LlmConfig] = None):
             checkpoint = torch.load(checkpoint_path, map_location=device, mmap=True)

             # If given checkpoint is fairseq, convert to llama checkpoint.
-            fairseq2_checkpoint = llm_config.base.fairseq2
+            fairseq2_checkpoint = self.llm_config.base.fairseq2
             if fairseq2_checkpoint:
                 print("Using fairseq2 checkpoint")
                 checkpoint = convert_to_llama_checkpoint(checkpoint=checkpoint)
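
The one-line fix reads the config from the instance attribute self.llm_config rather than the constructor argument llm_config. The commit message does not spell out the reason, so the following is an assumption: the argument is Optional and defaults to None, while the constructor presumably stores a resolved, non-None config on self earlier in __init__, making self.llm_config the safe thing to dereference here. The sketch below illustrates that pattern with hypothetical BaseConfig / LlmConfig / Model classes; it is not the actual ExecuTorch model.py code.

    # Hypothetical sketch (not examples/models/llama/model.py): why reading the
    # config from `self` matters when the constructor argument is optional.
    from dataclasses import dataclass, field
    from typing import Optional


    @dataclass
    class BaseConfig:
        # Assumed field; the real LlmConfig.base carries a fairseq2 flag per the diff.
        fairseq2: bool = False


    @dataclass
    class LlmConfig:
        base: BaseConfig = field(default_factory=BaseConfig)


    class Model:
        def __init__(self, llm_config: Optional[LlmConfig] = None):
            # Resolve the optional argument once and keep it on the instance.
            self.llm_config = llm_config if llm_config is not None else LlmConfig()

            # `llm_config.base.fairseq2` would raise AttributeError whenever the
            # caller passes no config (llm_config is None); `self.llm_config` is
            # always a valid LlmConfig at this point.
            fairseq2_checkpoint = self.llm_config.base.fairseq2
            if fairseq2_checkpoint:
                print("Using fairseq2 checkpoint")


    Model()  # constructing with no config no longer trips on the None argument

In this toy version, constructing Model() with no arguments works because the None case is resolved before the fairseq2 check, which matches the "tests with no params" scenario named in the commit title.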
