Commit 6d61ecc

Update README.md
1 parent ebe3ad9 commit 6d61ecc

File tree

1 file changed: +5 -1 lines changed


examples/models/llama2/README.md

Lines changed: 5 additions & 1 deletion
@@ -98,17 +98,21 @@ If you want to finetune your model based on a specific dataset, PyTorch provides
 
 Once you have [TorchTune installed](https://github.com/pytorch/torchtune?tab=readme-ov-file#get-started) you can finetune the Llama2 7B model using LoRA on a single GPU with the following command. This produces a checkpoint in which the LoRA weights are merged with the base model, so the output checkpoint is in the same format as the original Llama2 model.
 
-
+```
 tune run lora_finetune_single_device \
 --config llama2/7B_lora_single_device \
 checkpointer.checkpoint_dir=<path_to_checkpoint_folder> \
 tokenizer.path=<path_to_checkpoint_folder>/tokenizer.model
+```
+
 To run full finetuning with Llama2 7B on a single device, you can use the following command.
 
+```
 tune run full_finetune_single_device \
 --config llama2/7B_full_single_device \
 checkpointer.checkpoint_dir=<path_to_checkpoint_folder> \
 tokenizer.path=<path_to_checkpoint_folder>/tokenizer.model
+```
 
 ## Step 3: Evaluate model accuracy
 
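Both `tune run` invocations in the diff pass `checkpointer.checkpoint_dir` and `tokenizer.path` overrides that point into the same checkpoint folder, and they fail at startup if `tokenizer.model` is missing. A minimal sketch of a pre-flight check before launching a run (the `CKPT_DIR` default path below is a hypothetical example, not taken from the README):

```shell
# Hedged sketch: sanity-check the checkpoint folder before calling `tune run`.
# The default value of CKPT_DIR is a hypothetical placeholder path.
CKPT_DIR="${CKPT_DIR:-./llama2-7b-checkpoint}"

if [ -f "$CKPT_DIR/tokenizer.model" ]; then
    echo "ok: tokenizer.model found in $CKPT_DIR"
else
    echo "error: tokenizer.model not found in $CKPT_DIR"
fi
```

If the check passes, the same `$CKPT_DIR` value can be substituted for `<path_to_checkpoint_folder>` in either command.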
