Commit 775ac87

finetune: fix typo in README.md (#4733)
Signed-off-by: Daniel Bevenius <[email protected]>
1 parent 58ba655 commit 775ac87


examples/finetune/README.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime.
 If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
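
For reference, a sketch of the kind of invocation the hunk's context line describes, assuming the `--lora-scaled FNAME SCALE` and `--lora FNAME` options used elsewhere in this README; the base model file name here is illustrative, not taken from the commit:

```sh
# Hypothetical example: blend two scaled adapters (40% and 80%) with one
# unscaled adapter, per the paragraph changed in this commit.
# The model file name is an assumption for illustration.
./bin/main -m open-llama-3b-v2-q8_0.gguf \
  --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \
  --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8 \
  --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
```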
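
And a minimal sketch of disabling gradient checkpointing when RAM allows, assuming the finetune example's `--model-base` and `--train-data` options; the file names are illustrative:

```sh
# Hypothetical example: trade memory for speed by turning checkpointing off.
# Other finetune arguments omitted; file names are assumptions.
./bin/finetune \
  --model-base open-llama-3b-v2-q8_0.gguf \
  --train-data shakespeare.txt \
  --no-checkpointing
```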
