
Commit fb3ac16

orionr authored and malfet committed
[WIP] Improvements for eval in README.md (#317)
Not sure if these are exactly right, but putting up a patch
1 parent bc7802e commit fb3ac16

File tree

1 file changed: +3 −4 lines changed


README.md

Lines changed: 3 additions & 4 deletions
````diff
@@ -109,14 +109,13 @@ For more information run `python torchchat.py eval --help`
 **Examples**
 Eager mode:
 ```
-# Eval example for Mac with some parameters
-python -m torchchat.py eval --device cuda --checkpoint-path ${MODEL_PATH} -d fp32 --limit 5
+python torchchat.py eval --checkpoint-path ${MODEL_PATH} -d fp32 --limit 5
 ```
 
-To test the perplexity for lowered or quantized model, pass it in the same way you would to generate.py:
+To test the perplexity for lowered or quantized model, pass it in the same way you would to generate:
 
 ```
-python3 -m torchchat.py eval --pte <pte> -p <params.json> -t <tokenizer.model> --limit 5
+python torchchat.py eval --pte-path stories15m.pte --params-table <params.json> --tokenizer-path <tokenizer.model> --limit 5
 ```
 ## Models
 These are the supported models
````
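The diff above documents commands for measuring perplexity, the metric the `eval` subcommand reports. For context, perplexity is the exponential of the average negative log-likelihood per token. The sketch below illustrates only that definition; the `perplexity` helper and its input list are hypothetical and not part of torchchat, whose eval harness computes token log-probabilities internally.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    token_logprobs: natural-log probabilities, one per token
    (hypothetical input for illustration).
    """
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns each token probability 0.25 (a uniform 4-way
# choice) has perplexity 4, i.e. it is "as confused as" picking
# among 4 equally likely tokens.
print(perplexity([math.log(0.25)] * 3))
```

Lower perplexity means the model assigns higher probability to the evaluation text; comparing the number before and after lowering or quantizing a model shows how much quality the transformation cost.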

0 commit comments
