
Commit 47a0862

Update quantization.md link to quantize.py (#1392)
#1385

1 parent: de2507b

1 file changed: +1 −1 lines changed

docs/quantization.md

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ for valid `bitwidth` and `groupsize` values.
 | linear with dynamic activations (symmetric) | `'{"linear:a8w4dq" : {"groupsize" : <groupsize>}}'`|
 | embedding | `'{"embedding": {"bitwidth": <bitwidth>, "groupsize":<groupsize>}}'` |

-See the available quantization schemes [here](https://github.com/pytorch/torchchat/blob/main/torchchat/utils/quantize.py#L1260-L1266).
+See the available quantization schemes [here](https://github.com/pytorch/torchchat/blob/b809b69e03f8f4b75a4b27b0778f0d3695ce94c2/torchchat/utils/quantize.py#L887-L894).

 In addition to quantization, the [accelerator](model_customization.md#device)
 and [precision](model_customization.md#model-precision) can also be specified.
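
For readers following the doc change: the table rows in the hunk describe quantization schemes selected by a JSON config string. Below is a minimal Python sketch of assembling such a string; the concrete bitwidth/groupsize values are placeholders, and passing the result via a `--quantize` CLI option is an assumption about the surrounding documentation rather than part of this commit.

```python
import json

# Illustrative config following the schema shown in the diff above.
# The values (4, 32, 256) are placeholders; see the linked quantize.py
# for the bitwidth/groupsize values each scheme actually accepts.
config = {
    "embedding": {"bitwidth": 4, "groupsize": 32},
    "linear:a8w4dq": {"groupsize": 256},
}

# Serialize to the single-line JSON string form used in the table,
# suitable for passing as a single-quoted command-line argument.
print(json.dumps(config))
```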
