2 parents b28945f + c33a001 commit e7b4ab6
timm/models/vision_transformer.py
@@ -460,7 +460,7 @@ def __init__(
     img_size: Input image size.
     patch_size: Patch size.
     in_chans: Number of image input channels.
-    num_classes: Mumber of classes for classification head.
+    num_classes: Number of classes for classification head.
     global_pool: Type of global pooling for final sequence (default: 'token').
     embed_dim: Transformer embedding dimension.
     depth: Depth of transformer.
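
For context, the lines touched here document constructor arguments of the VisionTransformer class, including num_classes, whose docstring typo this commit fixes. A minimal sketch of how these arguments surface when building a ViT through timm's factory; the model name and argument values are illustrative assumptions, not part of the commit:

```python
# Minimal sketch: creating a ViT via timm.create_model and overriding some of
# the documented constructor arguments. Values are illustrative only.
import timm
import torch

model = timm.create_model(
    "vit_base_patch16_224",  # assumed model name for illustration
    pretrained=False,
    num_classes=10,          # Number of classes for classification head (the fixed docstring line)
    img_size=224,            # Input image size
    in_chans=3,              # Number of image input channels
)

x = torch.randn(1, 3, 224, 224)
logits = model(x)            # expected shape: (1, 10)
print(logits.shape)
```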