hi, sir
#416
-
They are converted from the official models by the paper authors: https://github.com/google-research/vision_transformer. Details about how they are trained are in the paper and their code. Any checkpoints with in21k in the name are the weights as trained on ImageNet-21k, but the classification head is zeroed out, so they can only be used for fine-tuning.
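To make the "zeroed-out head" point concrete, here is a minimal, dependency-free sketch (plain Python, hypothetical names, no real checkpoint) of why such a head is useless for inference as-is: with all-zero weights and biases, every class gets the same logit, so the head carries no information until it is fine-tuned.

```python
def head_forward(features, weight, bias):
    """Linear classification head: logits[i] = features . weight[i] + bias[i]."""
    return [sum(f * w for f, w in zip(features, row)) + b
            for row, b in zip(weight, bias)]

num_features, num_classes = 4, 3

# A zeroed-out head, as in the *_in21k checkpoints:
weight = [[0.0] * num_features for _ in range(num_classes)]
bias = [0.0] * num_classes

features = [0.5, -1.2, 3.3, 0.7]  # stand-in for the backbone's output
logits = head_forward(features, weight, bias)
print(logits)  # → [0.0, 0.0, 0.0]: no class is distinguished; the head must be trained
```

In practice you would load such a checkpoint into a model, replace or re-initialize the head for your own number of classes, and fine-tune on your dataset; the backbone weights are what the in21k checkpoints provide.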
-
I have a question about the pre-trained models. Could you elaborate on jx_vit_large_p16_224-4ee7a4dc.pth, jx_vit_large_p16_384-b3be5167.pth, and jx_vit_large_patch16_224_in21k-606da67d.pth? Thank you for your contribution.