1. Pioneering Models
DeiT: Training data-efficient image transformers & distillation through attention
arXiv:2012.12877v2