Conference paper. Year: 2022

DeiT III: Revenge of the ViT

Abstract

A Vision Transformer (ViT) is a simple neural architecture that can serve several computer vision tasks. It has limited built-in architectural priors, in contrast to more recent architectures that incorporate priors about either the input data or specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BERT-like pre-training such as BeiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new, simple data-augmentation procedure with only 3 augmentations, closer to the practice in self-supervised learning. Our evaluations on image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning and semantic segmentation show that our procedure outperforms previous fully supervised training recipes for ViT by a large margin. It also reveals that the performance of our ViT trained with supervision is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.
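
To make the "only 3 augmentations" idea concrete, here is a minimal sketch of such a pipeline using torchvision transforms. The abstract does not enumerate the augmentations or their parameters, so the specific choice of grayscale, solarization and Gaussian blur, as well as the probabilities and jitter strength below, are illustrative assumptions rather than the authors' exact recipe.

```python
# Minimal sketch of a 3-augmentation training pipeline (assumed parameters).
from torchvision import transforms

three_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),           # crop to the training resolution
    transforms.RandomHorizontalFlip(),           # standard horizontal flip
    transforms.RandomChoice([                    # apply exactly one simple augmentation per image
        transforms.RandomGrayscale(p=1.0),
        transforms.RandomSolarize(threshold=128, p=1.0),
        transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    ]),
    transforms.ColorJitter(0.3, 0.3, 0.3),       # mild color jitter (assumed strength)
    transforms.ToTensor(),
])

# Usage: pass a PIL image, e.g. augmented = three_augment(pil_image)
```

The point being sketched is that each image receives exactly one augmentation drawn from a small, simple set (via RandomChoice), rather than a long composed stack of strong augmentations, which is what brings the recipe closer to self-supervised practice.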
Main file: 2204.07118.pdf (8.31 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03945731, version 1 (18-01-2023)

Identifiers

HAL Id: hal-03945731
DOI: 10.1007/978-3-031-20053-3_30

Cite

Hugo Touvron, Matthieu Cord, Hervé Jégou. DeiT III: Revenge of the ViT. 17th European Conference on Computer Vision (ECCV 2022), Oct 2022, Tel Aviv, Israel. ⟨10.1007/978-3-031-20053-3_30⟩. ⟨hal-03945731⟩

