Conference paper, 2023

Joint multi-modal Self-Supervised pre-training in Remote Sensing: Application to Methane Source Classification

Abstract

With the current ubiquity of deep learning methods for solving computer vision and remote sensing-specific tasks, the need for labelled data is constantly growing. However, in many cases the annotation process can be long and tedious, depending on the expertise required to produce reliable annotations. To alleviate this need for annotations, several self-supervised methods have recently been proposed in the literature. The core principle behind these methods is to learn an image encoder using solely unlabelled data samples. In Earth observation, there are opportunities to exploit domain-specific remote sensing image data to improve these methods. Specifically, by leveraging the geographical position associated with each image, it is possible to cross-reference a location captured by multiple sensors, yielding multiple views of the same location. In this paper, we briefly review the core principles behind so-called joint-embedding methods and investigate the use of multiple remote sensing modalities in self-supervised pre-training. We evaluate the final performance of the resulting encoders on the task of methane source classification.
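To make the joint-embedding idea concrete, below is a minimal sketch of a generic multi-modal contrastive pre-training step in PyTorch. Everything in it is an illustrative assumption rather than the paper's exact method: the `Projector` head, the `info_nce` loss, the `train_step` helper, and the optical/SAR modality pairing are hypothetical choices standing in for whatever encoders and objective the paper actually uses. The key point it illustrates is that co-located images from two sensors serve as positive pairs, so no labels are needed.

```python
# Illustrative sketch of joint-embedding multi-modal contrastive pre-training
# (InfoNCE-style). Encoder architectures, projection sizes, and the use of
# two co-located modalities (e.g. optical and SAR) are assumptions for the
# sake of example, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Projector(nn.Module):
    """Small MLP head mapping encoder features to a shared embedding space."""
    def __init__(self, in_dim: int, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, in_dim), nn.ReLU(inplace=True),
            nn.Linear(in_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalise so that dot products are cosine similarities.
        return F.normalize(self.net(x), dim=-1)


def info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss: row i of each modality is a co-located
    positive pair; all other in-batch pairs act as negatives."""
    logits = z_a @ z_b.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def train_step(enc_a, enc_b, proj_a, proj_b,
               batch_a, batch_b, optimizer) -> float:
    """One hypothetical pre-training step. `enc_a`/`enc_b` can be any
    backbone encoders producing feature vectors for each modality."""
    z_a = proj_a(enc_a(batch_a))   # embeddings of modality A (e.g. optical)
    z_b = proj_b(enc_b(batch_b))   # embeddings of modality B (e.g. SAR)
    loss = info_nce(z_a, z_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After such pre-training, the resulting encoders can be evaluated on a downstream task, here methane source classification, for instance via a linear probe on the frozen features.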

Dates and versions

hal-04250902 , version 1 (20-10-2023)

Identifiers

Cite

Paul Berg, Minh-Tan Pham, Nicolas Courty. Joint multi-modal Self-Supervised pre-training in Remote Sensing: Application to Methane Source Classification. IGARSS, Jul 2023, Pasadena, CA, United States. ⟨hal-04250902⟩