Rémi Flamary

Professional website

Home

I am an Associate Professor (Maître de Conférences) at the Université de Nice Sophia-Antipolis, within the Electronics department and the Laboratoire Lagrange. This laboratory is part of the Observatoire de la Côte d'Azur. Before that, I prepared my PhD thesis under the supervision of Alain Rakotomamonjy at the Université de Rouen, in the LITIS laboratory.

On this website you will find a list of my publications, lecture material, as well as various pieces of software and source code.

Research interests

  • Machine learning and statistical signal processing
    • Supervised learning, classification
    • Kernel methods, Support Vector Machines
    • Optimization with variable selection, mixed and non-convex norms
    • Representation learning, kernel learning
    • Convolutional neural networks, filtering, image reconstruction
    • Optimal transport, domain adaptation
  • Applications
    • Biomedical signal classification, Brain-Computer Interfaces
    • Remote sensing and hyperspectral imaging
    • Astrophysical image processing

Word cloud of my research interests.

Recent work

T. Vayer, L. Chapel, R. Flamary, R. Tavenard, N. Courty, Optimal Transport for structured data with application on graphs, International Conference on Machine Learning (ICML), 2019.
Abstract: This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space. We consider a new transportation distance (i.e. one that minimizes a total cost of transporting probability masses) that unveils the geometric nature of the structured objects space. Unlike the Wasserstein or Gromov-Wasserstein metrics, which focus solely on features (by considering a metric in the feature space) or on structure (by seeing structure as a metric space), respectively, our new distance exploits both pieces of information jointly, and is consequently called Fused Gromov-Wasserstein (FGW). After discussing its properties and computational aspects, we show results on a graph classification task, where our method outperforms both graph kernels and deep graph convolutional networks. Exploiting the metric properties of FGW further, interesting geometric objects such as Fréchet means or barycenters of graphs are illustrated and discussed in a clustering context.
BibTeX:
@inproceedings{vayer2019optimal,
author = {Vayer, Titouan and Chapel, Laetitia and Flamary, Rémi and Tavenard, Romain and Courty, Nicolas},
title = {Optimal Transport for structured data with application on graphs},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2019}
}
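
As an illustration, here is a minimal sketch of computing an FGW coupling between two toy attributed graphs with the POT (Python Optimal Transport) library. The exact signature of ot.gromov.fused_gromov_wasserstein may vary across POT versions, and the structure matrices below are plain feature distances rather than true graph metrics such as shortest paths:

import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.RandomState(0)
F1 = rng.rand(5, 3)            # node features of graph 1 (5 nodes, 3 attributes)
F2 = rng.rand(4, 3)            # node features of graph 2 (4 nodes)
C1 = cdist(F1, F1)             # intra-graph structure matrix of graph 1
C2 = cdist(F2, F2)             # intra-graph structure matrix of graph 2
M = cdist(F1, F2) ** 2         # inter-graph feature cost
p, q = ot.unif(5), ot.unif(4)  # uniform node weights

# alpha trades off the feature (Wasserstein) and structure (Gromov-Wasserstein) terms
T = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q,
                                       loss_fun='square_loss', alpha=0.5)
print(T.shape)                 # (5, 4): optimal coupling between the two node sets
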
I. Redko, N. Courty, R. Flamary, D. Tuia, Optimal Transport for Multi-source Domain Adaptation under Target Shift, International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Abstract: In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains, referred to as multi-source domain adaptation, and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with label proportions differing across them. We design a method based on optimal transport, a theory that is gaining momentum for tackling adaptation problems in machine learning due to its efficiency in aligning probability distributions. Our method performs multi-source adaptation and target shift correction simultaneously by learning the class probabilities of the unlabeled target sample and the coupling that aligns two (or more) probability distributions. Experiments on both synthetic and real-world data related to a satellite image segmentation task show the superiority of the proposed method over the state-of-the-art.
BibTeX:
@inproceedings{redko2018optimal,
author = { Redko, I. and Courty, N. and Flamary, R. and Tuia, D.},
title = {Optimal Transport for Multi-source Domain Adaptation under Target Shift},
booktitle = {International Conference on Artificial Intelligence and Statistics (AISTATS)},
year = {2019}
}
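
The core idea can be sketched in a simplified, single-source form: estimate the unknown target class proportions from the mass that each source class sends through an entropic OT coupling. This is only a toy illustration with hypothetical data; the method in the paper (JCPOT) handles several source domains jointly.

import numpy as np
import ot

rng = np.random.RandomState(0)
Xs = rng.randn(100, 2)                   # labeled source samples
ys = rng.randint(0, 2, 100)              # source labels (2 classes)
Xt = rng.randn(80, 2) + 1.0              # unlabeled, shifted target samples

M = ot.dist(Xs, Xt)                      # squared Euclidean ground cost
G = ot.sinkhorn(ot.unif(100), ot.unif(80), M, reg=1e-1)  # entropic coupling

# estimated target class proportions: total mass sent by each source class
prop = np.array([G[ys == c].sum() for c in range(2)])
print(prop / prop.sum())
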
B. B. Damodaran, B. Kellenberger, R. Flamary, D. Tuia, N. Courty, DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation, European Conference on Computer Vision (ECCV), 2018.
Abstract: In computer vision, one is often confronted with problems of domain shifts, which occur when one applies a classifier trained on a source dataset to target data sharing similar characteristics (e.g. same classes), but also different latent data structures (e.g. different acquisition conditions). In such a situation, the model will perform poorly on the new data, since the classifier is specialized to recognize visual cues specific to the source domain. In this work we explore a solution, named DeepJDOT, to tackle this problem: through a measure of discrepancy on joint deep representations/labels based on optimal transport, we not only learn new data representations aligned between the source and target domain, but also simultaneously preserve the discriminative information used by the classifier. We applied DeepJDOT to a series of visual recognition tasks, where it compares favorably against state-of-the-art deep domain adaptation methods.
BibTeX:
@inproceedings{damodaran2018deepjdot,
author = { Damodaran, Bharath B. and Kellenberger, Benjamin and Flamary, Rémi and Tuia, Devis and Courty, Nicolas},
title = {DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2018}
}
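
A minimal numpy sketch of DeepJDOT's alignment term follows, assuming the deep embeddings and classifier outputs of a batch are given as arrays; in the paper this cost is minimized jointly with the network weights by stochastic gradient descent, alternating between OT plan and network updates.

import numpy as np
import ot

rng = np.random.RandomState(0)
gs = rng.rand(32, 64)                      # deep embeddings of a source batch
gt = rng.rand(32, 64)                      # deep embeddings of a target batch
ps = np.eye(10)[rng.randint(0, 10, 32)]    # one-hot source labels
ft = rng.dirichlet(np.ones(10), 32)        # classifier probabilities on the target

alpha, lam = 1e-3, 1.0                     # feature/label trade-off parameters
C = alpha * ot.dist(gs, gt) + lam * ot.dist(ps, ft)  # joint feature/label cost
G = ot.emd(ot.unif(32), ot.unif(32), C)    # exact OT plan on the batch

align_loss = np.sum(G * C)                 # alignment term of the training loss
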
I. Harrane, R. Flamary, C. Richard, On reducing the communication cost of the diffusion LMS algorithm, IEEE Transactions on Signal and Information Processing over Networks (SIPN), 2018.
Abstract: The rise of digital and mobile communications has recently made the world more connected and networked, resulting in an unprecedented volume of data flowing between sources, data centers, or processes. While these data may be processed in a centralized manner, it is often more suitable to consider distributed strategies such as diffusion as they are scalable and can handle large amounts of data by distributing tasks over networked agents. Although it is relatively simple to implement diffusion strategies over a cluster, it appears to be challenging to deploy them in an ad-hoc network with limited energy budget for communication. In this paper, we introduce a diffusion LMS strategy that significantly reduces communication costs without compromising the performance. Then, we analyze the proposed algorithm in the mean and mean-square sense. Next, we conduct numerical experiments to confirm the theoretical findings. Finally, we perform large scale simulations to test the algorithm efficiency in a scenario where energy is limited.
BibTeX:
@article{harrane2018reducing,
author = {Harrane, Ibrahim and Flamary, R. and Richard, C.},
title = {On reducing the communication cost of the diffusion LMS algorithm},
journal = {IEEE Transactions on Signal and Information Processing over Networks (SIPN)},
year = {2018}
}
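
The following toy numpy sketch illustrates the general idea of cutting communication in diffusion LMS by exchanging only part of the coefficients at each iteration; the coefficient-selection rule and the analysis in the paper differ from this naive random masking.

import numpy as np

rng = np.random.RandomState(0)
N, L, mu = 10, 8, 0.01               # agents, filter length, LMS step size
w_true = rng.randn(L)                # common parameter vector to estimate
W = np.zeros((N, L))                 # local estimates
A = np.full((N, N), 1.0 / N)         # combination weights (fully connected here)

for t in range(2000):
    # adaptation: each agent runs one LMS update on its own streaming data
    for k in range(N):
        x = rng.randn(L)
        d = x @ w_true + 0.01 * rng.randn()
        W[k] += mu * (d - x @ W[k]) * x
    # combination with reduced communication: each agent transmits only a
    # random half of its coefficients; receivers fill the gaps with their own
    mask = rng.rand(N, L) < 0.5
    W = np.array([A[k] @ np.where(mask, W, W[k]) for k in range(N)])

print(np.linalg.norm(W - w_true, axis=1).mean())   # mean estimation error
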
R. Flamary, M. Cuturi, N. Courty, A. Rakotomamonjy, Wasserstein Discriminant Analysis, Machine Learning, 2018.
Abstract: Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different classes, divided by the dispersion of projected points coming from the same class. To quantify dispersion, WDA uses regularized Wasserstein distances rather than the cross-variance measures usually considered, notably in LDA. Thanks to the underlying principles of optimal transport, WDA is able to capture both global (at distribution scale) and local (at sample scale) interactions between classes. Regularized Wasserstein distances can be computed using the Sinkhorn matrix scaling algorithm; we show that the optimization of WDA can be tackled using automatic differentiation of Sinkhorn iterations. Numerical experiments show promising results both in terms of prediction and visualization on toy examples and real-life datasets such as MNIST and on deep features obtained from a subset of the Caltech dataset.
BibTeX:
@article{flamary2017wasserstein,
author = {Flamary, Remi and Cuturi, Marco and Courty, Nicolas and Rakotomamonjy, Alain},
title = {Wasserstein Discriminant Analysis},
journal = {Machine Learning},
year = {2018}
}
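
WDA is available in the POT library as ot.dr.wda (it relies on the optional pymanopt and autograd packages for optimization on the Stiefel manifold); here is a minimal usage sketch on hypothetical two-class data, assuming the current POT signature:

import numpy as np
from ot.dr import wda    # needs the optional pymanopt and autograd dependencies

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 10), rng.randn(100, 10) + 2])  # two classes in 10-D
y = np.repeat([0, 1], 100)

# p: subspace dimension, reg: entropic regularization of the Wasserstein
# distances, k: number of inner Sinkhorn fixed-point iterations
P, proj = wda(X, y, p=2, reg=1.0, k=10)
Xp = proj(X)             # samples projected onto the discriminant subspace
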
N. Courty, R. Flamary, M. Ducoffe, Learning Wasserstein Embeddings, International Conference on Learning Representations (ICLR), 2018.
Abstract: The Wasserstein distance has received a lot of attention recently in the machine learning community, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that makes it possible to break its inherent complexity. It relies on the search for an embedding where the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows moving from the embedding space back to the original input space. Once this embedding has been found, optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) can be solved extremely fast. Numerical experiments supporting this idea are conducted on image datasets, and show the wide potential benefits of our method.
BibTeX:
@inproceedings{courty2018learning,
author = {Courty, Nicolas and Flamary, Remi and Ducoffe, Melanie},
title = {Learning Wasserstein Embeddings},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2018}
}
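
A minimal PyTorch sketch of the idea, with hypothetical layer sizes: a siamese encoder phi is trained so that Euclidean distances between embeddings match precomputed Wasserstein distances between input histograms, while a decoder psi maps embeddings back to the input space.

import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 50))  # encoder
psi = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 784))  # decoder
opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()))

def train_step(x1, x2, w2):
    # x1, x2: batches of input histograms; w2: their precomputed Wasserstein distances
    e1, e2 = phi(x1), phi(x2)
    embed_loss = ((e1 - e2).pow(2).sum(1).sqrt() - w2).pow(2).mean()
    recon_loss = (psi(e1) - x1).pow(2).mean() + (psi(e2) - x2).pow(2).mean()
    loss = embed_loss + recon_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
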
V. Seguy, B. B. Damodaran, R. Flamary, N. Courty, A. Rolet, M. Blondel, Large-Scale Optimal Transport and Mapping Estimation, International Conference on Learning Representations (ICLR), 2018.
Abstract: This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the number of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT plan and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.
BibTeX:
@inproceedings{seguy2018large,
author = {Seguy, Vivien and Damodaran, Bharath B. and Flamary, Remi and Courty, Nicolas and Rolet, Antoine and Blondel, Mathieu},
title = {Large-Scale Optimal Transport and Mapping Estimation},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2018}
}
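
The stochastic dual step can be sketched in a few lines of numpy for the discrete entropic problem with uniform weights; the paper goes further by parameterizing the dual potentials with neural networks and by learning a Monge map from the barycentric projection of the resulting plan.

import numpy as np

rng = np.random.RandomState(0)
ns, nt, reg, lr = 200, 200, 0.05, 0.1
xs = rng.randn(ns, 2)                # source samples
xt = rng.randn(nt, 2) + 1.0          # target samples
u, v = np.zeros(ns), np.zeros(nt)    # dual potentials

for it in range(20000):
    i, j = rng.randint(ns), rng.randint(nt)       # sample a pair of points
    Mij = np.sum((xs[i] - xt[j]) ** 2)            # ground cost computed on the fly
    g = 1.0 - np.exp((u[i] + v[j] - Mij) / reg)   # stochastic gradient of the dual
    u[i] += lr * g
    v[j] += lr * g

# with uniform weights a_i = 1/ns and b_j = 1/nt, a plan entry is recovered as
# G_ij = a_i * b_j * exp((u_i + v_j - M_ij) / reg)
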
N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, Joint Distribution Optimal Transportation for Domain Adaptation, Neural Information Processing Systems (NIPS), 2017.
Abstract: This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample, by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature/label space distributions of the two domains Ps and Pt. We propose a solution to this problem with optimal transport, which allows recovering an estimated target Pft(X,f(X)) by optimizing simultaneously the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of hypothesis classes and loss functions, is demonstrated with real-world classification and regression problems, for which we reach or surpass state-of-the-art results.
BibTeX:
@inproceedings{courty2017joint,
author = {Courty, Nicolas and Flamary, Remi and Habrard, Amaury and Rakotomamonjy, Alain},
title = {Joint Distribution Optimal Transportation for Domain Adaptation},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2017}
}
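
A minimal sketch of the alternating scheme on a hypothetical toy regression problem, using POT and scikit-learn: with f fixed, solve for the coupling on the joint feature/label cost; with the coupling fixed, refit f on barycentrically transported labels.

import numpy as np
import ot
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
Xs = rng.rand(60, 1)
ys = np.sin(3 * Xs).ravel()              # labeled source data
Xt = rng.rand(60, 1) + 0.3               # shifted, unlabeled target data

f = Ridge(alpha=1e-2).fit(Xs, ys)        # initialize the predictor on the source
a, b = ot.unif(60), ot.unif(60)
alpha = 1.0                              # feature/label trade-off

for it in range(10):
    # 1) fix f, solve for the coupling on the joint feature/label cost
    C = ot.dist(Xs, Xt) + alpha * ot.dist(ys[:, None], f.predict(Xt)[:, None])
    G = ot.emd(a, b, C)
    # 2) fix the coupling, refit f on labels transported to the target
    f = Ridge(alpha=1e-2).fit(Xt, (G.T @ ys) / b)   # barycentric label estimates
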
R. Flamary, Astronomical image reconstruction with convolutional neural networks, European Conference on Signal Processing (EUSIPCO), 2017.
Abstract: State-of-the-art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic or at least superlinear complexity w.r.t. the number of pixels in the image. We investigate in this work the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state-of-the-art methods, in addition to being interpretable.
BibTeX:
@inproceedings{flamary2017astro,
author = {Flamary, Remi},
title = {Astronomical image reconstruction with convolutional neural networks},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2017}
}
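
A minimal PyTorch sketch of the approach, with hypothetical depth and width: a small fully convolutional network is trained on pairs of degraded and clean images, after which prediction has a fixed cost per pixel.

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 1, 5, padding=2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(degraded, clean):
    # degraded, clean: image batches of shape (batch, 1, H, W)
    loss = ((net(degraded) - clean) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
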
P. Hartley, R. Flamary, N. Jackson, A. S. Tagore, R. B. Metcalf, Support Vector Machine classification of strong gravitational lenses, Monthly Notices of the Royal Astronomical Society (MNRAS), 2017.
Abstract: The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey.
BibTeX:
@article{hartley2017support,
author = {Hartley, Philippa and Flamary, Remi and Jackson, Neal and Tagore, A. S. and Metcalf, R. B.},
title = {Support Vector Machine classification of strong gravitational lenses},
journal = {Monthly Notices of the Royal Astronomical Society (MNRAS)},
year = {2017}
}
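
A minimal sketch of such a pipeline with scikit-image and scikit-learn, assuming the image cutouts and lens/non-lens labels are given: pool the responses of a small Gabor filterbank into a fixed-length feature vector and train an SVM on it (the filterbank frequencies and orientations below are illustrative, not those of the paper).

import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(img):
    # magnitude statistics of a small Gabor filterbank, as a fixed-length vector
    feats = []
    for frequency in (0.1, 0.2, 0.4):
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, imag = gabor(img, frequency=frequency, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

# images: iterable of 2-D cutouts, labels: 1 for lens, 0 for non-lens (assumed given)
# X = np.array([gabor_features(im) for im in images])
# clf = SVC(kernel='rbf').fit(X, labels)
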

News

Tutorial on Optimal Transport for machine learning at ISBI 2019

2019-04-08

I gave a tutorial on Optimal Transport for machine learning at ISBI 2019.

You can find the slides as well as the Python notebooks for the hands-on session here.

Optimal Transport at the Data Science Summer School 2018

2018-06-19

Together with Marco Cuturi and Nicolas Courty, we will give two one-day courses on optimal transport for machine learning at the Data Science Summer School 2018 (DS3) at École Polytechnique, Paris/Saclay, France.

You can find the slides as well as the Python notebooks for the hands-on sessions on Github.

Optimal Transport at Statlearn 2018

2018-04-05

Together with Nicolas Courty, we gave a one-day course on optimal transport for machine learning at the Statlearn 2018 spring school in Nice, France.

You can find the slides as well as the Python notebooks for the hands-on sessions on Github.