Rémi Flamary

Professional website

Home

I am a Monge Assistant Professor in the Department of Applied Mathematics and at the CMAP laboratory of École Polytechnique. I am also an Associate Professor (Maître de Conférences), on secondment from Université Côte d'Azur, in the Electronics Department and at the Lagrange laboratory of the Observatoire de la Côte d'Azur. I prepared my PhD thesis, under the supervision of Alain Rakotomamonjy, at the Université de Rouen and the LITIS laboratory.

On this website you will find a list of my publications, teaching material, as well as various software and source code.

Research interests

  • Machine learning and statistical signal processing
    • Supervised learning, classification
    • Kernel methods, Support Vector Machines
    • Optimization with variable selection, mixed norms, non-convex penalties
    • Representation learning, kernel learning
    • Convolutional neural networks, filtering, image reconstruction
    • Optimal transport, domain adaptation
  • Applications
    • Biomedical signal classification, Brain-Computer Interfaces
    • Remote sensing and hyperspectral imaging
    • Energy and climate
    • Astrophysical image processing

Keyword cloud of my research interests.

Recent work

C. Vincent-Cuaz, T. Vayer, R. Flamary, M. Corneli, N. Courty, Online Graph Dictionary Learning, International Conference on Machine Learning (ICML), 2021.
Abstract: Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is not amenable in the context of graph learning, as graphs usually belong to different metric spaces. We fill this gap by proposing a new online Graph Dictionary Learning approach, which uses the Gromov-Wasserstein divergence for the data fitting term. In our work, graphs are encoded through their nodes' pairwise relations and modeled as convex combinations of graph atoms, i.e. dictionary elements, estimated with an online stochastic algorithm that operates on a dataset of unregistered graphs with potentially different numbers of nodes. Our approach naturally extends to labeled graphs and is completed by a novel upper bound that can be used as a fast approximation of Gromov-Wasserstein in the embedding space. We provide numerical evidence showing the interest of our approach for unsupervised embedding of graph datasets and for online graph subspace estimation and tracking.
BibTeX:
@inproceedings{vincent2021online,
author = {Vincent-Cuaz, Cédric and Vayer, Titouan and Flamary, Rémi and Corneli, Marco and Courty, Nicolas},
title = {Online Graph Dictionary Learning},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
K. Fatras, T. Séjourné, N. Courty, R. Flamary, Unbalanced minibatch Optimal Transport; applications to Domain Adaptation, International Conference on Machine Learning (ICML), 2021.
Abstract: Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, i.e. minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behavior. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated with domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
BibTeX:
@inproceedings{fatras2021unbalanced,
author = {Fatras, Kilian and Séjourné, Thibault and Courty, Nicolas and Flamary, Rémi},
title = {Unbalanced minibatch Optimal Transport; applications to Domain Adaptation},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
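The unbalanced minibatch strategy described in this abstract is easy to prototype with the POT toolbox. Below is a minimal sketch, not the paper's implementation: the POT calls (ot.dist, ot.unif, ot.unbalanced.sinkhorn_unbalanced) are real library functions, while the function name and the parameter values (reg, reg_m, batch sizes) are illustrative assumptions.

import numpy as np
import ot  # POT: Python Optimal Transport

def unbalanced_minibatch_ot(xs, xt, batch_size=64, n_batches=100, reg=0.05, reg_m=1.0, seed=0):
    # Average unbalanced entropic OT losses over random minibatches of the two samples.
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_batches):
        i = rng.choice(len(xs), size=batch_size, replace=False)
        j = rng.choice(len(xt), size=batch_size, replace=False)
        M = ot.dist(xs[i], xt[j])                        # cost matrix on the minibatch
        a, b = ot.unif(batch_size), ot.unif(batch_size)  # uniform minibatch weights
        # Unbalanced OT only softly enforces the marginals (reg_m), which is the
        # mechanism used here to limit the smoothing effects of minibatching.
        G = ot.unbalanced.sinkhorn_unbalanced(a, b, M, reg, reg_m)
        losses.append(np.sum(G * M))
    return float(np.mean(losses))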
R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, T. Vayer, POT: Python Optimal Transport, Journal of Machine Learning Research, Vol. 22, N. 78, pp 1-8, 2021.
Abstract: Optimal transport has recently been reintroduced to the machine learning community thanks in part to novel efficient optimization procedures allowing for medium to large scale applications. We propose a Python toolbox that implements several key optimal transport ideas for the machine learning community. The toolbox contains implementations of a number of founding works of OT for machine learning such as Sinkhorn algorithm and Wasserstein barycenters, but also provides generic solvers that can be used for conducting novel fundamental research. This toolbox, named POT for Python Optimal Transport, is open source with an MIT license.
BibTeX:
@article{flamary2021pot,
author = {Rémi Flamary and Nicolas Courty and Alexandre Gramfort and Mokhtar Z. Alaya and Aurélie Boisbunon and Stanislas Chambon and Laetitia Chapel and Adrien Corenflos and Kilian Fatras and Nemo Fournier and Léo Gautheron and Nathalie T.H. Gayraud and Hicham Janati and Alain Rakotomamonjy and Ievgen Redko and Antoine Rolet and Antony Schutz and Vivien Seguy and Danica J. Sutherland and Romain Tavenard and Alexander Tong and Titouan Vayer},
title = {POT: Python Optimal Transport},
journal = {Journal of Machine Learning Research},
volume = {22},
number = {78},
pages = {1-8},
year = {2021}
}
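The POT toolbox described in this paper can be installed with pip install pot. Below is a minimal usage sketch on synthetic data; ot.unif, ot.dist, ot.emd and ot.sinkhorn are part of the library's public API, and the data and the regularization value are arbitrary choices for the example.

import numpy as np
import ot  # POT: Python Optimal Transport

# two small empirical distributions in 2D
rng = np.random.default_rng(0)
xs = rng.normal(size=(50, 2))
xt = rng.normal(loc=3.0, size=(60, 2))
a, b = ot.unif(len(xs)), ot.unif(len(xt))  # uniform sample weights
M = ot.dist(xs, xt)                        # squared Euclidean cost matrix

G_exact = ot.emd(a, b, M)                  # exact OT plan (linear program)
G_reg = ot.sinkhorn(a, b, M, reg=1e-1)     # entropic OT plan (Sinkhorn algorithm)
print(np.sum(G_exact * M), np.sum(G_reg * M))  # corresponding transport costs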
I. Redko, T. Vayer, R. Flamary, N. Courty, CO-Optimal Transport, Neural Information Processing Systems (NeurIPS), 2020.
Abstract: Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions. Yet, its original formulation relies on the existence of a cost function between the samples of the two distributions, which makes it impractical for comparing data distributions supported on different topological spaces. To circumvent this limitation, we propose a novel OT problem, named COOT for CO-Optimal Transport, that aims to simultaneously optimize two transport maps between both samples and features. This is different from other approaches that either discard the individual features by focussing on pairwise distances (e.g. Gromov-Wasserstein) or need to model explicitly the relations between the features. COOT leads to interpretable correspondences between both samples and feature representations and holds metric properties. We provide a thorough theoretical analysis of our framework and establish rich connections with the Gromov-Wasserstein distance. We demonstrate its versatility with two machine learning applications in heterogeneous domain adaptation and co-clustering/data summarization, where COOT leads to performance improvements over the competing state-of-the-art methods.
BibTeX:
@inproceedings{redko2020cooptimal,
author = {Ievgen Redko and Titouan Vayer and Rémi Flamary and Nicolas Courty},
title = {CO-Optimal Transport},
booktitle = {Neural Information Processing Systems (NeurIPS)},
year = {2020}
}
D. Marcos, R. Fong, S. Lobry, R. Flamary, N. Courty, D. Tuia, Contextual Semantic Interpretability, Asian Conference on Computer Vision (ACCV), 2020.
Abstract: Convolutional neural networks (CNN) are known to learn an image representation that captures concepts relevant to the task, but do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to be in groups of a few meaningful elements and to participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
BibTeX:
@inproceedings{marcos2020contextual,
author = {Diego Marcos and Ruth Fong and Sylvain Lobry and Rémi Flamary and Nicolas Courty and Devis Tuia},
title = {Contextual Semantic Interpretability},
booktitle = {Asian Conference on Computer Vision (ACCV)},
year = {2020}
}
K. Fatras, Y. Zine, R. Flamary, R. Gribonval, N. Courty, Learning with minibatch Wasserstein : asymptotic and gradient properties, International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Abstract: Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large scale datasets. To overcome this challenge, practitioners compute these distances on minibatches, i.e. they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, whose effects are not yet well understood. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with defects such as the loss of the distance property. Along with this theoretical analysis, we also conduct empirical experiments on gradient flows, GANs and color transfer that highlight the practical interest of this strategy.
BibTeX:
@inproceedings{fatras2019learning,
author = {Kilian Fatras and Younes Zine and Rémi Flamary and Rémi Gribonval and Nicolas Courty},
title = {Learning with minibatch Wasserstein : asymptotic and gradient properties},
booktitle = {International Conference on Artificial Intelligence and Statistics (AISTATS)},
year = {2020}
}
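The (balanced) minibatch estimator analyzed in this paper amounts to averaging exact OT losses over random minibatches. Here is a minimal sketch using POT, not the paper's code: ot.dist, ot.unif and ot.emd2 are real library calls, while the function name and batch parameters are illustrative.

import numpy as np
import ot  # POT: Python Optimal Transport

def minibatch_wasserstein(xs, xt, batch_size=64, n_batches=100, seed=0):
    # Average exact OT losses computed on random minibatches of xs and xt.
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_batches):
        i = rng.choice(len(xs), size=batch_size, replace=False)
        j = rng.choice(len(xt), size=batch_size, replace=False)
        M = ot.dist(xs[i], xt[j])        # squared Euclidean cost on the minibatch
        a = b = ot.unif(batch_size)      # uniform weights on the minibatch
        losses.append(ot.emd2(a, b, M))  # exact OT loss (value, not the plan)
    return float(np.mean(losses))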
T. Vayer, R. Flamary, R. Tavenard, L. Chapel, N. Courty, Sliced Gromov-Wasserstein, Neural Information Processing Systems (NeurIPS), 2019.
Abstract: Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions that do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non convex quadratic program, which is most of the time very costly both in time and memory. Contrary to GW, the Wasserstein distance (W) enjoys several properties (e.g. duality) that permit large scale optimization. Among those, the Sliced Wasserstein (SW) distance exploits the direct solution of W on the line, which only requires sorting discrete samples in 1D. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large scale distributions via a slicing approach, and we show how it relates to the GW distance while being O(n^2) to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute.
BibTeX:
@inproceedings{vayer2019sliced,
author = {Vayer, Titouan and Flamary, Rémi and Tavenard, Romain and Chapel, Laetitia and Courty, Nicolas},
title = {Sliced Gromov-Wasserstein},
booktitle = {Neural Information Processing Systems (NeurIPS)},
year = {2019}
}
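To illustrate the slicing idea, here is a toy numpy sketch, not the authors' implementation, assuming the paper's 1D result that, for uniform weights and the squared loss, the optimal 1D Gromov-Wasserstein matching of sorted samples is either the identity or the reversal permutation; the names, number of projections and equal-sample-size restriction are simplifications for the example.

import numpy as np

def sgw_sketch(xs, xt, n_proj=50, seed=0):
    # Average the 1D GW cost over random projections (toy version, equal sample sizes).
    rng = np.random.default_rng(seed)
    n, d = xs.shape
    m, dt = xt.shape
    assert n == m, "this toy version assumes the same number of samples"
    total = 0.0
    for _ in range(n_proj):
        # draw one random direction per space and sort the 1D projections
        ps = np.sort(xs @ rng.normal(size=d))
        pt = np.sort(xt @ rng.normal(size=dt))
        # pairwise distance matrices of the projected samples
        C1 = np.abs(ps[:, None] - ps[None, :])
        C2 = np.abs(pt[:, None] - pt[None, :])
        # candidate matchings: sorted order (identity) and anti-sorted order (reversal)
        cost_id = np.mean((C1 - C2) ** 2)
        cost_rev = np.mean((C1 - C2[::-1, ::-1]) ** 2)
        total += min(cost_id, cost_rev)
    return total / n_proj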
T. Vayer, L. Chapel, R. Flamary, R. Tavenard, N. Courty, Optimal Transport for structured data with application on graphs, International Conference on Machine Learning (ICML), 2019.
Abstract: This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space. We consider a new transportation distance (i.e. one that minimizes a total cost of transporting probability masses) that unveils the geometric nature of the structured-object space. Unlike the Wasserstein or Gromov-Wasserstein metrics, which focus solely and respectively on features (by considering a metric in the feature space) or on structure (by seeing structure as a metric space), our new distance exploits both pieces of information jointly and is consequently called Fused Gromov-Wasserstein (FGW). After discussing its properties and computational aspects, we show results on a graph classification task, where our method outperforms both graph kernels and deep graph convolutional networks. Exploiting further the metric properties of FGW, interesting geometric objects such as Fréchet means or barycenters of graphs are illustrated and discussed in a clustering context.
BibTeX:
@inproceedings{vayer2019optimal,
author = {Vayer, Titouan and Chapel, Laetitia and Flamary, Rémi and Tavenard, Romain and Courty, Nicolas},
title = {Optimal Transport for structured data with application on graphs},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2019}
}
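The FGW distance is available in recent releases of the POT toolbox. The sketch below, on random toy graphs, assumes ot.gromov.fused_gromov_wasserstein2 with its (M, C1, C2, p, q, alpha) arguments; the graph data and the alpha value are arbitrary choices for the example.

import numpy as np
import ot  # POT: Python Optimal Transport

# two toy attributed graphs: symmetric structure matrices plus node features
rng = np.random.default_rng(0)
C1 = rng.random((10, 10)); C1 = (C1 + C1.T) / 2; np.fill_diagonal(C1, 0)
C2 = rng.random((12, 12)); C2 = (C2 + C2.T) / 2; np.fill_diagonal(C2, 0)
F1, F2 = rng.random((10, 3)), rng.random((12, 3))  # node feature vectors
M = ot.dist(F1, F2)              # feature cost between nodes of the two graphs
p, q = ot.unif(10), ot.unif(12)  # uniform node weights

# alpha balances the structure term (Gromov-Wasserstein) and the feature term (Wasserstein)
fgw_value = ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, alpha=0.5)
print(fgw_value)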

News

OTML Workshop at NeurIPS 2019

2019-09-02

Together with Alexandra Suvorikova, Marco Cuturi and Gabriel Peyré, we are organizing the third OTML workshop (Optimal Transport for Machine Learning) at NeurIPS 2019, on December 13-14, 2019.

The list of invited speakers and the [call for contributions](https://sites.google.com/view/otml2019/call-for-contributions) are available on the workshop website.

Tutorial on Optimal Transport for Machine Learning at ISBI 2019

2019-04-08

I gave a tutorial on Optimal Transport for machine learning at ISBI 2019.

You can find the slides as well as the Python notebooks for the practical session here.

Optimal Transport at the Data Science Summer School 2018

2018-06-19

Together with Marco Cuturi and Nicolas Courty, we will give two one-day courses on optimal transport applied to machine learning at the Data Science Summer School 2018 (DS3) at École Polytechnique in Paris/Saclay, France.

You can find the slides as well as the Python notebooks for the practical sessions on Github.