Rémi Flamary

Professional website

Home


I am currently a Monge Assistant Professor in the Applied Mathematics Department and the CMAP Laboratory at École Polytechnique. I am on leave from Université Côte d'Azur, where I was a member of the Department of Electronics and of the Lagrange Laboratory, part of the Observatoire de la Côte d'Azur. I did my PhD as a student and teaching assistant at the LITIS Laboratory of Rouen University, under the supervision of Alain Rakotomamonjy.

On this website, you can find a list of my publications and download the corresponding software/code. Some of my French teaching material is also available.

Research Interests

  • Machine learning and statistical signal processing
    • Classification, supervised learning
    • Kernel methods, Support Vector Machines
    • Optimization with sparsity, variable selection, mixed norms, non-convex regularization
    • Feature learning, data representation, kernel learning
    • Convolutional neural networks, filter learning, image reconstruction
    • Optimal transport, domain adaptation
  • Applications
    • Biomedical engineering, Brain-Computer Interfaces
    • Remote sensing and hyperspectral imaging
    • Energy and climate
    • Astronomical image processing

Wordcloud of my research interests.

Recent work

L. Chapel, R. Flamary, H. Wu, C. Févotte, G. Gasso, Unbalanced Optimal Transport through Non-negative Penalized Linear Regression, Neural Information Processing Systems (NeurIPS), 2021.
Abstract: This paper addresses the problem of Unbalanced Optimal Transport (UOT) in which the marginal conditions are relaxed (using weighted penalties in lieu of equality) and no additional regularization is enforced on the OT plan. In this context, we show that the corresponding optimization problem can be reformulated as a non-negative penalized linear regression problem. This reformulation allows us to propose novel algorithms inspired from inverse problems and nonnegative matrix factorization. In particular, we consider majorization-minimization which leads in our setting to efficient multiplicative updates for a variety of penalties. Furthermore, we derive for the first time an efficient algorithm to compute the regularization path of UOT with quadratic penalties. The proposed algorithm provides a continuity of piece-wise linear OT plans converging to the solution of balanced OT (corresponding to infinite penalty weights). We perform several numerical experiments on simulated and real data illustrating the new algorithms, and provide a detailed discussion about more sophisticated optimization tools that can further be used to solve OT problems thanks to our reformulation.
BibTeX:
@inproceedings{chapel2021unbalanced,
author = {Chapel, Laetitia and Flamary, Rémi and Wu, Haoran and Févotte, Cédric and Gasso, Gilles},
title = {Unbalanced Optimal Transport through Non-negative Penalized Linear Regression},
booktitle = {Neural Information Processing Systems (NeurIPS)},
year = {2021}
}
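
The reformulation above can be illustrated with a toy solver. The sketch below is not the paper's majorization-minimization algorithm: it is a generic projected-gradient solver, in NumPy, for the quadratic-penalty objective min_{T>=0} <C,T> + lam(||T1 - a||^2 + ||T'1 - b||^2), with problem sizes, penalty weight and step size chosen arbitrarily for the example:

import numpy as np

def uot_quadratic(a, b, C, lam=10.0, lr=1e-3, n_iter=5000):
    # min_{T >= 0}  <C, T> + lam * (||T 1 - a||^2 + ||T^T 1 - b||^2)
    T = np.outer(a, b)  # feasible starting point
    for _ in range(n_iter):
        grad = C + 2 * lam * ((T.sum(1) - a)[:, None] + (T.sum(0) - b)[None, :])
        T = np.maximum(T - lr * grad, 0.0)  # gradient step, then projection onto T >= 0
    return T

rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)          # squared Euclidean cost
T = uot_quadratic(np.full(5, 1 / 5), np.full(6, 1 / 6), C)  # unbalanced OT plan

Larger penalty weights lam pull the marginals of T closer to a and b, recovering balanced OT in the limit, as discussed in the abstract.
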
K. Fatras, B. Bhushan Damodaran, S. Lobry, R. Flamary, D. Tuia, N. Courty, Wasserstein Adversarial Regularization for learning with label noise, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Abstract: Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method, which enables learning robust classifiers in the presence of noisy data. To achieve this goal, we propose a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows taking into account specific relations between classes by leveraging the geometric properties of the label space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization can be used in the context of label noise and then show the effectiveness of our method on five datasets corrupted with noisy labels: in both benchmarks and real datasets, WAR outperforms the state-of-the-art competitors.
BibTeX:
@article{damodaran2021wasserstein,
author = {Fatras, Kilian and Bhushan Damodaran, Bharath and Lobry, Sylvain and Flamary, Rémi and Tuia, Devis and Courty, Nicolas},
title = {Wasserstein Adversarial Regularization for learning with label noise},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2021}
}
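
The key ingredient of WAR is a ground cost between classes, so that confusing semantically close classes is penalized less than confusing distant ones. The snippet below is only a hedged illustration of that ingredient (not the full adversarial training scheme): it compares two class-probability vectors with a Wasserstein distance computed by POT, using a small 3-class cost matrix made up for the example:

import numpy as np
import ot

# Hypothetical cost between 3 classes: classes 0 and 1 are semantically close.
C_classes = np.array([[0.0, 0.2, 1.0],
                      [0.2, 0.0, 1.0],
                      [1.0, 1.0, 0.0]])

p = np.array([0.7, 0.2, 0.1])    # prediction on a clean sample
q = np.array([0.2, 0.7, 0.1])    # prediction on a perturbed sample

cost = ot.emd2(p, q, C_classes)  # small value: mass only moves between the two close classes

Swapping mass between the two close classes yields a small cost, while moving it to the distant class would be expensive, which is the selective smoothing effect described in the abstract.
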
C. Vincent-Cuaz, T. Vayer, R. Flamary, M. Corneli, N. Courty, Online Graph Dictionary Learning, International Conference on Machine Learning (ICML), 2021.
Abstract: Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is not amenable in the context of graph learning, as graphs usually belong to different metric spaces. We fill this gap by proposing a new online Graph Dictionary Learning approach, which uses the Gromov-Wasserstein divergence for the data-fitting term. In our work, graphs are encoded through their nodes' pairwise relations and modeled as convex combinations of graph atoms, i.e. dictionary elements, estimated thanks to an online stochastic algorithm, which operates on a dataset of unregistered graphs with potentially different numbers of nodes. Our approach naturally extends to labeled graphs, and is completed by a novel upper bound that can be used as a fast approximation of Gromov-Wasserstein in the embedding space. We provide numerical evidence showing the interest of our approach for unsupervised embedding of graph datasets and for online graph subspace estimation and tracking.
BibTeX:
@inproceedings{vincent2021online,
author = {Vincent-Cuaz, Cédric and Vayer, Titouan and Flamary, Rémi and Corneli, Marco and Courty, Nicolas},
title = {Online Graph Dictionary Learning},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
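
The data-fitting term mentioned in the abstract is the Gromov-Wasserstein divergence between graphs represented by their pairwise node relations. A minimal sketch of that building block with POT is given below (two small random graphs with uniform node weights; this shows only the divergence computation, not the online dictionary learning algorithm itself):

import numpy as np
import ot

def random_adjacency(n, rng):
    # symmetric 0/1 adjacency matrix with empty diagonal
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    return (A + A.T).astype(float)

rng = np.random.default_rng(0)
C1, C2 = random_adjacency(8, rng), random_adjacency(10, rng)  # two graphs, 8 and 10 nodes
p, q = ot.unif(8), ot.unif(10)                                # uniform node weights

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')    # node coupling
gw = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')  # divergence value
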
K. Fatras, T. Séjourné, N. Courty, R. Flamary, Unbalanced minibatch Optimal Transport; applications to Domain Adaptation, International Conference on Machine Learning (ICML), 2021.
Abstract: Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, i.e., minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behavior. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated to domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
BibTeX:
@inproceedings{fatras2021unbalanced,
author = {Fatras, Kilian and Séjourné, Thibault and Courty, Nicolas and Flamary, Rémi},
title = {Unbalanced minibatch Optimal Transport; applications to Domain Adaptation},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2021}
}
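
A hedged sketch of the strategy described above: draw random minibatches from two large samples and average an entropic unbalanced OT loss over them, using POT (the batch size, entropic regularization and marginal-relaxation weight below are arbitrary choices for the example, not values from the paper):

import numpy as np
import ot

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(1000, 2))   # source samples
xt = rng.normal(3.0, 1.0, size=(1200, 2))   # target samples

def minibatch_uot(xs, xt, m=64, n_batches=20, reg=0.05, reg_m=1.0):
    losses = []
    for _ in range(n_batches):
        i = rng.choice(len(xs), size=m, replace=False)
        j = rng.choice(len(xt), size=m, replace=False)
        M = ot.dist(xs[i], xt[j])           # squared Euclidean cost on the minibatch
        # entropic unbalanced OT loss with KL relaxation of the marginal constraints
        losses.append(ot.unbalanced.sinkhorn_unbalanced2(ot.unif(m), ot.unif(m), M, reg, reg_m))
    return np.mean(losses)

loss = minibatch_uot(xs, xt)

The marginal relaxation lets each minibatch discard samples that have no good match in the other batch, which is the robustness to minibatch sampling argued for in the abstract.
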
R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, L. Gautheron, N. T. H. Gayraud, H. Janati, A. Rakotomamonjy, I. Redko, A. Rolet, A. Schutz, V. Seguy, D. J. Sutherland, R. Tavenard, A. Tong, T. Vayer, POT: Python Optimal Transport, Journal of Machine Learning Research, Vol. 22, N. 78, pp 1-8, 2021.
Abstract: Optimal transport has recently been reintroduced to the machine learning community thanks in part to novel efficient optimization procedures allowing for medium to large scale applications. We propose a Python toolbox that implements several key optimal transport ideas for the machine learning community. The toolbox contains implementations of a number of founding works of OT for machine learning such as Sinkhorn algorithm and Wasserstein barycenters, but also provides generic solvers that can be used for conducting novel fundamental research. This toolbox, named POT for Python Optimal Transport, is open source with an MIT license.
BibTeX:
@article{flamary2021pot,
author = {Rémi Flamary and Nicolas Courty and Alexandre Gramfort and Mokhtar Z. Alaya and Aurélie Boisbunon and Stanislas Chambon and Laetitia Chapel and Adrien Corenflos and Kilian Fatras and Nemo Fournier and Léo Gautheron and Nathalie T.H. Gayraud and Hicham Janati and Alain Rakotomamonjy and Ievgen Redko and Antoine Rolet and Antony Schutz and Vivien Seguy and Danica J. Sutherland and Romain Tavenard and Alexander Tong and Titouan Vayer},
title = {POT: Python Optimal Transport},
journal = {Journal of Machine Learning Research},
volume = {22},
number = {78},
pages = {1-8},
year = {2021}
}
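
A minimal usage sketch of the toolbox described above (installed with pip install POT and imported as ot): it builds a cost matrix between two point clouds, then computes an exact plan with the linear-programming solver and an entropic plan with the Sinkhorn algorithm mentioned in the abstract.

import numpy as np
import ot

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(50, 2))      # source samples
xt = rng.normal(2.0, 1.0, size=(60, 2))      # target samples
a, b = ot.unif(50), ot.unif(60)              # uniform weights on the samples

M = ot.dist(xs, xt)                          # squared Euclidean cost matrix
G_exact = ot.emd(a, b, M)                    # exact OT plan (linear programming solver)
G_entropic = ot.sinkhorn(a, b, M, reg=1e-1)  # entropic OT plan (Sinkhorn algorithm)
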
I. Redko, T. Vayer, R. Flamary, N. Courty, CO-Optimal Transport, Neural Information Processing Systems (NeurIPS), 2020.
Abstract: Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions. Yet, its original formulation relies on the existence of a cost function between the samples of the two distributions, which makes it impractical for comparing data distributions supported on different topological spaces. To circumvent this limitation, we propose a novel OT problem, named COOT for CO-Optimal Transport, that aims to simultaneously optimize two transport maps between both samples and features. This is different from other approaches that either discard the individual features by focussing on pairwise distances (e.g. Gromov-Wasserstein) or need to model explicitly the relations between the features. COOT leads to interpretable correspondences between both samples and feature representations and holds metric properties. We provide a thorough theoretical analysis of our framework and establish rich connections with the Gromov-Wasserstein distance. We demonstrate its versatility with two machine learning applications in heterogeneous domain adaptation and co-clustering/data summarization, where COOT leads to performance improvements over the competing state-of-the-art methods.
BibTeX:
@inproceedings{redko2020cooptimal,
author = {Ievgen Redko and Titouan Vayer and Rémi Flamary and Nicolas Courty},
title = {CO-Optimal Transport},
booktitle = { Neural Information Processing Systems (NeurIPS)},
year = {2020}
}
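
A conceptual NumPy/POT sketch of the alternating scheme suggested by the abstract (not the authors' released implementation): keep the feature coupling fixed to build a cost between samples, solve an OT problem on the samples, then do the symmetric update on the features, and iterate. Weights are uniform and the loss between entries is the squared difference; these are simplifying assumptions for the example.

import numpy as np
import ot

def coot_bcd(X1, X2, n_iter=20):
    n1, d1 = X1.shape
    n2, d2 = X2.shape
    w1, w2, v1, v2 = ot.unif(n1), ot.unif(n2), ot.unif(d1), ot.unif(d2)
    Ts, Tv = np.outer(w1, w2), np.outer(v1, v2)   # sample and feature couplings
    for _ in range(n_iter):
        # cost between samples, averaged over feature pairs weighted by Tv
        Ms = ((X1**2) @ v1)[:, None] + ((X2**2) @ v2)[None, :] - 2 * X1 @ Tv @ X2.T
        Ts = ot.emd(w1, w2, Ms)
        # cost between features, averaged over sample pairs weighted by Ts
        Mv = ((X1**2).T @ w1)[:, None] + ((X2**2).T @ w2)[None, :] - 2 * X1.T @ Ts @ X2
        Tv = ot.emd(v1, v2, Mv)
    return Ts, Tv

rng = np.random.default_rng(0)
Ts, Tv = coot_bcd(rng.normal(size=(30, 5)), rng.normal(size=(40, 7)))

Here Ts matches samples across the two datasets while Tv matches their features, which is the pair of interpretable correspondences highlighted in the abstract.
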
D. Marcos, R. Fong, S. Lobry, R. Flamary, N. Courty, D. Tuia, Contextual Semantic Interpretability, Asian Conference on Computer Vision (ACCV), 2020.
Abstract: Convolutional neural networks (CNN) are known to learn an image representation that captures concepts relevant to the task, but do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to be in groups of a few meaningful elements and to participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
BibTeX:
@inproceedings{marcos2020contextual,
author = {Diego Marcos and Ruth Fong and Sylvain Lobry and Remi Flamary and Nicolas Courty and Devis Tuia},
title = {Contextual Semantic Interpretability},
booktitle = { Asian Conference on Computer Vision (ACCV)},
year = {2020}
}
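
The two-layer bottleneck described above can be pictured as a small prediction head on top of CNN features. The PyTorch sketch below is a hypothetical simplification: the module name, layer sizes and the sigmoid/ReLU choices are illustrative assumptions, and the paper's actual grouping constraints and training losses are richer.

import torch
import torch.nn as nn

class SemanticBottleneckHead(nn.Module):
    # CNN features -> interpretable attributes -> sparse groups -> scenicness score
    def __init__(self, n_features=512, n_attributes=102, n_groups=16):
        super().__init__()
        self.to_attributes = nn.Linear(n_features, n_attributes)  # supervised with attribute labels
        self.to_groups = nn.Linear(n_attributes, n_groups)        # groups of a few attributes (kept sparse)
        self.to_score = nn.Linear(n_groups, 1)                    # final prediction from group activations

    def forward(self, feats):
        attrs = torch.sigmoid(self.to_attributes(feats))  # predicted attribute presence in [0, 1]
        groups = torch.relu(self.to_groups(attrs))        # context-dependent group activations
        return self.to_score(groups), attrs

head = SemanticBottleneckHead()
score, attrs = head(torch.randn(4, 512))  # batch of 4 CNN feature vectors
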
K. Fatras, Y. Zine, R. Flamary, R. Gribonval, N. Courty, Learning with minibatch Wasserstein: asymptotic and gradient properties, International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Abstract: Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large scale datasets. To overcome this challenge, practitioners compute these distances on minibatches, i.e., they average the outcome of several smaller optimal transport problems. We propose in this paper an analysis of this practice, whose effects are not well understood so far. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators, gradients and a concentration bound around the expectation, but also with defects such as loss of distance property. Along with this theoretical analysis, we also conduct empirical experiments on gradient flows, GANs or color transfer that highlight the practical interest of this strategy.
BibTeX:
@inproceedings{fatras2019learning,
author = {Kilian Fatras and Younes Zine and Rémi Flamary and Rémi Gribonval and Nicolas Courty},
title = {Learning with minibatch Wasserstein : asymptotic and gradient properties},
booktitle = {International Conference on Artificial Intelligence and Statistics (AISTATS)},
year = {2020}
}
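
In the notation used here (my notation, not necessarily the paper's), the quantity actually computed is the average of exact OT costs over K minibatches of size m drawn from each distribution:

\[
\overline{W}_{m,K}(\mu, \nu) \;=\; \frac{1}{K} \sum_{k=1}^{K} W\!\left(\hat{\mu}_m^{(k)}, \hat{\nu}_m^{(k)}\right),
\]

where \hat{\mu}_m^{(k)} is the empirical measure of the k-th source minibatch. As stated in the abstract, this estimator concentrates around its expectation E[W(\hat{\mu}_m, \hat{\nu}_m)], which differs in general from W(\mu, \nu): hence the implicit regularization and the loss of the distance property discussed above.
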

News

Optimal Transport for Machine Learning Workshop at NeurIPS 2019

2019-09-02

Together with Alexandra Suvorikova, Marco Cuturi and Gabriel Peyré, we are organizing the third OTML Workshop at NeurIPS 2019, on 13-14 December 2019.

The list of invited speakers and the call for contributions are both available on the workshop website.

Optimal Transport for Machine Learning Tutorial at ISBI 2019

2019-04-08

I gave a 3-hour tutorial on optimal transport for machine learning at ISBI 2019.

You can find the presentation slides and the practical session Python notebook here.

Optimal Transport at Data Science Summer School 2018

2018-06-19

Together with Marco Cuturi and Nicolas Courty, we will give two one-day courses on optimal transport for machine learning at the Data Science Summer School 2018 (DS3), held at École Polytechnique in Paris-Saclay, France.

You can find the presentation slides and the practical session Python notebook on Github.