Rémi Flamary

Professional website

Home

I am an associate professor at Nice-Sophia Antipolis University, in the Department of Electronics and the Lagrange Laboratory, which is part of the Observatoire de la Côte d'Azur. I was previously a PhD student and teaching assistant at the LITIS Laboratory at Rouen University, where my PhD advisor was Alain Rakotomamonjy.

On this website, you can find a list of my publications and download the corresponding software/code. Some of my French teaching material is also available.

Research Interests

  • Machine learning and statistical signal processing
    • Classification, supervised learning
    • Kernel methods, Support Vector Machines
    • Optimization with sparsity, variable selection, mixed norms, non-convex regularization
    • Feature learning, data representation, kernel learning
    • Convolutional neural networks, filter learning, image reconstruction
    • Optimal transport, domain adaptation
  • Applications
    • Biomedical engineering, Brain-Computer Interfaces
    • Remote sensing and hyperspectral imaging
    • Astronomical image processing

Wordcloud of my research interests.

Recent work

R. Flamary, M. Cuturi, N. Courty, A. Rakotomamonjy, "Wasserstein Discriminant Analysis", Machine Learning, 2018.
Abstract: Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different classes, divided by the dispersion of projected points coming from the same class. To quantify dispersion, WDA uses regularized Wasserstein distances rather than the cross-variance measures usually considered, notably in LDA. Thanks to the underlying principles of optimal transport, WDA is able to capture both global (at the distribution scale) and local (at the sample scale) interactions between classes. Regularized Wasserstein distances can be computed using the Sinkhorn matrix scaling algorithm; we show that the optimization of WDA can be tackled using automatic differentiation of the Sinkhorn iterations. Numerical experiments show promising results, both in terms of prediction and visualization, on toy examples and real-life datasets such as MNIST and deep features obtained from a subset of the Caltech dataset.
BibTeX:
@article{flamary2017wasserstein,
author = {Flamary, Remi and Cuturi, Marco and Courty, Nicolas and Rakotomamonjy, Alain},
title = {Wasserstein Discriminant Analysis},
journal = {Machine Learning},
year = {2018}
}
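
For readers unfamiliar with the Sinkhorn matrix scaling algorithm mentioned in the abstract, here is a minimal NumPy sketch of the iterations (the variable names and the toy data are mine, not from the paper):

import numpy as np

def sinkhorn(a, b, M, reg, n_iter=1000):
    """Entropy-regularized OT between histograms a and b with cost matrix M."""
    K = np.exp(-M / reg)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                  # scale columns to match marginal b
        u = a / (K @ v)                    # scale rows to match marginal a
    G = u[:, None] * K * v[None, :]        # transport plan
    return np.sum(G * M)                   # transport cost of the regularized plan

# Toy usage: two histograms on 5 bins with a squared-distance ground cost.
a = np.ones(5) / 5
b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
M = (np.arange(5)[:, None] - np.arange(5)[None, :]) ** 2.0
print(sinkhorn(a, b, M, reg=1.0))
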
N. Courty, R. Flamary, M. Ducoffe, "Learning Wasserstein Embeddings", International Conference on Learning Representations (ICLR), 2018.
Abstract: The Wasserstein distance has recently received a lot of attention in the machine learning community for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy computational cost. Our goal is to alleviate this problem by providing an approximation mechanism that makes it possible to break its inherent complexity. It relies on finding an embedding in which the Euclidean distance mimics the Wasserstein distance. We show that such an embedding can be found with a siamese architecture associated with a decoder network that allows moving from the embedding space back to the original input space. Once this embedding has been found, solving optimization problems in the Wasserstein space (e.g. barycenters, principal directions or even archetypes) becomes extremely fast. Numerical experiments supporting this idea are conducted on image datasets and show the wide potential benefits of our method.
BibTeX:
@inproceedings{courty2018learning,
author = {Courty, Nicolas and Flamary, Remi and Ducoffe, Melanie},
title = {Learning Wasserstein Embeddings},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2018}
}
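
A hedged PyTorch sketch of the siamese idea: learn an encoder phi so that Euclidean distances between embeddings mimic precomputed Wasserstein distances between inputs. The architecture below is illustrative, and the decoder used in the paper is omitted for brevity:

import torch
import torch.nn as nn

dim_in, dim_emb = 100, 32
phi = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_emb))
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)

def train_step(x1, x2, w_true):
    """x1, x2: batches of input pairs; w_true: their Wasserstein distances."""
    d_emb = torch.norm(phi(x1) - phi(x2), dim=1)   # distance in embedding space
    loss = ((d_emb - w_true) ** 2).mean()          # make the two metrics agree
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for (pairs, Wasserstein distances).
x1, x2 = torch.rand(64, dim_in), torch.rand(64, dim_in)
print(train_step(x1, x2, torch.rand(64)))
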
V. Seguy, B. Bhushan Damodaran, R. Flamary, N. Courty, A. Rolet, M. Blondel, "Large-Scale Optimal Transport and Mapping Estimation", International Conference on Learning Representations (ICLR), 2018.
Abstract: This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach to regularized OT and show empirically that it scales better than a recent related approach when the number of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously obtained OT plan. We prove two theoretical stability results for regularized OT, which show that our estimations converge to the OT plan and the Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.
BibTeX:
@inproceedings{seguy2018large,
author = {Seguy, Vivien and Bhushan Damodaran, Bharath and Flamary, Remi and Courty, Nicolas and Rolet, Antoine and Blondel, Mathieu},
title = {Large-Scale Optimal Transport and Mapping Estimation},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2018}
}
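
The barycentric projection that the deep network approximates has a closed form given the OT plan; here is a small NumPy sketch on toy data (the names are illustrative):

import numpy as np

def barycentric_projection(G, xt):
    """Map each source point to the G-weighted average of the target points."""
    return (G @ xt) / G.sum(axis=1, keepdims=True)

# Toy usage: a uniform coupling between 4 source and 3 target points in 2D.
xt = np.random.rand(3, 2)
G = np.full((4, 3), 1.0 / 12)          # any coupling with the right marginals
print(barycentric_projection(G, xt))   # one mapped point per source sample
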
N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, "Joint Distribution Optimal Transportation for Domain Adaptation", Neural Information Processing Systems (NIPS), 2017.
Abstract: This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample, by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature/label space distributions of the two domains, Ps and Pt. We propose a solution to this problem with optimal transport that allows us to recover an estimated target Pft = (X, f(X)) by simultaneously optimizing the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution for which convergence is proved. The versatility of our approach, in terms of both hypothesis classes and loss functions, is demonstrated on real-world classification and regression problems, for which we reach or surpass state-of-the-art results.
BibTeX:
@inproceedings{courty2017joint,
author = {Courty, Nicolas and Flamary, Remi and Habrard, Amaury and Rakotomamonjy, Alain},
title = {Joint Distribution Optimal Transportation for Domain Adaptation},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2017}
}
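
A very loose NumPy sketch of the alternating scheme described in the abstract, assuming equal sample sizes, a squared loss, and SciPy's assignment solver standing in for a generic OT solver (this is my illustration, not the paper's exact algorithm):

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.linear_model import Ridge

def jdot_sketch(xs, ys, xt, alpha=1.0, n_outer=5):
    f = Ridge().fit(xs, ys)                      # initialize f on the source
    for _ in range(n_outer):
        # Joint feature/label cost between source and target samples.
        C = alpha * ((xs[:, None] - xt[None, :]) ** 2).sum(-1)
        C += (ys[:, None] - f.predict(xt)[None, :]) ** 2
        i, j = linear_sum_assignment(C)          # coupling (here a permutation)
        f = Ridge().fit(xt[j], ys[i])            # refit f on transported labels
    return f

# Toy usage: the target domain is a shifted copy of the source.
xs, ys = np.random.rand(50, 2), np.random.rand(50)
xt = xs + 0.5
print(jdot_sketch(xs, ys, xt).predict(xt)[:3])
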
R. Flamary, "Astronomical image reconstruction with convolutional neural networks", European Conference on Signal Processing (EUSIPCO), 2017.
Abstract: State-of-the-art methods in astronomical image reconstruction rely on solving a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic, or at least superlinear, complexity w.r.t. the number of pixels in the image. In this work we investigate the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state-of-the-art methods, in addition to being interpretable.
BibTeX:
@inproceedings{flamary2017astro,
author = {Flamary, Remi},
title = {Astronomical image reconstruction with convolutional neural networks},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2017}
}
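
To illustrate the fixed per-pixel prediction cost, here is a minimal fully convolutional network in PyTorch (an illustration only, not the architecture used in the paper):

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

blurred = torch.rand(1, 1, 64, 64)   # degraded input (batch, channel, H, W)
restored = net(blurred)              # prediction cost is linear in the pixels
print(restored.shape)                # torch.Size([1, 1, 64, 64])
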
P. Hartley, R. Flamary, N. Jackson, A. S. Tagore, R. B. Metcalf, "Support Vector Machine classification of strong gravitational lenses", Monthly Notices of the Royal Astronomical Society (MNRAS), 2017.
Abstract: The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm that makes use of a Gabor filterbank in order to provide learning criteria for the separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100,000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey.
BibTeX:
@article{hartley2017support,
author = {Hartley, Philippa and Flamary, Remi and Jackson, Neal and Tagore, A. S. and Metcalf, R. B.},
title = {Support Vector Machine classification of strong gravitational lenses},
journal = {Monthly Notices of the Royal Astronomical Society (MNRAS)},
year = {2017}
}
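
A hedged sketch of the pipeline's two ingredients, Gabor filterbank responses fed to an SVM, using skimage and scikit-learn; the filter parameters and the pooled features below are simplified placeholders, not the paper's setup:

import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel
from sklearn.svm import SVC

kernels = [gabor_kernel(frequency=0.2, theta=t)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_features(img):
    """Mean magnitude of each filter response, pooled to a feature vector."""
    return np.array([np.abs(ndimage.convolve(img, np.real(k))).mean()
                     for k in kernels])

# Toy usage: random stamps standing in for lens / non-lens image cutouts.
X = np.array([gabor_features(np.random.rand(32, 32)) for _ in range(20)])
y = np.arange(20) % 2                # fake lens / non-lens labels
clf = SVC().fit(X, y)
print(clf.predict(X[:4]))
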
R. Flamary, C. Févotte, N. Courty, V. Emiya, "Optimal spectral transportation with application to music transcription", Neural Information Processing Systems (NIPS), 2016.
Abstract: Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of the decomposition compare the data and template entries frequency-wise. As such, small displacements of energy from one frequency bin to another, as well as variations of timbre, can disproportionally harm the fit. We address these issues by means of optimal transportation and propose a new measure of fit that treats the frequency distributions of energy holistically, as opposed to frequency-wise. Building on the harmonic nature of sound, the new measure is invariant to shifts of energy to harmonically related frequencies, as well as to small and local displacements of energy. Equipped with this new measure of fit, the dictionary of note templates can be considerably simplified to a set of Dirac vectors located at the target fundamental frequencies (musical pitch values). This in turn gives rise to a very fast and simple decomposition algorithm that achieves state-of-the-art performance on real musical data.
BibTeX:
@inproceedings{flamary2016ost,
author = {Flamary, Remi and Févotte, Cédric and Courty, Nicolas and Emiya, Valentin},
title = {Optimal spectral transportation with application to music transcription},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2016}
}
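
A simplified NumPy sketch of a harmonic-invariant transport cost between frequency bins and Dirac note templates; this is my rough illustration of the idea, not the exact cost used in the paper:

import numpy as np

freqs = np.linspace(0, 4000, 512)            # spectrogram bin frequencies (Hz)
pitches = np.array([261.6, 329.6, 392.0])    # note fundamentals (C4, E4, G4)

# Cost of moving energy from bin f to note p: squared distance to the
# nearest harmonic of p, so harmonically related shifts are cheap.
harmonics = pitches[None, :, None] * np.arange(1, 16)[None, None, :]
C = np.min((freqs[:, None, None] - harmonics) ** 2, axis=2)
print(C.shape)                               # (512, 3): frequency bins x notes
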
M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for discrete optimal transport", Neural Information Processing Systems (NIPS), 2016.
Abstract: We are interested in the computation of the transport map of an Optimal Transport problem. Most computational approaches to Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling but do not address the problem of learning the transport map linked to the original Monge problem. This limits the use of such methods in contexts where out-of-sample computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows smoothing the result of the Optimal Transport and generalizing it to out-of-sample examples. Empirically, we show the interest and the relevance of our method on two tasks: domain adaptation and image editing.
BibTeX:
@inproceedings{perrot2016mapping,
author = {Perrot, Michaël and Courty, Nicolas and Flamary, Remi and Habrard, Amaury},
title = {Mapping estimation for discrete optimal transport},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2016}
}
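
To the best of my knowledge, this method is implemented as MappingTransport in the POT (Python Optimal Transport) library; a short usage sketch on toy data, with parameters left at their defaults:

import numpy as np
import ot

Xs = np.random.rand(30, 2)             # source samples
Xt = np.random.rand(40, 2) + 1.0       # shifted target samples

mapping = ot.da.MappingTransport(kernel="linear")
mapping.fit(Xs=Xs, Xt=Xt)              # jointly learn the coupling and the map
print(mapping.transform(Xs=Xs)[:3])    # the map handles out-of-sample points
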
N. Courty, R. Flamary, D. Tuia, A. Rakotomamonjy, "Optimal transport for domain adaptation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
Abstract: Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted with data depicting the same semantic concepts (the classes) but observed by another observation system with its own specificities. Among the many strategies proposed to adapt one domain to another, finding domain-invariant representations has shown excellent properties, as a single classifier can use labelled samples from the source domain under this representation to predict the unlabelled samples of the target domain. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs which constrains labelled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the limited labeled information in the source domain and the distributions of the input/observation variables observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, which consistently outperforms state-of-the-art approaches.
BibTeX:
@article{courty2016optimal,
author = {Courty, Nicolas and Flamary, Remi and Tuia, Devis and Rakotomamonjy, Alain},
title = {Optimal transport for domain adaptation},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2016}
}
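
A minimal domain-adaptation sketch using the SinkhornTransport class from the POT library, which implements this kind of regularized transport; the toy data and the downstream classifier are illustrative:

import numpy as np
import ot
from sklearn.neighbors import KNeighborsClassifier

Xs = np.random.rand(50, 2)
ys = (Xs[:, 0] > 0.5).astype(int)        # source samples and labels
Xt = Xs + np.array([1.0, 0.5])           # target domain = shifted source

transport = ot.da.SinkhornTransport(reg_e=1e-1)
transport.fit(Xs=Xs, ys=ys, Xt=Xt)       # align the source onto the target
Xs_mapped = transport.transform(Xs=Xs)   # transported source samples

clf = KNeighborsClassifier().fit(Xs_mapped, ys)
yt_true = (Xt[:, 0] - 1.0 > 0.5).astype(int)   # ground truth for the toy shift
print(clf.score(Xt, yt_true))
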

News

Optimal Transport at Statlearn 2018

2018-04-05

Nicolas Courty and I gave a one-day course on optimal transport for machine learning at the Statlearn 2018 summer school in Nice, France.

You can find the presentation slides and the practical-session Python notebook on GitHub.

Talk at GDR ISIS General Meeting

2017-11-17

I had the honor of being invited to give a talk at the GDR ISIS General Meeting in Sète.

I presented a short introduction to optimal transport and discussed some recent applications of OT in the machine learning community. The slides (in English) are available here.

Domain adaptation paper accepted at NIPS 2017 and OTML 2017 Workshop

2017-09-17

The following paper, written with my collaborators, has been accepted for presentation at NIPS 2017:

N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, "Joint Distribution Optimal Transportation for Domain Adaptation", Neural Information Processing Systems (NIPS), 2017.


I have also been invited to give a talk at the OTML 2017 Workshop, where we will present two additional posters.

Feel free to come and see us at our NIPS poster or at the workshop.