# Rémi Flamary

Professional website

## News

#### Optimal Transport at Data Science Summer School 2018

2018-06-19

We will be giving two one-day courses, with Marco Cuturi and Nicolas Courty, about optimal transport for machine learning at the Data Science Summer School 2018 (DS3) at École Polytechnique in Paris-Saclay, France.

You can find the presentation slides and the practical-session Python notebook on GitHub.

#### Optimal Transport at Statlearn 2018

2018-04-05

We gave a one-day course with Nicolas Courty about optimal transport for machine learning at the Statlearn 2018 summer school in Nice, France.

You can find the presentation slides and the practical-session Python notebook on GitHub.

#### Talk at GDR ISIS General Meeting

2017-11-17

I had the honor of being invited to give a talk at the GDR ISIS general meeting in Sète.

I presented a short introduction to optimal transport and discussed some recent applications of OT in the machine learning community. The slides (in English) are available here.

#### Domain adaptation paper accepted at NIPS 2017 and OTML 2017 Workshop

2017-09-17

My collaborators and I have had the following paper accepted at NIPS 2017:

N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, Joint Distribution Optimal Transportation for Domain Adaptation, Neural Information Processing Systems (NIPS), 2017.

Abstract: This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature/label space distributions of the two domains Ps and Pt. We propose a solution of this problem with optimal transport, that allows to recover an estimated target P^f_t = (X, f(X)) by optimizing simultaneously the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.
BibTeX:
@inproceedings{courty2017joint,
  author = {Courty, Nicolas and Flamary, Remi and Habrard, Amaury and Rakotomamonjy, Alain},
  title = {Joint Distribution Optimal Transportation for Domain Adaptation},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year = {2017}
}

I have also been invited to present at the OTML 2017 workshop, and we will have two additional posters there.

Feel free to come and see us at our NIPS poster or at the workshop.

#### POT Python Optimal Transport library

2016-11-07

We recently released a general-purpose Python library for optimal transport called POT. The library is available on GitHub and can be easily installed from PyPI. The toolbox implements a number of solvers from the image processing and machine learning literature (see the README and the documentation for more details).

We also provide several examples of potential uses of OT in the form of Python scripts and Jupyter notebooks; the rendered notebooks show the toolbox in use without requiring a Python installation.

Here is a list of the Python notebooks if you want a quick look:

Feel free to use and contribute to the library.
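As a quick taste of the kind of solver the toolbox provides, here is a minimal entropic-regularized OT (Sinkhorn) iteration written in plain NumPy. This is an illustrative sketch, not POT's own code; the function name and the toy data are made up for the example.

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=1000):
    """Minimal Sinkhorn iterations for entropic-regularized OT.

    a, b: source/target marginal weights; M: cost matrix; reg: regularization.
    Returns a transport plan G whose marginals approximate a and b."""
    K = np.exp(-M / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns to match target marginal
        u = a / (K @ v)                  # scale rows to match source marginal
    return u[:, None] * K * v[None, :]   # transport plan

# toy problem: move uniform mass between three points on a line
a = np.ones(3) / 3
b = np.ones(3) / 3
M = np.abs(np.arange(3.0)[:, None] - np.arange(3.0)[None, :])  # |x - y| cost
G = sinkhorn(a, b, M, reg=0.1)
```

With POT itself this is a one-liner (`ot.sinkhorn(a, b, M, reg)` for the entropic plan, `ot.emd(a, b, M)` for the exact one); the loop above only illustrates what happens under the hood.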

#### Two papers in optimal transport accepted at NIPS 2016

2016-08-04

My collaborators and I have had the following two papers accepted at NIPS 2016:

R. Flamary, C. Févotte, N. Courty, V. Emiya, Optimal spectral transportation with application to music transcription, Neural Information Processing Systems (NIPS), 2016.

Abstract: Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of the decomposition compare the data and template entries frequency-wise. As such, small displacements of energy from a frequency bin to another as well as variations of timbre can disproportionally harm the fit. We address these issues by means of optimal transportation and propose a new measure of fit that treats the frequency distributions of energy holistically as opposed to frequency-wise. Building on the harmonic nature of sound, the new measure is invariant to shifts of energy to harmonically-related frequencies, as well as to small and local displacements of energy. Equipped with this new measure of fit, the dictionary of note templates can be considerably simplified to a set of Dirac vectors located at the target fundamental frequencies (musical pitch values). This in turn gives ground to a very fast and simple decomposition algorithm that achieves state-of-the-art performance on real musical data.
BibTeX:
@inproceedings{flamary2016ost,
  author = {Flamary, Remi and Févotte, Cédric and Courty, Nicolas and Emiya, Valentin},
  title = {Optimal spectral transportation with application to music transcription},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year = {2016}
}

M. Perrot, N. Courty, R. Flamary, A. Habrard, Mapping estimation for discrete optimal transport, Neural Information Processing Systems (NIPS), 2016.

Abstract: We are interested in the computation of the transport map of an Optimal Transport problem. Most of the computational approaches of Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling but do not address the problem of learning the transport map linked to the original Monge problem. Consequently, it lowers the potential usage of such methods in contexts where out-of-samples computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows to smooth the result of the Optimal Transport and generalize it on out-of-samples examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing.
BibTeX:
@inproceedings{perrot2016mapping,
  author = {Perrot, M. and Courty, N. and Flamary, R. and Habrard, A.},
  title = {Mapping estimation for discrete optimal transport},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year = {2016}
}

Feel free to come and see us at our posters; we will have live demonstrations of musical audio annotation and of seamless copy in images.

#### Helava award of the best paper in ISPRS Journal period 2012-2015

2016-07-06

Our paper has been selected for the Helava Award, i.e. best paper in the ISPRS Journal of Photogrammetry and Remote Sensing for the 2012-2015 period.

D. Tuia, R. Flamary, N. Courty, Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions, ISPRS Journal of Photogrammetry and Remote Sensing, 2015.

Abstract: In this paper, we tackle the question of discovering an effective set of spatial filters to solve hyperspectral classification problems. Instead of fixing a priori the filters and their parameters using expert knowledge, we let the model find them within random draws in the (possibly infinite) space of possible filters. We define an active set feature learner that includes in the model only features that improve the classifier. To this end, we consider a fast and linear classifier, multiclass logistic classification, and show that with a good representation (the filters discovered), such a simple classifier can reach at least state of the art performances. We apply the proposed active set learner in four hyperspectral image classification problems, including agricultural and urban classification at different resolutions, as well as multimodal data. We also propose a hierarchical setting, which allows to generate more complex banks of features that can better describe the nonlinearities present in the data.
BibTeX:
@article{tuia2015multiclass,
  author = {Tuia, D. and Flamary, R. and Courty, N.},
  title = {Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  year = {2015}
}

It is a great honor for us, and I will attend the ISPRS Congress 2016 on July 12 to receive the prize on behalf of all authors. This is joint work with Devis Tuia and Nicolas Courty.

#### Statlearn 2016

2016-03-20

I have been invited to the Statlearn 2016 Workshop in Vannes, France.

On April 8th I will present our work on optimal transport for domain adaptation, in collaboration with Nicolas Courty, Devis Tuia, and Alain Rakotomamonjy. The slides are available here.

#### PhD thesis proposal

2015-04-03

Cédric Richard and I are proposing a PhD thesis, starting in 2015, on the topic of distributed estimation over multitask networks.

If you are interested, contact Cédric or me before June 6.

For more details see the complete proposal.

#### BasMatI Summer School

2015-02-13

We are organizing a French summer school with Céline Theys, David Mary, and Claude Aime about mathematics for signal and image processing in astronomy. You can find more information on the website.

#### Best paper at PCV 2014

2014-09-10

Our paper has been chosen for a best paper award at the Photogrammetric Computer Vision symposium (PCV 2014).

D. Tuia, N. Courty, R. Flamary, A group-lasso active set strategy for multiclass hyperspectral image classification, Photogrammetric Computer Vision (PCV), 2014.
Abstract: Hyperspectral images have a strong potential for landcover/landuse classification, since the spectra of the pixels can highlight subtle differences between materials and provide information beyond the visible spectrum. Yet, a limitation of most current approaches is the hypothesis of spatial independence between samples: images are spatially correlated and the classification map should exhibit spatial regularity. One way of integrating spatial smoothness is to augment the input spectral space with filtered versions of the bands. However, open questions remain, such as the selection of the bands to be filtered, or the filterbank to be used. In this paper, we consider the entirety of the possible spatial filters by using an incremental feature learning strategy that assesses whether a candidate feature would improve the model if added to the current input space. Our approach is based on a multiclass logistic classifier with group-lasso regularization. The optimization of this classifier yields an optimality condition, that can easily be used to assess the interest of a candidate feature without retraining the model, thus allowing drastic savings in computational time. We apply the proposed method to three challenging hyperspectral classification scenarios, including agricultural and urban data, and study both the ability of the incremental setting to learn features that always improve the model and the nature of the features selected.
BibTeX:
@inproceedings{tuia2014grouplasso,
  author = {Tuia, D. and Courty, N. and Flamary, R.},
  title = {A group-lasso active set strategy for multiclass hyperspectral image classification},
  booktitle = {Photogrammetric Computer Vision (PCV)},
  year = {2014}
}

This is a joint work with Devis Tuia and Nicolas Courty.
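The optimality condition at the heart of the abstract above can be sketched in a few lines (an illustrative Python sketch under my own naming, not the paper's code): a candidate group of features whose weights are currently zero can stay out of the model exactly when the norm of its correlation with the residual is below the regularization level, so testing a feature requires no retraining.

```python
import numpy as np

def violates_optimality(Xg, residual, lam):
    """Group-lasso optimality check for an inactive group.

    Xg: (n_samples, group_size) candidate features; residual: current model
    residual. Keeping the group's weights at zero is optimal iff
    ||Xg^T residual||_2 <= lam, so a violation means adding the group to the
    active set would decrease the regularized objective."""
    return float(np.linalg.norm(Xg.T @ residual)) > lam

# a group aligned with the residual is worth adding; raise lam and it is not
residual = np.array([1.0, 1.0, 0.0])
Xg = np.array([[1.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])
print(violates_optimality(Xg, residual, lam=1.0))  # True:  add the group
print(violates_optimality(Xg, residual, lam=3.0))  # False: keep it inactive
```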

#### ICASSP 2014

2014-04-30

I will be at ICASSP 2014 in Florence. I will present the paper Active set strategy for high-dimensional non-convex sparse optimization problems on Wednesday, May 7, in the special session Optimization algorithms for high dimensional signal processing.

This is a joint work with Aurélie Boisbunon and Alain Rakotomamonjy.

#### AMOR project is online

2013-12-04

The AMOR project is a young researchers' project funded by the GdR ISIS and the GRETSI association.

The web page of the project is now available here.

#### Talk about learning with infinitely many features

2013-01-02

I have been invited to present our work on learning with infinitely many features at a GDR ISIS meeting.

The slides of the presentation (in English) are now available here.

#### New SVM Toolbox

2012-09-10

The code for my linear SVM toolbox is now available in the software section of the website. It can train linear SVMs with a wide class of regularization terms, such as the l1 norm or l1-lp mixed norms.

The toolbox is written in Matlab, and the solver is a forward-backward splitting (FISTA) algorithm.
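The toolbox itself is in Matlab, but the forward-backward idea is easy to sketch. Below is a minimal FISTA for l1-regularized least squares in Python; this is an illustration under my own naming, not the toolbox's code, and it uses the plain l1 penalty rather than the mixed norms.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(X, y, lam, n_iter=500):
    """FISTA for min_w 0.5 * ||X w - y||^2 + lam * ||w||_1 (minimal sketch)."""
    L = np.linalg.norm(X, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    z, t = w.copy(), 1.0
    for _ in range(n_iter):
        # forward (gradient) step, then backward (proximal) step
        w_new = soft_threshold(z - X.T @ (X @ z - y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = w_new + (t - 1.0) / t_new * (w_new - w)   # momentum extrapolation
        w, t = w_new, t_new
    return w

# sparse recovery on noiseless synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 1.5]
y = X @ w_true
w_hat = fista(X, y, lam=0.01)
```

Swapping `soft_threshold` for the proximal operator of a mixed norm gives the group-sparse variants the toolbox supports.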

#### Website update

2012-07-25

I just updated this website! The software page is now cleaner and the website is now generated by Webgen.py instead of webgen.

Moreover you can now download my PhD thesis (in french) from here or from thèse en ligne.

#### PhD Defence

2011-12-06

I defended my PhD at the University of Rouen on December 6th. The final manuscript (in French) will be available shortly, and the presentation slides are available here.

The jury was composed of:

Reviewers :

• Dr. Michele Sebag, LRI CNRS

Examiners:

• Pr. Jocelyn Chanussot, INP Grenoble
• Pr. Liva Ralaivola, Université Aix-Marseille II
• Pr. Stéphane Canu, INSA Rouen
• Pr. Alain Rakotomamonjy, Université de Rouen

#### Source code for SVM with uncertain labels

2011-07-06

This year, I was at the SSP 2011 workshop in Nice, France. I had the pleasure of presenting our work with Émilie Niaf on SVM classification with uncertain labels and probabilistic prediction. The poster is now online, and the Matlab source code of our approach can now be downloaded.

#### BCI poster at CAP 2011

2011-05-10

CAp is a French conference in machine learning that will be held May 17-20, 2011 in Chambéry. This year, I will be presenting a poster about BCI for the general public.

#### MLSP 2010 Presentation

2010-08-31

Here are the slides of my MLSP presentation of the paper “Learning spatial filters for multispectral image segmentation”. The draft version of the paper will be available shortly.

#### MLSP 2010

2010-08-22

I’m going to MLSP 2010 in Finland, where I will present joint work with Devis Tuia and Gustavo Camps-Valls from the IPL in Valencia. We propose a method to learn large-margin spatial filters for image segmentation.

The presentation and the paper will be available on this website shortly.

#### CAp 2010 / BCI in Paris

2010-06-20

In May, I presented my work on large-margin filtering for non-linear problems at the French conference CAp 2010 and at a BCI meeting in Paris organized by Cédric Gouy-Pailler.

The slides are in French, but the paper is in English.

#### MLSP 2010 Competition

2010-04-28

This year, the MLSP (Machine Learning for Signal Processing) workshop organized a BCI competition: “MIND READING”. The aim was to predict rare events during rapid image presentation. 35 international teams took part in the competition.

The BCI LITIS team obtained good results (ranks 3, 5, 6, and 8 out of 35) in the competition. Results are available here.

Team members:

• Rémi Flamary (2)
• Benjamin Labbé (1)
• Grégoire Mesnil (2)
• Xilan Tian(1)
• Florian Yger (2)
• Alain Rakotomamonjy (2) – study supervision
• Gilles Gasso (1) – study supervision

Institutions:

• (1) INSA de Rouen – LITIS
• (2) Université de Rouen – LITIS

This news item was adapted from the LITIS website.

#### ICASSP 2010

2010-03-18

I will be at the ICASSP 2010 conference, where I will present my work on large-margin filtering.

#### Presentation at CREATIS

2010-01-27

In January, I gave a presentation at the CREATIS laboratory about SVM methods, kernel learning, and large-margin filtering. The slides (in French) can be downloaded.

#### MLSP 2009 In Grenoble

2009-07-30

I will present my paper Variational Sequence Labeling at the Machine Learning for Signal Processing Workshop 2009 (MLSP 09) on September 2nd.

This paper was written jointly with Jean-Loïc Rose.