# Rémi Flamary

Professional website

## Publications


### Submitted or in-press works

N. Courty, R. Flamary, A. Habrard, A. Rakotomamonjy, "Joint Distribution Optimal Transportation for Domain Adaptation" (Submitted), 2017.
Abstract: This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature/label space distributions of the two domains Ps and Pt. We propose a solution to this problem with optimal transport, which allows us to recover an estimated target distribution Ptf = (X, f(X)) by simultaneously optimizing the optimal coupling and f. We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypotheses and loss functions, is demonstrated with real-world classification and regression problems, on which we reach or surpass state-of-the-art results.
BibTeX:
@inproceedings{courty2017joint,
author = {Courty, Nicolas and Flamary, Remi and Habrard, Amaury and Rakotomamonjy, Alain},
title = {Joint Distribution Optimal Transportation for Domain Adaptation},
year = {2017 (Submitted)}
}
P. Hartley, R. Flamary, N. Jackson, A. S. Tagore, R. B. Metcalf, "Support Vector Machine classification of strong gravitational lenses" (Submitted), 2017.
Abstract: The imminent advent of very large-scale optical sky surveys, such as Euclid and LSST, makes it important to find efficient ways of discovering rare objects such as strong gravitational lens systems, where a background object is multiply gravitationally imaged by a foreground mass. As well as finding the lens systems, it is important to reject false positives due to intrinsic structure in galaxies, and much work is in progress with machine learning algorithms such as neural networks in order to achieve both these aims. We present and discuss a Support Vector Machine (SVM) algorithm which makes use of a Gabor filterbank in order to provide learning criteria for separation of lenses and non-lenses, and demonstrate using blind challenges that under certain circumstances it is a particularly efficient algorithm for rejecting false positives. We compare the SVM engine with a large-scale human examination of 100000 simulated lenses in a challenge dataset, and also apply the SVM method to survey images from the Kilo-Degree Survey.
BibTeX:
@article{hartley2017support,
author = {Hartley, Philippa and Flamary, Remi and Jackson, Neal and Tagore, A. S. and Metcalf, R. B.},
title = {Support Vector Machine classification of strong gravitational lenses},
year = {2017 (Submitted)}
}
R. Mourya, A. Ferrari, R. Flamary, P. Bianchi, C. Richard, "Distributed Deblurring of Large Images of Wide Field-Of-View" (Submitted), 2017.
Abstract: Image deblurring is an economic way to reduce certain degradations (blur and noise) in acquired images. It has thus become an essential tool in high-resolution imaging in many applications, e.g., astronomy, microscopy and computational photography. In applications such as astronomy and satellite imaging, acquired images can be extremely large (up to gigapixels), covering a wide field-of-view and suffering from shift-variant blur. Most existing image deblurring techniques are designed and implemented to work efficiently on a centralized computing system having multiple processors and a shared memory. The largest image that can be handled is thus limited by the size of the physical memory available on the system. In this paper, we propose a distributed nonblind image deblurring algorithm in which several connected processing nodes (with reasonable computational resources) simultaneously process different portions of a large image while maintaining a certain coherency among them to finally obtain a single crisp image. Unlike the existing centralized techniques, image deblurring in a distributed fashion raises several issues. To tackle these issues, we consider certain approximations that trade off the quality of the deblurred image against the computational resources required to achieve it. The experimental results show that our algorithm produces images of similar quality to the existing centralized techniques while allowing distribution, and thus being cost-effective for extremely large images.
BibTeX:
@article{mourya2017distdeblur,
author = {Mourya, Rahul and Ferrari, Andre and Flamary, Remi and Bianchi, Pascal and Richard, Cedric},
title = {Distributed Deblurring of Large Images of Wide Field-Of-View},
year = {2017 (Submitted)}
}
R. Flamary, M. Cuturi, N. Courty, A. Rakotomamonjy, "Wasserstein Discriminant Analysis" (Submitted), 2017.
Abstract: Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower-dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two quantities: the dispersion of projected points coming from different classes, divided by the dispersion of projected points coming from the same class. To quantify dispersion, WDA uses regularized Wasserstein distances rather than the cross-variance measures usually considered, notably in LDA. Thanks to the underlying principles of optimal transport, WDA is able to capture both global (at distribution scale) and local (at sample scale) interactions between classes. Regularized Wasserstein distances can be computed using the Sinkhorn matrix scaling algorithm; we show that the optimization of WDA can be tackled using automatic differentiation of Sinkhorn iterations. Numerical experiments show promising results, both in terms of prediction and visualization, on toy examples and real-life datasets such as MNIST and deep features obtained from a subset of the Caltech dataset.
BibTeX:
@inproceedings{flamary2017wasserstein,
author = {Flamary, Remi and Cuturi, Marco and Courty, Nicolas and Rakotomamonjy, Alain},
title = {Wasserstein Discriminant Analysis},
year = {2017 (Submitted)}
}
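The WDA abstract above relies on the Sinkhorn matrix scaling algorithm to compute regularized Wasserstein distances. As a rough illustration of that building block, here is a generic NumPy sketch of the scaling iterations; this is my own minimal version, not the authors' implementation, and all names are hypothetical:

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=1000):
    """Entropic-regularized optimal transport via Sinkhorn scaling.

    a, b : source/target histograms (each summing to 1)
    M    : pairwise cost matrix
    reg  : entropic regularization strength
    Returns the coupling G whose rows sum to a and columns to b.
    """
    K = np.exp(-M / reg)           # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)          # rescale columns toward b
        u = a / (K @ v)            # rescale rows toward a
    return u[:, None] * K * v[None, :]

# Toy check: transport between two 3-bin histograms with |i-j| cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
M = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))
G = sinkhorn(a, b, M, reg=0.1)
```

Each iteration only involves matrix-vector products, which is what makes differentiating through the iterations (as the abstract describes) tractable with automatic differentiation.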

### 2017

R. Mourya, A. Ferrari, R. Flamary, P. Bianchi, C. Richard, "Distributed Approach for Deblurring Large Images with Shift-Variant Blur", European Conference on Signal Processing (EUSIPCO), 2017.
Abstract: Image deblurring techniques are effective tools to obtain high-quality images from acquired images degraded by blur and noise. In applications such as astronomy and satellite imaging, the size of acquired images can be extremely large (up to gigapixels), covering a wide field-of-view suffering from shift-variant blur. Most existing deblurring techniques are designed to be cost-effective on a centralized computing system having a shared memory and possibly a multicore processor. The largest image they can handle is then conditioned by the memory capacity of the system. In this paper, we propose a distributed shift-variant image deblurring algorithm in which several connected processing units (each with reasonable computational resources) can simultaneously deblur different portions of a large image while maintaining a certain coherency among them to finally obtain a single crisp image. The proposed algorithm is based on a distributed Douglas-Rachford splitting algorithm with a specific structure of the penalty parameters used in the proximity operator. Numerical experiments show that the proposed algorithm produces images of similar quality to the existing centralized techniques while being distributed and cost-effective for extremely large images.
BibTeX:
@inproceedings{mourya2017distributed,
author = {Mourya, Rahul and Ferrari, Andre and Flamary, Remi and Bianchi, Pascal and Richard, Cedric},
title = {Distributed Approach for Deblurring Large Images with Shift-Variant Blur},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2017}
}
R. Flamary, "Astronomical image reconstruction with convolutional neural networks", European Conference on Signal Processing (EUSIPCO), 2017.
Abstract: State-of-the-art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic or at least superlinear complexity w.r.t. the number of pixels in the image. We investigate in this work the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state-of-the-art methods, in addition to being interpretable.
BibTeX:
@inproceedings{flamary2017astro,
author = {Flamary, Remi},
title = {Astronomical image reconstruction with convolutional neural networks},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2017}
}
R. Ammanouil, A. Ferrari, R. Flamary, C. Ferrari, D. Mary, "Multi-frequency image reconstruction for radio-interferometry with self-tuned regularization parameters", European Conference on Signal Processing (EUSIPCO), 2017.
Abstract: As the world's largest radio telescope, the Square Kilometer Array (SKA) will provide radio interferometric data with unprecedented detail. Image reconstruction algorithms for radio interferometry are challenged to scale well with terabyte image sizes never seen before. In this work, we investigate one such 3D image reconstruction algorithm known as MUFFIN (MUlti-Frequency image reconstruction For radio INterferometry). In particular, we focus on the challenging task of automatically finding the optimal regularization parameter values. In practice, finding the regularization parameters using classical grid search is computationally intensive and nontrivial due to the lack of ground-truth. We adopt a greedy strategy where, at each iteration, the optimal parameters are found by minimizing the predicted Stein unbiased risk estimate (PSURE). The proposed self-tuned version of MUFFIN involves parallel and computationally efficient steps, and scales well with large-scale data. Finally, numerical results on a 3D image are presented to showcase the performance of the proposed approach.
BibTeX:
@inproceedings{ammanouil2017multi,
author = {Ammanouil, Rita and Ferrari, Andre and Flamary, Remi and Ferrari, Chiara and Mary, David},
title = {Multi-frequency image reconstruction for radio-interferometry with self-tuned regularization parameters},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2017}
}
R. Rougeot, R. Flamary, D. Galano, C. Aime, "Performance of hybrid externally occulted Lyot solar coronagraph, Application to ASPIICS", Astronomy and Astrophysics, 2017.
Abstract: Context. The future ESA formation-flying mission Proba-3 will fly the solar coronagraph ASPIICS, which couples a 50mm Lyot coronagraph with a 1.42m-diameter external occulter set 144m in front of it. Aims. We perform a numerical study of the theoretical performance of a hybrid coronagraph such as ASPIICS. In this system, an internal occulter is set on the image of the external occulter instead of a Lyot mask on the solar image. First, we determine the rejection due to the external occulter alone. Second, the effects of sizing the internal occulter and the Lyot stop are analyzed. This work also applies to the classical Lyot coronagraph alone and the external solar coronagraph. Methods. The numerical computation uses the parameters of ASPIICS. We first take the approach of Aime, C. 2013, A&A 558, A138, to express the wave front from Fresnel diffraction at the entrance aperture of the Lyot coronagraph. From there, each wave front coming from a given point of the Sun is propagated through the Lyot coronagraph in three steps: from the aperture to the image of the external occulter, where the internal occulter is set; from this plane to the image of the entrance aperture, where the Lyot stop is set; and from there to the final observing plane. Making use of the axis-symmetry, wave fronts originating from one radius of the Sun are computed and the intensities circularly averaged. Results. As expected, the image of the external occulter appears as a bright circle, which locally exceeds the brightness of the Sun observed without an external occulter. However, residual sunlight is below 10^-8 outside 1.5R. The Lyot coronagraph effectively complements the external occultation. At the expense of a small reduction in flux and resolution, reducing the Lyot stop allows a clear gain in rejection. Oversizing the internal occulter produces a similar effect but tends to exclude observations very close to the limb. We provide a graph that allows simple estimation of the performance as a function of the sizes of the internal occulter and Lyot stop.
BibTeX:
@article{rougeot2016performance,
author = { Rougeot, Raphael and Flamary, Remi and Galano, Damien and Aime, Claude},
title = {Performance of hybrid externally occulted Lyot solar coronagraph, Application to ASPIICS},
journal = { Astronomy and Astrophysics},
year = {2017}
}

### 2016

D. Mary, R. Flamary, C. Theys, C. Aime, "Mathematical Tools for Instrumentation and Signal Processing in Astronomy", 2016.
Abstract: This book is a collection of 13 articles corresponding to lectures and research works presented at the CNRS summer school « Bases mathématiques pour l'instrumentation et le traitement du signal en astronomie ». The school took place in Nice and Porquerolles, France, from June 1 to 5, 2015. The book contains three parts: I. Astronomy in the coming decade and beyond. The three chapters of this part emphasize the strong interdisciplinary nature of astrophysics, at both theoretical and observational levels, and the increasingly large sizes of data sets produced by increasingly complex instruments and infrastructures. These remarkable features call at the same time for more mathematical tools in signal processing and instrumentation, in particular in statistical modeling, large-scale inference, data mining and machine learning, and for efficient processing solutions allowing their implementation. II. Mathematical concepts, methods and tools. The first chapter of this part starts with an example of how pure mathematics can lead to new instrumental concepts, in this case for exoplanet detection. The four other chapters provide a detailed introduction to four main topics: orthogonal functions as a powerful tool for modeling signals and images, covering Fourier, Fourier-Legendre and Fourier-Bessel series for 1D signals and spherical harmonic series for 2D signals; optimization and machine learning methods with application to inverse problems, denoising and classification, with on-line numerical experiments; large-scale statistical inference with adaptive procedures allowing control of the False Discovery Rate, like the Benjamini-Hochberg procedure, its Bayesian interpretation and some variations; and processing solutions for large data sets, covering the Hadoop framework and YARN, the main tools for managing both the storage and computing capacities of a cluster of machines, as well as recent solutions like Spark. III. Applications: tools in action. This part collects a number of current research works where some of the tools above are presented in action: optimization for deconvolution, statistical modeling, multiple testing, and optical and instrumental models. The applications of this part include astronomical imaging, detection and estimation of circumgalactic structures, and detection of exoplanets.
BibTeX:
@book{mary2016mathematical,
author = {Mary, David and Flamary, Remi and Theys, Celine and Aime, Claude},
title = {Mathematical Tools for Instrumentation and Signal Processing in Astronomy},
publisher = {EDP Sciences},
year = {2016}
}
R. Flamary, C. Févotte, N. Courty, V. Emiya, "Optimal spectral transportation with application to music transcription", Neural Information Processing Systems (NIPS), 2016.
Abstract: Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of the decomposition compare the data and template entries frequency-wise. As such, small displacements of energy from one frequency bin to another as well as variations of timbre can disproportionally harm the fit. We address these issues by means of optimal transportation and propose a new measure of fit that treats the frequency distributions of energy holistically as opposed to frequency-wise. Building on the harmonic nature of sound, the new measure is invariant to shifts of energy to harmonically-related frequencies, as well as to small and local displacements of energy. Equipped with this new measure of fit, the dictionary of note templates can be considerably simplified to a set of Dirac vectors located at the target fundamental frequencies (musical pitch values). This in turn gives rise to a very fast and simple decomposition algorithm that achieves state-of-the-art performance on real musical data.
BibTeX:
@inproceedings{flamary2016ost,
author = {Flamary, Remi and Févotte, Cédric and Courty, Nicolas and Emiya, Valentin},
title = {Optimal spectral transportation with application to music transcription},
booktitle = { Neural Information Processing Systems (NIPS)},
year = {2016}
}
M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for discrete optimal transport", Neural Information Processing Systems (NIPS), 2016.
Abstract: We are interested in the computation of the transport map of an Optimal Transport problem. Most computational approaches to Optimal Transport use the Kantorovich relaxation of the problem to learn a probabilistic coupling but do not address the problem of learning the transport map linked to the original Monge problem. Consequently, this lowers the potential usage of such methods in contexts where out-of-sample computations are mandatory. In this paper we propose a new way to jointly learn the coupling and an approximation of the transport map. We use a jointly convex formulation which can be efficiently optimized. Additionally, jointly learning the coupling and the transport map allows smoothing the result of the Optimal Transport and generalizing it to out-of-sample examples. Empirically, we show the interest and the relevance of our method in two tasks: domain adaptation and image editing.
BibTeX:
@inproceedings{perrot2016mapping,
author = {Perrot, M. and Courty, N. and Flamary, R. and Habrard, A.},
title = {Mapping estimation for discrete optimal transport},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2016}
}
N. Courty, R. Flamary, D. Tuia, A. Rakotomamonjy, "Optimal transport for domain adaptation", Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2016.
Abstract: Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on specific data representations become more robust when confronted with data depicting the same semantic concepts (the classes) but observed by another observation system with its own specificities. Among the many strategies proposed to adapt one domain to another, finding domain-invariant representations has shown excellent properties, as a single classifier can use labelled samples from the source domain under this representation to predict the unlabelled samples of the target domain. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labelled samples in the source domain to remain close during transport. This way, we exploit at the same time the few labeled samples in the source domain and the distributions of the input/observation variables observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, which consistently outperforms state-of-the-art approaches.
BibTeX:
@article{courty2016optimal,
author = { Courty, N. and Flamary, R.  and Tuia, D. and Rakotomamonjy, A.},
title = {Optimal transport for domain adaptation},
journal = {Pattern Analysis and Machine Intelligence, IEEE Transactions on},
year = {2016}
}
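To give a feel for the transport-then-map idea behind OT-based domain adaptation, here is a deliberately minimal 1D sketch, my own illustration rather than the paper's regularized model: with uniform weights and a convex ground cost, the optimal coupling in 1D is obtained by sorting, and a barycentric projection then moves each source sample onto the target domain. The helper names are hypothetical.

```python
import numpy as np

def ot_1d_coupling(xs, xt):
    """Optimal coupling for 1D samples with uniform weights: for a
    convex ground cost, sorting both sets gives the optimal (monotone)
    assignment, so the k-th smallest source maps to the k-th smallest
    target."""
    n = len(xs)
    G = np.zeros((n, n))
    G[np.argsort(xs), np.argsort(xt)] = 1.0 / n
    return G

def barycentric_map(G, xt):
    """Send each source sample to the barycenter of the target
    samples it is coupled with (barycentric projection)."""
    return (G / G.sum(axis=1, keepdims=True)) @ xt

xs = np.array([0.0, 1.0, 2.0])      # "source domain" samples
xt = np.array([10.0, 11.0, 12.0])   # "target domain" samples
G = ot_1d_coupling(xs, xt)
mapped = barycentric_map(G, xt)     # source samples shifted onto the target
```

After the mapping, a classifier trained on the transported source samples operates directly in the target domain, which is the adaptation mechanism the abstract describes (there with regularization and in higher dimension).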
I. Harrane, R. Flamary, C. Richard, "Doubly partial-diffusion LMS over adaptive networks", Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2016.
Abstract: Diffusion LMS is an efficient strategy for solving distributed optimization problems with cooperating agents. Nodes are interested in estimating the same parameter vector and exchange information with their neighbors to improve their local estimates. However, successful implementation of such applications depends on a substantial amount of communication resources. In this paper, we introduce diffusion algorithms that have a significantly reduced communication load without compromising performance. We also perform analyses in the mean and mean-square sense. Simulation results are provided to confirm the theoretical findings.
BibTeX:
@inproceedings{harrane2016doubly,
author = {Harrane, Ibrahim and Flamary, R. and Richard, C.},
title = {Doubly partial-diffusion LMS over adaptive networks},
booktitle = {Asilomar Conference on Signals, Systems and Computers (ASILOMAR)},
year = {2016}
}
S. Nakhostin, N. Courty, R. Flamary, D. Tuia, T. Corpetti, "Supervised planetary unmixing with optimal transport", Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2016.
Abstract: This paper focuses on spectral unmixing and presents an original technique based on Optimal Transport. Optimal Transport consists in estimating a plan that transports one spectrum onto another with minimal cost, enabling the computation of an associated distance (the Wasserstein distance) that can be used as an alternative metric to compare hyperspectral data. This is exploited for spectral unmixing, where abundances in each pixel are estimated on the basis of their projections in a Wasserstein sense (Bregman projections) onto known endmembers. In this work, an over-complete dictionary is used to deal with internal variability between endmembers, while a regularization term, also based on the Wasserstein distance, is used to promote prior proportion knowledge in the endmember groups. Experiments are performed on real hyperspectral data of asteroid 4-Vesta.
BibTeX:
@inproceedings{nakhostin2016planetary,
author = {Nakhostin, Sina  and Courty, Nicolas and Flamary, Remi and Tuia, D. and Corpetti, Thomas},
title = {Supervised planetary unmixing with optimal transport},
booktitle = {Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},
year = {2016}
}
S. Canu, R. Flamary, D. Mary, "Introduction to optimization with applications in astronomy and astrophysics", Mathematical tools for instrumentation and signal processing in astronomy, 2016.
Abstract: This chapter aims at providing an introduction to numerical optimization with some applications in astronomy and astrophysics. We provide important preliminary definitions that will guide the reader towards different optimization procedures. We discuss three families of optimization problems and describe numerical algorithms allowing, when this is possible, to solve these problems. For each family, we present in detail simple examples and more involved advanced examples. As a final illustration, we focus on two worked-out examples of optimization applied to astronomical data. The first application is a supervised classification of RR-Lyrae stars. The second one is the denoising of galactic spectra formulated by means of sparsity inducing models in a redundant dictionary.
BibTeX:
@incollection{canu2016introduction,
author = {Canu, Stephane and Flamary, Remi and Mary, David},
title = {Introduction to optimization with applications in astronomy and astrophysics},
booktitle = { Mathematical tools for instrumentation and signal processing in astronomy},
editor = {Mary, David and Flamary, Remi and Theys, Celine and Aime, Claude},
year = {2016}
}
R. Flamary, A. Rakotomamonjy, M. Sebag, "Apprentissage statistique pour les BCI", Les interfaces cerveau-ordinateur 1, fondements et méthodes, pp 197-215, 2016.
Abstract: This chapter introduces statistical learning and its application to brain-computer interfaces. First, the general principle of supervised learning is presented and the practical difficulties of its implementation are discussed, in particular the aspects relating to sensor selection and multi-subject learning. The chapter also details the validation of a learning approach, including the various performance measures and the optimization of the hyperparameters of the considered algorithm. The reader is invited to experiment with the algorithms described: a Matlab/Octave toolbox reproduces the experiments illustrating the chapter and contains the implementation details of the various methods.
BibTeX:
@incollection{flamary2016apprentissage,
author = {Flamary, Remi and Rakotomamonjy, Alain and Sebag, Michele},
title = {Apprentissage statistique pour les BCI},
pages = { 197-215},
booktitle = { Les interfaces cerveau-ordinateur 1, fondements et méthodes},
editor = { {Clerc, Maureen and Bougrain, Laurent and Lotte, Fabien}},
publisher = { ISTE Editions},
year = {2016}
}
R. Flamary, A. Rakotomamonjy, M. Sebag, "Statistical learning for BCIs", Brain Computer Interfaces 1: Fundamentals and Methods, pp 185-206, 2016.
Abstract: This chapter introduces statistical learning and its applications to brain–computer interfaces. We begin by presenting the general principles of supervised learning and discussing the difficulties raised by its implementation, with a particular focus on aspects related to selecting sensors and multisubject learning. This chapter also describes in detail how a learning approach may be validated, including various metrics of performance and optimization of the hyperparameters of the considered algorithms. We invite the reader to experiment with the algorithms described here: the illustrative experiments included in this chapter may be reproduced using a Matlab/Octave toolbox, which contains the implementation details of the various different methods.
BibTeX:
@incollection{flamary2016statistical,
author = {Flamary, Remi and Rakotomamonjy, Alain and Sebag, Michele},
title = {Statistical learning for BCIs},
pages = { 185-206},
booktitle = { Brain Computer Interfaces 1: Fundamentals and Methods},
editor = { {Clerc, Maureen and Bougrain, Laurent and Lotte, Fabien}},
publisher = { ISTE Ltd and John Wiley and Sons Inc },
year = {2016}
}
D. Tuia, R. Flamary, M. Barlaud, "Non-convex regularization in remote sensing", Geoscience and Remote Sensing, IEEE Transactions on, 2016.
Abstract: In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and its parametrization. We consider regularization via traditional squared (l2) and sparsity-promoting (l1) norms, as well as more unconventional nonconvex regularizers (lp and Log Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
BibTeX:
@article{tuia2016nonconvex,
author = {Tuia, D. and  Flamary, R. and Barlaud, M.},
title = {Non-convex regularization in remote sensing},
journal = {Geoscience and Remote Sensing, IEEE Transactions on},
year = {2016}
}
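As a hedged illustration of the sparsity-promoting regularizers compared in the paper (l1 versus the nonconvex Log Sum Penalty), the following sketch contrasts plain soft thresholding with an iteratively reweighted variant. The majorize-minimize scheme shown here is a standard generic way to handle the log-sum penalty, not necessarily the authors' algorithm, and all function names are mine:

```python
import numpy as np

def prox_l1(x, lam):
    """Soft thresholding: proximal operator of lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_logsum_mm(x, lam, eps, n_iter=20):
    """One generic way to handle the nonconvex Log Sum Penalty
    lam * sum(log(1 + |x_i| / eps)): iteratively reweighted soft
    thresholding (a majorize-minimize scheme, where small entries
    get larger weights and are shrunk more aggressively)."""
    z = x.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(z) + eps)                      # reweighting
        z = np.sign(x) * np.maximum(np.abs(x) - lam * w, 0.0)
    return z

x = np.array([3.0, 0.5, -2.0, 0.1])
shrunk_l1 = prox_l1(x, lam=1.0)            # small entries zeroed, large shrunk
shrunk_lsp = prox_logsum_mm(x, lam=1.0, eps=0.5)
```

The qualitative difference matches the paper's motivation for nonconvex penalties: the reweighted step barely biases large coefficients while still zeroing small ones, whereas l1 shrinks everything by the same amount.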
N. Courty, R. Flamary, D. Tuia, T. Corpetti, "Optimal transport for data fusion in remote sensing", International Geoscience and Remote Sensing Symposium (IGARSS), 2016.
Abstract: One of the main objectives of data fusion is the integration of several acquisitions of the same physical object, in order to build a new consistent representation that embeds all the information from the different modalities. In this paper, we propose the use of optimal transport theory as a powerful means of establishing correspondences between the modalities. After reviewing important properties and computational aspects, we showcase its application to three remote sensing fusion problems: domain adaptation, time series averaging and change detection in LIDAR data.
BibTeX:
@inproceedings{courty2016optimalrs,
author = {Courty, N. and Flamary, R. and Tuia, D. and Corpetti, T.},
title = {Optimal transport for data fusion in remote sensing},
booktitle = {International Geoscience and Remote Sensing Symposium (IGARSS)},
year = {2016}
}
I. Harrane, R. Flamary, C. Richard, "Toward privacy-preserving diffusion strategies for adaptation and learning over networks", European Conference on Signal Processing (EUSIPCO), 2016.
Abstract: Distributed optimization makes it possible to address inference problems in a decentralized manner over networks, where agents can exchange information with their neighbors to improve their local estimates. Privacy preservation has become an important issue in many data mining applications. It aims at protecting the privacy of individual data in order to prevent the disclosure of sensitive information during the learning process. In this paper, we derive a diffusion strategy of the LMS type to solve distributed inference problems in the case where agents are also interested in preserving the privacy of the local measurements. We carry out a detailed mean and mean-square error analysis of the algorithm. Simulations are provided to check the theoretical findings.
BibTeX:
@inproceedings{haranne2016toward,
author = {Harrane, I. and Flamary, R. and Richard, C.},
title = {Toward privacy-preserving diffusion strategies for adaptation and learning over networks},
booktitle = {European Conference on Signal Processing (EUSIPCO)},
year = {2016}
}
A. Rakotomamonjy, R. Flamary, G. Gasso, "DC Proximal Newton for Non-Convex Optimization Problems", Neural Networks and Learning Systems, IEEE Transactions on, Vol. 27, N. 3, pp 636-647, 2016.
Abstract: We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm that is able to deal with such a situation. The algorithm consists in obtaining a descent direction from an approximation of the loss function and then performing a line search to ensure sufficient descent. A theoretical analysis is provided showing that the iterates of the proposed algorithm admit as limit points stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art for a problem with a convex loss function and non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and regularizer are non-convex.
BibTeX:
@article{rakoto2015dcprox,
author = { Rakotomamonjy, A. and Flamary, R. and Gasso, G.},
title = {DC Proximal Newton for Non-Convex Optimization Problems},
journal = { Neural Networks and Learning Systems, IEEE Transactions on},
volume = {27},
number = {3},
pages = {636-647},
year = {2016}
}
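Note: as a toy illustration of the DC framework underlying this paper (the basic DCA scheme, not the proximal Newton algorithm itself), each iteration linearizes the concave part -h at the current point and minimizes the resulting convex surrogate. A minimal sketch on the one-dimensional DC program F(x) = x^4 - x^2, with g(x) = x^4 and h(x) = x^2 both convex:

```python
import numpy as np

# DC program: minimize F(x) = g(x) - h(x) with g(x) = x^4, h(x) = x^2.
# DCA step: minimize the convex surrogate g(x) - h'(x_k) * x, whose
# first-order condition 4 x^3 = 2 x_k gives x_{k+1} = (x_k / 2)^(1/3).

def dca(x0, n_iter=50):
    x = x0
    for _ in range(n_iter):
        x = np.sign(x) * (abs(x) / 2.0) ** (1.0 / 3.0)
    return x

x_star = dca(2.0)
# Fixed points satisfy x^2 = 1/2: the iterates converge to a stationary
# point of F, here x = 1/sqrt(2) when starting from x0 = 2.
```

The example only shows the limit-point property proved in the paper: the iterates land on a stationary point of the DC objective, not necessarily a global minimum.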

### 2015

R. Flamary, A. Rakotomamonjy, G. Gasso, "Importance Sampling Strategy for Non-Convex Randomized Block-Coordinate Descent", IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015.
Abstract: As the number of samples and the dimensionality of optimization problems related to statistics and machine learning explode, block coordinate descent algorithms have gained popularity since they reduce the original problem to several smaller ones. Coordinates to be optimized are usually selected randomly according to a given probability distribution. We introduce an importance sampling strategy that helps randomized coordinate descent algorithms focus on blocks that are still far from convergence. The framework applies to problems composed of the sum of two possibly non-convex terms, one being separable and non-smooth. We compare our algorithm to a full gradient proximal approach as well as to a randomized block coordinate algorithm that considers uniform sampling and cyclic block coordinate descent. Experimental evidence shows the clear benefit of using an importance sampling strategy.
BibTeX:
@inproceedings{flamary2015importance,
author = {Flamary, R. and Rakotomamonjy, A. and  Gasso, G.},
title = {Importance Sampling Strategy for Non-Convex Randomized Block-Coordinate Descent},
booktitle = {IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)},
year = {2015}
}
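Note: a minimal numpy sketch of the idea on a small lasso problem. The sampling rule below (probability proportional to the last proximal step on each coordinate, plus a floor so every block keeps nonzero probability) is a simplified progress-based heuristic in the spirit of the paper, not its exact strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lasso: min_w 0.5/n ||X w - y||^2 + lam ||w||_1, solved by randomized
# proximal coordinate descent with importance sampling.

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

n, d, lam = 100, 20, 0.1
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(n)

L = (X ** 2).sum(axis=0) / n        # per-coordinate Lipschitz constants
w = np.zeros(d)
score = np.ones(d)                  # running progress estimates
eps = 1e-3                          # floor keeping all blocks reachable

def obj(w):
    return 0.5 / n * np.sum((X @ w - y) ** 2) + lam * np.abs(w).sum()

f0 = obj(w)
for _ in range(2000):
    p = (score + eps) / (score + eps).sum()
    j = rng.choice(d, p=p)                      # importance sampling
    g = X[:, j] @ (X @ w - y) / n               # partial gradient
    w_new = soft(w[j] - g / L[j], lam / L[j])   # proximal coordinate step
    score[j] = abs(w_new - w[j])                # progress on block j
    w[j] = w_new
f1 = obj(w)
```

The sampling distribution concentrates on coordinates that moved recently, which is what lets the method spend its budget on blocks still far from convergence.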
R. Flamary, I. Harrane, M. Fauvel, S. Valero, M. Dalla Mura, "Discrimination périodique à partir d’observations multi-temporelles", GRETSI, 2015.
Abstract: In this work, we propose a novel linear classification scheme for non-stationary periodic data. We express the classifier in a temporal basis while regularizing its temporal complexity, leading to a convex optimization problem. Numerical experiments show very good results on a simulated example and on a real-life remote sensing image classification problem.
BibTeX:
@inproceedings{flamary2015discrimination,
author = {Flamary, R. and Harrane, I. and Fauvel, M. and Valero, S. and Dalla Mura, M.},
title = {Discrimination périodique à partir d’observations multi-temporelles},
booktitle = {GRETSI},
year = {2015}
}
D. Tuia, R. Flamary, A. Rakotomamonjy, N. Courty, "Multitemporal classification without new labels: a solution with optimal transport", International Workshop on the Analysis of Multitemporal Remote Sensing Images (Multitemp), 2015.
Abstract: We propose to adapt distributions between couples of remote sensing images with regularized optimal transport: we apply two forms of regularizations, namely an entropy-based regularization and a class-based regularization to a series of classification problems involving very high resolution images acquired by the WorldView2 satellite. We study the effect of the two regularizers on the quality of the transport.
BibTeX:
@inproceedings{tuia2015multitemporal,
author = {Tuia, D. and Flamary, R. and Rakotomamonjy, A. and  Courty, N.},
title = {Multitemporal classification without new labels: a solution with optimal transport},
booktitle = {International Workshop on the Analysis of Multitemporal Remote Sensing Images (Multitemp)},
year = {2015}
}
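Note: the entropy-based regularization studied in this paper can be computed with Sinkhorn's matrix-scaling iterations. A self-contained numpy sketch on random point clouds (illustrative only; the class-based regularizer of the paper is not reproduced):

```python
import numpy as np

# Entropic OT: gamma = diag(u) K diag(v) with K = exp(-M / reg),
# where u and v are obtained by alternating marginal rescalings.

def sinkhorn(a, b, M, reg, n_iter=2000):
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # match target marginal b
        u = a / (K @ v)     # match source marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
xs = rng.standard_normal((30, 2))         # source samples
xt = rng.standard_normal((40, 2)) + 1.0   # shifted target samples
M = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)  # squared distances
M /= M.max()                              # normalize to avoid underflow
a = np.full(30, 1.0 / 30)                 # uniform source weights
b = np.full(40, 1.0 / 40)                 # uniform target weights

G = sinkhorn(a, b, M, reg=0.1)
# At convergence, the coupling's marginals match the input histograms.
```

Larger `reg` gives smoother (more spread-out) couplings and faster convergence; as `reg` decreases the coupling approaches the unregularized optimal transport plan.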
D. Tuia, R. Flamary, M. Barlaud, "To be or not to be convex? A study on regularization in hyperspectral image classification", International Geoscience and Remote Sensing Symposium (IGARSS), 2015.
Abstract: Hyperspectral image classification has long been dominated by convex models, which provide accurate decision functions exploiting all the features in the input space. However, the need for high geometrical details, which are often satisfied by using spatial filters, and the need for compact models (i.e. relying on models issued from reduced input spaces) have pushed research to study alternatives such as sparsity-inducing regularization, which promotes models using only a subset of the input features. Although successful in reducing the number of active inputs, these models can be biased and sometimes offer sparsity at the cost of reduced accuracy. In this paper, we study the possibility of using non-convex regularization, which limits the bias induced by the regularization. We present and compare four regularizers, and then apply them to hyperspectral classification with different cost functions.
BibTeX:
@inproceedings{tuia2015tobe,
author = {Tuia, D. and Flamary, R. and Barlaud, M.},
title = {To be or not to be convex? A study on regularization in hyperspectral image classification},
booktitle = {International Geoscience and Remote Sensing Symposium (IGARSS)},
year = {2015}
}
D. Tuia, R. Flamary, N. Courty, "Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions", ISPRS Journal of Photogrammetry and Remote Sensing, 2015.
Abstract: In this paper, we tackle the question of discovering an effective set of spatial filters to solve hyperspectral classification problems. Instead of fixing a priori the filters and their parameters using expert knowledge, we let the model find them within random draws in the (possibly infinite) space of possible filters. We define an active set feature learner that includes in the model only features that improve the classifier. To this end, we consider a fast and linear classifier, multiclass logistic classification, and show that with a good representation (the filters discovered), such a simple classifier can reach at least state-of-the-art performance. We apply the proposed active set learner to four hyperspectral image classification problems, including agricultural and urban classification at different resolutions, as well as multimodal data. We also propose a hierarchical setting, which makes it possible to generate more complex banks of features that can better describe the nonlinearities present in the data.
BibTeX:
@article{tuia2015multiclass,
author = {Tuia, D. and Flamary, R. and  Courty, N.},
title = {Multiclass feature learning for hyperspectral image classification: sparse and hierarchical solutions},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
year = {2015}
}
R. Flamary, M. Fauvel, M. Dalla Mura, S. Valero, "Analysis of multi-temporal classification techniques for forecasting image time series", Geoscience and Remote Sensing Letters (GRSL), Vol. 12, N. 5, pp 953-957, 2015.
Abstract: The classification of an annual time series by using data from past years is investigated in this paper. Several classification schemes based on data fusion, sparse learning and semi-supervised learning are proposed to address the problem. Numerical experiments are performed on a MODIS image time series and show that while several approaches have statistically equivalent performances, SVM with l1 regularization leads to a better interpretation of the results due to its inherent sparsity in the temporal domain.
BibTeX:
@article{flamary2014analysis,
author = { Flamary, R. and Fauvel, M. and Dalla Mura, M. and Valero, S.},
title = {Analysis of multi-temporal classification techniques for forecasting image time series},
journal = { Geoscience and Remote Sensing Letters (GRSL)},
volume = {12},
number = {5},
pages = {953-957},
year = {2015}
}

### 2014

R. Flamary, N. Courty, D. Tuia, A. Rakotomamonjy, "Optimal transport with Laplacian regularization: Applications to domain adaptation and shape matching", NIPS Workshop on Optimal Transport and Machine Learning OTML, 2014.
Abstract: We propose a method based on optimal transport for empirical distributions with Laplacian regularization (LOT). Laplacian regularization is a graph-based regularization that can encode neighborhood similarity between samples either on the final position of the transported samples or on their displacement, as in the work of Ferradans et al. In both cases, LOT is expressed as a quadratic programming problem and can be solved with a Frank-Wolfe algorithm with optimal step size. Results on domain adaptation and shape matching problems show the interest of using this regularization in optimal transport.
BibTeX:
@conference{flamary2014optlaplace,
author = { Flamary, R. and Courty, N. and Tuia, D. and Rakotomamonjy, A.},
title = {Optimal transport with Laplacian regularization: Applications to domain adaptation and shape matching},
howpublished = { NIPS Workshop on Optimal Transport and Machine Learning OTML},
year = {2014}
}
R. Flamary, A. Rakotomamonjy, G. Gasso, "Learning Constrained Task Similarities in Graph-Regularized Multi-Task Learning", Regularization, Optimization, Kernels, and Support Vector Machines, 2014.
BibTeX:
@incollection{flamary2014learning,
author = {  Flamary, R. and  Rakotomamonjy, A. and Gasso, G.},
title = {Learning Constrained Task Similarities in Graph-Regularized Multi-Task Learning},
booktitle = { Regularization, Optimization, Kernels, and Support Vector Machines},
editor = {Suykens, J. A. K. and Signoretto, M. and Argyriou, A.},
year = {2014}
}
R. Flamary, C. Aime, "Optimization of starshades: focal plane versus pupil plane", Astronomy and Astrophysics, Vol. 569, N. A28, pp 10, 2014.
Abstract: We search for the best possible transmission for an external occulter coronagraph that is dedicated to the direct observation of terrestrial exoplanets. We show that better observation conditions are obtained when the flux in the focal plane is minimized in the zone in which the exoplanet is observed, instead of the total flux received by the telescope. We describe the transmission of the occulter as a sum of basis functions. For each element of the basis, we numerically computed the Fresnel diffraction at the aperture of the telescope and the complex amplitude at its focus. The basis functions are circular disks that are linearly apodized over a few centimeters (truncated cones). We complemented the numerical calculation of the Fresnel diffraction for these functions by a comparison with pure circular discs (cylinder) for which an analytical expression, based on a decomposition in Lommel series, is available. The technique of deriving the optimal transmission for a given spectral bandwidth is a classical regularized quadratic minimization of intensities, but linear optimizations can be used as well. Minimizing the integrated intensity on the aperture of the telescope or for selected regions of the focal plane leads to slightly different transmissions for the occulter. For the focal plane optimization, the resulting residual intensity is concentrated behind the geometrical image of the occulter, in a blind region for the observation of an exoplanet, and the level of background residual starlight becomes very low outside this image. Finally, we provide a tolerance analysis for the alignment of the occulter to the telescope which also favors the focal plane optimization. This means that telescope offsets of a few decimeters do not strongly reduce the efficiency of the occulter.
BibTeX:
@article{flamary2014starshade,
author = { Flamary, Remi and Aime, Claude},
title = {Optimization of starshades: focal plane versus pupil plane},
journal = { Astronomy and Astrophysics},
volume = {569},
number = {A28},
pages = { 10},
year = {2014}
}
A. Boisbunon, R. Flamary, A. Rakotomamonjy, A. Giros, J. Zerubia, "Large scale sparse optimization for object detection in high resolution images", IEEE Workshop in Machine Learning for Signal Processing (MLSP), 2014.
Abstract: In this work, we address the problem of detecting objects in images by expressing the image as convolutions between activation matrices and dictionary atoms. The activation matrices are estimated through sparse optimization and correspond to the position of the objects. In particular, we propose an efficient algorithm based on an active set strategy that is easily scalable and can be computed in parallel. We apply it to a toy image and a satellite image where the aim is to detect all the boats in a harbor. These results show the benefit of using nonconvex penalties, such as the log-sum penalty, over the convex l1 penalty.
BibTeX:
@inproceedings{boisbunon2014largescale,
author = {Boisbunon, A. and Flamary, R. and Rakotomamonjy, A. and Giros, A. and Zerubia, J.},
title = {Large scale sparse optimization for object detection in high resolution images},
booktitle = {IEEE Workshop in Machine Learning for Signal Processing (MLSP)},
year = {2014}
}
E. Niaf, R. Flamary, A. Rakotomamonjy, O. Rouvière, C. Lartizien, "SVM with feature selection and smooth prediction in images: application to CAD of prostate cancer", IEEE International Conference on Image Processing (ICIP), 2014.
Abstract: We propose a new computer-aided detection scheme for prostate cancer screening on multiparametric magnetic resonance (mp-MR) images. Based on an annotated training database of mp-MR images from thirty patients, we train a novel support vector machine (SVM)-inspired classifier which simultaneously learns an optimal linear discriminant and a subset of predictor variables (or features) that are most relevant to the classification task, while promoting spatial smoothness of the malignancy prediction maps. The approach uses a $\ell_1$-norm in the regularization term of the optimization problem that rewards sparsity. Spatial smoothness is promoted via an additional cost term that encodes the spatial neighborhood of the voxels, to avoid noisy prediction maps. Experimental comparisons of the proposed $\ell_1$-Smooth SVM scheme to the regular $\ell_2$-SVM scheme demonstrate a clear visual and numerical gain on our clinical dataset.
BibTeX:
@inproceedings{niaf2014svmsmooth,
author = {Niaf, E. and Flamary, R. and Rakotomamonjy, A. and Rouvière, O. and Lartizien, C.},
title = {SVM with feature selection and smooth prediction in images: application to CAD of prostate cancer},
booktitle = {IEEE International Conference on Image Processing (ICIP)},
year = {2014}
}
D. Tuia, N. Courty, R. Flamary, "A group-lasso active set strategy for multiclass hyperspectral image classification", Photogrammetric Computer Vision (PCV), 2014.
Abstract: Hyperspectral images have a strong potential for landcover/landuse classification, since the spectra of the pixels can highlight subtle differences between materials and provide information beyond the visible spectrum. Yet, a limitation of most current approaches is the hypothesis of spatial independence between samples: images are spatially correlated and the classification map should exhibit spatial regularity. One way of integrating spatial smoothness is to augment the input spectral space with filtered versions of the bands. However, open questions remain, such as the selection of the bands to be filtered, or the filterbank to be used. In this paper, we consider the entirety of the possible spatial filters by using an incremental feature learning strategy that assesses whether a candidate feature would improve the model if added to the current input space. Our approach is based on a multiclass logistic classifier with group-lasso regularization. The optimization of this classifier yields an optimality condition, that can easily be used to assess the interest of a candidate feature without retraining the model, thus allowing drastic savings in computational time. We apply the proposed method to three challenging hyperspectral classification scenarios, including agricultural and urban data, and study both the ability of the incremental setting to learn features that always improve the model and the nature of the features selected.
BibTeX:
@inproceedings{tuia2014grouplasso,
author = {Tuia, D. and Courty, N. and Flamary, R.},
title = {A group-lasso active set strategy for multiclass hyperspectral image classification},
booktitle = {Photogrammetric Computer Vision (PCV)},
year = {2014}
}
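Note: the optimality condition exploited in this paper can be sketched in a few lines. For the group-lasso penalty, the KKT conditions say an inactive candidate group g can improve the objective only if the norm of its partial gradient exceeds the regularization strength. A toy numpy check with squared loss (the data and threshold below are made up for illustration; in the actual algorithm the residual comes from the optimized model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200, 0.1
X_used = rng.standard_normal((n, 3))   # features already in the model
w = np.array([1.0, -2.0, 0.5])
X_cand = rng.standard_normal((n, 4))   # candidate feature group

# Case 1: the current model fits y exactly, so the residual is zero and
# the candidate group fails the test ||grad_g||_2 > lam.
y = X_used @ w
resid = y - X_used @ w
score1 = np.linalg.norm(X_cand.T @ resid / n)

# Case 2: y also depends on the candidate group; its partial-gradient
# norm now exceeds lam, so an active set strategy would add it, without
# ever retraining the model just to evaluate the candidate.
y2 = X_used @ w + X_cand @ np.ones(4)
resid2 = y2 - X_used @ w
score2 = np.linalg.norm(X_cand.T @ resid2 / n)
```

This cheap test on the residual is what allows the paper's incremental feature learning to screen many random filters per iteration.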
J. Lehaire, R. Flamary, O. Rouvière, C. Lartizien, "Computer-aided diagnostic for prostate cancer detection and characterization combining learned dictionaries and supervised classification", IEEE International Conference on Image Processing (ICIP), 2014.
Abstract: This paper aims at presenting results of a computer-aided diagnostic (CAD) system for voxel-based detection and characterization of prostate cancer in the peripheral zone based on multiparametric magnetic resonance (mp-MR) imaging. We propose an original scheme combining a feature extraction step based on a sparse dictionary learning (DL) method and a supervised classification, in order to discriminate normal (N) and normal but suspect (NS) tissues as well as different classes of cancer tissue whose aggressiveness is characterized by the Gleason score, ranging from 6 (GL6) to 9 (GL9). We compare the classification performance of two supervised methods, the linear support vector machine (SVM) and the multinomial logistic regression (MLR) classifiers, in a binary classification task. Classification performances were evaluated over an mp-MR image database of 35 patients where each voxel was labeled, based on a ground truth, by an expert radiologist. Results show that the proposed method, in addition to being interpretable thanks to the sparse representation of the voxels, compares favorably (AUC>0.8) with recent state-of-the-art performances. Preliminary results on example patient data also indicate that the output cancer probability maps are correlated with the Gleason score.
BibTeX:
@inproceedings{lehaire2014dicolearn,
author = {Lehaire, J. and Flamary, R. and Rouvière, O. and Lartizien, C.},
title = {Computer-aided diagnostic for prostate cancer detection and characterization combining learned dictionaries and supervised classification},
booktitle = {IEEE International Conference on Image Processing (ICIP)},
year = {2014}
}
A. Ferrari, D. Mary, R. Flamary, C. Richard, "Distributed image reconstruction for very large arrays in radio astronomy", IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014.
Abstract: Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedented imaging resolution. At the same time, the massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high for any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.
BibTeX:
@inproceedings{ferrari2014distributed,
author = {Ferrari, A. and Mary, D. and Flamary, R. and Richard, C.},
title = {Distributed image reconstruction for very large arrays in radio astronomy},
booktitle = {IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM)},
year = {2014}
}
N. Courty, R. Flamary, D. Tuia, "Domain adaptation with regularized optimal transport", European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2014.
Abstract: We present a new and original method to solve the domain adaptation problem using optimal transport. By searching for the best transportation plan between the probability distribution functions of a source and a target domain, a non-linear and invertible transformation of the learning samples can be estimated. Any standard machine learning method can then be applied on the transformed set, which makes our method very generic. We propose a new optimal transport algorithm that incorporates label information in the optimization: this is achieved by combining an efficient matrix scaling technique with a majoration of a non-convex regularization term. By using the proposed optimal transport with label regularization, we obtain a significant increase in performance compared to the original transport solution. The proposed algorithm is computationally efficient and effective, as illustrated by its evaluation on a toy example and a challenging real-life vision dataset, against which it achieves competitive results with respect to state-of-the-art methods.
BibTeX:
@inproceedings{courty2014domain,
author = {Courty, N. and Flamary, R. and Tuia, D.},
title = {Domain adaptation with regularized optimal transport},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)},
year = {2014}
}
A. Boisbunon, R. Flamary, A. Rakotomamonjy, "Active set strategy for high-dimensional non-convex sparse optimization problems", International Conference on Acoustic, Speech and Signal Processing (ICASSP), 2014.
Abstract: The use of non-convex sparse regularization has attracted much interest when estimating a very sparse model on high-dimensional data. In this work we express the optimality conditions of the optimization problem for a large class of non-convex regularizers. From those conditions, we derive an efficient active set strategy that avoids computing unnecessary gradients. Numerical experiments on both generated and real-life datasets show a clear gain in computational cost w.r.t. the state of the art when using our method to obtain very sparse solutions.
BibTeX:
@inproceedings{boisbunon2014active,
author = {Boisbunon, A. and Flamary, R. and Rakotomamonjy, A.},
title = {Active set strategy for high-dimensional non-convex sparse optimization problems},
booktitle = {International Conference on Acoustic, Speech and Signal Processing (ICASSP)},
year = {2014}
}
R. Flamary, N. Jrad, R. Phlypo, M. Congedo, A. Rakotomamonjy, "Mixed-Norm Regularization for Brain Decoding", Computational and Mathematical Methods in Medicine, Vol. 2014, N. 1, pp 1-13, 2014.
Abstract: This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed-norms. This framework is extended to the multitask learning situation where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps mitigate the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when a small number of learning examples are available, yielding significant performance improvements, especially for subjects performing poorly.
BibTeX:
@article{flamary2014mixed,
author = {Flamary, R. and Jrad, N. and Phlypo, R. and Congedo, M. and Rakotomamonjy, A.},
title = {Mixed-Norm Regularization for Brain Decoding},
journal = {Computational and Mathematical Methods in Medicine},
volume = {2014},
number = {1},
pages = {1-13},
year = {2014}
}
E. Niaf, R. Flamary, O. Rouvière, C. Lartizien, S. Canu, "Kernel-Based Learning From Both Qualitative and Quantitative Labels: Application to Prostate Cancer Diagnosis Based on Multiparametric MR Imaging", Image Processing, IEEE Transactions on, Vol. 23, N. 3, pp 979-991, 2014.
Abstract: Building an accurate training database is challenging in supervised classification. For instance, in medical imaging, radiologists often delineate malignant and benign tissues without access to the histological ground truth, leading to uncertain data sets. This paper addresses the pattern classification problem arising when available target data include some uncertainty information. Target data considered here are either qualitative (a class label) or quantitative (an estimation of the posterior probability). In this context, usual discriminative methods, such as the support vector machine (SVM), fail either to learn a robust classifier or to predict accurate probability estimates. We generalize the regular SVM by introducing a new formulation of the learning problem to take into account class labels as well as class probability estimates. This original reformulation into a probabilistic SVM (P-SVM) can be efficiently solved by adapting existing flexible SVM solvers. Furthermore, this framework allows deriving a unique learned prediction function for both decision and posterior probability estimation, providing qualitative and quantitative predictions. The method is first tested on synthetic data sets to evaluate its properties as compared with the classical SVM and fuzzy-SVM. It is then evaluated on a clinical data set of multiparametric prostate magnetic resonance images to assess its performances in discriminating benign from malignant tissues. P-SVM is shown to outperform classical SVM as well as the fuzzy-SVM in terms of probability predictions and classification performances, and demonstrates its potential for the design of an efficient computer-aided decision system for prostate cancer diagnosis based on multiparametric magnetic resonance (MR) imaging.
BibTeX:
@article{niaf2014kernel,
author = {Niaf, E. and Flamary, R. and Rouvière, O. and Lartizien, C. and  Canu, S.},
title = {Kernel-Based Learning From Both Qualitative and Quantitative Labels: Application to Prostate Cancer Diagnosis Based on Multiparametric MR Imaging},
journal = {Image Processing, IEEE Transactions on},
volume = {23},
number = {3},
pages = {979-991},
year = {2014}
}
D. Tuia, M. Volpi, M. Dalla Mura, A. Rakotomamonjy, R. Flamary, "Automatic Feature Learning for Spatio-Spectral Image Classification With Sparse SVM", Geoscience and Remote Sensing, IEEE Transactions on, Vol. 52, N. 10, pp 6062-6074, 2014.
Abstract: Including spatial information is a key step for successful remote sensing image classification. In particular, when dealing with high spatial resolution, if local variability is strongly reduced by spatial filtering, the classification performance results are boosted. In this paper, we consider the triple objective of designing a spatial/spectral classifier, which is compact (uses as few features as possible), discriminative (enhances class separation), and robust (works well in small sample situations). We achieve this triple objective by discovering the relevant features in the (possibly infinite) space of spatial filters by optimizing a margin-maximization criterion. Instead of imposing a filter bank with predefined filter types and parameters, we let the model figure out which set of filters is optimal for class separation. To do so, we randomly generate spatial filter banks and use an active-set criterion to rank the candidate features according to their benefits to margin maximization (and, thus, to generalization) if added to the model. Experiments on multispectral very high spatial resolution (VHR) and hyperspectral VHR data show that the proposed algorithm, which is sparse and linear, finds discriminative features and achieves at least the same performances as models using a large filter bank defined in advance by prior knowledge.
BibTeX:
@article{tuia2014automatic,
author = {Tuia, D. and Volpi, M. and Dalla Mura, M. and Rakotomamonjy, A. and Flamary, R.},
title = {Automatic Feature Learning for Spatio-Spectral Image Classification With Sparse SVM},
journal = {Geoscience and Remote Sensing, IEEE Transactions on},
volume = {52},
number = {10},
pages = {6062-6074},
year = {2014}
}
L. Laporte, R. Flamary, S. Canu, S. Déjean, J. Mothe, "Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM", Neural Networks and Learning Systems, IEEE Transactions on, Vol. 25, N. 6, pp 1118-1130, 2014.
Abstract: Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations such as l1 or weighted l1 and non-convex regularization terms such as the log penalty, the Minimax Concave Penalty (MCP) or the lp pseudo-norm with p lower than 1. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted l1 scheme to address the non-convex regularizations. We conduct intensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the use of the non-convex regularizations we propose leads to more sparsity in the resulting models while prediction performance is preserved. The number of features is decreased by up to a factor of six compared to the l1 regularization. In addition, the software is publicly available on the web.
BibTeX:
@article{tnnls2014,
author = { Laporte, L. and Flamary, R. and Canu, S. and Déjean, S. and Mothe, J.},
title = {Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM},
journal = { Neural Networks and Learning Systems, IEEE Transactions on},
volume = {25},
number = {6},
pages = {1118-1130},
year = {2014}
}

### 2013

W. Gao, J. Chen, C. Richard, J. Huang, R. Flamary, "Kernel LMS algorithm with Forward-Backward splitting for dictionary learning", International Conference on Acoustic, Speech and Signal Processing (ICASSP), 2013.
Abstract: Nonlinear adaptive filtering with kernels has become a topic of high interest over the last decade. A characteristic of kernel-based techniques is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Kernel-based adaptive filtering algorithms generally rely on a two-stage process at each iteration: a model order control stage that limits the increase in the number of terms by including only valuable kernels into the so-called dictionary, and a filter parameter update stage. It is surprising to note that most existing strategies for dictionary update can only incorporate new elements into the dictionary. This unfortunately means that they cannot discard obsolete kernel functions, within the context of a time-varying environment in particular. Recently, to remedy this drawback, it has been proposed to associate an l1-norm regularization criterion with the mean-square error criterion. The aim of this paper is to provide theoretical results on the convergence of this approach.
BibTeX:
@inproceedings{gao2013kernel,
author = {Gao, W. and Chen, J. and Richard, C. and Huang, J. and Flamary, R.},
title = {Kernel LMS algorithm with Forward-Backward splitting for dictionary learning},
booktitle = {International Conference on Acoustic, Speech and Signal Processing (ICASSP)},
year = {2013}
}
R. Flamary, A. Rakotomamonjy, "Support Vector Machine with spatial regularization for pixel classification", International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines : theory and applications (ROKS), 2013.
Abstract: We propose in this work to regularize the output of an SVM classifier on pixels in order to promote smoothness in the predicted image. The learning problem can be cast as a semi-supervised SVM with a particular structure encoding pixel neighborhood in the regularization graph. We provide several optimization schemes to solve the problem for linear SVM with l2 or l1 regularization, and show the interest of the approach on an image classification example with very few labeled pixels.
BibTeX:
@inproceedings{ROKS2013,
author = {  Flamary, R. and  Rakotomamonjy, A.},
title = {Support Vector Machine with spatial regularization for pixel classification},
booktitle = { International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines : theory and applications (ROKS)},
year = {2013}
}
D. Tuia, M. Volpi, M. Dalla Mura, A. Rakotomamonjy, R. Flamary, "Create the relevant spatial filterbank in the hyperspectral jungle", IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2013.
Abstract: Inclusion of spatial information is known to be beneficial to the classification of hyperspectral images. However, given the high dimensionality of the data, it is difficult to know beforehand which bands to filter or which filters to apply. In this paper, we propose an active set algorithm based on a $l_1$ support vector machine that explores the (possibly infinite) space of spatial filters and automatically retrieves the filters that maximize class separation. Experiments on hyperspectral imagery confirm the power of the method, which reaches state-of-the-art performance with small feature sets generated automatically and without prior knowledge.
BibTeX:
@inproceedings{IGARSS2013,
author = {  Tuia, D. and  Volpi, M. and  Dalla Mura, M. and  Rakotomamonjy, A. and  Flamary, R.},
title = {Create the relevant spatial filterbank in the hyperspectral jungle},
booktitle = { IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
year = {2013}
}
A. Rakotomamonjy, R. Flamary, F. Yger, "Learning with infinitely many features", Machine Learning, Vol. 91, N. 1, pp 43-66, 2013.
Abstract: We propose a principled framework for learning with infinitely many features, a situation usually induced by continuously parametrized feature extraction methods. Such cases occur for instance when considering Gabor-based features in computer vision problems or when dealing with Fourier features for kernel approximations. We cast the problem as the one of finding a finite subset of features that minimizes a regularized empirical risk. After having analyzed the optimality conditions of such a problem, we propose a simple algorithm which has the flavour of a column-generation technique. We also show that using Fourier-based features, it is possible to perform approximate infinite kernel learning. Our experimental results on several datasets show the benefits of the proposed approach in several situations including texture classification and large-scale kernelized problems (involving about 100 thousand examples).
BibTeX:
@article{ml2012,
author = { Rakotomamonjy, A. and Flamary, R. and Yger, F.},
title = {Learning with infinitely many features},
journal = { Machine Learning},
volume = {91},
number = {1},
pages = {43-66},
year = {2013}
}
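As a concrete illustration of the Fourier route to approximate kernel learning mentioned in the abstract, the sketch below samples random Fourier features whose inner products approximate a Gaussian kernel. The sampling-based construction and all parameter names are illustrative; the paper selects such features by a column-generation scheme rather than sampling them all at once:

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=0.5, seed=0):
    """Feature map z such that z(x) . z(y) ~= exp(-gamma * ||x - y||^2).
    Sampling-based construction; names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    # frequencies drawn from the Fourier transform of the Gaussian kernel
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.array([[0.0, 0.0], [0.1, -0.2]])
Z = random_fourier_features(X)
approx = float(Z[0] @ Z[1])                            # approximate kernel value
exact = float(np.exp(-0.5 * np.sum((X[0] - X[1]) ** 2)))
```

With 2000 sampled features the inner product matches the exact kernel value to within a few percent, which is what makes a linear method on such features an approximate kernel machine.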

### 2012

D. Tuia, R. Flamary, M. Volpi, M. Dalla Mura, A. Rakotomamonjy, "Discovering relevant spatial filterbanks for VHR image classification", International Conference on Pattern Recognition (ICPR), 2012.
Abstract: In very high resolution (VHR) image classification it is common to use spatial filters to enhance the discrimination among land uses related to similar spectral properties but different spatial characteristics. However, the filter types that can be used are numerous (e.g. textural, morphological, Gabor, wavelets, etc.) and the user must pre-select a family of features, as well as their specific parameters. This results in feature spaces that are high-dimensional and redundant, thus requiring long and suboptimal feature selection phases. In this paper, we propose to discover the relevant filters as well as their parameters with a sparsity-promoting regularization and an active set algorithm that iteratively adds the most promising features to the model. This way, we explore the filter/parameter input space efficiently (which is infinitely large for continuous parameters) and construct the optimal filterbank for classification without any other information than the types of filters to be used.
BibTeX:
@inproceedings{ICPR2012,
author = {  Tuia, D. and  Flamary, R. and  Volpi, M. and  Dalla Mura, M. and  Rakotomamonjy, A.},
title = { Discovering relevant spatial filterbanks for VHR image classification},
booktitle = { International Conference on Pattern Recognition (ICPR)},
year = {2012}
}
R. Flamary, A. Rakotomamonjy, "Decoding finger movements from ECoG signals using switching linear models", Frontiers in Neuroscience, Vol. 6, N. 29, 2012.
Abstract: One of the most interesting challenges in ECoG-based Brain-Machine Interfaces is movement prediction. Being able to perform such a prediction paves the way to high-precision command of a machine such as a robotic arm or robotic hands. As a witness of the BCI community's increasing interest in such a problem, the fourth BCI Competition provides a dataset whose aim is to predict individual finger movements from ECoG signals. The difficulty of the problem lies in the fact that there is no simple relation between ECoG signals and finger movements. We propose in this paper to estimate and decode these finger flexions using switching models controlled by a hidden state. Switching models can integrate prior knowledge about the decoding problem and help in predicting fine and precise movements. Our model is thus based on a first block which estimates which finger is moving and another block which, knowing which finger is moving, predicts the movements of all other fingers. Numerical results submitted to the Competition show that the model yields high decoding performance when the hidden state is well estimated. This approach achieved the second place in the BCI competition, with a correlation of 0.42 between real and predicted movements.
BibTeX:
@article{frontiers2012,
author = { Flamary, R. and  Rakotomamonjy, A.},
title = {Decoding finger movements from ECoG signals using switching linear models},
journal = { Frontiers in Neuroscience},
volume = { 6},
number = { 29},
year = {2012}
}
R. Flamary, D. Tuia, B. Labbé, G. Camps-Valls, A. Rakotomamonjy, "Large Margin Filtering", IEEE Transactions on Signal Processing, Vol. 60, N. 2, pp 648-659, 2012.
Abstract: Many signal processing problems are tackled by filtering the signal for subsequent feature classification or regression. Both steps are critical and need to be designed carefully to deal with the particular statistical characteristics of both signal and noise. Optimal design of the filter and the classifier is typically addressed separately, thus leading to suboptimal classification schemes. This paper proposes an efficient methodology to learn an optimal signal filter and a support vector machine (SVM) classifier jointly. In particular, we derive algorithms to solve the optimization problem, prove its theoretical convergence, and discuss different filter regularizers for automated scaling and selection of the feature channels. The latter gives rise to different formulations with the appealing properties of sparseness and noise-robustness. We illustrate the performance of the method in several problems. First, linear and nonlinear toy classification examples, in the presence of both Gaussian and convolutional noise, show the robustness of the proposed methods. The approach is then evaluated on two challenging real-life datasets: BCI time series classification and multispectral image segmentation. In all the examples, large margin filtering shows competitive classification performance while offering the advantage of interpretability of the retrieved filtered channels.
BibTeX:
@article{ieeesp2012,
author = { Flamary, R. and Tuia, D. and Labbé, B. and Camps-Valls, G. and Rakotomamonjy, A.},
title = {Large Margin Filtering},
journal = { IEEE Transactions on Signal Processing},
volume = {60},
number = {2},
pages = {648-659},
year = {2012}
}
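A minimal single-channel sketch of the large-margin-filtering idea, learning the filter taps and the classifier jointly by descending the hinge loss, might look as follows. The initialization, solver, and step sizes are illustrative choices, not the paper's algorithm:

```python
import numpy as np

def lagged(x, filt_len):
    """Lagged sample matrix: row t holds x[t], x[t-1], ..., zero-padded."""
    n = len(x)
    L = np.zeros((n, filt_len))
    for k in range(filt_len):
        L[k:, k] = x[: n - k]
    return L

def large_margin_filtering(x, y, filt_len=5, lr=0.05, epochs=200, lam=1e-3):
    """Toy single-channel sketch: jointly learn a temporal filter f and
    a linear classifier (w, b) by subgradient descent on the hinge loss.
    y holds labels in {-1, +1}; all hyperparameters are illustrative."""
    L = lagged(x, filt_len)
    f = np.ones(filt_len) / filt_len        # start from a moving average
    w, b = 1.0, 0.0
    n = len(x)
    for _ in range(epochs):
        s = L @ f                           # filtered signal
        viol = y * (w * s + b) < 1          # hinge-active samples
        gw = lam * w - np.mean(viol * y * s)
        gb = -np.mean(viol * y)
        gf = lam * f - w * (L.T @ (viol * y)) / n
        w -= lr * gw
        b -= lr * gb
        f -= lr * gf
    return f, w, b

# piecewise-constant labels buried in heavy noise: filtering is essential
rng = np.random.default_rng(1)
labels = np.repeat(rng.choice([-1.0, 1.0], size=20), 10)
x = labels + rng.normal(scale=1.0, size=200)
f, w, b = large_margin_filtering(x, labels)
acc = float(np.mean(np.sign(w * (lagged(x, 5) @ f) + b) == labels))
```

Because the filter is learned with the classifier's margin as the objective, its taps can be inspected afterwards, which is the interpretability benefit the abstract refers to.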
E. Niaf, R. Flamary, S. Canu, O. Rouvière, C. Lartizien, "Handling learning samples uncertainties in SVM : application to MRI-based prostate cancer Computer-Aided Diagnosis", IEEE International Symposium on Biomedical Imaging , 2012.
Abstract: Building an accurate training database is challenging in supervised classification. Radiologists often delineate malignant and benign tissues without access to the ground truth, thus leading to uncertain datasets. We propose to deal with this uncertainty by introducing probabilistic labels in the learning stage. We introduce a probabilistic support vector machine (P-SVM) inspired by the regular C-SVM formulation, which accounts for class labels through a hinge loss and for probability estimates through an epsilon-insensitive cost function, together with a minimum-norm (maximum-margin) objective. The solution is used for both decision and posterior probability estimation.
BibTeX:
@inproceedings{isbi2012,
author = { Niaf, E. and Flamary, R. and Canu, S. and Rouvière, O. and Lartizien, C.},
title = {Handling learning samples uncertainties in SVM : application to MRI-based prostate cancer Computer-Aided Diagnosis},
booktitle = { IEEE International Symposium on Biomedical Imaging },
year = {2012}
}

### 2011

A. Rakotomamonjy, R. Flamary, G. Gasso, S. Canu, "lp-lq penalty for sparse linear and sparse multiple kernel multi-task learning", IEEE Transactions on Neural Networks, Vol. 22, N. 8, pp 1307-1320, 2011.
Abstract: Recently, there has been a lot of interest in the multi-task learning (MTL) problem under the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on $\ell_p-\ell_q$ (with $0 \leq p \leq 1$ and $1 \leq q \leq 2$) mixed-norms as sparsity-inducing penalties. Our motivation for addressing such a larger class of penalties is to adapt the penalty to the problem at hand, leading to better performance and better sparsity patterns. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the $\ell_1-\ell_q$ penalty which helps us in proposing an alternate optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the $\ell_1-\ell_q$ penalized problem. For the linear case, we extend existing works considering accelerated proximal gradient to this penalty. Our contribution in this context is to provide an efficient scheme for computing the $\ell_1-\ell_q$ proximal operator. Then, for the more general case when $0 < p < 1$, we solve the resulting non-convex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted $\ell_1-\ell_q$ sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with BCI single-trial EEG classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
BibTeX:
@article{tnn2011,
author = { Rakotomamonjy, A. and Flamary, R. and Gasso, G. and Canu, S.},
title = {lp-lq penalty for sparse linear and sparse multiple kernel multi-task learning},
journal = { IEEE Transactions on Neural Networks},
volume = {22},
number = {8},
pages = {1307-1320},
year = {2011}
}
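For $q = 2$, the $\ell_1-\ell_q$ proximal operator discussed in the abstract has a simple closed form: block soft-thresholding of each feature's across-task weight vector. A sketch of that special case (the paper's scheme for general $q$ is not reproduced here):

```python
import numpy as np

def prox_l1_l2(W, t):
    """Proximal operator of t * sum_j ||W[j, :]||_2 (the l1-l2 mixed
    norm): each row of W -- one feature's weights across tasks -- is
    block-soft-thresholded.  This closed form holds only for q = 2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    # shrink each row toward zero by t; rows with norm <= t vanish entirely
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0],     # row norm 5   -> shrunk by factor 0.8
              [0.3, 0.4]])    # row norm 0.5 -> zeroed since t >= norm
P = prox_l1_l2(W, 1.0)        # P[0] = [2.4, 3.2], P[1] = [0, 0]
```

Rows zeroed by the operator correspond to features dropped jointly across all tasks, which is exactly the shared sparsity profile the paper targets.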
R. Flamary, "Apprentissage statistique pour le signal: applications aux interfaces cerveau-machine", Laboratoire LITIS, Université de Rouen, 2011.
Abstract: Brain Computer Interfaces (BCI) require the use of statistical learning methods for signal recognition. In this thesis we propose a general approach using prior knowledge on the problem at hand through regularization. To this end, we learn jointly the classifier and the feature extraction step in a unique optimization problem. We focus on the problem of sensor selection, and propose several regularization terms adapted to the problem. Our first contribution is a filter learning method called large margin filtering. It consists in learning a filter that maximizes the margin between samples of each class so as to adapt to the properties of the features. In addition, this approach is easy to interpret and can lead to the selection of the most relevant sensors. Numerical experiments on a real-life BCI problem and a 2D image classification task show the good behaviour of our method both in terms of performance and interpretability. The second contribution is a general sparse multitask learning approach. Several classifiers are learned jointly, and discriminant kernels for all the tasks are automatically selected. We propose efficient algorithms, and numerical experiments show the interest of our approach. Finally, the third contribution is a direct application of sparse multitask learning to a BCI event-related potential classification problem. We propose an adapted regularization term that promotes both sensor selection and similarity between the classifiers. Numerical experiments show that the calibration time of a BCI can be drastically reduced thanks to the proposed multitask approach.
BibTeX:
@phdthesis{thesis2011,
author = { Flamary, R.},
title = {Apprentissage statistique pour le signal: applications aux interfaces cerveau-machine},
school = { Laboratoire LITIS, Université de Rouen},
year = {2011}
}
N. Jrad, M. Congedo, R. Phlypo, S. Rousseau, R. Flamary, F. Yger, A. Rakotomamonjy, "sw-SVM: sensor weighting support vector machines for EEG-based brain–computer interfaces", Journal of Neural Engineering, Vol. 8, N. 5, pp 056004, 2011.
Abstract: In many machine learning applications, like brain–computer interfaces (BCI), high-dimensional sensor array data are available. Sensor measurements are often highly correlated and the signal-to-noise ratio is not homogeneously spread across sensors. Thus, collected data are highly variable and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach integrating sensor weighting in the classification framework. Sensor weights are considered as hyper-parameters to be learned by a support vector machine (SVM). The resulting sensor weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, the generalization error. Experimental studies on two data sets are presented, a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM proves to perform equivalently with respect to the ensemble SVM strategy that won the competition. For the ErrP data set, for which a small number of trials are available, the sw-SVM shows superior performance compared to three state-of-the-art approaches. Results suggest that the sw-SVM promises to be useful in event-related potential classification, even with a small number of training trials.
BibTeX:
@article{jrad2011swsvm,
author = {N. Jrad and M. Congedo and R. Phlypo and S. Rousseau and R. Flamary and F. Yger and A. Rakotomamonjy},
title = {sw-SVM: sensor weighting support vector machines for EEG-based brain–computer interfaces},
journal = {Journal of Neural Engineering},
volume = {8},
number = {5},
pages = {056004},
year = {2011}
}
R. Flamary, F. Yger, A. Rakotomamonjy, "Selecting from an infinite set of features in SVM", European Symposium on Artificial Neural Networks, 2011.
Abstract: Dealing with the continuous parameters of a feature extraction method has always been a difficult task, usually solved by cross-validation. In this paper, we propose an active set algorithm for selecting these parameters automatically in an SVM classification context. Our experiments on texture recognition and BCI signal classification show that optimizing the feature parameters in a continuous space while learning the decision function yields better performance than using fixed parameters obtained from a grid sampling.
BibTeX:
@inproceedings{ESANN2011,
author = { Flamary, R. and Yger, F. and Rakotomamonjy, A.},
title = { Selecting from an infinite set of features in SVM},
booktitle = { European Symposium on Artificial Neural Networks},
year = {2011}
}
R. Flamary, X. Anguera, N. Oliver, "Spoken WordCloud: Clustering Recurrent Patterns in Speech", International Workshop on Content-Based Multimedia Indexing, 2011.
Abstract: The automatic summarization of speech recordings is typically carried out as a two step process: the speech is first decoded using an automatic speech recognition system and the resulting text transcripts are processed to create the summary. However, this approach might not be suitable with adverse acoustic conditions or languages with limited training resources. In order to address these limitations, we propose in this paper an automatic speech summarization method that is based on the automatic discovery of patterns in the speech: recurrent acoustic patterns are first extracted from the audio and then are clustered and ranked according to the number of repetitions in the recording. This approach allows us to build what we call a Spoken WordCloud because of its similarity with text-based word-clouds. We present an algorithm that achieves a cluster purity of up to 90% and an inverse purity of 71% in preliminary experiments using a small dataset of connected spoken words.
BibTeX:
@inproceedings{CBMI2011,
author = { Flamary, R. and Anguera, X. and Oliver, N.},
title = { Spoken WordCloud: Clustering Recurrent Patterns in Speech},
booktitle = { International Workshop on Content-Based Multimedia Indexing},
year = {2011}
}
E. Niaf, R. Flamary, C. Lartizien, S. Canu, "Handling uncertainties in SVM classification", IEEE Workshop on Statistical Signal Processing , 2011.
Abstract: This paper addresses the pattern classification problem arising when available target data include some uncertainty information. Target data considered here are either qualitative (a class label) or quantitative (an estimate of the posterior probability). Our main contribution is an SVM-inspired formulation of this problem, taking into account class labels through a hinge loss and probability estimates through an epsilon-insensitive cost function, together with a minimum-norm (maximum-margin) objective. This formulation admits a dual form leading to a quadratic problem, and allows the use of a representer theorem and associated kernels. The solution provided can be used for both decision and posterior probability estimation. Based on empirical evidence, our method outperforms regular SVM in terms of probability predictions and classification performance.
BibTeX:
@inproceedings{ssp2011,
author = { Niaf, E. and Flamary, R. and Lartizien, C. and Canu, S.},
title = {Handling uncertainties in SVM classification},
booktitle = { IEEE Workshop on Statistical Signal Processing },
year = {2011}
}
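The combination of a hinge loss on hard labels with an epsilon-insensitive loss on probabilistic labels can be illustrated with a simplified linear model trained by subgradient descent. The mapping of probabilities to soft targets and the solver below are simplifications for illustration, not the paper's kernelized dual formulation:

```python
import numpy as np

def psvm_sketch(X, y, p, C=1.0, Ceps=1.0, eps=0.1, lr=0.01, epochs=500):
    """Linear score f(x) = w.x + b trained with a hinge loss on hard
    labels y in {-1, +1} plus an epsilon-insensitive loss pulling f(x)
    toward soft targets 2p - 1 built from probabilistic labels p in
    [0, 1].  Target mapping and plain subgradient solver are
    simplifications of the paper's formulation."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 2.0 * p - 1.0                          # soft targets in [-1, 1]
    for _ in range(epochs):
        f = X @ w + b
        hinge = (y * f < 1).astype(float)      # hinge-active samples
        r = f - t
        tube = (np.abs(r) > eps).astype(float)  # outside the eps-tube
        gw = (w - C * X.T @ (hinge * y) / n
              + Ceps * X.T @ (tube * np.sign(r)) / n)
        gb = -C * np.mean(hinge * y) + Ceps * np.mean(tube * np.sign(r))
        w -= lr * gw
        b -= lr * gb
    return w, b

# two well-separated Gaussian classes with fully confident labels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])
p = (y + 1.0) / 2.0
w, b = psvm_sketch(X, y, p)
acc = float(np.mean(np.sign(X @ w + b) == y))
```

With uncertain annotations, p would take intermediate values and the epsilon-insensitive term would keep the score near the annotator's confidence instead of forcing a full margin.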

### 2010

R. Flamary, B. Labbé, A. Rakotomamonjy, "Large margin filtering for signal sequence labeling", International Conference on Acoustic, Speech and Signal Processing 2010, 2010.
Abstract: Signal sequence labeling consists in predicting a sequence of labels given an observed sequence of samples. A naive way is to filter the signal in order to reduce the noise and to apply a classification algorithm on the filtered samples. We propose in this paper to jointly learn the filter with the classifier, leading to a large margin filtering for classification. This method allows learning the optimal cutoff frequency and phase of the filter, which may be different from zero. Two methods are proposed and tested on a toy dataset and on a real-life BCI dataset from BCI Competition III.
BibTeX:
@inproceedings{flamaryicassp210,
author = { Flamary, R. and Labbé, B. and Rakotomamonjy, A.},
title = {Large margin filtering for signal sequence labeling},
booktitle = { International Conference on Acoustic, Speech and Signal Processing  2010},
year = {2010}
}
R. Flamary, B. Labbé, A. Rakotomamonjy, "Filtrage vaste marge pour l'étiquetage séquentiel de signaux", Conference en Apprentissage CAp, 2010.
Abstract: This paper deals with signal sequence labeling, that is, classification of temporal samples. In this context, we propose a method for learning a large-margin filtering that best separates the classes. We thus jointly learn an SVM on the samples and a temporal filtering of these samples. This method allows online labeling of temporal samples. An optimal offline sequence decoding using the Viterbi algorithm is also proposed. We introduce different regularization terms that allow weighting or selecting the channels automatically with respect to the large-margin criterion. Finally, our approach is tested on a toy example of nonlinear signals as well as on real Brain-Computer Interface data. These experiments show the interest of supervised learning of a temporal filtering for sequence labeling.
BibTeX:
@inproceedings{flamcap2010,
author = { Flamary, R. and Labbé, B. and Rakotomamonjy, A.},
title = {Filtrage vaste marge pour l'étiquetage séquentiel de signaux},
booktitle = { Conference en Apprentissage CAp},
year = {2010}
}
D. Tuia, G. Camps-Valls, R. Flamary, A. Rakotomamonjy, "Learning spatial filters for multispectral image segmentation", IEEE Workshop in Machine Learning for Signal Processing (MLSP), 2010.
Abstract: We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
BibTeX:
@inproceedings{mlsp10,
author = { Tuia, D. and Camps-Valls, G. and Flamary, R. and Rakotomamonjy, A.},
title = {Learning spatial filters for multispectral image segmentation},
booktitle = { IEEE Workshop in Machine Learning for Signal Processing (MLSP)},
year = {2010}
}

### 2009

R. Flamary, B. Labbé, A. Rakotomamonjy, "Large margin filtering for signal segmentation", NIPS Workshop on Temporal Segmentation, 2009.
BibTeX:
@conference{nipsworkshop2009,
author = { Flamary, R. and Labbé, B. and Rakotomamonjy, A.},
title = {Large margin filtering for signal segmentation},
booktitle = { NIPS Workshop on Temporal Segmentation},
howpublished = { NIPS Workshop in Temporal Segmentation},
year = {2009}
}
R. Flamary, A. Rakotomamonjy, G. Gasso, S. Canu, "Selection de variables pour l'apprentissage simultanée de tâches", Conférence en Apprentissage (CAp'09), 2009.
Abstract: This article deals with variable selection for the simultaneous learning of SVM classification tasks. We formulate this problem as multi-task learning with a mixed-norm regularization term of type $\ell_p-\ell_2$ with $p \leq 1$. The latter yields classification models that use a common subset of the variables across tasks. We first propose an algorithm for solving the learning problem when the mixed norm is convex ($p = 1$). Then, using DC programming, we handle the non-convex case ($p < 1$). We show that the latter can be solved by an iterative algorithm in which, at each iteration, a problem based on the $\ell_1-\ell_2$ mixed norm is solved. Our experiments show the interest of the method on several simultaneous classification problems.
BibTeX:
@inproceedings{cap09,
author = { Flamary, R. and Rakotomamonjy, A. and Gasso, G. and Canu, S.},
title = {Selection de variables pour l'apprentissage simultanée de tâches},
booktitle = { Conférence en Apprentissage (CAp'09)},
year = {2009}
}
R. Flamary, A. Rakotomamonjy, G. Gasso, S. Canu, "SVM Multi-Task Learning and Non convex Sparsity Measure", The Learning Workshop (Snowbird), 2009.
BibTeX:
@conference{snowbird09,
author = { R. Flamary and A. Rakotomamonjy and G. Gasso and  S. Canu},
title = {SVM Multi-Task Learning and Non convex Sparsity Measure},
booktitle = { The Learning Workshop},
howpublished = { The Learning Workshop (Snowbird)},
year = {2009}
}
R. Flamary, J. Rose, A. Rakotomamonjy, S. Canu, "Variational Sequence Labeling", IEEE Workshop in Machine Learning for Signal Processing (MLSP), 2009.
Abstract: Sequence labeling is concerned with processing an input data sequence and producing an output sequence of discrete labels which characterize it. Common applications include speech recognition, language processing (tagging, chunking) and bioinformatics. Many solutions have been proposed to partially cope with this problem. These include probabilistic models (HMMs, CRFs) and machine learning algorithms (SVMs, neural nets). In practice, the best results have been obtained by combining several of these methods. However, fusing different signal segmentation methods is not straightforward, particularly when integrating prior information. In this paper the sequence labeling problem is viewed as a multi-objective optimization task. Each objective targets a different aspect of sequence labeling such as good classification, temporal stability and change detection. The resulting optimization problem turns out to be non-convex and plagued with numerous local minima. A region-growing algorithm is proposed as a method for finding a solution to this multi-objective optimization task. The proposed algorithm is evaluated on both synthetic and real data (BCI dataset). Results are encouraging and better than those previously reported on these datasets.
BibTeX:
@inproceedings{mlsp09,
author = { R. Flamary and J.L. Rose and A. Rakotomamonjy and S. Canu},
title = {Variational Sequence Labeling},
booktitle = { IEEE Workshop in Machine Learning for Signal Processing (MLSP)},
year = {2009}
}

### 2008

R. Flamary, "Filtrage de surfaces obtenues à partir de structures M-Rep (M-Rep obtained surface filtering)", Laboratoire CREATIS-LRMN, INSA de Lyon, 2008.
BibTeX:
@mastersthesis{mrep08,
author = { Flamary, R.},
title = {Filtrage de surfaces obtenues à partir de structures M-Rep (M-Rep  obtained surface filtering)},
school = { Laboratoire CREATIS-LRMN, INSA de Lyon},
year = {2008}
}