## Home

I am an associate professor at the University of Nice Sophia Antipolis, in the Department of Electronics and in the Lagrange Laboratory. I was previously a PhD student at the LITIS Laboratory at Rouen University, where my PhD advisor was Alain Rakotomamonjy.

On this website, you can find a list of my publications and download the corresponding software/code.

### Research Interests

- Machine Learning
  - Kernel methods, Support Vector Machines
  - Sparsity, variable selection, mixed norms
  - Data representation, kernel learning
- Statistical signal processing
  - Classification and segmentation of signals and images
  - Filter learning
- Applications
  - Brain-Computer Interfaces, sensor selection
  - Hyperspectral imaging

### Recent work

Laporte, L., Flamary, R., Canu, S., Déjean, S., Mothe, J., "Non-convex Regularizations for Feature Selection in Ranking With Sparse SVM", IEEE Transactions on Neural Networks and Learning Systems, 2013.

Abstract: Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating the feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations, such as $\ell_1$ or weighted $\ell_1$, and non-convex regularization terms, such as the log penalty, the Minimax Concave Penalty (MCP), or the $\ell_p$ pseudo-norm with $p<1$. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted $\ell_1$ scheme to address the non-convex regularizations. We conduct intensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the non-convex regularizations we propose lead to more sparsity in the resulting models while prediction performance is preserved. The number of features is decreased by up to a factor of six compared to the $\ell_1$ regularization. In addition, the software is publicly available on the web.

BibTeX:

```bibtex
@article{tnnls2014,
  author  = {Laporte, L. and Flamary, R. and Canu, S. and Déjean, S. and Mothe, J.},
  title   = {Non-convex Regularizations for Feature Selection in Ranking With Sparse SVM},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year    = {2013}
}
```

Flamary, R., Rakotomamonjy, A., "Support Vector Machine with spatial regularization for pixel classification", International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines: theory and applications (ROKS), 2013.

Abstract: We propose in this work to regularize the output of an SVM classifier on pixels in order to promote smoothness in the predicted image. The learning problem can be cast as a semi-supervised SVM with a particular structure encoding pixel neighborhood in the regularization graph. We provide several optimization schemes to solve the problem for linear SVMs with $\ell_2$ or $\ell_1$ regularization, and show the interest of the approach on an image classification example with very few labeled pixels.

BibTeX:

```bibtex
@inproceedings{ROKS2013,
  author    = {Flamary, R. and Rakotomamonjy, A.},
  title     = {Support Vector Machine with spatial regularization for pixel classification},
  booktitle = {International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines: theory and applications (ROKS)},
  year      = {2013}
}
```

Rakotomamonjy, A., Flamary, R., Yger, F., "Learning with infinitely many features", Machine Learning, 2012.

Abstract: We propose a principled framework for learning with infinitely many features, situations that are usually induced by continuously parametrized feature extraction methods. Such cases occur for instance when considering Gabor-based features in computer vision problems or when dealing with Fourier features for kernel approximations. We cast the problem as that of finding a finite subset of features that minimizes a regularized empirical risk. After having analyzed the optimality conditions of such a problem, we propose a simple algorithm which has the flavour of a column-generation technique. We also show that using Fourier-based features, it is possible to perform approximate infinite kernel learning. Our experimental results on several datasets show the benefits of the proposed approach in several situations, including texture classification and large-scale kernelized problems (involving about 100,000 examples).

BibTeX:

```bibtex
@article{ml2012,
  author  = {Rakotomamonjy, A. and Flamary, R. and Yger, F.},
  title   = {Learning with infinitely many features},
  journal = {Machine Learning},
  year    = {2012}
}
```

Flamary, R., Jrad, N., Phlypo, R., Congedo, M., Rakotomamonjy, A., "Mixed-norm Regularization for Brain Decoding", Laboratoire LITIS, Université de Rouen, 2012.

Abstract: This work investigates the use of mixed-norm regularization for sensor selection in Event-Related Potential (ERP) based Brain-Computer Interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed norms. This framework is extended to the multi-task learning situation, where several similar classification tasks related to different subjects are learned simultaneously. In this case, multi-task learning helps in mitigating the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multi-task approaches are evaluated when a small number of learning examples is available, yielding significant performance improvements, especially for subjects performing poorly.

BibTeX:

```bibtex
@techreport{BCISEL2012,
  author = {Flamary, R. and Jrad, N. and Phlypo, R. and Congedo, M. and Rakotomamonjy, A.},
  title  = {Mixed-norm Regularization for Brain Decoding},
  school = {Laboratoire LITIS, Université de Rouen},
  year   = {2012}
}
```
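To give an idea of the reweighted $\ell_1$ scheme used above for non-convex regularizations, here is a minimal Python sketch. It applies the scheme to a plain least-squares data term with a log penalty, not to the ranking SVM of the paper; the function names, the `eps` smoothing parameter, and the iteration counts are illustrative choices, not the published implementation:

```python
import numpy as np

def weighted_soft_threshold(v, t):
    """Proximal operator of a weighted l1 norm: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1_lasso(X, y, lam=0.1, eps=0.1, n_outer=5, n_inner=300):
    """Reweighted l1 scheme for the log penalty sum_j log(1 + |w_j|/eps),
    illustrated on a least-squares data term (not the ranking SVM of the paper).

    Each outer iteration majorizes the log penalty by a weighted l1 norm with
    weights r_j = 1 / (|w_j| + eps) taken at the previous iterate, and solves
    the resulting weighted lasso by proximal gradient (ISTA) steps.
    """
    n, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the LS gradient
    for _ in range(n_outer):
        r = 1.0 / (np.abs(w) + eps)     # reweighting from the current iterate
        for _ in range(n_inner):
            grad = X.T @ (X @ w - y) / n
            w = weighted_soft_threshold(w - grad / L, lam * r / L)
    return w
```

Coefficients that stay small keep a large weight and are driven to zero, while large coefficients see their penalty shrink across outer iterations, which is what reduces the bias of the plain $\ell_1$ penalty.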

### News

#### AMOR project is online

2013-12-04

The AMOR project is a young researcher project financed by the GdR ISIS and the GRETSI association.

The web page of the project is now available here.

#### Talk about learning with infinitely many features

2013-01-02

I was invited to present our work on learning with infinitely many features at a GdR ISIS meeting.

The slides of the presentation (in English) are now available here.

#### New SVM Toolbox

2012-09-10

The code for my linear SVM toolbox is now available in the software section of the website. It can train linear SVMs with a wide class of regularization terms, such as the $\ell_1$ norm or $\ell_1$-$\ell_p$ mixed norms.

The toolbox is written in Matlab, and the solver is a forward-backward splitting algorithm of the FISTA type.
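The toolbox itself is in Matlab, but the forward-backward idea is easy to illustrate. Below is a rough Python sketch of a FISTA-style solver for an $\ell_1$-regularized linear SVM; the squared hinge loss, the step size, and all function and parameter names are my own illustrative choices, not the toolbox's API:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_l1_svm(X, y, lam=0.1, n_iter=200):
    """l1-regularized linear SVM with squared hinge loss, solved by FISTA:

        min_w  (1/n) * sum_i max(0, 1 - y_i <x_i, w>)^2 + lam * ||w||_1

    Each iteration takes a gradient step on the smooth loss at the
    extrapolated point, then applies the l1 proximal operator.
    """
    n, d = X.shape
    w = np.zeros(d)
    z = w.copy()          # extrapolated point
    t = 1.0               # momentum parameter
    # Lipschitz constant of the gradient of the squared hinge term
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_iter):
        margin = 1.0 - y * (X @ z)
        active = np.maximum(margin, 0.0)          # points violating the margin
        grad = -2.0 * (X.T @ (y * active)) / n    # gradient of the smooth loss
        w_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)
        w, t = w_new, t_new
    return w
```

The squared hinge is used here because forward-backward splitting needs a differentiable data term; the $\ell_1$ part is handled entirely by the soft-thresholding step, which is what produces exact zeros in the weight vector.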