Rémi Flamary


Linear SVM with general regularization


This package implements a linear SVM solver with a wide class of regularizations on the SVM weight vector (l1, l2, mixed norm l1-lq, adaptive lasso). We provide solvers for the classical single-task SVM problem and for multi-task learning with joint feature selection or a similarity-promoting term.

Note that this toolbox has been designed to be efficient on dense data, whereas most existing linear SVM solvers are designed for sparse datasets.

This is the code that has been used for sensor selection in the paper:

R. Flamary, N. Jrad, R. Phlypo, M. Congedo, A. Rakotomamonjy, "Mixed-Norm Regularization for Brain Decoding", Computational and Mathematical Methods in Medicine, Vol. 2014, N. 1, pp 1-13, 2014.

Abstract: This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCI). The classification problem is cast as a discriminative optimization framework where sensor selection is induced through the use of mixed-norms. This framework is extended to the multitask learning situation where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps in mitigating the data scarcity issue, yielding more robust classifiers. For this purpose, we have introduced a regularizer that induces both sensor selection and classifier similarities. The different regularization approaches are compared on three ERP datasets, showing the interest of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when a small number of learning examples are available, yielding significant performance improvements, especially for subjects performing poorly.
@article{flamary2014mixednorm,
author = {Flamary, R. and Jrad, N. and Phlypo, R. and Congedo, M. and Rakotomamonjy, A.},
title = {Mixed-Norm Regularization for Brain Decoding},
journal = {Computational and Mathematical Methods in Medicine},
volume = {2014},
number = {1},
pages = {1-13},
year = {2014}
}


We provide a general solver for the squared hinge loss SVM of the form:

    min_w  sum_i max(0, 1 - y_i w'x_i)^2 + lambda * Omega(w)

where the regularization term Omega(w) can be:

  • l1 norm: Omega(w) = ||w||_1 = sum_j |w_j|
  • l2 norm (squared or not): Omega(w) = ||w||_2^2 or ||w||_2
  • l1-l2 mixed norm: Omega(w) = sum_g ||w_g||_2, where g denotes groups of features
  • l1-lp mixed norm (with user-chosen p > 1): Omega(w) = sum_g ||w_g||_p
  • Adaptive l1-l2: Omega(w) = sum_g beta_g ||w_g||_2, with positive weights beta_g
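As an illustration only (this is not the toolbox's Matlab API, and the function names are ours), the proximal operators behind the l1 and l1-l2 regularizers can be sketched in Python/NumPy:

```python
import numpy as np

def prox_l1(w, t):
    """Soft-thresholding: proximal operator of t * ||w||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_l1l2(w, t, groups):
    """Group soft-thresholding: proximal operator of t * sum_g ||w_g||_2.
    `groups` is a list of index arrays, one per feature group."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        # Shrink the whole group toward zero; kill it if its norm is below t.
        scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
        out[g] = scale * w[g]
    return out
```

The group operator is what induces sensor selection: when a group (e.g. all features of one sensor) has a small norm, its entire block of weights is set to zero at once.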

We also provide a multi-task solver where T tasks can be learned simultaneously with joint sparsity constraints (mixed-norm regularization) and/or an inter-task similarity-promoting regularization.
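To illustrate the joint sparsity constraint across tasks (again a sketch in Python/NumPy, not the toolbox code): stacking the T task weight vectors as columns of a d x T matrix, the mixed-norm prox is a row-wise group soft-thresholding, so a feature is kept or discarded for all tasks jointly.

```python
import numpy as np

def prox_joint_sparsity(W, t):
    """Row-wise group soft-thresholding on a d x T weight matrix W
    (one column per task): prox of t * sum_j ||W[j, :]||_2.
    A zeroed row means the feature is discarded for every task."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.where(norms > 0,
                     np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)),
                     0.0)
    return scale * W
```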

This toolbox is written in Matlab, and the optimization problem is solved with an accelerated forward-backward splitting algorithm (FISTA).
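The forward-backward iteration alternates a gradient step on the smooth squared hinge loss with a proximal step on the regularizer, plus FISTA's momentum extrapolation. A minimal sketch in Python/NumPy for the l1-regularized case (illustrative only; the toolbox itself is Matlab):

```python
import numpy as np

def fista_sqhinge_l1(X, y, lam, n_iter=300):
    """FISTA for min_w sum_i max(0, 1 - y_i x_i.w)^2 + lam * ||w||_1."""
    n, d = X.shape
    # Lipschitz constant of the squared hinge gradient: 2 * sigma_max(X)^2
    L = 2.0 * np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    z = w.copy()
    tk = 1.0
    for _ in range(n_iter):
        m = np.maximum(0.0, 1.0 - y * (X @ z))        # active margins
        grad = -2.0 * X.T @ (y * m)                   # gradient of the loss
        w_new = z - grad / L                          # forward (gradient) step
        w_new = np.sign(w_new) * np.maximum(np.abs(w_new) - lam / L, 0.0)  # backward (prox) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2)) / 2.0
        z = w_new + ((tk - 1.0) / t_new) * (w_new - w)  # momentum extrapolation
        w, tk = w_new, t_new
    return w
```

Swapping the soft-thresholding line for a group prox gives the mixed-norm variants listed above.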


Current version : 1.0

Download : G-SVM.zip


Quick version:

  • Add the toolbox folder and all its subfolders to the Matlab path.

Entry points:

  • demo_gsvm.m : demo file showing the use of the single-task SVM solver with mixed-norm regularization (and the adaptive approach).
  • demo_mtl_gsvm.m : demo file showing the use of the multitask SVM solver with mixed-norm regularization and similarity-promoting regularization.