Rémi Flamary

Professional website

Linear SVM with general regularization


This package implements a linear SVM solver with a wide class of regularization terms on the SVM weight vector (l1, l2, mixed norm l1-lq, adaptive lasso). We provide solvers for the classical single-task SVM problem and for multi-task learning with joint feature selection or a similarity-promoting term.

Note that this toolbox has been designed to be efficient on dense data, whereas most existing linear SVM solvers are designed for sparse datasets.

This is the code that has been used for sensor selection in the paper Mixed-norm Regularization for Brain Decoding.


We provide a general solver for the squared hinge loss SVM of the form:

$$\min_{\mathbf{w}} \; \sum_i \max(0,\, 1 - y_i \mathbf{w}^\top \mathbf{x}_i)^2 + \lambda\,\Omega(\mathbf{w})$$

where $\Omega(\mathbf{w})$ can be:

  • l1 norm: $\Omega(\mathbf{w}) = \sum_i |w_i|$
  • l2 norm (squared or not): $\Omega(\mathbf{w}) = \sum_i w_i^2$ or $\|\mathbf{w}\|_2$
  • l1-l2 mixed norm: $\Omega(\mathbf{w}) = \sum_g \|\mathbf{w}_g\|_2$, where $g$ denotes groups of features
  • l1-lp mixed norm ($1 \le p \le 2$ and $p = \infty$): $\Omega(\mathbf{w}) = \sum_g \|\mathbf{w}_g\|_p$
  • Adaptive l1-l2: $\Omega(\mathbf{w}) = \sum_g \beta_g \|\mathbf{w}_g\|_2$
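A forward-backward solver only needs each regularizer's proximal operator. As an illustration only (the toolbox itself is in Matlab, and this is not its code), the l1 and l1-l2 proximal operators can be sketched in Python; the `groups` layout is an assumption for the example:

```python
import numpy as np

def prox_l1(w, t):
    # Soft-thresholding: proximal operator of t * sum_i |w_i|.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_l1l2(w, groups, t):
    # Group soft-thresholding: proximal operator of t * sum_g ||w_g||_2.
    # `groups` is a list of index arrays, one per feature group
    # (a hypothetical layout chosen for this sketch).
    w = w.astype(float).copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        scale = max(1.0 - t / norm, 0.0) if norm > 0 else 0.0
        w[g] = w[g] * scale  # the whole group is shrunk or zeroed jointly
    return w
```

The group operator either shrinks a whole group toward zero or kills it entirely, which is what produces group-level sparsity (e.g. selecting whole sensors rather than individual features).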

We also provide a multi-task solver where T tasks can be learned simultaneously with joint sparsity constraints (mixed-norm regularization) and/or an inter-task similarity-promoting regularization.
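Joint feature selection across tasks follows the same group-sparsity idea: if the T task weight vectors are stacked as the columns of a matrix W, an l1-l2 norm over the rows of W zeroes out the same feature in every task at once. A hedged Python sketch of that row-wise proximal step (illustrative only, with an assumed features-by-tasks layout):

```python
import numpy as np

def prox_rows_l1l2(W, t):
    # Proximal operator of t * sum_j ||W[j, :]||_2, applied row-wise.
    # Each row of W (one feature shared across the T tasks) is shrunk
    # or zeroed jointly, which performs joint feature selection.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

A row with a small joint norm is set to zero for all tasks, so the selected feature set is shared across tasks.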

This toolbox is written in Matlab, and the problem is solved with a forward-backward splitting algorithm from the FISTA paper.
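As a rough sketch of how such a solver proceeds (not the Matlab implementation itself), one can alternate a gradient step on the squared hinge loss with the regularizer's proximal step, plus FISTA's momentum extrapolation. Here is a minimal Python illustration for the l1 case, where the step size is assumed to satisfy the usual 1/L condition:

```python
import numpy as np

def squared_hinge_grad(w, X, y):
    # Gradient of the smooth part: sum_i max(0, 1 - y_i w^T x_i)^2.
    active = np.maximum(1.0 - y * (X @ w), 0.0)
    return -2.0 * X.T @ (y * active)

def fista_l1_svm(X, y, lam, step, n_iter=200):
    # Forward-backward splitting with FISTA momentum:
    # gradient step on the loss, l1 prox (soft-thresholding),
    # then the extrapolation step on the iterates.
    w = np.zeros(X.shape[1])
    z, t = w.copy(), 1.0
    for _ in range(n_iter):
        g = z - step * squared_hinge_grad(z, X, y)
        w_new = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)
        w, t = w_new, t_new
    return w
```

Swapping the soft-thresholding line for a group proximal operator yields the mixed-norm variants described above.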


Current version : 1.0

Download : G-SVM.zip


Quick version:

  • Add the toolbox folder and all its subfolders to the Matlab path.

Entry points:

  • demo_gsvm.m : demo file showing the use of the single-task SVM solver with mixed-norm regularization (and the adaptive approach).
  • demo_mtl_gsvm.m : demo file showing the use of the multi-task SVM solver with mixed-norm regularization and similarity-promoting regularization.