Rémi Flamary

Professional website

RankSVM with non-convex regularization

$$ \def\w{\mathbf{w}} $$

Description

This package is an implementation of a linear RankSVM solver with non-convex regularization. It is the code used for the numerical experiments in the paper

L. Laporte, R. Flamary, S. Canu, S. Déjean, J. Mothe, Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, N. 6, pp. 1118-1130, 2014.
Abstract: Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations such as l1 or weighted l1 and non-convex regularization terms such as the log penalty, the Minimax Concave Penalty (MCP) or the lp pseudo-norm with p lower than 1. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted l1 scheme to address the non-convex regularizations. We conduct intensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the non-convex regularizations we propose lead to more sparsity in the resulting models while prediction performance is preserved. The number of features is decreased by up to a factor of six compared to the l1 regularization. In addition, the software is publicly available on the web.
BibTeX:
@article{tnnls2014,
  author  = {Laporte, L. and Flamary, R. and Canu, S. and Déjean, S. and Mothe, J.},
  title   = {Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  volume  = {25},
  number  = {6},
  pages   = {1118--1130},
  year    = {2014}
}

Solver

We provide a general solver for the squared hinge loss RankSVM problem of the form:

$$ \begin{equation*} \min_{\w} \sum_{i=1}^{n} \max(0,1-(\mathbf{x}_i^T\w))^2 + \Omega(\w) \end{equation*} $$

where each $\mathbf{x}_i$ is the difference between the feature vectors of a preference pair (so that ranking reduces to classifying pairwise differences) and where $\Omega(\w)$ can be one of the following penalties, illustrated in the snippet after this list:

  • l1 norm: $\Omega(\w)=\sum_i |w_i|$
  • lp pseudo-norm with $0<p<1$: $\Omega(\w)=\sum_i |w_i|^p$
  • log penalty: $\Omega(\w)=\sum_i \log(\epsilon+|w_i|)$
  • MCP: $\Omega(\w)=\sum_i \omega(w_i)$ with $\omega(w_i)=\left\{\begin{array}{ll}\lambda |w_i|-\frac{w_i^2}{2\gamma} & \mbox{ if } |w_i| \leq \gamma\lambda \\ \gamma\lambda^2/2 & \mbox{ if } |w_i| > \gamma\lambda \end{array}\right.$
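
As a concrete illustration, the following Matlab/Octave snippet (a minimal sketch, not toolbox code; all names such as X, w, p, eps0, lam and gam are ours and stand for the data and the parameters $p$, $\epsilon$, $\lambda$, $\gamma$ above) evaluates the objective for a candidate $\w$:

% toy data: each row of X is x_i - x_j for a pair where document i
% should be ranked above document j
n = 200; d = 10;
X = randn(n, d); w = randn(d, 1);

loss = sum(max(0, 1 - X*w).^2);      % squared hinge on pairwise differences

% the four penalties listed above
p = 0.5; eps0 = 1e-3; lam = 1; gam = 3;
omega_l1  = sum(abs(w));                      % l1 norm
omega_lp  = sum(abs(w).^p);                   % lp pseudo-norm, 0 < p < 1
omega_log = sum(log(eps0 + abs(w)));          % log penalty
small = abs(w) <= gam*lam;                    % MCP, coordinate by coordinate
omega_mcp = sum(small .* (lam*abs(w) - w.^2/(2*gam)) + (~small) * (gam*lam^2/2));

J = loss + omega_l1;                 % full objective with, e.g., the l1 penalty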

The toolbox is written in Matlab/Octave and should run in both environments. The optimization problem is solved with a Difference of Convex (DC) approach, as described in the paper above, and each convex subproblem is solved with a Forward-Backward Splitting algorithm from the FISTA paper. The toolbox also contains code from the FSMRank paper, as provided by its authors. A schematic version of the DC loop is sketched below.
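
To fix ideas, here is a schematic Matlab/Octave version of such a DC / reweighted-l1 loop for the log penalty, with a basic FISTA solver for each convex weighted-l1 subproblem. This is a sketch of the general approach under our own naming (ranksvm_log_sketch, fista_weighted_l1), not the toolbox implementation:

function w = ranksvm_log_sketch(X, lam, eps0)
% DC outer loop: each iteration linearizes the concave log penalty,
% which yields a convex weighted-l1 subproblem
d = size(X, 2);
w = zeros(d, 1);
L = 2 * norm(X)^2;                 % Lipschitz constant of the loss gradient
beta = ones(d, 1);                 % initial l1 weights
for outer = 1:10
    w = fista_weighted_l1(X, lam*beta, w, L);
    beta = 1 ./ (eps0 + abs(w));   % reweighting from the log penalty gradient
end
end

function w = fista_weighted_l1(X, beta, w0, L)
% FISTA for min_w sum(max(0, 1 - X*w).^2) + sum(beta .* abs(w))
w = w0; z = w0; t = 1;
for k = 1:200
    r = max(0, 1 - X*z);
    g = -2 * (X' * r);                        % gradient of the squared hinge at z
    v = z - g / L;
    wn = sign(v) .* max(abs(v) - beta/L, 0);  % soft-thresholding (prox of weighted l1)
    tn = (1 + sqrt(1 + 4*t^2)) / 2;
    z = wn + ((t - 1) / tn) * (wn - w);       % Nesterov momentum step
    w = wn; t = tn;
end
end

On the toy data above, a call such as w = ranksvm_log_sketch(X, 0.1, 1e-3) returns a weight vector whose small coordinates are driven exactly to zero, which is the feature selection effect discussed in the paper.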

If you use this code in a publication, please cite the paper Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM (the BibTeX entry is given above).

Download

Current version: 1.0

Download:

Installation

Quick version:

  • Extract both zip files.
  • Add the GSVM folder and all its subfolders to the Matlab/Octave path (see the snippet after this list).
  • Entry points:
    • test_letor_ranksvmnc.m: loop for testing non-convex ranking on the Letor datasets.
    • letor_FSMRank.m: comparison with the fast l1 optimization of FSMRank.
    • ncsvmclassNob.m: function performing the non-convex RankSVM optimization.
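
For the path setup step, the standard Matlab/Octave commands are as follows (assuming the extracted folder is named GSVM and sits in the current directory):

addpath(genpath('GSVM'));   % add GSVM and all its subfolders to the search path
savepath;                   % optional: keep the change for future sessions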