Many machine learning problems consist in minimizing a convex empirical risk function subject to an $\ell^1$ constraint. Existing work, such as the Lasso
formulation, has focused mainly on Lagrangian penalty approximations, which often require ad hoc or computationally expensive procedures to determine the relaxation parameter.

The method follows the structure of the classical gradient projection algorithm, which alternates a gradient step on the objective with a projection step onto the lower level set modeling the constraint. The novelty of our approach is that the projection step is implemented via an outer approximation scheme in which the constraint set is approximated by a sequence of simple convex sets, each the intersection of
two half-spaces. Experiments on both synthetic and biological data show that the $\ell^1$ constraint formulation outperforms the $\ell^1$ penalty approach.
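To make the gradient projection template concrete, here is a minimal sketch of the constrained formulation on a least-squares risk. Note the hedge: this uses an *exact* projection onto the $\ell^1$ ball (the classical sorting-based method), not the outer-approximation scheme by half-spaces described above; all function names, the step size, and the problem data are illustrative assumptions, not part of the original work.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}.

    Classical sorting-based method: find the soft-threshold level theta
    so that the shrunk vector has l1 norm exactly equal to radius.
    """
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    # largest index k with u[k] > (css[k] - radius) / (k + 1)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, radius, step, n_iter=1000):
    """Minimize 0.5 * ||Ax - b||^2 subject to ||x||_1 <= radius,
    alternating a gradient step and a projection step."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the quadratic risk
        x = project_l1_ball(x - step * grad, radius)
    return x

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x_hat = projected_gradient(A, b, radius=2.0, step=step)
```

The exact projection costs a sort per iteration; the outer approximation by two half-spaces replaces it with closed-form projections onto much simpler sets, which is the point of the scheme described above.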

# Recent Results

• Classification of cells: SATT grant Cellid (2011) and ANR project Phasequant (2014) with Phasics, Tiro and Morpheme.
• Biomarker analysis and relapse prediction in early-stage lung adenocarcinoma (2014) using a genomic RNA-seq data set, with Pr B. Mari and A. Paquet (IPMC). Result: a complex signature.
• Response to treatment (amisulpride) in psychiatric disorder (2015), SATT grant IPMC (2016), with Pr N. Glaichenhaus (IPMC) and INSERM Créteil (European project "Optimize"). Preliminary result: an IL-15 biomarker signature.
• Computational analysis of single cells with Pr Barbry (IPMC), starting 2016.