Fred's research

FREDERIC PRECIOSO

Topics

Understanding Deep Learning

Deep Learning is a highly active field of research, and such a fast pace makes contributing significantly to the research effort a challenge. Our strategy has thus been to target less conventional approaches to Deep Learning that hold potential for future breakthroughs and are key to understanding Deep Learning.

Active Learning

How can a deep architecture be trained with very few training samples when transfer learning cannot work? How can the most informative training samples be selected so as to learn a network with the least data in as few iterations as possible? Active learning is the answer...

(this work corresponds to the PhD of Melanie Ducoffe)
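As an illustration, here is a minimal sketch in Python of one classic query criterion, margin-based uncertainty sampling; the model with a scikit-learn-style predict_proba method, the unlabeled pool, and the batch size are assumptions for the example, not the specific strategy developed in the thesis.

import numpy as np

def uncertainty_sampling(model, pool_x, batch_size):
    # Query the pool samples on which the model is least decided:
    # smallest gap between the two highest class probabilities.
    probs = model.predict_proba(pool_x)        # (n_samples, n_classes)
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)[:batch_size]    # most uncertain first

# Typical loop (oracle and initial labeled set assumed): fit the model
# on the labeled set, query the most uncertain pool samples, have the
# oracle label them, move them to the labeled set, and repeat.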

Bio-inspired optimization and interpretation

Some optimization schemes introduced for training deep networks can be (loosely) inspired by biology. Dropout, for instance, can be related to lateral inhibition. New optimization schemes may be inspired by other biological mechanisms...

(this work corresponds to the project Bio-Deep with my post-doc Geoffrey Portelli and Ass.Prof. Frederic Lavigne, and the project Neuro-inspire with Geoffrey Portelli, Ass.Prof. Frederic Lavigne, and Prof. Gilles Bernot)
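For concreteness, here is a minimal sketch of inverted dropout in NumPy, with the (loose) lateral-inhibition reading in the comments; the layer shape and drop probability are illustrative assumptions.

import numpy as np

def dropout(h, p_drop, rng):
    # Randomly silence a fraction of the units (loosely, as lateral
    # inhibition suppresses neighbouring neurons), then rescale the
    # survivors so the expected activation matches the test-time
    # (no-dropout) forward pass.
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))              # a batch of hidden activations
h_train = dropout(h, p_drop=0.5, rng=rng)    # training-time forward pass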

Deep Natural Language Processing vs Statistical Text Analysis

Understanding the decisions made by deep networks and interpreting/explaining these decisions is a true challenge. One way to address this challenge is to compare in detail the behaviors of competing methods (deep networks and statistical approaches) for text analysis.

(this work corresponds to the project Deep vs statistical text analysis in collaboration with my PhD student Melanie Ducoffe, Laurent Vanni, and CR Damon Mayaffre)

Deep Learning and Knowledge

Combining Deep Learning methods on unstructured data with reasoning methods on structured data could lead to remarkably powerful hybrid approaches. We explore this idea at two different levels: first, we investigate how to combine hierarchies of concepts with deep learning for image classification when the data are organized along a taxonomy;

(this work is conducted in collaboration with CR Eric Debreuve)

second, we design a new method that improves, on the one hand, the knowledge base associated with an artwork image database using the deep learning classification results on the visual content and, on the other hand, the deep network decisions using the hierarchical annotations.

(this work is conducted in collaboration with DR Fabien Gandon)
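As a toy illustration of the first level, here is a sketch in Python where leaf-class probabilities from a hypothetical deep classifier are propagated up a small taxonomy, so that an uncertain fine-grained decision can still be validated at a more abstract concept; the taxonomy and the probabilities are invented for the example.

parents = {"cat": "mammal", "dog": "mammal", "eagle": "bird",
           "mammal": "animal", "bird": "animal"}

def ancestor_scores(leaf_probs):
    # Sum each leaf probability into every ancestor concept.
    scores = dict(leaf_probs)
    for leaf, p in leaf_probs.items():
        node = leaf
        while node in parents:
            node = parents[node]
            scores[node] = scores.get(node, 0.0) + p
    return scores

# An uncertain cat/dog prediction is still a confident "mammal":
print(ancestor_scores({"cat": 0.45, "dog": 0.40, "eagle": 0.15}))
# -> mammal ~ 0.85, animal ~ 1.0, bird = 0.15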

Embedded Deep Learning

Embedding algorithms in resource-constrained hardware is currently one of the main challenges of Machine Learning. This is even more crucial for Deep Learning algorithms, which are revolutionizing data analysis. In this context, we embed the "Deep Patient" model for Electronic Health Records (EHR) analysis in a cluster of low-power GPUs.

(this work corresponds to the PhD of John Anderson Garcia Henao, co-supervised with Prof. Michel Riveill and Prof. Pascal Staccini)

Under similar constraints, I also explore the embedding of the latest vision algorithms for autonomous cars, in collaboration with Renault Software Labs.

Understanding Inhomogeneity

Inhomogeneous data represent a challenge for machine learning methods.

Inhomogeneity in isotropic spaces (e.g. 3D point clouds)

"3D point clouds" are spatially inhomogeneous data since the sparsity increases with the distance from the sensor. We design methods to register very large and far away 3D point clouds despite the sampling non-uniformity. We also aim at detecting 3D objects in 3D scenes as it can be done now in 2D images.

(this work corresponds to the PhD of Lucas Malleus, co-supervised with Ass.Prof. Diane Lingrand, and to the work of Lirone Samoun and Thomas Fisichella in the framework of the H2020 European project DigiArt)
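A minimal sketch of one rigid-registration iteration (nearest-neighbour matching followed by a closed-form Kabsch alignment) in Python; it assumes two roughly overlapping (n, 3) clouds and ignores the non-uniform sampling and scale issues that the actual methods address.

import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    # Match each source point to its nearest target point, then solve
    # the best rigid transform (rotation + translation) in closed form.
    matched = dst[cKDTree(dst).query(src)[1]]
    cs, cd = src.mean(axis=0), matched.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (matched - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = cd - r @ cs
    return src @ r.T + t              # the moved source cloud

# Iterate icp_step(src, dst) until the update becomes negligible.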

Inhomogeneity in anisotropic spaces (e.g. video data)

Video data are inhomogeneous since the spatial dimensions and the temporal dimension are anisotropic. In this field, we focus on characterizing the "elasticity" of the time dimension in order to define new video content representations, to summarize videos, and to improve video classification.

(this work corresponds to the PhD of Katy Blanc, co-supervised with Prof. Diane Lingrand)

Inhomogeneity in content modalities (e.g. multimedia data)

Multimedia data are intrinsically inhomogeneous since they can combine several content modalities: video, sound, text, metadata, etc. We design new methods able to benefit from this inhomogeneity to outperform the predictions and decisions reachable from each content modality independently. We design new multimedia content representations to summarize sport or broadcast news videos, to detect highlights, and to improve video classification.

(this work corresponds to the PhD of Melissa Sanabria, in collaboration with the SME Wildmoka)
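The simplest baseline for exploiting several modalities is a weighted late fusion of per-modality predictions, sketched below in Python; the scores and weights are invented for the example, and the actual methods go well beyond this.

import numpy as np

def late_fusion(modality_probs, weights):
    # Weighted average of per-modality class distributions.
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return w @ np.stack(modality_probs)   # (n_modalities,) @ (n_modalities, n_classes)

video = np.array([0.2, 0.7, 0.1])   # hypothetical per-class scores
audio = np.array([0.5, 0.4, 0.1])
text  = np.array([0.3, 0.6, 0.1])
print(late_fusion([video, audio, text], [0.5, 0.3, 0.2]))   # fused distribution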

Inhomogeneity in perception (of objects in images)

On image data, we do not focus on the content of the image itself but on characterizing the content of interest for a user. Using eye-tracker data, we investigate which oculometric features could define the intent of the user for a given content. Based on these studies, we design a powerful interactive content-based image search engine.

(this work corresponds to the PhD of Stephanie Lopez co-supervised with Prof. Arnaud Revel and Prof. Diane Lingrand, and the post-doc of Souad Chaabouni, all in the framework of ANR project VISIIR)

More recently, we have investigated how predicting a user's attention on visual content with Deep Learning methods (Recurrent Neural Networks, Deep Reinforcement Learning) can be exploited to design powerful new strategies for 360° video streaming, targeting wireless VR headsets.

(this work corresponds to the PhD of Miguel Romero, co-supervised with Prof. Lucile Sassatelli, in collaboration with Prof. Ramon Aparicio)

Inhomogeneity in time (e.g. time series)

The latest advances in machine learning make it possible to address very long-term dependencies. The difficulty, however, is to discriminate between short-term and long-term dependencies; the latter can be seen as weak signals inside the time series. We explore how to detect (very) long-term dependencies for Adverse Drug Event detection in real clinical notes, for cardiac disease detection in ECG, for sub-vocalized speech recognition in laryngeal EMG, and for software failure detection in system tests.

(these works partially correspond to the PhD of Edson Florez co-supervised with Prof. Michel Riveill and Prof. Pascal Staccini)
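As a minimal illustration of a model that can carry information across many time steps, here is a small recurrent classifier in PyTorch; the input sizes and the single-LSTM architecture are assumptions for the example, not the models used in these works.

import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    # Minimal recurrent classifier for long signals (e.g. an ECG window).
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, n_features)
        _, (h, _) = self.rnn(x)         # h carries long-range information
        return self.head(h[-1])         # class logits from the final state

model = SeqClassifier(n_features=1, n_classes=2)
logits = model(torch.randn(8, 5000, 1))   # 8 signals of 5000 time steps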

Meta Mining

A meta-learning machine is, in a sense, a machine that learns to learn, or learns from learning results. The same principle can be applied to Data Mining, with algorithms mining the results of other mining algorithms.

AI on-demand Platform

We have been working for three years now on an AI on-demand platform (ROCKflows) that aims to help users create their own Machine Learning workflows. Users will be able to describe their datasets and objectives, and the platform will find appropriate workflows from which they can choose. The learning core of this platform is a meta-learning algorithm.

(this work is conducted in collaboration with Prof. Mireille Blay-Fornarino and Prof. Michel Riveill)
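To make the idea concrete, here is a toy nearest-neighbour meta-learner in Python that recommends the workflow which performed best on the most similar past dataset; the meta-features and the table of past runs are purely illustrative assumptions, not ROCKflows' actual internals.

import numpy as np

# Each past run: dataset meta-features -> workflow that worked best.
past_runs = [
    ({"n_samples": 1e3, "n_features": 20,  "n_classes": 2},  "scaler+svm"),
    ({"n_samples": 1e6, "n_features": 50,  "n_classes": 10}, "sgd-logreg"),
    ({"n_samples": 5e4, "n_features": 784, "n_classes": 10}, "pca+forest"),
]

def recommend(meta):
    keys = sorted(meta)
    vec = lambda m: np.log1p([float(m[k]) for k in keys])  # log-scaled meta-features
    dists = [np.linalg.norm(vec(meta) - vec(m)) for m, _ in past_runs]
    return past_runs[int(np.argmin(dists))][1]

print(recommend({"n_samples": 2e3, "n_features": 30, "n_classes": 2}))  # scaler+svm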

Multi-Consensus Clustering

We design a general method for combining several clustering techniques, using frequent itemsets as a meta-clustering approach. Our method combines the strengths of all the clustering techniques considered without their drawbacks.

We are currently extending our methods to semi-supervised contexts and to multimodal data.

(this work corresponds to the PhD of Atheer Al-Najdi, co-supervised with Ass.Prof. Nicolas Pasquier)
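A highly simplified variant of the idea in Python: treat "these two objects fall in the same cluster" as an item, keep the object pairs that are frequent across the base clusterings, and take connected groups as consensus clusters; the real method mines frequent closed itemsets and is considerably richer.

from itertools import combinations

def consensus(clusterings, min_support=0.5):
    # Merge objects co-clustered in at least min_support of the base
    # clusterings (union-find over frequent co-clustered pairs).
    n = len(clusterings[0])
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        together = sum(c[i] == c[j] for c in clusterings)
        if together / len(clusterings) >= min_support:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]   # one group id per object

# Three base clusterings of six objects (labels arbitrary per clustering):
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [2, 2, 2, 3, 3, 3]]
print(consensus(base, min_support=2/3))   # two consensus groups: {0,1,2}, {3,4,5}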