Recent Results

Nowadays, 3D digitization systems generate numerical representations with high geometric accuracy. This accuracy comes at the cost of massive, often oversampled, point clouds that are unsuitable for visualization, transmission and storage, and the meshes built from them suffer from the same limitations. Remeshing (to obtain structured data) and multiresolution coding are effective ways to overcome these issues. Our research in geometry coding follows three directions:

  • Semi-regular meshing and coding. During Jean-Luc Peyrot’s PhD (2011–2014), in collaboration with the Le2i laboratory of the University of Burgundy, we proposed a framework for simplifying the classical 3D digitization chain, first by improving the sampling of surfaces, and second by reducing the number of processing steps required to obtain semi-regular meshes. More precisely, we integrated into a stereoscopic acquisition system:
    • a blue noise sampling that preserves geometrical features [hal-01058835]. This technique ensures fidelity to the initial shape while limiting the number of points. We proposed two versions: the first handles the meshes obtained by triangulating the point clouds generated by the acquisition system and produces the blue noise directly in 3D space; the second works directly on the stereoscopic images to obtain the final surface sampling. To the best of our knowledge, no prior work proposes such an approach, despite the advantage of controlling the number of sampling points at the start of the sampling/reconstruction process so as to avoid oversampled data as output (a generic blue-noise sampling sketch is given after this list);
    • a semi-regular surface meshing [hal-01236999] that works directly on the stereoscopic 2D images rather than on the triangulation of the point cloud generated by such acquisition systems. Our method can be seen as a parameterization-based technique, since the remeshing process is driven by the connectivity of the stereoscopic images. Moreover, our reconstruction method processes the data as much as possible in the image domain before embedding the surface in 3D space. It offers an alternative to the tedious pipeline usually required to obtain semi-regular meshes from physical objects.
  • Point cloud coding. During Arnaud Bletterer’s PhD (in progress since 2014), in collaboration with the start-up Cintoo3D in Sophia Antipolis, we proposed to use depth maps (and the associated projection matrices) to represent massive point clouds. Indeed, they provide an efficient parameterization domain for the unstructured geometry, as well as a segmentation according to the different points of view (position and orientation of the system during acquisition). We have shown [hal-01237009] that a progressive representation of a point cloud can be built from a wavelet-based multiresolution analysis of the depth maps (a sketch of this idea is given after this list). Our results also show the interest of using image coders to compress a point cloud.
  • Hexahedral mesh coding. In collaboration with IFP-Energies Nouvelles, we now also focus on the compression of massive structured hexahedral meshes. Such meshes are common in geosciences and, as expected, their size is a drawback for storage and transmission, but also for numerical simulations (flow simulations, for instance). Moreover, in this domain, meshes are generally based on a pillar grid structure. This structure has the advantage of giving a regular connectivity to the hexahedra, while allowing the modeling of geometrical discontinuities that may occur in meshes coming from geosciences; these discontinuities mainly describe gaps in the physical terrain. We therefore proposed a novel compression scheme for such data [hal-01315079]. Our scheme generates a hierarchy of meshes at increasing levels of resolution while ensuring geometrical coherency across the resolutions. Our main contribution is a lossless and reversible wavelet filtering that takes the geometrical discontinuities into account in order to preserve them at every resolution, and that also handles, during the analysis, the categorical properties generally associated with hexahedra in geosciences (a sketch of a reversible integer lifting step is given after this list).
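To make the blue-noise idea above concrete, here is a minimal dart-throwing Poisson-disk subsampler in Python. It only illustrates the generic principle of blue-noise sampling on a 3D point set; the actual feature-preserving sampler of [hal-01058835] adapts the sampling to the local geometry, which this sketch does not do, and the radius and the synthetic cloud are arbitrary assumptions.

```python
# Minimal dart-throwing Poisson-disk (blue-noise) subsampler on a 3D point set.
# Generic illustration only: no feature preservation, fixed radius.
import numpy as np

def poisson_disk_subsample(points, radius, seed=None):
    """Greedy dart throwing: keep a point only if no kept point lies within `radius`."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(points))        # random visiting order
    kept = []
    for i in order:
        p = points[i]
        if not kept or np.linalg.norm(points[kept] - p, axis=1).min() >= radius:
            kept.append(i)
    return points[kept]

if __name__ == "__main__":
    cloud = np.random.default_rng(0).random((2000, 3))   # dense, oversampled synthetic cloud
    sampled = poisson_disk_subsample(cloud, radius=0.1)
    print(len(cloud), "->", len(sampled), "points")
```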
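The following sketch illustrates the depth-map-based progressive representation discussed above: a Haar multiresolution analysis of a depth map is truncated to its approximation band, and each coarse depth map is back-projected to a (smaller) point cloud. The pinhole intrinsics K, the synthetic depth map and the choice of the Haar wavelet are illustrative assumptions, not the setting of [hal-01237009].

```python
# Progressive point cloud from a wavelet analysis of a depth map (illustrative sketch).
import numpy as np
import pywt

def backproject(depth, K):
    """Lift a depth map to 3D points with a pinhole model: X = d(u,v) * K^{-1} [u, v, 1]^T."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix
    return (rays * depth.reshape(1, -1)).T                              # N x 3 points

def coarse_level(depth, K, levels):
    """Approximation band of a `levels`-level Haar analysis, with intrinsics rescaled to match."""
    approx = pywt.wavedec2(depth, "haar", level=levels)[0] / (2.0 ** levels)  # back to depth units
    K_lvl = K.copy()
    K_lvl[:2] /= 2.0 ** levels                                          # coarser grid -> scaled focal/center
    return approx, K_lvl

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 128.0], [0.0, 500.0, 128.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
    depth = 2.0 + 0.1 * np.random.default_rng(0).random((256, 256))             # synthetic depth map
    for lvl in (3, 2, 1):
        d, K_lvl = coarse_level(depth, K, lvl)
        pts = backproject(d, K_lvl)
        print(f"resolution level {lvl}: {pts.shape[0]} points")
```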
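Finally, as a hint of the lossless and reversible wavelet filtering mentioned for hexahedral meshes, here is an integer-to-integer Haar lifting step applied along one axis of a property array. It only demonstrates the reversibility mechanism; the discontinuity-aware filtering and the handling of categorical properties of [hal-01315079] are not reproduced here.

```python
# Integer-to-integer Haar lifting (S-transform): exactly invertible, hence lossless.
import numpy as np

def haar_lifting_forward(a):
    """Split a 1-D integer signal of even length into integer approximation and detail bands."""
    even, odd = a[0::2].astype(np.int64), a[1::2].astype(np.int64)
    detail = odd - even                       # predict step
    approx = even + (detail >> 1)             # update step (integer floor of the mean)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Exactly undo the forward lifting and recover the original integer samples."""
    even = approx - (detail >> 1)
    odd = detail + even
    out = np.empty(2 * len(approx), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

if __name__ == "__main__":
    column = np.random.default_rng(0).integers(0, 1000, size=64)   # e.g. a property sampled along one pillar
    a, d = haar_lifting_forward(column)
    assert np.array_equal(haar_lifting_inverse(a, d), column)      # lossless round trip
    print("approximation:", a[:4], "details:", d[:4])
```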

Recent Results

In the context of still image coding, our work concerned the study of optimal noisy source coding/denoising and was carried out during M. Carlavan’s PhD (2010–2013) in collaboration with the CNES in Toulouse and Thales Alenia Space (TAS) in Cannes. Most of the literature in this domain assumes that the noisy image should first be optimally denoised and that this denoised image should then be optimally coded. In many applications, however, the layout of the acquisition imaging chain is fixed and cannot be changed, that is, a denoising step cannot be inserted before coding. In this configuration, we showed on a simple case how to express the global distortion as a function of the coding and denoising parameters when the denoising step is performed after coding/decoding. We showed that the jointly optimized distortion slightly outperforms the disjointly optimized distortion on several satellite test images of the post-Pleiades generation. This result is therefore very significant for future CNES space missions.

High Efficiency Video Coding (HEVC) has recently become the video compression standard succeeding H.264/AVC. Taking HEVC as the video coding basis, we developed a novel concept of smart video decoding. Several contributions were obtained during Khoa Vo Nguyen’s PhD (2012–2015) in collaboration with Orange Labs in Issy-les-Moulineaux. General smart coding and decoding schemes were proposed to overcome a limitation of conventional coding schemes related to the ever increasing number of available coding modes. In this context, we developed a video coding framework assisted by supervised machine learning algorithms, which aims at compressing video more efficiently by providing bitrate savings. The idea is to predict the optimal coding mode of the current block using classification techniques based on already reconstructed frames (see the sketch below). A first practical application in the HEVC test model software HM12 reports promising average bitrate savings.
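As an illustration of the mode-prediction idea, the sketch below trains a generic classifier to guess the coding mode of a block from cheap features computed on already reconstructed data, so that the encoder could restrict its rate-distortion search. The features, the three-mode label set and the synthetic training data are assumptions made for the example; they are not the features, modes or data used in the HM12 experiments.

```python
# Supervised prediction of a block's coding mode from causal (reconstructed) data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def block_features(recon_block):
    """Cheap texture features from the co-located block of a previously reconstructed frame."""
    gy, gx = np.gradient(recon_block.astype(float))
    return [recon_block.mean(), recon_block.var(), np.abs(gx).mean(), np.abs(gy).mean()]

rng = np.random.default_rng(0)
MODES = ["SKIP", "INTER", "INTRA"]                    # simplified, assumed label set

# Synthetic training set: block activity loosely correlates with the best mode.
X, y = [], []
for _ in range(2000):
    mode = rng.integers(3)
    block = rng.normal(128, [2, 10, 30][mode], size=(16, 16))   # flat -> SKIP, busy -> INTRA
    X.append(block_features(block))
    y.append(mode)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At encoding time, the predicted mode (or its posterior) narrows the exhaustive mode search.
test_block = rng.normal(128, 2, size=(16, 16))
print("predicted mode:", MODES[clf.predict([block_features(test_block)])[0]])
```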

Recent Results

Computational neuroscience has made substantial progress during the past three decades in understanding the internal representation of the sensory world. Based on these results, it is our conviction that the mammalian visual system has developed efficient coding strategies that can serve as a source of inspiration for novel image and video compression algorithms. Our work on bio-inspired image coding focused on:

  • Design of a retina-inspired image coding scheme. During Khaled Masmoudi’s PhD (2009–2012), in collaboration with INRIA Sophia Antipolis Méditerranée, we developed a bio-inspired codec for images, mainly based on a static approximation of the transformation performed by the outer layers of the retina. We addressed the reconstruction problem in an original way by relying on frame theory. We also investigated the issue of non-determinism in the retinal neural code and proposed to model the retinal noise by a multiscale dither signal with specific statistical properties. The proposed coder gained interesting perceptual features that make it competitive with the well-established JPEG and JPEG 2000 standards (a toy frame-reconstruction example is given after this list).
  • Development of a dynamic retina-inspired filtering. Effrosyni Doutsi’s PhD (in progress since 2013), in collaboration with 4G-SGME in Sophia Antipolis, proposed a retina-inspired filter based on a realistic mathematical model of the retina that takes its dynamic behavior into account. We showed that, as time increases, the retina-inspired spectrum evolves from a lowpass to a bandpass filter. Hence, the retina-inspired filter can increase the quality of the image by dynamically extracting more details. Based on frame theory, we proved that the retina-inspired filter is invertible, so that we are able to reconstruct the input image after filtering.
  • Statistical detection and classification based on time encoding. In many ways, computers today are nothing more than number crunchers and information manipulators, and they all adhere to the von Neumann architecture. Hence, a new computing model is needed to process unstructured data such as images and videos. We have been exploring the relevance of time encoding, in which data are encoded as asynchronous pulses; this is typically how the human brain processes information. The exploratory projects COBRA and ENCODIME allowed us to publish promising results on statistical estimation and classification based on time-encoded signals (a minimal time-encoding example is given after this list).
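The frame-theoretic reconstruction argument used in the two retina-inspired contributions above can be illustrated with a toy example: as long as the analysis operator is a frame (full column rank), the input is recovered exactly from its coefficients with the canonical dual, computed here via the pseudo-inverse. The random redundant frame stands in for the retina-inspired filter bank and is purely an assumption of the sketch.

```python
# Toy frame analysis/synthesis: exact reconstruction from redundant coefficients.
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 160                       # signal dimension and number of frame vectors (m > n: redundant)
F = rng.standard_normal((m, n))      # analysis operator: rows are frame vectors
x = rng.standard_normal(n)           # input "image" (flattened)

c = F @ x                            # frame coefficients (the filter outputs)
x_rec = np.linalg.pinv(F) @ c        # synthesis with the canonical dual frame

print("reconstruction error:", np.linalg.norm(x - x_rec))   # ~1e-13: the transform is invertible
```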
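For the time-encoding line of work, here is a minimal integrate-and-fire encoder: a signal is represented only by the times at which its running integral crosses a threshold, i.e. as asynchronous pulses. The threshold, the time step and the test signal are illustrative choices, not those of the COBRA/ENCODIME studies.

```python
# Minimal integrate-and-fire time encoder: samples -> asynchronous pulse times.
import numpy as np

def integrate_and_fire(signal, dt, threshold):
    """Emit a pulse whenever the accumulated area of the signal reaches `threshold`."""
    acc, spikes = 0.0, []
    for k, s in enumerate(signal):
        acc += s * dt
        if acc >= threshold:
            spikes.append(k * dt)
            acc -= threshold          # reset by subtraction keeps the residual information
    return np.array(spikes)

t = np.arange(0.0, 1.0, 1e-4)
x = 1.5 + np.sin(2 * np.pi * 5 * t)                  # positive test signal
spike_times = integrate_and_fire(x, dt=1e-4, threshold=0.01)
print(len(spike_times), "pulses instead of", len(x), "uniform samples")
```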

A large class of machine learning problems consists in minimizing a convex empirical risk function subject to an $\ell^1$ constraint. Existing work, such as the Lasso formulation, has focused mainly on Lagrangian penalty approximations, which often require ad hoc or computationally expensive procedures to determine the relaxation parameter. We instead proposed a method that handles the $\ell^1$ constraint directly. Its structure is that of the classical gradient projection algorithm, which alternates a gradient step on the objective and a projection step onto the lower level set modeling the constraint. The novelty of our approach is that the projection step is implemented via an outer approximation scheme in which the constraint set is approximated by a sequence of simple convex sets consisting of the intersection of two half-spaces. Experiments on both synthetic and biological data show that the $\ell^1$ constraint approach outperforms the $\ell^1$ penalty approach (a simplified sketch of the projection idea is given below).
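The sketch below gives a simplified flavor of the method: a gradient step on a least-squares risk followed by a projection onto an outer approximation of the $\ell^1$ ball. For brevity a single subgradient half-space is used here, whereas the actual scheme projects onto the intersection of two half-spaces; the problem dimensions, the least-squares risk and the value of eta are assumptions of the example.

```python
# Gradient projection with an outer approximation of the l1 constraint set
#   minimize 0.5 * ||A x - b||^2   subject to   ||x||_1 <= eta
import numpy as np

def halfspace_outer_projection(z, eta):
    """Project z onto {x : ||z||_1 + <sign(z), x - z> <= eta}, a half-space containing the l1 ball."""
    gap = np.abs(z).sum() - eta
    if gap <= 0:
        return z                               # z already satisfies the constraint
    u = np.sign(z)
    return z - (gap / (u @ u)) * u             # closed-form projection onto the half-space

def constrained_gradient_projection(A, b, eta, n_iter=500):
    """Alternate a gradient step on the risk and an outer-approximation projection step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = halfspace_outer_projection(x - step * A.T @ (A @ x - b), eta)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[:10] = rng.standard_normal(10)          # sparse ground truth
b = A @ x_true
x_hat = constrained_gradient_projection(A, b, eta=np.abs(x_true).sum())
print("residual:", np.linalg.norm(A @ x_hat - b), "  l1 norm:", np.abs(x_hat).sum())
```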

Recent Results

  • Classification of cells: SATT grant Cellid (2011) and ANR project Phasequant (2014), with Phasics, Tiro and Morpheme.
  • Biomarker analysis and prediction of relapse in early-stage lung adenocarcinoma (2014) using a genomic RNA-seq data set, with Pr. B. Mari and A. Paquet (IPMC). Result: a complex signature.
  • Response to treatment (amisulpride) in psychiatric disorders (2015), SATT grant IPMC (2016), with Pr. N. Glaichenhaus (IPMC) and INSERM Créteil (European project "Optimize"). Preliminary result: an IL-15 biomarker signature.
  • Computational analysis of single cells, with Pr. Barbry (IPMC) (starting in 2016).