Research

Announcements

1. Call for PhD candidate from Oct. 2021 (application deadline: May 1st 2021, interviews from May 3rd): Neuromorphic Stereo Vision with Event Cameras

  • Stereopsis enables depth perception of the world, a key feature of both artificial and human visual processing systems. Depth is also an essential requirement for many practical applications, ranging from fine object manipulation in robotics to autonomous driving. In this PhD proposal, we wish to design and implement a neuromorphic model for stereo matching using event cameras. The project will extend previous internship work done in the lab in 2020.

2. Call for PhD candidate from Oct. 2021 (application deadline: May 1st 2021, interviews from May 3rd): Neuromorphic Visual Odometry for Intelligent Vehicles with a Bio-inspired Vision Sensor (also with AID)

  • This thesis aims to exploit the biologically devised ‘short cuts’ used by insects, with their small brains and relatively simple nervous systems, to see and perceive their world in real time. The objective is to develop a biologically inspired omni-directional event camera model to perform real-time ego-motion estimation and environment mapping. In collaboration with Dr. Andrew Comport.

3. Call for PhD candidate for a CSC grant from Oct. 2021 (application deadline has passed): Towards Spike-Based Machine Learning

  • Spiking Neural Networks show many interesting features for a necessary paradigm shift in information processing and machine learning, needed to face the ever-growing demand in large-scale computation: their unsupervised training with Spike-Timing-Dependent Plasticity rules, and their implementation on ultra-low-power neuromorphic hardware. And yet, a number of challenges lie ahead before they become a realistic alternative to deep CNNs. The objective of this PhD proposal is to gain an in-depth understanding of the theoretical computational properties of SNNs that will help exhibit their fundamental limits.

 


Towards neuro-inspired machine learning


The vast majority of modern computer vision approaches rely heavily on machine learning, including deep learning. For almost two decades, deep convolutional artificial neural networks have been the reference method for many machine learning and vision tasks: classification, object or action detection, face alignment, etc. The availability of both very large amounts of annotated data and huge computational resources has led to remarkable progress in this approach. However, this success comes with a significant human cost for manual annotation and a huge energy consumption for training deep CNNs.

Standing apart from the mainstream of deep learning widely used in machine learning, I am interested in a particular type of neural network: Spiking Neural Networks (SNNs), which are close to the biological model. In an SNN, neurons emit outgoing pulses (action potentials, or spikes) asynchronously, depending on incoming stimulations that are themselves asynchronous. This type of neural network has the advantage of implementing mostly unsupervised learning (which limits the need for manually annotated data) thanks to the bio-inspired Spike-Timing-Dependent Plasticity (STDP) learning rules.

The STDP rule updates synaptic weights based on the cause-effect relationships observed between incoming and outgoing spikes. The purpose of this rule, inspired by Hebb's law, is to strengthen the incoming connections that caused the outgoing spikes.
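To make this concrete, here is a minimal sketch of the classical pair-based STDP rule with exponential windows. The function name `stdp`, the parameter values, and the weight bounds are illustrative choices for this page, not the exact rule used in our work.

```python
import numpy as np

def stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
         tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate a synapse when the presynaptic
    spike precedes the postsynaptic one (likely cause), depress it
    when the order is reversed. Times are in milliseconds.
    (Illustrative parameters, not the values used in our work.)"""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: causal pairing, strengthen
        dw = a_plus * np.exp(-dt / tau_plus)
    else:        # post before pre: anti-causal pairing, weaken
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# A causal pairing (pre at 10 ms, post at 15 ms) increases the weight:
w = stdp(0.5, t_pre=10.0, t_post=15.0)   # ~0.508
```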

The ultimate goal is to use these neural network models to solve modern machine learning and computer vision tasks while bypassing two of the main pitfalls of current methods. The expected impact is a paradigm change in machine learning and computer vision with respect to data-hungry and power-hungry popular methods. SNNs show many interesting features for this paradigm change, such as their unsupervised training with STDP rules and their implementability on ultra-low-power neuromorphic hardware. And yet, a number of challenges lie ahead before they become a realistic alternative for facing the ever-growing demand in machine learning.

Unsupervised learning of temporal patterns

I am interested in the characterization of motion using Spiking Neural Networks (SNNs). As part of the thesis of Mr. Veïs Oudjail (since October 2018), our first work focused on the unsupervised detection of the direction of movement of a pattern.

The first results obtained with SNNs have demonstrated the ability of these networks to recognize and characterize the motion of simple patterns composed of a few pixels (points, lines, angles) in an image. We consider video sequences encoded in the Address-Event Representation (AER) format, such as those produced by Dynamic Vision Sensor (DVS) cameras. These sensors differ from conventional video sensors: instead of producing RGB or grey-level image sequences sampled at a fixed rate, they encode binary positive or negative brightness variations independently for each pixel. Each variation of a pixel at a given instant results in a corresponding event, transmitted asynchronously with high temporal precision. Moreover, this encoding eliminates much of the redundancy in motion information, at the cost of losing static texture information (spatial contrast). These sensors thus offer a form of native dynamic motion representation.
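As an illustration, the sketch below shows one common way to represent AER events and to emulate a DVS from log-intensity frames; the names `Event` and `dvs_events` and the threshold value are hypothetical, and a real sensor operates asynchronously per pixel in hardware rather than frame by frame.

```python
import numpy as np
from typing import NamedTuple

class Event(NamedTuple):
    """A single AER event: pixel address, timestamp, polarity."""
    x: int
    y: int
    t: float       # timestamp (e.g., in milliseconds)
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def dvs_events(log_frames, timestamps, threshold=0.2):
    """Emulate a DVS from log-intensity frames: a pixel emits an event
    whenever its log intensity has drifted by more than `threshold`
    since the last event it emitted. (Illustrative emulation only.)"""
    ref = log_frames[0].astype(float).copy()   # per-pixel reference level
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            events.append(Event(int(x), int(y), t, int(np.sign(diff[y, x]))))
            ref[y, x] = frame[y, x]            # reset reference at fired pixels
    return sorted(events, key=lambda e: e.t)
```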

The asynchronous nature of SNNs makes them a natural model for handling this type of data: the events generated by the sensor can be interpreted as spikes that feed the input layer of the network.
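The sketch below illustrates this principle with a single leaky integrate-and-fire neuron driven directly by a time-sorted AER event stream (reusing the illustrative `Event` tuples from the previous sketch, and assuming `weights` is a 2D array with one synaptic weight per pixel). It is a simplification for exposition, not the network model used in the thesis.

```python
import numpy as np

def lif_response(events, weights, tau_m=10.0, v_thresh=1.0, v_reset=0.0):
    """Drive one leaky integrate-and-fire neuron with an AER event
    stream: each event injects its synaptic weight into the membrane
    potential, which leaks exponentially between events; the neuron
    spikes and resets when the threshold is crossed. `tau_m` is in
    the same time unit as the event timestamps."""
    v, t_last, out_spikes = 0.0, 0.0, []
    for x, y, t, polarity in events:           # events sorted by time
        v *= np.exp(-(t - t_last) / tau_m)     # membrane leak since last event
        v += polarity * weights[y, x]          # event-driven synaptic input
        t_last = t
        if v >= v_thresh:
            out_spikes.append(t)               # output spike time
            v = v_reset
    return out_spikes
```

Note that computation only happens when an event arrives; nothing is updated for silent pixels, which is precisely where the expected efficiency gain comes from.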

Optical flow characterization typically relies on complex processing, operators, and calculations tied to the nature of conventional RGB physical sensors that produce image sequences. The bio-inspired porting of conventional vision processing requires rethinking how visual information is processed, working directly with temporal contrast and motion. This suggests more efficient processing.

Beyond efficiency in terms of computation, better-quality information is also expected, without the noise induced by the usual capturing and preprocessing steps (demosaicing, compression, etc.). The objective here is to propose a bio-inspired processing chain, from the sensor to the processing of visual information.



APROVIS3D European project (CHIST-ERA, 2020-2023)

Analog PROcessing of bioinspired Vision Sensors for 3D reconstruction

The APROVIS3D project targets analog computing for artificial intelligence in the form of Spiking Neural Networks (SNNs) on a mixed analog and digital architecture, including a field-programmable analog array (FPAA) and SpiNNaker, applied to a stereopsis system dedicated to coastal surveillance using an aerial robot.

Computer vision systems widely rely on artificial intelligence, especially neural-network-based machine learning, which recently gained huge visibility. The training stage for deep convolutional neural networks is both time- and energy-consuming. In contrast, the human brain has the ability to perform visual tasks with unrivalled computational and energy efficiency. It is believed that one major factor of this efficiency is the fact that information is vastly represented by short pulses (spikes) at analog, not discrete, times. However, computer vision algorithms using such a representation are still lacking in practice, and their high potential is largely underexploited.

Inspired by biology, the project addresses the scientific question of developing a low-power, end-to-end analog sensing and processing architecture for 3D visual scenes, running on analog devices without a central clock, and aims to validate it in real-life situations. More specifically, the project will develop new paradigms for biologically inspired vision, from sensing to processing, in order to help machines such as Unmanned Autonomous Vehicles (UAVs), autonomous vehicles, or robots gain high-level understanding from visual scenes. The ambitious long-term vision of the project is to develop the next-generation AI paradigm that will eventually compete with deep learning. We believe that neuromorphic computing, mainly studied in EU countries, will be a key technology in the next decade. It is therefore both a scientific and a strategic challenge for the EU to foster this technological breakthrough.

The consortium from four EU countries offers a unique combination of the expertise that the project requires. SNN specialists from various fields, such as visual sensors (IMSE, Spain), neural network architecture and computer vision (Univ. of Lille, France) and computational neuroscience (INT, France), will team up with robotics and automatic control specialists (NTUA, Greece) and low-power integrated systems designers (ETHZ, Switzerland) to help geoinformatics researchers (UNIWA, Greece) build a demonstrator UAV for coastal surveillance (TRL5). Beyond the shared interest in analog computing and computer vision, all team members have a lot to offer given their different and complementary points of view and expertise.

Key challenges of this project will be end-to-end analog system design (from sensing to AI-based control of the UAV and 3D coastal volumetric reconstruction), energy efficiency, and practical usability in real conditions. We aim to show that such a bio-inspired analog design will bring large benefits in terms of power efficiency and adaptability, making coastal surveillance with UAVs practical and more efficient than digital approaches.
 

Financial support

The financial support from CHIST-ERA is 867 560 € for 36 months (April 2020 – March 2023).

Consortium

Université Côte d'Azur, France
Université de Lille, France
Institut de Neurosciences de la Timone, France
Instituto de Microelectrónica de Sevilla IMSE-CNM, Spain
University of West Attica, Greece
National Technical University of Athens, Greece
ETH Zürich, Switzerland
 

Contact

Jean Martinet, Université Côte d'Azur
See also http://www.chistera.eu/projects/aprovis3d
 

Current students

PhD students

M. Veïs Oudjail (Oct. 2018 – now, 100%). Spiking Neural Networks for computer vision. University of Lille. Ministry grant.
Ms. Amélie Gruel (Oct. 2020 – now, 50% with Dr. Laurent Perrinet). Spiking neural networks for event-based stereovision. Université Côte d'Azur. APROVIS3D project.
M. Antoine Grimaldi (Oct. 2020 – now, 50% with Dr. Laurent Perrinet). Ultra-fast vision using Spiking Neural Networks. Institut des Neurosciences de la Timone, Aix Marseille Université. APROVIS3D project.

Amélie's poster at Neuromod inauguration 2021


Amélie's talk at CBMI 2021
 


MSc students

Nicolas Arrieta, Nerea Ramon, Ivan Zabrodin (M1-EIT DSC project 2020-2021). Synaptic delays for temporal pattern recognition.
 

DS4H tutorship students

Thomas Vivancos (2020-2021). Neuromorphic hardware comparison: Human Brain Project SpiNNaker vs Intel Loihi.
Guillaume Cariou (2020-2021). Performance assessment of Intel Neural Compute Stick attached to a Raspberry Pi.
 

Past students at Université Côte d'Azur

MSc students

Simone Ballerio (Feb-Jul 2020, with Dr. Andrew Comport). Semantic segmentation for event-based scenes. CSI grant.
Rafael Mosca (Mar-Sept 2020). Spiking neural networks for event-based stereovision. DS4H grant.


Polytech final year project students

Yilei Li and Yijie Wang (2019-2020). Scene segmentation from event-based videos.
Aloïs Turuani and Morgan Briancon (2019-2020, with Dr. Marc Antonini). Video coding using Spiking Neural Networks.
Khunsa Aslam (2019-2020, Erasmus+). Synaptic delays for Spatio-temporal pattern recognition.


Polytech 4th year project students

Sacha Carnière and Guillaume Ladorme (2019-2020). HistoryGuessR, a history-oriented game inspired by Geoguessr.
Amine Legrifi (2019-2020). Mobile image deblurring using machine learning. 


DS4H tutorship students

Kevin Alessandro (2019-2020, with Dr. Marc Antonini). Video coding using Spiking Neural Networks.

Past PhD students at Université Côte d'Azur
None yet!
Past PhD students at Université de Lille

Ms. Jalila Filali (Oct. 2015 – Aug. 4 2020, 50% with Dr. Hajer Baazaoui, ENST, Tunis). Ontology and visual features for image annotation. Ministry grant from Tunisia. Now looking for a postdoc.
M. Cagan Arslan (Oct. 2015 – Oct. 28 2020, 50% with Prof. Laurent Grisoni). Visual data fusion for human-machine interaction. University of Lille. Ministry grant. Now unsure.
M. Rémi Auguste (Nov. 2010 – July 2014, 75% with Prof. Chaabane Djeraba). Dynamic person recognition in audiovisual content. University of Lille. French ANR PERCOL project, ANR/DGA REPERE challenge. Now CEO of weaverize.
Ms. Amel Aissaoui (Sept. 2010 – June 2014, 75% with Prof. Chaabane Djeraba). Bimodal face recognition by merging visual and depth features. Ministry grant from Algeria. Now back at University of Lille after serving as an Assistant Professor at University of Science and Technology Houari Boumediene in Algeria.
M. Ismail El Sayad (July 2008 – Dec. 2011, 50% with Prof. Chaabane Djeraba). A higher-level visual representation for semantic learning in image databases. Ministry grant. Now Associate Professor at Lebanese International University.