This research was carried out through the co-supervision of a PhD in collaboration with INRIA Sophia-Antipolis and was funded by the French National Project ANR PREDIT CityVIP. The research is aimed at real-time localisation and mapping for the autonomous navigation of an urban vehicle. The objective is to design and develop robotic systems that are able to achieve (semi-)autonomous missions. The robotic system considered here is the Cycab. Much research has focused on the control of mobile robots, and techniques are maturing to the point where hundreds of kilometres are now traversed autonomously, as in the DARPA Grand Challenge. Many different scenarios may be considered, ranging from unstructured to structured environments and from indoor to outdoor settings. In this study, the focus is on outdoor structured urban environments.

The research here on visual navigation of a mobile robot begins with several assumptions:

  • A stereo vision system is available - The stereo vision system provides constraints on 3D rigid motion. Vision sensors are also a very rich source of information, allowing localisation, obstacle detection and path planning (a minimal triangulation sketch follows this list).
  • Navigation is performed in urban environments - This assumption has several impacts on the choice of solution. Firstly, traditional GPS-based localisation sensors often fail in urban environments due to the occlusion of satellites by buildings, so different sensors capable of localising in this situation are needed. Secondly, urban environments are rich in structure, which vision systems can exploit effectively for localisation.
  • A set of training sequences is available to the system - These images can be processed off-line so as to obtain a model of the environment. Having a model of the environment allows on-line missions to be performed in a robust manner.
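A rectified stereo pair constrains each matched pixel to a 3D point: for focal length f (in pixels), baseline b and disparity d, the depth is Z = f*b/d. Below is a minimal sketch of this back-projection; the function name and all calibration values are purely illustrative and not tied to the Cycab's actual sensor.

    import numpy as np

    def triangulate_from_disparity(u, v, d, f, b, cu, cv):
        """Back-project pixel (u, v) with disparity d into 3D camera
        coordinates, given a rectified stereo pair with focal length f
        (pixels), baseline b (metres) and principal point (cu, cv)."""
        Z = f * b / d           # depth from disparity
        X = (u - cu) * Z / f    # lateral offset
        Y = (v - cv) * Z / f    # vertical offset
        return np.array([X, Y, Z])

    # Hypothetical calibration values, for illustration only.
    p = triangulate_from_disparity(u=420.0, v=310.0, d=12.5,
                                   f=700.0, b=0.30, cu=320.0, cv=240.0)
    print(p)  # -> [2.4, 1.68, 16.8] metres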

The principal objective is split into two sub-problems:

  1. Off-line - Reconstruct the scene from the training sequence. The reconstruction phase involves recovering both a structural model of the environment and the trajectory of the training camera with respect to some world coordinate system. This stage has fewer computational constraints than the on-line stage, and both past and future information are available (a pose-chaining sketch follows this list).
  2. On-line - The Cycab is controlled towards its objective using visual information obtained on-line. At this point the Cycab's initial position can be initialised with respect to the a-priori 3D model of the environment, and robust real-time model-based techniques can be used to track the current position of the vehicle. The aim here is to perform the mission while remaining robust to uncertainties in the environment such as changes in illumination, shadows, occlusions, etc. (a dense tracking sketch also follows).
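
As a rough illustration of the trajectory part of the off-line stage, inter-frame motions estimated from the training sequence (for instance by stereo visual odometry) can be composed into absolute camera poses in a world frame anchored at the first training image. This is only a sketch, assuming 4x4 homogeneous transforms as input; the function name is ours:

    import numpy as np

    def chain_poses(relative_poses):
        """Compose 4x4 relative transforms T_{i-1,i} (pose of frame i
        expressed in frame i-1) into absolute poses T_{0,i}, i.e. the
        training-camera trajectory in the frame of the first image."""
        trajectory = [np.eye(4)]
        for T in relative_poses:
            trajectory.append(trajectory[-1] @ T)
        return trajectory

In a full reconstruction, these chained poses would serve only as an initial estimate, to be refined jointly with the structure (e.g., by bundle adjustment), which is exactly the kind of computation the off-line setting leaves time for.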
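The on-line stage can be pictured as direct model-based tracking: the current pose is found by minimising the photometric error between intensities stored in the a-priori model and the live image. Below is a simplified, hypothetical sketch using a first-order SE(3) parameterisation and SciPy's least_squares; it is not the system's actual implementation. Passing loss='huber' is one standard way to gain some tolerance to occlusions and lighting changes.

    import numpy as np
    from scipy.optimize import least_squares

    def se3_exp_first_order(xi):
        """First-order approximation of the SE(3) exponential map:
        xi = (vx, vy, vz, wx, wy, wz) -> 4x4 transform. Adequate only
        for the small inter-frame motions assumed here."""
        v, w = xi[:3], xi[3:]
        W = np.array([[0.0, -w[2], w[1]],
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])
        T = np.eye(4)
        T[:3, :3] += W      # R ~ I + [w]_x for small rotations
        T[:3, 3] = v
        return T

    def photometric_residuals(xi, pts_ref, i_ref, image_cur, K):
        """Warp the model's 3D points by the candidate pose, project
        them with intrinsics K, and return the intensity differences
        against the reference intensities i_ref (nearest-neighbour
        sampling; a real tracker would interpolate)."""
        T = se3_exp_first_order(xi)
        pts = pts_ref @ T[:3, :3].T + T[:3, 3]
        proj = pts @ K.T
        u = np.clip((proj[:, 0] / proj[:, 2]).astype(int),
                    0, image_cur.shape[1] - 1)
        v = np.clip((proj[:, 1] / proj[:, 2]).astype(int),
                    0, image_cur.shape[0] - 1)
        return image_cur[v, u].astype(float) - i_ref

    # pts_ref and i_ref come from the off-line model, image_cur from the
    # live camera; the robust loss limits the influence of outlier pixels.
    # result = least_squares(photometric_residuals, np.zeros(6),
    #                        loss='huber',
    #                        args=(pts_ref, i_ref, image_cur, K))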

Some first results on autonomous navigation in real-world urban environments are shown in the following videos:

Autonomous Navigation: [video]

Learning Phase: [video]

Publications:

Comport, A. I., Meilland, M. & Rives, P. (2011). A Real-Time Dense Visual Localisation and Mapping System. In Live Dense Reconstruction with Moving Cameras Workshop (LDRMC/ICCV), Barcelona, Spain.
Meilland, M., Comport, A. I. & Rives, P. (2011). Dense Visual Mapping of Large Scale Environments for Real-Time Localisation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, California.
Meilland, M., Comport, A. I. & Rives, P. (2011). Real-Time Dense Visual Tracking under Large Lighting Variations. In British Machine Vision Conference, University of Dundee.
Meilland, M., Comport, A. I. & Rives, P. (2010). A Spherical Robot-Centered Representation for Urban Navigation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan.
Gallegos, G., Meilland, M., Comport, A. I. & Rives, P. (2010). Appearance-Based SLAM Relying on a Hybrid Laser/Omnidirectional Sensor. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan.