HomographyLab


HomographyLab library
http://homographylab.i3s.unice.fr
HomographyLab ("Lab" stands for LABoratory) is a C++ library built on OpenCV. It implements the homography observer proposed in the paper:

M.-D. Hua, J. Trumpf, T. Hamel, R. Mahony, and P. Morin (2019). Feature-based Recursive Observer Design for Homography Estimation and its Application to Image Stabilization. Asian Journal of Control, Special Issue Recent Advances on Data Fusion, Estimation in Navigation and Control, pp. 1-16.

The library is organized into C++ classes, with most matrix and vector arithmetic handled by the Eigen library. An advanced version of HomographyLab that uses OpenCV's CUDA modules for GPU execution, specific to NVIDIA graphics cards, is also available.

The library is designed so that the sub-modules needed to perform homography estimation (feature detection and extraction, feature matching, and the nonlinear homography observer) are implemented as separate C++ classes. Users can therefore employ these components individually in their own code, saving significant time and effort. Users can also modify the default parameters of the nonlinear homography observer, the image-processing parameters, and the camera and IMU parameters to suit the needs of their application.

 
The library has been tested on an Intel Core i7-6400 CPU running at 3.40 GHz, where it runs in real time at 25 Hz. The GPU implementation has been tested on an NVIDIA Jetson TX1, which combines a quad-core ARM Cortex-A57 processor with a 256-CUDA-core GPU; there the code runs considerably faster, at 50 Hz, thanks to the GPU's parallel computing power. The images used for homography estimation are 1280 × 1020 pixels.
 
 
 

Positioning of the implemented algorithm with respect to state-of-the-art algorithms

Originating in the field of computer vision, the so-called homography is an invertible mapping that relates two camera views of the same planar scene, encoding in a single matrix the camera pose, the distance between the camera and the scene, and the normal direction to the scene (e.g., [6]). It plays an important role in numerous computer vision and robotics applications whose scenarios involve man-made environments composed of (near-)planar surfaces. The homography has been exploited to estimate the rigid-body pose of a robot equipped with a camera [15], [16]. Navigation of robotic vehicles has been developed using homography sequences [4], and one of the most successful visual servo control paradigms makes use of homographies [12]. The homography has also been exploited for the stabilization of autonomous underwater vehicles [10], [14]. Homography-based navigation methods are particularly well suited to applications where the camera is sufficiently far from the observed scene, such as when ground images are taken from an aerial vehicle [2], [4].
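The structure the homography encodes can be made concrete with a short self-contained sketch (plain C++ for illustration only; HomographyLab itself builds on OpenCV and Eigen, and the names below are not part of its API). Given the relative pose (R, t) between the two views and a plane with unit normal n at distance d in the first camera frame, points on the plane satisfy nᵀP = d, so P2 = R P1 + t = (R + (1/d) t nᵀ) P1; hence H = R + (1/d) t nᵀ maps normalized image points of the first view onto those of the second, up to scale:

```cpp
#include <cassert>
#include <cmath>

struct V3 { double v[3]; };
struct M33 { double m[3][3]; };

// Multiply a 3x3 matrix by a homogeneous point.
V3 apply(const M33& H, const V3& p) {
    V3 q{{0.0, 0.0, 0.0}};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            q.v[i] += H.m[i][j] * p.v[j];
    return q;
}

// Build H = R + (1/d) t n^T for the simple case of a pure translation (R = I).
M33 planar_homography(const V3& t, const V3& n, double d) {
    M33 H{};
    for (int i = 0; i < 3; ++i) H.m[i][i] = 1.0;   // R = I
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            H.m[i][j] += t.v[i] * n.v[j] / d;      // + (1/d) t n^T
    return H;
}
```

For instance, with t = (0.2, 0, 0), the plane z = 2 (n = (0, 0, 1), d = 2), and the plane point P1 = (0.5, 0.3, 2), the view-1 image point (0.25, 0.15) is mapped exactly onto the view-2 image point (0.35, 0.15): pose, scene distance, and plane normal are all carried by the single matrix H.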

Classical algorithms for homography estimation from the computer vision community compute the homography on a frame-by-frame basis by solving algebraic constraints derived from correspondences of image features (points, lines, conics, contours, etc.) [1], [3], [6], [8], [9]. These algorithms, however, treat the homography as an incidental variable and make no attempt to improve (or filter) the estimate over time. The quality of the resulting homography estimate depends heavily on the number and quality of the feature correspondences used, as well as on the algorithm employed. For a well-textured scene, state-of-the-art methods can provide high-quality homography estimates, at the cost of significant computational effort (see [13] and references therein). For a scene with poor texture, and consequently few reliable feature correspondences, existing homography estimation algorithms perform poorly. Robotic vehicle applications, however, provide temporal sequences of images, and it is natural to exploit this temporal correlation rather than compute individual raw homographies for each pair of frames. Nonlinear observers provide a natural answer to this concern.
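As a concrete instance of the algebraic, frame-by-frame approach described above, the following self-contained sketch implements the minimal four-point Direct Linear Transform (DLT): each point correspondence (x, y) → (u, v) contributes two linear equations in the eight unknown entries of H (fixing h33 = 1), so exactly four correspondences in general position determine H. This is an illustration only, not HomographyLab code, which relies on OpenCV and Eigen rather than the hand-rolled solver below:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>

using H33 = std::array<double, 9>;  // row-major 3x3 homography, h[8] = 1

// Recover H from exactly four correspondences src[k] -> dst[k].
// Each correspondence yields the two rows
//   [x y 1 0 0 0 -u*x -u*y | u]   and   [0 0 0 x y 1 -v*x -v*y | v].
H33 dlt4(const double src[4][2], const double dst[4][2]) {
    double A[8][9] = {};
    for (int k = 0; k < 4; ++k) {
        double x = src[k][0], y = src[k][1];
        double u = dst[k][0], v = dst[k][1];
        double* r1 = A[2 * k];
        double* r2 = A[2 * k + 1];
        r1[0] = x; r1[1] = y; r1[2] = 1; r1[6] = -u * x; r1[7] = -u * y; r1[8] = u;
        r2[3] = x; r2[4] = y; r2[5] = 1; r2[6] = -v * x; r2[7] = -v * y; r2[8] = v;
    }
    // Gauss-Jordan elimination with partial pivoting on the 8x9 system.
    for (int c = 0; c < 8; ++c) {
        int piv = c;
        for (int r = c + 1; r < 8; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[piv][c])) piv = r;
        for (int j = 0; j < 9; ++j) std::swap(A[c][j], A[piv][j]);
        for (int r = 0; r < 8; ++r) {
            if (r == c) continue;
            double f = A[r][c] / A[c][c];
            for (int j = c; j < 9; ++j) A[r][j] -= f * A[c][j];
        }
    }
    H33 h{};
    for (int i = 0; i < 8; ++i) h[i] = A[i][8] / A[i][i];
    h[8] = 1.0;
    return h;
}

// Apply H to a point and dehomogenize.
void warp(const H33& h, double x, double y, double& u, double& v) {
    double w = h[6] * x + h[7] * y + h[8];
    u = (h[0] * x + h[1] * y + h[2]) / w;
    v = (h[3] * x + h[4] * y + h[5]) / w;
}
```

The sketch also makes the failure mode discussed below explicit: with fewer than four correspondences the 8-unknown system is underdetermined, and no purely algebraic reconstruction is possible.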

In [11] a nonlinear observer for homography estimation was proposed based on the group structure of the set of homographies, the Special Linear group SL(3). This observer uses velocity information to interpolate across a sequence of images and improve the individual homography estimates. However, the observer proposed in [11] still requires individual image homographies to be algebraically computed for each image, which are then smoothed using filtering techniques. Although such an approach provides improved homography estimates, it comes at the cost of running both a classical homography algorithm as well as the temporal filter algorithm, and only functions if each pair of images has sufficient data available to compute a raw homography.
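The SL(3) group structure exploited in [11] can be stated concretely: a homography matrix is only defined up to scale, so it can always be rescaled to have unit determinant (H / det(H)^(1/3)), and the resulting set of unit-determinant matrices is closed under matrix multiplication (composition of homographies). A minimal self-contained sketch of this normalization (plain C++, illustration only, not HomographyLab code):

```cpp
#include <cassert>
#include <cmath>

struct M3 { double a[3][3]; };

// Determinant of a 3x3 matrix by cofactor expansion.
double det(const M3& H) {
    return H.a[0][0] * (H.a[1][1] * H.a[2][2] - H.a[1][2] * H.a[2][1])
         - H.a[0][1] * (H.a[1][0] * H.a[2][2] - H.a[1][2] * H.a[2][0])
         + H.a[0][2] * (H.a[1][0] * H.a[2][1] - H.a[1][1] * H.a[2][0]);
}

// Use the projective scale freedom to rescale H so that det(H) = 1,
// i.e. project H onto SL(3). Assumes H is invertible (det != 0);
// std::cbrt also handles a negative determinant.
M3 normalizeSL3(M3 H) {
    double s = std::cbrt(det(H));
    for (auto& row : H.a)
        for (double& x : row) x /= s;
    return H;
}

// Composition of homographies is matrix multiplication; SL(3) is closed
// under it since det(AB) = det(A) det(B).
M3 mul(const M3& A, const M3& B) {
    M3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C.a[i][j] += A.a[i][k] * B.a[k][j];
    return C;
}
```

This is what makes observer design on SL(3) natural: normalized homographies stay on the group under composition, so the estimate can be propagated and corrected without leaving it.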

To overcome these drawbacks, in our recent work [5], [7] we considered the design of an observer for a sequence of image homographies that takes image point-feature correspondences directly as the input to the observer innovation. This saves considerable computational resources and makes the proposed algorithm suitable for embedded systems with simple point detection and matching software. In contrast with algebraic techniques, the algorithm remains well posed even when there is insufficient data for full reconstruction of a homography. For example, if the number of corresponding points between two images drops below four, it is impossible to algebraically reconstruct an image homography and existing algorithms fail [6]. In such situations, the proposed observer continues to operate, incorporating whatever information is available and relying on propagation of prior estimates. Finally, even when a homography can be reconstructed from a small set of feature correspondences, the estimate is often unreliable and the associated error is difficult to characterize. The proposed algorithm integrates information over a sequence of images, and noise in the individual feature correspondences is filtered through the natural low-pass response of the observer, resulting in a highly robust estimate. We therefore believe that the proposed observer is ideally suited for homography estimation based on small windows of image data associated with specific planar objects in a scene, on poorly textured scenes, and for real-time implementation; all of which are characteristic requirements for homography estimation in robotic vehicle applications.

 
REFERENCES
[1] A. Agarwal, C.V. Jawahar, and P.J. Narayanan. A survey of planar homography estimation techniques. Technical Report IIIT/TR/2005/12, IIIT, India, 2005.
[2] F. Caballero, L. Merino, J. Ferruz, and A. Ollero. Homography based Kalman filter for mosaic building. Applications to UAV position estimation. In IEEE International Conference on Robotics and Automation (ICRA), pages 2004–2009, 2007.
[3] C. Conomis. Conics-based homography estimation from invariant points and pole-polar relationships. In IEEE Third Int. Symp. on 3D Data Processing, Visualization, and Transmission, pages 908–915, 2006.
[4] H. de Plinval, P. Morin, and P. Mouyon. Stabilization of a class of underactuated vehicles with uncertain position measurements and application to visual servoing. Automatica, 77:155–169, 2017.
[5] T. Hamel, R. Mahony, J. Trumpf, P. Morin, and M.-D. Hua. Homography estimation on the special linear group based on direct point correspondence. In IEEE Conference on Decision and Control (CDC), pages 7902–7908, 2011.
[6] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2003.
[7] M.-D. Hua, J. Trumpf, T. Hamel, R. Mahony, and P. Morin. Feature-based Recursive Observer Design for Homography Estimation and its Application to Image Stabilization. Asian Journal of Control, Special Issue “Recent Advances on Data Fusion, Estimation in Navigation and Control”, 1-16, 2019.
[8] P.K. Jain. Homography estimation from planar contours. In IEEE Third Int. Symp. on 3D Data Processing, Visualization, and Transmission, pages 877–884, 2006.
[9] J.Y. Kaminski and A. Shashua. Multiple view geometry of general algebraic curves. Int. J. of Computer Vision, 56(3):195–219, 2004.
[10] S. Krupinski, G. Allibert, M.-D. Hua, and T. Hamel. An inertial-aided homography-based visual servo control approach for (almost) fully actuated autonomous underwater vehicles. IEEE Transactions on Robotics, 33(5):1041–1060, 2017.
[11] R. Mahony, T. Hamel, P. Morin, and E. Malis. Nonlinear complementary filters on the special linear group. International Journal of Control, 85(10):1557–1573, 2012.
[12] E. Malis, F. Chaumette, and S. Boudet. 2-1/2-d visual servoing. IEEE Transactions on Robotics and Automation, 15(2):238–250, 1999.
[13] C. Mei, S. Benhimane, E. Malis, and P. Rives. Efficient homography-based tracking and 3-d reconstruction for single-viewpoint sensors. IEEE Transactions on Robotics, 24(6):1352–1364, 2008.
[14] L.-H. Nguyen, M.-D. Hua, G. Allibert, and T. Hamel. Inertial-aided homography-based visual servo control of autonomous underwater vehicles without linear velocity measurements. In 21st International Conference on System Theory, Control and Computing (ICSTCC), pages 9–16, 2017.
[15] O. Saurer, P. Vasseur, R. Boutteau, C. Demonceaux, M. Pollefeys, and F. Fraundorfer. Homography based egomotion estimation with a common direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(2):327–341, 2017.
[16] D. Scaramuzza and R. Siegwart. Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles. IEEE Transactions on Robotics, 24(5):1015–1026, 2008.