Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/10196
Title: Inertial sensor data integration in computer vision systems
Other Titles: Integração de informação inercial em sistemas de visão por computador
Authors: Lobo, Jorge Nuno de Almeida e Sousa 
Advisor: Dias, Jorge Manuel Miranda
Keywords: Computer vision
Issue Date: 2002
Abstract: Advanced sensor systems that exploit a high degree of integration of multiple sensory modalities have been significantly increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In this work I explore the cooperation between image and inertial sensors, motivated by the interplay between the vestibular system and vision in humans and animals. Visual and inertial sensing are two sensory modalities that can be combined to give robust solutions for image segmentation and for the recovery of three-dimensional structure. I survey the currently available low-cost inertial sensors and, using some of them, I built an inertial system prototype and coupled it to the vision system used in this work, a set of stereo cameras with vergence. Using the information about the vision system’s attitude in space, given by the inertial sensors, I obtained several interesting results. I use the inertial vertical reference to infer one of the intrinsic parameters of the visual sensor, the focal distance; the process requires at least one image vanishing point and the tracing of an artificial horizon. Based on the integration of inertial and visual information, I was able to detect three-dimensional world features such as the ground plane and vertical structures. Relying on the known vertical reference and a few system parameters, I determined the geometric parameters of the ground plane and the stereo-pair mapping of image points that belong to it. This enabled the segmentation and three-dimensional reconstruction of ground plane patches, and was also used to identify the three-dimensional vertical structures in a scene. Since the vertical reference does not provide a heading, image vanishing points can be used as an external heading reference. These features can be used to build a metric map useful for improving mobile robot navigation and autonomy.
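To make the focal-distance idea in the abstract concrete, the following is a minimal sketch, not taken from the dissertation itself, of how a vertical reference from inertial sensors together with the detected vanishing point of vertical scene lines constrains the focal length of a pinhole camera with square pixels and a known principal point. The function name, the numeric values, and the least-squares formulation are illustrative assumptions.

    import numpy as np

    def focal_from_vertical_vanishing_point(vp, principal_point, g_cam):
        # Hypothetical helper, for illustration only.
        # vp              : (u, v) image vanishing point of vertical scene lines, in pixels
        # principal_point : (cx, cy) in pixels
        # g_cam           : vertical (gravity) direction in camera coordinates,
        #                   as measured by the inertial sensors
        # For a pinhole camera with square pixels, a 3-D direction (vx, vy, vz)
        # has the vanishing point
        #     u - cx = f * vx / vz,   v - cy = f * vy / vz,
        # which is solved here for the focal length f in the least-squares sense.
        vx, vy, vz = np.asarray(g_cam, dtype=float) / np.linalg.norm(g_cam)
        if abs(vz) < 1e-6:
            # Optical axis nearly horizontal: the vertical vanishing point goes to
            # infinity and f is unobservable from this constraint alone.
            raise ValueError("degenerate geometry: vertical vanishing point at infinity")
        a = np.array([vx / vz, vy / vz])                  # coefficients multiplying f
        b = np.asarray(vp, dtype=float) - np.asarray(principal_point, dtype=float)
        return float(a @ b / (a @ a))                     # least-squares focal length, in pixels

    # Illustrative usage with made-up numbers.
    f_est = focal_from_vertical_vanishing_point(
        vp=(640.0, -1200.0), principal_point=(320.0, 240.0), g_cam=(0.05, -0.9, 0.43))
    print(f"estimated focal length: {f_est:.1f} pixels")

The estimate is only well conditioned when the camera is tilted away from the horizontal, since a level camera sends the vertical vanishing point towards infinity; the sketch guards against that degenerate case explicitly.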
Description: Master's dissertation in Electrical and Computer Engineering, presented to the Departamento de Engenharia Electrotécnica e de Computadores, Fac. de Ciências e Tecnologia de Coimbra
URI: https://hdl.handle.net/10316/10196
Rights: openAccess
Appears in Collections: UC - Dissertações de Mestrado
FCTUC Eng.Electrotécnica - Teses de Mestrado

Files in This Item:
File            Description    Size       Format
jlobo_MSc.pdf                  3.18 MB    Adobe PDF

Page view(s): 251 (checked on Apr 16, 2024)
Download(s): 152 (checked on Apr 16, 2024)
