
A framework for computer vision and cognitive robotics

Visual perception is critically important because sensory feedback allows an agent to make decisions, trigger certain behaviours, and adapt them to the current situation. This is true not only for humans but also for autonomous robots. Visual feedback enables a robot to build a cognitive mapping between sensory inputs and action outputs, closing the sensorimotor loop and allowing it to perform actions and adapt to dynamic environments. We aim to build a visual perception system for robots, based on human vision, that provides this feedback and leads to more autonomous and adaptive behaviour. Our research platform is the open-system humanoid robot iCub, developed within the EU-funded ‘RobotCub’ project. In our setup it consists of two anthropomorphic arms, a head, and a torso, and is roughly the size of a human child. The iCub was designed for object manipulation research, and it is also an excellent high degree-of-freedom (DOF) experimental platform for artificial (and human) cognition research and embodied artificial intelligence (AI) development.

To localise objects in its environment, the iCub, like a human, must rely solely on a visual system based on stereo vision. The two cameras are mounted in the head; their pan and tilt are jointly controlled, and vergence provides a third DOF. The neck provides three more DOF for gazing. We describe a framework, named icVision, that supports the learning of hand-eye coordination and object manipulation by solving visual perception problems in a biologically inspired way. To read this external content in full, download the paper from the author archives: http://www.idsia.ch/~foerster/2013/1/bica2012_submission_32.pdf
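As a rough illustration of how a stereo head like the iCub's can localise an object (this is not the icVision implementation, just a minimal sketch assuming a calibrated, rectified camera pair), depth can be recovered from the disparity between matching pixels in the left and right images. All names and numeric values below are illustrative placeholders.

```python
import numpy as np

def triangulate(uL: float, uR: float, v: float,
                f: float, cx: float, cy: float, baseline: float) -> np.ndarray:
    """Return the 3D position (X, Y, Z) of a point in the left-camera frame.

    Assumes a rectified stereo pair: f is the focal length in pixels,
    (cx, cy) the principal point, baseline the distance between the two
    camera centres in metres, and (uL, v), (uR, v) the matching pixel
    coordinates of the object in the left and right images (same row).
    """
    disparity = uL - uR              # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    Z = f * baseline / disparity     # depth grows as disparity shrinks
    X = (uL - cx) * Z / f            # back-project the left-image pixel
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Example with placeholder calibration values (the real iCub calibration differs):
# f = 400 px, principal point (160, 120), 68 mm baseline.
print(triangulate(uL=190.0, uR=170.0, v=130.0,
                  f=400.0, cx=160.0, cy=120.0, baseline=0.068))
```

With these placeholder numbers the disparity is 20 pixels, giving a depth of about 1.36 m; a smaller disparity would place the object further away.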
