Learning Predictive Features in Affordance based Robotic Perception Systems
Lucas Paletta, Gerald Fritz, Ralph Breithaupt, Erich Rome, Georg Dorffner
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, Oct. 2006
This work concerns the relevance of Gibson's concept of affordances
for visual perception in interactive and autonomous robotic systems.
Extending existing functional views on visual feature representations,
we identify the importance of learning in perceptual cueing for
anticipating opportunities for interaction of robotic agents.
We investigate how the originally defined representational concept
for the perception of affordances - in terms of either optical
flow or heuristically determined 3D features of perceptual entities
- should be generalized to arbitrary visual feature representations.
In this context we demonstrate the learning of causal relationships
between visual cues and predictable interactions, and emphasize
a novel framework for cueing and hypothesis verification of affordances
that could play an important role in future robot control architectures.
We argue that affordance based perception should enable systems to
react to environmental stimuli both more efficiently and more autonomously,
and provide the potential to plan on the basis of responses to more
complex perceptual configurations. We verify the concept with a concrete
implementation applying state-of-the-art visual descriptors and regions
of interest within a simulated robot scenario, and show that these
features are successfully selected for predicting opportunities
for robot interaction.
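The core idea of learning a predictive mapping from visual cues to interaction outcomes can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the feature names (region area, optical-flow magnitude) and the perceptron learner are assumptions chosen for brevity, standing in for the state-of-the-art descriptors and learning method used in the actual system.

```python
# Sketch: learn a linear cue -> affordance predictor with a simple perceptron.
# Features and labels are hypothetical, for illustration only.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """samples: list of feature vectors; labels: +1 (affords) / -1 (does not)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else -1
            if pred != y:  # mistake-driven weight update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predicts_affordance(w, b, x):
    """True if the learned cue predicts an interaction opportunity."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Toy visual cues per region: [region_area, mean_flow_magnitude]
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y = [1, 1, -1, -1]  # +1: region affords interaction in the toy scenario

w, b = train_perceptron(X, y)
print(predicts_affordance(w, b, [0.85, 0.15]))  # large, near-static region
```

Once trained, such a predictor serves as the cueing stage: it proposes candidate affordances from cheap visual features, which the verification stage of the framework can then confirm or reject through actual interaction.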
Keywords: affordances, visual cueing, feature recognition.