Visual Learning of Affordance-Based Cues
Fritz G., Paletta L., Kumar M., Dorffner G., Breithaupt R., Rome E.
Proc. 9th International Conference on the Simulation of Adaptive Behavior (SAB 2006), 2006
This work addresses the relevance of Gibson’s concept of affordances
(Gibson 1979) for visual perception in interactive and autonomous
robotic systems. Extending existing functional views on visual
feature representations, we identify the importance of learning in
perceptual cueing for the anticipation of opportunities for interaction
of robotic agents. We investigate how the representational concept
originally defined for the perception of affordances, in terms of
either optical flow or heuristically determined 3D features of
perceptual entities, should be generalized to arbitrary visual feature
representations. In this context, we demonstrate the learning of causal
relationships between visual cues and predictable interactions, using
both 3D and 2D information. In addition, we outline a new framework
for the cueing and recognition of affordance-like visual entities that
could play an important role in future robot control architectures.
We argue that affordance-like perception should enable systems to
react to environmental stimuli both more efficiently and more
autonomously, and should provide the potential to plan on the basis of
responses to more complex perceptual configurations. We verify the
concept with a concrete implementation that applies state-of-the-art
visual descriptors and regions of interest extracted from a simulated
robot scenario, and show that these features were successfully
selected for their relevance in predicting opportunities for robot
interaction.
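
The relevance-selection step described in the abstract can be sketched in a few lines. The following Python example is an illustration under stated assumptions, not the implementation from the paper: synthetic feature vectors stand in for descriptor responses at regions of interest, interaction outcomes come from a toy rule, and candidate cues are ranked by mutual information with the outcome before being used to predict an interaction opportunity.

    # Minimal sketch (not the authors' implementation): selecting visual
    # cues that are relevant for predicting an interaction outcome.
    # All data below is synthetic and serves illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic training set: 200 regions of interest, 12 candidate visual
    # cues each. Cues 2 and 7 are linked to the interaction outcome by
    # construction; the remaining cues are noise.
    n_samples, n_cues = 200, 12
    X = rng.random((n_samples, n_cues))
    y = ((X[:, 2] > 0.5) & (X[:, 7] > 0.4)).astype(int)

    def mutual_information(x, y, bins=8):
        """Histogram estimate of the mutual information (in nats)
        between a continuous cue x and a binary outcome y."""
        joint, _, _ = np.histogram2d(x, y, bins=(bins, 2))
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    # Rank every candidate cue by its relevance for predicting the
    # outcome and keep the two highest-scoring ones.
    scores = np.array([mutual_information(X[:, j], y) for j in range(n_cues)])
    selected = np.argsort(scores)[::-1][:2]
    print("selected cues:", selected)  # should recover cues 2 and 7

    def predict(x_new):
        """Nearest-class-mean prediction using only the selected cues."""
        m0 = X[y == 0][:, selected].mean(axis=0)
        m1 = X[y == 1][:, selected].mean(axis=0)
        return int(np.linalg.norm(x_new[selected] - m1)
                   < np.linalg.norm(x_new[selected] - m0))

    print("predicted interaction opportunity:", predict(rng.random(n_cues)))

In the paper itself, the cues are state-of-the-art visual descriptors extracted from a simulated robot scenario; the sketch mirrors only the relevance-selection and prediction steps.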