Perception and Developmental Learning of Affordances in Autonomous Robots
KI 2007: Advances in Artificial Intelligence


L. Paletta, G. Fritz, F. Kintzler, J. Irran, G. Dorffner

Springer Berlin / Heidelberg, book chapter, pp. 235-250, 2007


Recently, visual perception has been explored in various ways in the
context of Gibson's concept of affordances [1]. This work focuses on
the importance of developmental learning and of perceptual cueing for
an agent's anticipation of opportunities for interaction, extending
purely functional views on visual feature representations. A concept
for the incremental learning of abstract affordances from basic ones
is presented in relation to the learning of complex affordance
features. In addition, the work proposes that the originally defined
representational concept for the perception of affordances - in terms
of using either motion or 3D cues - should be generalized to
arbitrary visual feature representations. We demonstrate the learning
of causal relations between visual cues and the associated
anticipated interactions by reinforcement learning of predictive
perceptual states. We build on a recently presented framework for the
cueing and recognition of affordance-based visual entities, which
plays an important role in robot control architectures, in analogy to
human perception. We experimentally verify the concept in a
real-world robot scenario by learning predictive visual cues from
reinforcement signals, showing that features were selected for their
relevance in predicting opportunities for interaction.
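The idea of learning which visual cues predict interaction opportunities from reinforcement signals can be illustrated with a minimal, hypothetical sketch. The cue names, actions, and toy environment below are illustrative assumptions, not the paper's actual setup: a tabular one-step value update credits a discretized visual cue with the reward obtained when the associated interaction succeeds, so cues that reliably precede successful interaction acquire high predictive value.

```python
import random

# Illustrative sketch only: the cues, actions, and reward function are
# invented for this example, not taken from the paper's experiments.
random.seed(0)

CUES = ["top-surface", "no-top-surface"]   # discretized visual feature cues
ACTIONS = ["lift", "push"]                 # candidate interactions

def interact(cue, action):
    """Toy world: lifting succeeds only when a graspable top surface is seen."""
    if cue == "top-surface" and action == "lift":
        return 1.0   # interaction succeeds -> reinforcement signal
    return 0.0

# Value table associating (cue, action) pairs with expected outcome.
q = {(c, a): 0.0 for c in CUES for a in ACTIONS}
alpha = 0.5  # learning rate

for episode in range(200):
    cue = random.choice(CUES)              # observed visual cue
    action = random.choice(ACTIONS)        # exploratory interaction
    r = interact(cue, action)
    q[(cue, action)] += alpha * (r - q[(cue, action)])  # one-step update

# After training, the cue "top-surface" has become predictive of the
# lift opportunity, while the other cue/action pairs remain near zero.
print(q[("top-surface", "lift")])
print(q[("no-top-surface", "lift")])
```

In this sketch the value of a (cue, action) pair converges toward the mean reward of that interaction given that cue, which is one simple way a feature can be "selected" for its relevance in predicting an opportunity for interaction.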