Scientific Publication

The MACS project: An Approach to Affordance-inspired Robot Control

Publication from Digital

Rome E., Paletta L., Sahin E., Dorffner G., Hertzberg J., May S., Fritz G., Irran J., Kintzler F., Lörken C., Ugur E., Breithaupt R.

In: Towards Affordance-Based Robot Control, Springer Verlag, Berlin, Germany, 2008


In this position paper, we present an outline of the MACS approach to affordance-inspired robot control. An affordance, a concept from Ecological Psychology, denotes a specific relationship between an animal and its environment. Perceiving an affordance means perceiving an interaction possibility that is specific to the animal's perception and action capabilities. Perceiving an affordance does not involve appearance-based object recognition, but rather feature-based perception of object functions. The central hypothesis of MACS is that an affordance-inspired control architecture enables a robot to perceive more interaction possibilities than a traditional architecture that relies on appearance-based object recognition alone. We describe how the concept of affordances can be exploited for controlling a mobile robot with manipulation capabilities. In particular, we describe how affordance support can be built into robot perception, how learning mechanisms can generate affordance-like relations, how this affordance-related information is represented, and how it can be used by a planner for realizing goal-directed robot behavior. We present both the MACS demonstrator and simulator, and summarize the development and experiments performed so far. By interfacing perception and goal-directed action in terms of affordances, we provide a new way for reasoning and learning to connect with reactive robot control. We show the potential of this new methodology by going beyond navigation-like tasks towards goal-directed autonomous manipulation in our project demonstrators.