A Computational Model for Visual Learning of Affordance-like Cues

Publication from Digital

Paletta L., Fritz G., Rome E., Dorffner G.

Proc. 29th European Conference on Visual Perception (ECVP 2006), St. Petersburg, Russia, August 20–25; Perception 35, 2007

Abstract:

There are human affordances that are innate, having emerged through evolutionary development, and there are affordances that have to be learned (Gibson, 1979; Edwards et al., 2003, Brain and Cognition, 495–502). In technical vision systems, affordance-based visual object representations are function-based and have so far been predetermined by heuristic engineering (Stark and Bowyer, 1995, Image Understanding 59, 1–21). In contrast, we propose that the selection of relevant predictive visual cues should be performed by machine learning methodology, operating on a complete spectrum of perceptual entities. In particular, we investigate local gradient patterns in 2D (SIFT features; Lowe, 2004, International Journal of Computer Vision 60, 91–110) for affordance cueing, alongside other visual modalities such as colour, shape, and 3D information. Predictive features are then derived from attribute-based rules extracted from a decision-tree classifier (Quinlan, 1993). Decision trees are shown to be capable of providing a predictive feature configuration for the representation of an affordance-like cue within an information-theoretic framework. Experimental results verify the conceptual framework from the viewpoint of an autonomous mobile agent engaged in a robotic system scenario.
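
For illustration, the following is a minimal sketch of the learning step described above, assuming a Python/scikit-learn setting with synthetic data; the feature names, the toy labelling rule, and the data are assumptions for demonstration, not the authors' implementation. A decision tree is trained with an entropy (information-gain) split criterion, in the spirit of Quinlan's C4.5, and the learned tree is dumped as attribute-based if-then rules.

    # Sketch only: synthetic stand-in for the abstract's learning step.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Hypothetical per-region feature vectors: SIFT-like gradient
    # statistics plus colour, shape, and 3D cues, as listed above.
    feature_names = ["sift_dominant_grad", "sift_contrast",
                     "hue_mean", "shape_elongation", "depth_discontinuity"]
    X = rng.random((200, len(feature_names)))

    # Toy ground truth (an assumption): the region carries the
    # affordance-like cue when a strong gradient pattern co-occurs
    # with a nearby depth discontinuity.
    y = ((X[:, 0] > 0.5) & (X[:, 4] > 0.4)).astype(int)

    # criterion="entropy" selects splits by information gain, i.e. an
    # information-theoretic criterion as in C4.5-style trees.
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                                  random_state=0)
    tree.fit(X, y)

    # Reading the tree off as attribute-based rules yields a predictive
    # feature configuration for the affordance-like cue.
    print(export_text(tree, feature_names=feature_names))

In this sketch, export_text prints each root-to-leaf path as a conjunction of attribute thresholds, which is one concrete way such attribute-based rules can be extracted from a trained tree.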