Observing and Modeling the Embodiment of Attention in Interpreting Observed Action
Report from Dagstuhl Seminar 12491, Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, 1/2013
Computational modeling of visual attention has recently emerged as an important field of computer science and artificial intelligence. From studies of human attention we know that many brain areas, including those processing motor signals, are involved in computing saliency as an indicator of which information the next action should consider. The concept of embodied attention treats attention processing as a meaningful system component within the perception-action cycle of autonomous systems, in which saliency computation should be operated according to the task at hand. Previous work (Paletta et al., 2005) developed a model of eye movements and belief aggregation for the task of object recognition, in which sequential attention strategies are adjusted in the framework of reinforcement learning. Furthermore, contextual rules (Perko et al., 2009) may prime the location of attention processing in the visual information. Current work aims to include physical actions, such as body posture and position dynamics, in this framework. In a first step, we extract ground-truth data from human studies using eye-tracking glasses and a tuned SLAM (simultaneous localisation and mapping) framework that allows human gaze and integrated saliency measures to be mapped, with high precision, directly onto an acquired three-dimensional model of the environment, using wearable interfaces that enable natural behaviour and without artificial markers (Paletta et al., 2013). Future work will use this human ground truth to learn extended models of embodied attention from human behaviour.
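To illustrate the reinforcement-learning adjustment of sequential attention strategies, the following is a minimal, hypothetical sketch: tabular Q-learning over a handful of discrete fixation locations, where fixating the one informative location yields reward. The environment, state encoding, and reward are illustrative assumptions, not the formulation of Paletta et al. (2005).

```python
import random

def train_fixation_policy(n_locations=4, target=2, episodes=500,
                          alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Toy sequential-attention learner (illustrative assumption).

    State: index of the last fixated location (-1 before the first fixation).
    Action: next location to fixate.
    Reward: +1 when the informative (target) location is fixated, else 0.
    """
    rng = random.Random(seed)
    # Tabular Q-values over (state, action) pairs.
    q = {(s, a): 0.0 for s in range(-1, n_locations) for a in range(n_locations)}
    for _ in range(episodes):
        state = -1
        for _ in range(n_locations):  # short fixation sequence per episode
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                action = rng.randrange(n_locations)
            else:
                action = max(range(n_locations), key=lambda a: q[(state, a)])
            reward = 1.0 if action == target else 0.0
            next_state = action
            # Standard Q-learning update.
            best_next = max(q[(next_state, a)] for a in range(n_locations))
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q

q = train_fixation_policy()
# Greedy first fixation after training should be the informative location.
best_first_fixation = max(range(4), key=lambda a: q[(-1, a)])
print(best_first_fixation)
```

In the full model, belief aggregation over successive fixations would replace the scalar reward used here; this sketch only shows the policy-learning loop.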
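The mapping of gaze onto the acquired three-dimensional model can be pictured as casting a gaze ray from the tracked head pose and intersecting it with the model's surface. A generic way to do this is ray-triangle intersection (Moeller-Trumbore); the sketch below is standard geometry under the assumption that SLAM supplies the ray origin and the eye tracker its direction, and is not the authors' actual pipeline.

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore: distance t along the ray to the triangle, or None."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:          # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:    # outside first barycentric bound
        return None
    qv = cross(s, e1)
    v = f * dot(direction, qv)
    if v < 0.0 or u + v > 1.0:  # outside triangle
        return None
    t = f * dot(e2, qv)
    return t if t > eps else None

# Gaze ray from the head position, looking along +z (illustrative values).
origin, direction = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
# One triangle of the environment model, lying in the plane z = 5.
t = ray_triangle(origin, direction,
                 [-1.0, -1.0, 5.0], [3.0, -1.0, 5.0], [-1.0, 3.0, 5.0])
gaze_point = [origin[i] + t * direction[i] for i in range(3)]
print(gaze_point)  # the 3D point the gaze ray hits: [0.0, 0.0, 5.0]
```

A real system would run this test against every triangle of the reconstructed mesh (typically via a spatial index) and keep the nearest hit, then accumulate saliency at the hit points.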