
Visual recovery of saliency maps from human attention in 3D environments

Publication from DIGITAL

Santner, K., Fritz, G., Paletta, L., and Mayer, H.

Proc. IEEE International Conference on Robotics and Automation, ICRA 2013, Karlsruhe, Germany, 1/2013

Abstract:

The estimation of human attention has recently been addressed in the context of human-robot interaction. Joint work spaces already exist today and challenge cooperating systems to focus jointly on common objects, scenes, and work niches. With the advent of Google Glass and increasingly affordable wearable eye tracking, the monitoring of human attention will soon become ubiquitous. The presented work describes, for the first time, a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It achieves full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. A study on the precision of this method reports a mean projection error of 1.1 cm and a mean angular error of 0.6° within the chosen 3D model; the method's precision is thus limited only by that of the measuring instrument itself. This methodology opens new opportunities for joint attention studies and brings new potential to automated processing for human factors technologies.
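The geometric core of such a pipeline, recovering the 3D gaze pointer, amounts to casting the measured gaze ray from the estimated 6DoF head pose into the previously acquired 3D model and taking the nearest surface intersection. The Python sketch below illustrates this step only; it is not the authors' implementation. It assumes the head pose (rotation R and translation t in model coordinates) has already been estimated, that the eye tracker reports a unit gaze direction in the camera frame, and that the environment model is available as a triangle mesh; all function and variable names are illustrative.

    import numpy as np

    def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Moeller-Trumbore ray/triangle test; returns hit distance t or None."""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:                 # ray parallel to triangle plane
            return None
        inv_det = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:             # outside first barycentric bound
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:         # outside second barycentric bound
            return None
        t = np.dot(e2, q) * inv_det
        return t if t > eps else None      # only hits in front of the eye

    def gaze_pointer_3d(head_R, head_t, gaze_dir_cam, triangles):
        """Map a gaze direction (camera frame) onto a 3D environment model.

        head_R, head_t : estimated 6DoF head/camera pose in model coordinates
        gaze_dir_cam   : unit gaze direction from the eye tracker, camera frame
        triangles      : iterable of (v0, v1, v2) vertex arrays of the model mesh
        Returns the nearest intersection point, i.e. the 3D fixation estimate.
        """
        origin = head_t
        direction = head_R @ gaze_dir_cam   # rotate gaze ray into model frame
        best_t = np.inf
        for v0, v1, v2 in triangles:        # brute force over the mesh
            t = intersect_ray_triangle(origin, direction, v0, v1, v2)
            if t is not None and t < best_t:
                best_t = t
        return origin + best_t * direction if np.isfinite(best_t) else None

    # Example: gaze straight ahead from the origin onto a wall triangle 2 m away
    tri = (np.array([-1.0, -1.0, 2.0]),
           np.array([ 1.0, -1.0, 2.0]),
           np.array([ 0.0,  1.5, 2.0]))
    hit = gaze_pointer_3d(np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), [tri])
    print(hit)   # -> [0. 0. 2.]

In a real-time system over a dense scene model, the per-triangle loop would be replaced by an accelerated spatial structure such as a BVH or octree; the brute-force loop is kept here only for clarity.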