The present invention is a method and system for estimating the visual target at which people are looking, based on automatic image measurements. The system utilizes image measurements from both face-view cameras and top-down-view cameras. The cameras are calibrated with respect to the site and the visual target, so that the gaze target can be determined from the estimated position and gaze direction of a person. Face detection and two-dimensional pose estimation locate and normalize the face of the person so that the eyes can be accurately localized and the three-dimensional facial pose can be estimated. The eye gaze is estimated either from the positions of the localized eyes and irises or from the eye image itself, depending on the quality of the image. The gaze direction is then estimated from the eye gaze measurement in the context of the three-dimensional facial pose. From the top-down view, the body of the person is detected and tracked, and the position of the head is estimated using a body blob model that depends on the body position in the view. The gaze target is determined from the estimated gaze direction, the estimated head position, and the camera calibration. The gaze target estimation can provide either a gaze trajectory of the person or a collective gaze map accumulated over many instances of gaze.
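The final step described above, determining the gaze target from the gaze direction, the head position, and the calibration, can be sketched as a ray-plane intersection. The following is a minimal illustrative sketch, not the claimed implementation: the yaw/pitch angle convention, the additive combination of facial pose and eye gaze, and the assumption of a planar target (e.g., a shelf or wall at a calibrated depth) are all simplifying assumptions introduced here for clarity.

```python
import math

def direction_from_yaw_pitch(yaw, pitch):
    # Unit gaze vector from yaw (left/right) and pitch (up/down), in radians.
    # Convention (an assumption): yaw = 0, pitch = 0 faces the +z axis.
    return (
        math.sin(yaw) * math.cos(pitch),
        math.sin(pitch),
        math.cos(yaw) * math.cos(pitch),
    )

def gaze_direction(head_yaw, head_pitch, eye_yaw, eye_pitch):
    # Simplified combination of the 3D facial pose and the eye gaze
    # measurement: the eye angles are treated as offsets to the head angles.
    return direction_from_yaw_pitch(head_yaw + eye_yaw, head_pitch + eye_pitch)

def gaze_target_on_plane(head_pos, direction, plane_z=0.0):
    # Intersect the gaze ray from the estimated head position with a
    # calibrated target plane at depth plane_z (world coordinates).
    x0, y0, z0 = head_pos
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        return None  # gaze ray is parallel to the target plane
    t = (plane_z - z0) / dz
    if t <= 0:
        return None  # target plane lies behind the person
    return (x0 + t * dx, y0 + t * dy)

# Example: a person 2 m from the target plane, head at 1.6 m height,
# facing the plane directly (yaw = pi in this convention).
d = gaze_direction(math.pi, 0.0, 0.0, 0.0)
target = gaze_target_on_plane((0.0, 1.6, 2.0), d, plane_z=0.0)
```

Accumulating such target points over time yields the gaze trajectory of one person; aggregating them over many people yields the collective gaze map.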