Publication

A method to compute saliency regions in 3D video based on fusion of feature maps

Abstract(s)

Efficient computation of visual saliency regions has been an active research problem in recent years, but for 3D content no definitive solutions exist. This paper presents a computational method to determine saliency regions in 3D video, based on the fusion of three feature maps containing perceptually relevant information from the spatial, temporal and depth dimensions. The proposed method follows a bottom-up approach to predict the 3D regions where observers tend to hold their gaze for longer periods. The fusion of the feature maps is combined with a center-bias weighting function to determine the 3D visual saliency map. For validation and performance evaluation, a publicly available database of 3D video sequences with corresponding fixation density maps was used as ground truth. The experimental results show that the proposed method achieves better performance than other state-of-the-art models.
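As a rough illustration of the kind of pipeline the abstract describes, the sketch below fuses three normalized feature maps (spatial, temporal, depth) and modulates the result with a Gaussian center-bias weighting. The min-max normalization, the equal default fusion weights, the Gaussian form of the bias, and the function names (normalize, center_bias, fuse_saliency) are assumptions made for illustration only; they are not taken from the paper itself.

```python
# Illustrative sketch of feature-map fusion with center-bias weighting.
# The spatial/temporal/depth maps here are random placeholders; the paper's
# actual feature extraction, normalization and fusion weights are not
# specified in this record, so everything below is an assumption.
import numpy as np


def normalize(feature_map: np.ndarray) -> np.ndarray:
    """Rescale a feature map to the [0, 1] range."""
    fmin, fmax = feature_map.min(), feature_map.max()
    if fmax - fmin < 1e-12:
        return np.zeros_like(feature_map)
    return (feature_map - fmin) / (fmax - fmin)


def center_bias(height: int, width: int, sigma: float = 0.3) -> np.ndarray:
    """Isotropic Gaussian weighting centered on the frame (sigma relative to frame size)."""
    ys = np.linspace(-0.5, 0.5, height)[:, None]
    xs = np.linspace(-0.5, 0.5, width)[None, :]
    return np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))


def fuse_saliency(spatial: np.ndarray,
                  temporal: np.ndarray,
                  depth: np.ndarray,
                  weights=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Weighted average of the normalized feature maps, modulated by the center bias."""
    maps = [normalize(m) for m in (spatial, temporal, depth)]
    w = np.asarray(weights, dtype=float)
    fused = sum(wi * mi for wi, mi in zip(w, maps)) / w.sum()
    return normalize(fused * center_bias(*fused.shape))


if __name__ == "__main__":
    h, w = 288, 352  # example frame size
    rng = np.random.default_rng(0)
    s, t, d = (rng.random((h, w)) for _ in range(3))  # stand-in feature maps
    saliency = fuse_saliency(s, t, d)
    print(saliency.shape, float(saliency.min()), float(saliency.max()))
```

In practice the three inputs would come from per-frame spatial features, motion between frames and the depth (or disparity) map, and the fused map would be compared against fixation density maps for evaluation; the relative weights and the strength of the center bias are free parameters in this sketch.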

Description

Conference name: IEEE International Conference on Multimedia and Expo, ICME 2015
Conference date: 29 June 2015 - 3 July 2015
Article number: 7177474

Keywords

3D visual saliency maps; Center-Bias weighting and perceptual features; visual attention models

Citation

L. Ferreira, L. A. da Silva Cruz and P. Assuncao, "A method to compute saliency regions in 3D video based on fusion of feature maps," 2015 IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015, pp. 1-6, doi: 10.1109/ICME.2015.7177474.

Publisher

IEEE

CC License

Without CC license