A salient scene is an area within an image containing visual elements that stand out from their surroundings. Salient scenes are important for distinguishing landmarks in first-person-view (FPV) applications and for determining spatial relations in images. The relative spatial relations between salient scenes act as visual guides that are easily accepted and understood by users of FPV applications. However, current digital navigable maps and location-based services fall short of providing information on visual spatial relations to users, a shortcoming that critically limits the popularity and innovation of FPV applications. This paper addresses the issue by proposing a method for detecting visually salient scene areas (SSAs) and deriving their relative spatial relationships from continuous panoramas. The method comprises three critical steps. First, an SSA detection approach is introduced that fuses region-based saliency derived from super-pixel segmentation with the frequency-tuned saliency model, focusing on a segmented landmark area in a panorama. Second, a street-view-oriented SSA generation method is introduced that matches and merges the visual SSAs from continuous panoramas. Third, a continuous geotagged panorama-based referencing approach is introduced to derive the relative spatial relationships of SSAs from continuous panoramas, including the relative azimuth, the elevation angle, and the relative distance. Experimental results using Baidu street-view panoramas show that the error of the SSA relative azimuth angle is approximately ±6° (with an average error of 2.67°) and that of the relative elevation angle is approximately ±4° (with an average error of 1.32°), demonstrating the feasibility of the proposed approach. The method proposed in this study can facilitate the development of FPV applications, such as augmented reality (AR) and pedestrian navigation, that rely on proper spatial relations.
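To illustrate the kind of relative spatial relationships the third step derives, the sketch below computes the azimuth, elevation angle, and distance from a geotagged panorama position to a scene point. This is a minimal illustration under a local flat-earth (equirectangular) approximation, which is reasonable for short street-view distances; the function name and this approximation are assumptions for illustration, not the paper's actual referencing algorithm.

```python
import math

def relative_azimuth_elevation(cam_lat, cam_lon, cam_alt,
                               tgt_lat, tgt_lon, tgt_alt):
    """Approximate azimuth (deg, clockwise from north), elevation angle
    (deg above horizontal), and straight-line distance (m) from a
    panorama position to a target point, using a local flat-earth
    approximation. All latitudes/longitudes in degrees, altitudes in m.
    Illustrative sketch only, not the paper's referencing method."""
    R = 6371000.0  # mean Earth radius in metres (assumed constant)
    # Local north/east offsets via the equirectangular approximation
    d_north = math.radians(tgt_lat - cam_lat) * R
    d_east = (math.radians(tgt_lon - cam_lon) * R
              * math.cos(math.radians(cam_lat)))
    d_up = tgt_alt - cam_alt
    azimuth = math.degrees(math.atan2(d_east, d_north)) % 360.0
    horiz = math.hypot(d_east, d_north)          # horizontal distance
    elevation = math.degrees(math.atan2(d_up, horiz))
    distance = math.hypot(horiz, d_up)
    return azimuth, elevation, distance
```

For example, a scene point roughly 100 m due north of the panorama and 10 m above it yields an azimuth near 0° and an elevation angle of a few degrees; differencing such bearings between two SSAs gives their relative azimuth.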
Fangli Guan, Zhixiang Fang, Tao Yu, Mingxiang Feng, Fan Yang