Published in UbiComp, 2015
Camera calibration helps users better interact with their surrounding environment. In this work, we aim to accelerate camera calibration in indoor settings by selecting a small but sufficient set of keypoints. Our framework consists of two phases. In the offline phase, we cluster photos labeled with Wi-Fi and gyroscope data according to a learned distance metric; the photos in each cluster form a “co-scene”. We further select a few frequently appearing keypoints in each co-scene as “useful keypoints” (UKPs). In the online phase, when a query is issued, only the UKPs from the nearest co-scene are retrieved, and the extrinsic camera parameters are then inferred from them using multiple-view geometry (MVG) techniques. Experimental results show that our framework supports calibration both effectively and efficiently.
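To make the online phase concrete, below is a minimal, illustrative Python sketch using OpenCV. It is not the paper's implementation: the nearest co-scene is picked with a plain Euclidean distance (the paper learns this metric), pose recovery uses PnP as one standard MVG route to extrinsic parameters, and all names (`CoScene`, `nearest_co_scene`, `estimate_extrinsics`) are hypothetical. Keypoint matching between the query image and the UKPs is assumed to have already produced 2D-3D correspondences.

```python
import numpy as np
import cv2

class CoScene:
    """Hypothetical co-scene: a sensor signature plus its UKPs' 3D positions."""
    def __init__(self, signature, ukp_3d):
        self.signature = np.asarray(signature, dtype=np.float64)  # Wi-Fi/gyro features
        self.ukp_3d = np.asarray(ukp_3d, dtype=np.float64)        # N x 3 world coordinates

def nearest_co_scene(co_scenes, query_signature):
    """Pick the co-scene whose sensor signature is closest to the query's.
    Euclidean distance stands in for the learned metric here."""
    q = np.asarray(query_signature, dtype=np.float64)
    return min(co_scenes, key=lambda c: np.linalg.norm(c.signature - q))

def estimate_extrinsics(scene, image_points, camera_matrix):
    """Recover rotation and translation from 2D-3D UKP correspondences via PnP.
    image_points: N x 2 pixel locations matched to scene.ukp_3d (matching omitted)."""
    ok, rvec, tvec = cv2.solvePnP(scene.ukp_3d,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```

Restricting pose estimation to the few UKPs of a single co-scene, rather than matching against all keypoints in the building, is what makes the online query cheap.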
Recommended citation: Li, Huan, Pai Peng, Hua Lu, Lidan Shou, Ke Chen, and Gang Chen. "E²C²: efficient and effective camera calibration in indoor environments." In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, pp. 9-12. ACM, 2015.