21 November 2014

Week 3 of The Build

Although we seemed to make good progress last week, much of this week was spent reimplementing work we had already done and thought was complete.


We completed calibration of the Kinect and projector last week, but it did not incorporate the Kinect's depth sensor values, which we needed to include. Our final calibration method is based on the one used by some of the team in the BIG lab.


Above is our calibration in action: a checkerboard is projected and the Kinect recognises the pattern. The difference this time is that we take six recordings instead of the single one we used last week, each with the checkerboard at a slightly different angle and orientation (shown below). This allows the calibration program to calculate the camera's intrinsic and extrinsic parameters [1]. A projection matrix and a view matrix are then calculated; these are what we feed into our Ogre application.
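To make the last step concrete, here is a minimal sketch of how calibrated intrinsics can be turned into an OpenGL-style projection matrix of the kind a rendering engine such as Ogre consumes. The focal lengths, principal point, and clip planes below are hypothetical placeholder values, not our actual calibration output, and the sign conventions shown are one common choice (conventions vary between libraries):

```python
import numpy as np

def intrinsics_to_projection(fx, fy, cx, cy, width, height, near, far):
    """Build an OpenGL-style 4x4 projection matrix from pinhole camera
    intrinsics. Maps camera-space points (looking down -Z) into clip
    space, with the near plane landing at NDC z = -1 and the far plane
    at NDC z = +1."""
    return np.array([
        [2.0 * fx / width,  0.0,                1.0 - 2.0 * cx / width,      0.0],
        [0.0,               2.0 * fy / height,  2.0 * cy / height - 1.0,     0.0],
        [0.0,               0.0,               -(far + near) / (far - near),
                                               -2.0 * far * near / (far - near)],
        [0.0,               0.0,               -1.0,                         0.0],
    ])

# Hypothetical intrinsics, roughly Kinect-like resolution (640x480)
P = intrinsics_to_projection(fx=525.0, fy=525.0, cx=320.0, cy=240.0,
                             width=640, height=480, near=0.1, far=100.0)
```

A quick sanity check on such a matrix is to push points on the near and far planes through it and confirm they land at NDC z = -1 and z = +1 after the perspective divide.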


To improve calibration further, we remounted the projector and Kinect the correct way up on our rig, which lowered our calibration error. From talking to BIG researchers, we learned that an error value of 2 or less is acceptable. Our new rig configuration gives us error values in this range; previously our error was about 5, which was far too high.
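Calibration error of this kind is typically reported as an RMS reprojection error: the root-mean-square pixel distance between the checkerboard corners the camera actually detected and where the calibrated model says they should appear. A small sketch, with made-up corner coordinates for illustration:

```python
import numpy as np

def rms_reprojection_error(observed, reprojected):
    """Root-mean-square pixel distance between detected corner positions
    and the positions predicted by reprojecting through the calibrated
    camera model. Lower is better; values near or below ~2 px are the
    range we were told to aim for."""
    diff = np.asarray(observed, dtype=float) - np.asarray(reprojected, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff * diff, axis=1))))

# Hypothetical example: two corners, off by 1 px and 2 px respectively
err = rms_reprojection_error([[100.0, 100.0], [200.0, 100.0]],
                             [[101.0, 100.0], [200.0, 102.0]])
```

With per-corner errors of 1 px and 2 px, the RMS works out to sqrt((1 + 4) / 2) ≈ 1.58 px, which would fall inside the acceptable range above.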


References:

[1] R. Sagawa and Y. Yagi, "Accurate calibration of intrinsic camera parameters by observing parallel light pairs," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2008), 19-23 May 2008, pp. 1390-1397. doi: 10.1109/ROBOT.2008.4543397.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4543397&isnumber=4543169
