I was focused on course projects and TA work for the past two weeks, so I didn't make much progress. I have now built the first demo: I can move the tablet around and see a cube from different angles.
The pose data structure returned by Tango actually describes the pose of the target frame expressed in the coordinate system of the base frame; in other words, as a matrix it maps points from the target frame into the base frame. So if I want to transform a vector the other way, from the first frame into the current frame, I need to invert the matrix first.
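To make this concrete, here is a rough sketch of the math using the Java API. I'm assuming `TangoPoseData` stores the translation as a 3-vector and the rotation as an (x, y, z, w) quaternion, which is how I read the docs; the helper names are my own.

```java
import android.opengl.Matrix;
import com.google.atap.tangoservice.TangoPoseData;

public class PoseMath {
    // Build a column-major 4x4 OpenGL matrix from a Tango pose.
    // The result is base_T_target: it maps points expressed in the
    // target frame into the base frame.
    public static float[] toMatrix(TangoPoseData pose) {
        double x = pose.rotation[0], y = pose.rotation[1],
               z = pose.rotation[2], w = pose.rotation[3];
        float[] m = new float[16]; // zero-initialized in Java
        // Standard quaternion-to-rotation-matrix expansion, column-major.
        m[0]  = (float) (1 - 2 * (y * y + z * z));
        m[1]  = (float) (2 * (x * y + w * z));
        m[2]  = (float) (2 * (x * z - w * y));
        m[4]  = (float) (2 * (x * y - w * z));
        m[5]  = (float) (1 - 2 * (x * x + z * z));
        m[6]  = (float) (2 * (y * z + w * x));
        m[8]  = (float) (2 * (x * z + w * y));
        m[9]  = (float) (2 * (y * z - w * x));
        m[10] = (float) (1 - 2 * (x * x + y * y));
        m[12] = (float) pose.translation[0];
        m[13] = (float) pose.translation[1];
        m[14] = (float) pose.translation[2];
        m[15] = 1f;
        return m;
    }

    // Transform a homogeneous point {x, y, z, 1} from the base frame
    // into the target frame: invert base_T_target to get target_T_base,
    // then apply it.
    public static float[] baseToTarget(TangoPoseData pose, float[] pointInBase) {
        float[] baseTtarget = toMatrix(pose);
        float[] targetTbase = new float[16];
        Matrix.invertM(targetTbase, 0, baseTtarget, 0);
        float[] result = new float[4];
        Matrix.multiplyMV(result, 0, targetTbase, 0, pointInBase, 0);
        return result;
    }
}
```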
The transformation formula in the documentation is slightly different from the one in the sample code. The sample code multiplies in an extra term so that eye space is aligned with the color camera's coordinate frame, while the documentation's version aligns eye space with the "device" frame, which sits at the center of the tablet. Perhaps this matters for AR applications, where rendered content has to line up with the camera image?
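For reference, here is how I understand the two versions, side by side. This is only a sketch: `ssTdevice` is the start-of-service-to-device pose matrix from above, and `deviceTcolor` is the device-to-color-camera extrinsic, which Tango provides separately and which I'm not showing how to fetch here.

```java
import android.opengl.Matrix;

public class ViewMatrices {
    // Documentation version: eye space aligned with the device frame.
    // viewMatrix = inverse(ss_T_device)
    public static float[] deviceView(float[] ssTdevice) {
        float[] view = new float[16];
        Matrix.invertM(view, 0, ssTdevice, 0);
        return view;
    }

    // Sample-code version: compose the device-to-color-camera extrinsic
    // first, so eye space is aligned with the color camera frame.
    // viewMatrix = inverse(ss_T_device * device_T_color)
    public static float[] colorCameraView(float[] ssTdevice, float[] deviceTcolor) {
        float[] ssTcolor = new float[16];
        Matrix.multiplyMM(ssTcolor, 0, ssTdevice, 0, deviceTcolor, 0);
        float[] view = new float[16];
        Matrix.invertM(view, 0, ssTcolor, 0);
        return view;
    }
}
```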
As time elapses, the cube drifts away from its original position. This is because the Motion Tracking module estimates motion incrementally, frame to frame, so small errors accumulate over time. Maybe the Area Learning module can help with this.
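If I do try Area Learning, I believe the fix amounts to requesting poses against the area-description frame instead of the start-of-service frame, so relocalization against the learned area can correct the drift. A minimal sketch, assuming my reading of the Java API's frame-pair constants is right:

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

public class DriftFix {
    // Ask for the device pose relative to the learned area description
    // rather than relative to where the service started; relocalization
    // against the stored description should correct accumulated drift.
    public static TangoPoseData correctedPose(Tango tango) {
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        // Timestamp 0.0 requests the most recent available pose.
        return tango.getPoseAtTime(0.0, framePair);
    }
}
```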