Hello everyone,
My name is Sizhuo. I am a first-year PhD student in the Department of Computer Sciences, and I started working with Prof. Ponto this month. I am currently working on Tango (https://www.google.com/atap/project-tango/), and hopefully the Tango tablet will offer a new way to explore the point clouds in the visHOME project.
This week I went through the documentation on the Tango website, ran some of the sample code, and learned how to use the Tango API. The API has three parts: Motion Tracking, Area Learning, and Depth Perception. The Motion Tracking API is quite easy to use; basically there are three steps (sketched in code after the list):
- Set up a Tango service.
- Register a listener for Tango events.
- In the callback function, you get the pose change, represented as a translation vector and a rotation quaternion, and can then do whatever you want with it.
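To make the steps concrete, here is a rough sketch of what this looks like with the Tango Java API, based on the samples I ran. The class and callback names are taken from those samples, but exact signatures vary between SDK releases (newer versions may require one or two more callbacks, e.g. for camera frames), so treat this as a sketch rather than copy-paste code.

```java
import java.util.ArrayList;

import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.Tango.OnTangoUpdateListener;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

public class PoseDemoActivity extends android.app.Activity {
    private Tango mTango;
    private TangoConfig mConfig;

    private void startMotionTracking() {
        // Step 1: set up the Tango service with motion tracking enabled.
        mTango = new Tango(this);
        mConfig = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
        mTango.connect(mConfig);

        // Step 2: register a listener; ask for the device pose relative
        // to the start-of-service frame.
        ArrayList<TangoCoordinateFramePair> framePairs = new ArrayList<TangoCoordinateFramePair>();
        framePairs.add(new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE));

        mTango.connectListener(framePairs, new OnTangoUpdateListener() {
            // Step 3: each callback delivers the pose as a translation
            // vector and a rotation quaternion.
            @Override
            public void onPoseAvailable(TangoPoseData pose) {
                double[] t = pose.translation; // {x, y, z}, in meters
                double[] q = pose.rotation;    // quaternion {x, y, z, w}
                // ...drive the virtual camera from t and q here...
            }

            @Override
            public void onXyzIjAvailable(TangoXyzIjData xyzIj) { /* depth; unused here */ }

            @Override
            public void onTangoEvent(TangoEvent event) { /* e.g. tracking-quality warnings */ }
        });
    }
}
```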
Tango runs as a background service and handles most of the work for you. Some additional care is still needed, such as pausing the Tango service when the user switches to another app, or attempting to recover when the tracking quality drops; a lifecycle sketch follows below.
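In the samples, the pause/resume part is handled through the Android Activity lifecycle. Roughly, continuing the hypothetical PoseDemoActivity above (so mTango and mConfig are the fields from that sketch):

```java
@Override
protected void onPause() {
    super.onPause();
    // Release the service when the user switches away, so the service
    // and other Tango apps are not left in a bad state.
    mTango.disconnect();
}

@Override
protected void onResume() {
    super.onResume();
    // Reconnect with the same configuration when the user comes back.
    // (Recovering from poor tracking would also hook in around here.)
    mTango.connect(mConfig);
}
```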
Now I’m trying to build a graphical demo to see whether I can match the pose changes of the Tango tablet to the pose of a virtual camera (a rough plan is sketched below). After that I can move on to working with some real-world data.
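My current plan for the demo is to turn each pose into a 4x4 model matrix for the device and invert it to get the virtual camera's view matrix. This is standard quaternion math rather than anything Tango-specific, and it ignores the axis-convention change between Tango's z-up start-of-service frame and OpenGL's y-up convention, which I still need to sort out:

```java
import android.opengl.Matrix;

/**
 * Build a column-major 4x4 model matrix from a Tango pose
 * (translation {x, y, z} and quaternion {x, y, z, w}).
 * Inverting it gives an OpenGL-style view matrix.
 */
static float[] poseToModelMatrix(double[] t, double[] q) {
    double x = q[0], y = q[1], z = q[2], w = q[3];
    float[] m = new float[16];
    // Rotation part: standard quaternion-to-matrix conversion.
    m[0]  = (float) (1 - 2 * (y * y + z * z));
    m[1]  = (float) (2 * (x * y + w * z));
    m[2]  = (float) (2 * (x * z - w * y));
    m[4]  = (float) (2 * (x * y - w * z));
    m[5]  = (float) (1 - 2 * (x * x + z * z));
    m[6]  = (float) (2 * (y * z + w * x));
    m[8]  = (float) (2 * (x * z + w * y));
    m[9]  = (float) (2 * (y * z - w * x));
    m[10] = (float) (1 - 2 * (x * x + y * y));
    // Translation part.
    m[12] = (float) t[0];
    m[13] = (float) t[1];
    m[14] = (float) t[2];
    m[15] = 1f;
    return m;
}

// Usage (inside onPoseAvailable):
//   float[] model = poseToModelMatrix(pose.translation, pose.rotation);
//   float[] view = new float[16];
//   Matrix.invertM(view, 0, model, 0);  // view matrix for the virtual camera
```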
I used to think that Motion Tracking was the only part I would need. Later I learned from the documentation that Area Learning may also be helpful, since it keeps a memory of the environment to improve tracking quality, at the cost of more processing resources. I will build the demo first and then decide whether it is needed.
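If it does turn out to be worth the extra cost, enabling it looks like a small config change plus a different coordinate frame pair. Something like the following, based on the area-learning sample (the constant names are from my reading of the docs and may not be exact):

```java
// Enable area learning in addition to motion tracking.
mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);

// Ask for poses relative to the learned area description instead of the
// start-of-service frame, so drift can be corrected against the "memory".
framePairs.add(new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
        TangoPoseData.COORDINATE_FRAME_DEVICE));
```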