ReKinStruct: To Sum It Up


https://vimeo.com/95045553

What started as a project to reconstruct point cloud data with the Kinect Fusion SDK and PCL ran into limitations that led me to take a detour through time-varying datasets before finally landing back in the Kinect Fusion SDK.

There was a very steep initial learning curve in setting up the drivers and the software. My MacBook could not support the Kinect drivers because of its low-end graphics card, so I had to use Dr. Ponto's Alienware laptop with an NVIDIA GTX 780 graphics card, which was pretty fast. Compiling PCL and its dependencies, OpenNI and PrimeSense, was the next step, and they had a few issues of their own while interacting with the Windows drivers. This initial phase was very frustrating, as I had not really coded much on Windows and had to figure out how to set up the hardware, drivers and software. It was almost mid-March when I had the entire setup running without crashing midway.

Although it was a late start, once the drivers and the software were set up, everything moved quickly and was exciting. I was able to obtain datasets automatically using the OpenNI Grabber interface: I only had to specify the time interval between successive captures, and the program saved each frame as a PCD file (colour and depth). Before long I had 400 PCDs of a candle burning down, captured one second apart, which gives a realistic 3D rendering of the scenario. The viewer program works much the same way; it takes the number of PCD files and the time interval between displays as arguments and visualizes these 3D datasets.
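For anyone curious, a timed capture loop along these lines can be put together with PCL's OpenNI grabber. This is only a rough sketch of the idea, not the actual ReKinStruct_Snapshot code; the interval constant, counters and file names are placeholders of my own.

// Rough sketch of a timed capture loop with PCL's OpenNI grabber.
// The interval, counters and file names are illustrative placeholders.
#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <ctime>
#include <sstream>

static const int kIntervalSec = 1;   // seconds between snapshots
static time_t    g_lastSave   = 0;
static int       g_frame      = 0;

// The grabber calls this for every frame; we keep only one frame per interval.
void cloud_cb (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
{
    time_t now = time (0);
    if (now - g_lastSave < kIntervalSec)
        return;
    g_lastSave = now;

    std::stringstream name;
    name << "output" << ++g_frame << ".pcd";
    pcl::io::savePCDFileBinary (name.str (), *cloud);   // binary keeps every file the same size
}

int main ()
{
    pcl::OpenNIGrabber grabber;
    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
        boost::bind (&cloud_cb, _1);
    grabber.registerCallback (f);
    grabber.start ();

    while (g_frame < 400)                                // e.g. 400 snapshots, as in the candle dataset
        boost::this_thread::sleep (boost::posix_time::seconds (1));
    grabber.stop ();
    return 0;
}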

Further on, I also spent time learning the Windows SDK that comes with the Kinect. The Kinect Fusion Basics sample is a beautifully written piece of code that produces PLYs when you scan with the Kinect, and PCL offers options to convert these into PCDs, which was the desired final format. I also tried running multiple Kinects simultaneously to obtain data that would fill in the shadow points of one Kinect, but I was not able to debug an error in the Windows SDK's Multi Static Cameras option. Given more time, or as future work, I believe using multiple Kinects to obtain PCD files would be a good area to explore. Processing the obtained PCD files, for example hole filling and reconstruction, would also be a good topic to cover in the future.

Here is a comparison of the image used on pointclouds.org (the one I put up in my second post as a target) against the image I have obtained. Both are screenshots of PLYs.

Comparison

Left: Mesh from pointclouds.org; Right: Mesh obtained by me

The datasets can be found at:

Candle: https://uwmadison.box.com/s/j1zheh8b46fbxjs079xs

Walking Person: https://uwmadison.box.com/s/lxkr7a7io5rbz84xy4uw

Office: https://uwmadison.box.com/s/8mfccacpewkptymicx67

All in all, I am happy with the progress of the work. Had the drivers not been such a hindrance, I would have had a better start at the beginning of the semester. Nevertheless, it was a great learning experience and an interesting area of study.

ReKinStruct: Time-Varying Kinect Fusion

I have tried to combine Kinect Fusion and the time-varying dataset concept by obtaining two time-varying PLYs of a scenario and converting them into PCDs. The link to this small dataset can be found at

https://uwmadison.box.com/s/bgw63fi54y5ir5wkedav

Meanwhile, since I no longer have OpenNI support on my machine (I deleted it while installing the Kinect SDK), I could not run the program to visualise it. Nevertheless, this is how the comparison of the two PLYs looks in MeshLab.

KinectFusion_Comparison

The scans look intact, and the Kinect's depth sensor actually worked much better than last week, since I moved the Kinect slowly across the space.

The code for the capture and viewing programs that I have worked with so far can be found at

https://github.com/nsubramaniam/rekinstruct

For some reason, the Multi Static Kinects option in the Kinect SDK, which obtains points from two Kinects, gives an error while opening the PLY file it saves. The error reads something like "Header has an EoF", which I believe points to erroneous metadata. I am looking into it and will update once I know more!

ReKinStruct: Kinect Fusion Basics

There has been a bit of a change in my approach to obtaining PCDs. I have moved on to obtaining PCDs (PLYs, to be precise) using the Kinect Fusion Colour Basics sample. It scans the volume for 3D points and allows me to save the result in one of three formats (OBJ, STL and PLY). I chose the PLY format as I can easily convert PLYs to PCDs using PCL.
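For reference, the PLY-to-PCD step can be done in a few lines with PCL's I/O functions. This is a minimal sketch assuming the PLY holds coloured vertices; the file names are placeholders.

// Minimal sketch of a PLY-to-PCD converter using PCL (file names are placeholders).
#include <pcl/io/ply_io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <iostream>

int main ()
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGB>);

    // Read the vertices of the mesh saved by Kinect Fusion Colour Basics.
    if (pcl::io::loadPLYFile ("kinfu_scan.ply", *cloud) < 0)
    {
        std::cerr << "Could not read kinfu_scan.ply" << std::endl;
        return -1;
    }

    // Write the points back out as a binary PCD for the viewer programs.
    pcl::io::savePCDFileBinary ("kinfu_scan.pcd", *cloud);
    return 0;
}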

Here is a video of me scanning around the room to obtain the mesh.

http://vimeo.com/93961128

The mesh finally looks like this in MeshLab.

Mesh

The next steps would be to take a few PCDs and probably use the Time Varying PCD program to visualise them like a movie.

Meanwhile, there happened to be a few issues with the datasets in the file locker. So, I have uploaded the files to UW-Madison Box. Here are the links to the datasets.

Candle Dataset 1 second : https://uwmadison.box.com/s/j1zheh8b46fbxjs079xs

Walking Person : https://uwmadison.box.com/s/lxkr7a7io5rbz84xy4uw

ReKinStruct: Shifting Gears

As I said in the last post, I am trying to obtain colour and depth streams from two Kinects at once. Apparently, OpenNI is not the best way to do it, so I am going back to the Kinect SDK. I have successfully installed the software. There were a lot of dependency conflicts with the PrimeSense Kinect sensor drivers I already had, so I had a bunch of installing and uninstalling to do. Now the Kinect obtains colour and depth images as in one of my first posts:

https://blogs.discovery.wisc.edu/projects/2014/02/09/rekinstruct-abstract/

I am learning how to do this from a program at selected intervals so I can make a time-varying point cloud. Will keep you updated!

A good and simple tutorial for installing the Kinect SDK that I found was:

http://www.packtpub.com/article/getting-started-with-kinect-for-windows-sdk-programming

Meanwhile, I got the dataset of the candle burning with an interval of one second between consecutive PCDs. The following link has 400 PCDs of a candle burning, spaced one second apart.

https://filelocker.discovery.wisc.edu/public_download?shareId=8ab2882502ec2aea65d711cfec4bbdd8

Password: ReKinStruct

ReKinStruct: Candle Dataset

I have uploaded the candle dataset on the file locker system. You can download it from here.

https://filelocker.discovery.wisc.edu/public_download?shareId=1d35394ba86c9251776af95cb2821f46

Password: ReKinStruct

These are 40 PCD files of snapshots of two candles, taken with a 10 second interval between consecutive shots.

I have also uploaded a video of the ReKinStruct viewer running on the candle dataset here. The video, however, is like a fast-forward version of the candles burning: the 40 PCD files are displayed back to back with a time interval of 1 second, which makes for a close-to-reality rendering of candles burning, only 10x faster.

http://youtu.be/zwA9J8xv248

I believe I kept the candles too close to the Kinect and didn't get their depth data perfectly. However, the shadows and the melting candles show how time-varying datasets look in 3D.

I have been trying to set up two Kinects to grab data simultaneously and have been getting the classic segmentation fault.

Two Kinects SegFlt

I wonder if it goes back to the original issue where the OpenNI grabber was not sensing the Kinect. I have looked online for tutorials, and most people seem to have used two OpenNI grabbers simultaneously. Will dig into this a little more and post progress!

ReKinStruct: Datasets & Further Plans

I have tried thinking of other datasets that can be obtained with the Kinect Snapshot program. A few interesting ones are a candle burning down, a fast growing plant, melting ice cream, etc. I have uploaded a PCD dataset of a person walking. I will be uploading similar datasets in the days to come.

Link to DatasetA: https://filelocker.discovery.wisc.edu/public_download?shareId=af0daed307f9c66123e3843360f328c8

If it asks for a password, it is ReKinStruct

Meanwhile, I have been thinking about next steps with the Kinect, and one idea I have in mind is using two Kinects to obtain these datasets. That way, many of the points that currently fall in shadow would get filled in. The concept looks something like this.

Kinect Stereo

This way I would even get the dark side of the object. There are a few limitations to using two Kinects, such as interference between their speckle patterns; I am going to hide one Kinect from the other to avoid these effects. I am working on setting up the two Kinects and will upload some datasets this week!

ReKinStruct: First Time Varying PCD

So, as I said in the last post, I had got the Kinect to obtain and view point cloud data, but only manually. I updated the code a bit to do this without intervention. Here is the link to a video of a time-varying PCD that I obtained.

YouTube link : http://youtu.be/T4IPKq0rGII

Yes, that is me walking like a zombie. Note that I loaded the points as PointXYZ on purpose to give a feel of the point cloud; loading them as PointXYZRGBA feels like a picture rather than a point cloud.

The setup is built on 8 PCD files named output1.pcd, output2.pcd and so on. The ReKinStruct_Snapshot.cpp code takes a snapshot once every second and saves it as a binary PCD file with the names listed above. The binary PCD files are constant in size (4801 KB).

ReKinStruct_Viewer.cpp loads these files and displays them in sequence with a time delay of 1 second. It uses two pcl::PointXYZ cloud pointers: one loads the output<even>.pcd files and the other loads the output<odd>.pcd files. So while the even pointer's cloud is on display in the viewer, the odd pointer loads the next .pcd file in the background, and vice versa, hiding the file-loading latency from the user.
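To illustrate the idea in a much simplified form (with placeholder names and my own threading choices, so not the exact ReKinStruct_Viewer.cpp), the alternation looks roughly like this:

// Simplified sketch of the even/odd double buffering idea.
// File names and the 1-second hold follow the post; the threading details are my own guess.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <sstream>

// Loads output<index>.pcd into the buffer that is currently off screen.
void loadFrame (int index, pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    std::stringstream name;
    name << "output" << index << ".pcd";
    pcl::io::loadPCDFile (name.str (), *cloud);
}

int main ()
{
    const int numFiles = 8;
    pcl::PointCloud<pcl::PointXYZ>::Ptr buffers[2] = {
        pcl::PointCloud<pcl::PointXYZ>::Ptr (new pcl::PointCloud<pcl::PointXYZ>),   // even frames
        pcl::PointCloud<pcl::PointXYZ>::Ptr (new pcl::PointCloud<pcl::PointXYZ>)    // odd frames
    };

    pcl::visualization::PCLVisualizer viewer ("ReKinStruct");
    loadFrame (1, buffers[1]);                               // frame 1 goes to the odd buffer
    viewer.addPointCloud<pcl::PointXYZ> (buffers[1], "cloud");

    for (int i = 2; i <= numFiles; ++i)
    {
        // Load frame i into the idle buffer while the viewer holds frame i-1 for one second.
        boost::thread loader (boost::bind (&loadFrame, i, buffers[i % 2]));
        viewer.spinOnce (1000);
        loader.join ();
        viewer.updatePointCloud<pcl::PointXYZ> (buffers[i % 2], "cloud");
    }
    viewer.spinOnce (1000);                                  // hold the final frame
    return 0;
}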

And for some reason, Visual Studio 2010 didn't let me use the Sleep() or wait() routines, so I had to write my own as follows.

#include <time.h>

/* Busy-waits until the requested number of seconds has elapsed.
   Note: this spins the CPU for the whole interval. */
void wait(unsigned int seconds)
{
    clock_t timeToWait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < timeToWait)
        ;   /* spin until the deadline passes */
}

Next steps would be to capture a faster scenario and step through the 3D viewer faster, closer to real-time motion.

Will keep you posted!

ReKinStruct: Switching Between PCDs

This week, I tried switching the visualisation between two PCD files to check whether it was feasible at run time and how long the process takes. The two PCD files I was switching between are shown below: the couch with the box and without the box.

With Box

Without Box

I tried doing this by two methods.

Method 1) Load both PCD files into cloud pointers up front. Display one and switch to the other at the press of a key (a rough sketch of this approach follows after the list).

Method 2) Load one PCD file and display it. At the press of a key, clear the current pointer, load the other PCD file and display it.
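Here is how Method 1 can be wired up with PCLVisualizer's keyboard callback. The file names and most of the details are placeholders of mine rather than the exact project code.

// Sketch of Method 1: both clouds preloaded, 's' swaps which one is on screen.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <iostream>

pcl::PointCloud<pcl::PointXYZ>::Ptr g_clouds[2];
int  g_current         = 0;
bool g_switchRequested = false;

// PCLVisualizer keyboard callback: flag a switch when 's' is released.
void keyboardCb (const pcl::visualization::KeyboardEvent &event, void*)
{
    if (event.getKeySym () == "s" && event.keyUp ())
        g_switchRequested = true;
}

int main ()
{
    g_clouds[0].reset (new pcl::PointCloud<pcl::PointXYZ>);
    g_clouds[1].reset (new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile ("couch_with_box.pcd", *g_clouds[0]);      // placeholder file names
    pcl::io::loadPCDFile ("couch_without_box.pcd", *g_clouds[1]);

    pcl::visualization::PCLVisualizer viewer ("Switcher");
    viewer.registerKeyboardCallback (keyboardCb);
    viewer.addPointCloud<pcl::PointXYZ> (g_clouds[0], "cloud");

    while (!viewer.wasStopped ())
    {
        viewer.spinOnce (100);
        if (g_switchRequested)
        {
            g_current = 1 - g_current;                              // flip between the two preloaded clouds
            std::cout << "Changing point clouds" << std::endl;
            viewer.updatePointCloud<pcl::PointXYZ> (g_clouds[g_current], "cloud");
            g_switchRequested = false;
        }
    }
    return 0;
}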

I am attaching videos to show how fast the process was.

Simultaneous Load: http://youtu.be/tFJUoOFaGcY

Sequential Load: http://youtu.be/77-9w1rnUyA

I loaded the point cloud data without colour (PointXYZ as opposed to PointXYZRGBA) on purpose, to get a feel of the point cloud. Also, please note that the time taken to switch between the point clouds is measured from when I press 's' to when 'Changing point clouds' is printed on the console. ('Changed' would have made more sense than 'Changing'; my apologies.)

The main observations were:

1) Switching between PCD files that have already been loaded into memory was faster than loading them from disk.

2) Loading more PCD files into memory will require a lot of RAM. Task Manager showed a mild increase in memory use during Method 1 compared to Method 2, mild only because just two files were loaded here. I can imagine scenarios where we would need to switch between ten or more PCD files, which could end up using a large chunk of main memory.

The moiré pattern on the wall far from the Kinect is due to the Kinect's poor depth resolution at that distance. The coloured pictures at the top show the same PCD files without the moiré pattern because the display windows are small and hence rendered at reduced resolution.

This week I am going to try capturing more PCD files of an interesting scenario and switching between them automatically. I hope the video looks interesting. Will keep you posted!

Note: An interesting find was that pcd_viewer_release.exe always loaded my PCD files with a rotated axis; I had to rotate the point of view by almost 180 degrees about the Z-axis to view the data. However, the PCD visualiser class loads the data exactly as the PCD was recorded (in our case, the snapshot point cloud data). In cases where we need to rotate the point cloud data while opening it, pcl::transformPointCloud() could be of use.
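As a rough example, rotating a cloud 180 degrees about the Z-axis before saving or displaying it would look something like this (the angle and file names are only illustrative):

// Illustrative use of pcl::transformPointCloud: 180-degree rotation about Z.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

int main ()
{
    const float kPi = 3.14159265f;
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud   (new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr rotated (new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile ("output1.pcd", *cloud);            // placeholder file name

    // Build a rigid transform: 180-degree rotation about Z, no translation.
    Eigen::Affine3f transform = Eigen::Affine3f::Identity ();
    transform.rotate (Eigen::AngleAxisf (kPi, Eigen::Vector3f::UnitZ ()));

    pcl::transformPointCloud (*cloud, *rotated, transform);
    pcl::io::savePCDFileBinary ("output1_rotated.pcd", *rotated);
    return 0;
}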

ReKinStruct: Obtaining Kintinuous PCD

Last week, I focussed on fixing the Kinect and started obtaining point cloud data. This week I have obtained a continuous PCD using the SCENECT software. SCENECT is fairly easy to use and compensates for not having KinFu: it obtains data from the Kinect and builds a 3D PCD by registering the frames as the Kinect moves.

Scenect Scan

The window on the right shows the frame currently being read from the Kinect, and the window on the left shows the registered PCD. The yellow and green points on the right are the registration checkpoints for the frames. In my opinion, it does fairly well on registration and colour values. However, scanning and registration take a while; for example, as you can see from the small window on the extreme left, it took around 2600 frames to register this small point cloud. I have not worked with KinFu, so I have nothing to compare against, but all in all, I think it is a good GUI for obtaining data. It also offers a lot of post-processing options, which I will try to figure out this week.

Below is the final PCD obtained from the scan.

Scenect Final

However, SCENECT does not readily allow us to export the scanned points as a .pcd file. The easiest way to work around this is to save the scan as a .xyz file and write a program that reads each line of XYZRGB values and writes out a .pcd. There are two ways to do this:

1. Follow the tutorial at http://pointclouds.org/documentation/tutorials/writing_pcd.php. For this method, you need to know the number of points in the cloud beforehand, which essentially means going through the .xyz file twice: once to count the points and create a cloud of the necessary size, and a second time to read in the values.

2. The easier and simpler solution is to just create a cloud pointer as

pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud;

and push_back the points as and when they are read from the file (a fuller sketch follows below).

(I do not know why pointclouds.org did not write the tutorial based on the second method.)
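For what it's worth, a sketch of that second approach might look like the following. The .xyz file name is a placeholder, and I am assuming the RGB columns are 0-255 integers, so adjust the parsing if SCENECT writes them differently.

// Sketch of the push_back approach: stream the .xyz file line by line so the
// cloud size never has to be known up front. File names and RGB format are assumptions.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <fstream>
#include <sstream>
#include <string>

int main ()
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGB>);

    std::ifstream in ("scan.xyz");                       // placeholder file name
    std::string line;
    while (std::getline (in, line))
    {
        std::istringstream ss (line);
        float x, y, z;
        int r, g, b;
        if (!(ss >> x >> y >> z >> r >> g >> b))
            continue;                                    // skip malformed or empty lines

        pcl::PointXYZRGB p;
        p.x = x;  p.y = y;  p.z = z;
        p.r = static_cast<uint8_t> (r);
        p.g = static_cast<uint8_t> (g);
        p.b = static_cast<uint8_t> (b);
        cloud->push_back (p);                            // the cloud grows as points arrive
    }

    cloud->width    = static_cast<uint32_t> (cloud->points.size ());
    cloud->height   = 1;                                 // unorganized cloud
    cloud->is_dense = false;

    pcl::io::savePCDFileBinary ("scan.pcd", *cloud);
    return 0;
}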

On a parallel note, I also tried playing around with the OpenNI grabber to obtain PCDs. The salient difference with this method is that it saves a .pcd of the single frame being read from the Kinect at that instant, so there is no possibility of registering frames into one big point cloud. For a start, I read the frame as

OpenNI Viewer

and the PCD file that was saved appears something like

PCD Viewer

That is the couch and the nearby air conditioner saved as a point cloud and viewed using the pcd_viewer_release.exe provided by PCL. Though this way of obtaining PCDs has no real advantage over SCENECT for a single scan, the OpenNI grabber PCDs can be used to obtain time-varying PCD frames. That will be my goal for the upcoming week: obtain time-varying point clouds (like a candle melting) and switch through them. The dream would be to switch through n point clouds fast enough that it appears like a 3D movie! Sounds cool, right?