Splatting vs. Point Sprites

Low-resolution normals

Normals and their respective point radius are now stored as 8-bit signed chars and converted to floats when uploaded to the GPU. This seems to be faster than storing everything as floats, and it requires only a quarter of the memory, which makes file loading faster as well.

There was also quite a head-scratching bug in there. I transfer the normal+radius of each point as a vec4 of signed chars. You cannot simply normalize this whole vector, as that mixes both values. Instead, the normal is extracted from the first three components and normalized (the easy part), but the radius has to be divided by 127 manually in the shader to recover the correct value. The result can then be multiplied by the predetermined maximum splat radius.
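For the record, here is a minimal C++ sketch of the packing and of the equivalent decode the shader has to perform (struct and function names are mine, not the actual viewer code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// One point's normal + splat radius packed into 4 signed bytes.
struct PackedNormalRadius { int8_t nx, ny, nz, r; };

// Map a value in [-1, 1] to a signed byte in [-127, 127].
static int8_t toSignedByte(float v) {
    v = std::max(-1.0f, std::min(1.0f, v));
    return static_cast<int8_t>(std::lround(v * 127.0f));
}

// CPU side: the radius is stored relative to the maximum splat radius.
PackedNormalRadius pack(float nx, float ny, float nz,
                        float radius, float maxSplatRadius) {
    return { toSignedByte(nx), toSignedByte(ny), toSignedByte(nz),
             toSignedByte(radius / maxSplatRadius) };
}

// Mirror of what the shader does: renormalize the first three components,
// but divide the fourth by 127 separately -- normalizing the whole vec4
// would mix the radius into the normal and break both values.
void unpack(const PackedNormalRadius& p, float maxSplatRadius,
            float outNormal[3], float& outRadius) {
    float nx = p.nx, ny = p.ny, nz = p.nz;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    outNormal[0] = nx / len;
    outNormal[1] = ny / len;
    outNormal[2] = nz / len;
    outRadius = (p.r / 127.0f) * maxSplatRadius;
}
```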

Point sprites (left) vs splats (right)

Performance

I found two major problems with the splatting:

  1. Splatting is very sensitive to normal changes, whereas point sprites (in our current implementation) are spheres and therefore rotation invariant. Normal calculation is in effect an estimation, and it can be _way_ off, leading to rendering artifacts. In theory splatting should produce a smooth surface, since the splats are oriented along the normals, as opposed to the organic, ‘bubbly’ surface of point sprites. Looking at the example figures in the splatting papers, it seems the models/point clouds were chosen quite carefully or prepared rather well, with no outliers in the dataset and continuous surfaces. I found that normal estimation breaks down at such outliers and discontinuities, which becomes much more noticeable with splats than with point sprites.
    Even worse, when splats are oriented at a ‘wrong’ angle they can actually punch holes into surfaces.
  2. When splatting is enabled, the frame rate drops noticeably, from about 40 FPS for point sprites to 15 FPS for splats (without online normal calculation). It seems to me that the increased number of primitives created in the geometry shader maxes out the pipeline.
    However, gDebugger shows no increase in the number of primitives created (maybe it cannot inspect that ‘deep’), and my understanding of point sprites is that they are effectively a ‘default’/hardware geometry shader that turns points into textured quads.
    Furthermore, as splats are point samples, the fragment shader currently discards all fragments that do not lie within the circle described by the point sprite. This seems to decrease the frame rate even further.

 

Splatting silhouette of a sphere

Results

Quality improvements are mostly visible very close up and along planar surfaces (e.g. walls) and silhouettes (e.g. window frames). However, considering the performance hit, it is questionable whether this slight increase in quality is worth the effort. I also noticed that some moiré patterns got worse at mid and long range, probably due to splats oriented at an oblique angle.

Overall I would rather implement an LOD scheme with points and point sprites: at close distances (< 1-1.5 m), the point sprite shader should be used to fill all the gaps, as sketched below. Everything beyond that distance already appears solid due to the high density of points, even when rendering points at size 1.0.
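A minimal sketch of what that switch could look like (the cutoff and names are placeholders, nothing implemented yet):

```cpp
// Hypothetical per-voxel LOD choice: point sprites close to the camera,
// plain GL_POINTS at size 1.0 beyond the distance where the cloud
// already looks solid.
enum class PointLOD { Sprite, PlainPoint };

PointLOD chooseLOD(float distanceToCamera, float spriteCutoff = 1.5f) {
    return distanceToCamera < spriteCutoff ? PointLOD::Sprite
                                           : PointLOD::PlainPoint;
}
```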

Continuing stylus integration & key mapping

For this week I would like to focus on a couple of things, the first being getting the rotation fixed with the stylus. I found some C++ code for converting a rotation matrix into a quaternion; a sample of the code can be found here: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/ . This should solve the problem of the cursor being displayed slightly to the right of the stylus.
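The conversion on that page boils down to something like the following (my own transcription, assuming a row-major m[row][col] rotation matrix):

```cpp
#include <cmath>

struct Quaternion { float w, x, y, z; };

// Rotation matrix -> quaternion, branching on the largest diagonal term
// (as described on euclideanspace.com) so the square root stays well
// conditioned even when the trace is small or negative.
Quaternion matrixToQuaternion(const float m[3][3]) {
    Quaternion q;
    float trace = m[0][0] + m[1][1] + m[2][2];
    if (trace > 0.0f) {
        float s = std::sqrt(trace + 1.0f) * 2.0f;                        // s = 4 * w
        q.w = 0.25f * s;
        q.x = (m[2][1] - m[1][2]) / s;
        q.y = (m[0][2] - m[2][0]) / s;
        q.z = (m[1][0] - m[0][1]) / s;
    } else if (m[0][0] > m[1][1] && m[0][0] > m[2][2]) {
        float s = std::sqrt(1.0f + m[0][0] - m[1][1] - m[2][2]) * 2.0f;  // s = 4 * x
        q.w = (m[2][1] - m[1][2]) / s;
        q.x = 0.25f * s;
        q.y = (m[0][1] + m[1][0]) / s;
        q.z = (m[0][2] + m[2][0]) / s;
    } else if (m[1][1] > m[2][2]) {
        float s = std::sqrt(1.0f + m[1][1] - m[0][0] - m[2][2]) * 2.0f;  // s = 4 * y
        q.w = (m[0][2] - m[2][0]) / s;
        q.x = (m[0][1] + m[1][0]) / s;
        q.y = 0.25f * s;
        q.z = (m[1][2] + m[2][1]) / s;
    } else {
        float s = std::sqrt(1.0f + m[2][2] - m[0][0] - m[1][1]) * 2.0f;  // s = 4 * z
        q.w = (m[1][0] - m[0][1]) / s;
        q.x = (m[0][2] + m[2][0]) / s;
        q.y = (m[1][2] + m[2][1]) / s;
        q.z = 0.25f * s;
    }
    return q;
}
```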

After this is complete, I would like to start mapping the keys from the stylus into the world builder application. This will be done in a similar fashion to how the stylus itself was integrated. After that, I would like to make this application self-reliant, meaning it would no longer need the skeletal viewer to function.

A video of the stylus tracking is to come in 40 minutes, once Vimeo converts my clip. The output shows the position of the stylus relative to the zSpace screen.

Also, I just found a neat little link on VRPN integration that could be helpful:
https://support.zspace.com/entries/23780202-How-to-Setup-zSpace-VRPN

More elbow room in Box

Images for the onion test are being transferred to “the Box” on my university webspace account. (They are retiring the old Goliath server and transferring to the new “Box.com” system this summer, so I figured I’d make the switch right away instead of waiting until summer.) The Box has 10 GB of space, which also helps: the entire onion file is 1.12 GB and did not fit on the mywebspace.

At home the upload speed of my connection is 2.5 Mbits per second, so I am heading to the library for a few hours to hopefully take advantage of a faster connection and avoid waiting 14.5 days for an upload that is just another test image. My computer is slow, my connection is slow, that’s life, so I am doing what I can to facilitate this process.

EDIT: already so much faster!
EDIT EDIT: well, not that much faster… JPEG might be the way to go for tests? Smaller sizes?


For the final, I am re-doing the main outline to clean up some areas, since every detail really shows up at the large zoom. As I work on the cell detail in the root area, the main drawing is wibbly enough that it requires some smoothing.
The brushes in Photoshop seem to favor a hard brush rather than a touch-sensitive one for the really close-up shots, just to maintain a consistent line for cell walls.

The UW Botany dept is very generous with their time and resources, and I will be using their high-quality scanner to get cross-section images of my onion once it has started to sprout. They have many slides of root cell samples, which have been very helpful with that area of the drawing, but the leaves will have to wait until I have a sample to section and view… so I am growing one.

In the meantime, I am working on the overlaying images for a cricket (Family: Gryllidae) based on drawings made during our cricket dissections in Entomology – including transparent underlays of the main outline and other organ systems so that it is easier to see how each organ interacts with other systems.

(image to be added later tonight)

Aspects I would like to figure out:
Labels: how to include them so they link to certain areas of the image and stay within the viewing frame. Most of the Google Maps information talks about linking to atlases and has good information about creating point labels, but I’m still a little unclear about creating your own specific “street names” (in a manner of speaking) for the parts of the drawing.
Layers: make it easy to switch back and forth between organ systems at various levels of zoom without having to start at the beginning. The Google Maps developer page has some good information on this.

How to feel like a complete idiot:
Attempt doing simple tasks in a medium you are entirely unfamiliar with.

How to learn new things:
See above.

Normal Estimation and Point Splatting

This week was spent on getting the point splatting to work in our OOCViewer.

Right now, normals are calculated for each voxel independently. This can be done either at runtime or in a pre-processing step. In the first case, normals are cached on disk after calculation and can be re-used. This also has the advantage that ‘mixed’ scenes are possible, in which some voxels have normals but others don’t:

Online calculation of normals. Some voxels have normal data, others don’t. The voxels in the background on the right have not been loaded yet.

Calculation time depends mostly on the number of points in a voxel. PCL’s normal estimation turned out to be faster than the naive normal estimation approach (especially when using the multi-threaded OMP variant), so that is what I used. In a second pass, a k-nearest-neighbour search is performed for each point in the point cloud, and the average distance to these neighbours is used as the starting radius for the splat size.
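Roughly, the per-voxel pass looks like this (a sketch against the PCL 1.x API; the neighbourhood size k and the function name are placeholders of mine):

```cpp
#include <cmath>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/search/kdtree.h>

void estimateNormalsAndRadii(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                             pcl::PointCloud<pcl::Normal>& normals,
                             std::vector<float>& splatRadii,
                             int k = 10)
{
    // Pass 1: multi-threaded normal estimation (the OMP variant mentioned above).
    pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(k);
    ne.compute(normals);

    // Pass 2: the average distance to the k nearest neighbours becomes the
    // starting splat radius for each point.
    pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
    kdtree.setInputCloud(cloud);
    splatRadii.resize(cloud->size());
    std::vector<int> indices(k + 1);
    std::vector<float> sqrDistances(k + 1);
    for (size_t i = 0; i < cloud->size(); ++i) {
        kdtree.nearestKSearch(cloud->points[i], k + 1, indices, sqrDistances);
        float sum = 0.0f;
        for (int j = 1; j <= k; ++j)          // skip j = 0, the query point itself
            sum += std::sqrt(sqrDistances[j]);
        splatRadii[i] = sum / k;
    }
}
```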

The drawback is increased memory use. On average, each pcd file now has an accompanying normal cache file about 1.5 times its size. Normal data is currently not compressed. Another option would be to store the normal+radius data as 4 signed chars (32 bits total) and normalize the values in the shaders.

Pre-calculation time is pretty high, as there are many small files and a lot of time is spent on opening and closing them. On the other hand, this has to be performed only once.

There are some sampling problems with the normals, as in this image:

Normal discontinuities

As a side note: merging two branches is harder than it should be. Maybe we could organize the git branches a bit better?

IPD Fix. Starting stylus integration.

The issue that I was having last week with two images appearing was fixed today. The solution to this problem was to make the interpupillary distance (IPD) smaller. The IPD is the distance between the two pupils of the viewer’s eyes; making this value smaller brings the two images rendered in the world builder application closer together. Below is an image demonstrating this.
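Conceptually, the IPD feeds into the stereo rendering roughly like this (a hypothetical sketch of the geometry, not the actual zSpace SDK call):

```cpp
// Each eye camera sits half the IPD away from the head position along the
// head's right vector, so a smaller IPD moves the two rendered images
// closer together on screen.
struct Vec3 { float x, y, z; };

void eyePositions(const Vec3& headPos, const Vec3& rightDir, float ipd,
                  Vec3& leftEye, Vec3& rightEye) {
    const float half = ipd * 0.5f;
    leftEye  = { headPos.x - rightDir.x * half,
                 headPos.y - rightDir.y * half,
                 headPos.z - rightDir.z * half };
    rightEye = { headPos.x + rightDir.x * half,
                 headPos.y + rightDir.y * half,
                 headPos.z + rightDir.z * half };
}
```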

At first I thought this could be a problem with the NVIDIA Quadro drivers, and luckily it wasn’t. The next step is to get the stylus operating within the world builder application. The plan is to spoof the current code for the joystick and use it with the stylus. After this is complete, I believe it would be beneficial to get the world builder application to run without the skeletal viewer. Doing this would decrease start-up time and improve the speed of the world builder when using the Leap Motion.

 

UPDATE:
The stylus is now being detected in the world builder application. The next step is to add some type of beam or cursor to the stylus. After this is accomplished, I can map the key bindings so that you can select and rotate an object in the world builder app.

Not as easy as it looks…

…has been my ongoing experience with the copper taffeta experiments. My second attempt didn’t even make a dent in the taffeta fabric for the first day or so. I left it in the bath (because, why not) and when I checked it again several days later the copper had etched away – including the portions covered by Vaseline! I think that screen printing creates a film of Vaseline that is too thin to adequately resist. I’m thinking that stencils will be a better choice going forward, with the possibility of laser cutting them once the forms become more complex. I started a third vinegar bath last night with a thicker Vaseline layer and am hopeful I’ll get a cleaner outcome.

I did a little testing with my multimeter on my original sample.  As expected, my readings were all over the board.  I think the key will be to get a crisp separation between copper coated areas and base polyester areas.  In my research I stumbled across a diagram of the Lilypad PCB:

This would theoretically allow me to build an Arduino into the actual fabric of a project.  Finds like this make me even more determined to work out the bugs on the copper taffeta process.

I also received several samples of muscle wire from SparkFun, both Nitinol and Flexinol. I’ve been doing some research on the specifics of working with these materials. Since I have 4 different samples, I’m going to try 2 variations in each wire type – one setup utilizing the natural contraction of the wire and another where I “train” the wire to a shape. I’m using this article by Jie Qi as a jumping-off point. The first steps will be purchasing crimp beads to facilitate attachments and an appropriate power source. It sounds like if any of my experiments are going to end in fire, it’s this one!

Tiling and Hosting and Post-Its

First tiling of the large image done, just as a test (only used plain outline of onion)

I used the Bramus adaptation of various tiling scripts; this version is written as a Photoshop plug-in, so the tiling could be done directly, without exporting. The 32k x 32k image took about 2 hours to finish, much smoother than I expected! (This run used my plain onion outline, rather than the cell version still in progress, to test the simplest line work for speed. We’ll see how the added detail changes things…)
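For a sense of scale, the tile math works out like this (a quick sketch, assuming the script builds a standard power-of-two pyramid of 256-px tiles):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int imageSize = 32768;   // 32k x 32k source image
    const int tileSize  = 256;
    const int tilesPerSide = imageSize / tileSize;                  // 128
    const int maxZoom = static_cast<int>(std::log2(tilesPerSide));  // zoom level 7
    long long totalTiles = 0;
    for (int z = 0; z <= maxZoom; ++z)
        totalTiles += (1LL << z) * (1LL << z);   // 4^z tiles at zoom level z
    std::printf("deepest zoom level: %d, total tiles: %lld\n", maxZoom, totalTiles);
    return 0;   // prints: deepest zoom level: 7, total tiles: 21845
}
```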

As a map it worked, with some slight adjustments to the x-axis parameters (the image was repeating horizontally).
The trick is getting the images posted online so that the map can be shared. Seems quite simple: post the images on the “mywebspace” through UW and tell the code where to access them, put the code into WordPress, easy-peasy.

MYWEBSPACE
is a monster.

I can only upload files one by one. I can only upload folders if I use the Java plug-in, which does not work on Chrome and refuses to allow access through both Safari and Firefox, citing that Blackboard (the underlying system of mywebspace, presumably?) does not have sufficient permissions to run. It errors out, and when I ask for more details it gives this output, which only seems to be a list of possible error codes:

Missing required Permissions Manifest Attribute in Main Jar

Missing Application-Name manifest attribute for: https://mywebspace.wisc.edu/xythoswfs/lib/XythosUpload.jar
Java Plug-in 10.51.2.13
Using JRE version 1.7.0_51-b13 Java HotSpot(TM) 64-Bit Server VM
User home directory = /Users/JackiW
—————————————————-
c:   clear console window
f:   finalize objects on finalization queue
g:   garbage collect
h:   display this help message
l:   dump classloader list
m:   print memory usage
o:   trigger logging
q:   hide console
r:   reload policy configuration
s:   dump system and deployment properties
t:   dump thread list
v:   dump thread stack
x:   clear classloader cache
0-5: set trace level to <n>
—————————————————-

I naively tried just dropping the folder of images into my list of folders to get around the problem, but that is just an obtusely simple workaround. Mywebspace has 1 GB of space, which is enough for the moment, and I’m sure I can request more as it is needed. The uploading issue is something a little bigger. I will try contacting DoIT to see if they have alternative suggestions for navigating this.

I will also have more time to look at this tonight, and will most likely host my images through my work website if I can’t get this sorted. Uploading these one by one would be like wallpapering your house with Post-its.

(…. that might actually look pretty interesting, especially with a slight breeze…)

Below: a bit of the level 3 zoom magnification of a (very rough) line drawing of an onion. Each tile is 256 x 256.

(tiles 3_3_0 through 3_3_7)

ReKinStruct: The Hardware-Software Battle

So, I was trying to obtain PCDs from the Kinect, and in order to do so I first wanted to read something, anything, from it. Like I mentioned in the previous post, I had PCL and MSVC 2010 installed, and I had also obtained the latest versions of OpenNI and SensorKinect to serve as drivers for the Kinect. And then the following happened.

I had one line of code that initialises the Kinect and returns a pointer to grab data. It kept crashing at run-time, with no stack trace. A little bit of googling led me to believe that OpenNI is not compatible with my laptop’s underlying hardware.
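For context, that one line is essentially constructing PCL’s OpenNI grabber. A minimal sketch of the grab-and-callback setup, assuming the PCL 1.x-era API (the callback and everything around it are placeholder code):

```cpp
#include <iostream>
#include <boost/bind.hpp>
#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

// Called whenever the Kinect delivers a new frame; here it only reports the
// size, but pcl::io::savePCDFileBinary could dump the frame to a PCD file.
void cloudCallback(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
{
    std::cout << "got " << cloud->size() << " points" << std::endl;
    // pcl::io::savePCDFileBinary("frame.pcd", *cloud);
}

int main()
{
    // Constructing the grabber is where OpenNI/SensorKinect talk to the
    // hardware -- the step that kept crashing on my laptop.
    pcl::OpenNIGrabber grabber;

    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
        boost::bind(&cloudCallback, _1);
    grabber.registerCallback(f);

    grabber.start();
    std::cout << "streaming; press Enter to stop" << std::endl;
    std::cin.get();
    grabber.stop();
    return 0;
}
```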

Okay, *deep breaths*, that is not a dead end. There had to be something else. So I installed Kinect Fusion from the Microsoft site. According to the tutorials, all I needed to do was type one command and my Kinect would (magically?) start producing a PCD. What actually happened was a different story.

There was something wrong with my graphics card. I tried installing the latest CUDA drivers from NVIDIA, but that did not help either. I have Intel HD Graphics 4000 on my laptop, and I wonder if KinFu works only with NVIDIA graphics cards, which would mean I would have to start using another machine.

So, the plan for this week is to check whether the Kinect works on another machine and proceed from there. I also have a bad (and a good) feeling that I might have missed installing something on my laptop, in which case I could still get the Kinect working on my machine too. Will keep you posted.

All in all, the classic conclusion: hardware and software do not like each other.

ReKinStruct – PCL: Check

So, the aim was to get PCDs from the Kinect as soon as possible so I could start working with the point cloud data. Along the way, I got the hang of how the Kinect works through some of the basic programs that come with it, like obtaining a color+depth image, the skeletal viewer, etc.

Apologies if it looks distorted; I had to both be in the image and take the screenshot (and taking a screenshot on Windows with a Mac keyboard requires pressing keys on all corners).

There were a few more installations before the Kinect was even running. The Point Cloud Library with its 3rd-party dependencies and the OpenNI framework are installed and running. I compiled a simple program that reads data from the Kinect and displays it on the screen, and I am still stumbling into runtime errors and code halts. Memory usage is already up to 3 GB, which is three-fourths of the total; let’s hope it doesn’t crash any time soon.

Essentially, I would like to get something like this in the next few weeks.

Image Courtesy: pointclouds.org

The target for this week is to get the program working in the IDE, start reading some data from the Kinect, and store it as PCDs.

3D Space Exploration

This week I read through the documentation and got acquainted with both the Leap Motion and the zSpace SDKs.

An obstacle in this project is that the Leap Motion is very particular about the way you place your hand over the device; the placement almost always needs to be perfect for the Leap Motion to detect hands correctly. (I will post screenshots tomorrow.) Also, when you use the zSpace with the world builder application, you will see two images on the screen while the glasses are on; if you close one eye or take off the glasses, you will only see one image.

Ross was able to help me get the Leap Motion to recognize and display hands in the world builder application. This is a great stepping stone to the final piece of my project. I first want to get the stylus from the zSpace to work in the world builder application; there is great documentation in the zSpace SDK for this.

Goals this week:

1) Get the stereo image working; it currently shows two images unless the glasses are off or one eye is closed. **Fixed**: a simple restart of the computer fixed this issue, surprisingly enough. Next I will try to add an object to the world builder application and see if the 3D effect is working.

2) The world builder application runs smoothly when the skeletal viewer is not selected. But in order for the world builder to recognize the Leap Motion, the skeletal viewer needs to be selected, and this causes the application to run very slowly. For this goal, I would like to flag the skeletal viewer to provide Leap Motion data without the skeletal viewer itself showing. This will greatly improve the performance of the application.

3) Get the Stylus to work in the world builder application.

So as of right now I am seeing both the right and left images; they are not converging in the world builder application. This obviously doesn’t give the immersive effect of being able to rotate the head and see around the object, so in order to move forward this will need to be fixed.

The figure above demonstrates how objects are seen on the ZSpace screen.

This is what an object looks like with the glasses on. The picture was taken with the glasses off, but the two images look similar whether the glasses are on or off.