Mirrors in VizHome

Started exporting/importing data about mirrors from SCENE into the VizHome viewer. Currently all surfaces with mirrors are manually preprocessed (to get rid of the false mirror points), so adding a few steps to this process seems okay for now.

Right now it works like this: in the scan (2D) view, select the mirror region and delete those scan points. Then redraw the region a bit larger and create a plane from it. Check in the 3D view that the plane aligns with the wall/mirror. Finally, back in the 2D view, create the mirror's four (or more) corner vertices using plane intersection points and save them all in a VRML file.

SCENE is quite inconsistent when it comes to point/model transforms: during export it does not apply the global scanner transform to these points. Additionally, the Euler-angle transformation provided by SCENE does not specify which rotation is applied first. Luckily, it also stores the rotation in axis/angle format. When loading the points in VizHome, you therefore also have to specify the axis/angle rotation and the translation as the global transform for each mirror.
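As a minimal sketch (assuming GLM; the function name is illustrative, not actual VizHome code), the per-mirror global transform could be assembled from those exported values like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the global transform for one mirror from the translation and
// axis/angle rotation exported from SCENE.
glm::mat4 mirrorGlobalTransform(const glm::vec3& translation,
                                const glm::vec3& rotationAxis,
                                float rotationAngleDegrees)
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);
    glm::mat4 R = glm::rotate(glm::mat4(1.0f),
                              glm::radians(rotationAngleDegrees),
                              glm::normalize(rotationAxis));
    // Rotate first, then translate (T * R), the usual scanner-pose convention;
    // whether SCENE expects exactly this order is an assumption here.
    return T * R;
}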

The current mirror file (for Ross’ house) looks something like this:

new_mirror kitchen_mirror
translation 1.268844 2.561399 8.353854
rotation_axis -0 -0.002 1
rotation_angle 138.859284
points 4
-5.09820000 1.16210000 2.61530000
-5.10310000 0.26780000 2.61300000
-4.95730000 0.26350000 1.41670000
-4.95220000 1.17380000 1.41860000
end_mirror
new_mirror bathroom_mirror
translation 0 0 1.066999
rotation_axis -0.003 0.001 -1
rotation_angle 136.756113
points 4
-0.09780000 1.14360000 5.32700000
-0.09620000 0.36530000 5.32200000
0.02890000 0.37050000 4.28740000
0.02730000 1.14700000 4.29170000
end_mirror
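A rough sketch of a reader for this format (struct and function names are illustrative, not the actual VizHome loader) could look like this:

#include <array>
#include <fstream>
#include <string>
#include <vector>

struct Mirror
{
    std::string name;
    std::array<float, 3> translation{};
    std::array<float, 3> rotationAxis{};
    float rotationAngle = 0.0f;
    std::vector<std::array<float, 3>> points;
};

std::vector<Mirror> loadMirrors(const std::string& path)
{
    std::vector<Mirror> mirrors;
    std::ifstream in(path);
    std::string token;
    while (in >> token)
    {
        if (token == "new_mirror") {
            mirrors.emplace_back();
            in >> mirrors.back().name;
        } else if (token == "translation") {
            auto& t = mirrors.back().translation;
            in >> t[0] >> t[1] >> t[2];
        } else if (token == "rotation_axis") {
            auto& a = mirrors.back().rotationAxis;
            in >> a[0] >> a[1] >> a[2];
        } else if (token == "rotation_angle") {
            in >> mirrors.back().rotationAngle;
        } else if (token == "points") {
            int count = 0;
            in >> count;
            for (int i = 0; i < count; ++i) {
                std::array<float, 3> p{};
                in >> p[0] >> p[1] >> p[2];
                mirrors.back().points.push_back(p);
            }
        }
        // "end_mirror" needs no special handling; the next "new_mirror" starts a new entry.
    }
    return mirrors;
}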

 

Most entries can be created by copy-paste, and the file supports multiple mirrors. The two mirrors have different transformations because they were marked on different scans. Once loaded, it currently looks like this:

Mirror geometry loaded

We also now have an (optional) background:

Background rendering

Colors can be changed in the shader. The background is drawn with a single screen-filling quad: the view vector is determined at each vertex and then interpolated across the quad.
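For reference, here is a small CPU-side sketch of that per-vertex view-vector computation (assuming GLM; in VizHome this happens in the vertex shader, and the function name is only illustrative):

#include <glm/glm.hpp>

glm::vec3 cornerViewVector(const glm::mat4& view, const glm::mat4& proj, const glm::vec2& ndcCorner)
{
    // Unproject the quad corner from normalized device coordinates (at the far plane)
    // back into world space.
    glm::mat4 invViewProj = glm::inverse(proj * view);
    glm::vec4 world = invViewProj * glm::vec4(ndcCorner, 1.0f, 1.0f);
    world /= world.w;

    // The view vector points from the camera position toward that corner;
    // the rasterizer then interpolates it across the quad for each fragment.
    glm::vec3 camPos = glm::vec3(glm::inverse(view)[3]);
    return glm::normalize(glm::vec3(world) - camPos);
}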

ReKinStruct: Apple to Alienware

I mentioned in my last post that I was having some trouble getting KinFu running on my laptop. My MacBook Pro came with an Intel graphics card, while KinFu (and most graphics-intensive applications) runs only on machines with an NVIDIA graphics card. Thankfully, Dr. Kevin Ponto has an Alienware laptop with an NVIDIA graphics card and has offered to lend it to me, which should let me run KinFu without any trouble. I should be getting the laptop soon, and the first of many things to do will be to set up the IDE and PCL on it (ReKinStruct: First Look all over again). I hope to have a PCD by this weekend.

Meanwhile, Dr. Ponto and I have been discussing other interesting things we could do with Kinects. One potential idea is to capture the color+depth images of a rapidly changing scene (like a candle burning down or ice melting) from a single vantage point. By capturing data periodically, we can play it back so that it looks like a 3D movie. The same effect can also be achieved by playing back PCD files sequentially. However, switching the PCD visualization at the click of a button would require plenty of high-performance RAM and solid-state disks and, most of all, a new technique.
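As a rough sketch of the sequential-playback idea (assuming PCL; the frame file names are hypothetical), one could step through a series of PCD files like this:

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <string>

int main()
{
    pcl::visualization::PCLVisualizer viewer("PCD playback");

    for (int frame = 0; !viewer.wasStopped(); ++frame)
    {
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
        const std::string file = "frame_" + std::to_string(frame) + ".pcd";  // hypothetical naming

        if (pcl::io::loadPCDFile(file, *cloud) < 0)
            break;  // stop when the sequence runs out

        // Swap in the new frame and give the viewer time to render it.
        viewer.removeAllPointClouds();
        viewer.addPointCloud<pcl::PointXYZRGB>(cloud, "frame");
        viewer.spinOnce(33);  // roughly 30 frames per second
    }
    return 0;
}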

Will keep you posted.

Apologies for this unexpected delay. Pictures coming up soon.

Splatting vs. Point Sprites

Low-resolution normals

Normals and their respective point radius are now stored as 8-bit signed chars and converted to floats when uploaded to the GPU. This seems to be faster than storing everything as floats, and it requires only a quarter of the memory, which makes file loading faster as well.

There was also quite a head-scratching bug in there. I transfer the normal+radius for each point as a signed-char vec4. You cannot simply normalize this to 0..1, as that mixes both values. Instead, the normal is extracted from the first three components and normalized (the easy part), but the radius has to be manually divided by 127 in the shader to get the correct value. The result can then be multiplied by the predetermined maximum splat radius.
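A minimal CPU-side sketch of the packing and the matching radius decode (the exact encoding in VizHome may differ; the names are illustrative):

#include <cstdint>
#include <cmath>

struct PackedNormalRadius { int8_t nx, ny, nz, r; };

// Pack a unit normal and a radius (relative to the maximum splat radius)
// into four signed 8-bit values.
PackedNormalRadius pack(float nx, float ny, float nz, float radius, float maxRadius)
{
    auto q = [](float v) { return static_cast<int8_t>(std::lround(v * 127.0f)); };
    return { q(nx), q(ny), q(nz), q(radius / maxRadius) };
}

// Decode: the normal can simply be renormalized as a vector, but the radius
// has to be divided by 127 separately, then scaled by the maximum splat radius.
float decodeRadius(const PackedNormalRadius& p, float maxRadius)
{
    return (p.r / 127.0f) * maxRadius;
}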

Point sprites (left) vs splats (right)

Performance

I found two major problems with the splatting:

  1. Splatting is very sensitive to errors in the normals, whereas point sprites (in our current implementation) are spheres and therefore rotation invariant. Normal calculation is in effect an estimation, and it can be _way_ off, leading to rendering artifacts. In theory splatting should produce a smooth surface, since the splats are oriented along the normals, as opposed to the organic, 'bubbly' surface of point sprites. Looking at the example figures in the splatting papers, it seems the models/point clouds were chosen quite carefully, or prepared rather well, with no outliers in the dataset and only continuous surfaces. I found that normal estimation breaks down at outliers and discontinuities, which becomes far more noticeable with splats than with point sprites.
    Even worse, when splats are oriented at a 'wrong' angle, they can actually punch holes into surfaces.
  2. When splatting is enabled, the frame rate drops noticeably, from about 40 FPS for point sprites to roughly 15 FPS for splats (without online calculation). It seems to me that the increased number of primitives created in the geometry shader maxes out the pipeline.
    However, gDebugger shows no increase in the number of primitives created (maybe it cannot inspect that 'deep'), and my understanding of point sprites is that they behave like a 'default'/hardware geometry shader (at least in their functionality) that turns points into textured quads.
    Furthermore, since splats are point samples, the fragment shader currently discards all fragments that do not lie within the circle described by the point sprite, which seems to decrease the frame rate even further.

 

Splatting silhouette of a sphere

Results

Quality improvements are mostly visible very close up, along planar surfaces (e.g., walls) and silhouettes (e.g., window frames). However, considering the performance hit, it is questionable whether this slight increase in quality is worth the effort. I also noticed that some moiré patterns got worse at mid and long range, probably due to splats oriented at an oblique angle.

Overall I would rather implement an LOD scheme with points and point sprites: at close distances (< 1-1.5 m) the point-sprite shader should be used to fill all the gaps. Everything beyond that distance already appears solid, due to the high density of points, even when rendering plain points at size 1.0.
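A hypothetical sketch of that distance-based switch (the names are illustrative, not VizHome code):

// Point sprites close to the camera, plain 1-pixel points beyond a threshold.
enum class RenderMode { PointSprites, PlainPoints };

RenderMode chooseRenderMode(float distanceToCamera, float threshold = 1.5f)
{
    // Close up, gaps between samples become visible, so sprites fill them in;
    // farther away, the point density alone already yields a solid-looking surface.
    return distanceToCamera < threshold ? RenderMode::PointSprites
                                        : RenderMode::PlainPoints;
}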

Continuing stylus integration & key mapping

For this week I would like to focus on a couple of things. The first is fixing the rotation of the stylus. I found some C++ code for converting a rotation matrix into a quaternion; a sample of the code can be found here: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/ . This should solve the problem of the cursor being displayed slightly to the right of the stylus.
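For reference, here is a sketch of the conversion described on that page, assuming a row-major 3x3 rotation matrix m[row][col] and a quaternion (w, x, y, z); the actual stylus code may differ:

#include <cmath>

struct Quaternion { float w, x, y, z; };

Quaternion matrixToQuaternion(const float m[3][3])
{
    Quaternion q;
    const float trace = m[0][0] + m[1][1] + m[2][2];
    if (trace > 0.0f) {
        const float s = std::sqrt(trace + 1.0f) * 2.0f;                         // s = 4 * w
        q.w = 0.25f * s;
        q.x = (m[2][1] - m[1][2]) / s;
        q.y = (m[0][2] - m[2][0]) / s;
        q.z = (m[1][0] - m[0][1]) / s;
    } else if (m[0][0] > m[1][1] && m[0][0] > m[2][2]) {
        const float s = std::sqrt(1.0f + m[0][0] - m[1][1] - m[2][2]) * 2.0f;   // s = 4 * x
        q.w = (m[2][1] - m[1][2]) / s;
        q.x = 0.25f * s;
        q.y = (m[0][1] + m[1][0]) / s;
        q.z = (m[0][2] + m[2][0]) / s;
    } else if (m[1][1] > m[2][2]) {
        const float s = std::sqrt(1.0f + m[1][1] - m[0][0] - m[2][2]) * 2.0f;   // s = 4 * y
        q.w = (m[0][2] - m[2][0]) / s;
        q.x = (m[0][1] + m[1][0]) / s;
        q.y = 0.25f * s;
        q.z = (m[1][2] + m[2][1]) / s;
    } else {
        const float s = std::sqrt(1.0f + m[2][2] - m[0][0] - m[1][1]) * 2.0f;   // s = 4 * z
        q.w = (m[1][0] - m[0][1]) / s;
        q.x = (m[0][2] + m[2][0]) / s;
        q.y = (m[1][2] + m[2][1]) / s;
        q.z = 0.25f * s;
    }
    return q;
}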

After this is complete, I would like to start mapping the keys from the stylus into the world builder application. This is done in a similar fashion to how the stylus was integrated. After that, I would like to make the application self-reliant, meaning it would no longer need the Skeletal Viewer to function.

A video of the stylus tracking is coming in about 40 minutes, once Vimeo converts my clip. The output shows the position of the stylus relative to the zSpace screen.

Also, I just found a neat little link on VRPN integration that could be helpful:
https://support.zspace.com/entries/23780202-How-to-Setup-zSpace-VRPN

More elbow room in Box

Images for the onion test are being transferred to "the Box" on my university webspace account (the university is retiring the old Goliath server and moving to the new Box.com system this summer, so I figured I'd make the switch right away instead of waiting until summer). Box offers 10 GB of space, which also helps: the entire onion file is 1.12 GB and did not fit on the old MyWebSpace.

At home my upload speed is 2.5 Mbit/s, so I am heading to the library for a few hours to hopefully take advantage of a faster connection and avoid waiting 14.5 days for an upload that is just another test image. My computer is slow, my connection is slow, that's life, so I am doing what I can to speed this process up.

EDIT: already so much faster!
EDIT EDIT: well, not that much faster… JPEG might be the way to go for tests? Smaller file sizes?


For the final version, I am redoing the main outline to clean up some areas, since every detail really shows up at high zoom. As I work on the cell detail in the root area, the main drawing is wibbly enough that it requires some smoothing.
The brushes in Photoshop seem to favor a hard brush rather than a touch-sensitive one for the really close-up shots, just to maintain a consistent line for cell walls.

The UW Botany dept is very generous with their time and resources, and I will be using their high-quality scanner to get cross-section images of my onion once it has started to sprout. They have many slides of root cell samples, which have been very helpful with that area of the drawing, but the leaves will have to wait until I have a sample to section and view… so I am growing one.

In the meantime, I am working on the overlay images for a cricket (family Gryllidae) based on drawings made during our cricket dissections in Entomology – including transparent underlays of the main outline and other organ systems so that it is easier to see how each organ interacts with the other systems.

(image to be added later tonight)

Aspects I would like to figure out:
Labels: how to include them so they link to specific areas of the image and stay within the viewing frame. Most of the Google Maps documentation talks about linking to atlases and has good information about creating point labels, but I'm still a little unclear about creating your own specific "street names" (in a manner of speaking) for the parts of the drawing.
Layers: make it easy to switch back and forth between organ systems at various zoom levels without having to start at the beginning. The Google Maps developer page has some good information on this.

How to feel like a complete idiot:
Attempt doing simple tasks in a medium you are entirely unfamiliar with.

How to learn new things:
See above.