Splatting vs. Point Sprites

Low-resolution normals

Normals and their respective point radius are now stored as 8-bit signed chars and converted to floats when uploaded to the GPU. This seems to be faster than storing everything as floats, and it requires only a quarter of the memory, which makes file loading faster as well.

There was also quite a head-scratching bug in there. I transfer the normal and radius of each point as a signed-char vec4. You cannot simply normalize the whole vector to 0..1, as that mixes both values. Instead, the normal is extracted from the first three components and normalized (the easy part), but the radius has to be divided by 127 manually in the shader to get the correct value. The result can then be multiplied by the predetermined maximum splat radius.
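A CPU-side sketch of that packing scheme, assuming the function names are hypothetical and the unpacking mirrors what the shader does (renormalize the first three components, divide the fourth by 127 separately):

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical packing: a unit normal plus a radius fraction in [0, 1],
// quantized to four signed 8-bit values (the vec4 uploaded to the GPU).
std::array<int8_t, 4> packNormalRadius(float nx, float ny, float nz,
                                       float radiusFrac) {
    auto q = [](float v) { return static_cast<int8_t>(std::round(v * 127.0f)); };
    return { q(nx), q(ny), q(nz), q(radiusFrac) };
}

// Mirrors the shader-side fix described above: the normal is rebuilt and
// renormalized from the first three components, but the radius must be
// divided by 127 on its own -- normalizing the whole vec4 at once would
// mix the two quantities.
void unpackNormalRadius(const std::array<int8_t, 4>& p,
                        float normal[3], float& radiusFrac) {
    float n[3] = { p[0] / 127.0f, p[1] / 127.0f, p[2] / 127.0f };
    float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    for (int i = 0; i < 3; ++i) normal[i] = n[i] / len;
    radiusFrac = p[3] / 127.0f;  // multiply by the max splat radius afterwards
}
```

The same division by 127 is what the shader performs on the fourth component before scaling by the predetermined maximum splat radius.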

Point sprites (left) vs splats (right)

Performance

I found two major problems with the splatting:

  1. Splatting is very sensitive to normal changes, whereas point sprites (in our current implementation) are spheres and therefore rotation invariant. Normal calculation is in effect an estimation, and it can be _way_ off, leading to rendering artifacts. In theory, splatting should produce a smooth surface, as the splats are oriented along the normals, as opposed to the organic, ‘bubbly’ surface of point sprites. Looking at the example figures in the splatting papers, it seems the models/point clouds were chosen quite carefully, or prepared rather well, with no outliers in the dataset and continuous surfaces. I found that normal estimation breaks down at such points, which becomes far more noticeable with splats than with point sprites.
    Even worse, splats oriented at a ‘wrong’ angle can actually punch holes into surfaces.
  2. When splatting is enabled, the frame rate drops noticeably, from about 40 FPS for point sprites to 15 FPS for splats (without online normal calculation). It seems to me that the increased number of primitives created in the geometry shader maxes out the pipeline.
    However, gDebugger shows no increase in the number of primitives created (maybe it cannot inspect that ‘deep’), and my understanding of point sprites is that they are effectively a ‘default’/hardware geometry shader that turns points into textured quads.
    Furthermore, as splats are point samples, the fragment shader currently discards all fragments that do not lie within the circle described by the point sprite. This seems to decrease the frame rate even further.
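For reference, the per-fragment circle test is cheap in isolation; here is a CPU-side sketch of the logic (the function name is hypothetical, and `s`/`t` play the role of `gl_PointCoord`, which runs from 0 to 1 across the sprite quad):

```cpp
// Sketch of the point-sprite fragment test: keep only fragments inside the
// circle inscribed in the sprite quad; the shader discards the rest.
bool insideSpriteCircle(float s, float t) {
    float dx = s - 0.5f;
    float dy = t - 0.5f;
    return dx * dx + dy * dy <= 0.25f;  // radius 0.5 in sprite coordinates
}
```

The cost on the GPU comes less from the arithmetic than from `discard` itself, which can disable early depth optimizations.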

Splatting silhouette of a sphere

Results

Quality improvements are mostly visible very close up, along planar surfaces (e.g. walls) and silhouettes (e.g. window frames). However, considering the performance hit, it is questionable whether this slight increase in quality is worth the effort. I also noticed that some moiré patterns got worse at mid and long range, probably due to splats oriented at an oblique angle.

Overall, I would rather implement a LOD scheme with points and point sprites: at close distances (< 1–1.5 m), the point sprite shader should be used to fill all the gaps. Everything beyond that distance already appears solid due to the high density of points, even when rendering points at size 1.0.
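The proposed scheme boils down to a single distance threshold per voxel or point batch; a minimal sketch (names and the exact threshold are assumptions, not part of the viewer):

```cpp
// Hypothetical LOD selection: within the threshold, use the point-sprite
// shader to fill gaps; beyond it, plain points at size 1.0 already look solid.
enum class RenderMode { PointSprites, Points };

RenderMode chooseRenderMode(float distanceToViewerMeters,
                            float threshold = 1.5f) {  // 1-1.5 m as suggested above
    return distanceToViewerMeters < threshold ? RenderMode::PointSprites
                                              : RenderMode::Points;
}
```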

Normal Estimation and Point Splatting

This week was spent on getting the point splatting to work in our OOCViewer.

Right now, normals are calculated for each voxel independently. This can be done either at runtime or in a pre-processing step. In the former case, normals are cached on disk after calculation and can be re-used. This also has the advantage that ‘mixed’ scenes are possible, in which some voxels have normals while others don’t:

Online calculation of normals. Some voxels have normal data, others don’t. The voxels in the background on the right have not been loaded yet.

Calculation time depends mostly on the number of points in a voxel. pcl’s `estimateNormals` method turned out to be faster than the naive normal estimation approach (especially when using the multi-threaded OMP variant), so it was used. In a second pass, a k-nearest-neighbour search is performed for each point in the point cloud, and the average distance to these neighbours is used as the starting radius for the splat size.
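The second pass can be sketched as follows. This is a brute-force O(n²) stand-in for illustration only (the viewer would use PCL’s kd-tree search instead); the struct and function names are hypothetical:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// For each point, find its k nearest neighbours and use the average
// distance to them as the starting splat radius.
std::vector<float> splatRadiiFromKnn(const std::vector<Point3>& cloud, size_t k) {
    std::vector<float> radii(cloud.size(), 0.0f);
    for (size_t i = 0; i < cloud.size(); ++i) {
        std::vector<float> dists;
        dists.reserve(cloud.size() > 0 ? cloud.size() - 1 : 0);
        for (size_t j = 0; j < cloud.size(); ++j) {
            if (j == i) continue;
            float dx = cloud[j].x - cloud[i].x;
            float dy = cloud[j].y - cloud[i].y;
            float dz = cloud[j].z - cloud[i].z;
            dists.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        size_t kk = std::min(k, dists.size());
        std::partial_sort(dists.begin(), dists.begin() + kk, dists.end());
        float sum = 0.0f;
        for (size_t m = 0; m < kk; ++m) sum += dists[m];
        radii[i] = kk ? sum / static_cast<float>(kk) : 0.0f;
    }
    return radii;
}
```

In densely sampled regions the radii come out small, so neighbouring splats just overlap; in sparse regions they grow to cover the gaps.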

The main drawback is increased memory use: on average, each pcd file now has an accompanying normal cache file about 1.5 times its size. Normal data is currently not compressed. Another option would be to store the normal and radius as 4 signed chars (32 bits total) and normalize the values in the shaders.

Pre-calculation time is quite high, as there are many small files and a lot of time is spent on opening and closing them. On the other hand, this has to be performed only once.

There are some sampling problems with the normals like in this image:

Normal discontinuities

As a side note: merging two branches is harder than it should be. Maybe we could organize the git branches a bit better?