Sparse Photo Angles – Bryce and Andrew Update

(For some reason embedding images is not working, but images are visible on clicking)

4 positions, 3 elevations, good alignment, with texture

4 positions, 3 elevations, good alignment, no texture

We wanted to look at how the sparseness of the photo angles affected the resulting model. The goal was to simulate the camera configurations we could realistically use for a high-speed setup. Since a full 360° model would require many more cameras than we have available, we decided to target only one side of the subject to maximize overlap. We found that the best results came from closely spaced camera angles at multiple elevations, but even these were not sufficient to preserve much detail in the final model. This is most evident in the untextured base model; once textured, the models tend to look deceptively better. Dense point clouds are also very computationally intensive to generate: for a 3D model with 9,060 faces, the dense point cloud contains over 10,000,000 points (see the rough size estimate below). We also found that separation from the background was difficult to maintain with so few angles.
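To put that dense-cloud figure in perspective, here is a back-of-the-envelope estimate of just the raw storage it implies. The 15-bytes-per-point layout (three 4-byte float coordinates plus three color bytes) is an assumption for illustration, not the reconstruction software's actual internal format, which also carries normals, confidence values, and index structures.

```csharp
using System;

class PointCloudSize
{
    static void Main()
    {
        // Assumed per-point layout: 3 x 4-byte float coordinates + 3 color bytes.
        long points = 10000000L;
        long bytesPerPoint = 3 * sizeof(float) + 3; // 15 bytes
        double megabytes = points * bytesPerPoint / (1024.0 * 1024.0);
        Console.WriteLine("~{0:F0} MB just to hold the dense cloud", megabytes);
        // Prints ~143 MB, before normals, confidence values, or any spatial
        // index, which is why dense reconstruction strains memory as well as
        // CPU/GPU time.
    }
}
```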

2 positions, 2 elevations, texture applied

2 positions, 2 elevations, no texture

With 4 images all taken from the same elevation, we were only able to get 56 correlated points and no surface reconstruction at all.

Empty surface, 4 images from the same elevation

These experiments call the feasibility of high-speed capture with any significant detail into question. Even the high resolution (14 MP) and good exposure of our source images led to subpar detail in the resulting models. With the much lower resolution and difficult exposure of high-speed capture, we suspect that models produced in this manner would have very limited to no discernible detail.

We would like to meet up with Kevin at some point to discuss possible alternative project goals.

 

Chat Update 3

I have now decided to use Photon’s suite of services to drive both audio and text chat. Photon provides both the server backend and Unity plugins for implementing voice and text chat. The service is free for under 20 concurrent users, so there’s no problem there.
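For reference, here is a minimal sketch of what wiring up Photon Chat in Unity might look like. This assumes the Photon Chat SDK is imported; the namespace below is the modern one (older SDKs use ExitGames.Client.Photon.Chat), the "1.0" version tag is arbitrary, and the IChatClientListener implementation (the connection and message callbacks) is elided because its exact member set varies by SDK version.

```csharp
using UnityEngine;
using Photon.Chat; // older SDKs: using ExitGames.Client.Photon.Chat;

public class ChatConnection : MonoBehaviour
{
    ChatClient chatClient;

    // listener is whatever class implements IChatClientListener in the project.
    public void StartChat(IChatClientListener listener, string appId, string userName)
    {
        chatClient = new ChatClient(listener);
        // appId comes from the Photon dashboard.
        chatClient.Connect(appId, "1.0", new AuthenticationValues(userName));
    }

    void Update()
    {
        // Photon requires Service() to be called regularly to send and
        // receive; nothing happens on the connection without it.
        if (chatClient != null) chatClient.Service();
    }
}
```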

I think I will try to stick with the card UI language, but I may experiment with a 3D manifestation like the walkie-talkie I sketched up below. It would be cool to have a physical (virtual) object with a 3D sound layer when you bring it up to your ear, more like binaural sound than just hearing it from a source in front of you, much like the phone in Job Simulator. I imagine the indicator light on top lighting up when you have a new message, and putting the unit up to your ear playing the message. The trigger on the back would let the player record a message or talk back directly. The Vive and Oculus CV1 both have microphones near the face, so recording the voice would be quite easy. (A rough interaction sketch follows the concept image.)

Concept for a walkie-talkie
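Here is a rough Unity sketch of that interaction. Everything in it is hypothetical: the field names, the 0.2 m "at the ear" threshold, and the "Fire1" input stand-in (a real build would poll OVRInput or the SteamVR input system for the trigger).

```csharp
using UnityEngine;

public class WalkieTalkie : MonoBehaviour
{
    public Light indicatorLight;      // lights up when a message is waiting
    public AudioSource speaker;       // plays back received messages
    public float earDistance = 0.2f;  // metres from the head to count as "at the ear"

    AudioClip pendingMessage;
    AudioClip recording;

    // Called by the networking layer when a voice message arrives.
    public void OnMessageReceived(AudioClip clip)
    {
        pendingMessage = clip;
        indicatorLight.enabled = true;
    }

    void Update()
    {
        // Camera.main tracks the HMD in a typical Unity VR rig.
        bool atEar = Vector3.Distance(transform.position,
                                      Camera.main.transform.position) < earDistance;

        // Play the waiting message binaurally once the unit is raised to the ear.
        if (atEar && pendingMessage != null)
        {
            speaker.spatialBlend = 1f; // fully 3D-spatialized
            speaker.clip = pendingMessage;
            speaker.Play();
            pendingMessage = null;
            indicatorLight.enabled = false;
        }

        // Hold the trigger to record from the headset microphone.
        if (Input.GetButtonDown("Fire1"))
            recording = Microphone.Start(null, false, 10, 44100);
        if (Input.GetButtonUp("Fire1") && Microphone.IsRecording(null))
        {
            Microphone.End(null);
            // 'recording' would then be handed to Photon Voice for transmission.
        }
    }
}
```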

I am currently in the process of setting up Photon and modeling the walkie-talkie above to act as the point of interaction for voice. I am also trying to work out a way to adapt Photon’s custom emoji support into a quick way to send large emoji to another person; a sketch of one possible approach follows.
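One hedged sketch of the emoji idea: treat each emoji as a short tagged string on a shared channel and let the receiving client blow it up into a large sprite locally. The channel name "global" and the "emoji:" prefix are made up for illustration, and chatClient is assumed to be an already-connected Photon ChatClient.

```csharp
using Photon.Chat;

public static class EmojiSender
{
    public static void SendEmoji(ChatClient chatClient, string emojiId)
    {
        // Photon Chat messages are arbitrary serializable objects, so a
        // plain tagged string is enough to tell the receiver which emoji
        // sprite to render at large size.
        chatClient.PublishMessage("global", "emoji:" + emojiId);
    }
}
```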