Chat Update 2

First, I’d like to apologize for not posting more frequently. I’ve run into some architectural issues that I’m trying to figure out before I move forward and start coding.

When trying to create a solution that has to interact with multiple different frameworks and platforms, things can start to get complicated. Let’s break things down and look at the options that we have for each.

Backend Framework
- Angular.js
- Node.js

Game Engine
- Unity Engine
- Unreal Engine

Input Solution
- Text-to-speech
- Voice messages
- Sending emoji (pictures)

Thankfully, we can take one factor out of the equation right from the start. I will be implementing everything through the Unity game engine. I will also be targeting the HTC Vive because of its great resolution and natural controllers.

Originally, I had wanted to build a system that would test a variety of chat methods in a lightweight way in a virtual environment. For the sake of time and scope over the semester, I will focus on sending emoji and other images, mostly because emoji are an emerging form of communication and I think their effects in a VR setting might prove to be quite interesting.

When I was researching what was possible with emoji and a lightweight, over-the-internet P2P solution, I ran into a couple of issues. Currently, Apple’s emoji are the most ubiquitous and up-to-date of the current emoji typefaces. Emoji standards and definitions are set by the Unicode Consortium, and companies and organizations create their fonts based on these standards. If I were to continue using Apple Color Emoji, my best option would be to use exported PNG versions instead, as the characters would otherwise look different on each computer. I could also use EmojiOne, an open-source emoji font. It will come down to whether sending text over a chat library is easier than sending images. Another option is to simply send a code between the users and map that code to a corresponding PNG when it reaches the second user. All are viable options at this point.
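To make that code-based option concrete, here is a minimal sketch of how it could work in a web-based test: only a short code travels between users, and each client resolves it to its own locally bundled PNG. The shortcode names, PNG paths, and WebSocket endpoint are placeholders I am assuming for illustration, not part of any existing emoji library.

```typescript
// Minimal sketch of the "send a code, render a local PNG" idea.
// Shortcode names, PNG paths, and the WebSocket endpoint are hypothetical.

// Map emoji shortcodes to locally bundled PNG exports.
const emojiSprites: Record<string, string> = {
  grinning_face: "emoji/grinning_face.png",
  thumbs_up: "emoji/thumbs_up.png",
  red_heart: "emoji/red_heart.png",
};

interface EmojiMessage {
  type: "emoji";
  code: string;   // only this small string is sent, not the image itself
  sender: string;
}

// Sender side: transmit the code string over the chat channel.
function sendEmoji(socket: WebSocket, sender: string, code: string): void {
  const msg: EmojiMessage = { type: "emoji", code, sender };
  socket.send(JSON.stringify(msg));
}

// Receiver side: look up the code and resolve it to a local PNG for display.
function handleMessage(event: MessageEvent): void {
  const msg = JSON.parse(event.data as string) as EmojiMessage;
  if (msg.type === "emoji") {
    const sprite = emojiSprites[msg.code] ?? "emoji/unknown.png";
    console.log(`${msg.sender} sent ${msg.code}, render sprite: ${sprite}`);
  }
}

// Hypothetical usage against a placeholder chat server.
const socket = new WebSocket("ws://localhost:8080/chat");
socket.onmessage = handleMessage;
socket.onopen = () => sendEmoji(socket, "tyler", "thumbs_up");
```

The appeal of this approach is that the payload stays tiny and each client controls how the emoji actually looks, which sidesteps the cross-platform font problem entirely.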

This next week I will be building a web-based version to try to test these different methods.

Until next time,

Tyler

High Speed Modeling – Update 2

-Andrew Chase and Bryce Sprecher

Progress

In the last week, Bryce and I tried constructing 3D models with Agisoft Photoscan to familiarize ourselves with the software.  All in all, we were successful, and we found out what works well and what doesn’t.  Here are our results:


[Photo: Paint_Tree]

Outdoor lighting, or any environment with a lot of ambient light, works best.  The fewer shadows and reflections we have on the modeling subject, the better.

[Photo: Power_box]

However, things can still go wrong, as in this case where we tried to model a simple power box.  That attempt was rushed, to see how rigorous we needed to be in our photo collection, and it still turned out alright.  One simple solution would be to mask each image before aligning the photos, so the background doesn’t interfere with the model.

Texturing seems to work well with Photoscan, as seen in this power pole.

[Photo: Power_pole]

We’ve tried a few indoor models and found that we need a better lighting setup (light boxes rather than generic LED bulbs/halogen lights, to increase diffusion).  We also need to find a way to prop up our samples to allow below-horizon shots of the object, which we didn’t do in the controller example. We tried a figurine model as well, but had issues with it.  It was a small object, which leads us to believe that smaller objects require a more sophisticated setup.  Bryce is very familiar with the camera at this point (and knows a lot about optimal photography), and is able to adjust ISO sensitivity, shutter speed, and various other settings to optimize our data collection, so we are unsure why smaller objects are still difficult to model.

[Photo: controller]

Next Steps

Our next goal is to investigate better lighting techniques and to see how we can construct a model with limited camera angles.  If we are to model an object over time with only four cameras, for example, we need to see how the model turns out with such limited perspective.  If anything, we may need to create a “one-sided model,” which is 3D from one side but has no mesh construction on the other.

In addition, we plan to look into how to synchronize cameras for simultaneous capture.  There is software/an app for this camera, and we plan to see what its limitations are.  If the software cannot handle such a task, we will look into how we can use post-processing to align and extract frames from each camera’s perspective and create models from that.
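As a rough illustration of that post-processing fallback, the sketch below assumes each camera simply records video, that we know roughly when each recording started, and that ffmpeg is available to pull the frame nearest a shared capture instant from each clip. The file names, start times, and offsets are made-up placeholders for illustration, not our actual workflow.

```typescript
// Sketch: extract one frame per camera at a shared capture instant by
// seeking into each recording with ffmpeg. Clip names, files, and start
// times are hypothetical; real timestamps would come from clip metadata.
import { spawnSync } from "child_process";

interface CameraClip {
  name: string;
  file: string;        // recorded video from one camera
  startTimeMs: number; // wall-clock time the recording began
}

function extractAlignedFrames(clips: CameraClip[], captureTimeMs: number): void {
  for (const clip of clips) {
    const offsetSec = (captureTimeMs - clip.startTimeMs) / 1000;
    if (offsetSec < 0) {
      console.warn(`${clip.name} started after the capture instant, skipping`);
      continue;
    }
    // Seek to the offset in this clip and write a single frame as a PNG.
    spawnSync("ffmpeg", [
      "-ss", offsetSec.toFixed(3),
      "-i", clip.file,
      "-frames:v", "1",
      `${clip.name}_frame.png`,
    ]);
  }
}

// Hypothetical four-camera example: grab the frames nearest the 5-second mark.
extractAlignedFrames(
  [
    { name: "cam1", file: "cam1.mp4", startTimeMs: 1000 },
    { name: "cam2", file: "cam2.mp4", startTimeMs: 1250 },
    { name: "cam3", file: "cam3.mp4", startTimeMs: 990 },
    { name: "cam4", file: "cam4.mp4", startTimeMs: 1100 },
  ],
  5000
);
```

If the camera vendor’s app turns out to support synchronized triggering directly, this step would be unnecessary, but it gives us a fallback that only requires the clips and their start times.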