chat Update 3

I have now decided to use Photon’s suite of services to drive both voice and text chat. Photon provides the server backend as well as Unity plugins for implementing both. The service is free for under 20 concurrent users, so there’s no problem there.

I think I will try to stick with the card UI language, but I may experiment with a 3D manifestation like the walkie-talkie I sketched up. It would be cool to have a physical (virtual) object with a 3D sound layer when you bring it up to your ear, producing something closer to binaural sound than simply hearing audio from a source in front of you, much like the phone in Job Simulator. I imagine the indicator light on top would light up when you have a new message, and holding the device to your ear would play it. The trigger on the back would let the player record a message or talk back directly. The Vive and the Oculus CV1 both have microphones near the face, so capturing the player’s voice would be quite easy.
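The interaction loop described above (light up on a new message, play it at the ear, record with the trigger) can be sketched as a small state machine. This is a rough, platform-agnostic sketch; the class and method names are invented for illustration and the actual Unity/Photon wiring would look different:

```typescript
// Hypothetical sketch of the walkie-talkie interaction states.
// All names here are illustrative, not from any real implementation.
type State = "idle" | "playing" | "recording";

class WalkieTalkie {
  state: State = "idle";
  lightOn = false;              // indicator light on top
  private inbox: string[] = []; // queued voice message ids

  // A new message arrives: queue it and turn the light on.
  receiveMessage(id: string): void {
    this.inbox.push(id);
    this.lightOn = true;
  }

  // Bringing the device to the ear plays the oldest unheard message.
  raiseToEar(): string | undefined {
    const next = this.inbox.shift();
    if (next !== undefined) {
      this.state = "playing";
      this.lightOn = this.inbox.length > 0; // stay lit if more queued
    }
    return next;
  }

  // The trigger on the back starts recording a reply.
  triggerDown(): void {
    this.state = "recording";
  }

  // Releasing the trigger would hand the captured audio to the
  // voice service (e.g. Photon Voice) for sending.
  triggerUp(): void {
    if (this.state === "recording") this.state = "idle";
  }
}
```

In Unity the same logic would hang off proximity between the prop and the HMD, with the binaural layer driven by the engine’s spatialized audio.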

Concept for a walkie-talkie

I am currently setting up Photon and modeling the walkie-talkie above to act as the point of interaction for voice. I am also trying to work out a way to adapt Photon’s custom emoji support into a quick way to send large emoji to another person.

chat Update 2

First, I’d like to apologize for not posting more frequently. I’ve run into some architectural issues that I’m trying to figure out before I move forward and start coding.

When trying to create a solution that has to interact with multiple different frameworks and platforms, things can start to get complicated. Let’s break things down and look at the options that we have for each.

Backend Framework

Angular.js

Node.js

Game Engine

Unity Engine

Unreal Engine

Input Solution

Text-to-speech

Voice Messages

Sending emoji (pictures)

 

Thankfully, we can take one factor out of the equation right from the start: I will be implementing everything in the Unity game engine. I will also be targeting the HTC Vive because of its great resolution and natural controllers.

Originally, I had wanted to test a variety of chatting methods in a lightweight way in a virtual environment. For the sake of time and scope over the semester, I will focus on sending emoji and other images, mostly because emoji is an emerging form of communication and I think its effects in a VR setting might prove quite interesting.

When I was researching what is possible with emoji in a lightweight, over-the-internet P2P solution, I ran into a couple of issues. Currently, Apple’s emoji set is the most ubiquitous and up to date of all the current emoji typefaces. Emoji standards and definitions are set by the Unicode Consortium, and companies and organizations create their fonts based on those standards. If I were to continue using Apple Color Emoji, my best option would be to use exported PNG versions instead, as the glyphs would otherwise look different on each computer. I could also use EmojiOne, an open-source emoji font. It will come down to whether sending text over a chat library is easier than sending images. A third option is to simply send a code between the users and map that code to a corresponding PNG version when it reaches the second user. All are viable options at this point.
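The last option above, sending only a short code and resolving it to a PNG on the receiving end, keeps the wire payload tiny. A minimal sketch of the idea follows; the shortcodes, file paths, and function names are invented for illustration, and a real table would cover the full emoji set:

```typescript
// Hypothetical shortcode -> local PNG lookup table (illustrative only).
const EMOJI_PNGS: Record<string, string> = {
  ":smile:": "emoji/1f604.png",
  ":wave:":  "emoji/1f44b.png",
  ":dog:":   "emoji/1f436.png",
};

// Sender side: validate, then transmit only the short code string,
// a few bytes over the wire instead of a whole image.
function encodeEmoji(shortcode: string): string {
  if (!(shortcode in EMOJI_PNGS)) {
    throw new Error(`Unknown emoji shortcode: ${shortcode}`);
  }
  return shortcode;
}

// Receiver side: resolve the code to a local PNG to display,
// falling back gracefully if the two clients' tables differ.
function decodeEmoji(shortcode: string): string {
  return EMOJI_PNGS[shortcode] ?? "emoji/unknown.png";
}
```

The trade-off is that both clients must ship the same image set, which is exactly the synchronization problem the PNG-export route is meant to solve.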

This next week I will be building a web-based version to try to test these different methods.

Until next time,

Tyler

chat – A lightweight tool for interacting with others in virtual reality.

chat gives users in a virtual reality (VR) homespace a quick, easy, and fun way to send voice messages, stickers, and even presents to other VR users.

Rather than copying traditional messaging interfaces into a VR homespace, I will explore and execute upon methods that have emphasis on light and natural forms of communication.

The following examples show mockups for four different types of communication that could work in VR:

  • Voice – Allows a user to capture a voice message and quickly send it to another user.
  • Text – Allows a user to either choose from predefined responses or use voice-to-text.
  • Emoji – Allows a user to select from pre-defined emojis or stickers in a quick way.
  • Present – Allows a user to send a virtual gift to instantiate in the other user’s scene.

chat Concepts-01

Mockup showing voice, including a microphone button that allows the user to record their voice.

 

chat Concepts-02

Mockup showing text, including a predefined message.

 

chat Concepts-03

Mockup showing emoji, allowing users to pick from a grid of reactions.

 

chat Concepts-04

Mockup showing present, including a 3D model of a dog to be sent to another user’s space.


 

Talking Points

  • Traditional 2D interactions versus rich 3D experiences
  • Implement a system of basic chat and simple 2D games across players
  • Explore different methodologies
  • Scaling from 2 people in a chat to a whole group of people and what that entails
  • How to share information and resources
  • How can people collaborate differently in VR compared to the tools we have today?
  • Potentially measure the effectiveness of simple chat experiences vs. full 3D experiences
  • Does a “2D” social experience feel natural in VR?

Existing VR social spaces: AltspaceVR, VRChat, Convrge, vTime, Oculus Social (Alpha)

Deliverable: A collaborative social application (in Unity) (“2D”) that can run with two or more players and a short study on how people respond to the application and what steps can be taken to improve the application.

Update 1: turingVR

BACK-END DEVELOPMENT

We are continuing to structure the backend of our system and map out user (and data) flows and are starting to implement the services together as a cohesive, scalable system. We are looking to hire a student hourly to help the backend development move quickly.

FRONT-END DEVELOPMENT

We continue to brainstorm and implement different interfaces, and we are now examining these ideas under more scrutiny with certain UI/UX principles in mind. Development in Unity has slowed as we transition from the IoT Lab to our new office at SoHE, but it will pick up in the coming weeks.

BUSINESS DEVELOPMENT

As we refine our high-level product and go-to-market strategy, our first phase continues with advancing our efforts to connect with developers. Through meetings and conferences we have established initial contact with many of the developers we want in our ecosystem. The next step is to follow up on those initial contacts and continue expanding our network of developers. We’ve drafted an email to initiate conference calls that will keep the developers in our network up to speed on what we’re working on, so they can be brought in quickly to test our alpha and beta builds.