CS699, Final Post

CS699 Final Post by Dongxia Wu

Project 1: 3D video implementation

Project 2: 3D scene implementation

Description:

This semester I built and tested two Unity projects for the tiled display system, which consists of 6 machines and 11 monitors connected over a network: one for 3D video and one for a 3D scene. For the 3D video project, the goal was to show a high-resolution 3D video on this system. To achieve this, I built the tiled display environment and then attached the 3D video to the scene. For the 3D scene project, the goal was to attach the prefabs that control the tiled display environment to an existing 3D scene project and to make sure the display system shows the correct picture of that scene.

For both projects, the first step was to build the tiled display environment so that the monitor array looks like a series of windows onto a virtual scene from a given perspective. To achieve this, I used two UniCave prefabs: Network_Manager and Tri_Monitor. UniCave is a Unity3D plugin for non-head-mounted virtual reality display systems provided by Kevin's group. I set the parameters with the correct IP address of the corresponding host for the head node and all the sub (display) nodes. With Yuzi's help, we measured the relative position and size of every monitor with respect to the head (the audience's eye position). After this adjustment, we set the head at the center of the monitor array (XY plane) and about 3 meters away from the monitors (z axis).
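The "windows onto a virtual scene" effect comes from giving each monitor's camera an asymmetric (off-axis) frustum computed from the eye position and the monitor's measured corners. UniCave does this internally; the sketch below shows the underlying math in plain Python, with made-up monitor coordinates for illustration (not our lab's actual measurements).

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
    """Asymmetric frustum extents (left, right, bottom, top) at the near
    plane for one physical monitor, given the world-space positions of
    three of its corners and the viewer's eye position."""
    vr = normalize(sub(lower_right, lower_left))   # screen right axis
    vu = normalize(sub(upper_left, lower_left))    # screen up axis
    vn = normalize(cross(vr, vu))                  # screen normal, toward eye
    va = sub(lower_left, eye)                      # eye -> lower-left corner
    vb = sub(lower_right, eye)                     # eye -> lower-right corner
    vc = sub(upper_left, eye)                      # eye -> upper-left corner
    d = -dot(va, vn)                               # eye-to-screen distance
    left = dot(vr, va) * near / d
    right = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top = dot(vu, vc) * near / d
    return left, right, bottom, top
```

For a monitor centered in front of the eye this reduces to an ordinary symmetric frustum; for an off-center monitor in the array, the extents become asymmetric, which is exactly why each display needs its own measured position.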

After that, I attached the 3D video, a few-minute animation found online. After adjusting the position and size of the video to fit the display array, I built and tested the 3D video project. To test the project conveniently, I also wrote a Python script for cluster launching, which starts the executable on all of the machines through the head node.
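A minimal sketch of such a cluster-launch script is below. The host names, executable path, and use of PsExec are all placeholder assumptions for illustration, not the actual lab configuration; an ssh-based cluster would swap in `["ssh", host, exe_path]` instead.

```python
import subprocess

# Hypothetical cluster layout -- placeholders, not the real machine names.
DISPLAY_NODES = ["display-01", "display-02", "display-03",
                 "display-04", "display-05"]
EXE_PATH = r"C:\TiledDisplay\Build\TiledDisplay.exe"  # hypothetical path

def launch_command(host, exe_path):
    # PsExec-style remote start on Windows; "-d" returns without waiting
    # for the remote process to exit.
    return ["psexec", "\\\\" + host, "-d", exe_path]

def launch_cluster(hosts, exe_path, run=subprocess.Popen):
    """Start the built executable on every display node from the head node.
    `run` is injectable so the commands can be inspected without executing."""
    return [run(launch_command(h, exe_path)) for h in hosts]
```

Making the runner injectable also makes the script easy to dry-run from the head node before touching the cluster.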

In testing, there were some spatial misalignments, since Yuzi and I had not measured the relative positions of the monitors precisely; the result looked much better after tuning the monitors. I also tested the time alignment: the tiled display environment automatically checks each monitor and keeps the frame counts close to the same value.

For the 3D scene project, the idea is to make my tiled display prefabs work with another 3D scene project. To achieve this, I used the Courtyard demo, a free asset provided by Unity, as the 3D scene, and added the prefabs I made for the 3D video project to provide the tiled display function, since the two projects share the same physical display system. The method seems straightforward: add the prefabs under the playerController object as its children so they get the same position and rotation information from the controller. However, the UniCave prefabs get confused about which cameras to select when there is more than one camera serving different purposes. Therefore, I deleted all the cameras except the camera for the player controller.

Feelings:

I feel happy with what I have achieved this semester. Since I didn't have much background in VR, computer graphics, or specialized software like Unity and the Microsoft Mixed Reality tools, I really struggled on my HoloLens project. But things got better after I switched my project to the tiled display system. Thanks to help from Kevin, Ross, and Benny, I finished two projects: the 3D video and 3D scene implementations. During this process, I kept challenging myself and learned new things independently, especially about Unity. Now I feel more confident using this powerful tool for future projects.

Challenges:

One long-standing challenge is the connection process from the head node. Since both projects are based on the tiled display system, they depend on a correct and fast network connection from the head node to each sub node. However, the firewall on each machine can be very troublesome for our system. The cluster launching scripts used to work, but after all the machines were updated to Windows 10, they somehow stopped working.

Another challenge for both the 3D video and 3D scene projects is the frame difference between displays. When the video loops back to its start point, the frame difference becomes more apparent. I changed the related parameters, but performance did not improve much. The problem may result from the different graphics card settings and hardware configurations of the displays.

Finally, the Courtyard demo I chose for the 3D scene project is a very large and complicated scene. There are many cameraHolders and controller objects, and these objects are interrelated, which makes it challenging to figure out their functions and relationships. I deleted unnecessary objects to simplify the scene and found the right place to add the prefab so that it inherits the correct position and rotation from the playerController.

Future work:

I would be excited to apply what I have learned from this semester's work to a new 3D display system. The physical performance of our displays and computers is not very good and has become a limiting factor; for example, the system sometimes has a serious delay during playback. It would be interesting to see how a tiled display system performs with new displays and computers.

Also, I want to apply the UniCave prefabs to a heterogeneous device system, such as a virtual reality display environment built in a room that combines displays and projectors. Compared with a 2D display array, I think this would generate a more immersive feeling.

Video Link:

project1_3D_video

project2_3D_scene_control_video


Visual Acuity, Final Post

Description:

The final outcome of this semester is the Landolt C Visual Acuity VR application, or Landolt VR. This application allows a user to run the Landolt C visual acuity test in three configurations across all devices compatible with Unity's mixed reality VR plugin. The goal is to better measure both an individual's visual acuity on a single device and the visual quality output by different devices.

The three configurations are a fixed-head-perspective standard 70-trial test, a free-head-perspective standard 70-trial test, and a custom test that can be run in either head mode. The custom test allows the user to change the start distance of the C object, the number of trials before the test completes, and the linear movement distance.

The procedure is best explained in the context of the 70-trial standard test. When this option is selected, the subject is asked to identify the direction the C object faces 5 times at each of 14 distances, with the C's distance increasing each time. Between guesses there is a brief rest period during which a blank screen is displayed. Upon completion of the test, a Snellen letter score is displayed on the screen, and the user can exit to the main screen. A log file is also created that contains all information pertinent to the test, including a record of each individual trial.
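The core bookkeeping behind a test like this can be sketched in a few functions: the 5 x 14 trial schedule, the visual angle of the C's gap at a given distance, and the Snellen conversion (a 1-arcminute gap resolved at threshold corresponds to 20/20). The start distance and step below are illustrative assumptions, not the application's actual defaults.

```python
import math

def gap_angle_arcmin(gap_size_m, distance_m):
    """Visual angle subtended by the Landolt C's gap, in arcminutes."""
    return math.degrees(2 * math.atan(gap_size_m / (2 * distance_m))) * 60

def snellen_denominator(threshold_arcmin):
    """Snellen 20/X score: a 1-arcminute gap at threshold is 20/20,
    a 2-arcminute gap is 20/40, and so on."""
    return round(20 * threshold_arcmin)

def trial_schedule(num_distances=14, trials_per_distance=5,
                   start_distance_m=1.0, step_m=0.5):
    """(distance, trial-index) pairs: 5 guesses at each of 14 distances,
    with distance increasing each block -- 70 trials in total."""
    return [(start_distance_m + i * step_m, t)
            for i in range(num_distances)
            for t in range(trials_per_distance)]
```

The logged per-trial records the application writes would map naturally onto these (distance, trial) pairs plus the subject's guessed direction.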

Creating this project took several weeks of research before coding began. In those weeks I learned the founding principles of visual acuity testing, the standard optometry procedures for administering a visual acuity test, the analysis of visual acuity test results, and finally, how to construct and validate new visual acuity tests using standard symbols. I also learned some basic physiology of the eye to familiarize myself with the vocabulary before conducting further research.

Project Feelings:

I’m quite happy with the results of this semester. Having been a relative novice at developing VR applications, it was satisfying just to create a fully functioning project on my own. On top of that, I was challenged to learn material outside of my field and bring results from research into a new medium. While there was plenty of content regarding visual acuity testing and the basics of optometry practice, there was little to nothing about administering such a test on screen, let alone in VR. While I initially found that frustrating, it became something I found enjoyment in: working on something that perhaps no one has done before.

This is only my second time developing a VR application, and the first time it’s been non-game related. I found that I was able to create something functional and relatively easy and pleasing to use. Unity is a complicated but rewarding tool to learn, and after this semester I feel much more confident in my development skills with it. The final product, combined with the improvement in my research and Unity skills, was and is satisfying.

Project Challenges:

The biggest hurdle was the lack of information out there related to what I was attempting to build. I had a good idea of what I was trying to accomplish fairly quickly, but without anything to reference I was pushed outside the comfort zone of feeling that what I was doing was correct. I still wanted to produce something accurate that performed as close to a standard as I could get. It took some time to realize that things like this are never done perfectly the first time, and that when doing something new, validation comes from experimentation.

The next considerable challenge was threading. It took a good few weeks to figure out how to time the periods of rest and the periods of trial. While the coroutine and invoke functions Unity provides work perfectly fine, my lack of Unity knowledge pushed me toward my C# comfort zone. As someone who uses C# for .NET web development, I foolishly thought Unity would play nicely with C# timers, and even went so far as to try to write my own multithreading setup using them. Guidance ultimately put me back on the right track. Asking for help is never bad.
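Part of what makes coroutines the right fit is that they are just iterator functions: Unity resumes them between frames, so no extra threads are needed. The Python sketch below mirrors that trial/rest pattern with a generator and a toy scheduler; the durations and trial count are illustrative, not the app's real settings.

```python
class WaitForSeconds:
    """Stand-in for Unity's WaitForSeconds yield instruction."""
    def __init__(self, seconds):
        self.seconds = seconds

def test_routine(num_trials, trial_s, rest_s, log):
    """Generator mirroring a Unity coroutine: show a trial, wait,
    blank the screen for a rest period, then repeat."""
    for i in range(num_trials):
        log.append(("trial", i))        # display the C, await a guess
        yield WaitForSeconds(trial_s)
        log.append(("rest", i))         # blank screen between guesses
        yield WaitForSeconds(rest_s)

def run_to_completion(routine):
    """Toy scheduler: advance simulated time by each yielded wait.
    Unity's engine does the equivalent across real frames."""
    elapsed = 0.0
    for wait in routine:
        elapsed += wait.seconds
    return elapsed
```

Everything runs on one thread, which is exactly why coroutines sidestep the trouble that `System.Timers`-style callbacks cause when they fire off Unity's main thread.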

More time:

Given more time I would have loved to see this through testing. I feel like this is something useful, and I would have preferred to run more trials on people I don’t know so well, not only to verify the process but to improve the design of the application.

The next thing I would’ve liked to focus on is design. While the UI is pleasant enough to look at, I would’ve liked more time to learn about alignment techniques for Unity UIs. I think that small improvement could make the product better.

It would’ve also been nice to run this past an optometrist. While I was confident enough in my own research regarding the basic principles, I’m sure there are inaccuracies in my procedure that someone with more experience could help iron out.